\section{Introduction} \label{intro} Photoswitches \cite{Crespi2019} are molecules that can change their structure and properties in response to light, as illustrated in \autoref{fig:photo}. Photoswitches have found increasing use in molecular, \cite{Eisenreich2018,Dorel2019,Neilson2013,Fuchter2020} supramolecular,\cite{Corra2022,Han2016,Lee2022} and materials applications.\cite{Wang2021, Dong2018,Garcia-Amoros2012a, Hou2019,GouletHanssens2020} On the molecular level, the incorporation of a photoswitchable motif into a drug molecule can provide a means of turning its activity on or off using light. \cite{Hull2018,Broichhagen2015} Photoswitchable molecules have also served as the active moiety in light-responsive molecular pumps, driving systems out of equilibrium. \cite{Kathan2017,Corra2022} Materials designed to transfer information via light \cite{Garcia-Amoros2011,Greenfield2022} have likewise benefited from the incorporation of photoswitchable molecules as the responsive component. In all of these examples, the structure of the photoswitch, \cite{Crespi2019, Crespi2019a} and hence its photophysical properties, is a key consideration for efficient light addressability. Azobenzene-based photoswitches isomerise about their N=N bond, giving rise to two isomeric forms, the \textit{cis}/\textit{trans} or \textit{E}/\textit{Z} isomers. These photoswitches are commonly employed in applications seeking to exploit the significant change in structure, dipole moment, or conductivity between their isomeric forms. \cite{2011_Beharry, 2015_Dong} Recently, azoheteroarenes, in which one or more of the phenyl rings of azobenzene is replaced by a heteroarene ring, have emerged as a promising subclass of the azobenzene photoswitch.\cite{Greenfield2022} Azoheteroarenes exhibit expansive structure-property tunability of their photophysical behaviour.
These properties include the degree of photoswitching induced by a specified wavelength, quantified by the photostationary state (PSS), the quantum yields of photoswitching, and the thermal half-life of the metastable photogenerated state; all three can determine an azoarene's usefulness in a particular application. The ideal thermal half-life depends on the targeted application: information transfer requires photoswitches with short thermal half-lives, \cite{Garcia-Amoros2012a} whilst energy storage applications benefit from photoswitches with long thermal half-lives.\cite{Dong2018} A high PSS and well-separated electronic absorption bands of the isomers are generally desirable, as these properties determine the addressability of each isomeric form. Through chemical design, the $\pi-\pi^*$ and $n-\pi^*$ bands of the \emph{E} and \emph{Z} isomers can be tuned to ensure minimal spectral overlap for a given irradiation wavelength, maximising the proportion of a specific isomer at the PSS. Moreover, red-shifting the absorption spectra away from the UV region is also beneficial: low-energy light reduces photo-induced degradation of materials and increases the penetration depth in tissue. \cite{Fuchter2020} Accordingly, azoarene photoswitches have been harnessed in a myriad of applications including photopharmacology,\cite{2020_Fuchter} organocatalysis,\cite{2012_Neilson} molecular solar thermal energy storage,\cite{2021_Losantos, Greenfield2021} data storage, real-time information transfer,\cite{2020_Zhuang} MRI contrast agents,\cite{2015_Dommaschk} and chemical sensing. \cite{2016_Balamurugan} \begin{figure*}[ht!]
\centering {\includegraphics[width=0.68\textwidth]{Figures/Screenshot_2021-08-27_at_22_48_58.png}\label{fig:photo_mech}} \caption{Photoswitchable molecules undergo reversible structural changes between multiple states upon irradiation with light.} \label{fig:photo} \end{figure*} To date, the structural features that dictate the photophysical properties of these systems have typically been post-rationalised following the synthesis and characterisation of a novel structure \cite{2014_Weston, 2017_Calbo, Calbo2019, Crespi2019a} or predicted using quantum chemical calculations such as density functional theory (DFT) and time-dependent density functional theory (TD-DFT). \cite{Calbo2019, Crespi2019a} Both of these approaches are limited by the time it takes to perform the synthesis or the calculation in silico, although it should be noted that high-throughput DFT approaches may reduce the wall-clock time to some extent in the future. \cite{2017_Lopez, 2018_Wilbraham, 2020_Choudhary} In light of this, human intuition remains the guide for candidate selection in many photoswitch chemistry laboratories. Molecular machine learning, however, has made great strides in recent years in areas such as molecule generation, \cite{2018_Design, 2017_Grammar, 2018_Jin, 2020_Griffiths, 2019_Elton, 2021_Grosnit, 2019_Hong, 2021_Seo} chemical reaction prediction, \cite{2017_Schwaller, 2017_Jin, 2017_Liu, 2020_Schwaller} and molecular property prediction. \cite{2019_Zhang, 2019_Ryu, 2018_Wu, 2020_Yang, 2020_Jin, 2020_Moon, 2019_Lim} In particular, machine learning property prediction has the potential to cut the attrition rate in the discovery of novel and impactful molecules by virtue of its short inference time.
A rapid, accessible, and accurate machine learning prediction of a photoswitch's properties prior to synthesis would allow promising structures to be prioritised, facilitating photoswitch discovery as well as revealing new structure-property relationships. Recent work by Lopez and co-workers \cite{Mukadum2021} employed machine learning to accelerate a quantum chemistry screening workflow for photoswitches. The screening library in this case was generated from 29 known azoarenes and their derivatives, yielding a virtual library of 255,991 azoarenes in total. The authors observed that screening using active search tripled the discovery rate of photoswitches compared to random search, according to a binary labelling system which assigns a positive label to a molecule possessing $\lambda_{\text{max}} > 450~\text{nm}$ and a negative label otherwise. The approach highlights the potential for active learning and Bayesian optimisation methodology to accelerate DFT-based screening. Nonetheless, to our knowledge, the application of machine learning to predict experimental photophysical properties, and the prospective experimental validation of machine learning predictions, remain open challenges. In this paper we present an experimentally validated framework for molecular photoswitch discovery based on curating a large dataset of experimental photophysical data and multitask learning using multioutput Gaussian processes. This framework was designed with the goals of: (i) faster prediction relative to TD-DFT, with training performed directly on experimental data; (ii) improved accuracy relative to human experts; (iii) model predictions that can be operationalised in the context of laboratory synthesis. To achieve these goals, a dataset of the electronic absorption properties of 405 photoswitches in their \emph{E} and \emph{Z} isomeric forms was curated; a full description of the dataset and collated properties is provided in Section~\ref{sec:data_description}.
Following an extensive benchmark study, we identified an appropriate machine learning model and molecular representation for prediction, as detailed in Section~\ref{sec:model_choice}. A key feature of this model is that it is performant in the small-data regime, which matters because photoswitch properties (data labels) obtained via laboratory measurement are costly and time-consuming to collect. Our model uses a multioutput Gaussian process (\textsc{mogp}) approach due to its ability to operate in the multitask learning setting, amalgamating information obtained from molecules with multiple labels. In Section~\ref{sec:dft} we show that the \textsc{mogp} model trained on the curated dataset obtains predictive accuracy comparable to TD-DFT (at the CAM-B3LYP level of theory) and suffers only slight degradations in accuracy relative to TD-DFT methods with data-driven linear corrections, whilst maintaining an inference time on the order of seconds. A further benchmark against a cohort of human experts, as well as a study of how the predictive performance varies as a function of the dataset used for model training, is provided in the Supporting Information. In Section~\ref{sec:valid} we use our approach to screen a set of commercially available azoarenes, and identify several motifs that display separated electronic absorption bands of their isomers, exhibit red-shifted absorptions, and are suited for information transfer and photopharmacological applications. \section{Dataset Curation} \label{sec:data_description} Experimentally-determined properties of azobenzene-derived photoswitch molecules reported in the literature were curated. We include azobenzene derivatives with a diverse range of substitution patterns and functional groups to cover a large volume of chemical space. This is vitally important from a synthetic point of view as such functional groups serve as handles for further synthetic modification.
Furthermore, we also included azoheteroarenes and cyclic azobenzenes, which possess promising photophysical and photochemical properties relative to the unmodified azobenzene motif. \cite{2017_Calbo} The dataset includes properties for 405 photoswitches. The molecular structures of these switches are denoted according to the simplified molecular input line entry system (SMILES). \cite{1988_Weininger} A full list of references for the data sources is provided in the Supporting Information. The following properties were collated from the literature, where available. (i) The rate of thermal isomerisation (units: $\mathrm{s}^{-1}$), which is a measure of the thermal stability of the metastable isomer in solution. This corresponds to the \textit{Z} isomer for non-cyclic azophotoswitches and the \textit{E} isomer for cyclic azophotoswitches. (ii) The PSS of the stated isomer at the given photoirradiation wavelength. These values are typically obtained by continuous irradiation of the photoswitch in solution until a steady-state distribution of the \emph{E} and \emph{Z} isomers is obtained. The reported PSS values correspond to solution-phase measurements performed in the stated solvents. (iii) The irradiation wavelength, reported in nanometers. This corresponds to the specific wavelength of light used to irradiate samples to drive \emph{E}-\emph{Z} or \emph{Z}-\emph{E} isomerisation such that a PSS is obtained, in the stated solvent. (iv) The experimental transition wavelengths, reported in nanometers. These values correspond to the wavelength at which the $\pi-\pi{^*}$/$\emph{n}-\pi{^*}$ electronic transition has a maximum for the stated isomer. These data were collated from solution-phase experiments in the solvent stated. (v) The DFT-computed transition wavelengths, reported in nanometers. These values were obtained using solvent continuum TD-DFT methods and correspond to the predicted $\pi-\pi{^*}$/$\emph{n}-\pi{^*}$ electronic transition maximum for the stated isomer.
(vi) The extinction coefficient (in units of M$^{-1}$cm$^{-1}$), corresponding to how strongly a molecular species absorbs light, in the stated solvent. (vii) The theoretically-computed Wiberg index \cite{1968_Wiberg} (through the analysis of the SCF density calculated at the PBE0/6-31G** level of theory \cite{2017_Calbo}), which is a measure of the bond order of the N=N bond in an azo-based photoswitch, giving an indication of the `strength' of the azo bond. Using the data collated in this dataset, we focus on using our model to predict the four experimentally-determined transition wavelengths below. We focus on these four properties as they are the core determinants of quantitative, bidirectional photoswitching. \cite{2019_Crespi} These are: the $\pi-\pi{^*}$ transition wavelength of the \emph{E} isomer (data labels for 392 molecules exist in our dataset); the \emph{n}$-\pi{^*}$ transition wavelength of the \emph{E} isomer (141 molecules); the $\pi-\pi{^*}$ transition wavelength of the \emph{Z} isomer (93 molecules); and the \emph{n}$-\pi{^*}$ transition wavelength of the \emph{Z} isomer (123 molecules). We would like to emphasise that other photophysical or thermal properties could also be investigated using machine learning approaches, notably the thermal half-life of the metastable state. However, there are fewer reports of experimentally-derived thermal half-lives, significantly reducing the data on which we can train our machine learning models; these other properties will be investigated in future studies. \section{Machine Learning Prediction Pipeline}\label{sec:model_choice} There are three constituents to the prediction pipeline: a dataset, a model, and a representation. In terms of the choice of dataset used for model training, we describe our curated dataset in \autoref{sec:data_description}.
We present results in the Supporting Information comparing models trained on the curated dataset against those trained on a large out-of-domain dataset of 6142 photoswitches. \cite{2019_Beard} In terms of the choice of model, we evaluate a broad range including Gaussian processes (\textsc{gp}), random forest (\textsc{rf}), Bayesian neural networks, graph convolutional networks, message-passing neural networks, graph attention networks, LSTMs with augmented SMILES and attentive neural processes (\textsc{anp}). The full results of our experiments, as well as all hyperparameter settings, are provided in the Supporting Information, where Wilcoxon signed-rank tests \cite{1945_Wilcoxon} indicate weak evidence that multitask learning affords improvements over the single-task setting. All experiments may be reproduced via the scripts provided at \url{https://github.com/Ryan-Rhys/The-Photoswitch-Dataset}. We chose the multioutput Gaussian process (\textsc{mogp}) to take forward to the comparison against TD-DFT and experimental screening due to its predictive performance in the multitask setting as well as its ability to provide uncertainty estimates. We illustrate some use-cases for uncertainty estimates with confidence-error curves in the Supporting Information. \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.98\textwidth]{Figures/white_box6.png} \end{center} \caption{Marginal boxplot showing the performance of representations aggregated over different models (\textsc{rf}, \textsc{gp}, \textsc{mogp} and \textsc{anp}). We evaluate performance on 20 random train/test splits of the photoswitch dataset in a ratio of 80/20 using the mean absolute error (MAE) as the performance metric. An individual box is computed using the mean values of the MAE for the four models for the representation indicated by the associated colour and shows the range in addition to the upper and lower quartiles of the error distribution.
The plot indicates that fragprints are the best representation on the \emph{E} isomer $\pi - \pi^*$ prediction task and RDKit fragments alone are disfavoured across all tasks.} \label{boxplot} \end{figure} In terms of the choice of representation, we evaluate three commonly-used descriptors: RDKit fragment features, \cite{rdkit} ECFP fingerprints, \cite{2010_Rogers} and a hybrid `fragprints' representation formed by concatenating the Morgan fingerprint and fragment feature vectors. The performance of the RDKit fragment, ECFP fingerprint and fragprint representations on the wavelength prediction tasks is visualised in \autoref{boxplot}, where aggregation is performed over the \textsc{rf}, \textsc{gp}, \textsc{mogp} and \textsc{anp} models. This analysis motivated our use of the fragprints representation in conjunction with the \textsc{mogp} for the TD-DFT comparison and experimental screening. We now briefly describe Gaussian processes and in particular the multioutput Gaussian process with Tanimoto kernel that we employ for prediction. \subsection{Gaussian Processes} In the context of machine learning, a Gaussian process is a Bayesian nonparametric model for functions. Practical advantages of \textsc{gp}s for molecular datasets include the fact that they have few hyperparameters to tune and maintain uncertainty estimates over property values. \cite{2006_Rasmussen, 2007_Obrezanova, moss2020gaussian} A \textsc{gp} is defined as a collection of random variables, $\{f(\mathbf{x_1}), f(\mathbf{x_2}), \dotsc \}$, any finite subset of which is distributed according to a multivariate Gaussian. \cite{2006_Rasmussen} A stochastic function $f: \mathbb{R}^D \to \mathbb{R}$ that follows a \textsc{gp} is fully specified by a mean function $m(\cdot)$ and a covariance function or kernel $k(\cdot, \cdot)$ and is written $f \sim \mathcal{GP}(m(\cdot), k(\cdot,\cdot))$.
When using \textsc{gp}s for molecular property regression tasks, we seek to perform Bayesian inference over a latent function $f$ that represents the mapping between the inputs $\{\mathbf{x_1}, \dotsc , \mathbf{x_N}\}$ and their property values $\{f(\mathbf{x_1}), \dotsc , f(\mathbf{x_N})\}$. In practice, we receive the inputs together with potentially noise-corrupted observations of their property values $\{y_1, \dotsc , y_N\}$. The mean function $m(\mathbf{x})$ is typically set to zero following standardisation of the data. The kernel function $k(\mathbf{x} ,\mathbf{x'})$ computes the similarity between molecules $\mathbf{x}$ and $\mathbf{x'}$. In all our experiments we use bit/count vectors to represent molecules and hence we choose the Tanimoto kernel \cite{2005_Ralaivola} defined as \begin{equation}\label{eq:tani} k_{\text{Tanimoto}}(\mathbf{x}, \mathbf{x'}) = \sigma_{f}^2 \cdot \frac{\langle\mathbf{x}, \mathbf{x'}\rangle}{\norm{\mathbf{x}}^2 + \norm{\mathbf{x'}}^2 - \langle\mathbf{x}, \mathbf{x'}\rangle}, \end{equation} \noindent where $\mathbf{x}$ and $\mathbf{x'}$ are count vectors, $\sigma_{f}$ is a signal variance hyperparameter and $\langle\cdot, \cdot\rangle$ denotes the Euclidean dot product. Given our choice of mean function and kernel, we place a \textsc{gp} prior over $f$, $p(f(\mathbf{x})| \theta) = \mathcal{GP}\big(0, K(\mathbf{x}, \mathbf{x'})\big)$, where the notation $K(\mathbf{x}, \mathbf{x}')$ denotes a kernel matrix whose entries are given as $[K]_{ij} = k(\mathbf{x}_i, \mathbf{x}_j)$ and $\theta$ denotes the set of kernel hyperparameters (e.g. the signal variance in \autoref{eq:tani}). We also specify a likelihood function $p(y_i | f)$ which depends on $f(\mathbf{x_i})$ only and is typically taken to be Gaussian, $\mathcal{N}(y_i | f(\mathbf{x_i}), \sigma_y^2)$.
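As a concrete, minimal sketch (our own illustrative code, not the implementation in the accompanying repository), \autoref{eq:tani} can be evaluated for a pair of count vectors represented as plain Python lists:

```python
def tanimoto_kernel(x, x_prime, sigma_f=1.0):
    """Tanimoto kernel between two bit/count vectors, as defined in the text.

    sigma_f is the signal variance hyperparameter; identical vectors give
    sigma_f**2 and vectors with no shared bits give 0.
    """
    dot = sum(a * b for a, b in zip(x, x_prime))
    denom = sum(a * a for a in x) + sum(b * b for b in x_prime) - dot
    return sigma_f ** 2 * dot / denom
```

Because the inputs are sparse fragprint vectors, the kernel reduces to a similarity in $[0, \sigma_f^2]$ for binary inputs, which keeps the kernel matrix well scaled.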
We assume the noise level $\sigma_y^2$ is homoscedastic in this paper, but it can also be made heteroscedastic by introducing a dependence on the input, $\sigma_y^2(\mathbf{x})$. \cite{2021_Griffiths} Once we have observed some data $(X, \mathbf{y})$, where $X = \{\mathbf{x_i}\}_{i=1}^N$ is a set of molecules and $\mathbf{y} = \{y_i\}_{i=1}^N$ are their property values, the joint distribution over the observed data $\mathbf{y}$ and the predicted function values $\mathbf{f}_*$ at test locations $X_*$ may be written as \begin{align}\label{eq:joint_prior} \begin{bmatrix} \mathbf{y} \\ \mathbf{f_*} \end{bmatrix} \sim \mathcal{N} \bigg(\mathbf{0}, \begin{bmatrix} K(X, X) + \sigma_{y}^2 I & K(X, X_*) \\ K(X_*, X) & K(X_*, X_*) \end{bmatrix} \bigg). \end{align} \noindent The joint prior in \autoref{eq:joint_prior} may be conditioned on the observations through $p(\mathbf{f_*}| \mathbf{y}) = \frac{p(\mathbf{f_*}, \mathbf{y})}{p(\mathbf{y})}$, which enforces that the joint prior agrees with the observed target values $\mathbf{y}$. The predictive distribution is given as $p(\mathbf{f_*}| X, \mathbf{y}, X_*) = \mathcal{N}\big(\mathbf{\bar{f}_*}, \text{cov}(\mathbf{f_*})\big)$ with the predictive mean at test locations $X_*$ being $\mathbf{\bar{f_*}} = K(X_*, X)[K(X, X) + \sigma_{y}^2 I]^{-1} \mathbf{y}$ and the predictive uncertainty being $\text{cov}(\mathbf{f_*}) = K(X_*, X_*) - K(X_*, X)[K(X, X) + \sigma_{y}^2 I]^{-1} K(X, X_*)$. The predictive mean is the quantity used for prediction while the predictive uncertainty can inform us as to the model's prediction confidence.
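The closed-form predictive equations above can be sketched in a few lines of NumPy. This is a minimal illustration assuming precomputed kernel matrices; the function and variable names are ours, not those of the accompanying repository:

```python
import numpy as np

def gp_predict(K, K_star, K_star_star, y, sigma_y):
    """Posterior mean and covariance of a GP at test locations.

    K: train/train kernel matrix (N x N); K_star: test/train cross
    matrix (M x N); K_star_star: test/test matrix (M x M); y: targets.
    Uses a Cholesky factorisation of K + sigma_y^2 I for stability,
    rather than forming the matrix inverse explicitly.
    """
    N = K.shape[0]
    L = np.linalg.cholesky(K + sigma_y ** 2 * np.eye(N))
    # alpha = (K + sigma_y^2 I)^{-1} y via two triangular solves
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_star @ alpha
    v = np.linalg.solve(L, K_star.T)
    cov = K_star_star - v.T @ v
    return mean, cov
```

The Cholesky route is the standard numerically stable way to evaluate both the predictive mean and covariance with a single factorisation.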
The \textsc{gp} hyperparameters are learned through the optimisation of the marginal likelihood \begin{align}\label{eq:log_lik} \log p(\mathbf{y}| X, \theta) =& \underbrace{-\frac{1}{2} \mathbf{y}^{\top}(K_{\theta}(X, X) + \sigma_{y}^2I)^{-1} \mathbf{y}}_\text{encourages fit with data} \\ &\underbrace{-\frac{1}{2} \log | K_{\theta}(X, X) + \sigma_{y}^2 I |}_\text{controls model capacity} -\frac{N}{2} \log(2\pi), \nonumber \end{align} \noindent where $N$ is the number of observations and the subscript on the kernel matrix $K_{\theta}(X, X)$ indicates its dependence on the set of hyperparameters $\theta$. The two terms in the expression for the marginal likelihood represent the Occam factor \cite{2001_Rasmussen} in their preference for selecting models of intermediate capacity. \subsection{Multioutput Gaussian Processes (MOGPs)} A \textsc{mogp} generalises the \textsc{gp} to multiple outputs, and a common use case is multitask learning. In multitask learning, tasks are learned in parallel using a shared representation, the idea being that learning for one task may benefit from the training signals of related tasks. In the context of photoswitches, the tasks constitute the prediction of the four transition wavelengths. We wish to perform Bayesian inference over a stochastic function $f: \mathbb{R}^D \to \mathbb{R}^P$, where $P$ is the number of tasks and we possess observations $\{(\mathbf{x_{11}}, y_{11}), \dotsc , (\mathbf{x_{1N}}, y_{1N}), \dotsc , (\mathbf{x_{P1}}, y_{P1}), \dotsc , (\mathbf{x_{PN}}, y_{PN})\}$; we do not necessarily have property values for all tasks for a given molecule. To construct a multioutput \textsc{gp} we define a new kernel function $k(\mathbf{x}, \mathbf{x'}) \cdot B[i, j]$, where $B$ is a positive semidefinite $P \times P$ matrix whose $(i, j)\text{th}$ entry multiplies the covariance of the $i$-th function at $\mathbf{x}$ and the $j$-th function at $\mathbf{x'}$.
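A single entry of this product kernel can be sketched as follows, taking the Tanimoto kernel as the base kernel and building the task covariance $B$ from a lower-triangular factor so that it is positive semidefinite by construction. All names here are illustrative, not taken from the repository:

```python
import numpy as np

def mogp_kernel_entry(x, x_prime, task_i, task_j, L_factor, sigma_f=1.0):
    """One entry of the multioutput kernel k(x, x') * B[i, j] (a sketch).

    L_factor is a lower-triangular P x P matrix; B = L L^T is then
    positive semidefinite, and the entries of L are learned alongside
    the base-kernel hyperparameters.
    """
    dot = float(np.dot(x, x_prime))
    base = sigma_f ** 2 * dot / (np.dot(x, x) + np.dot(x_prime, x_prime) - dot)
    B = L_factor @ L_factor.T  # task (coregionalisation) covariance
    return base * B[task_i, task_j]
```

With $L$ set to the identity the tasks decouple into independent \textsc{gp}s; off-diagonal structure in $B$ is what allows one wavelength task to borrow statistical strength from another.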
Such a multioutput \textsc{gp} is termed the intrinsic model of coregionalisation (ICM). \cite{2007_Williams} Inference proceeds in the same manner as for vanilla \textsc{gp}s, substituting the new expression for the kernel into the equations for the predictive mean and variance. Positive semidefiniteness of $B$ may be guaranteed by parametrising its Cholesky decomposition $B = LL^{\top}$, where $L$ is a lower triangular matrix whose parameters may be learned alongside the kernel hyperparameters by maximising the marginal likelihood in \autoref{eq:log_lik}, substituting the appropriate kernel. While it is widely noted that \textsc{gp}s scale poorly to large datasets due to the $O(N^3)$ cost of training, where $N$ is the number of datapoints, \cite{2006_Rasmussen} recent advances have seen \textsc{gp}s scale to millions of data points using multi-GPU parallelisation. \cite{2019_Pleiss} Nonetheless, on CPU hardware, scaling \textsc{gp}s to datasets on the order of $10,000$ data points can prove challenging. For the applications we consider, however, we are unlikely to be fortunate enough to encounter datasets of relevant experimental measurements on the order of tens of thousands of data points, and so CPU hardware is sufficient for this study. \section{MOGP Prediction Compared against TD-DFT}\label{sec:dft} We compare the \textsc{mogp}, Tanimoto kernel and fragprints combination against two widely-utilised levels of TD-DFT: CAM-B3LYP \cite{2004_Yanai} and PBE0. \cite{1996_Perdew, 1999_Adamo} While the CAM-B3LYP level of theory offers highly accurate predictions, its computational cost is high relative to that of machine learning methods. To obtain the predictions for a single photoswitch molecule, one must perform a ground state energy minimisation followed by a TD-DFT calculation.
\cite{2015_Belostotskii} In the case of photoswitches these calculations need to be performed for both molecular isomers, and possibly multiple conformations, which further increases the wall-clock time. When screening many molecules, this cost, in addition to the expertise required to perform the calculations, may be prohibitive; in practice it is often easier to screen candidates based on human chemical intuition. In contrast, inference in a data-driven model takes on the order of seconds, but may yield poor results if the training set is out-of-domain relative to the prediction task. Further background on TD-DFT is available in the Supporting Information. In \autoref{tab_merge1} we present the performance comparison on 99 molecules for CAM-B3LYP and 114 molecules for PBE0, both using the 6-31G** basis set, taken from the results of a benchmark quantum chemistry study \cite{2011_Jacquemin} to which the reader is referred for all details of the calculations. \footnote[1]{The TD-DFT CPU runtime estimates are taken from \citep{2015_Belostotskii} and hence represent a ballpark figure that is liable to decrease with advances in high performance computing.} We elect to include an additional $15$ molecules in the test set for PBE0. These additional molecules are not featured in the study by \citet{2011_Jacquemin} but are reported in \citep{2017_Calbo} using the same basis set. It should also be noted that the data presented in \citet{2011_Jacquemin} contains measurements for the same molecules in different solvents. In our work we absorb solvent effects into the noise. Specifically, we do not treat the solvent as part of the molecular representation, and for duplicated molecules we choose a single solvent measurement at random. We report the mean absolute error (MAE) and additionally the mean signed error (MSE) in order to assess systematic deviations in the predictive performance of the TD-DFT methods.
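The two metrics can be sketched in plain Python (the helper name is ours); note that a near-zero mean signed error can coexist with a large MAE, which is why both are reported:

```python
def mae_mse(y_true, y_pred):
    """Mean absolute error and mean signed error of a set of predictions.

    The signed error exposes systematic over- or under-prediction that
    the absolute error alone would hide.
    """
    errors = [p - t for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / len(errors)
    mse = sum(errors) / len(errors)
    return mae, mse
```

A method that overshoots half the time and undershoots the other half by the same margin has zero MSE but a non-zero MAE; a consistently red- or blue-shifted method has an MSE close in magnitude to its MAE.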
For the \textsc{mogp} model, we perform leave-one-out validation, testing on a single molecule and training on the remaining molecules together with the experimentally-determined property values collated from the synthesis literature. We then average the prediction errors and report the standard error. \begin{table*}[h] \caption{\textsc{mogp} against TD-DFT performance comparison on the PBE0 benchmark consisting of 114 molecules, and the CAM-B3LYP benchmark consisting of 99 molecules. Best metric values for each benchmark are highlighted in bold.} \resizebox{0.98\textwidth}{!}{ \centering \begin{tabular}{l l | c c | c} \toprule \multicolumn{2}{c|}{{\bf Method}} & \multicolumn{2}{c|}{{\bf Accuracy Metric (nm)}} & \multicolumn{1}{c}{{\bf CPU Runtime ($\downarrow$)}} \\ & & MAE ($\downarrow$) & MSE & \\ \hline \multicolumn{2}{c|}{{\bf \underline{PBE0 Benchmark}}} & & \\ \textsc{mogp} & & $14.7 \pm 1.2$ & $\textbf{0.1} \pm \textbf{1.9}$ & $\textbf{<}$ \textbf{1 minute} \\ PBE0 & uncorrected & $26.0 \pm 1.8$ & $-19.1 \pm 2.5$ & \\ & linear correction & $\textbf{12.4} \pm \textbf{1.3}$ & $-1.2 \pm 1.8$ & ca. 228 days \\ \hline \multicolumn{2}{c|}{{\bf \underline{CAM-B3LYP Benchmark}}} & & \\ \textsc{mogp} & & $15.4 \pm 1.4$ & $-0.3 \pm 2.1$ & $\textbf{<}$ \textbf{1 minute} \\ CAM-B3LYP & uncorrected & $16.5 \pm 1.6$ & $6.7 \pm 2.2$ & \\ & linear correction & $\textbf{10.7} \pm \textbf{1.2}$ & $\textbf{0.0} \pm \textbf{1.6}$ & ca. 396 days \\ \bottomrule \end{tabular}} \label{tab_merge1} \end{table*} The \textsc{mogp} model outperforms PBE0 by a large margin and provides comparable performance to CAM-B3LYP. In terms of runtime, there is no contest. The MSE values for the TD-DFT methods, however, indicate a systematic deviation in the TD-DFT predictions. This motivates the addition of a data-driven correction to the TD-DFT predictions.
As such, we train a Lasso model with an $L_1$ multiplier of $0.1$ on the prediction errors of the TD-DFT methods and apply this correction when evaluating the TD-DFT methods on the held-out set in leave-one-out validation. We choose Lasso because empirically it outperforms linear regression in fitting the errors, owing to the sparsity it induces in the high-dimensional fragprint feature vectors. We show the Spearman rank-order correlation coefficients of all methods and the error distributions in the Supplementary Information. There, it is observed that the rank correlation of the TD-DFT predictions improves on applying the linear correction. Furthermore, the error distribution becomes more symmetric on applying the correction. \section{Human Performance Benchmark}\label{sec:human} \begin{figure*}[!htbp] \centering \subfigure{\includegraphics[width=0.41\textwidth]{Figures/human_mols2.png}} \subfigure{\includegraphics[width=0.55\textwidth]{Figures/human_performance_comparison_new.png}} \caption{A performance comparison between human experts (orange) and the \textsc{mogp}-fragprints model (blue). MAEs are computed on a per molecule basis across all human participants.} \label{human} \end{figure*} In practice, candidate screening is undertaken based on the opinion of a human expert due to the speed at which predictions may be obtained. While inference in a data-driven model is comparable to the human approach in terms of speed, in this section we compare the predictive accuracy of the two approaches. To this end, we assembled a panel of 14 human experts, comprising postdoctoral research assistants and PhD students in photoswitch chemistry with a median research experience of 5 years. The assigned task was to predict the \emph{E} isomer $\pi-\pi{^*}$ transition wavelength for five molecules taken from the dataset. A reference molecule was also provided, together with its associated $\pi-\pi{^*}$ wavelength.
The reference molecule possesses one, two, or three point changes relative to the target molecule and serves to mimic the laboratory decision-making process of predicting an unknown molecule's property with respect to a known one. In all instances, those polled had received formal training in the fundamentals of UV-Vis spectroscopy. We note that one limitation of this study is that the human chemists were not provided with the full dataset of 405 photoswitch molecules in advance of making their predictions. Our goal in constructing the study in this fashion was to compare the benefits of dataset curation, together with a machine learning model to internalise the information contained in the data, against the experience acquired over a photoswitch chemist's research career. Analysing the MAE across all humans per molecule (\autoref{human}), we note that the humans perform worse than the \textsc{mogp} model in all instances. In going from molecule 1 to 5, the number of point changes on the molecule increases steadily, thus increasing the difficulty of prediction. Noticeably, the human performance is approximately five-fold worse on molecule 5 (three point changes) relative to molecule 1 (one point change). This highlights the fact that in instances of multiple functional group modifications, human experts are unable to reliably predict the impact on the \emph{E} isomer $\pi-\pi^*$ transition wavelength. The full results breakdown is provided in the Supplementary Information. \section{Screening for Novel Photoswitches using the MOGP} \label{sec:valid} Having determined that the \textsc{mogp} approach does not suffer a substantial degradation in accuracy relative to TD-DFT, we use it to perform experimental screening over $7,265$ commercially available photoswitch molecules. Diazo-containing compounds supplied by Molport and Mcule were identified.
As of November 2020, when the experiments were planned, there were 7,265 commercially purchasable diazo molecules. The full list is made available in the GitHub repository. We then used the \textsc{mogp} to score the list, identifying 11 molecules satisfying our screening criteria detailed in the following section. Our aim is to discover a novel azophotoswitch motif which satisfies the criteria. \subsection{Screening Criteria} To demonstrate the utility of our approach, we screened commercially available photoswitches based on selective criteria and compared their experimental photophysical properties to the predictions made by the \textsc{mogp} model. The criteria imposed were selected to showcase that properties could be obtained using the \textsc{mogp} model which are typically difficult to engineer, yet beneficial for materials and photopharmacological applications. The criteria are: \begin{enumerate} \item A $\pi-\pi^*$ maximum in the range of 450-600 nm for the \textit{E} isomer. \item A separation greater than 40 nm between the $\pi-\pi^*$ of the \textit{E} isomer and the $\pi-\pi^*$ of the \textit{Z} isomer. \end{enumerate} The first criterion was chosen so as to limit UV-induced damage to materials and improve tissue penetration depths, and the second criterion was chosen by analogy to azopyrazole photoswitches reported previously \cite{2014_Weston}, where the specified level of band separation provided complete bidirectional photoswitching; this degree of energetic separation between the $\pi-\pi^*$ bands of the isomers enables one isomer to be selectively addressed using light emitting diodes (LEDs), which are commonly used for their low power consumption but often exhibit broad emission profiles relative to laser diodes, see Supporting Information.
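To make the screening step concrete, the two criteria amount to a simple vectorised filter over the model's predicted wavelengths. The following sketch is illustrative only and is not code from this work; the arrays are hypothetical stand-ins for the \textsc{mogp} predictions over the commercial library:

```python
import numpy as np

# Hypothetical predicted pi-pi* maxima (nm); in the screening pipeline these
# would be MOGP predictions over the ~7,265 purchasable diazo compounds.
e_pipi = np.array([456.0, 459.0, 443.0, 471.0, 430.0])  # E isomer
z_pipi = np.array([368.0, 377.0, 431.0, 381.0, 360.0])  # Z isomer

# Criterion 1: E isomer pi-pi* maximum within the 450-600 nm window.
crit1 = (e_pipi >= 450.0) & (e_pipi <= 600.0)
# Criterion 2: separation greater than 40 nm between the E and Z pi-pi* bands.
crit2 = (e_pipi - z_pipi) > 40.0

hits = np.flatnonzero(crit1 & crit2)
print(hits.tolist())  # [0, 1, 3]
```

Applied to the real prediction arrays, the surviving indices would be the candidates carried forward to experimental validation.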
\begin{figure*}[!htbp] \begin{center} \includegraphics[width=0.8\textwidth]{Figures/structures_5.png} \end{center} \caption{The chemical structures of the 11 commercially available azo-based photoswitches that were predicted to meet the criteria.} \label{structures2} \end{figure*} \subsection{Lead Candidates} Based on our stated selection criteria, 11 commercially available molecules were identified via the predictions of the \textsc{mogp} model, Figure \ref{structures2}. The SMILES for these structures are provided in the Supplementary Information. Solutions of these photoswitches were prepared in the dark to a concentration of 25 $\mu$M in DMSO. The UV-vis spectra of these photoswitches in their thermodynamically stable \emph{E} isomeric form were recorded using a photodiode array spectrophotometer. The samples were continuously irradiated with various wavelengths of light directed at 90$^{\circ}$ to the measurement path. UV-vis spectra were repeatedly recorded during irradiation until no further change in the UV-vis trace was observed, indicating attainment of the PSS. This \emph{in situ} irradiation procedure was implemented so that even compounds that display short thermal half-lives could be measured reliably. By repeating this measurement process with one or more different irradiation wavelengths, we were able to quantify the PSS and subsequently predict the UV-vis spectrum of the pure \emph{Z} isomer using the method detailed by Fischer. \cite{Fischer1967} With the spectra of both the \emph{E} and \emph{Z} isomers for each photoswitch in hand, the wavelength of the $\pi-\pi^*$ band of each isomer was determined experimentally and compared with that predicted by our model. Full experimental details are made available in the SI.
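Fischer's method estimates the PSS composition from spectra recorded at two irradiation wavelengths; once the \emph{Z} fraction at a PSS is known, recovering the pure \emph{Z} spectrum is a linear unmixing under Beer--Lambert additivity. The sketch below, with synthetic data, shows only that final unmixing step and is not the analysis code used in this work:

```python
import numpy as np

def pure_z_spectrum(a_pss, a_e, z_fraction):
    """Recover the pure-Z absorption spectrum from a PSS mixture spectrum.

    a_pss: absorbance at the photostationary state (E/Z mixture)
    a_e: absorbance of the pure E isomer (thermally equilibrated sample)
    z_fraction: fraction of Z isomer at the PSS (0 < z_fraction <= 1)

    Assumes Beer-Lambert additivity: a_pss = z*A_Z + (1 - z)*A_E.
    """
    a_pss = np.asarray(a_pss, dtype=float)
    a_e = np.asarray(a_e, dtype=float)
    return (a_pss - (1.0 - z_fraction) * a_e) / z_fraction

# Synthetic check: build a mixture from known pure spectra and recover A_Z.
a_e = np.array([1.00, 0.60, 0.20])
a_z = np.array([0.30, 0.50, 0.45])
z = 0.9
a_pss = z * a_z + (1 - z) * a_e
print(np.allclose(pure_z_spectrum(a_pss, a_e, z), a_z))  # True
```

The unmixing is numerically well behaved when the PSS is rich in the \emph{Z} isomer; for low conversions the division by a small $z$ amplifies measurement noise.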
We compare the model predictions against the experimentally-determined values in \autoref{tab_preds}. The \textsc{mogp} MAE was $22.7$ nm on the \emph{E} isomer $\pi-\pi^*$ wavelength prediction task and $21.6$ nm on the \emph{Z} isomer $\pi-\pi^*$ wavelength prediction task; these errors are comparable (\emph{E} isomer) and slightly higher (\emph{Z} isomer) relative to the benchmark study in Section 3, reflecting the challenge of achieving strong generalisation performance when extrapolating to large regions of chemical space. The first criterion is a requirement on the absolute rather than the relative value of the $\pi-\pi^*$ transition wavelengths and so the experimental values may be subject to shifts depending on the solvent. Molecules can display solvatochromism, whereby the dielectric of the solvent, as well as hydrogen-bonding interactions, can influence the electronic transitions, giving rise to hypsochromic or bathochromic shifts in the absorption spectra. This can manifest as changes in the position, intensity and shape of the UV-vis absorption spectrum. As such, the 450 nm criterion could be considered a rough guide and candidates that fall just short of the threshold may fulfill the criterion in a different solvent. Nonetheless, given that the \textsc{mogp} model is trained on just a few hundred data points and is asked to extrapolate to several thousand structures, the accuracy is promising and should improve further as more experimental data becomes available. In terms of satisfying the pre-specified criteria, 7 of the 11 molecules possessed an \emph{E} isomer $\pi-\pi^*$ wavelength greater than 450 nm, 10 of the 11 molecules possessed a separation between the \emph{E} and \emph{Z} isomer $\pi-\pi^*$ wavelengths of greater than 40 nm and 6 of the 11 molecules satisfied both criteria. Compound 7 did not photoswitch under irradiation.
\definecolor{caribbeangreen}{rgb}{0.0, 0.8, 0.6} \definecolor{carrotorange}{rgb}{0.93, 0.57, 0.13} \definecolor{cinnabar}{rgb}{0.89, 0.26, 0.2} \begin{table}[h] \caption{\textsc{mogp} predictions compared against experimental values (nm). The traffic light system indicates whether the molecules satisfied both criteria ({\color{caribbeangreen}green}) or one criterion ({\color{carrotorange}orange}); all molecules satisfied at least one criterion. The model MAE was 22.7 nm for the \emph{E} isomer $\pi - \pi^*$ and 21.6 nm for the \emph{Z} isomer $\pi - \pi^*$.} \resizebox{0.983\textwidth}{!}{ \centering \begin{tabular}{c|cc|cccc} \toprule & \multicolumn{2}{c|}{{ \bf \underline{Model}}} & \multicolumn{4}{c}{{ \bf \underline{Experimental}}} \\ Switch & \begin{tabular}[c]{@{}c@{}}\emph{E} $\pi - \pi^*$ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\emph{Z} $\pi - \pi^*$ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\emph{E} $\pi - \pi^*$ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\emph{Z} $\pi - \pi^*$ \end{tabular} & \emph{Z} PSS (\%) & \begin{tabular}[c]{@{}c@{}}ca.
t$\frac{1}{2}$ (s)\end{tabular} \\ \midrule \color{carrotorange} \textbf{1} & \color{carrotorange}456 & \color{carrotorange}368 & \color{carrotorange}446 & \color{carrotorange}355 & \color{carrotorange}90 (405 nm) & \color{carrotorange}<5 \\ \color{carrotorange} \textbf{2} & \color{carrotorange}459 & \color{carrotorange}377 & \color{carrotorange}441 & \color{carrotorange}356 & \color{carrotorange}96 (405 nm) & \color{carrotorange}<1 \\ \color{carrotorange} \textbf{3} & \color{carrotorange}457 & \color{carrotorange}377 & \color{carrotorange}399 & \color{carrotorange}331 & \color{carrotorange}66 (405 nm) & \color{carrotorange}<10 \\ \color{carrotorange} \textbf{4} & \color{carrotorange}463 & \color{carrotorange}373 & \color{carrotorange}445 & \color{carrotorange}357 & \color{carrotorange}94 (405 nm) & \color{carrotorange}<1 \\ \color{caribbeangreen} \textbf{5} & \color{caribbeangreen}471 &\color{caribbeangreen} 381 & \color{caribbeangreen}450 & \color{caribbeangreen}370 & \color{caribbeangreen}68 (450 nm) & \color{caribbeangreen}<1 \\ \color{caribbeangreen} \textbf{6} & \color{caribbeangreen}460 & \color{caribbeangreen}368 & \color{caribbeangreen}451 & \color{caribbeangreen}360 & \color{caribbeangreen}92 (405 nm) & \color{caribbeangreen}<30 \\ \color{carrotorange} \textbf{7} & \color{carrotorange}467 & \color{carrotorange}369 & \color{carrotorange}534 & \color{carrotorange}\textit{n/a} & \color{carrotorange}\textit{n/a} & \color{carrotorange}\textit{n/a} \\ \color{caribbeangreen} \textbf{8} & \color{caribbeangreen}450 & \color{caribbeangreen}359 & \color{caribbeangreen}465 & \color{caribbeangreen}376 & \color{caribbeangreen}87 (405 nm) & \color{caribbeangreen}<10 \\ \color{caribbeangreen} \textbf{9} & \color{caribbeangreen}453 & \color{caribbeangreen}369 & \color{caribbeangreen}468 & \color{caribbeangreen}399 & \color{caribbeangreen}60 (450 nm) & \color{caribbeangreen}<10 \\ \color{caribbeangreen} \textbf{10} & \color{caribbeangreen}453 & \color{caribbeangreen}363 & 
\color{caribbeangreen}471 & \color{caribbeangreen}398 & \color{caribbeangreen}15 (450 nm) & \color{caribbeangreen}<1 \\ \color{caribbeangreen} \textbf{11} & \color{caribbeangreen}453 & \color{caribbeangreen}360 & \color{caribbeangreen}452 & \color{caribbeangreen}379 & \color{caribbeangreen}88 (405 nm) & \color{caribbeangreen}<1 \\ \bottomrule \end{tabular}} \label{tab_preds} \end{table} \begin{figure*}[!htbp] \begin{center} \includegraphics[width=\textwidth]{Figures/UV_vis3.png} \end{center} \caption{The experimental UV-vis absorption spectra of switches \textbf{1}-\textbf{11} measured at 25 $\mu$M in DMSO and shown as the molar extinction coefficient (M$^{-1}$ cm$^{-1}$). Different irradiation wavelengths were employed in order to predict the ``pure'' \emph{Z} spectra using the procedure detailed by Fischer.\cite{Fischer1967} The chemical structures of these switches are shown in Figure \ref{structures2} above.} \label{UV_vis} \end{figure*} \section{Conclusions}\label{sec:conc} We have proposed a data-driven prediction pipeline underpinned by dataset curation and multioutput Gaussian processes. We demonstrated that a \textsc{mogp} model trained on a small curated azophotoswitch dataset can achieve comparable predictive accuracy to TD-DFT, and only slightly degraded performance relative to TD-DFT with a data-driven linear correction, in near-instantaneous time. We used our methodology to discover several motifs that display separated electronic absorption bands of their isomers as well as red-shifted absorption, making them suited to information transfer materials and photopharmacological applications. Avenues for future work include the curation of a dataset of thermal reversion barriers to improve the predictive capabilities of machine learning models, as well as investigating how synthetic chemists may use model uncertainty estimates in the decision process to screen molecules, e.g.
via active learning \cite{2021_Mukadum} and Bayesian optimisation. The confidence-error curves in the Supporting Information show initial promise in this direction and indeed understanding how best to tailor calibrated Bayesian models to molecular representations \cite{moss2020gaussian, 2022_Gauche} is an avenue worthy of pursuit. We release our curated dataset and all code to train models at \url{https://github.com/Ryan-Rhys/The-Photoswitch-Dataset} in order that the photoswitch community may derive benefit from our work. \begin{suppinfo} Supplementary information is provided as a single merged pdf file. \end{suppinfo}
\section{Introduction} \subsection{Motivation and prior work} A classical first-order approach for the minimization of an additive composite problem is the celebrated proximal gradient algorithm. In particular, if the gradient mapping of the smooth part of the cost function is Lipschitz continuous, the algorithm converges with a constant step-size, which avoids a possibly expensive, in terms of function evaluations, linesearch procedure. However, there are many functions which do not exhibit a globally Lipschitz continuous gradient mapping, and thus proximal gradient does not converge with a constant step-size. As a remedy, Lipschitz smoothness was generalized to the notion of relative\footnote{relative to a ``simple'' reference function whose conjugate can be computed easily} smoothness \cite{birnbaum2011distributed,bauschke2017descent,lu2018relatively}. Such functions exhibit a Bregman version of the classical descent inequality and can be optimized with a Bregman version of the proximal gradient method \cite{birnbaum2011distributed,bauschke2017descent,lu2018relatively} using a constant step-size. For instance, this allows one to consider functions whose Hessians grow like a polynomial in $x$ or $1/x$ \cite{lu2018relatively}. If, in addition, the function exhibits global lower bounds relative to the reference function, the method even converges linearly \cite{lu2018relatively,bauschke2019linear}. However, there exist many cost functions which are neither Lipschitz smooth nor Bregman smooth relative to a simple reference function. This leads us to consider functions that exhibit an anisotropic descent inequality \cite{laude2021conjugate,laude2021lower}, which gives rise to a different generalization of the classical proximal gradient algorithm that is studied in this paper. In the convex case the algorithm can be interpreted as a proximal extension of \emph{dual space preconditioning} \cite{maddison2021dual} for gradient descent.
The results of this work are summarized as follows: \subsection{Summary of results} \begin{itemize} \item In \cref{sec:aproxgrad_method} we recapitulate the notion of anisotropic smoothness \cite{laude2021conjugate,laude2021lower} and introduce the proposed algorithm. In particular, we shall discuss the well-definedness of the iterates under certain constraint qualifications. \item In \cref{sec:phi_convexity} we recapitulate the basic notions of $\Phi$-convexity and $\Phi$-concavity and show that the anisotropic descent inequality can be formulated equivalently in terms of $\Phi$-concavity. This is used heavily for developing its calculus and allows us to provide an equivalent interpretation of the algorithm in terms of a \emph{difference of $\Phi$-convex approach}. \item In \cref{sec:calculus_asmooth} we develop a calculus for the anisotropic descent property and provide examples. In particular we show that the anisotropic descent property is closed under pointwise average if the dual Bregman distance is jointly convex. This builds upon recent advances for the Bregman proximal average \cite{wang2021bregman}. A refinement is proved for the exponential reference function which shows that in contrast to Euclidean Lipschitz smoothness the constant in the anisotropic descent property is invariant under pointwise conic combinations. \item In \cref{sec:aproxgrad_analysis} we show subsequential convergence of our method under anisotropic smoothness. As a measure of stationarity we employ a regularized gap function which vanishes at rate $\mathcal{O}(1/K)$. This technique allows one to show that $1/L$ is a feasible step-size even if both terms are nonconvex, as long as an anisotropic prox-boundedness condition holds true. To our knowledge this is also new in the Euclidean case where existing analyses require the step-size to be smaller than $1/L$. We also discuss a linesearch variant of the method if the smoothness constant is unknown. 
Furthermore, we prove the linear convergence of the anisotropic proximal gradient method under a non-Euclidean generalization of the proximal gradient dominated condition \cite{karimi2016linear}. This is closely related to the PL-inequality. We prove that the anisotropic proximal gradient dominated condition is implied by anisotropic strong convexity of the smooth part, which was recently shown to be equivalent to dual relative smoothness in the Bregman sense \cite{laude2021conjugate}. \item In \cref{sec:dca} we examine an equivalent reformulation of the proposed algorithm in terms of a difference of $\Phi$-convex approach. This is leveraged to prove linear convergence if the nonsmooth part of the objective exhibits certain anisotropic lower bounds. \item In \cref{sec:apps} we consider applications based on the examples developed in \cref{sec:calculus_asmooth}. For the exponential reference function we reveal connections to existing algorithms for regularized optimal transport \cite{cuturi2013sinkhorn,benamou2015iterative} and AdaBoost \cite{freund1997decision,schapire1998improved,collins2002logistic}. Exploiting the freedom of a nonzero composite term we provide examples beyond these existing applications. \end{itemize} \subsection{Notation and preliminaries} We denote by $\langle \cdot, \cdot\rangle$ the standard Euclidean inner product on $\mathbb{R}^n$ and by $\|x\|:=\sqrt{\langle x, x \rangle}$ for any $x\in \mathbb{R}^n$ the standard Euclidean norm on $\mathbb{R}^n$. In accordance with \cite{moreau1966fonctionnelles,RoWe98} we extend the classical arithmetic on $\mathbb{R}$ to the extended real line $\overline{\mathbb{R}}:=\mathbb{R}\cup\{-\infty, +\infty\}$. 
We define upper addition $-\infty \mathbin{\dot{+}} \infty = \infty$ and lower addition $-\infty \mathbin{\text{\d{\ensuremath{+}}}} \infty = -\infty$, and accordingly upper subtraction $\infty \mathbin{\dot{-}} \infty = \infty$ and lower subtraction $\infty \mathbin{\text{\d{\ensuremath{-}}}} \infty = -\infty$. The effective domain of an extended real-valued function $f : \mathbb{R}^n \to \overline{\mathbb{R}}$ is denoted by $\dom f:=\{x\in\mathbb{R}^n : f(x)<\infty\}$, and we say that $f$ is proper if $\dom f\neq\emptyset$ and $f(x) > -\infty$ for all $x \in \mathbb{R}^n$; lower semicontinuous (lsc) if $f(\bar x)\leq\liminf_{x\to\bar x}f(x)$ for all $\bar x\in\mathbb{R}^n$; super-coercive if $f(x)/\|x\|\to\infty$ as $\|x\|\to\infty$. We define by $\Gamma_0(\mathbb{R}^n)$ the class of all proper, lsc convex functions $f:\mathbb{R}^n \to \overline{\mathbb{R}}$. For any functions $f :\mathbb{R}^n \to \overline{\mathbb{R}}$ and $g:\mathbb{R}^n \to \overline{\mathbb{R}}$ we define the infimal convolution or epi-addition of $f$ and $g$ as $(f \infconv g)(x) = \inf_{y \in \mathbb{R}^n} g(x-y) + f(y)$. For a proper function $f :\mathbb{R}^n \to \overline{\mathbb{R}}$ and $\lambda \geq 0$ we define the epi-scaling $(\lambda \star f)(x) = \lambda f(\lambda^{-1} x)$ for $\lambda > 0$ and $(\lambda \star f)(x)=\delta_{\{0\}}(x)$ otherwise. We adopt the operator precedence for epi-multiplication and epi-addition from the pointwise case. We denote by $\mathbb{R}^n_+=[0, +\infty)^n$ the nonnegative orthant and by $\mathbb{R}^n_{++}=(0, +\infty)^n$ the positive orthant. We say that the function $\xi(x,y)$ is level-bounded in $x$ locally uniformly in $y$ in the sense of \cite[Definition 1.16]{RoWe98} if for every $\bar y \in \mathbb{R}^n$ and $\alpha \in \mathbb{R}$ we can find a neighborhood $V \in \mathcal{N}(\bar y)$ along with a bounded set $B\subset \mathbb{R}^n$ such that $\{x \in \mathbb{R}^n \mid \xi(x, y) \leq \alpha\} \subseteq B$ for all $y \in V$. 
Let $g_-:=g(-(\cdot))$ denote the reflection of $g$. Since convex conjugation and reflection commute we adopt the notation: $(g_-)^* = (g^*)_- =:g^*_-$. We adopt the notions of essential smoothness, essential strict convexity and Legendre type functions from \cite[Section 26]{Roc70}: We say that a function $f \in \Gamma_0(\mathbb{R}^n)$ is \emph{essentially smooth}, if $\intr(\dom f) \neq \emptyset$ and $f$ is differentiable on $\intr(\dom f)$ such that $\|\nabla f(x^\nu)\|\to \infty$, whenever $\intr(\dom f) \ni x^\nu \to x \in \bdry\dom f$, and \emph{essentially strictly convex}, if $f$ is strictly convex on every convex subset of $\dom \partial f$, and \emph{Legendre type}, if $f$ is both essentially smooth and essentially strictly convex. Otherwise we adopt the notation from \cite{RoWe98}. Our concepts and algorithm involve a reference function $\phi$ which complies with the following standing requirement which is assumed valid across the entire paper unless stated otherwise: \begin{assumenum} \item \label{assum:a1} $\phi \in \Gamma_0(\mathbb{R}^n)$ is of Legendre type with $\dom \phi = \mathbb{R}^n$. \end{assumenum} The dual reference function $\phi^*$ is of Legendre type as well \cite[Theorem 26.3]{Roc70} but does not necessarily have full domain. Instead, thanks to a conjugate duality between super-coercivity and full domain \cite[Proposition 2.16]{bauschke1997legendre}, $\phi^*$ is super-coercive. The gradient mapping $\nabla \phi : \mathbb{R}^n \to \intr \dom \phi^*$ is a diffeomorphism between $\mathbb{R}^n$ and $\intr \dom \phi^*$ with inverse $(\nabla \phi)^{-1} = \nabla \phi^*$ \cite[Theorem 26.5]{Roc70}. A leading example considered in this paper is the exponential reference function: \begin{example} \label{ex:reference_function} Let $\phi :\mathbb{R}^n \to \mathbb{R}$ defined by $\phi(x)=\Exp(x) :=\sum_{j=1}^n \exp(x_j)$. Then $\phi \in \Gamma_0(\mathbb{R}^n)$ is Legendre type with full domain and thus complies with our standing assumption. 
The convex conjugate $\phi^*$ is the Boltzmann--Shannon entropy defined by $$ \phi^*(x) = H(x):= \begin{cases} \sum_{j=1}^n x_j \log(x_j) - x_j & \text{if $x \in \mathbb{R}_{+}^n$} \\ +\infty & \text{otherwise,} \end{cases} $$ where $0\log(0):= 0$. For the gradients we have $\nabla \phi(x) = (\exp(x_j))_{j=1}^n$ and $\nabla \phi^*(x) = (\log(x_j))_{j=1}^n$. \end{example} In the course of the paper we often consider the Bregman distance $D_{\phi^*}$ generated by the dual reference function $\phi^*$: \begin{definition}[Bregman distance] We define the Bregman distance generated by $\phi^*$ as: $$ D_{\phi^*}(x, y) := \begin{cases} \phi^*(x) - \phi^*(y) - \langle \nabla \phi^*(y), x-y \rangle & \text{if $x \in \dom \phi^*$ and $y \in \intr \dom \phi^*$} \\ +\infty & \text{otherwise.} \end{cases} $$ \end{definition} Thanks to \cite[Theorem 3.7(iv)]{bauschke1997legendre} we have that $D_{\phi^*}(x, y) = 0$ if and only if $x=y$. For $\phi=\Exp$ being the exponential reference function as in \cref{ex:reference_function} the Bregman distance $D_{\phi^*}$ is the classical KL-divergence. Thanks to \cite[Theorem 3.7(v)]{bauschke1997legendre} we have the following key relation between $D_{\phi^*}$ and $D_{\phi}$: \begin{lemma} \label{thm:dual_bregman} We have that the identity $$ D_{\phi^*}(x, y) = D_{\phi}(\nabla \phi^*(y), \nabla \phi^*(x)), $$ holds true for any $x,y \in \intr \dom \phi^*$. \end{lemma} \section{Anisotropic proximal gradient} \label{sec:aproxgrad_method} \subsection{Anisotropic descent inequality} The algorithm that is developed in this paper is based on the following anisotropic descent property generalizing \cite{laude2021conjugate} to reference functions $\phi$ which are not necessarily super-coercive.
Up to sign-flip it can be regarded as a globalization of anisotropic prox-regularity \cite[Definition 2.13]{laude2021lower} specialized to smooth functions: \begin{definition}[anisotropic descent inequality] \label{def:adescent} Let $f \in \mathcal{C}^1(\mathbb{R}^n)$ be such that the following constraint qualification holds true \begin{align} \label{eq:cq_adescent} \ran \nabla f \subseteq \ran \nabla \phi. \end{align} Then we say that $f$ satisfies the anisotropic descent property relative to $\phi$ with constant $L>0$ if for all $\bar x \in \mathbb{R}^n$ \begin{equation}\label{eq:adescent} f(x) \leq f(\bar x) + \tfrac{1}{L} \star \phi(x-\bar x + L^{-1}\nabla\phi^*(\nabla f(\bar x))) - \tfrac{1}{L} \star \phi(L^{-1}\nabla\phi^*(\nabla f(\bar x))) \quad \forall x\in\mathbb{R}^n. \end{equation} If $L=1$ we say that $f$ satisfies the anisotropic descent property relative to $\phi$ without further mention of $L$. \end{definition} \begin{remark} Since $\nabla (L^{-1} \star \phi)^* = L^{-1}\nabla \phi^*$, saying that $f$ satisfies the anisotropic descent property relative to $\phi$ with constant $L>0$ means equivalently that $f$ satisfies the anisotropic descent property relative to $\frac1L \star \phi$ (with constant $1$). \end{remark} The constraint qualification \cref{eq:cq_adescent} ensures that the expression $\nabla\phi^*(\nabla f(\bar x)) \in \mathbb{R}^n$ is well defined. If we choose $x:=\bar x$, the descent inequality \cref{eq:adescent} collapses to $f(\bar x) \leq f(\bar x)$ and thus $$ x \mapsto f(\bar x) + \tfrac{1}{L} \star \phi(x-\bar x + L^{-1}\nabla\phi^*(\nabla f(\bar x))) - \tfrac{1}{L} \star \phi(L^{-1}\nabla\phi^*(\nabla f(\bar x))), $$ as a function in $x$ is an upper bound or majorizer of $f$ at $\bar x$. With some abuse of terminology we often refer to a function that only has anisotropic upper bounds as anisotropically smooth. This is different from \cite{laude2021conjugate} which assumes the existence of both upper and lower bounds.
For convex functions, however, the lower bound inequality holds automatically and thus the two notions coincide. The right-hand side of the anisotropic descent inequality can be rewritten in terms of a proximal linearization of $f$ similar to the Bregman descent inequality \cite{birnbaum2011distributed,bauschke2017descent,lu2018relatively} where the arguments in the Bregman distance are shifted by an anisotropic gradient step $\bar y = \bar x - L^{-1}\nabla \phi^*(\nabla f(\bar x))$. In particular this guarantees the following shift-invariance property which is in general lacking in the Bregman descent inequality: \begin{remark}[shift-invariance and generalization of Euclidean descent lemma] \label{rem:shift_invariance} Assume that $f$ has the anisotropic descent property. By definition of the Bregman distance \cref{eq:adescent} can be rewritten as: $$ f(x) \leq f(\bar x) + \langle \nabla f(\bar x), x- \bar x \rangle + D_{L^{-1} \star \phi}(x - \bar y, \bar x - \bar y), $$ for all $x,\bar x \in \mathbb{R}^n$ and $\bar y = \bar x - L^{-1}\nabla \phi^*(\nabla f(\bar x))$. Choose $u,\bar u \in \mathbb{R}^n$ and define $x = u - a$ and $\bar x = \bar u - a$. Then the above inequality yields $$ f(u - a) \leq f(\bar u - a) + \langle \nabla f(\bar u - a), u- \bar u \rangle + D_{L^{-1} \star \phi}(u - \bar w, \bar u - \bar w), $$ for $\bar w = \bar u - L^{-1}\nabla \phi^*(\nabla f(\bar u - a))$. Thus the shifted version $f(\cdot - a)$ of $f$ also has the anisotropic descent property. For $\phi=\frac{1}{2}\|\cdot\|^2$ the upper bound inequality \cref{eq:adescent} specializes to the classical descent lemma: $$ f(x) \leq f(\bar x) + \langle \nabla f(\bar x), x - \bar x \rangle + \tfrac{L}{2}\|x - \bar x\|^2. $$ \end{remark} Next we will introduce the algorithm and discuss the well-definedness of its iterates. 
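Before moving on, a quick numerical sanity check (not part of the paper) of the dual Bregman identity $D_{\phi^*}(x, y) = D_{\phi}(\nabla \phi^*(y), \nabla \phi^*(x))$ stated earlier, for the exponential reference function, where $D_{\phi^*}$ is the (generalized) KL-divergence:

```python
import numpy as np

def bregman(h, grad_h, x, y):
    """Bregman distance D_h(x, y) = h(x) - h(y) - <grad h(y), x - y>."""
    return h(x) - h(y) - np.dot(grad_h(y), x - y)

# phi = Exp (componentwise), phi*(x) = sum x log x - x on the positive orthant.
phi = lambda x: np.sum(np.exp(x))
grad_phi = lambda x: np.exp(x)
phi_star = lambda x: np.sum(x * np.log(x) - x)
grad_phi_star = lambda x: np.log(x)

rng = np.random.default_rng(0)
x, y = rng.uniform(0.1, 2.0, 4), rng.uniform(0.1, 2.0, 4)

# D_{phi*}(x, y) is the generalized KL-divergence ...
d_dual = bregman(phi_star, grad_phi_star, x, y)
kl = np.sum(x * np.log(x / y) - x + y)
print(np.isclose(d_dual, kl))  # True

# ... and equals D_phi(grad phi*(y), grad phi*(x)), as in the lemma.
d_primal = bregman(phi, grad_phi, grad_phi_star(y), grad_phi_star(x))
print(np.isclose(d_dual, d_primal))  # True
```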
\subsection{Definition, well-definedness and standing assumptions} In this paper our goal is to develop a novel algorithm for nonconvex composite minimization problems of the form: \begin{align} \label{eq:opt_prob} \tag{P} \text{minimize}~ F := f + g, \end{align} where \begin{assumenum}[resume*] \item \label{assum:a2} $f\in \mathcal{C}^1(\mathbb{R}^n)$ has the anisotropic descent property with constant $L>0$; \item \label{assum:a3} $g:\mathbb{R}^n \to \overline{\mathbb{R}}$ is proper lsc; \item \label{assum:a4} $\inf F > -\infty$, \end{assumenum} holds without further mention unless stated otherwise. This suggests the following iterative procedure for minimizing the cost function: In each iteration we minimize the anisotropic upper bound at the current iterate $x^k$ to compute the next iterate $x^{k+1}$: \begin{align} \label{eq:majorize_minimize} x^{k+1} = \argmin_{x \in \mathbb{R}^n} f(x^k) + \tfrac{1}{L} \star \phi(x-x^k + L^{-1}\nabla\phi^*(\nabla f(x^k))) - \tfrac{1}{L} \star \phi(L^{-1}\nabla\phi^*(\nabla f(x^k))) + g(x). \end{align} Choose $\lambda >0$. Introducing a new variable $y^k := x^k - \lambda \nabla \phi^*(\nabla f(x^k))$ we can rewrite the update \cref{eq:majorize_minimize} in terms of a Gauss--Seidel-like scheme listed in \cref{alg:aproxgrad}. \begin{algorithm}[H] \caption{Anisotropic proximal gradient} \label{alg:aproxgrad} \begin{algorithmic} \REQUIRE Let $\lambda >0$. Let $x^0 \in \mathbb{R}^n$. \FORALL{$k=0, 1, \dots$} \STATE \begin{align} y^k &= x^k - \lambda \nabla \phi^*(\nabla f(x^k)) \label{eq:forward} \\ x^{k+1} &\in \argmin_{x \in \mathbb{R}^n} ~\lambda \star \phi(x - y^k) + g(x), \label{eq:backward} \end{align} \ENDFOR \end{algorithmic} \end{algorithm} We refer to \cref{eq:forward} as the anisotropic \emph{forward-step} and to \cref{eq:backward} as the anisotropic \emph{backward-step}.
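A minimal runnable sketch of the iteration, under assumptions chosen purely for illustration (none of the specific choices below are taken from the text): we take $g \equiv 0$, a one-dimensional quadratic $f$, and the reference function $\phi(x)=\cosh(x)-1$, for which $\nabla\phi^* = \operatorname{arcsinh}$ is defined on all of $\mathbb{R}$ so the constraint qualification holds automatically. With $g \equiv 0$ the backward-step simply returns $y^k$ since $\phi$ is minimized at the origin, and the scheme reduces to the dual space preconditioning update mentioned in the introduction. We do not verify the anisotropic descent property of this $f$ here.

```python
import numpy as np

# Anisotropic proximal gradient sketch with g = 0 (backward-step trivial)
# and reference function phi(x) = cosh(x) - 1, so grad phi* = arcsinh.
grad_f = lambda x: x - 2.0   # f(x) = (x - 2)^2 / 2, minimizer x* = 2
lam = 0.5                    # step-size lambda

x = 10.0
for _ in range(200):
    y = x - lam * np.arcsinh(grad_f(x))  # forward step
    x = y  # backward step: argmin_x lam*phi((x - y)/lam) is attained at x = y
print(abs(x - 2.0) < 1e-8)  # True
```

Near the minimizer the update contracts the error by roughly a factor $1-\lambda$ per iteration, while far from it the $\operatorname{arcsinh}$ preconditioning damps the large gradient values.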
\begin{example} For $\phi \in \Gamma_0(\mathbb{R}^n)$ being a squared Euclidean norm defined as $\phi(x)=\frac12\|x\|_M^2:=\frac12\langle x, M x\rangle$ with $M$ symmetric positive definite we have $\nabla \phi^*(x)=M^{-1}x$. Thus the algorithm becomes a scaled version of the Euclidean proximal gradient method: \begin{align} y^k &= x^k - \lambda M^{-1}\nabla f(x^k) \\ x^{k+1} &= \argmin_{x \in \mathbb{R}^n} ~ \frac{1}{2\lambda}\|x - y^k\|_M^2 + g(x), \end{align} where the backward-step is a scaled Euclidean proximal mapping. \end{example} More generally, the backward-step amounts to the \emph{left} anisotropic proximal mapping of $g$ at $y^k$, which is formally defined as follows: \begin{definition}[left and right anisotropic proximal mapping and Moreau envelope] Let $\lambda >0 $ and $x \in \mathbb{R}^n$. Then the right anisotropic proximal mapping of $g$ with parameter $\lambda$ at $x$ is defined as: \begin{align} \aprox[\lambda]{\phi}{g}(x) := \argmin_{y \in \mathbb{R}^n} ~\lambda \star \phi(x - y) + g(y), \end{align} and the right anisotropic Moreau envelope is the infimal convolution of $g$ and $\lambda \star \phi$: \begin{align} (g \infconv \lambda \star \phi)(x) = \inf_{y \in \mathbb{R}^n}~\lambda \star \phi(x - y) + g(y). \end{align} The left anisotropic proximal mapping and Moreau envelope of $g$ with parameter $\lambda$ are defined as \begin{align} y \mapsto \argmin_{x \in \mathbb{R}^n} ~\lambda\star \phi(x - y) + g(x) \quad \text{and} \quad y \mapsto \inf_{x \in \mathbb{R}^n} ~\lambda\star \phi(x - y) + g(x).
\end{align} The reflection $\phi_-$ of $\phi$ allows us to express the left anisotropic proximal mapping and Moreau envelope in terms of the right anisotropic proximal mapping and Moreau envelope aka infimal convolution: \begin{align} \argmin_{x \in \mathbb{R}^n} ~\lambda \star \phi(x-y) + g(x) &= \argmin_{x \in \mathbb{R}^n}~ \lambda \star \phi_-(y-x) + g(x) = \aprox[\lambda]{\phi_-}{g} (y), \\ \inf_{x \in \mathbb{R}^n} ~\lambda \star \phi(x-y) + g(x) &= \inf_{x \in \mathbb{R}^n}~ \lambda \star \phi_-(y-x) + g(x) =(g \infconv \lambda \star \phi_-)(y). \end{align} If $\phi(x)=\frac{1}{2}\|x\|^2$, the left and right anisotropic proximal mapping and Moreau envelope both coincide with the Euclidean proximal mapping and Moreau envelope. \end{definition} The local continuity of the anisotropic proximal mapping and the continuous differentiability of the anisotropic Moreau envelope were studied in \cite{laude-wu-cremers-aistats-19,laude2021lower} under (anisotropic) prox-regularity, however using the pointwise scaling in place of the epi-scaling. For both scalings a sufficient condition for the non-emptiness of the proximal mapping and in particular the continuity of the anisotropic Moreau envelope aka infimal convolution $g \infconv \lambda \star \phi_-$ is level-boundedness of $\xi(x,y):=g(x) + \lambda \star \phi(x-y)$ in $x$ locally uniformly in $y$ \cite[Theorem 1.17]{RoWe98}. \begin{remark}[threshold of anisotropic prox-boundedness] \label{rem:level_boundedness} Without loss of generality we can assume that $\phi(0)=0$ since a replacement of $\phi$ with the perturbed reference function $\phi(x)-\phi(0)$ affects neither the descent inequality \cref{def:adescent} nor the updates in the algorithm.
Then, thanks to \cite[Lemma 4.4]{burke2013epi}, $\lambda_1 \star \phi(x-y) \leq \lambda_2 \star \phi(x-y)$ for $\lambda_2 < \lambda_1$ and thus uniform level-boundedness of $\xi(x,y; \lambda):=g(x) + \lambda \star \phi(x-y)$ for $\lambda=\lambda_0$ implies uniform level-boundedness of $\xi(x,y; \lambda)$ for any $\lambda < \lambda_0$. This leads us to define the threshold of anisotropic prox-boundedness: \begin{align} \lambda_g:= \sup \{\lambda >0 \mid \text{$\xi(x,y;\lambda)$ is level-bounded in $x$ locally uniformly in $y$} \}. \end{align} Note that $\lambda_g=+\infty$ if $g$ (or $\phi$) is bounded from below and $\phi$ (or $g$) is coercive. \end{remark} We make the following additional assumption on $\lambda_g$ and $\lambda$ to ensure well-definedness of the proximal mapping and continuity of the anisotropic Moreau envelope: \begin{assumenum}[resume*] \item \label{assum:a5} Assume that $\lambda_g >0$ and choose $\lambda>0$ such that $\lambda < \lambda_g$ and $\lambda \leq 1/L$. \end{assumenum} More specifically, if $g$ is convex, continuity of the anisotropic proximal mapping and Moreau envelope is implied by the validity of a \emph{constraint qualification} (CQ): \begin{assumenum}[resume*] \item \label{assum:a5'} $g$ is convex and $\emptyset \neq \dom g^* \cap \intr (\dom \phi_-^*)$ and $\lambda \leq 1/L$. \end{assumenum} Henceforth, whenever $g$ is assumed to be convex, we impose \cref{assum:a5'} in place of \cref{assum:a5}. In fact, \cref{assum:a5'} implies that $\lambda_g = +\infty$ as will be shown in \cref{rem:cq}. Note that for $g\equiv 0$ the CQ becomes $0 \in \intr \dom \phi^*$ and the algorithm reduces to $x^{k+1}= x^k - \lambda \nabla \phi^*(\nabla f(x^k)) + \lambda \nabla \phi^*(0)$. This is equivalent to \emph{dual space preconditioning} for gradient descent \cite{maddison2021dual}, where $\phi$ is replaced with $\phi- \langle \nabla \phi(0), \cdot \rangle$.
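As a consistency check on these definitions (again a sketch, not code from the paper): for $\phi=\frac{1}{2}\|\cdot\|^2$ the anisotropic proximal mapping coincides with the Euclidean one, so for $g=|\cdot|$ a brute-force minimization of the backward-step objective should reproduce soft-thresholding:

```python
import numpy as np

# For phi = 0.5*x^2 the (left) anisotropic proximal mapping reduces to the
# Euclidean proximal mapping; for g = |.| that is soft-thresholding.
lam, y = 0.5, 1.3

# Brute-force the backward step argmin_x lam*phi((x - y)/lam) + g(x);
# with the epi-scaling, (lam * phi)(u) = lam*phi(u/lam) = u^2/(2*lam).
xs = np.linspace(-3.0, 3.0, 2_000_001)
objective = (xs - y) ** 2 / (2.0 * lam) + np.abs(xs)
x_star = xs[np.argmin(objective)]

soft = np.sign(y) * max(abs(y) - lam, 0.0)  # closed-form Euclidean prox of |.|
print(abs(x_star - soft) < 1e-5)  # True
```

The same brute-force template can be used to probe the backward-step for non-Euclidean reference functions where no closed form is available.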
For $g$ convex under \cref{assum:a5'}, thanks to \cite[Theorem 3.1(ii)]{combettes2013moreau} and \cite[Proposition 2.16]{wang2021bregman}, originally due to \cite[Theorem 3.2]{Teboulle92}, we have the following generalization of Moreau's decomposition. This allows us to compute the anisotropic proximal mapping through known closed-form expressions of the Bregman proximal mapping. In addition, the anisotropic Moreau envelope is $\mathcal{C}^1(\mathbb{R}^n)$ and the anisotropic proximal mapping is continuous, as summarized in the following lemma: \begin{lemma}[Bregman--Moreau decomposition and continuity of the anisotropic proximal mapping] \label{thm:moreau_decomposition} Let $h\in \Gamma_0(\mathbb{R}^n)$. Assume that $\emptyset \neq \dom h^* \cap \intr \dom \phi^*$. Define by $\bprox[\lambda]{\phi^*}{h^*}:=\argmin_{y \in \mathbb{R}^n} h^*(y) + \lambda D_{\phi^*}(y, \cdot)$ the left Bregman proximal mapping of $h^*$ with parameter $\lambda^{-1}$ and reference function $\phi^*$. Then the following statements hold: \begin{lemenum} \item \label{thm:moreau_decomposition:decomp} $\id = \aprox[\lambda]{\phi}{h} + \lambda \nabla \phi^* \circ \bprox[\lambda]{\phi^*}{h^*}\circ \nabla \phi \circ \lambda^{-1} \id$; \item \label{thm:moreau_decomposition:smooth} $h \infconv \lambda \star \phi \in \mathcal{C}^1(\mathbb{R}^n)$ is convex with $ \nabla (h \infconv \lambda \star \phi) = \nabla \phi \circ \lambda^{-1}(\id - \aprox[\lambda]{\phi}{h}) $; \item \label{thm:moreau_decomposition:bprox} $\bprox[\lambda]{\phi^*}{h^*} \circ \nabla(\lambda \star \phi) =(\partial h^* + \lambda \nabla \phi^*)^{-1} = \nabla (h \infconv \lambda \star \phi)$; \item \label{thm:moreau_decomposition:aprox} $\aprox[\lambda]{\phi}{h} = (\id + \lambda \nabla \phi^* \circ \partial h)^{-1}$.
\end{lemenum} \end{lemma} \begin{proof} Thanks to \cite[Proposition 6.19(vii)]{BaCo110} for $L=\id :\mathbb{R}^n \to \mathbb{R}^n$ the constraint qualification implies that \begin{align} \label{eq:cq_reformulate} 0 \in \relint (\dom \phi^* - \dom h^*). \end{align} ``\labelcref{thm:moreau_decomposition:decomp}'': Let $x \in \dom \phi=\mathbb{R}^n$. Invoking \cite[Theorem 3.1(ii)]{combettes2013moreau} we obtain $$ x = \aprox{\lambda \star \phi}{h}(x) + \nabla (\lambda \star \phi)^*\big(\bprox{(\lambda \star \phi)^*}{h^*}(\nabla (\lambda \star \phi)(x)) \big). $$ Note that $(\lambda \star \phi)^* = \lambda \phi^*$, $\nabla (\lambda \star \phi) = \nabla \phi(\lambda^{-1}(\cdot))$ and $\nabla (\lambda \star \phi)^* = \lambda \nabla \phi^*$. Thus we have $$ x = \aprox[\lambda]{\phi}{h}(x) + \lambda \nabla \phi^*\big(\bprox[\lambda]{\phi^*}{h^*}(\nabla \phi(\lambda^{-1} x)) \big), $$ as claimed. ``\labelcref{thm:moreau_decomposition:smooth}'': By the constraint qualification \cref{eq:cq_reformulate} in view of \cite[Proposition 15.7]{BaCo110} we have that $h \infconv \lambda \star \phi \in \Gamma_0(\mathbb{R}^n)$ with $\dom (h \infconv \lambda \star \phi) = \dom h + \lambda \dom \phi= \dom h + \mathbb{R}^n =\mathbb{R}^n$ and the infimal convolution is exact. Invoking \cite[Theorem 15.3]{BaCo110} we have that $(h^* + \lambda \phi^*)^* = h \infconv \lambda \star \phi$. By (i) $\aprox{\lambda \star \phi}{h}$ is single-valued and thanks to \cite[Proposition 18.7]{BaCo110} we have that $h \infconv \lambda \star \phi$ is differentiable with $\nabla (h \infconv \lambda \star \phi)(x) = \nabla( \lambda \star \phi)(x - \aprox{\lambda \star \phi}{h}(x)) = \nabla \phi(\lambda^{-1}(x - \aprox{\lambda \star \phi}{h}(x)))$ for any $x \in \mathbb{R}^n$. By \cite[Corollary 25.5.1]{Roc70}, since $h \infconv \lambda \star \phi$ is finite-valued and convex the gradient map $\nabla (h \infconv \lambda \star \phi) :\mathbb{R}^n \to \mathbb{R}^n$ is actually continuous. 
``\labelcref{thm:moreau_decomposition:bprox}'': We have for any $x \in \mathbb{R}^n$ that \begin{align*} \bprox[\lambda]{\phi^*}{h^*}(\nabla \phi(\lambda^{-1} x)) &= \nabla \phi\big(\lambda^{-1}(x - \aprox[\lambda]{\phi}{h}(x)) \big) =\nabla (h \infconv \lambda \star \phi)(x)\\ &= (\partial (h^* + \lambda \phi^*))^{-1}(x) = (\partial h^* + \lambda \nabla \phi^*)^{-1}(x), \end{align*} where the first identity follows by \labelcref{thm:moreau_decomposition:decomp}, the second by \labelcref{thm:moreau_decomposition:smooth}, the third by $h \infconv \lambda \star \phi=(h^* + \lambda \phi^*)^*$ and convex conjugacy and the last by the constraint qualification and the sum-rule \cite[Corollary 10.9]{RoWe98} for the subdifferential: $\partial (h^* + \lambda \phi^*) = \partial h^* + \lambda \nabla \phi^*$. ``\labelcref{thm:moreau_decomposition:aprox}'': By Fermat's rule \cite[Theorem 10.1]{RoWe98} for the convex case $y \in \aprox[\lambda]{\phi}{h}(x) = \argmin_{y \in \mathbb{R}^n} \lambda \star \phi(x - y) + h(y)$ is equivalent to $\nabla \phi(\lambda^{-1}(x-y)) \in \partial h(y)$. Equivalently this means $y \in (\id + \lambda \nabla \phi^* \circ \partial h)^{-1}(x)$. \ifxfalsetrue \qed \fi \end{proof} \begin{remark} \label{rem:cq} \Cref{assum:a5'} for convex $g$ implies that $\xi(x,y)$ is level-bounded in $x$ locally uniformly in $y$ for any $\lambda >0$ and thus in particular $\lambda_g=+\infty$, i.e., \cref{assum:a5} holds true for any $\lambda \leq 1/L$. This can be proved as follows: Let $\lambda >0$. We rewrite \begin{align*} \xi(x,y)&=g(x) + \lambda \star \phi(x-y) = (g+ \langle \nabla \phi(0), \cdot\rangle)(x) + \lambda \star (\phi - \langle \nabla \phi(0), \cdot\rangle)(x-y) - \langle \nabla \phi(0),y \rangle \\ &=: g_0(x) + \lambda \star \phi_0(x- y) - \langle \nabla \phi(0),y \rangle, \end{align*} which replaces $\phi$ with $\phi_0:=\phi -\langle \nabla \phi(0),\cdot \rangle$ and $g$ with $g_0:=g+ \langle \nabla \phi(0), \cdot\rangle$. Thus we have $\nabla \phi_0(0) = 0$.
Since $\ran \nabla \phi_0=\intr(\dom \phi_0^*)$ this implies that $0 \in \intr(\dom \phi_0^*)$ and in view of \cite[Theorem 11.8(c)]{RoWe98} $\phi_0$ is level-bounded. Assume that $\xi(x,y)$ is not level-bounded in $x$ locally uniformly in $y$. This means that there exist $\bar y \in \mathbb{R}^n$ and $\alpha >0$ as well as sequences $\|x^\nu\| \to \infty$ and $y^\nu \to \bar y$ such that \begin{align} \label{eq:remark_level_bounded} g_0(x^\nu) + \lambda \star \phi_0(x^\nu-y^\nu) - \langle \nabla \phi(0),y^\nu \rangle=\xi(x^\nu, y^\nu) \leq \alpha. \end{align} Let $\bar v \in \dom g^* \cap \intr \dom\phi_-^*$ which exists by assumption. Since $g_0^*=g^*(\cdot - \nabla \phi(0))$, $\phi_0^* = \phi^*(\cdot + \nabla \phi(0))$ and $(\phi_0)_-^*=\phi_-^*(\cdot -\nabla \phi(0))$ this implies that $\bar v_0 :=\bar v + \nabla \phi(0) \in \dom g^*_0 \cap \intr \dom(\phi_0)_-^*$. Since $\intr \dom (\phi_0)_-^*$ is open we have $\bar v_0 /\tau \in \intr \dom (\phi_0)_-^*$ for $\tau < 1$ sufficiently close to $1$, implying that $\bar v_0 \in \intr \dom (\tau \star (\phi_0)_-^*) \cap \dom g_0^*$. Then we can invoke \cref{thm:moreau_decomposition:smooth} for $\tau (\phi_0)_-$ and $g_0$ and obtain that $g_0 \infconv \lambda \star (\tau (\phi_0)_-)$ is continuous. Thus there is a local uniform lower bound $\beta\in \mathbb{R}$ such that $\beta \leq (g_0 \infconv \lambda \star (\tau (\phi_0)_-))(y^\nu)= \inf_{x\in \mathbb{R}^n} g_0(x) + \tau (\lambda \star \phi_0)(x-y^\nu) \leq g_0(x^\nu) + \tau (\lambda \star \phi_0)(x^\nu-y^\nu)$ for $\nu$ sufficiently large. Combining this with \cref{eq:remark_level_bounded} we obtain $$ (1-\tau) (\lambda \star \phi_0)(x^\nu-y^\nu) - \langle\nabla \phi(0),y^\nu \rangle \leq \alpha -\beta. $$ Since $\phi_0$ is level-bounded, passing to $\nu \to \infty$ we obtain that $\infty \leq \alpha -\beta$, a contradiction.
\end{remark} \section{Anisotropic smoothness and generalized concavity} \label{sec:phi_convexity} In this section we recapitulate the existing notions of $\Phi$-convexity and $\Phi$-conjugacy \cite{moreau1970inf}, which are used heavily as tools in the remainder of the manuscript. Classically, these notions appear in the context of eliminating duality gaps in nonconvex and nonsmooth optimization or optimal transport theory; see, e.g., \cite{rockafellar1974augmented,penot1990strongly,Vil08,bauermeister2021lifting}. Following \cite{penot1990strongly,laude2021conjugate}, we will interpret the anisotropic descent inequality in terms of $\Phi$-concavity, where the infimal convolution operation and its inverse, the infimal deconvolution \cite{hiriart1986formulations}, form the underlying pair of $\Phi$-conjugate transforms. The $\Phi$-concavity interpretation of anisotropic smoothness will be used heavily in the development of its calculus and the generation of practical examples. In addition, this allows us to provide an equivalent interpretation of the algorithm in terms of a difference-of-$\Phi$-convex approach. \begin{definition}[$\Phi$-convex and $\Phi$-concave functions] Let $X$ and $Y$ be nonempty sets and $\Phi: X \times Y \to \mathbb{R}$ a real-valued coupling. Let $h_X : X \to \overline{\mathbb{R}}$ and $h_Y: Y \to \overline{\mathbb{R}}$. We say that $h_X$ is $\Phi$-convex on $X$ if there is an index set $\mathcal{I}$ and parameters $(y_i, \beta_i) \in Y \times \overline{\mathbb{R}}$ for $i \in \mathcal{I}$ such that \begin{align} h_X(x) = \sup_{i \in \mathcal{I}} \Phi(x, y_i) - \beta_i\quad \forall x \in X. \end{align} If $\mathcal{I} =\emptyset$ we say that $h_X \equiv -\infty$ is $\Phi$-convex on $X$ by definition.
We say that $h_Y$ is $\Phi$-convex on $Y$ if there is an index set $\mathcal{J}$ and parameters $(x_j, \alpha_j) \in X \times \overline{\mathbb{R}}$ for $j \in \mathcal{J}$ such that \begin{align} h_Y(y) = \sup_{j \in \mathcal{J}} \Phi(x_j, y) - \alpha_j \quad \forall y \in Y. \end{align} If $\mathcal{J} =\emptyset$ we say that $h_Y \equiv -\infty$ is $\Phi$-convex on $Y$ by definition. We say that $h_X$ or $h_Y$ is $\Phi$-concave if $-h_X$ or $-h_Y$ is $\Phi$-convex. \end{definition} If $X=Y$ and $\Phi$ is not symmetric, we shall refer to $h_X$ as left $\Phi$-convex and $h_Y$ as right $\Phi$-convex. We shall also introduce the notion of a $\Phi$-conjugate and a $\Phi$-biconjugate function: \begin{definition}[$\Phi$-conjugate functions] Let $X$ and $Y$ be nonempty sets and $\Phi: X \times Y \to \mathbb{R}$ a real-valued coupling. Let $h_X: X \to \overline{\mathbb{R}}$. Then we define \begin{align} h_X^\Phi(y)=\sup_{x \in X} \Phi(x, y) - h_X(x), \end{align} as the $\Phi$-conjugate of $h_X$ on $Y$ and \begin{align} h_X^{\Phi\Phi}(x)=\sup_{y \in Y} \Phi(x, y) - h_X^\Phi(y), \end{align} as the $\Phi$-biconjugate back on $X$. The definitions of $h_Y^\Phi$ and $h_Y^{\Phi\Phi}$ for $h_Y:Y \to \overline{\mathbb{R}}$ are parallel. \end{definition} If $X=Y$ and $\Phi$ is not symmetric, we shall refer to $h_X^\Phi$ as the left $\Phi$-conjugate and $h_Y^\Phi$ as the right $\Phi$-conjugate. From the definition it is clear that $h_X^\Phi$ is $\Phi$-convex on $Y$ and $h_X^{\Phi\Phi}$ is $\Phi$-convex back on $X$. The following statement is standard in the literature, see, e.g., \cite{balder1977extension,dolecki1978convexity} and references therein: \begin{proposition} \label{thm:phi_envelope} Let $X$ and $Y$ be nonempty sets and $\Phi: X \times Y \to \mathbb{R}$ a real-valued coupling and $h_X:X\to \overline{\mathbb{R}}$.
Then we have \begin{propenum} \item \label{thm:phi_envelope:phi_convex} $h_X^\Phi$ is $\Phi$-convex on $Y$ and $h_X^{\Phi\Phi}$ is $\Phi$-convex on $X$; \item \label{thm:phi_envelope:fenchel_young} $h_X(x) + h_X^\Phi(y) \geq \Phi(x, y) \quad \forall x \in X, \quad \forall y \in Y$; \item \label{thm:phi_envelope:phi_biconj} $h_X(x) \geq h_X^{\Phi\Phi}(x) \quad \forall x \in X$. \end{propenum} In addition, $h_X^{\Phi\Phi}$ is the pointwise largest $\Phi$-convex function below $h_X$. In particular, this means that $h_X$ is $\Phi$-convex on $X$ if and only if $h_X(x) = h_X^{\Phi\Phi}(x)$ for all $x \in X$. The statements for $h_Y:Y \to \overline{\mathbb{R}}$ are parallel. \end{proposition} We shall also introduce the notion of a $\Phi$-subgradient. \begin{definition}[$\Phi$-subgradients] Let $X$ and $Y$ be nonempty sets and $\Phi: X \times Y \to \mathbb{R}$ a real-valued coupling. Let $h_X: X \to \overline{\mathbb{R}}$. Then we say that $y \in Y$ is a $\Phi$-subgradient of $h_X$ at $\bar x \in X$ if \begin{align} h_X(x) \geq h_X(\bar x) + \Phi(x, y) - \Phi(\bar x, y) \quad \forall x \in X, \end{align} or in other words $\bar x \in \argmax_{x \in X} \Phi(x, y) -h_X(x)$. We call the set $\partial_\Phi h_X(\bar x)$ of all such $y \in Y$ the $\Phi$-subdifferential of $h_X$ at $\bar x$. The definitions for $h_Y: Y \to \overline{\mathbb{R}}$ and $\partial_\Phi h_Y(\bar y)$ for some $\bar y \in Y$ are parallel. \end{definition} The following equivalent statements are standard in the literature, see, e.g., \cite{balder1977extension,dolecki1978convexity} and references therein: \begin{proposition} \label{thm:phi_subgradients} Let $X$ and $Y$ be nonempty sets and $\Phi: X \times Y \to \mathbb{R}$ a real-valued coupling. Let $h_X: X \to \overline{\mathbb{R}}$.
Then for any $\bar x \in X$ and $\bar y \in Y$ the following statements are equivalent: \begin{propenum} \item $\bar y \in \partial_\Phi h_X(\bar x)$; \item $h_X(\bar x) + h_X^\Phi(\bar y) = \Phi(\bar x, \bar y)$; \item $\bar x \in \argmin_{x \in X} h_X(x) - \Phi(x, \bar y)$; \end{propenum} where any of the above equivalent statements implies that $h_X(\bar x) = h_X^{\Phi\Phi}(\bar x)$ and $\bar x \in \partial_\Phi h_X^\Phi(\bar y)$. If, in addition, $h_X$ is $\Phi$-convex, any of the above statements is equivalent to $\bar x \in \partial_\Phi h_X^\Phi(\bar y)$. The statements for $h_Y:Y \to \overline{\mathbb{R}}$ are parallel. \end{proposition} For the remainder of this section we choose $X=Y=\mathbb{R}^n$ and the coupling \begin{align} \Phi(x,y):=-\lambda \star \phi(x-y), \end{align} for $\lambda :=L^{-1}$. The anisotropic descent inequality can be understood in terms of a $\Phi$-subgradient inequality due to the following correspondence between $\Phi$-subgradients and classical gradients. This adapts \cite{laude2021conjugate} to reference functions $\phi$ which are not necessarily super-coercive: \begin{proposition}[correspondence between $\Phi$-subgradients and gradients] \label{thm:phi_subgradient_gradient_correspondence} Let $h \in \mathcal{C}^1(\mathbb{R}^n)$ and let $L>0$. Define $\Phi(x,y):=-\lambda\star \phi(x-y)$ for $\lambda:=L^{-1}$. Assume that $\ran \nabla h \subseteq \ran \nabla \phi$. Then the following statements are equivalent: \begin{propenum} \item \label{thm:phi_subgradient_gradient_correspondence:asmooth} $h$ satisfies the anisotropic descent inequality relative to $\phi$ with constant $L$; \item \label{thm:phi_subgradient_gradient_correspondence:phi_subgrad} $\partial_\Phi (-h)= \id - \lambda\nabla \phi^* \circ \nabla h$. \end{propenum} In particular, any of the above equivalent statements implies that $\partial_\Phi (-h)$ is single-valued. 
\end{proposition} \begin{proof} Without loss of generality assume that $L=1=\lambda$ by redefining $\phi$ as $\lambda \star \phi$. ``\labelcref{thm:phi_subgradient_gradient_correspondence:asmooth} $\Rightarrow$ \labelcref{thm:phi_subgradient_gradient_correspondence:phi_subgrad}'': Let $\bar x \in \mathbb{R}^n$. Then we have that for any $x$ $$ -h(x) \geq -\phi(x - \bar x +\nabla \phi^*(\nabla h(\bar x))) + \phi(\nabla \phi^*(\nabla h(\bar x))) -h(\bar x), $$ which means by definition of the $\Phi$-subdifferential that $\bar x - \nabla \phi^*(\nabla h(\bar x)) \in \partial_\Phi (-h)(\bar x)$ for $\Phi(x,y):=-\phi(x-y)$. Now take $\bar y \in \partial_\Phi (-h)(\bar x)$. This means in view of \cref{thm:phi_subgradients} that $\bar x \in \argmin_{x \in \mathbb{R}^n} \phi(x - \bar y) - h(x)$ and thus by Fermat's rule $0 = \nabla \phi(\bar x - \bar y) - \nabla h(\bar x)$. This means that $\bar y = \bar x - \nabla \phi^*(\nabla h(\bar x))$. ``\labelcref{thm:phi_subgradient_gradient_correspondence:phi_subgrad} $\Rightarrow$ \labelcref{thm:phi_subgradient_gradient_correspondence:asmooth}'': Assume that $\partial_\Phi (-h)= \id - \nabla \phi^* \circ \nabla h$. Let $\bar x \in \mathbb{R}^n$. Then $\bar x - \nabla \phi^*(\nabla h(\bar x)) \in \partial_\Phi (-h)(\bar x)$. By definition of the $\Phi$-subdifferential this means the anisotropic descent inequality holds at $\bar x$. \ifxfalsetrue \qed \fi \end{proof} In particular, the above result implies that the forward-step \cref{eq:forward} in \cref{alg:aproxgrad} is the unique $\Phi$-subgradient of $-f$ at $x^k$. Next we prove that anisotropic smoothness is equivalent to $\Phi$-concavity of $h$, which means that $h$ has an infimal convolution expression. In the convex case we also show a conjugate duality between anisotropic smoothness and relative strong convexity in the Bregman sense, refining \cite[Lemma 4.2]{wang2021bregman}.
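Before turning to the formal statement, the claimed representation can be probed numerically: a function built as an infimal convolution with $\phi$ should be reproduced exactly by deconvolution ($\Phi$-conjugation of $-h$) followed by reconvolution. A small grid-based sketch in one dimension, with the illustrative (not canonical) choices $\phi(t)=\cosh(t)-1$ and kernel $\xi(y)=|y|$:

```python
import numpy as np

# Grid check of the Phi-biconjugate identity for Phi(x, y) = -phi(x - y):
# build h = xi (infconv) phi, form the deconvolution
# g(y) = sup_x h(x) - phi(x - y), and reconvolve with phi.  For a function
# that IS an infimal convolution with phi, reconvolution recovers h.
# Assumptions (illustration only): phi(t) = cosh(t) - 1, xi(y) = |y|.

X = np.linspace(-3.0, 3.0, 601)           # shared grid for x and y
D = np.cosh(X[:, None] - X[None, :]) - 1  # D[i, j] = phi(x_i - y_j)

h = np.min(np.abs(X)[None, :] + D, axis=1)   # h(x_i)  = min_j xi(y_j) + phi(x_i - y_j)
g = np.max(h[:, None] - D, axis=0)           # g(y_j)  = max_i h(x_i) - phi(x_i - y_j)
h2 = np.min(g[None, :] + D, axis=1)          # h2(x_i) = min_j g(y_j) + phi(x_i - y_j)

center = np.abs(X) <= 1.0                    # avoid grid-truncation effects
err = np.max(np.abs(h2 - h)[center])         # discretization error only
```

On the grid one always has $h2 \geq h$ by construction, and away from the boundary the two agree up to discretization error, consistent with exact biconjugacy in the continuum.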
In the smooth case relative strong convexity was considered in \cite{lu2018relatively,bauschke2019linear}. To this end we adapt \cite{laude2021conjugate} to reference functions $\phi$ which are not necessarily super-coercive. \begin{proposition}[infimal convolution representation of anisotropic smoothness] \label{thm:phi_convex_asmooth} Let $h:\mathbb{R}^n \to \overline{\mathbb{R}}$ be proper lsc and $L>0$. Define $\Phi(x,y):=-\lambda\star \phi(x-y)$ for $\lambda:=L^{-1}$. Then the following are equivalent: \begin{propenum} \item \label{thm:phi_convex_asmooth:asmooth} $h$ has the anisotropic descent property relative to $\phi$ with constant $L$; \item \label{thm:phi_convex_asmooth:infconv} $h\in \mathcal{C}^1(\mathbb{R}^n)$ with $\ran \nabla h \subseteq \ran \nabla \phi$ and $-h$ is $\Phi$-convex, i.e., $h = -\xi^\Phi = \xi \infconv L^{-1}\star \phi$ for some proper $\xi:\mathbb{R}^n \to \overline{\mathbb{R}}$. \end{propenum} Any of the above statements implies that the infimal convolution in the second statement can be taken to be exact for some lsc $\xi:\mathbb{R}^n \to \overline{\mathbb{R}}$ and that the infimum in the definition of $\Phi$-convexity is attained. If, in addition, $h$ is convex, the assumption $h\in \mathcal{C}^1(\mathbb{R}^n)$ is superfluous in the second statement, $\xi$ can be taken to be convex lsc and the above statements are equivalent to $$ h^* = \psi + L^{-1}\phi^*, $$ for some $\psi \in \Gamma_0(\mathbb{R}^n)$ with $\dom \psi \cap \intr \dom\phi^* \neq \emptyset$, i.e., $h^*$ is strongly convex relative to $\phi^*$ with constant $L^{-1}$ in the Bregman sense. \end{proposition} \begin{proof} Without loss of generality we assume that $L=1=\lambda$ by replacing $\phi$ with $\lambda \star \phi$. ``\labelcref{thm:phi_convex_asmooth:asmooth} $\Rightarrow$ \labelcref{thm:phi_convex_asmooth:infconv}'': By definition $h \in \mathcal{C}^1(\mathbb{R}^n)$. Let $\bar x \in \mathbb{R}^n$.
In view of \cref{thm:phi_subgradient_gradient_correspondence} we have that $\bar x - \nabla \phi^*(\nabla h(\bar x)) \in \partial_\Phi (-h)(\bar x)$ for $\Phi(x,y):=-\phi(x-y)$. Invoking \cref{thm:phi_subgradients} this means that $(-h)(\bar x) = (-h)^{\Phi\Phi}(\bar x)$. Since $\bar x \in \mathbb{R}^n$ was arbitrary we know by \cref{thm:phi_envelope} that $-h$ is $\Phi$-convex. By definition of the $\Phi$-biconjugate this means that $h=-(-h)^{\Phi\Phi} = \xi \infconv \phi$ for $\xi:=(-h)^\Phi$. ``\labelcref{thm:phi_convex_asmooth:infconv} $\Rightarrow$ \labelcref{thm:phi_convex_asmooth:asmooth}'': Suppose that $h \in \mathcal{C}^1(\mathbb{R}^n)$ and $h=\xi \infconv \phi$ for some $\xi:\mathbb{R}^n \to \overline{\mathbb{R}}$. Let $\bar x \in \mathbb{R}^n$. Since $h \in \mathcal{C}^1(\mathbb{R}^n)$ it is in particular finite-valued and we have $h(\bar x)=\inf_{y \in \mathbb{R}^n} \xi(y) + \phi(\bar x - y) \in \mathbb{R}$. This means that for any $\varepsilon > 0$ there exists $\bar y_\varepsilon$ such that \begin{align} \label{eq:eps_inf} h(\bar x) \leq \xi(\bar y_\varepsilon) + \phi(\bar x - \bar y_\varepsilon) \leq h(\bar x) + \varepsilon, \end{align} while for any $x \in \mathbb{R}^n$ we have that $h(x) \leq \xi(\bar y_\varepsilon) + \phi(x - \bar y_\varepsilon)$. Combining the two inequalities and eliminating $\xi(\bar y_\varepsilon)$ we obtain that for any $x \in \mathbb{R}^n$: \begin{align} \label{eq:ineq_ekeland} \phi(x - \bar y_\varepsilon)-h(x) + \varepsilon \geq \phi(\bar x - \bar y_\varepsilon) -h(\bar x), \end{align} showing that $\bar x \in \epsargmin \{\phi(\cdot - \bar y_\varepsilon)-h \}$.
Ekeland’s variational principle with $\delta=\sqrt{\varepsilon}$, see \cite[Proposition 1.43]{RoWe98}, yields the existence of a point $\bar x_\varepsilon \in \overline{\mathrm{B}}(\bar x; \sqrt{\varepsilon})$ with $\phi(\bar x_\varepsilon - \bar y_\varepsilon)-h(\bar x_\varepsilon) \leq \phi(\bar x - \bar y_\varepsilon) -h(\bar x)$ and $\bar x_\varepsilon \in \argmin \{\phi(\cdot - \bar y_\varepsilon)-h + \sqrt{\varepsilon}\|\cdot - \bar x_\varepsilon\|\}$. By Fermat's rule \cite[Theorem 10.1]{RoWe98} and the fact that $\partial \|\cdot\|(0)=\overline{\mathrm{B}}(0; 1)$ we have $0 \in \nabla \phi(\bar x_\varepsilon - \bar y_\varepsilon) -\nabla h(\bar x_\varepsilon) + \overline{\mathrm{B}}(0; \sqrt{\varepsilon})$ and thus \begin{align} \label{eq:u_eps} \nabla h(\bar x_\varepsilon) - \nabla \phi(\bar x_\varepsilon - \bar y_\varepsilon) =: u_\varepsilon \in \overline{\mathrm{B}}(0; \sqrt{\varepsilon}). \end{align} Since $\ran \nabla \phi = \intr(\dom \phi^*)$ we have that $\nabla h(\bar x_\varepsilon) -u_\varepsilon = \nabla \phi(\bar x_\varepsilon - \bar y_\varepsilon) \in \intr(\dom \phi^*)$. Passing to $\varepsilon \to 0$, by continuity, $\intr(\dom \phi^*)\ni \nabla h(\bar x_\varepsilon) -u_\varepsilon \to \nabla h(\bar x) \in \intr(\dom \phi^*)$, where the inclusion follows by the constraint qualification $\ran \nabla h \subseteq \ran \nabla \phi = \intr(\dom \phi^*)$. Since $\nabla\phi^*$ is continuous relative to $\intr(\dom \phi^*)$ we have by rewriting~\cref{eq:u_eps} $$ \bar y_\varepsilon =\bar x_\varepsilon - \nabla \phi^*(\nabla h(\bar x_\varepsilon) -u_\varepsilon) \to \bar x - \nabla \phi^*(\nabla h(\bar x)) =: \bar y, $$ for $\varepsilon \to 0$. Thus, passing to $\varepsilon \to 0$ in \cref{eq:ineq_ekeland} we obtain again by continuity that $$ \phi(x - \bar x +\nabla \phi^*(\nabla h(\bar x)))-h(x) \geq \phi(\nabla \phi^*(\nabla h(\bar x))) -h(\bar x), $$ which is the anisotropic descent inequality. 
Thanks to the previous part of the proof the function $\xi$ in the expression $h = \xi \infconv \phi$ can be taken as $\xi=(-h)^\Phi$ which is proper lsc as it is a pointwise supremum over continuous functions. Recall that $\bar y_{\varepsilon} \to \bar x - \nabla \phi^*(\nabla h(\bar x)) = \bar y$ for $\varepsilon \to 0$. Then we have: $$ \inf_{y \in \mathbb{R}^n} \xi(y) + \phi(\bar x - y) \leq \xi(\bar y) + \phi(\bar x - \bar y) \leq \lim_{\varepsilon \searrow 0} \xi(\bar y_{\varepsilon}) + \phi(\bar x - \bar y_{\varepsilon}) =h(\bar x) = \inf_{y \in \mathbb{R}^n} \xi(y) + \phi(\bar x - y), $$ where the second inequality holds since $\xi$ is proper lsc and the second last equality holds thanks to \cref{eq:eps_inf}. This completes the reverse direction of the proof. Finally, assume that $h \in \Gamma_0(\mathbb{R}^n)$ and $h = \xi \infconv \phi$ for some $\xi :\mathbb{R}^n \to \overline{\mathbb{R}}$. Then by definition of the convex conjugate we have that $h^*=\xi^* + \phi^*$ and $\xi^* \in \Gamma_0(\mathbb{R}^n)$. By assumption we have $\dom \partial h^* =\ran \partial h \subseteq \ran \nabla \phi = \intr \dom \phi^*$ implying that $\emptyset \neq \dom h^* \cap \intr \dom \phi^*$. Since $\dom h^* = \dom \xi^* \cap \dom \phi^*$ we have $\emptyset \neq (\dom \xi^* \cap \dom \phi^*) \cap \intr \dom \phi^* = \dom \xi^* \cap \intr \dom \phi^*$. To show the opposite direction let $h^* = \psi + \phi^*$ for some $\psi \in \Gamma_0(\mathbb{R}^n)$ such that $\dom \psi \cap \intr \dom\phi^* \neq \emptyset$. Then we can invoke \cite[Proposition 6.19(vii)]{BaCo110} and \cite[Theorem 15.3]{BaCo110} to obtain that $h=h^{**} =(\psi + \phi^*)^*= \psi^* \infconv \phi \in \Gamma_0(\mathbb{R}^n)$. By \cref{thm:moreau_decomposition:smooth} $\psi^* \infconv \phi \in \mathcal{C}^1(\mathbb{R}^n)$ with $\nabla (\psi^* \infconv \phi) = \nabla \phi \circ (\id - \aprox{\phi}{\psi^*})$. This implies that $\ran \partial (\psi^* \infconv \phi) \subseteq \ran \nabla \phi$. 
This verifies all claims in \labelcref{thm:phi_convex_asmooth:infconv} and concludes the proof. \ifxfalsetrue \qed \fi \end{proof} As shown above, anisotropic smoothness means that $-h$ is $\Phi$-convex and equivalently $h=-(-h)^{\Phi\Phi}=(-h)^{\Phi}\infconv \lambda \star \phi$ is an infimal convolution. This leads us to study the corresponding conjugate transform \begin{align} (-h)^{\Phi} =-\big((-h)\infconv\lambda \star \phi_-\big)= \sup_{u \in \mathbb{R}^n} h(\cdot + u) - \lambda \star \phi(u)=:h \infdeconv \lambda \star \phi, \end{align} which is also called the \emph{infimal deconvolution} of $h$ and $\phi$ due to \cite{hiriart1986formulations}. For $h \in \Gamma_0(\mathbb{R}^n)$ we have the following convexity and uniqueness property of the deconvolution of $h$ and $\phi$. \begin{proposition}[convexity of infimal deconvolution] \label{thm:convexity_deconvolution} Let $h \in \Gamma_0(\mathbb{R}^n)$ such that $h = \xi \infconv \lambda \star \phi$ for some $\xi:\mathbb{R}^n \to \overline{\mathbb{R}}$. Then we have $h \infdeconv \lambda \star \phi= (h^* \mathbin{\dot{-}} \lambda \phi^*)^* \in \Gamma_0(\mathbb{R}^n)$. If, in addition, $\phi$ is super-coercive and $\xi \in \Gamma_0(\mathbb{R}^n)$, then $\xi=h \infdeconv \lambda \star \phi$, i.e., $h \infdeconv \lambda \star \phi$ is the unique function $\xi \in \Gamma_0(\mathbb{R}^n)$ for which $h = \xi \infconv \lambda \star \phi$. \end{proposition} \begin{proof} Without loss of generality assume that $\lambda = 1$ by replacing $\phi$ with $\lambda \star \phi$. Let $h\in \Gamma_0(\mathbb{R}^n)$ with $h=\xi\infconv \phi$ for some $\xi:\mathbb{R}^n \to \overline{\mathbb{R}}$. By definition this means that $-h$ is $\Phi$-convex relative to $\Phi(x,y)=-\phi(x-y)$. Invoking \cref{thm:phi_envelope} this means that $h=-(-h)^{\Phi\Phi} =(-h)^\Phi \infconv \phi$.
We have, thanks to \cite[Theorem 2.2]{hiriart1986general} due to Hiriart-Urruty, $$ (-h)^\Phi = h \infdeconv \phi = (h^* \mathbin{\dot{-}} \phi^*)^* > -\infty, $$ where the last inequality follows since $-h=(-h)^{\Phi\Phi}$ and $\Phi$ are finite-valued. Since $(-h)^{\Phi\Phi} >-\infty$ we also have that $(-h)^\Phi \not\equiv +\infty$ and thus $(-h)^\Phi$ is proper. Since $(-h)^\Phi=(h^* \mathbin{\dot{-}} \phi^*)^*$ we have that $(-h)^\Phi \in \Gamma_0(\mathbb{R}^n)$. Now assume that, in addition, $\phi$ is super-coercive and $\xi \in \Gamma_0(\mathbb{R}^n)$. Then $\dom \phi^* = \mathbb{R}^n$ and by taking convex conjugates on either side of the equations $h = \xi \infconv \phi=(-h)^\Phi \infconv \phi$ we obtain by definition of the infimal convolution that $h^{*}=\xi^* + \phi^* = ((-h)^\Phi)^* + \phi^*$. This implies that for all $x \in \mathbb{R}^n$ we have $ h^{*}(x) - \phi^*(x)=\xi^*(x) =((-h)^\Phi)^*(x), $ and thus, since $\xi, (-h)^\Phi \in \Gamma_0(\mathbb{R}^n)$, we have by taking convex conjugates again: $ (h^* - \phi^*)^* = \xi = (-h)^\Phi. $ \ifxfalsetrue \qed \fi \end{proof} The proof of the second part of the above result alternatively follows from \cite[Theorem 3.34]{RoWe98}. \section{Calculus for anisotropic smoothness and examples} \label{sec:calculus_asmooth} \subsection{Basic calculus} As discussed in the previous section, anisotropic smoothness is equivalent to the existence of an infimal convolution expression. In this section, we shall study operations that preserve this property. To begin with, we observe that, like its Euclidean counterpart, anisotropic smoothness is separable: \begin{proposition}[separability of anisotropic smoothness] \label{prop:separable} Let $h_1,h_2 \in\mathcal{C}^1(\mathbb{R}^n)$ have the anisotropic descent property relative to $\phi_1, \phi_2$ with constants $L_1,L_2$. Then $h(x_1,x_2):=h_1(x_1)+h_2(x_2)$ has the anisotropic descent property relative to $\phi(x_1,x_2):=L_1^{-1} \star \phi_1(x_1) + L_2^{-1} \star \phi_2(x_2)$ with constant $1$.
\end{proposition} \begin{proof} The result follows by summing the individual descent inequalities. \ifxfalsetrue \qed \fi \end{proof} We have the following elementary property: \begin{proposition}[pointwise scaling and tilting] Let $h \in\mathcal{C}^1(\mathbb{R}^n)$ have the anisotropic descent property relative to $\phi$ with constant $L$ and let $c \in \mathbb{R}^n$ and $\alpha >0$. Then $\alpha h+\langle c,\cdot \rangle$ has the anisotropic descent property relative to $\alpha \phi+\langle c,\cdot \rangle$ with constant $L$. \end{proposition} \begin{proof} This follows by definition of the anisotropic descent property, noting that $(\alpha \phi+\langle c,\cdot \rangle)^*= (\alpha \star \phi^*)(\cdot - c)$ and $\nabla \big[(\alpha \star \phi^*)(\cdot - c)\big] = \nabla \phi^*(\alpha^{-1}(\cdot - c))$. \ifxfalsetrue \qed \fi \end{proof} Based on the $\Phi$-concavity interpretation of anisotropic smoothness we can prove the following elementary transitivity property of anisotropic smoothness: \begin{proposition}[transitivity of anisotropic smoothness] \label{thm:transitivity} Let $h \in \mathcal{C}^1(\mathbb{R}^n)$ and $\phi_1,\phi_2 \in \Gamma_0(\mathbb{R}^n)$ be Legendre type with full domain. Assume that $h$ is anisotropically smooth relative to $\phi_1$ and let $\phi_1$ be anisotropically smooth relative to $\phi_2$. Then $h$ has the anisotropic descent property relative to $\phi_2$. \end{proposition} \begin{proof} Thanks to \cref{thm:phi_convex_asmooth} anisotropic smoothness of $h$ relative to $\phi_1$ implies that $h = \xi \infconv \phi_1$ for some $\xi:\mathbb{R}^n \to \overline{\mathbb{R}}$ proper lsc such that $\ran \nabla h \subseteq \ran \nabla \phi_1$. Anisotropic smoothness of $\phi_1$ relative to $\phi_2$ implies that $\phi_1 = \psi \infconv \phi_2$ for some $\psi:\mathbb{R}^n \to \overline{\mathbb{R}}$ proper lsc such that $\ran \nabla \phi_1 \subseteq \ran \nabla \phi_2$.
Thus we have $h = \xi \infconv (\psi \infconv \phi_2) = (\xi \infconv \psi) \infconv \phi_2$ and $\ran \nabla h \subseteq \ran \nabla \phi_1 \subseteq \ran \nabla \phi_2$. Again invoking \cref{thm:phi_convex_asmooth} $h$ is anisotropically smooth relative to $\phi_2$. \ifxfalsetrue \qed \fi \end{proof} We show that anisotropic smoothness with constant $L_1$ is preserved for any $L_2>L_1$. \begin{proposition}[epi-scaling preserves anisotropic smoothness] \label{thm:episcaling_asmoothness} Let $h \in \mathcal{C}^1(\mathbb{R}^n)$ have the anisotropic descent property with constant $L_1>0$ relative to $\phi$. Then $h$ has the anisotropic descent property relative to $\phi$ for any $L_2>L_1$, and in particular we have the following monotonicity property for any $\bar x, x \in \mathbb{R}^n$: \begin{align*} h(x) &\leq h(\bar x) + \tfrac{1}{L_1} \star \phi(x-\bar x + {L_1}^{-1}\nabla\phi^*(\nabla h(\bar x))) - \tfrac{1}{{L_1}} \star \phi({L_1}^{-1}\nabla\phi^*(\nabla h(\bar x))) \\ &\leq h(\bar x) + \tfrac{1}{L_2} \star \phi(x-\bar x + {L_2}^{-1}\nabla\phi^*(\nabla h(\bar x))) - \tfrac{1}{{L_2}} \star \phi({L_2}^{-1}\nabla\phi^*(\nabla h(\bar x))). \end{align*} \end{proposition} \begin{proof} Let $L_2>L_1 >0$ and assume that $h$ is anisotropically smooth relative to $\phi$ with constant $L_1$. Let $\bar x, x \in \mathbb{R}^n$. Define $\bar y:=\bar x - L_1^{-1}\nabla\phi^*(\nabla h(\bar x))$ and $\xi_{\bar x}:=L_1^{-1} \star \phi(\cdot - \bar y)$. Then we have that $\xi_{\bar x}^* = L_1^{-1}\phi^* + \langle \cdot, \bar y \rangle = \psi_{\bar x} + L_2^{-1} \phi^*$ for $\psi_{\bar x} :=(L_1^{-1} -L_2^{-1}) \phi^*+ \langle \cdot, \bar y \rangle \in \Gamma_0(\mathbb{R}^n)$. Invoking \cref{thm:phi_convex_asmooth} we have that $\xi_{\bar x}$ is anisotropically smooth relative to $L_2^{-1} \star \phi$. By definition of $\bar y$ we have that $\nabla \xi_{\bar x}(\bar x) =\nabla \phi(L_1(\bar x - \bar y))= \nabla h(\bar x)$. 
Thus the descent inequality for $\xi_{\bar x}$ reads \begin{align} \xi_{\bar x}(x) &\leq \xi_{\bar x}(\bar x) + L_2^{-1} \star \phi(x - \bar x + L_2^{-1}\nabla \phi^*(\nabla \xi_{\bar x}(\bar x))) - L_2^{-1} \star \phi(L_2^{-1}\nabla \phi^*(\nabla \xi_{\bar x}(\bar x))) \notag\\ &= \xi_{\bar x}(\bar x) + L_2^{-1} \star \phi(x - \bar x + L_2^{-1}\nabla \phi^*(\nabla h(\bar x))) - L_2^{-1} \star \phi(L_2^{-1}\nabla \phi^*(\nabla h(\bar x))) \label{eq:descent_xibar_L2}. \end{align} Then we have \begin{align*} h(x) &\leq h(\bar x) + \tfrac{1}{L_1} \star \phi(x-\bar y) - \tfrac{1}{{L_1}} \star \phi(\bar x - \bar y) = h(\bar x) + \xi_{\bar x}(x) - \xi_{\bar x}(\bar x) \\ &\leq h(\bar x) + \tfrac{1}{L_2} \star \phi(x-\bar x + {L_2}^{-1}\nabla\phi^*(\nabla h(\bar x))) - \tfrac{1}{{L_2}} \star \phi({L_2}^{-1}\nabla\phi^*(\nabla h(\bar x))), \end{align*} where the first inequality follows from the definition of $\bar y$ and anisotropic smoothness of $h$ relative to $\phi$ with constant $L_1$, and the last inequality follows by \cref{eq:descent_xibar_L2}. In particular this shows that $h$ has the anisotropic descent property relative to $\phi$ with constant $L_2$. \ifxfalsetrue \qed \else \qedhere \fi \end{proof} \subsection{Pointwise addition under joint convexity of dual Bregman distance} Clearly, anisotropic smoothness is closed under epi-addition; see \cite[Corollary 4.9]{laude2021conjugate}. In contrast, the pointwise sum of anisotropically smooth functions is in general not anisotropically smooth, see \cite[Example 4.10]{laude2021conjugate}, unless the dual Bregman divergence is jointly convex \cite[Theorem 5.1(ii)]{wang2021bregman}. In this section we will generalize this to nonconvex infimal convolutions and consider a refinement for the exponential reference function showing closedness under pointwise conic combinations. This is false in general for Euclidean Lipschitz smoothness with the same Lipschitz constant.
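The joint-convexity hypothesis can be made concrete for the exponential reference just mentioned: in one dimension with $\phi=\exp$ one has $\phi^*(y)=y\log y - y$ for $y>0$, so $D_{\phi^*}$ is the generalized Kullback--Leibler divergence, whose joint convexity is classical. A quick numerical midpoint check (illustrative only):

```python
import numpy as np

# Midpoint joint-convexity check for the dual Bregman distance of the
# (illustrative) exponential reference phi = exp in one dimension:
#   phi^*(y) = y*log(y) - y  on  y > 0,
#   D(u, v)  = u*log(u/v) - u + v   (generalized Kullback-Leibler).
# Joint convexity requires D((u1+u2)/2, (v1+v2)/2) <= (D(u1,v1) + D(u2,v2))/2.

def kl(u, v):
    return u * np.log(u / v) - u + v

rng = np.random.default_rng(0)
u1, v1, u2, v2 = rng.uniform(0.1, 5.0, size=(4, 1000))
lhs = kl(0.5 * (u1 + u2), 0.5 * (v1 + v2))
rhs = 0.5 * (kl(u1, v1) + kl(u2, v2))
violations = int(np.sum(lhs > rhs + 1e-12))
```

Every random sample satisfies the midpoint inequality, in line with the joint convexity of the generalized Kullback--Leibler divergence.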
The following result shows closedness under pointwise average if the dual Bregman distance is jointly convex: \begin{lemma}[closedness of anisotropic smoothness under pointwise average] \label{thm:closed_addition} Let $h_1,h_2 \in \mathcal{C}^1(\mathbb{R}^n)$ be anisotropically smooth relative to $\phi$ with constants $L_1,L_2$ respectively and assume that $D_{\phi^*}$ is jointly convex. Let $\alpha \in [0,1]$. Then $\alpha h_1 + (1-\alpha) h_2$ is anisotropically smooth relative to $\phi$ with constant $L:=\max\{L_1, L_2\}$. \end{lemma} \begin{proof} Let $L:=\max\{L_1, L_2\}$ and define $\Phi(x,y):=-L^{-1} \star \phi(x-y)$. Thanks to \cref{thm:episcaling_asmoothness} $h_1,h_2$ are anisotropically smooth relative to $\phi$ with constant $L$. Invoking \cref{thm:phi_convex_asmooth} we have $h_1(x) = \inf_{y \in \mathbb{R}^n} L^{-1} \star \phi(x - y) + (-h_1)^\Phi(y)$ and $h_2(x) = \inf_{u \in \mathbb{R}^n} L^{-1} \star \phi(x - u) + (-h_2)^\Phi(u)$. Without loss of generality assume that $L=1$. Let $\alpha \in [0,1]$ and $x\in \mathbb{R}^n$. Then we have \begin{align} (\alpha h_1 + (1-\alpha) h_2)(x) &= \inf_{y \in \mathbb{R}^n} \alpha \phi(x - y) + \alpha (-h_1)^\Phi(y) + \inf_{u \in \mathbb{R}^n} (1-\alpha)\phi(x - u) + (1-\alpha)(-h_2)^\Phi(u) \notag \\ &= \inf_{y \in \mathbb{R}^n}\inf_{u \in \mathbb{R}^n} \alpha \phi(x - y) + (1-\alpha) \phi(x - u) + \alpha (-h_1)^\Phi(y) + (1-\alpha)(-h_2)^\Phi(u) \label{eq:inf_conv_expr_average_0}. \end{align} Let $y,u \in \mathbb{R}^n$. Assuming that $\alpha \phi(\cdot - y) + (1-\alpha) \phi(\cdot - u)=:\xi(\cdot; y, u)$ is anisotropically smooth relative to $\phi$ (which is verified below) we have by \cref{thm:phi_convex_asmooth} that \begin{align} \alpha \phi(x - y) + (1-\alpha) \phi(x - u) = \inf_{v \in \mathbb{R}^n} \phi(x - v) + (-\xi(\cdot; y, u))^\Phi(v).
\label{eq:inf_conv_expr_average} \end{align} Define $\zeta(v) := \inf_{y \in \mathbb{R}^n}\inf_{u \in \mathbb{R}^n} (-\xi(\cdot; y, u))^\Phi(v) + \alpha (-h_1)^\Phi(y) + (1-\alpha)(-h_2)^\Phi(u)$. Substituting \cref{eq:inf_conv_expr_average} and the expression for $\zeta$ into \cref{eq:inf_conv_expr_average_0}, an interchange of the order of minimization yields: \begin{align*} (\alpha h_1 + (1-\alpha) h_2)(x) &= \inf_{y \in \mathbb{R}^n}\inf_{u \in \mathbb{R}^n} \alpha \phi(x - y) + (1-\alpha) \phi(x - u) + \alpha (-h_1)^\Phi(y) + (1-\alpha)(-h_2)^\Phi(u) \\ &=\inf_{y \in \mathbb{R}^n}\inf_{u \in \mathbb{R}^n} \inf_{v \in \mathbb{R}^n} \phi(x - v) + (-\xi(\cdot; y, u))^\Phi(v) + \alpha (-h_1)^\Phi(y) + (1-\alpha)(-h_2)^\Phi(u) \\ &=\inf_{v \in \mathbb{R}^n} \phi(x - v) + \inf_{y \in \mathbb{R}^n}\inf_{u \in \mathbb{R}^n} (-\xi(\cdot; y, u))^\Phi(v) + \alpha (-h_1)^\Phi(y) + (1-\alpha)(-h_2)^\Phi(u) \\ &= \inf_{v \in \mathbb{R}^n} \phi(x - v) + \zeta(v). \end{align*} By convexity of $\ran \nabla \phi= \intr\dom \phi^*$ we have that $\ran \nabla (\alpha h_1 + (1-\alpha) h_2) \subseteq \alpha \ran \nabla h_1 + (1-\alpha) \ran \nabla h_2 \subseteq \ran \nabla \phi$. In view of \cref{thm:phi_convex_asmooth} $\alpha h_1 + (1-\alpha) h_2$ is anisotropically smooth relative to $\phi$. It remains to show that $\alpha \phi(\cdot - y) + (1-\alpha) \phi(\cdot - u)$ is anisotropically smooth. We have that $\phi(\cdot - y) = (\langle \cdot, y \rangle + \phi^*)^*$ and $\phi(\cdot - u) = (\langle \cdot, u \rangle + \phi^*)^*$. Thus we can invoke \cite[Lemma 4.2]{wang2021bregman} and obtain that $\nabla \phi(\cdot - y)$ and $\nabla \phi(\cdot - u)$ are both $\nabla \phi^*$-firmly nonexpansive on $\mathbb{R}^n$ in the sense of \cite[Definition 4.1]{wang2021bregman}.
Since $D_{\phi^*}$ is jointly convex, in view of \cite[Remark 4.6]{wang2021bregman} we can invoke \cite[Lemma 4.4]{wang2021bregman} and obtain that $\nabla (\alpha \phi(\cdot - y) + (1-\alpha) \phi(\cdot - u)) = \alpha \nabla \phi(\cdot - y)+ (1-\alpha) \nabla \phi(\cdot - u)$ is $\nabla \phi^*$-firmly nonexpansive on $\mathbb{R}^n$ as well. By convexity of $\dom \phi^*$ we have that \begin{align*} \dom (\alpha \phi(\cdot - y) + (1-\alpha) \phi(\cdot - u))^* &= \dom \alpha \star (\langle \cdot, y \rangle + \phi^*) + \dom (1-\alpha) \star (\langle \cdot, u \rangle + \phi^*) \\ &= \alpha \dom \phi^* + (1-\alpha) \dom \phi^* = \dom \phi^*. \end{align*} Thus we can again invoke \cite[Lemma 4.2]{wang2021bregman} to obtain that $\alpha \phi(\cdot - y) + (1-\alpha) \phi(\cdot - u) = \psi \infconv \phi$ for some $\psi \in \Gamma_0(\mathbb{R}^n)$ and $\ran \nabla (\alpha \phi(\cdot - y) + (1-\alpha) \phi(\cdot - u)) \subseteq \ran \nabla \phi$. Again thanks to \cref{thm:phi_convex_asmooth} $\alpha \phi(\cdot - y) + (1-\alpha) \phi(\cdot - u)$ is anisotropically smooth relative to $\phi$ as claimed. \ifxfalsetrue \qed \fi \end{proof} \begin{remark} \label{rem:recursive} The above result generalizes directly to any finite weighted sum $h := \sum_{i=1}^m \alpha_i h_i$ for weights $\alpha_i \geq 0$ with $\sum_{i=1}^m \alpha_i =1$: This follows by applying the previous result recursively noting that $\sum_{i=1}^{m} \alpha_i h_i = (1 - \alpha_m) \sum_{i=1}^{m-1} \alpha_i / (1 - \alpha_m) h_i + \alpha_m h_m$. \end{remark} In the convex case and in particular the subsequent examples we often verify anisotropic smoothness via its dual characterization in terms of relative strong convexity in the Bregman sense; see \cref{thm:phi_convex_asmooth}. 
For that reason we shall prove the following lemma adopting the proof strategy of \cite[Lemma 4.2]{wang2021bregman}: \begin{lemma} \label{lem:characterization_strong_convex} Let $h \in \Gamma_0(\mathbb{R}^n)$ and assume that $\dom h \subseteq \dom \phi^*$ with $\relint \dom h \cap \intr \dom \phi^* \neq \emptyset$. Then the following are equivalent: \begin{lemenum} \item \label{lem:characterization_strong_convex:strcvx} $h = \psi + \phi^*$ for some $\psi \in \Gamma_0(\mathbb{R}^n)$; \item \label{lem:characterization_strong_convex:interior} $h \mathbin{\dot{-}} \phi^*$ is convex on $\relint \dom h$, \end{lemenum} where the first item implies that $\dom \psi \cap \intr \dom \phi^* \neq \emptyset$ and $\psi$ can be taken as $\cl \xi$ with $\xi(x) := h(x) - \phi^*(x) \in\mathbb{R}$ if $x \in \relint \dom h$ and $+\infty$ otherwise. \end{lemma} \begin{proof} ``\labelcref{lem:characterization_strong_convex:strcvx} $\Rightarrow$ \labelcref{lem:characterization_strong_convex:interior}'': Let $h = \psi + \phi^*$ with $\psi \in \Gamma_0(\mathbb{R}^n)$. Then $\psi \equiv h \mathbin{\dot{-}} \phi^*$ is finite-valued and convex on $\relint \dom h \subseteq \dom h \subseteq \dom \phi^*$. ``\labelcref{lem:characterization_strong_convex:interior} $\Rightarrow$ \labelcref{lem:characterization_strong_convex:strcvx}'': Define $\xi(x) := h(x) - \phi^*(x) \in\mathbb{R}$ if $x \in \relint \dom h$ and $+\infty$ otherwise. Then $\dom \xi = \relint \dom h$. Since $\xi$ is proper and convex, by \cite[Theorem 7.4]{Roc70} $\cl \xi \in \Gamma_0(\mathbb{R}^n)$ with $\xi \equiv \cl \xi$ on $\relint \dom h$. Next we prove that $h \equiv \cl \xi + \phi^*$ on $\cl \dom h$: For every $x \in \relint \dom h \subseteq \dom \phi^*$ we have $(\cl \xi)(x) + \phi^*(x) = \xi(x) +\phi^*(x)=h(x)$. Now take $x_0 \in \relint \dom h\cap \intr \dom \phi^*$, which exists by assumption, and any $x \in \cl\dom h$.
Then we have by the line segment principle \cite[Theorem 6.1]{Roc70} that $(1-\tau) x_0 + \tau x \in \relint \dom h$ for every $\tau \in [0,1)$. In view of \cite[Corollary 7.4.1]{Roc70} $\relint(\dom \cl \xi) = \relint \dom \xi = \relint \dom h$. Therefore $x_0 \in \relint(\dom \cl \xi) \cap \intr \dom \phi^*=\relint (\dom \cl \xi \cap \dom \phi^*) = \relint(\dom (\cl \xi + \phi^*))$ by \cite[Theorem 6.5]{Roc70} and $\cl \xi + \phi^* \in \Gamma_0(\mathbb{R}^n)$ by \cite[Theorem 9.3]{Roc70}. Then we can invoke \cite[Theorem 7.5]{Roc70} and obtain \begin{align*} (\cl \xi)(x)+\phi^*(x)&=\lim_{\tau \nearrow 1} (\cl \xi)((1-\tau) x_0 + \tau x) + \phi^*((1-\tau) x_0 + \tau x) = \lim_{\tau \nearrow 1} h((1-\tau) x_0 + \tau x) = h(x), \end{align*} where the second equality holds since $(1-\tau) x_0 + \tau x \in \relint \dom h$ and $\cl \xi + \phi^* \equiv h$ on $\relint \dom h$, and the last equality again by \cite[Theorem 7.5]{Roc70} since $h \in \Gamma_0(\mathbb{R}^n)$ and $x_0 \in \relint \dom h$. This implies that $\cl \xi + \phi^* \equiv h$ on $\cl \dom h$. By \cite[Theorem 7.4]{Roc70} $\cl \xi$ differs from $\xi$ at most at relative boundary points of $\dom \xi = \relint \dom h$, i.e., at points $x \in \cl \dom \xi \setminus \relint \dom \xi = \cl \dom h \setminus \relint \dom h$. This implies that $\cl \xi \equiv \xi \equiv +\infty$ on $\mathbb{R}^n \setminus \cl \dom h$ and thus $\cl \xi + \phi^* \equiv h \equiv +\infty$ on $\mathbb{R}^n \setminus \cl \dom h$. Altogether this implies that $\psi + \phi^* = h$ for $\psi = \cl \xi$. This completes the reverse direction of the proof. Finally, since $h=\psi + \phi^*$ we have by assumption that $\emptyset \neq \dom h \cap \intr \dom \phi^*= (\dom \psi \cap \dom \phi^*) \cap \intr \dom \phi^* = \dom \psi \cap \intr \dom \phi^*$. 
\ifxfalsetrue \qed \fi \end{proof} The next example shows that the loss function in logistic regression is anisotropically smooth relative to a softmax approximation of the absolute value function: \begin{example} \label{ex:logistic} Let $f(x)=\frac{1}{m} \sum_{i=1}^m \ell(\langle a_i, x \rangle)$ with $a_i \in [-1, 1]^n$ and $\ell(t)=\log(1 + \exp(t))$. Then $f$ is anisotropically smooth relative to the symmetrized logistic loss $\phi(x)=\sum_{j=1}^n h(x_j)$ for $h(t)=2\log(1+ \exp(t)) - t$ with constant $L = \max_{1\leq i \leq m} \|a_i\|^2$. Note that for $L\to \infty$ $\frac1L\star h$ converges pointwise to the absolute value function. \end{example} \begin{proof} To show the claimed result we first show that $f_i:=\ell(\langle a_i,\cdot \rangle)$ is anisotropically smooth relative to $\phi$ with constant $L$. The claimed result then follows by verifying joint convexity of $D_{\phi^*}$ and invoking \cref{thm:closed_addition}. We have $h^*(t)=(t + 1)\log((t + 1)/2) + (1-t)\log((1-t)/2)$ with $\dom h^*=[-1, 1]$, with the convention $0 \log 0 = 0$. To prove the claimed result we invoke \cref{thm:phi_convex_asmooth} and verify that $(\ell \circ a_i^\top)^*= \psi + \frac{1}{L}\phi^*$ for some $\psi \in \Gamma_0(\mathbb{R}^n)$ and $\dom \psi \cap \intr \dom \phi^* \neq \emptyset$: Owing to \cite[Theorem 11.23(b)]{RoWe98} the conjugate $(\ell \circ a_i^\top)^*$ amounts to the infimal postcomposition $a_i \ell^*$ of $\ell^*(t) = t\log t + (1-t)\log(1-t)$ by $a_i$ with $\dom \ell^*=[0,1]$ and is given as follows: \begin{align*} (a_i \ell^*)(x)= \inf \{\ell^*(t) \mid t \in \mathbb{R} : a_i t = x\} = \begin{cases} \ell^*(t) & \text{if $a_i t = x$ for some $t\in [0, 1]$,} \\ + \infty & \text{otherwise}. \end{cases} \end{align*} Then we have that $\dom a_i \ell^* = \{a_i t \mid t \in [0, 1]\} \subseteq \dom \phi^*$ and $\tfrac{1}{2} a_i \in \intr \dom \phi^* \cap \relint (\dom a_i \ell^*)$.
Thus we can invoke \cref{lem:characterization_strong_convex} and verify convexity of $a_i \ell^* \mathbin{\dot{-}} \frac{1}{L} \phi^*$ on the relative interior of $\dom a_i \ell^*$: Equivalently this means that the one-dimensional function $\ell^* \mathbin{\dot{-}} \frac{1}{L} \phi^* \circ a_i$ is convex on $(0, 1)$. Since $(h^*)''(t)=2/(1-t^2)$ and $(\ell^*)''(t)=\frac{1}{t-t^2}>0$ on $(0,1)$ this is implied by $$ L\geq \frac{a_i^\top \nabla^2 \phi^*(a_i t) a_i}{(\ell^*)''(t)} = a_i^\top \diag\big(2(t-t^2)/(1-a_{ij}^2 t^2)\big)_{j=1}^n a_i =\sum_{j=1}^n \frac{2(t-t^2) a_{ij}^2}{1-a_{ij}^2 t^2}, $$ for all $t \in (0,1)$. Clearly, $\frac{2(t-t^2) a_{ij}^2}{1-a_{ij}^2 t^2} \leq \frac{2(t-t^2) a_{ij}^2}{1-t^2}$ and $\frac{2(t-t^2) a_{ij}^2}{1-t^2}$ is monotonically increasing on $[0,1)$. For $t \to 1$ the expression is indeterminate. Thus we apply L'Hospital's rule and obtain: $$ \lim_{t\to 1} \frac{2(t-t^2) a_{ij}^2}{1-t^2} = \lim_{t\to 1} \frac{2 a_{ij}^2 (1-2t)}{-2t}= \lim_{t\to 1} \frac{(2t-1) a_{ij}^2}{t} = a_{ij}^2, $$ and thus we have for $t \in (0,1)$ $$ \frac{(\phi^* \circ a_i)''(t) }{(\ell^*)''(t)} =\sum_{j=1}^n \frac{2(t-t^2) a_{ij}^2}{1-a_{ij}^2 t^2} \leq \sum_{j=1}^n \frac{2(t-t^2) a_{ij}^2}{1-t^2} \leq \sum_{j=1}^n a_{ij}^2= \|a_i\|^2 =:L_i. $$ As a consequence $f_i$ is anisotropically smooth relative to $\frac{1}{L_i} \star \phi$. Since $1/(h^*)''(t)=(1-t^2)/2$ is concave by \cite[Theorem 3.3(i)]{bauschke2001joint} $D_{\phi^*}$ is jointly convex. Then we can invoke \cref{thm:closed_addition} recursively as in \cref{rem:recursive} to show that $f=\frac{1}{m}\sum_{i=1}^m f_i$ is anisotropically smooth relative to $\frac{1}{L} \star \phi$ with $L = \max_{1\leq i \leq m} \|a_i\|^2$. \ifxfalsetrue \qed \fi \end{proof} Next we show an interesting refinement of the previous result for the exponential reference function $\phi=\Exp$ which reveals that the anisotropic descent property is closed under pointwise conic combinations.
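As a numerical sanity check on \cref{ex:logistic}, the anisotropic descent inequality can be evaluated on a grid in one dimension, where $f(z) = \log(1+e^{az})$, $\phi = h$, $\nabla\phi^*(y) = \log\frac{1+y}{1-y}$ and $L = a^2$. The sketch below (helper names are ours) computes the difference between the right- and left-hand sides of the inequality and asserts its nonnegativity; it is an illustration, not part of the formal proof:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def phi(u):
    # symmetrized logistic reference: phi(u) = 2*log(1 + e^u) - u
    return 2.0 * math.log1p(math.exp(u)) - u

def grad_phi_conj(y):
    # inverse of phi'(u) = 2*sigmoid(u) - 1, defined for |y| < 1
    return math.log((1.0 + y) / (1.0 - y))

def descent_gap(a, xbar, x):
    """RHS minus LHS of the anisotropic descent inequality for
    f(z) = log(1 + exp(a*z)) relative to phi with L = a**2 (|a| <= 1);
    nonnegative iff the inequality holds at (xbar, x)."""
    L = a * a
    f = lambda z: math.log1p(math.exp(a * z))
    u = grad_phi_conj(a * sigmoid(a * xbar))     # grad phi^* at f'(xbar)
    rhs = f(xbar) + (phi(L * (x - xbar) + u) - phi(u)) / L
    return rhs - f(x)

for a in (0.3, 0.8, 1.0, -0.6):
    for xbar in (-3.0, -1.0, 0.0, 2.0):
        for x in (-4.0, -1.5, 0.0, 1.0, 3.5):
            assert descent_gap(a, xbar, x) >= -1e-10
print("anisotropic descent inequality holds on the test grid")
```

Note that the bound is tight at $x = \bar x$, consistent with the gradient consistency built into the anisotropic descent property.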
Note that this is false in general for Euclidean Lipschitz smoothness with the same Lipschitz constant. \begin{lemma}[closedness under pointwise conic combinations for exponential smoothness] \label{thm:closed_addition_exp} Let $h_1,h_2 \in \mathcal{C}^1(\mathbb{R}^n)$ be anisotropically smooth relative to $\phi=\Exp$ with constants $L_1, L_2$. Let $\alpha,\beta\geq 0$ such that either $\alpha>0$ or $\beta>0$. Then $\alpha h_1 + \beta h_2$ is anisotropically smooth relative to $\phi$ with constant $L:=\max\{L_1,L_2\}$. \end{lemma} \begin{proof} First note that $h \equiv 0$ is not anisotropically smooth since $\dom h^* =\{0\}\not\subseteq \intr(\dom \phi^*)$; this is why the case $\alpha=\beta=0$ is excluded. Let $L:=\max\{L_1, L_2\}$ and define $\Phi(x,y):=-L^{-1} \star \Exp(x-y)$. Thanks to \cref{thm:episcaling_asmoothness} $h_1,h_2$ are anisotropically smooth relative to $\Exp$ with constant $L$. Thanks to \cref{thm:phi_convex_asmooth} we have $h_1(x) = \inf_{y \in \mathbb{R}^n} L^{-1}\star \phi(x - y) + (-h_1)^\Phi(y)$ and $h_2(x) = \inf_{u \in \mathbb{R}^n} L^{-1}\star \phi(x - u) + (-h_2)^\Phi(u)$. Without loss of generality assume that $L=1$. Let $x\in \mathbb{R}^n$. Then we have \begin{align} (\alpha h_1 + \beta h_2)(x) &= \inf_{y \in \mathbb{R}^n} \alpha \phi(x - y) + \alpha (-h_1)^\Phi(y) + \inf_{u \in \mathbb{R}^n} \beta \phi(x - u) + \beta (-h_2)^\Phi(u) \notag \\ &= \inf_{y \in \mathbb{R}^n}\inf_{u \in \mathbb{R}^n} \alpha \phi(x - y) + \beta \phi(x - u) + \alpha (-h_1)^\Phi(y) + \beta (-h_2)^\Phi(u) \label{eq:exp_smoothness_eq}. \end{align} Let $y,u \in \mathbb{R}^n$.
It holds that \begin{align*} \alpha \phi(x - y) + \beta \phi(x - u) &= \sum_{i=1}^n \alpha \exp(x_i - y_i) + \beta \exp(x_i - u_i) = \sum_{i=1}^n\exp(x_i)(\alpha \exp(-y_i) + \beta \exp(-u_i)) \\ &=\sum_{i=1}^n\exp(x_i + \log(\alpha \exp(-y_i) + \beta \exp(-u_i))) \\ &=\inf_{v \in \mathbb{R}^n} \sum_{i=1}^n\exp(x_i - v_i) + \xi(v; y, u) =\inf_{v \in \mathbb{R}^n} \phi(x-v) + \xi(v; y,u), \end{align*} where the third equality holds since either $\alpha>0$ or $\beta >0$ and thus $\alpha \exp(-y_i) + \beta \exp(-u_i) > 0$, and the second last equality involves the expression \begin{align} \xi(v; y,u) = \begin{cases} 0 &\text{if $v_i=-\log(\alpha \exp(-y_i) + \beta \exp(-u_i))$ for all $1 \leq i \leq n$}, \\ +\infty & \text{otherwise.} \end{cases} \end{align} Define $\zeta(v) := \inf_{y \in \mathbb{R}^n}\inf_{u \in \mathbb{R}^n} \xi(v; y,u) + \alpha (-h_1)^\Phi(y) + \beta (-h_2)^\Phi(u) = \inf \{ \alpha (-h_1)^\Phi(y) + \beta (-h_2)^\Phi(u) \mid y,u \in \mathbb{R}^n : -\log(\alpha \exp(-y_i) + \beta \exp(-u_i)) = v_i \text{ for all } 1 \leq i \leq n \}$. Substituting the expression for $\zeta$ into \cref{eq:exp_smoothness_eq}, an interchange of the order of minimization leads to: \begin{align*} (\alpha h_1 + \beta h_2)(x) &= \inf_{y \in \mathbb{R}^n}\inf_{u \in \mathbb{R}^n} \alpha \phi(x - y) + \beta \phi(x - u) + \alpha (-h_1)^\Phi(y) + \beta (-h_2)^\Phi(u) \\ &= \inf_{y \in \mathbb{R}^n}\inf_{u \in \mathbb{R}^n} \inf_{v \in \mathbb{R}^n} \phi(x-v) + \xi(v; y,u) + \alpha (-h_1)^\Phi(y) + \beta (-h_2)^\Phi(u) \\ &= \inf_{v \in \mathbb{R}^n}\phi(x-v) + \inf_{y \in \mathbb{R}^n}\inf_{u \in \mathbb{R}^n} \xi(v; y,u) + \alpha (-h_1)^\Phi(y) + \beta (-h_2)^\Phi(u) \\ &= \inf_{v \in \mathbb{R}^n} \phi(x - v) + \zeta(v). \end{align*} By assumption we have that $\ran \nabla h_1 \subseteq \mathbb{R}_{++}^n$ and $\ran \nabla h_2 \subseteq \mathbb{R}_{++}^n$ and since either $\alpha>0$ or $\beta >0$ we have $\ran (\alpha \nabla h_1 + \beta \nabla h_2) \subseteq \mathbb{R}_{++}^n = \ran \nabla \phi$.
Again thanks to \cref{thm:phi_convex_asmooth} $\alpha h_1 + \beta h_2$ is anisotropically smooth relative to $\phi$ as claimed. \ifxfalsetrue \qed \fi \end{proof} \begin{example} \label{ex:exp_smoothness} Let $f(x)=\sum_{i=1}^m \sigma \star \exp(\langle a_i, x \rangle - b_i)$ with $a_i \in \mathbb{R}_{++}^n$ and $\sigma >0$. Then $f$ is anisotropically smooth relative to $\phi(x)=\Exp(x)$ with constant $L = \max_{1 \leq i \leq m} \|a_i\|_1/\sigma$. \end{example} \begin{proof} We show that $f_i:=\sigma \star \exp(\langle a_i, \cdot \rangle - b_i)$ is anisotropically smooth relative to $\phi$ with constant $L$ and then invoke \cref{thm:closed_addition_exp}. To prove the claimed result we invoke \cref{thm:phi_convex_asmooth} and verify that $(\sigma \star \exp(\cdot - b_i) \circ a_i^\top)^* = \psi + \frac{1}{L}\phi^*$ for some $\psi \in \Gamma_0(\mathbb{R}^n)$ and $\emptyset \neq \dom \psi \cap \intr \dom\phi^*$: Owing to \cite[Theorem 11.23(b)]{RoWe98} the conjugate $(\sigma \star \exp(\cdot - b_i) \circ a_i^\top)^*$ amounts to the infimal postcomposition $a_i \ell$ of $\ell(t)=\sigma h(t) + b_i t$ by $a_i$, where $h(t)=t\log t - t$ denotes the Boltzmann--Shannon entropy, and is given as follows: \begin{align*} (a_i \ell)(x)= \inf \{\ell(t) \mid t \in \mathbb{R} : a_i t = x\} = \begin{cases} \ell(t) & \text{if $a_i t = x$ for some $t\in \mathbb{R}_{+}$,} \\ + \infty & \text{otherwise}. \end{cases} \end{align*} We have $\dom a_i \ell = \{a_i t \mid t \in \mathbb{R}_{+}\} \subseteq \dom H$ for $H:=\phi^*=\Exp^*$, i.e., $H(y)=\sum_{j=1}^n h(y_j)$, and since $a_i \in \mathbb{R}_{++}^n$ we have $\relint \dom (a_i \ell) \cap \intr \dom \phi^* \neq \emptyset$. Therefore we can invoke \cref{lem:characterization_strong_convex} to show convexity of $a_i \ell \mathbin{\dot{-}} \frac{1}{L} H$ on $\relint \dom (a_i \ell)$. Equivalently this means that the one-dimensional function $\ell \mathbin{\dot{-}} \frac{1}{L} H \circ a_i$ is convex on $\mathbb{R}_{++}$.
This is implied by $$ \ell''(t) - \frac{1}{L} a_i^\top \nabla^2 H(a_i t) a_i=\sigma h''(t) - \frac{1}{L} a_i^\top \nabla^2 H(a_i t) a_i = \frac{\sigma}{t} - \frac{1}{L} \|a_i\|_1\frac{1}{t} = \frac{1}{t}(\sigma -\|a_i\|_1/L) \geq 0, $$ for all $t > 0$. This is guaranteed by the choice of $L= \max_{1 \leq i \leq m} \|a_i\|_1/\sigma$. Applying \cref{thm:closed_addition_exp} recursively we obtain the claimed result. \ifxfalsetrue \qed \fi \end{proof} \begin{remark} The assumption $a_i \in \mathbb{R}_{++}^n$ can be relaxed to $a_i \in \mathbb{R}_{+}^n$ as long as $\sum_{i=1}^m a_i \in \mathbb{R}_{++}^n$. Identifying the vectors $a_i$ as the rows of a matrix $A\in \mathbb{R}_{+}^{m \times n}$ this means equivalently that each column of $A$ has at least one nonzero element. This is proved using a continuity argument: Choose $x, \bar x \in \mathbb{R}^n$. Let $\varepsilon > 0$ and denote by $ \mathds{1} \in \mathbb{R}^n$ the vector of all ones. Then it holds that $a_i + \varepsilon \mathds{1} \in \mathbb{R}_{++}^n$ and for $f_\varepsilon := \sum_{i=1}^m \sigma \star \exp(\langle a_i + \varepsilon \mathds{1}, \cdot \rangle - b_i)$ and $L_\varepsilon := \max_{1 \leq i \leq m} \|a_i + \varepsilon \mathds{1}\|_1/\sigma = \max_{1 \leq i \leq m} \|a_i\|_1/\sigma + \varepsilon n/\sigma$ we have thanks to the derivation above that $$ f_\varepsilon(x) \leq f_\varepsilon(\bar x) + \tfrac{1}{L_\varepsilon} \star \phi(x-\bar x + L_\varepsilon^{-1}\nabla\phi^*(\nabla f_\varepsilon(\bar x))) - \tfrac{1}{L_\varepsilon} \star \phi(L_\varepsilon^{-1}\nabla\phi^*(\nabla f_\varepsilon(\bar x))).
$$ By continuity we have for $\varepsilon \to 0$ that $L_\varepsilon \to L = \max_{1 \leq i \leq m} \|a_i\|_1/\sigma$ and $\mathbb{R}_{++}^n \ni \nabla f_\varepsilon(\bar x)=\sum_{i=1}^m (a_i + \varepsilon \mathds{1}) (\sigma \star \exp)(\langle a_i + \varepsilon \mathds{1}, \bar x \rangle - b_i) \to \nabla f(\bar x)=\sum_{i=1}^m a_i(\sigma \star \exp)(\langle a_i, \bar x \rangle - b_i) \in \mathbb{R}_{++}^n$ since by assumption $\sum_{i=1}^m a_i \in \mathbb{R}_{++}^n$. Passing to the limit $\varepsilon \to 0$ in the previous inequality, using that $\nabla\phi^*$ is continuous on $\mathbb{R}_{++}^n$, we deduce that the descent inequality holds at the chosen points $x, \bar x$. Since $x, \bar x \in \mathbb{R}^n$ are arbitrary we obtain the claimed result. \end{remark} \section{Analysis of the anisotropic proximal gradient method} \label{sec:aproxgrad_analysis} \subsection{Subsequential convergence and linesearch} Recall validity of \cref{assum:a1,assum:a2,assum:a3,assum:a4,assum:a5} and the definition of the threshold of anisotropic prox-boundedness $\lambda_g$, and fix $\lambda < \lambda_g$. The main goal of this section is to analyse the subsequential convergence of the anisotropic proximal gradient method. For that purpose we shall introduce the following regularized gap function as a measure of stationarity: \begin{align} \gap{\phi}{F}(\bar x, \lambda) := \frac{1}{\lambda}\big(F(\bar x) - F_{\lambda}(\bar x) \big), \end{align} for any $\lambda < \lambda_g$, where \begin{align} F_{\lambda}(\bar x) &:= \inf_{x \in \mathbb{R}^n} f(\bar x) + \lambda \star \phi(x-\bar x + \lambda \nabla\phi^*(\nabla f(\bar x))) - \lambda \star \phi(\lambda \nabla\phi^*(\nabla f(\bar x))) + g(x) \notag \\ &= f(\bar x) + (g \infconv \lambda \star \phi_-)(\bar x - \lambda \nabla\phi^*(\nabla f(\bar x))) - \lambda \star \phi(\lambda \nabla\phi^*(\nabla f(\bar x))), \end{align} is an anisotropic generalization of the forward-backward envelope \cite{patrinos2013proximal,stella2017forward}.
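For orientation, in the Euclidean case $\phi=\frac12\|\cdot\|^2$ one has $\nabla\phi^*=\operatorname{id}$ and $\lambda\star\phi = \frac{1}{2\lambda}\|\cdot\|^2$, so $F_\lambda$ reduces to the classical forward-backward envelope and the backward step to the proximal gradient step. The following sketch (our own notation; $f$ quadratic, $g=\|\cdot\|_1$, assumed data) numerically checks that the gap is nonnegative, that the per-iteration decrease $F(x^{k+1}) \leq F(x^k) - \lambda\,\gap{\phi}{F}(x^k,\lambda)$ holds, and that the gap vanishes along the iterates:

```python
import numpy as np

def soft(v, t):
    # proximal operator of t*||.||_1 (soft thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive definite (assumed data)
b = np.array([1.0, -2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad_f = lambda x: A @ x - b
g = lambda x: np.abs(x).sum()
F = lambda x: f(x) + g(x)

L = np.linalg.eigvalsh(A).max()          # Lipschitz constant of grad f
lam = 1.0 / L

def backward_step(x):
    # Euclidean forward-backward (proximal gradient) step
    return soft(x - lam * grad_f(x), lam)

def gap(x):
    # gap(x, lam) = (F(x) - F_lam(x)) / lam, with the infimum defining F_lam
    # attained at the backward step (Euclidean reference phi = ||.||^2/2)
    xp = backward_step(x)
    F_lam = (f(x) + np.sum((xp - x + lam * grad_f(x)) ** 2) / (2 * lam)
             - lam * np.sum(grad_f(x) ** 2) / 2 + g(xp))
    return (F(x) - F_lam) / lam

x = np.array([3.0, 3.0])
for _ in range(200):
    assert gap(x) >= -1e-12                      # gap is nonnegative
    xp = backward_step(x)
    assert F(xp) <= F(x) - lam * gap(x) + 1e-12  # sufficient decrease
    x = xp
assert gap(x) < 1e-10                            # vanishing gap at stationarity
print("forward-backward envelope checks passed at x =", x)
```

The same three properties are exactly what the subsequent lemmas establish for a general reference function $\phi$.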
The regularized gap function is a generalization of \cite[Equation 13]{karimi2016linear} to the anisotropic case and is closely related to regularized gap functions for solving variational inequalities \cite{larsson1994class}. The particular scaling $\frac{1}{\lambda}$ will be important subsequently for proving a linear convergence result under anisotropic proximal gradient dominance. Next we show the following key properties of $\gap{\phi}{F}(\bar x, \lambda)$ to justify its choice as a measure of stationarity: \begin{lemma} \label{thm:continuity_aprox_grad} For any $\lambda < \lambda_g$ the gap function $\gap{\phi}{F}(\cdot, \lambda) \geq 0$ is lsc and for any $x^\star \in \mathbb{R}^n$ with $\gap{\phi}{F}(x^\star, \lambda) = 0$ we have that $x^\star$ is a stationary point of $F$, i.e., $0 \in \nabla f(x^\star) + \partial g(x^\star)$. If, in addition, $g\equiv 0$, the gap simplifies to $\gap{\phi}{F}(\cdot, \lambda)=D_{\phi^*}(0, \nabla f(\cdot))$. \end{lemma} \begin{proof} Let $\lambda < \lambda_g$. Let $\bar x \in \mathbb{R}^n$ and define $$ \xi(x, \bar x) := f(\bar x) + \lambda \star \phi(x-\bar x + \lambda \nabla\phi^*(\nabla f(\bar x))) - \lambda \star \phi(\lambda \nabla\phi^*(\nabla f(\bar x))) + g(x). $$ Then $F_\lambda(\bar x)= \inf_{x \in \mathbb{R}^n} \xi(x, \bar x) \leq \xi(\bar x, \bar x)= F(\bar x)$ implying that $\gap{\phi}{F}(\bar x, \lambda) \geq 0$. Let $x^\star \in \mathbb{R}^n$ and assume that $\gap{\phi}{F}(x^\star, \lambda ) = 0$, i.e., $F(x^\star) = F_{\lambda}(x^\star)$. Therefore $F_{\lambda}(x^\star) = \inf_{x \in \mathbb{R}^n} \xi(x, x^\star) \leq \xi(x^\star, x^\star) = F(x^\star) = F_{\lambda}(x^\star)$ and thus the infimum in the definition of $F_{\lambda}(x^\star)$ is attained at $x^\star$. By Fermat's rule \cite[Theorem 10.1]{RoWe98} we have $$ 0 \in \partial \xi(\cdot, x^\star)(x^\star) = \nabla \phi(\nabla \phi^*(\nabla f(x^\star))) + \partial g(x^\star) = \nabla f(x^\star) + \partial g(x^\star). 
$$ Since by assumption $h(x,y)=g(x) + \lambda \star \phi(x-y)$ is level-bounded in $x$ locally uniformly in $y$, thanks to \cite[Theorem 1.17]{RoWe98}, $g \infconv \lambda \star \phi_-$ is continuous and thus $F_\lambda$ as a composition of continuous functions is continuous as well. Since $F=f+g$ is lsc we have that $\gap{\phi}{F}(\bar x, \lambda) := \frac{1}{\lambda}(F(\bar x) - F_{\lambda}(\bar x))$ is lsc. If, in particular, $g\equiv 0$ we have for any $\bar x \in \mathbb{R}^n$ \begin{align*} \gap{\phi}{F}(\bar x, \lambda) &= -\frac{1}{\lambda} \min_{x \in \mathbb{R}^n} \lambda \star \phi(x-\bar x + \lambda \nabla\phi^*(\nabla f(\bar x))) - \lambda \star \phi(\lambda \nabla\phi^*(\nabla f(\bar x))) \\ &= \phi(\nabla\phi^*(\nabla f(\bar x))) -\min_{x \in \mathbb{R}^n} \phi(\lambda^{-1}(x - \bar x) + \nabla\phi^*(\nabla f(\bar x))) \\ &=\phi(\nabla\phi^*(\nabla f(\bar x))) - \phi(\nabla \phi^*(0)) - \langle \nabla \phi(\nabla \phi^*(0)), \nabla\phi^*(\nabla f(\bar x)) - \nabla \phi^*(0) \rangle \\ &=D_{\phi}(\nabla \phi^*(\nabla f(\bar x)), \nabla \phi^*(0))= D_{\phi^*}(0, \nabla f(\bar x)), \end{align*} where the identities hold due to the constraint qualification $0\in\intr\dom\phi^*$ and the fact that $\hat x$ minimizes $x \mapsto \phi(\lambda^{-1}(x - \bar x) + \nabla\phi^*(\nabla f(\bar x)))$ if and only if $\nabla \phi^*(0) = \lambda^{-1}(\hat x - \bar x) + \nabla\phi^*(\nabla f(\bar x))$, the identity $\nabla \phi(\nabla \phi^*(0))=0$ and the identity in \cref{thm:dual_bregman}. \ifxfalsetrue \qed \fi \end{proof} We first prove the following sufficient decrease property: \begin{lemma} \label{thm:sufficient_descent} Let $\{x^k\}_{k \in \mathbb{N}_0}$ be the sequence of backward-steps generated by \cref{alg:aproxgrad}. Then the following sufficient decrease property holds true for all $k \in \mathbb{N}_0$: \begin{align} F(x^{k+1}) \leq F(x^k) - \lambda \gap{\phi}{F}(x^k, \lambda). 
\end{align} \end{lemma} \begin{proof} We have: \begin{align*} F(x^{k+1}) &= f(x^{k+1}) +g(x^{k+1}) \\ &\leq f(x^k) + \tfrac{1}{L} \star \phi(x^{k+1} - x^k + L^{-1}\nabla \phi^*(\nabla f(x^k))) - \tfrac{1}{L} \star \phi(L^{-1}\nabla \phi^*(\nabla f(x^k))) + g(x^{k+1}) \\ &\leq f(x^k) + \lambda \star \phi(x^{k+1} - x^k + \lambda \nabla \phi^*(\nabla f(x^k))) - \lambda \star \phi(\lambda \nabla \phi^*(\nabla f(x^k))) + g(x^{k+1}) \\ &= F_{\lambda}(x^k) = F(x^k) - (F(x^k) -F_{\lambda}(x^k)) = F(x^k) - \lambda \gap{\phi}{F}(x^k, \lambda), \end{align*} where the first inequality follows by the anisotropic descent inequality, the second inequality by \cref{thm:episcaling_asmoothness} and the choice $\lambda^{-1} \geq L$, and the second equality by the update of $x^{k+1}$ and the definition of $F_{\lambda}(x^k)$. \ifxfalsetrue \qed \fi \end{proof} \begin{theorem} \label{thm:asymptotic_convergence} Let $\{x^k\}_{k \in \mathbb{N}_0}$ be the sequence of backward-steps generated by \cref{alg:aproxgrad}. The sequence $\{F(x^k)\}_{k \in \mathbb{N}_0}$ is nonincreasing and convergent. In addition, every limit point $x^\star$ of the sequence of iterates $\{x^k\}_{k \in \mathbb{N}_0}$ is a stationary point of $F$, i.e., $0 \in \nabla f(x^\star) + \partial g(x^\star)$. In particular the minimum over the past regularized gaps vanishes sublinearly at rate $\mathcal{O}(1/K)$: $$ \min_{k \in \{0,1,\ldots, K-1\}} \gap{\phi}{F}(x^k, \lambda) \leq \frac{1}{\lambda K} \big(F(x^0) - \inf F \big). $$ \end{theorem} \begin{proof} Thanks to \cref{thm:sufficient_descent} we have \begin{align*} F(x^{k+1}) &\leq F(x^k) - \lambda \gap{\phi}{F}(x^k, \lambda). \end{align*} This implies that $F(x^{k})$ is nonincreasing. Since $F(x^{k}) \geq \inf F > -\infty$ is bounded from below, $F(x^{k})$ converges to some $F^* \in \mathbb{R}$.
Summing the inequality and using that $-\infty < \inf F \leq F(x^{K})$ we obtain: \begin{align} -\infty < \inf F - F(x^0) &\leq F(x^{K}) -F(x^0) \notag\\ &= \sum_{k=0}^{K-1} F(x^{k+1}) -F(x^k)\leq - \lambda \sum_{k=0}^{K-1} \gap{\phi}{F}(x^k, \lambda) \label{eq:inequality_telescope}. \end{align} This implies that $\{\sum_{k=0}^{K-1} \gap{\phi}{F}(x^k, \lambda) \}_{K \in \mathbb{N}}$ is nondecreasing and bounded, hence convergent, and thus we have \begin{align} \label{eq:limit_convergence} \lim_{k\to \infty} \gap{\phi}{F}(x^k, \lambda) = 0. \end{align} Let $x^\star$ be a limit point of $\{x^k\}_{k \in \mathbb{N}_0}$. Let $x^{k_j} \to x^\star$ be a corresponding subsequence. In view of \cref{thm:continuity_aprox_grad} $\gap{\phi}{F}(\cdot, \lambda)$ is lsc and thus we have $$ 0 \leq \gap{\phi}{F}(x^\star, \lambda) \leq \liminf_{j \to \infty} \gap{\phi}{F}(x^{k_j}, \lambda) = 0, $$ where the last equality follows from \cref{eq:limit_convergence}. Then invoking \cref{thm:continuity_aprox_grad} again this implies that $x^\star$ is a stationary point of $f+g$, i.e., $0 \in \nabla f(x^\star) + \partial g(x^\star)$. Thanks to \cref{eq:inequality_telescope} we have $$ K \cdot \min_{k \in \{0,1,\ldots, K-1\}} \gap{\phi}{F}(x^k, \lambda) \leq \sum_{k=0}^{K-1} \gap{\phi}{F}(x^k, \lambda) \leq \frac{1}{\lambda}(F(x^0) - \inf F). $$ Dividing by $K$ we obtain the claimed sublinear rate. \ifxfalsetrue \qed \fi \end{proof} \begin{remark} To our knowledge feasibility of the step-size $\lambda=1/L$ for nonconvex $g$ is also new in the Euclidean case as existing results typically require $\lambda <1/L$. In the Euclidean case uniform level-boundedness is implied by prox-boundedness with some threshold $\lambda_g$; see \cite[Theorem 1.25]{RoWe98}. In the anisotropic case this holds for any $\lambda >0$ if for example $g$ (or $\phi$) is bounded from below and $\phi$ (or $g$) is coercive in which case we have $\lambda_g = \infty$.
\end{remark} If $L$ is unknown (or only an over-estimate is available) we can still apply our algorithm by estimating a (sharper) step-size $\lambda_k$ in each iteration $k$ using a backtracking linesearch: Given $0<\lambda_{\max}< \lambda_g$ and a constant $0<\alpha < 1$ we can backtrack $\lambda_k = \alpha^t \lambda_{\max}$ until for some $t \in \mathbb{N}_0$ the descent inequality \cref{eq:adescent} with constant $L=1/\lambda_k$ is valid at $x^k$ and $x^{k+1}$. The complete scheme is listed in \cref{alg:linesearch_aproxgrad}. \begin{algorithm}[H] \caption{Linesearch anisotropic proximal gradient} \label{alg:linesearch_aproxgrad} \begin{algorithmic} \REQUIRE Let $0< \alpha < 1$ and choose some $\lambda_g >\lambda_{\max} >0$. Let $x^0\in \mathbb{R}^n$. \FORALL{$k=0, 1, \ldots$} \STATE $t\gets0$ \REPEAT \STATE $\lambda_k\gets\alpha^t \cdot \lambda_{\max}$ \STATE $y^k\gets x^{k} - \lambda_k \nabla \phi^*(\nabla f(x^k))$ \STATE $x^{k+1} \gets\argmin_{x \in \mathbb{R}^n} ~\lambda_k \star \phi(x - y^k) + g(x)$ \STATE $t\gets t+1$ \UNTIL{$f(x^{k+1}) \leq f(x^k) + \lambda_k \star \phi(x^{k+1} - y^k) -\lambda_k \star \phi(\lambda_k \nabla \phi^*(\nabla f(x^k)))$} \ENDFOR \end{algorithmic} \end{algorithm} In the following theorem we show that \cref{alg:linesearch_aproxgrad} converges subsequentially: \begin{theorem}[asymptotic convergence under backtracking linesearch] \label{thm:linesearch} Let $\{x^k\}_{k \in \mathbb{N}_0}$ be the sequence of iterates generated by \cref{alg:linesearch_aproxgrad} and denote by $\{\lambda_k\}_{k \in \mathbb{N}_0}$ the corresponding sequence of step-sizes. Then the sequence $\{F(x^k)\}_{k \in \mathbb{N}_0}$ is nonincreasing and convergent. In addition, every limit point $x^\star$ of $\{x^k\}_{k \in \mathbb{N}_0}$ is a stationary point of $F$, i.e., $0 \in \nabla f(x^\star) + \partial g(x^\star)$.
In particular the minimum over the past regularized gaps vanishes sublinearly at rate $\mathcal{O}(1/K)$: $$ \min_{k \in \{0,1,\ldots, K-1\}} \gap{\phi}{F}(x^k, \lambda_{\min}) \leq \frac{1}{K\lambda_{\min}} \big(F(x^0) - \inf F \big), $$ where $\lambda_{\min}:=\min \{\lambda_{\max}, \alpha/L \}$. \end{theorem} \begin{proof} Thanks to \cref{thm:episcaling_asmoothness} in each iteration $k \in \mathbb{N}_0$ the linesearch terminates with some $\lambda_k$ with $\lambda_{\min} \leq \lambda_k \leq \lambda_{\max}$ such that $$ f(x^{k+1}) \leq f(x^k) + \lambda_k \star \phi(x^{k+1} - x^k + \lambda_k \nabla \phi^*(\nabla f(x^k))) - \lambda_k\star \phi(\lambda_k \nabla \phi^*(\nabla f(x^k))). $$ Adding $g(x^{k+1})$ to both sides of the inequality we have: \begin{align*} F(x^{k+1}) &= f(x^{k+1}) +g(x^{k+1}) \\ &\leq f(x^k) + \lambda_k \star \phi(x^{k+1} - x^k + \lambda_k \nabla \phi^*(\nabla f(x^k))) - \lambda_k \star \phi(\lambda_k \nabla \phi^*(\nabla f(x^k))) + g(x^{k+1}) \\ &=\inf_{x \in \mathbb{R}^n} f(x^k) + \lambda_k \star \phi(x - x^k + \lambda_k\nabla \phi^*(\nabla f(x^k))) - \lambda_k \star \phi(\lambda_k \nabla \phi^*(\nabla f(x^k))) + g(x) \\ &\leq \inf_{x \in \mathbb{R}^n} f(x^k) + \lambda_{\min} \star \phi(x - x^k + \lambda_{\min}\nabla \phi^*(\nabla f(x^k))) - \lambda_{\min} \star \phi(\lambda_{\min} \nabla \phi^*(\nabla f(x^k))) + g(x) \\ &=F_{\lambda_{\min}}(x^k) = F(x^k) - (F(x^k) -F_{\lambda_{\min}}(x^k)) = F(x^k) - \lambda_{\min} \gap{\phi}{F}(x^k, \lambda_{\min}), \end{align*} where the second equality holds by the $x$-update, the last inequality holds by \cref{thm:episcaling_asmoothness} and the inequality $\lambda_{\min}^{-1} \geq \lambda_k^{-1}$, and the last equalities by definition of $F_{\lambda_{\min}}$. Summing the inequality from $k=0$ to $k=K-1$ and adapting the proof of \cref{thm:asymptotic_convergence} we obtain the claimed result.
\ifxfalsetrue \qed \fi \end{proof} \subsection{Linear convergence under anisotropic proximal gradient dominance} The following definition and the subsequent theorem are a generalization of the proximal PL-inequality due to \cite{karimi2016linear} to the anisotropic case: \begin{definition}[anisotropic proximal gradient dominance] We say that $F:=f+g$ satisfies the anisotropic proximal gradient dominance condition relative to $\phi$ with constant $\mu>0$ for parameter $0<\lambda\leq \min\{\mu^{-1},\lambda_g\}$ if for all $\bar x \in \mathbb{R}^n$ $$ \mu(F(\bar x) - \inf F) \leq \gap{\phi}{F}(\bar x, \lambda). $$ \end{definition} The linear convergence under anisotropic proximal gradient dominance follows immediately from \cref{thm:sufficient_descent}: \begin{theorem} \label{thm:linear_convergence} Let $\{x^k\}_{k \in \mathbb{N}_0}$ be the sequence of backward-steps generated by \cref{alg:aproxgrad}. Let $F:=f+g$ satisfy the anisotropic proximal gradient dominance condition relative to $\phi$ with constant $0<\mu<L$ for parameter $1/L < \lambda_g$. Let $\lambda = 1/L$. Then $\{F(x^k)\}_{k \in \mathbb{N}_0}$ converges linearly, in particular, $$ F(x^{k}) - \inf F \leq \left(1 - \frac\mu L \right)^k (F(x^0) - \inf F). $$ \end{theorem} \begin{proof} Thanks to \cref{thm:sufficient_descent} we have for any $k \in \mathbb{N}$ that \begin{align*} F(x^{k}) - \inf F \leq F(x^{k-1}) - \inf F - \frac{1}{L} \gap{\phi}{F}(x^{k-1}, L^{-1}). \end{align*} The anisotropic gradient dominance condition yields that $$ F(x^{k}) - \inf F \leq F(x^{k-1}) - \inf F - \frac{\mu}{L}(F(x^{k-1}) - \inf F) = \left(1 - \frac\mu L \right)(F(x^{k-1}) - \inf F). $$ Iterating the inequality we obtain the claimed linear convergence. \ifxfalsetrue \qed \fi \end{proof} If $g\equiv 0$ our method specializes to dual space preconditioning for gradient descent \cite{maddison2021dual} for which the linear convergence was established under different conditions \cite[Theorem 3.9]{maddison2021dual}.
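To make this special case concrete, the following minimal numerical sketch (our own illustration, not part of the formal development) instantiates the iteration $x^{k+1} = x^k - \lambda \nabla \phi^*(\nabla f(x^k))$ with the symmetrized logistic loss $\phi(t)=2\log(1+e^t)-t$, for which $\nabla\phi^*(v)=2\,\operatorname{artanh}(v)$ on $(-1,1)$, and with $f=\phi(\cdot-2)$, which satisfies the anisotropic descent inequality relative to $\phi$ with $L=1$ (in fact with equality). Then $\nabla\phi^*(\nabla f(x)) = x-2$ exactly, so the preconditioned iteration contracts linearly towards the minimizer $x^\star=2$:

```python
import math

def grad_phi_star(v):
    # (phi^*)'(v) = 2*artanh(v) for phi(t) = 2*log(1 + exp(t)) - t, v in (-1, 1)
    return 2.0 * math.atanh(v)

def grad_f(x):
    # f(x) = phi(x - 2), hence f'(x) = tanh((x - 2)/2), with range (-1, 1)
    return math.tanh((x - 2.0) / 2.0)

# Dual space preconditioned (anisotropic) gradient descent with g == 0:
lam = 0.5          # any lam <= 1/L = 1 is admissible for this f
x = 10.0           # starting point far out in the flat tail of f
for _ in range(100):
    x = x - lam * grad_phi_star(grad_f(x))

print(x)  # ≈ 2.0, since grad_phi_star(grad_f(x)) = x - 2 exactly
```

A plain gradient step $x - \lambda f'(x)$ would crawl through the flat tails of $f$ (where $|f'|\approx 1$), whereas composing with $\nabla\phi^*$ restores an exact linear contraction here.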
As shown in \cite[Proposition 3.3]{maddison2021dual} these conditions are equivalent to Bregman strong convexity and smoothness of $f^*$ relative to $\phi^*$. As we shall see next this is a special case of our setting if $g\equiv 0$. Next we show that $F=f+g$ satisfies the anisotropic proximal gradient dominance condition if $f$ is anisotropically strongly convex defined as follows: \begin{definition}[anisotropic strong convexity] Let $h \in \Gamma_0(\mathbb{R}^n)$ such that $\ran \partial h \supseteq \ran \nabla \phi$. Then we say that $h$ is anisotropically strongly convex relative to $\phi$ with constant $\mu$ if for all $(\bar x, \bar v) \in \gph \partial h \cap (\mathbb{R}^n \times \intr \dom \phi^*)$ the following inequality holds true \begin{align}\label{eq:a-strongly} h(x) &\geq h(\bar x) + \tfrac1 \mu \star \phi(x-\bar x + \mu^{-1} \nabla\phi^*(\bar v)) - \tfrac1 \mu \star \phi(\mu^{-1} \nabla\phi^*(\bar v)), \quad \forall x\in\mathbb{R}^n. \end{align} \end{definition} Next we show that anisotropic strong convexity of $f$ implies anisotropic proximal gradient dominance of $F=f+g$. For that purpose we shall prove the following epi-scaling property of the gap function $\gap{\phi}{F}(\cdot, \lambda)$ adapting the proof of \cite[Lemma 1]{karimi2016linear} to the anisotropic case: \begin{lemma}[scaling property of gap function] \label{thm:scaling_property} Let $g \in \Gamma_0(\mathbb{R}^n)$. For any $0<\lambda_2 \leq \lambda_1$ we have that $$ \gap{\phi}{F}(\bar x, \lambda_2) \geq \gap{\phi}{F}(\bar x, \lambda_1). $$ \end{lemma} \begin{proof} Without loss of generality let $\bar x \in \dom g$. 
We rewrite \begin{align} \gap{\phi}{F}(\bar x, \lambda) &= \frac{1}{\lambda}\big(F(\bar x) - F_{\lambda}(\bar x) \big) \notag \\ &= \phi(\nabla \phi^*(\nabla f(\bar x)))-\min_{x \in \mathbb{R}^n} \left\{ \phi(\lambda^{-1}(x - \bar x) + \nabla \phi^*(\nabla f(\bar x))) + \lambda^{-1}(g(x) - g(\bar x)) \right\} \notag \\ &= \phi(\nabla \phi^*(\nabla f(\bar x)))-\min_{u \in \mathbb{R}^n} \left\{ \phi(u + \nabla \phi^*(\nabla f(\bar x))) + \lambda^{-1}(g(\lambda u +\bar x) - g(\bar x)) \right\} \label{eq:subst_scaling_1}\\ &= \phi(\nabla \phi^*(\nabla f(\bar x)))-\min_{u \in \mathbb{R}^n} \left\{ \phi(u + \nabla \phi^*(\nabla f(\bar x))) + \lambda^{-1}\star h(u) \right\}\label{eq:subst_scaling_2}, \end{align} where \cref{eq:subst_scaling_1} follows by the change of variable $u := \lambda^{-1}(x - \bar x)$ and \cref{eq:subst_scaling_2} follows by defining $h:=g(\cdot + \bar x) - g(\bar x)$. Thus $\gap{\phi}{F}(\bar x, \lambda_2) \geq \gap{\phi}{F}(\bar x, \lambda_1)$ is implied by $\frac{1}{\lambda_2}\star h(u) \leq \frac{1}{\lambda_1}\star h(u)$ for all $u \in \mathbb{R}^n$, which holds by convexity of $h$ and $h(0) = 0$, thanks to \cite[Lemma 4.4]{burke2013epi}; see also the proof of \cite[Lemma 1]{karimi2016linear}. \ifxfalsetrue \qed \fi \end{proof} Now we are ready to prove that anisotropic strong convexity of $f$ implies anisotropic proximal gradient dominance of $F=f+g$: \begin{proposition}[anisotropic strong convexity implies anisotropic proximal gradient dominance] \label{thm:strong_implies_pg_dominance} Let $f$ be anisotropically strongly convex relative to $\phi$ with constant $\mu$ and $g \in \Gamma_0(\mathbb{R}^n)$. Then $\ran \nabla f = \ran \nabla \phi$ and $F:=f+g$ is coercive and strictly convex relative to its effective domain implying that it has a unique minimizer $x^\star= \argmin F$.
In particular, $F$ satisfies the anisotropic proximal gradient dominance condition relative to $\phi$ with constant $\mu$ for any parameter $0<\lambda\leq \mu^{-1}$, i.e., $$ \mu(F(\bar x) - F(x^\star) ) \leq \gap{\phi}{F}(\bar x, \lambda). $$ \end{proposition} \begin{proof} Since $f$ is anisotropically strongly convex we have $\ran \nabla f \supseteq \ran \nabla \phi$. By anisotropic smoothness of $f$ we also have $\ran \nabla f \subseteq \ran \nabla \phi$. Combining the inclusions we have $\ran \nabla f = \ran \nabla \phi=\intr\dom \phi^*$. Let $\bar x \in \mathbb{R}^n$. The anisotropic strong convexity inequality yields \begin{align*} f(x) &\geq f(\bar x) + \tfrac1 \mu \star \phi(x-\bar x + \mu^{-1} \nabla\phi^*(\nabla f(\bar x))) - \tfrac1 \mu \star \phi(\mu^{-1} \nabla\phi^*(\nabla f(\bar x))) > f(\bar x) + \langle \nabla f(\bar x), x - \bar x\rangle, \end{align*} for all $x \in \mathbb{R}^n$ with $x\neq \bar x$, where the second inequality follows due to strict convexity of $\phi$. Thus $f$ is strictly convex and therefore $F=f+g$ is strictly convex relative to $\dom F = \dom g$ implying that $F^*$ is essentially smooth. Thanks to \cref{thm:moreau_decomposition:decomp} the convex lower bound $$ \xi_{\bar x}(x) := f(\bar x) + \tfrac1 \mu \star \phi(x-\bar x + \mu^{-1} \nabla\phi^*(\nabla f(\bar x))) - \tfrac1 \mu \star \phi(\mu^{-1}\nabla\phi^*(\nabla f(\bar x))) + g(x), $$ of $F$ has the unique minimizer $\argmin_{x \in \mathbb{R}^n} \xi_{\bar x}(x) = \aprox[\mu^{-1}]{\phi_-}{g}(\bar x - \mu^{-1} \nabla\phi^*(\nabla f(\bar x)))$. Invoking \cite[Theorem 11.8(b)]{RoWe98} we know that $0 \in \dom \partial \xi_{\bar x}^*$. Since $\xi_{\bar x}$ is strictly convex relative to $\dom g$ we have that $\xi_{\bar x}^*$ is essentially smooth implying that $0 \in \dom \partial \xi_{\bar x}^*=\intr(\dom \xi_{\bar x}^*)$. In view of \cite[Theorem 11.8(c)]{RoWe98} this means that $\xi_{\bar x}$ is coercive. Since $\xi_{\bar x} \leq F$ the cost $F$ is coercive too.
Again invoking \cite[Theorem 11.8(c)]{RoWe98} we know that $0 \in \intr \dom F^*$. Since $F^*$ is essentially smooth we have that $0 \in \intr \dom F^* = \dom \partial F^*$. Invoking \cite[Theorem 11.8(b)]{RoWe98} $F$ has a unique minimizer $x^\star$. By anisotropic strong convexity we have: $$ f(x)\geq f(\bar x) + \tfrac1 \mu \star \phi(x-\bar x + \mu^{-1} \nabla\phi^*(\nabla f(\bar x))) - \tfrac1 \mu \star \phi(\mu^{-1} \nabla\phi^*(\nabla f(\bar x))). $$ Adding $g(x)$ to both sides of the inequality and minimizing both sides with respect to $x$ we have \begin{align*} F(x^\star)&\geq \inf_{x \in \mathbb{R}^n} f(\bar x) + \tfrac1 \mu \star \phi(x-\bar x + \mu^{-1} \nabla\phi^*(\nabla f(\bar x))) - \tfrac1 \mu \star \phi(\mu^{-1} \nabla\phi^*(\nabla f(\bar x))) + g(x) = F_{\mu^{-1}}(\bar x). \end{align*} This implies that $\mu^{-1}\gap{\phi}{F}(\bar x, \mu^{-1}) = F(\bar x) - F_{\mu^{-1}}(\bar x) \geq F(\bar x) - F(x^\star)$. Invoking \cref{thm:scaling_property} we have $\mu(F(\bar x) -F(x^\star)) \leq \gap{\phi}{F}(\bar x, \mu^{-1}) \leq \gap{\phi}{F}(\bar x, \lambda)$, for any $\lambda \leq \mu^{-1}$. \ifxfalsetrue \qed \fi \end{proof} Next we prove a conjugate duality correspondence between anisotropic strong convexity and relative smoothness in the Bregman sense \cite{birnbaum2011distributed,bauschke2017descent,lu2018relatively} generalizing \cite{laude2021conjugate} to reference functions which are not necessarily super-coercive. This turns out to be helpful in the subsequent \cref{ex:a_strongly_convex_1,ex:a_strongly_convex_2}. \begin{proposition}[conjugate duality between anisotropic strong convexity and relative smoothness] \label{thm:conjugate_duality_astrong} Let $h \in \Gamma_0(\mathbb{R}^n)$. Assume that $ \ran \partial h \supseteq \ran \nabla \phi. $ Let $\mu > 0$.
Then the following are equivalent: \begin{propenum} \item \label{thm:conjugate_duality_astrong:astrong} $h$ satisfies the anisotropic strong convexity inequality for all $(\bar x, \bar v) \in \gph \partial h \cap (\mathbb{R}^n \times \intr \dom \phi^*)$, i.e., the following inequality holds true \begin{align}\label{eq:a_strongly_mu1} h(x) &\geq h(\bar x) +\tfrac{1}{\mu} \star \phi(x-\bar x + \mu^{-1}\nabla\phi^*(\bar v)) - \tfrac{1}{\mu} \star \phi(\mu^{-1}\nabla\phi^*(\bar v)) \quad \forall x\in\mathbb{R}^n; \end{align} \item \label{thm:conjugate_duality_astrong:bsmooth} $h^*$ is smooth relative to $\phi^*$ in the Bregman sense with constant $\mu^{-1}$, i.e., $\mu^{-1}\phi^* \mathbin{\dot{-}} h^*$ is convex on $\intr \dom \phi^*$. \end{propenum} \end{proposition} \begin{proof} Without loss of generality assume that $\mu = 1$ by replacing $\phi$ with $\mu^{-1} \star \phi$. Choose $\Phi(x,y) = \phi(x-y)$. ``\labelcref{thm:conjugate_duality_astrong:astrong} $\Rightarrow$ \labelcref{thm:conjugate_duality_astrong:bsmooth}'': Let $\bar v \in \intr \dom \phi^* = \ran \nabla \phi$. Thanks to the constraint qualification we have $\bar v \in \ran \partial h$. Thus there exists $\bar x \in \dom \partial h$ such that $\bar v \in \partial h(\bar x)$ and the subgradient inequality \cref{eq:a_strongly_mu1} holds true for $(\bar x, \bar v)$. Then the anisotropic subgradient inequality means by definition that $\bar y := \bar x - \nabla \phi^*(\bar v) \in \partial_\Phi h(\bar x)$. Invoking \cref{thm:phi_subgradients} we have $h(\bar x) + h^\Phi(\bar y) = \phi(\bar x - \bar y)$ and $\bar x \in \partial_\Phi h^\Phi(\bar y)$ where the latter means by definition that $\bar y \in \argmax \phi(\bar x - \cdot) - h^\Phi$. Combined these yield \begin{align} \label{eq:supremum_y} \sup_{y \in \mathbb{R}^n} \phi(\bar x - y) - h^\Phi(y) = \phi(\bar x - \bar y) - h^\Phi(\bar y) = h(\bar x).
\end{align} Fenchel duality yields $\phi(\bar x - y) = \phi^{**}(\bar x - y) = \sup_{v \in \mathbb{R}^n} \langle \bar x - y, v \rangle - \phi^*(v)$. Define $q(y) := \sup_{v \in \mathbb{R}^n} \xi(y, v)$ for $\xi(y, v) := \langle \bar x - y, v \rangle - \phi^*(v) - h^\Phi(y)$. Then we can rewrite the supremum in \cref{eq:supremum_y} in terms of the joint supremum $h(\bar x) = \sup_{y \in \mathbb{R}^n} q(y)= q(\bar y)$ and in particular $\bar y \in \argmax \,q$. We have $\bar y = \bar x - \nabla \phi^*(\bar v)$ or equivalently $\bar v = \nabla \phi(\bar x - \bar y)$ which via convex conjugacy implies that $\bar v \in \argmax \langle \bar x - \bar y, \cdot \rangle - \phi^*$. This shows that $\bar v \in \argmax \xi(\bar y, \cdot)$. Define $p(v) = \sup_{y \in \mathbb{R}^n} \xi(y, v)$. Then \cite[Proposition 1.35]{RoWe98} yields that $(\bar y, \bar v) \in \argmax \xi$ as well as $\bar v \in \argmax \,p$. Overall this yields \begin{align*} h(\bar x) &= \sup_{y \in \mathbb{R}^n} q(y) = \sup_{v\in \mathbb{R}^n} p(v) = p(\bar v) \\ &= \langle \bar x, \bar v \rangle - \phi^*(\bar v) + \sup_{y \in \mathbb{R}^n} \langle y, -\bar v \rangle - h^\Phi(y) = \langle \bar x, \bar v \rangle - \phi^*(\bar v) + (h^\Phi)^*(-\bar v), \end{align*} which combined with the fact that $\bar v \in \partial h(\bar x)$ results in $$ h^*(\bar v) = \langle \bar x, \bar v \rangle - h(\bar x) = \phi^*(\bar v) - (h^\Phi)^*_-(\bar v) \in \mathbb{R}. $$ Since $\bar v \in \intr \dom \phi^*$ was arbitrary we have $\phi^* \mathbin{\dot{-}} h^* \equiv (h^\Phi)^*_-$ on $\intr \dom \phi^*$ and thus $\phi^* \mathbin{\dot{-}} h^*$ is convex on $\intr \dom \phi^*$. ``\labelcref{thm:conjugate_duality_astrong:bsmooth} $\Rightarrow$ \labelcref{thm:conjugate_duality_astrong:astrong}'': Thanks to the constraint qualification we have $\dom h^* \supseteq \dom \partial h^* = \ran \partial h \supseteq \intr \dom \phi^*$ and thus $\phi^* + (- h^*)$ is finite-valued and convex on $\intr \dom \phi^*$. 
This implies that $-h^*$ is finite-valued on $\intr \dom \phi^*$. Since $\phi^*$ is smooth on $\intr \dom \phi^*$ we can invoke \cite[Exercise 8.20(b)]{RoWe98} to show that $-h^*$ is regular on $\intr \dom \phi^*$. Invoking \cite[Exercise 8.8(c)]{RoWe98} we obtain that $\partial(\phi^* +(- h^*)) =\widehat \partial(\phi^* +(- h^*)) = \nabla \phi^* + \widehat \partial (-h^*)$ on $\intr \dom \phi^*$. Since, in addition, $h^* \in \Gamma_0(\mathbb{R}^n)$, both $\pm h^*$ are regular with $\widehat\partial (\pm h^*)$ nonempty on $\intr \dom \phi^*$. Invoking \cite[Theorem 9.18]{RoWe98} this means that $h^*$ is smooth on $\intr \dom \phi^*$ and as such so is $g:= \phi^* \mathbin{\dot{-}} h^*$. Fix $(\bar x, \bar v) \in \gph \partial h \cap (\mathbb{R}^n \times \intr \dom \phi^*)$ and let $\bar y:= \bar x - \nabla \phi^*(\bar v)$. Then $\nabla \phi(\bar x - \bar y) = \bar v \in \partial h(\bar x)$ and since $h \in \Gamma_0(\mathbb{R}^n)$ using convex conjugacy we have $\bar x = \nabla h^*(\nabla \phi(\bar x - \bar y))$. Since $h^*\equiv \phi^* \mathbin{\dot{-}} g$ on $\dom \phi^*$ and $\dom \phi^* = \dom (\phi^* \mathbin{\dot{-}} g)$ we have \begin{align} h(x) &= h^{**}(x) = \sup_{v \in \mathbb{R}^n} \langle v, x \rangle - h^*(v) \notag \\ &\geq \sup_{v \in \dom \phi^*} \langle v, x \rangle - h^*(v) = \sup_{v \in \mathbb{R}^n} \langle v, x \rangle - (\phi^* \mathbin{\dot{-}} g)(v) = (\phi^* \mathbin{\dot{-}} g)^{*}(x) \label{eq:hirriart_urruty_1} \\ &= \sup_{y \in \mathbb{R}^n} \phi(x-y) \mathbin{\text{\d{\ensuremath{-}}}} g^*_-(y)\label{eq:hirriart_urruty} \\ &\geq \phi(x-\bar y) \mathbin{\text{\d{\ensuremath{-}}}} g^*_-(\bar y) \label{eq:inequality_f}, \end{align} where \cref{eq:hirriart_urruty} follows by Hiriart-Urruty \cite[Theorem 2.2]{hiriart1986general}.
Smoothness of $h^*$ and $g$ on $\intr \dom \phi^*$ yields: \begin{align*} \bar x = \nabla h^*(\nabla \phi(\bar x - \bar y)) &= \nabla (\phi^* - g)(\nabla \phi(\bar x - \bar y)) =\bar x - \bar y - \nabla g(\nabla \phi(\bar x - \bar y)), \end{align*} and therefore $\nabla g(\nabla \phi(\bar x - \bar y)) = - \bar y$. Recall that $\nabla \phi(\bar x - \bar y) = \bar v \in \partial h(\bar x)$. Then we have thanks to convex conjugacy \begin{align*} h(\bar x) &= \langle \nabla \phi(\bar x - \bar y), \bar x \rangle - h^*(\nabla \phi(\bar x - \bar y)) = \langle \nabla \phi(\bar x - \bar y), \bar x \rangle - \phi^*(\nabla \phi(\bar x - \bar y)) + g(\nabla \phi(\bar x - \bar y)) \\ &= \langle \nabla \phi(\bar x - \bar y), \bar x - \bar y \rangle - \phi^*(\nabla \phi(\bar x - \bar y)) - \big( \langle \nabla \phi(\bar x - \bar y), - \bar y \rangle -g(\nabla \phi(\bar x - \bar y)) \big) \\ &= \phi(\bar x - \bar y) - g^*_-(\bar y). \end{align*} By combining this result with \cref{eq:inequality_f} we obtain \[ h(x) \geq h(\bar x) + \phi(x-\bar y) -\phi(\bar x - \bar y) = h(\bar x) + \phi(x-\bar x + \nabla \phi^*(\bar v)) -\phi(\nabla \phi^*(\bar v)). \ifxfalsetrue \qed \else \qedhere \fi \] \end{proof} \begin{remark} For super-coercive $\phi$ the inequality \cref{eq:hirriart_urruty_1} holds with equality and thus the conjugate $h=h^{**}$ of a convex Bregman smooth function $h^*$ is a so-called Klee envelope $h^{**}(x)=\sup_{y \in \mathbb{R}^n} \phi(x-y) \mathbin{\text{\d{\ensuremath{-}}}} g^*_-(y)$. As shown in \cite{laude2021conjugate} the converse, however, is false in general unless $\phi=\frac{1}{2}\|\cdot\|^2$, $h$ is essentially smooth, or a finite-valued, one-dimensional function. \end{remark} \begin{remark} The constraint qualification $\ran \partial h \supseteq \ran \nabla \phi$ in the theorem above is equivalent to $\dom h^* \supseteq \dom \phi^*$ which is the standard constraint qualification in Bregman smoothness, see, e.g., \cite{bauschke2017descent}. 
This is proved as follows: $\dom h^* \supseteq \dom \phi^*$ implies that $\intr \dom h^* \supseteq \intr \dom \phi^*$ and thus $\ran \partial h=\dom \partial h^* \supseteq \intr \dom h^* \supseteq \intr \dom \phi^*=\ran \nabla \phi$. To show the opposite implication let $x \in \dom \phi^*$ and $x_0 \in \intr \dom \phi^*$. We have \begin{align*} h^*(x) &\leq \liminf_{\tau \nearrow 1} h^*((1-\tau) x_0 + \tau x) \\ &\leq \lim_{\tau \nearrow 1} h^*(x_0) + \langle \nabla h^*(x_0), (1-\tau) x_0 + \tau x - x_0 \rangle + D_{\phi^*}((1-\tau) x_0 + \tau x, x_0) \\ &= h^*(x_0) + \langle \nabla h^*(x_0), x - x_0 \rangle + D_{\phi^*}(x, x_0), \end{align*} where the first inequality holds since $h^*$ is lsc, the second since in view of the proof of the above theorem, $\phi^*-h^*$ is convex and smooth on $\intr \dom \phi^*$, and $(1-\tau) x_0 + \tau x \in \intr \dom \phi^*$ for $\tau \in [0, 1)$ by the line segment principle \cite[Theorem 2.33]{RoWe98} and the last equality holds due to \cite[Theorem 2.35]{RoWe98} and since $\phi^* \in \Gamma_0(\mathbb{R}^n)$. Since $x \in \dom \phi^*$ and $x_0 \in \intr \dom \phi^*$ the right-hand side of the inequality is finite and thus $x \in \dom h^*$. \end{remark} The next examples show that the absolute value function and the quadratic function are both anisotropically strongly convex relative to a softmax approximation of the absolute value function: \begin{example} \label{ex:a_strongly_convex_1} Let $g(x):=\nu |x|$ with $\nu \geq 1$. Then $g$ is anisotropically strongly convex relative to the symmetrized logistic loss $\phi(x):=2\log(1+ \exp(x)) - x$ with any constant $\mu>0$. \end{example} \begin{proof} The individual conjugates amount to $g^*(x)=\delta_{[-\nu,\nu]}(x)$ and $\phi^*(x)=(x + 1)\log((x + 1)/2) + (1-x)\log((1-x)/2)$ with the convention $0\log 0 := 0$ and $\dom\phi^*=[-1, 1]$.
Since $\dom g^* = [-\nu,\nu] \supseteq [-1,1]$ thanks to \cref{thm:conjugate_duality_astrong} anisotropic strong convexity of $g$ is implied by convexity of $\frac{1}{\mu}\phi^* \mathbin{\dot{-}} g^*$ on $(-1, 1)$. Since $g^*\equiv 0$ on $(-1, 1)$ this is valid for any $\mu>0$. \ifxfalsetrue \qed \fi \end{proof} \begin{remark}[Disclaimer] It should be noted that for $\nu \geq 1$ \cref{ex:a_strongly_convex_1} is mainly of theoretical interest: For the smooth part $f$ we have the restriction $\ran f' \subseteq \intr\dom\phi^*=(-1,1)$ and thus the first-order optimality condition $-f'(x^\star)\in \partial g(x^\star)$ can only hold at $x^\star = 0$. \end{remark} \begin{example} \label{ex:a_strongly_convex_2} Let $g(x):=\frac{\nu}{2}x^2$ with $\nu >0$. Then $g$ is anisotropically strongly convex relative to $\phi(x):=2\log(1+ \exp(x)) - x$ with constant $\mu:=2\nu$. \end{example} \begin{proof} The individual conjugates amount to $g^*(x)=\frac{1}{2\nu}x^2$ and $\phi^*(x)=(x + 1)\log((x + 1)/2) + (1-x)\log((1-x)/2)$ with the convention $0\log 0 := 0$ and $\dom\phi^*=[-1, 1]$. Since $\dom g^* = \mathbb{R}$ thanks to \cref{thm:conjugate_duality_astrong} anisotropic strong convexity of $g$ is implied by convexity of $\frac{1}{\mu}\phi^* \mathbin{\dot{-}} g^*$ on $(-1, 1)$. This is implied by $(\phi^*)''(x)=2/(1-x^2)\geq \frac{\mu}{\nu}$ for all $x \in (-1,1)$ which is valid for $\mu=2\nu$. \ifxfalsetrue \qed \fi \end{proof} \section{Difference of \texorpdfstring{$\Phi$}{Phi}-convex approach} \label{sec:dca} In this section we provide an equivalent interpretation of the algorithm in terms of a \emph{difference of $\Phi$-convex approach} and show that the algorithm is invariant under a certain double-min duality (DC-duality). This allows us to transfer smoothness between the components $f$ and $g$ of the optimization problem.
If $g$, in place of $f$, is anisotropically strongly convex, this is used to show that the DC-dual problem satisfies the anisotropic proximal gradient dominance condition and that the algorithm attains linear convergence. In view of \cref{thm:phi_envelope,thm:episcaling_asmoothness} anisotropic smoothness of $f$ relative to $\phi$ with constant $L$ implies that $-f$ is $\Phi$-convex relative to $\Phi(x,y)=-\lambda \star \phi(x-y)$ for any $\lambda \leq 1/L$ and thus that $(-f)(x)=(-f)^{\Phi\Phi}(x)= \sup_{y \in \mathbb{R}^n} -\lambda \star\phi(x-y) -(-f)^{\Phi}(y)$ where the supremum is attained. By invoking the notion of the $\Phi$-subdifferential this allows us to provide an alternative interpretation of our algorithm in terms of a difference of $\Phi$-convex approach applied to $F=g - (-f)$: Thanks to \cref{thm:phi_subgradient_gradient_correspondence,thm:phi_subgradients} the updates in \cref{alg:aproxgrad} can be rewritten: \begin{align} y^k &= x^k - \lambda\nabla\phi^*(\nabla f(x^k)) \in \partial_\Phi(-f)(x^k) \label{eq:forward_phi} \\ x^{k+1} &= \argmin_{x \in \mathbb{R}^n} ~\lambda\star \phi(x - y^k) + g(x)= \argmax_{x \in \mathbb{R}^n} ~\Phi(x, y^k) - g(x) \in \partial_\Phi g^\Phi(y^k) \label{eq:backward_phi}. \end{align} Up to replacing the classical subdifferential with the $\Phi$-subdifferential this scheme has exactly the structure of the classical \emph{difference of convex approach} (DCA). Next we shall discuss a double-min duality, also called DC-duality in the classical DCA; see, e.g., \cite{tao1997convex}. In particular we show that the algorithm is invariant under this duality.
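As an illustration (our own sketch, not part of the formal development), the forward- and backward-steps \cref{eq:forward_phi,eq:backward_phi} can be run on a one-dimensional toy instance with $\phi(t)=2\log(1+e^t)-t$, $f=\phi(\cdot-2)$ (anisotropically smooth relative to $\phi$ with constant $L=1$) and $g(x)=x^2/2$ (anisotropically strongly convex, cf.\ \cref{ex:a_strongly_convex_2}); the backward step has no closed form here, so it is computed by bisection on its first-order optimality condition:

```python
import math

lam = 0.5  # step size; admissible since f below is anisotropically smooth with L = 1

def grad_phi_star(v):
    return 2.0 * math.atanh(v)          # (phi^*)'(v) for phi(t) = 2*log(1+e^t) - t

def grad_f(x):
    return math.tanh((x - 2.0) / 2.0)   # f(x) = phi(x - 2)

def backward_step(y):
    # x^{k+1} = argmin_x lam * phi((x - y)/lam) + x^2/2.  The optimality
    # condition tanh((x - y)/(2*lam)) + x = 0 has its unique root in (-1, 1)
    # because |tanh| < 1; solve it by bisection on the increasing residual.
    lo, hi = -1.0, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if math.tanh((mid - y) / (2.0 * lam)) + mid > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x = 0.0
for _ in range(100):
    y = x - lam * grad_phi_star(grad_f(x))   # forward step
    x = backward_step(y)                     # backward step

residual = abs(grad_f(x) + x)                # stationarity: f'(x) + g'(x) = 0
```

At a fixed point the forward-backward composition reproduces the stationarity condition $\nabla f(x^\star) + \nabla g(x^\star) = 0$, which the residual above verifies numerically.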
Based on the identity $-f = (-f)^{\Phi\Phi}$ and assuming that the optimization problem~\cref{eq:opt_prob} has a minimizer, we can rewrite \cref{eq:opt_prob} as: \begin{align} \min_{x \in \mathbb{R}^n} ~g(x) + f(x) &= \min_{x \in \mathbb{R}^n} ~g(x) -(-f)^{\Phi\Phi}(x)\notag \\ &= \min_{x \in \mathbb{R}^n} ~g(x) - \max_{y \in \mathbb{R}^n} -\lambda \star\phi(x-y) -(-f)^{\Phi}(y)\notag \\ &= \min_{x \in \mathbb{R}^n} \min_{y \in \mathbb{R}^n} ~g(x) + \lambda \star\phi(x-y) +(-f)^{\Phi}(y)\notag \\ &= \min_{y \in \mathbb{R}^n}\min_{x \in \mathbb{R}^n} ~g(x) + \lambda \star\phi(x-y) +(-f)^{\Phi}(y)\label{eq:double_min} \\ &= \min_{y \in \mathbb{R}^n} (-f)^{\Phi}(y) -\max_{x \in \mathbb{R}^n} -\lambda \star \phi(x-y)-g(x)\notag \\ &= \min_{y \in \mathbb{R}^n} (-f)^{\Phi}(y) -g^\Phi(y) \label{eq:dc_dual_plain}, \end{align} where reversing the order of minimization is also called double-min duality; see \cite[Theorem 11.67]{RoWe98}. In the spirit of classical DC-duality we refer to \cref{eq:dc_dual_plain} as the DC-dual problem of \cref{eq:opt_prob}. Rewriting $-g^\Phi$ and $(-f)^{\Phi}$ in terms of the infimal convolution and the infimal deconvolution, respectively, as introduced in \cref{sec:phi_convexity}, the DC-dual problem of \cref{eq:opt_prob} can be recast as \begin{align} \text{minimize}~ G:= (g \infconv \lambda \star \phi_-) + (f \infdeconv \lambda \star \phi) \tag{D} \label{eq:dc_dual}. \end{align} The epi-graphical addition and subtraction of the reference function lead to a transfer of smoothness in the DC-dual and correspond to pointwise addition and subtraction of the conjugate reference function in the Fenchel--Rockafellar dual. In the Euclidean case this is also related to primal and dual regularization in classical DC-programming \cite[Section 5.2]{tao1997convex}. Next we show that, up to interchanging the roles of forward- and backward-step, our algorithm is invariant under double-min duality, as is known for the classical DCA.
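The double-min duality can also be sanity-checked numerically. The brute-force sketch below (our own construction, under toy assumptions of our choosing: $\phi(t)=2\log(1+e^t)-t$, $f=\phi(\cdot-2)$, which is anisotropically smooth relative to $\phi$ with $L=1$, $g(x)=x^2/2$, and $\lambda=1/2$) evaluates the two terms of the DC-dual cost $G$ on a grid and compares $\min G$ with $\min F$:

```python
import numpy as np

lam = 0.5

def phi(t):
    # 2*log(1 + exp(t)) - t, evaluated stably and vectorized
    t = np.asarray(t, dtype=float)
    return np.abs(t) + 2.0 * np.log1p(np.exp(-np.abs(t)))

def sphi(u):
    # epi-scaling: (lam * phi)(u) = lam * phi(u / lam)
    return lam * phi(u / lam)

f = lambda x: phi(x - 2.0)
g = lambda x: 0.5 * x**2

xs = np.linspace(-40.0, 40.0, 80001)
fx, gx = f(xs), g(xs)
min_F = float(np.min(fx + gx))            # primal optimal value (on the grid)

# DC-dual cost G(y) = (g inf-conv lam*phi_-)(y) + (f inf-deconv lam*phi)(y);
# both inner problems are solved by brute force over the same x-grid
# (phi is even here, so phi_- = phi).
ys = np.linspace(-2.0, 5.0, 701)
G = np.empty_like(ys)
for i, y in enumerate(ys):
    s = sphi(xs - y)
    G[i] = np.min(gx + s) + np.max(fx - s)
min_G = float(np.min(G))                  # agrees with min_F up to grid error
```

Both inner problems have interior solutions on the chosen grid, so the brute-force values approximate the envelope and the deconvolution well; the two minima then coincide up to discretization error, as the duality predicts.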
The following result is a generalization of \cite[Theorem 3.8]{laude2021lower} for the Euclidean case to the anisotropic case: \begin{proposition}[interchange of forward- and backward-step] \label{thm:invariance_double_min} Let $g$ be convex. Then $g \infconv \lambda \star \phi_-$ is anisotropically smooth relative to $\phi_-$ with constant $\lambda^{-1}$ and for any $y \in \mathbb{R}^n$ a backward-step on $g$ at $y$ is a forward-step on $g \infconv \lambda \star \phi_-$ at $y$, i.e., $$ y - \lambda \nabla \phi_-^*(\nabla (g \infconv \lambda \star \phi_-)(y)) = \aprox[\lambda]{\phi_-}{g}(y). $$ For any $x \in \mathbb{R}^n$ a forward-step on $f$ at $x$ is a backward-step on $f \infdeconv \lambda \star \phi$ at $x$, i.e., $$ \aprox[\lambda]{\phi}{f \infdeconv \lambda \star \phi}(x) = x - \lambda \nabla \phi^*(\nabla f(x)). $$ If, in addition, $f$ is convex we have $f \infdeconv \lambda \star \phi \in \Gamma_0(\mathbb{R}^n)$ with $\dom (f \infdeconv \lambda \star \phi)^* \cap \intr \dom\phi^* \neq \emptyset$, i.e., the constraint qualification for the anisotropic proximal mapping in the convex case is satisfied. \end{proposition} \begin{proof} In view of \cref{thm:moreau_decomposition:smooth} the following gradient formula for the anisotropic Moreau envelope holds true: \begin{align} \nabla (g \infconv \lambda \star \phi_-) = \nabla \phi_- \circ \lambda^{-1}(\id - \aprox[\lambda]{\phi_-}{g}), \end{align} and in particular $\ran \nabla(g \infconv \lambda \star \phi_-) = \ran \nabla \phi_- \circ \lambda^{-1}(\id - \aprox[\lambda]{\phi_-}{g}) \subseteq \ran \nabla \phi_-$.
Thus we can invoke \cref{thm:phi_convex_asmooth} and obtain that $g \infconv \lambda \star \phi_-$ has the anisotropic smoothness property relative to $\phi_-$ with constant $\lambda^{-1}$ and in particular we have for the forward-step applied to $g \infconv \lambda \star \phi_-$ at any $y \in \mathbb{R}^n$: \begin{align*} y - \lambda \nabla \phi_-^*(\nabla (g \infconv \lambda \star \phi_-)(y))&=y - \lambda \nabla \phi_-^*(\nabla \phi_-(\lambda^{-1}(y - \aprox[\lambda]{\phi_-}{g}(y))))\\ &= y - (y - \aprox[\lambda]{\phi_-}{g}(y)) =\aprox[\lambda]{\phi_-}{g}(y). \end{align*} Define $\Phi(x,y):= -\lambda \star \phi(x-y)$. Let $y \in \aprox[\lambda]{\phi}{f \infdeconv \lambda \star \phi}(x) =\argmax_{y \in \mathbb{R}^n} ~\Phi(x,y) -(-f)^\Phi(y)$ be the backward-step applied to $f \infdeconv \lambda \star \phi=(-f)^\Phi$ at $x$. By definition of the $\Phi$-subdifferential this is equivalent to $x \in \partial_\Phi (-f)^\Phi(y)$. Since $f$ is anisotropically smooth relative to $\phi$ with constant $L$, by \cref{thm:episcaling_asmoothness}, $f$ is anisotropically smooth relative to $\phi$ with constant $\lambda^{-1}\geq L$ as well. By \cref{thm:phi_convex_asmooth} this means that $-f$ is $\Phi$-convex and thus by \cref{thm:phi_subgradients} $x \in \partial_\Phi (-f)^\Phi(y)$ is equivalent to $y \in \partial_\Phi (-f)(x)$. Invoking \cref{thm:phi_subgradient_gradient_correspondence} this implies that $\aprox[\lambda]{\phi}{f \infdeconv \lambda \star \phi}(x) =\partial_\Phi (-f)(x)= (\id -\lambda \nabla \phi^* \circ \nabla f)(x) = x - \lambda \nabla \phi^*(\nabla f(x))$. If, in addition, $f$ is convex, thanks to \cref{thm:convexity_deconvolution} the deconvolution $f \infdeconv \lambda \star \phi = (f^* \mathbin{\dot{-}} \lambda \phi^*)^* \in \Gamma_0(\mathbb{R}^n)$. Since $\ran \nabla f \subseteq \ran \nabla \phi$ we have that $\relint \dom f^* \subseteq \dom \partial f^* \subseteq \intr\dom \phi^*$.
Since $\dom( f^* \mathbin{\dot{-}} \lambda \phi^*) = \dom f^*$ we have $\dom( f^* \mathbin{\dot{-}} \lambda \phi^*) \cap \intr \dom \phi^* \neq \emptyset$. Since $(f \infdeconv \lambda \star \phi)^* = (f^* \mathbin{\dot{-}} \lambda \phi^*)^{**} \leq f^* \mathbin{\dot{-}}\lambda \phi^*$ we have $\dom (f \infdeconv\lambda \star \phi)^* \supseteq \dom( f^* \mathbin{\dot{-}}\lambda \phi^*)$. This implies that $\dom (f \infdeconv \lambda \star \phi)^*\cap \intr \dom\phi^*\neq \emptyset$. \ifxfalsetrue \qed \fi \end{proof} Alternatively, our algorithm can be understood in terms of Gauss--Seidel minimization applied to \cref{eq:double_min} for which the same self-duality in the DC-sense is true. Next, we exploit the transfer of smoothness in combination with the self-duality to show linear convergence with respect to $G$ if $g$ is anisotropically strongly convex. Due to a lack of tilt-invariance a primal transfer of strong convexity as in Euclidean forward-backward splitting is in general not possible in the anisotropic case: \begin{lemma}[transfer of smoothness] \label{thm:strongly_convex} Let $g$ be anisotropically strongly convex with constant $\mu$ relative to $\phi_-$. Then $g \infconv \frac{1}{L} \star \phi_-$ is anisotropically strongly convex with constant $\sigma = 1/(L^{-1}+\mu^{-1}) < \mu$ and has the anisotropic descent property with constant $L$. \end{lemma} \begin{proof} Thanks to \cref{thm:conjugate_duality_astrong} anisotropic strong convexity of $g \infconv L^{-1} \star \phi_-$ with constant $\sigma$ is implied by Bregman smoothness of $(g \infconv L^{-1} \star \phi_-)^* = g^* + L^{-1}\phi_-^*$ relative to $\sigma^{-1} \phi_-^*$, i.e., convexity of $ \sigma^{-1} \phi_-^* \mathbin{\dot{-}} (g \infconv L^{-1} \star \phi_-)^* = (\sigma^{-1}-L^{-1}) \phi_-^* \mathbin{\dot{-}} g^*, $ on $\intr \dom \phi_-^*$ and $\dom \partial(g \infconv L^{-1} \star \phi_-)^* \supseteq \intr \dom \phi_-^*$.
Since $g$ is anisotropically strongly convex relative to $\phi_-$ we have $\dom \partial g^* \supseteq \intr \dom\phi_-^*$. In light of \cref{thm:conjugate_duality_astrong}, $\mu^{-1}\phi^*_- \mathbin{\dot{-}} g^*$ is convex on $\intr \dom \phi_-^*$. Thus $(\sigma^{-1}-L^{-1}) \phi_-^* \mathbin{\dot{-}} g^*$ is convex on $\intr \dom \phi_-^*$ if $\sigma^{-1}\geq L^{-1}+\mu^{-1}$, i.e., $\sigma \leq 1/(L^{-1}+\mu^{-1})$. We also have thanks to the constraint qualification and the sum-rule and the inclusion $\dom \partial g^* \supseteq \intr \dom\phi_-^*$ that $\dom \partial(g \infconv L^{-1} \star \phi_-)^* =\dom (\partial g^* + \nabla \phi_-^*) = \dom \partial g^*\cap \dom \nabla \phi_-^* = \dom \partial g^* \supseteq \intr \dom\phi_-^*$. This proves the claimed result. \ifxfalsetrue \qed \fi \end{proof} Thanks to the self-duality of our algorithm in the DC-sense and the previous proposition it can be shown that the method converges linearly with respect to the DC-dual cost $G=(g\infconv \lambda \star \phi_-) + (f \infdeconv \lambda \star \phi)$ evaluated at the forward-steps $y^k$: \begin{proposition} \label{thm:linear_convergence_dual} Let $f$ be convex and $g$ be anisotropically strongly convex with constant $\mu$ relative to $\phi_-$. Let $\{y^k\}_{k \in \mathbb{N}_0}$ be the sequence of forward-steps generated by \cref{alg:aproxgrad} with $\lambda=1/L$. Then $G$ has a unique minimizer $y^\star$ and $\{G(y^k)\}_{k \in \mathbb{N}_0}$ converges linearly, in particular, $$ G(y^{k}) - G(y^\star) \leq \left(1 - \frac\mu{\mu + L} \right)^{k} (G(y^0) - G(y^\star) ). $$ \end{proposition} \begin{proof} By \cref{thm:strongly_convex} $g \infconv \lambda \star \phi_-$ is anisotropically smooth with constant $L$ and anisotropically strongly convex with constant $1/(L^{-1}+\mu^{-1})$.
By \cref{thm:invariance_double_min} $f \infdeconv \lambda \star \phi \in \Gamma_0(\mathbb{R}^n)$ with $\dom(f \infdeconv \lambda \star \phi)^* \cap \intr \dom \phi^* \neq \emptyset$ and therefore we can invoke \cref{thm:strong_implies_pg_dominance} to show that $G$ has a unique minimizer $y^\star$ and $G=(g \infconv \lambda \star \phi_-) + (f \infdeconv \lambda \star \phi)$ satisfies the anisotropic proximal gradient dominance condition relative to $\phi_-$ with constant $1/(L^{-1}+\mu^{-1})$ and parameter $\lambda=1/L$. Thanks to \cref{thm:invariance_double_min} we have for every $k \in \mathbb{N}_0$ that $y^{k+1}=x^{k+1} - \lambda \nabla \phi^*(\nabla f(x^{k+1})) = \aprox[\lambda]{\phi}{f \infdeconv \lambda \star \phi}(x^{k+1})$ and $x^{k+1}=\aprox[\lambda]{\phi_-}{g}(y^k) = y^k - \lambda \nabla \phi_-^*(\nabla (g \infconv \lambda \star \phi_-)(y^k))$. In combination we have $y^{k+1}=\aprox[\lambda]{\phi}{f \infdeconv \lambda \star \phi}(y^k - \lambda \nabla \phi_-^*(\nabla (g \infconv \lambda \star \phi_-)(y^k)))$. This is identical to \cref{alg:aproxgrad} with step-size $\lambda$, reference function $\phi_-$ and initial iterate $y^0$ applied to $G=(g\infconv \lambda \star \phi_-) + (f \infdeconv \lambda \star \phi)$. Thus we can apply \cref{thm:linear_convergence} to the sequence $\{y^k\}_{k \in \mathbb{N}_0}$ to show that $\{G(y^k) \}_{k \in \mathbb{N}_0}$ converges linearly with the claimed constant. \ifxfalsetrue \qed \fi \end{proof} The above result can be translated to primal linear convergence by showing that $G(y^k) = F_\lambda(x^k)$ which relates the dual cost $G$ and the forward-backward envelope $F_\lambda$ to each other. This relation was observed previously in the Euclidean setting \cite{themelis2020envelope}: \begin{proposition}[DC-dual vs. forward-backward envelope] \label{thm:fbe_dc} Let $\{x^k\}_{k \in \mathbb{N}_0}$ be the sequence of backward-steps and $\{y^k\}_{k \in \mathbb{N}_0}$ be the sequence of forward-steps generated by \cref{alg:aproxgrad}.
Then it holds that $$ G(y^k)=(g\infconv \lambda \star \phi_-)(y^k) + (f \infdeconv \lambda \star \phi)(y^k)=F_\lambda(x^k), $$ for all $k \in \mathbb{N}_0$. \end{proposition} \begin{proof} Let $k\in \mathbb{N}_0$. In view of \cref{thm:invariance_double_min} we have $y^k=x^k - \lambda\nabla\phi^*(\nabla f(x^k)) = \argmin_{y \in \mathbb{R}^n} \lambda \star \phi(x^k - y) + (f \infdeconv \lambda \star \phi)(y)$. Since $f$ is anisotropically smooth relative to $\phi$ with constant $L$, by \cref{thm:episcaling_asmoothness}, $f$ is anisotropically smooth relative to $\phi$ with constant $\lambda^{-1}\geq L$ as well. Owing to \cref{thm:phi_convex_asmooth} $-f$ is $\Phi$-convex relative to $\Phi(x,y)=-\lambda \star \phi(x-y)$ and thus by \cref{thm:phi_envelope} $f=-(-f)^{\Phi\Phi}=\inf_{y \in \mathbb{R}^n} \lambda \star \phi(x^k - y) + (f \infdeconv \lambda \star \phi)(y)$ implying that \begin{align} \label{eq:fbe_dc_1} (f \infdeconv \lambda \star \phi)(y^k) + \lambda \star \phi(x^k - y^k)=f(x^k). \end{align} By definition of the forward-backward envelope since $x^k - y^k=\lambda \nabla \phi^*(\nabla f(x^k))$ we have that $F_\lambda(x^k)=(g\infconv \lambda \star \phi_-)(y^k) + f(x^k) - \lambda \star \phi(x^k - y^k)$. In combination with \cref{eq:fbe_dc_1} we obtain that \[ F_\lambda(x^k)= (g\infconv \lambda \star \phi_-)(y^k) + (f \infdeconv \lambda \star \phi)(y^k) = G(y^k) \ifxfalsetrue \qed \else \qedhere \fi \] \end{proof} Now we are ready to state the primal linear convergence: \begin{corollary} \label{thm:linear_convergence_primal} Let $f$ be convex and $g$ be anisotropically strongly convex with constant $\mu$ relative to $\phi_-$. Let $\{x^k\}_{k \in \mathbb{N}_0}$ be the sequence of backward-steps generated by \cref{alg:aproxgrad} with step-size $\lambda=L^{-1}$. Then $\{F(x^{k+1}) \}_{k \in \mathbb{N}_0}$ converges linearly, in particular, $$ F(x^{k+1}) - \inf F \leq \left(1 - \frac\mu{\mu + L} \right)^{k} (F(x^0) - \inf F). 
$$ \end{corollary} \begin{proof} Let $k\in \mathbb{N}_0$. Since $f$ is anisotropically smooth the anisotropic descent inequality holds true, which after adding $g(x^{k+1})$ to both sides amounts to: \begin{align} \label{eq:ineq_fbe_1} F(x^{k+1}) \leq f(x^k) + \lambda \star \phi(x^{k+1} - y^k) - \lambda \star \phi(\lambda\nabla \phi^*(\nabla f(x^k))) + g(x^{k+1}) = F_\lambda(x^k), \end{align} where the last equality follows by definition of $F_\lambda$ and $x^{k+1}$. Define $\xi(x, x^k):=f(x^k) + \lambda \star \phi(x - x^k + \lambda \nabla \phi^*(\nabla f(x^k))) - \lambda \star \phi(\lambda\nabla \phi^*(\nabla f(x^k))) + g(x)$. Then by definition $F_\lambda(x^k) = \inf_{x \in \mathbb{R}^n} \xi(x, x^k)$ and $\xi(x^k, x^k) = F(x^k)$. Therefore we also have \begin{align} \label{eq:ineq_fbe_2} F_\lambda(x^k) \leq F(x^k). \end{align} Invoking \cref{thm:linear_convergence_dual}, $G$ has a unique minimizer $y^\star$ and we have after $k$ iterations: $$ G(y^{k}) - G(y^\star) \leq \left(1 - \frac\mu{\mu + L} \right)^{k} (G(y^0) - G(y^\star) ). $$ By double-min duality \cref{eq:double_min} we have $G(y^\star) = \inf F$. Invoking \cref{thm:fbe_dc} we have that $F_\lambda(x^k)=G(y^k)$ for all $k \in \mathbb{N}_0$. Thanks to the inequalities \cref{eq:ineq_fbe_1} and \cref{eq:ineq_fbe_2} we have $F(x^{k+1}) \leq F_\lambda(x^{k})=G(y^{k})$ and $F(x^0) \geq F_\lambda(x^0)=G(y^{0})$ and thus we obtain: \begin{align*} F(x^{k+1}) - \inf F \leq G(y^{k}) - G(y^\star) &\leq \left(1 - \frac\mu{\mu + L} \right)^{k} (G(y^0) - G(y^\star) ) \\ &\leq \left(1 - \frac\mu{\mu + L} \right)^{k} (F(x^{0}) - \inf F). \ifxfalsetrue \qed \else \qedhere \fi \end{align*} \end{proof} \section{Applications} \label{sec:apps} \subsection{Regularized logistic regression} In this section we consider regularized logistic regression.
Here, one is interested in the minimization of the following cost function: \begin{align} \min_{x \in \mathbb{R}^n} \frac{1}{m} \sum_{i=1}^m \log(1+\exp( -b_i \langle a_i, x \rangle)) + g(x), \end{align} for $m$ input-output training pairs $(a_i, b_i) \in [-1, 1]^n \times \{-1, 1\}$, where $g$ is a possibly nonsmooth regularizer. Thanks to \cref{ex:logistic}, $f(x)=\frac{1}{m} \sum_{i=1}^m \log(1+\exp( -b_i \langle a_i, x\rangle))$ is anisotropically smooth relative to the symmetrized logistic loss $\phi(x)=\sum_{j=1}^n h(x_j)$ with $h(t)=2\log(1+ \exp(t)) - t$. The conjugate amounts to $h^*(t)=(t + 1)\log((t + 1)/2) + (1-t)\log((1-t)/2)$ with $\dom h^*=[-1, 1]$, where we adopt the convention $t\log t + (1-t)\log(1-t)=0$ for $t\in\{0,1\}$. Note that $\phi$ is a softmax approximation to the one-norm. We discuss the choices of regularizer $g=\nu\|\cdot\|_1$ and $g=\frac{\nu}{2}\|\cdot\|^2$. For both choices it holds that $\dom g^*\cap \intr \dom \phi_-^* \neq \emptyset$ and thus the algorithm can be applied. Thanks to \cref{ex:a_strongly_convex_2} and \cref{thm:linear_convergence_primal}, the algorithm attains linear convergence with respect to the DC-dual cost for $g=\frac{\nu}{2}\|\cdot\|^2$. For $g=\nu\|\cdot\|_1$ a closed-form solution to the anisotropic proximal mapping is obtained via the Bregman--Moreau decomposition \cref{thm:moreau_decomposition:decomp}. The anisotropic proximal mapping can then be computed in terms of the Bregman proximal mapping of $g^*=\delta_{[-\nu,\nu]^n}$, whose solution involves a simple clipping operation. The anisotropic proximal mapping of $g$ thus amounts to the following soft-thresholding operation: $$ [\aprox[\lambda]{\phi_-}{g}(y)]_i = \sign(y_i)\max\{|y_i|- \rho, 0\}, $$ for $\rho = \lambda (h^*)'(\nu)$. For $g=\frac{\nu}{2}\|\cdot\|^2$, by a change of variable, the anisotropic proximal mapping of $g$ can be written as the Euclidean proximal mapping of the logistic loss $x \mapsto \log(1+ \exp(x))$.
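The soft-thresholding step above is straightforward to implement. The following minimal NumPy sketch (function name is ours, not from the paper) uses the closed form $(h^*)'(\nu)=\log\frac{1+\nu}{1-\nu}$ obtained by differentiating the conjugate $h^*$, and assumes $\nu \in (0,1)$ so that the threshold is finite:

```python
import numpy as np

def aniso_soft_threshold(y, nu, lam):
    """Anisotropic proximal mapping of g = nu*||.||_1 relative to phi_-.

    Implements [aprox_g(y)]_i = sign(y_i) * max(|y_i| - rho, 0) with
    rho = lam * (h^*)'(nu) = lam * log((1 + nu) / (1 - nu)), for nu in (0, 1).
    """
    rho = lam * np.log((1.0 + nu) / (1.0 - nu))
    return np.sign(y) * np.maximum(np.abs(y) - rho, 0.0)

y = np.array([2.0, -0.1, 0.5])
x = aniso_soft_threshold(y, nu=0.5, lam=1.0)  # threshold rho = log(3)
```

For $\lambda \to 0$ the threshold vanishes and the mapping reduces to the identity, as expected of a proximal operator.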
Thanks to \cite[Proposition 2]{briceno2019random} its solution can be obtained in closed form via the generalized Lambert $W$ function \cite{mezHo2017generalization,maignan2016fleshing}. Nonconvex choices of $g$ are also possible. \subsection{Exponential regularization for linear programs} In this section we consider exponential smoothing for linear programming. Given a \emph{linear program} (LP) with linear inequality constraints $\langle a_i, x \rangle - b_i \leq 0$ we consider an unconstrained approximation to the original LP which takes the form \begin{align} \label{eq:LP_exp} \min_{x \in \mathbb{R}^n}~ \langle c, x \rangle + \sum_{i=1}^m \sigma \star \exp(\langle a_i, x \rangle - b_i), \end{align} where $c,a_i \in \mathbb{R}^n$, $b_i \in \mathbb{R}$, and $\sigma >0$ is a regularization parameter. The functions $\exp(\langle a_i, x \rangle - b_i)$ can be interpreted as penalty functions for the constraints. Exponential smoothing is dual to entropic regularization, which is a common technique for solving large-scale optimal transport problems \cite{cuturi2013sinkhorn}. Alternatively, the exponential loss function appears in boosting for machine learning \cite{freund1997decision,schapire1998improved}. To apply our algorithm we rewrite the cost as $f+g$ for functions $g(x):=\langle c, x \rangle$ and $f(x):=\sum_{i=1}^m \sigma \star \exp(\langle a_i, x \rangle - b_i)$. Assuming that $a_i \in \mathbb{R}_{+}^n$ with $\sum_{i=1}^m a_i \in \mathbb{R}_{++}^n$, thanks to \cref{ex:exp_smoothness}, $f$ has the anisotropic descent property relative to $\phi=\Exp$ with constant $L = \max_{1\leq i \leq m} \|a_i\|_1/\sigma$. Further assuming that $-c \in \mathbb{R}_{++}^n =\intr \dom \phi^*$ we also have that $\emptyset \neq \dom g^* \cap \intr \dom \phi^*_-$ and thus our algorithm can be applied.
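To make the scheme concrete, here is a small numerical sketch (our own illustration on toy data, with the sign convention $-c \in \mathbb{R}_{++}^n$): the forward step applies the nonlinear preconditioner $\nabla\phi^* = \log$ componentwise to the gradient of the penalty term, and the backward step for the linear part $g(x) = \langle c, x\rangle$ reduces to a logarithmic shift:

```python
import numpy as np

# Tiny instance of min <c, x> + sum_i sigma * exp((<a_i, x> - b_i) / sigma)
# (illustrative data, not from the paper).
A = np.array([[1.0, 0.0], [0.0, 1.0]])  # rows a_i >= 0, sum_i a_i > 0
b = np.zeros(2)
c = np.array([-1.0, -1.0])              # -c > 0 componentwise
sigma = 1.0

def F(x):
    return c @ x + sigma * np.sum(np.exp((A @ x - b) / sigma))

def step(x, lam):
    grad = A.T @ np.exp((A @ x - b) / sigma)  # gradient of the penalty term
    y = x - lam * np.log(grad)                # forward: preconditioner log = grad(phi*)
    return y + lam * np.log(-c)               # backward: prox of the linear part

L = np.max(np.abs(A).sum(axis=1)) / sigma     # L = max_i ||a_i||_1 / sigma
x = np.array([1.0, -1.0])
for _ in range(20):
    x = step(x, 1.0 / L)
```

On this separable instance the iteration lands on the exact minimizer $x=0$ after a single step; in general one only gets the descent guaranteed by the anisotropic descent inequality.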
The anisotropic proximal mapping $\aprox[\lambda]{\phi_-}{g}$ of $g$ is separable, where each individual proximal mapping amounts to $x_j= \argmin_{t \in \mathbb{R}} \lambda \star \exp(t-y_j) + c_j t$, which by first-order stationarity yields $x_j = y_j + \lambda \log(-c_j)$, using that $-c \in \mathbb{R}_{++}^n$. Exponential regularization for linear programs was also considered in \cite{maddison2021dual} using a different reference function. However, this imposes restrictions on the rank of the constraint matrix $A$ and the corresponding feasible set. Using $\Exp$ as the reference function we are able to recover the Sinkhorn algorithm for regularized optimal transport \cite{cuturi2013sinkhorn} and a classical parallel update algorithm for AdaBoost \cite{collins2002logistic} as special cases of the proposed anisotropic proximal gradient framework. This complements the existing primal interpretation in terms of entropic subspace projections \cite{benamou2015iterative,collins2002logistic}; see also \cite{della2001duality}. For marginals $r \in \mathbb{R}_{++}^n$ and $s \in \mathbb{R}_{++}^m$ and transportation cost $C \in \mathbb{R}^{m \times n}$ the regularized dual OT problem takes the form \begin{align} \label{eq:OT_exp} \min_{\alpha \in\mathbb{R}^n, \beta \in \mathbb{R}^m} \langle -r, \alpha \rangle + \langle -s, \beta \rangle + \sum_{i=1}^m \sum_{j=1}^n \sigma \star \exp(\langle e_j,\alpha \rangle + \langle e_i, \beta \rangle - C_{ij}). \end{align} In order to recover the Sinkhorn algorithm we apply our method in a Gauss--Seidel fashion, where $\alpha^{k+1}$ and $\beta^{k+1}$ are obtained by anisotropic proximal gradient steps applied to the objective for fixed $\beta^k$ and $\alpha^{k+1}$, respectively. Then $f(\alpha, \beta^k)$ and $f(\alpha^{k+1}, \beta)$ both have the anisotropic descent property relative to $\phi=\Exp$ with constants $\|e_i\|_1/\sigma=\|e_j\|_1/\sigma=1/\sigma$.
In fact, for fixed $\alpha$ or $\beta$ the objective is separable, and anisotropic proximal gradient steps with step-size $1/\sigma$ correspond to exact minimization steps, in the same way that Euclidean coordinate-wise gradient descent on a quadratic objective corresponds to exact coordinate descent for an appropriately chosen step-size. The exact dual coordinate descent interpretation of Sinkhorn for OT is well known in the literature. Alternatively, the anisotropic proximal gradient method can be applied in a joint fashion. Then the smoothness constant of $f$ becomes $\|e_i\|_1/\sigma+\|e_j\|_1/\sigma=2/\sigma$. A drawback of the exponential reference function is the fact that the corresponding preconditioner $\nabla \phi^*$ is only defined on the positive orthant. This imposes a ``one-sided'' structure on the optimization problem in the sense that the coupling vectors $a_i \in \mathbb{R}_+^n$ are constrained to the nonnegative orthant; see \cref{ex:exp_smoothness}. To handle ``two-sided'' optimization problems, the following lifting can be applied as a workaround: we introduce an additional variable $x_-$ along with the linear sharing constraint $x_- = -x$. We define \begin{align} a_{ij}^+ := \begin{cases} a_{ij} & \text{if $a_{ij} \geq 0$} \\ 0 & \text{otherwise,} \end{cases} \qquad a_{ij}^- := \begin{cases} -a_{ij} & \text{if $a_{ij} \leq 0$} \\ 0 & \text{otherwise,} \end{cases} \end{align} so that $a_i = a_i^+ - a_i^-$ with $a_i^+, a_i^- \geq 0$ and $\langle a_i, x\rangle = \langle a_i^+, x\rangle + \langle a_i^-, x_-\rangle$. Analogously, we define $c^+, c^-$. Then we rewrite the problem equivalently as \begin{align} \min_{x, x_- \in \mathbb{R}^n} \sum_{i=1}^m \sigma \star \exp(\langle a_i^+, x \rangle + \langle a_i^-, x_- \rangle - b_i) + \langle x, c^+\rangle + \langle x_-, c^-\rangle +\delta_{C}(x,x_-), \end{align} where $f(x, x_-)=\sum_{i=1}^m \sigma \star \exp(\langle a_i^+, x \rangle + \langle a_i^-, x_- \rangle - b_i)+ \langle x, c^+\rangle + \langle x_-, c^-\rangle$ and $g(x, x_-) :=\delta_{C}(x,x_-)$ for $C=\{(x, x_-) \in (\mathbb{R}^n)^2 : x_- = -x \}$.
We identify the vectors $a_i \in \mathbb{R}^n$ and $a_i^+, a_i^- \in \mathbb{R}_+^n$ with the rows of the matrices $A$, $A_+$ and $A_-$, respectively. Thanks to \cref{thm:moreau_decomposition:decomp} the $x$-update can be expressed in terms of the Bregman projection onto the consensus subspace $C^*=\{(x,y) \in \mathbb{R}^n \times \mathbb{R}^n \mid x = y\}$ which, thanks to \cite[Example 3.16(i)]{BC03}, admits a simple closed form solution $(x,x) = \bprox[\lambda]{\phi^*}{\delta_{C^*}}(y_1, y_2)$ in terms of the geometric mean: $x = \nabla \phi((\nabla \phi^*(y_1) + \nabla \phi^*(y_2)) / 2)$. Eliminating $x_-^k$ and $y^k,y_-^k$ from the algorithm, the $x$-update can be written compactly as follows: \begin{align} \label{eq:parallel_update} x^{k+1} = x^k - \frac{\lambda}{2}(\nabla \phi^*(A_+^\top \nabla (\sigma \star \Exp)(Ax^k - b) + c^+) - \nabla \phi^*(A_-^\top \nabla (\sigma \star \Exp)(Ax^k - b) + c^-)). \end{align} Without loss of generality we may assume that $\sum_{i=1}^m a_i^+ \in \mathbb{R}_{++}^n$ and $\sum_{i=1}^m a_i^- \in \mathbb{R}_{++}^n$ as well as $c^+, c^- \in \mathbb{R}_{++}^n$. Then $\sum_{i=1}^m \sigma \star \exp(\langle a_i^+, x \rangle + \langle a_i^-, x_- \rangle - b_i)$ complies with the requirements in \cref{ex:exp_smoothness} for $\phi=\Exp$ and $L=\max_{1 \leq i \leq m} \|a_i\|_1/\sigma$. Thanks to \cref{thm:phi_convex_asmooth} $\langle x, c^+\rangle + \langle x_-, c^-\rangle$ is anisotropically smooth relative to $\Exp$ for any $L>0$. Invoking \cref{thm:closed_addition_exp} we deduce that $f(x, x_-)$ remains anisotropically smooth relative to $\Exp$ with constant $L$. For the special case that $b=c=0$, $\sigma = 1$ and $A \in [-1,1]^{m \times n}$ with $\|a_i\|_1 \leq 1$, the smoothness constant and step-size can be chosen to be $\lambda=L=1$ and \cref{eq:parallel_update} specializes to the parallel update optimization algorithm \cite{collins2002logistic} for AdaBoost \cite{freund1997decision,schapire1998improved} with $q_0=\mathds{1}$.
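As a concrete reference point, the Sinkhorn iteration that the Gauss--Seidel variant recovers can be sketched in its standard matrix-scaling form (our own notation; the scaling vectors correspond to exponentials of the dual potentials $\alpha, \beta$ up to sign and indexing conventions):

```python
import numpy as np

def sinkhorn(C, s, r, sigma, iters=500):
    """Sinkhorn matrix scaling for entropically regularized OT.

    C: (m, n) cost matrix, s: (m,) row marginal, r: (n,) column marginal.
    Returns the transport plan P = diag(u) K diag(v) with K = exp(-C/sigma).
    """
    K = np.exp(-C / sigma)
    u = np.ones_like(s)
    v = np.ones_like(r)
    for _ in range(iters):
        u = s / (K @ v)      # exact minimization in the row potentials
        v = r / (K.T @ u)    # exact minimization in the column potentials
    return u[:, None] * K * v[None, :]

C = np.array([[0.0, 1.0], [1.0, 0.0]])
P = sinkhorn(C, s=np.array([0.5, 0.5]), r=np.array([0.5, 0.5]), sigma=0.5)
```

Each `u`-update (resp. `v`-update) is the exact minimization in one block discussed above, i.e. an anisotropic proximal gradient step with step-size $1/\sigma$.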
\section{Conclusion} In this paper we have considered dual space nonlinear preconditioning for forward-backward splitting. The algorithm is an extension of dual space preconditioning for gradient descent \cite{maddison2021dual} to the nonconvex and composite setting. In our case, the method is derived via an anisotropic descent inequality relative to a reference function whose inverse gradient takes the role of a nonlinear preconditioner in the proximal gradient scheme. Building upon recent developments for the Bregman proximal average \cite{wang2021bregman} a calculus for anisotropic smoothness is derived under joint convexity of the dual Bregman distance and practical examples are provided. We prove subsequential convergence of the method using a regularized gap function which vanishes at rate $\mathcal{O}(1/K)$ and we analyze the algorithm's linear convergence under an anisotropic generalization of the proximal PL-inequality \cite{karimi2016linear}. The method generalizes existing classical algorithms for optimal transport and boosting. This provides a dual view onto the framework of entropic subspace projections that is typically used to derive these algorithms. We also discuss examples which go beyond these existing methods.
\newcommand{\secct}[1]{\section{#1}\setcounter{equation}{0}} \newtheorem{hypothesis}{Hypothesis} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{plain} \begin{document} \bibliographystyle{plain} \setcounter{page}{0} \thispagestyle{empty} \title{Spectral Renormalization Group and Local Decay in the Standard Model of the Non-relativistic Quantum Electrodynamics} \author{J\"{u}rg Fr\"{o}hlich \thanks{Inst.~f.~Theoretische Physik; ETH Z\"{u}rich, Switzerland; also at IHES, Bures-sur-Yvette, France} \and Marcel Griesemer \thanks{Dept.~of Math.; Univ.~of Stuttgart; D-70569 Stuttgart, Germany} \and Israel Michael Sigal \thanks{School of Mathematics, IAS, Princeton, N.J., U.S.A.; permanent address: Dept.~of Math.; Univ. of Toronto; Toronto; Canada; Supported by NSERC Grant No. NA7901} \\ } \date{\DATUM} \maketitle \begin{abstract} We prove the limiting absorption principle for the standard model of non-relativistic quantum electrodynamics (QED) and for the Nelson model describing interactions of electrons with phonons. To this end we use the spectral renormalization group technique on the continuous spectrum in conjunction with the Mourre theory. \end{abstract} \thispagestyle{empty} \setcounter{page}{1} \secct{Introduction} \label{sec-I} The mathematical framework of the theory of non-relativistic matter interacting with the quantized electro-magnetic field (non-relativistic quantum electrodynamics) is well established. It is given in terms of the standard quantum Hamiltonian \begin{equation}\label{eq1} H^{SM}_g=\sum\limits_{j=1}^n{1\over 2m_j} (i\nabla_{x_j}+gA(x_j))^2+V(x)+H_f \end{equation} acting on the Hilbert space $\mathcal{H}=\mathcal{H}_{p}\otimes\mathcal{H}_{f}$, the tensor product of the state spaces of the particle system and the quantized electromagnetic field.
Here $SM$ stands for 'standard model'. The notation above and the units we use are explained below. This model describes, in particular, the phenomena of emission and absorption of radiation by systems of matter, such as atoms and molecules, as well as other processes of interaction of quantum radiation with matter. It has been extensively studied in the last decade; see the books \cite{Spohn, GustafsonSigal} and reviews \cite{ Arai1999, Hirokawa1, Hirokawa2, Hiroshima5, Hiroshima7} and references therein for a partial list of contributions. For reasonable potentials $V(x)$ the operator $H^{SM}_g$ is self-adjoint and its spectral and resonance structure - and therefore its dynamics for long but finite time-intervals - is well understood (see e.g. \cite{AFFS,Faupin2007, GriesemerLiebLoss, HHS, HaslerHerbst2007, HaslerHerbstHuber2007, HirokawaHiroshimaSpohn, Sigal} and references therein for recent results). However, we still know little about its asymptotic dynamics. In particular, a full scattering theory for this operator does not, at present, exist (see, however, \cite{FroehlichGriesemerSchlein1, FroehlichGriesemerSchlein2, FroehlichGriesemerSchlein3, ChenFroehlichPizzo1, ChenFroehlichPizzo2}). A key notion connected to the asymptotic dynamics is that of local decay. This notion also lies at the foundation of the construction of modern quantum scattering theory. It states that the system under consideration is either in a bound state or, as time goes to infinity, it breaks apart, i.e. the probability of occupying any bounded region of physical space tends to zero and, consequently, the average distance between the particles goes to infinity. In our case, this means that the photons leave the part of space occupied by the particle system. Until recently, local decay for the Hamiltonian $H^{SM}_g$ had been proven only for energies away from $O(g^2)$-neighborhoods of the ground state energy, $e_g$, and the ionization energy.
However, starting from any energy, the system eventually winds up in a neighborhood of the ground state energy. Indeed, while the total energy is conserved, the photons carry away the energy from regions of the space where matter is concentrated. Hence understanding the dynamics in this energy interval is an important matter. Recently, the local decay was proven for states in the spectral interval $(\e_g, \e_g + \e^{(p)}_{gap}/12)$ for the Hamiltonian $H^{SM}_g$ \cite{FroehlichGriesemerSigal2008}. Here $\e^{(p)}_{gap}:=\e^{(p)}_1 -\e^{(p)}_0$, where $\e^{(p)}_0$ and $\e^{(p)}_1$ are the ground state and the first excited state energies of the particle system. In this paper we give another prove of this fact. However, the main goal of this paper is to develop a new approach to time-dependent problems in the non-relativistic QED which combines the spectral renormalization group (RG), developed in \cite{BachFroehlichSigal1998a,BachFroehlichSigal1998b,BachChenFroehlichSigal2003} (see also \cite{FroehlichGriesemerSigal2009}), with more traditional spectral techniques such as Mourre estimate. The key here is the result that the stronger property of the limiting absorption principle (LAP) propagates along the RG flow. Now, we explain the units and notation employed in (\ref{eq1}). We use the units in which the Planck constant divided by $2\pi$, speed of light and the electron mass are equal to $1(\ \hbar=1$, $c=1$ and $m=1$). In these units the electron charge is equal to $-\sqrt{\alpha}\ (e=-\sqrt{\alpha})$, where $\alpha =\frac{e^2}{4\pi \hbar c}\approx {1\over 137}$ is the fine-structure constant, the distance, time and energy are measured in the units of $\hbar/mc =3.86 \cdot 10^{-11}cm,\ \hbar/mc^2 =1.29 \cdot 10^{-21} sec$ and $mc^2 = 0.511 MeV$, respectively (natural units). We show below that one can set $g:= \alpha^{3/2}$. 
Our particle system consists of $n$ particles of masses $m_j$ (the ratio of the mass of the $j$-th particle to the mass of an electron) and positions $x_j$, where $j=1, ..., n$. We write $x=(x_1,\dots,x_n)$. The total potential of the particle system is denoted by $V(x)$. The Hamiltonian operator of the particle system alone is given by \begin{equation} \label{Hp} H_p:=-\sum\limits_{j=1}^n {1\over 2m_j} \Delta_{x_j}+V(x), \end{equation} where $\Delta_{x_j}$ is the Laplacian in the variable $x_j$. This operator acts on the Hilbert space of the particle system, denoted by $\mathcal{H}_{p}$, which is either $L^2(\mathbb{R}^{3n})$ or a subspace of this space determined by a symmetry group of the particle system. The quantized electromagnetic field is described by the quantized (in the Coulomb gauge) vector potential \begin{equation}\label{eq3} A(y)=\int(e^{iky}a(k)+e^{-iky}a^*(k)){\chi(k)d^3k\over (2\pi)^3 \sqrt{2|k|}}, \end{equation} where $\chi$ is an ultraviolet cut-off: $\chi(k)=1$ in a neighborhood of $k=0$ and it vanishes sufficiently fast at infinity, and its dynamics, by the quantum Hamiltonian \begin{equation} \label{Hf} H_f \ = \ \int d^3 k \; a^*(k)\; \omega(k) \; a(k) \comma \end{equation} both acting on the Fock space $\mathcal{H}_{f}\equiv \mathcal{F}$. Above, $\omega(k) \ = \ |k|$ is the dispersion law connecting the energy, $\omega(k)$, of a field quantum with its wave vector $k$; $a^*(k)$ and $a(k)$ denote the creation and annihilation operators on $\mathcal{F}$, and the right side can be understood as a weak integral. The families $a^*(k)$ and $a(k)$ are operator-valued generalized, transverse vector fields: $$a^\#(k):= \sum_{\lambda \in \{0, 1\}} e_{\lambda}(k) a^\#_{\lambda}(k),$$ where $e_{\lambda}(k)$ are polarization vectors, i.e. orthonormal vectors in $\mathbb{R}^3$ satisfying $k \cdot e_{\lambda}(k) =0$, and $a^\#_{\lambda}(k)$ are scalar creation and annihilation operators satisfying standard commutation relations.
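For the reader's convenience, the standard canonical commutation relations referred to above are

```latex
\begin{align*}
  [a_{\lambda}(k),\, a^*_{\lambda'}(k')] &= \delta_{\lambda\lambda'}\,\delta^3(k-k'), &
  [a_{\lambda}(k),\, a_{\lambda'}(k')] &= [a^*_{\lambda}(k),\, a^*_{\lambda'}(k')] = 0 ,
\end{align*}
```

understood in the sense of operator-valued distributions on $\mathcal{F}$.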
See the Supplement for a brief review of the definitions of the Fock space, the creation and annihilation operators acting on it, and the definition of the operator $H_f$. In the natural units the Hamiltonian operator is of the form \eqref{eq1} but with $g=e$ and with $V(x)$ being the total Coulomb potential of the particle system. To obtain expression (\ref{eq1}) we rescale this original Hamiltonian appropriately (see \cite{BachFroehlichSigal1999}). Then we relax the restriction on $V(x)$ and consider standard generalized $n$-body potentials (see e.g. \cite{HunzikerSigal}), $V(x) = \sum_i W_i(\pi_i x)$, where $\pi_i$ are linear maps from $\mathbb{R}^{3n}$ to $\mathbb{R}^{m_i},\ m_i \le 3n $ and $ W_i$ are Kato-Rellich potentials (i.e. $W_i(\pi_i x) \in L^{p_i}(\mathbb{R}^{m_i}) + (L^\infty (\mathbb{R}^{3n}))_{\varepsilon}$ with $p_i=2$ for $m_i \le 3,\ p_i>2 $ for $m_i =4$ and $p_i \ge m_i/2$ for $m_i > 4$, see \cite{RSIV, HislopSigal}). In order not to deal with the problem of center-of-mass motion, which is not essential in the present context, we assume that either some of the particles (nuclei) are infinitely heavy or the system is placed in an external potential field. One verifies that $H_f$ defines a positive, self-adjoint operator on $\mathcal{F}$ with purely absolutely continuous spectrum, except for a simple eigenvalue $0$ corresponding to the eigenvector $\Om$ (the vacuum vector; see the Supplement). Thus for $g=0$ the low energy spectrum of the operator $H^{SM}_0$ consists of the branches $[\e^{(p)}_i, \infty)$, where $\e^{(p)}_i$ are the isolated eigenvalues of $H_{p}$, and of the eigenvalues $\e^{(p)}_i$ sitting at the tips of the branches ('thresholds') of the continuous spectrum. The absence of gaps between the eigenvalues and thresholds is a consequence of the fact that the photons (and the phonons) are massless. This leads to hard and subtle problems in perturbation theory, known collectively as the infrared problem.
As was mentioned above, in this paper we prove the local decay property for the Hamiltonian $H^{SM}_g$. In fact, we prove a slightly stronger property - the limiting absorption principle - which states that the resolvent sandwiched between appropriate weights has a H\"older continuous limit on the spectrum. To be specific, let $B$ denote the self-adjoint generator of dilatations on the Fock space $\mathcal{F}$. It can be expressed in terms of creation and annihilation operators as \begin{equation} \label{eq-I.24} B \ = \ \frac{i}{2}\; \int d^3k \; a^*(k) \: \big\{ k \cdot \nabla_k + \nabla_k \cdot k \big\} \: a(k) \period \end{equation} We further extend it to the Hilbert space $\mathcal{H}=\mathcal{H}_{{p}}\otimes\mathcal{F}$. Let $\langle B\rangle := (\mathbf{1} +B^2)^{1/2}$. Our goal is to prove the following\\ \begin{theorem} \label{main-thm} Let $g\ll \e^{(p)}_{gap}$ and let $\Delta \subset (\e_g, \e_g +\frac{1}{12}\ \e^{(p)}_{gap}) $, where $\e_g$ is the ground state energy of $H$. Then \begin{equation} \langle B\rangle^{-\theta}(H^{SM}_g-\lambda \pm i 0)^{-1}\langle B \rangle^{-\theta}\in C^\nu(\Delta) \label{eqn:Claim1} \end{equation} for $\theta>1/2$ and $0<\nu<\theta-\frac{1}{2}$. \end{theorem} The above theorem has the following consequence. (In what follows functions of self-adjoint operators are defined by the functional calculus.)\\ \begin{corollary} For $\Delta$ as above, for any function $f(\lambda)$ with $\mathrm{supp} f \subseteq\Delta$ and for $\nu< \theta-\frac{1}{2}$, we have \begin{equation} \|\langle B\rangle^{-\theta}e^{-i H t}f(H)\langle B\rangle^{-\theta}\|\le C t^{-\nu}. \end{equation} \end{corollary} The statement follows from \eqref{eqn:Claim1} and the formula \begin{align*} \langle B\rangle^{-\theta}e^{-i H t}f(H)\langle B\rangle^{-\theta} = \int_{-\infty}^\infty d\lambda\, f(\lambda)\, e^{-i\lambda t}\, \mathrm{Im}\, \langle B\rangle^{-\theta}(H-\lambda-i0)^{-1}\langle B\rangle^{-\theta} \end{align*} (see e.g.
\cite{RSIV} and a detailed discussion in \cite{FroehlichGriesemerSigal2008}). \begin{remark} \label{rem-1} Let $\Sigma_{p}:=\inf\sigma(H_{p})$. We expect that the method of this paper can be extended to the energy interval $\spec(H)\setminus \sigma_{\rm pp}(H)$ for the Nelson model and $\big(\spec(H)\setminus \sigma_{\rm pp}(H)\big) \bigcap (-\infty,\Sigma_{p}-{\varepsilon}]$ for some ${\varepsilon}>0$, for QED. \end{remark} Previously, the limiting absorption principle and local decay estimates were proven in \cite {BachFroehlichSigal1998b, BachFroehlichSigalSoffer1999} for the standard model of non-relativistic QED and for the Nelson model away from neighborhoods of the ground state energy and the ionization threshold. In \cite {GergescuGerardMoeller1,GergescuGerardMoeller2} they were proven for the Nelson model near the ground state energy and for all values of the coupling constant, but under rather stringent assumptions, including one on the infra-red behavior of the coupling functions (see also \cite{BachFroehlichSigal1998a,BachFroehlichSigal1999,HuebnerSpohn2,Skibsted} for earlier works). Finally, as was mentioned above, the limiting absorption principle was proven in a neighbourhood of the ground state energy in \cite{FroehlichGriesemerSigal2008}. Our approach consists of three steps. First, following \cite{Sigal}, we use a generalized Pauli-Fierz transform to map the QED Hamiltonian Eq.~(\ref{eq1}) into a new Hamiltonian $H^{PF}_g$ whose interaction has, in a certain sense, better infra-red behaviour. To this new Hamiltonian we apply sufficiently many iterations of the renormalization map, obtaining at the end a rather simple Hamiltonian which we investigate further with the help of the Mourre estimate. This proves the LAP for the latter Hamiltonian. Since, as we prove in this paper, the renormalization map preserves the LAP property, we conclude from this that the Hamiltonian $H^{PF}_g$ enjoys the LAP property as well.
The size of the interval and the number of iterations of the RG map depend on the distance of the spectral point of interest to the ground state energy. \DETAILS{Note that in this approach a specific form of the interaction and the coupling functions becomes irrelevant. With little more work one can establish an explicit restriction on the coupling constant $g$ in terms of the particle energy difference $\e^{(p)}_{gap}$ and appropriate norms of the coupling functions.} In this paper we also consider the Nelson model. In that model the total system consisting of the particle system coupled to the quantized field is described by the Hamiltonian \begin{equation} \label{Hn} H_g^{N} \ = \ H^{N}_0 \, + \, I_{g}^{N} \comma \end{equation} acting on the state space $\mathcal{H}=\mathcal{H}_{p} \, \otimes \, \mathcal{F}$, where now $\mathcal{F}$ is the Fock space for phonons, i.e.\ spinless, massless bosons. Here $g$ is a positive parameter - a coupling constant - which we assume to be small, and \begin{equation} \label{H_0} H^{N}_0 \ = \ H^{N}_p \, + \, H_f \comma \end{equation} where $H^{N}_p=H_p$ and $H_f$ are given in \eqref{Hp} and \eqref{Hf}, respectively, but, in the last case, with scalar creation and annihilation operators, $a$ and $a^*$, and the interaction operator is \begin{equation} \label{I.6} I_g \ = \ g \, \int \frac{ d^3 k \: \kappa(k)}{ |k|^{1/2} } \: \big\{ e^{-i k x} \, a^*(k) \: + \: e^{i k x} \, a(k) \big\} \end{equation} (we can also treat terms quadratic in $a$ and $a^*$, but for the sake of exposition we leave such terms out). Here, $\kappa=\kappa(k)$ is a real function with the property that \begin{equation} \label{I.7} \| \kappa \|_\mu \ := \ \Big( \int \frac{d^3 k}{|k|^{3+2\mu} } \: |\kappa(k)|^2 \Big)^{1/2} \ < \ \infty \comma \end{equation} for some (arbitrarily small but) strictly positive $\mu$. In the following, we fix $\kappa$ with $\|\kappa\|_{\mu} = 1$ and vary $g$.
It is easy to see that the operator $I_g$ is symmetric and bounded relative to $H^{N}_0$, in the sense of Kato \cite{RSIV, HislopSigal}, with an arbitrarily small constant. Thus $H_g^{N}$ is self-adjoint on the domain of $H^{N}_0$ for arbitrary $g$. Of course, for the Nelson model we can take an arbitrary dimension $d\geq 1$ rather than dimension $3$. Our approach can handle interactions quadratic in the creation and annihilation operators, $a$ and $a^*$, as is the case for the operator $H^{SM}_g$. All the results mentioned above for the standard model Hamiltonian $H^{SM}_g$ hold also for the Nelson model Hamiltonian $H^{N}_g$ with $\mu >0$. In order not to complicate matters unnecessarily, we will think of the creation and annihilation operators used below as scalar operators rather than operator-valued transverse vector functions. We explain at the end of the Appendix how to reinterpret the corresponding expressions for the vector - photon - case. \secct{Generalized Pauli-Fierz Transform} \label{sec-II} We describe the generalized Pauli-Fierz transform mentioned in the introduction (see \cite{Sigal}). We define the following Hamiltonian \begin{equation} H_g^{PF}: = e^{-ig F(x)} H^{SM}_g e^{ig F(x)}, \end{equation} \noindent which we call the generalized Pauli-Fierz Hamiltonian.
In order to keep notation simple, we present this transformation in the one-particle ($n=1$) case: \begin{equation}\label{eqF} F(x)=\sum_\lambda \int(\bar{f}_{x,\lambda}(k)a_{\lambda}(k)+f_{x,\lambda}(k)a_{\lambda}^*(k))\frac{d^3k}{\sqrt{|k|}}, \end{equation} with the coupling function $f_{x,\lambda}(k)$ chosen as $$f_{x,\lambda}(k):= \frac{e^{-ikx}\chi(k)}{(2\pi)^3\sqrt{2|k|}}\varphi(|k|^{\frac{1}{2}}{\varepsilon}_\lambda(k) \cdot x).$$ The function $\varphi$ is assumed to be $C^2$ and bounded, with a bounded second derivative, and to satisfy $\varphi'(0)=1$. We compute \begin{equation} \label{H^PFa} H_g^{PF} = \frac{1}{2} (p - g A_1(x))^2 + V_g(x) + H_f + gG(x)\\ \end{equation} \noindent where $A_1(x) = A(x) - \nabla F(x),\ V_g(x):= V(x) + 2 g^2\sum_\lambda \int \omega |f_{x,\lambda}(k)|^2d^3k$ and \begin{equation} \label{I.14} G(x):=- i\sum_\lambda \int\omega(\bar{f}_{x,\lambda}(k)a_{\lambda}(k)-f_{x,\lambda}(k)a_{\lambda}^*(k))\frac{d^3k}{\sqrt{|k|}}. \end{equation} (The terms $gG$ and $V_g - V$ come from the commutator expansion $e^{-ig F(x)} H_f e^{ig F(x)} = H_f - i g [F,H_f] - \frac{g^2}{2} [F, [F, H_f]]$.)
Observe that the operator family $A_1(x)$ is of the form \begin{equation}\label{15} A_1(x)=\sum_\lambda \int(e^{ikx}a_{\lambda}(k)+e^{-ikx}a_{\lambda}^*(k)){\chi_{\lambda, x}(k)d^3k\over (2\pi)^3 \sqrt{2|k|}}, \end{equation} where the coupling function $\chi_{\lambda, x}(k)$ is defined as follows $$\chi_{\lambda, x}(k):= {\varepsilon}_{\lambda}(k)e^{-ikx}\chi(k)- \nabla_x f_{x,\lambda}(k).$$ It satisfies the estimates \begin{equation}\label{Acoupling} |\chi_{\lambda, x}(k)| \le \mathrm{const} \min (1, \sqrt{|k|}\langle x\rangle), \end{equation} with $\langle x\rangle := (1 +|x|^2)^{1/2}$, and \begin{equation}\label{chi-estim2} \int \frac{d^3 k}{|k| } \: |\chi_{\lambda, x}(k)|^{2} \ < \ \infty .\end{equation} Using the fact that the operators $A_1$ and $G$ have much better infrared behavior than the original vector potential $A$, we can apply our approach and prove the limiting absorption principle for $H_g^{PF}$ and $B$: \begin{equation} \label{lap1} \langle B \rangle^{-\theta} (H_g^{PF} - z)^{-1}\langle B \rangle^{-\theta} \text{ is H\"older continuous in $z$}. \end{equation} Now we show that estimate \eqref{lap1} and the additional restriction on the spectral interval imply the limiting absorption principle for $H^{SM}_g$. Let $B_1 : = e^{-ig F(x)} B e^{ig F(x)}$. We compute \begin{equation} B_1 = B + g C \end{equation} \noindent where $C : = -i[F(x), B]$. Note that the operator $C$ contains a term proportional to $x$. Now, let a function $f$ be supported in $(-\infty,\Sigma_{p})$. Then, using that $(H^{SM}_g - z)^{-1}= e^{ig F(x)}(H_g^{PF} - z)^{-1} e^{-ig F(x)}$, we obtain \begin{align} &\langle B\rangle^{-\theta}f(H^{SM}_g)^2 (H^{SM}_g - z)^{-1}\langle B\rangle^{-\theta}\\ \notag &= D E(z) D^*, \end{align} where $ D := \langle B\rangle^{-\theta}f(H^{SM}_g)\langle B_1 \rangle^{\theta} e^{ig F(x)}$ and $ E(z):= \langle B\rangle^{-\theta}(H_g^{PF} - z)^{-1}\langle B\rangle^{-\theta}$.
The operator $D$ is bounded by standard operator calculus estimates and the fact that $e^{\delta \langle x \rangle} f(H^{SM}_g)$ is bounded for $\delta >0$ sufficiently small. Furthermore, the operator family $E(z)$ is H\"older continuous by the assumed result. Now, for $z\in (-\infty,\Sigma_{p}-{\varepsilon}]$ for some ${\varepsilon}>0$, the previous conclusion remains true even if we remove the cut-off function $f(H^{SM}_g)$. We mention for further reference that the operator \eqref{H^PFa} can be written as \begin{equation} \label{Hpf} H_g^{PF} \ = \ H^{PF}_{0} \, + \, I_{g}^{PF} \comma \end{equation} where $H^{PF}_{0}=H_0 + 2 g^2\sum_\lambda \int \omega |f_{x,\lambda}(k)|^2d^3k + g^2\sum_\lambda \int{|\chi_\lambda(k)|^2\over (2\pi)^6 2\omega}d^3k$, with $H_0$ defined in (\ref{H_0}), and $I_{g}^{PF}$ is defined by this relation. Note that the operator $I_{g}^{PF}$ contains linear and quadratic terms in the creation and annihilation operators, with the coupling functions (form-factors) in the linear terms satisfying estimate \eqref{Acoupling} and with the coupling functions in the quadratic terms satisfying a similar estimate. Moreover, the operator $H^{PF}_{0}$ is of the form $H^{PF}_{0}= H^{PF}_{p}+H_{f}$ where \begin{equation} \label{Hpfp} H^{PF}_{p}:=H_{p}+2 g^2\sum_\lambda \int |k| |f_{x,\lambda}(k)|^2d^3k + g^2\sum_\lambda \int{|\chi_\lambda(k)|^2\over |k|}d^3k \end{equation} with $H_p$ given in \eqref{Hp}. \secct{The Smooth Feshbach-Shur Map} \label{sec-III} In this section, we review and extend, in a simple but important way, the method of isospectral decimations or Feshbach-Schur maps introduced in \cite{BachFroehlichSigal1998a,BachFroehlichSigal1998b} and refined in \cite{BachChenFroehlichSigal2003}\footnote{In \cite{BachFroehlichSigal1998a,BachFroehlichSigal1998b, BachChenFroehlichSigal2003} this map is called the Feshbach map. As was pointed out to us by F. Klopp and B. Simon, the invertibility procedure at the heart of this map was introduced by I.
Schur in 1917; it appeared implicitly in an independent work of H. Feshbach on the theory of nuclear reactions in 1958, where the problem of perturbation of operator eigenvalues was considered. See \cite{GriesemerHasler} for further discussion and historical remarks.}. For further extensions see \cite{GriesemerHasler}. At the root of this method is the isospectral smooth Feshbach-Schur map, acting on a set of closed operators and mapping a given operator to one acting on a much smaller space, which is easier to handle. Let $\chi$, ${\overline{\chi}}$ be a partition of unity on a separable Hilbert space $\mathcal{H}$, i.e. $\chi$ and ${\overline{\chi}}$ are positive operators on $\mathcal{H}$ whose norms are bounded by one, $0 \leq \chi, {\overline{\chi}} \leq \mathbf{1}$, and $\chi^{2}+ {\overline{\chi}}^{2} = \mathbf{1}$. We assume that $\chi$ and ${\overline{\chi}}$ are nonzero. Let $\tau$ be a (linear) projection acting on closed operators on $\mathcal{H}$ s.t. operators from its image commute with $\chi$ and ${\overline{\chi}}$. We also assume that $\tau(\textbf{1}) =\textbf{1}$. Assume that $\tau$ and $\chi$ (and therefore also ${\overline{\chi}}$) leave $\mathrm{dom}(H)$ invariant, i.e. $\mathrm{dom}(\tau(H))=\mathrm{dom}(H)$ and $\chi\,\mathrm{dom}(H)\subset \mathrm{dom}(H)$. Let $\overline{\tau}:= \mathbf{1} - \tau$ and define \begin{equation} \label{II-1} H_{\tau,\chi^{\#}} \ \; := \tau(H) \: + \: \chi^{\#} \overline{\tau}(H)\chi^{\#} \comma \end{equation} where $\chi^{\#}$ stands for either $\chi$ or ${\overline{\chi}}$.
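When $\chi$ is an orthogonal projection and $\tau = 0$, the operator $H_{\tau,{\overline{\chi}}}$ reduces to ${\overline{\chi}} H {\overline{\chi}}$, and inverting it on $\Ran\,{\overline{\chi}}$ is the classical Schur-complement step: writing $H$ in block form $\begin{pmatrix} A & B \\ C & D\end{pmatrix}$ with respect to $\Ran\,\chi \oplus \Ran\,{\overline{\chi}}$, the compression of $H^{-1}$ to $\Ran\,\chi$ equals $(A - B D^{-1} C)^{-1}$. The following finite-dimensional numerical sketch (our own illustration, not part of the argument) checks this identity:

```python
import numpy as np

# chi = orthogonal projection onto the first m coordinates, tau = 0, so
# H_{tau,chibar} = chibar H chibar and the decimation step amounts to the
# classical Schur complement A - B D^{-1} C.
rng = np.random.default_rng(1)
n, m = 6, 2
H = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned sample matrix
A, B = H[:m, :m], H[:m, m:]
C, D = H[m:, :m], H[m:, m:]

S = A - B @ np.linalg.inv(D) @ C                  # Schur complement on Ran(chi)
compressed = np.linalg.inv(H)[:m, :m]             # chi H^{-1} chi, restricted to Ran(chi)
err = np.abs(compressed - np.linalg.inv(S)).max()
```

In this special case the map $F_{\tau,\chi}$ introduced below produces exactly the Schur complement $A - B D^{-1} C$, which makes the isospectrality statements of this section plausible.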
Given $\chi$ and $\tau$ as above, we denote by $D_{\tau,\chi}$ the space of closed operators, $H$, on $\mathcal{H}$ which belong to the domain of $\tau$ and satisfy the following conditions: \begin{equation} \label{II-2}H_{\tau,{\overline{\chi}}}\ \mbox{is (bounded) invertible on}\ \Ran \, {\overline{\chi}}, \end{equation} $$\overline{\tau}(H) \chi\ \mbox{and}\ \chi \overline{\tau}(H)\ \mbox{extend to bounded operators on}\ \mathcal{H}.$$ (For more general conditions see \cite{BachChenFroehlichSigal2003, GriesemerHasler}.) Denote $H_0 := \tau(H)$ and $W := \overline{\tau}(H)$. Then $H_0$ and $W$ are two closed operators on $\mathcal{H}$ with coinciding domains, $ \mathrm{dom}(H_0)= \mathrm{dom}(W)=\mathrm{dom}(H)$, and $H = H_0 + W$. We remark that the domains of $\chi W\chi$, ${\overline{\chi}} W{\overline{\chi}}$, $H_{\tau,\chi}$, and $H_{\tau,{\overline{\chi}}}$ all contain $\mathrm{dom}(H)$. The \textit{smooth Feshbach-Schur map (SFM)} maps operators on $\mathcal{H}$ to operators on $\mathcal{H}$ by $H \ \mapsto \ F_{\tau,\chi} (H)$, where \begin{equation} \label{II-3} F_{\tau,\chi} (H) \ := \ H_0 \, + \, \chi W\chi \, - \, \chi W {\overline{\chi}} H_{\tau,{\overline{\chi}}}^{-1} {\overline{\chi}} W \chi \period \end{equation} Clearly, it is defined on the domain $D_{\tau,\chi}$. Remarks: \begin{itemize} \item The definition of the smooth Feshbach-Schur map given above is the same as in \cite{FroehlichGriesemerSigal2009} and differs from the one given in \cite{BachChenFroehlichSigal2003}. In \cite{BachChenFroehlichSigal2003} the map $F_{\tau,\chi} (H)$ is denoted by $F_{\chi}(H,\tau(H))$, and the pair of operators $(H, \tau(H))$ is referred to as a Feshbach pair. \item The Feshbach-Schur map is obtained from the smooth Feshbach-Schur map by taking $\chi$ to be a projection and, usually, $\tau = 0$.
\end{itemize} We furthermore define the maps entering some identities involving the Feshbach-Schur map: \begin{eqnarray} \label{eq-II-4} Q_{\tau,\chi} (H) & := & \chi \: - \: {\overline{\chi}} \, H_{\tau,{\overline{\chi}}}^{-1} {\overline{\chi}} W \chi \comma \\ \label{eq-II-5} Q_{\tau,\chi} ^\#(H) & := & \chi \: - \: \chi W {\overline{\chi}} \, H_{\tau,{\overline{\chi}}}^{-1} {\overline{\chi}} \period \end{eqnarray} Note that $Q_{\tau,\chi} (H) \in \mathcal{B}( \Ran\, \chi , \mathcal{H})$ and $Q_{\tau,\chi}^\#(H) \in \mathcal{B}( \mathcal{H} , \Ran\, \chi)$. The smooth Feshbach map of $H$ is isospectral to $H$ in the sense of the following theorem. \begin{theorem} \label{thm-II-1} Let $\chi$ and $\tau$ be as above. Then we have the following results. \begin{itemize} \item[(i)] $0 \in \rho(H) \leftrightarrow 0 \in \rho(F_{\tau,\chi} (H))$, i.e. $H$ is bounded invertible on $\mathcal{H}$ if and only if $F_{\tau,\chi} (H)$ is bounded invertible on $\Ran\, \chi$. \item[(ii)] If $\psi \in \mathcal{H} \setminus \{0\}$ solves $H \psi = 0$ then $\vphi := \chi \psi \in \Ran\, \chi \setminus \{0\}$ solves $F_{\tau,\chi} (H) \, \vphi = 0$. \item[(iii)] If $\vphi \in \Ran\, \chi \setminus \{0\}$ solves $F_{\tau,\chi} (H) \, \vphi = 0$ then $\psi := Q_{\tau,\chi} (H) \vphi \in \mathcal{H} \setminus \{0\}$ solves $H \psi = 0$. \item[(iv)] The multiplicity of the spectral value $\{0\}$ is conserved in the sense that $\dim \mathrm{Ker} H = \dim \mathrm{Ker} F_{\tau,\chi} (H)$. 
\item[(v)] If one of the inverses, $H^{-1}$ or $F_{\tau,\chi} (H)^{-1}$, exists, then so does the other, and they are related by \begin{equation} \label{eq-II-6} H^{-1} = Q_{\tau,\chi} (H) \: F_{\tau,\chi} (H)^{-1} \: Q_{\tau,\chi} (H)^\# \; + \; {\overline{\chi}} \, H_{\tau,{\overline{\chi}}}^{-1} {\overline{\chi}} \comma \end{equation} and $$ F_{\tau,\chi} (H)^{-1} = \chi \, H^{-1} \, \chi \; + \; {\overline{\chi}} \, H_{\tau,{\overline{\chi}}}^{-1} {\overline{\chi}} \period$$ \end{itemize} \end{theorem} This theorem is proven in \cite{BachChenFroehlichSigal2003} (see \cite{GriesemerHasler} for a more general result). Now we establish a key result relating smoothness of the resolvent of an operator on its continuous spectrum with smoothness of the resolvent of its image under a smooth Feshbach-Schur map. Let $B_\theta:=\langle B\rangle^{-\theta}$. In what follows $\Delta$ stands for an open interval in $\mathbb{R}$. \begin{theorem} \label{LAPtransfer} Assume a self-adjoint operator $B$ and a $C^\infty$ family $H(\lambda)$, $\lambda\in\Delta$, of closed operators satisfy the following conditions: $\forall \lambda\in\Delta,\ H(\lambda) \in D_{\tau,\chi}$ and \begin{equation} \mathrm{ad}_B^j(A)\ \mbox{is bounded}\ \forall j\le 1, \label{eqn:5} \end{equation} where $A$ stands for one of the operators $A=\chi,\ \overline{\chi},\ \chi W,\ W\chi,\ \partial_\lambda^k ({\overline{\chi}} H_{\tau,{\overline{\chi}}}(\lambda)^{-1} {\overline{\chi}})\ \forall k.$ If $H(\lambda)\in \mathrm{dom}(F_{\tau, \chi})$, then for any $\nu\ge 0$ and $0 < \theta \le 1$, \begin{equation} B_\theta(F_{\tau, \chi}(H(\lambda))-i0)^{-1}B_\theta\in C^\nu(\Delta)\Rightarrow B_\theta(H(\lambda)-i0)^{-1}B_\theta\in C^\nu(\Delta). \label{eqn:7} \end{equation} \end{theorem} \Proof We use identity (\ref{eq-II-6}) with $H$ replaced by $H(\lambda) -i{\varepsilon}$.
Since $\tau(\textbf{1})=\textbf{1}$ we have that $(H(\lambda)-i\epsilon)_{\tau,\chi^{\#}}=H(\lambda)_{\tau,\chi^{\#}}-i\epsilon$, where $\chi^{\#}$ is either $\chi$ or ${\overline{\chi}}$. Furthermore, on $\Ran\, {\overline{\chi}}$, the operator family $[(H(\lambda)-i{\varepsilon})_{\tau,{\overline{\chi}}}]^{-1}$ is differentiable in $\lambda$ and analytic in ${\varepsilon}$ and can be expanded as $$[(H(\lambda)-i{\varepsilon})_{\tau,{\overline{\chi}}}]^{-1}= [H(\lambda)_{\tau,{\overline{\chi}}}]^{-1} + i{\varepsilon} [H(\lambda)_{\tau,{\overline{\chi}}}]^{-1} {\overline{\chi}}^2 [H(\lambda)_{\tau,{\overline{\chi}}}]^{-1} +O({\varepsilon}^2).$$ This implies the relation \begin{equation} \label{BFB} \lim_{{\varepsilon}\downarrow 0}B_\theta[F_{\tau,\chi}(H(\lambda)-i{\varepsilon})]^{-1}B_\theta = B_\theta[F_{\tau,\chi}(H(\lambda))-i0]^{-1}B_\theta. \end{equation} Conditions \eqref{eqn:5} and the formula $B_\theta=C_\theta\int_0^\infty \frac{d\omega}{\omega^{\theta/2}}(\omega+1+B^2)^{-1}$, where $C_\theta:=\big[\int_0^\infty \frac{d\omega}{\omega^{\theta/2}}(\omega+1)^{-1}\big]^{-1}$, imply that the operators \begin{equation} B_\theta\chi B_\theta^{-1}, B_\theta\overline{\chi} B_\theta^{-1}, B_\theta [H(\lambda)_{\tau,{\overline{\chi}}}]^{-1}B_{\theta}^{-1} \end{equation} and the transposed operators (i.e., $B_\theta^{-1}\chi B_\theta$, etc.) are bounded and $C^\infty(\Delta)$ in $\lambda$. This property shows that $B_\theta Q B_\theta^{-1}$ and $B_\theta^{-1} Q^\# B_\theta$ are bounded and smooth in $\lambda \in \Delta$. This together with \eqref{BFB}, $H(\lambda)\in \mathrm{dom}(F_{\tau,\chi})$ and \eqref{eq-II-6} implies the theorem.\qed \secct{A Banach Space of Hamiltonians} \label{subsec-III.1} We construct a Banach space of Hamiltonians on which the renormalization transformation is defined. Let $\chi_1(r) \equiv\chi_{r\le1}$ be a smooth cut-off function s.t.
$\chi_1 = 1$ for $r \le 1$, $\chi_1 = 0$ for $r\ge 11/10$, $0 \le \chi_1(r) \le 1$, and $|\partial^n_r \chi_1(r)| \le 30\ \forall r$ and for $n=1,2.$ We denote $\chi_\rho(r) \equiv\chi_{r\le\rho}:= \chi_1(r/\rho) \equiv\chi_{r/\rho\le1}$ and $\chi_\rho\equiv\chi_{H_f\le\rho}$. Let $B_1^d$ denote the unit ball in $\RR^{3d}$, $I:=[0,1]$ and $m,n \ge 0$. Given functions $w_{0,0}: [0, \infty) \rightarrow \mathbb{C}$ and $w_{m,n}: I\times B_1^{m+n} \rightarrow \mathbb{C}, m+n > 0$, we consider monomials, $W_{m,n} \equiv W_{m,n}[w_{m,n}]$, in the creation and annihilation operators of the form $W_{0,0}:=w_{0,0}[H_f]$ (defined by the operator calculus), for $m=n=0$, and \begin{eqnarray} \label{III.1} &&W_{m,n}[w_{m,n}] := \\ \nonumber && \int_{B_1^{m+n}} \frac{ dk_{(m,n)} }{ |k_{(m,n)}|^{1/2} } \; a^*( k_{(m)} ) \, w_{m,n} \big[ H_f ; k_{(m,n)} \big] \, a( \tilde{k}_{(n)} ) \:, \end{eqnarray} for $m+n>0$. Here we used the notation \begin{eqnarray} \label{III.2} & k_{(m)} \: := \: (k_1, \ldots, k_m) \: \in \: \RR^{dm} \comma \hspace{5mm} a^*( k_{(m)} ) \: := \: \prod_{i=1}^m a^*(k_i ), \\ \label{III.3} & k_{(m,n)} \: := \: (k_{(m)}, \tilde{k}_{(n)}) \comma \hspace{5mm} dk_{(m,n)} \: := \: \prod_{i=1}^m d^d k_i \; \prod_{i=1}^n d^d \tilde{k}_i \comma & \\ \label{III.4} & |k_{(m,n)}| \, := \, |k_{(m)}| \cdot |\tilde{k}_{(n)}| \comma \hspace{3mm} |k_{(m)}| \, := \, |k_1| \cdots |k_m| \period & \end{eqnarray} We assume that for every $m$ and $n$ with $m+n>0$ the function $w_{m,n}[r ; k_{(m,n)}]$ is $s$ times continuously differentiable in $r \in I$, for almost every $k_{(m,n)} \in B_1^{m+n}$, and weakly differentiable in $k_{(m,n)} \in B_1^{m+n}$, for almost every $r$ in $I$.
As a function of $k_{(m,n)}$, it is totally symmetric w.~r.~t.\ the variables $k_{(m)} = (k_1, \ldots, k_m)$ and $\tilde{k}_{(n)} = (\tilde{k}_1, \ldots, \tilde{k}_n)$ and obeys the norm bound \begin{equation} \label{III.5} \| w_{m,n} \|_{\mu,s} \ := \sum \| \partial_r^p (k\partial_k)^q w_{m,n} \|_{\mu} \ < \ \infty \comma \end{equation} where $q:= (q_1, \ldots, q_{m+n}),\ (k\partial_k)^q: = \prod_1^{m+n}(k_j \cdot \nabla_{k_j})^{q_j}$, with $k_{m+j} := \tilde{k}_j$, where the sum is taken over the indices $p$ and $q$ satisfying $0 \le p+|q| \leq s$, and where $\mu \ge 0$ and \begin{equation} \label{III.6} \| w_{m,n} \|_{\mu} \ := \max_j \sup_{r \in I, k_{(m,n)} \in B_1^{m+n}} \big| | k_j|^{-\mu}w_{m,n}[r ; k_{(m,n)}] \big|. \end{equation} Here and in what follows, $k_j$ is the $j$-th $3$-dimensional component of the vector $k_{(m,n)}$ over which we take the supremum. For $m+n=0$ the variable $r$ ranges over $[0,\infty)$ and we assume that the following norm is finite: \begin{equation} \label{III.7} \ \| w_{0,0} \|_{\mu, s} := |w_{0,0}(0)|+ \sum_{1 \le p \leq s} \sup_{r \in [0,\infty)}| \partial_r^p w_{0,0}(r)| \hspace{10mm} \end{equation} (for $s=0$ we drop the sum on the r.h.s.). (This norm is independent of $\mu$, but we keep this index for notational convenience.) The Banach space of these functions is denoted by $\mathcal{W}_{m,n}^{\mu,s}$. The notation $W_{m,n}[w_{m,n}]$ stresses the dependence of $W_{m,n}$ on $w_{m,n}$. In particular, $W_{0,0}[w_{0,0}] := w_{0,0}[H_f]$.
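As a toy illustration of the weighted norm \eqref{III.6} (our own example, in dimension $1$ with $m+n=1$): for the sample kernel $w[r;k] = r\,|k|^{\mu}$ one has $\|w\|_{\mu} = \sup_{r \in I} r = 1$, which can be checked on a grid:

```python
import numpy as np

mu = 0.5
r = np.linspace(0.0, 1.0, 201)           # r ranges over I = [0, 1]
k = np.linspace(1e-6, 1.0, 201)          # radial variable |k|, avoiding k = 0
R, K = np.meshgrid(r, k, indexing="ij")
w = R * K**mu                            # sample kernel w[r; k] = r |k|^mu
norm_mu = np.max(np.abs(w) / K**mu)      # sup_{r,k} |k|^{-mu} |w[r; k]|
```

The weight $|k|^{-\mu}$ penalizes kernels that fail to vanish like $|k|^{\mu}$ at $k=0$; this is the quantitative expression of infrared regularity used throughout.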
We fix three numbers $\mu$, $0 < \xi < 1$ and $s \ge 0$ and define the Banach space \begin{equation} \label{III.8} \mathcal{W}^{\mu,s} \ \equiv \mathcal{W}^{\mu,s}_{\xi} := \ \bigoplus_{m+n \geq 0} \mathcal{W}_{m,n}^{\mu,s} \ \comma \end{equation} with the norm \begin{equation} \label{III.9} \big\| {\underline w} \big\|_{\mu, s,\xi} \ := \ \sum_{m+n \geq 0} \xi^{-(m+n)} \; \| w_{m,n} \|_{\mu, s} \ < \ \infty \period \end{equation} Clearly, $\mathcal{W}^{\mu',s'}_{\xi'} \subset \mathcal{W}^{\mu,s}_{\xi}$ if $\mu' \ge \mu$, $s' \ge s$ and $\xi' \le \xi$. \begin{remark} \label{remIII-1} Though we use the same notation, the Banach spaces, $\mathcal{W}^{\mu,s}_{\xi}$, etc., introduced above differ from the ones used in \cite{Sigal, FroehlichGriesemerSigal2007b}. The latter are obtained from the former by setting $q=0$ in \eqref{III.5}. To extend estimates of \cite{Sigal, FroehlichGriesemerSigal2007b} to the present setting one has to estimate the effect of the derivatives $(k\partial_k)^q$, which is straightforward. \end{remark} The following basic bound, proven in [2], links the norm defined in (\ref{III.6}) to the operator norm on $\mathcal{B}[\mathcal{F}]$. \begin{theorem} \label{thm-III.1} Fix $m,n \in \NN_0$ such that $m+n \geq 1$. Suppose that $w_{m,n} \in \mathcal{W}_{m,n}^{\mu,0}$, and let $W_{m,n} \equiv W_{m,n}[w_{m,n}]$ be as defined in (\ref{III.1}). Then $\forall \rho>0$ \begin{equation} \label{III.10} \big\| (H_f+\rho)^{-m/2} \, W_{m,n} \, (H_f+\rho)^{-n/2} \big\| \ \leq \ \| w_{m,n} \|_{0} \, , \end{equation} and therefore \begin{equation} \label{III.11} \big\| \chi_\rho \, W_{m,n} \, \chi_\rho \big\| \ \leq \ \frac{\rho^{(m+n)(1+\mu)}}{\sqrt{m! \, n!} } \, \| w_{m,n} \|_{\mu} \, , \end{equation} where $\| \, \cdot \, \|$ denotes the operator norm on $\mathcal{B}[\mathcal{F}]$. \end{theorem} Theorem~\ref{thm-III.1} says that the finiteness of $\| w_{m,n} \|_{\mu}$ ensures that $W_{m,n}$ defines a bounded operator on $\mathcal{F}$.
Now with a sequence ${\underline w} := (w_{m,n})_{m+n \geq 0}$ in $\mathcal{W}^{\mu,s}$ we associate an operator by setting \begin{equation} \label{III.12} H({\underline w}) := W_{0,0}[{\underline w}] + \sum_{m+n \geq 1} \chi_1 W_{m,n}[{\underline w}] \chi_1, \end{equation} where we write $W_{m,n}[{\underline w}] := W_{m,n}[w_{m,n}]$. This form of operators on the Fock space will be called the \textit{generalized normal (or Wick) form}. Theorem~\ref{thm-III.1} shows that the series in (\ref{III.12}) converges in the operator norm and obeys the estimate \begin{equation} \label{eq-III-1-25.1} \big\| \, H({\underline w})- W_{0,0}[{\underline w}] \, \big\| \ \leq \ \xi\big\| \, {\underline w}_1 \, \big\|_{\mu,0, \xi} \comma \end{equation} for any ${\underline w} = (w_{m,n})_{m+n \geq 0} \in \mathcal{W}^{\mu,0}$. Here ${\underline w}_1 = (w_{m,n})_{m+n \geq 1}$. Hence we have the linear map \begin{equation} \label{eq-III-1-24.1} H : {\underline w} \to H({\underline w}) \end{equation} from $\mathcal{W}^{\mu,0}$ into the set of closed operators on the Fock space $\mathcal{F}$. Furthermore, the following result was proven in [2]. \begin{theorem} \label{thm-III-1-2} For any $\mu \ge 0$ and $0 < \xi < 1$, the map $H : {\underline w} \to H({\underline w})$, given in (\ref{III.12}), is one-to-one. \end{theorem} Define the spaces $\mathcal{W}_{op}^{\mu,s} :=H(\mathcal{W}^{\mu,s})$ and $\mathcal{W}_{mn,op}^{\mu,s} :=H(\mathcal{W}_{mn}^{\mu,s})$. Sometimes we display the parameter $\xi$ as in $\mathcal{W}_{op,\xi}^{\mu,s} :=H(\mathcal{W}^{\mu,s}_\xi)$. Theorem \ref{thm-III-1-2} implies that $H(\mathcal{W}^{\mu,s})$ is a Banach space under the norm $\big\| \, H({\underline w}) \big\|_{\mu,s, \xi}$ $:=\ \big\| \, {\underline w} \, \big\|_{\mu,s, \xi}$. Similarly, the other spaces defined above are Banach spaces in the corresponding norms. Recall that $B$ denotes the dilation generator on the Fock space $\mathcal{F}$ (see \eqref{eq-I.24}).
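On the one-photon sector (taken here in dimension $d=1$ for simplicity) $B$ acts as $-i(k\partial_k + \tfrac{1}{2})$ and $H_f$ acts as multiplication by $|k|$, so the commutation relation $i[B, H_f] = H_f$ can be checked by finite differences; the following is our own numerical sketch under these simplifying assumptions:

```python
import numpy as np

k = np.linspace(0.1, 5.0, 4000)          # one-photon momentum grid, k > 0
psi = np.exp(-(k - 2.0) ** 2)            # smooth test state

def B(f):
    # dilation generator on the one-photon sector: -i (k d/dk + 1/2)
    return -1j * (k * np.gradient(f, k) + 0.5 * f)

w = k                                    # H_f acts as multiplication by |k|
comm = 1j * (B(w * psi) - w * B(psi))    # i [B, H_f] psi
err = np.max(np.abs(comm - w * psi))     # should vanish: i [B, H_f] = H_f
```

The residual is of the order of the finite-difference error of the grid, consistent with the exact operator identity.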
Let \begin{equation} \chi_\rho\equiv\chi_{H_f\le\rho}\ \mbox{and}\ \overline{\chi}_\rho\equiv\chi_{H_f\ge\rho} \end{equation} be a smooth partition of unity, $\chi_\rho^2+\overline{\chi}_\rho^2=\mathbf{1}$. Let $F_\rho:=F_{\tau,\chi_\rho}$. We have \begin{lemma} \label{LAPprepar} Let $\chi_\rho^\#$ be either $\chi_\rho$ or $\overline{\chi}_\rho$. If $H\in \mathcal{W}_{op}^{\mu,1} $, then the operators \begin{equation} \mathrm{ad}_B^j(\chi_\rho^\#),\ H_f^{-1}\mathrm{ad}_B^j(W_{00})\ \mbox{and}\ \mathrm{ad}_B^j(H-W_{00})\ \mbox{are bounded} \end{equation} for $j\leq 1$. In particular, condition \eqref{eqn:5} with $\tau(H):=W_{00}$, and therefore property \eqref{eqn:7}, with $\chi=\chi_\rho$, hold for $H(\lambda)\in C^\infty(\Delta,\mathcal{W}_{op}^{\mu,1})\cap\mathrm{dom}(F_\rho)$. \end{lemma} \Proof The result follows from the following relations \begin{align} \comm{B}{a^\#(k)}=\pm i( k\cdot\nabla_k+\frac{d}{2} ) a^\#(k),\\ i\comm{B}{H_f}=H_f, \quad i\comm{B}{f(H_f)}=H_f f'(H_f). \end{align} Using these relations we show, in particular, that if $H\in \mathcal{W}_{op,\xi}^{\mu,1} $, then for any $j\leq 1,\ \mathrm{ad}_B^j(W)\in \mathcal{W}_{op,\xi'}^{\mu,1-j}\ \forall \xi' < \xi$ and \begin{equation} \label{eq-III-1-26} \|ad_B(H_{m n})\|_{\mathcal{W}_{m n, op}^{\mu,0}} \leq c(m+n+1)\|H_{m n}\|_{\mathcal{W}_{m n, op}^{\mu,1}}. \end{equation} The commutator relations above imply that $\mathrm{ad}_B^j(\chi_\rho^\#)$ and $H_f^{-1}\mathrm{ad}_B^j(W_{00})$ are bounded, and Eqn \eqref{eq-III-1-26} implies that $\mathrm{ad}_B^j(W)$ are bounded, for $j\leq 1$. \qed \secct{The Renormalization Transformation $\mathcal{R}_\rho$} \label{sec-IV} In this section we present an operator-theoretic renormalization transformation based on the smooth Feshbach-Schur map, closely related to the one defined in \cite{BachChenFroehlichSigal2003} and \cite{BachFroehlichSigal1998a,BachFroehlichSigal1998b}. We fix the index $\mu$ in our Banach spaces at some positive value $\mu > 0$.
The renormalization transformation is homothetic to an isospectral map defined on a subset of a suitable Banach space of Hamiltonians. It has a certain contraction property which ensures that (upon an appropriate tuning of the spectral parameter) its iteration converges to a fixed-point (limiting) Hamiltonian, whose spectral analysis is particularly simple. Thanks to the isospectrality of the renormalization map, certain properties of the spectrum of the initial Hamiltonian can be studied by analyzing the limiting Hamiltonian. The renormalization map is defined below as a composition of a decimation map, $F_{\rho}$, and two rescaling maps, $S_\rho$ and $A_\rho$. Here $\rho$ is a positive parameter (the photon energy scale) which will be chosen later. The \emph{decimation of degrees of freedom} is done by the smooth Feshbach-Schur map, $F_{\tau,\chi}$. Except for the first step, the decimation map will act on the Banach space $\mathcal{W}_{op}^{\mu,s}$. The operators $\tau$ and $\chi$ will be chosen as \begin{equation} \tau(H)=W_{00}:=w_{0 0}(H_f)\ \mbox{and}\ \chi=\chi_\rho\equiv\chi_{\rho^{-1}H_f\le1} , \label{IV.1} \end{equation} where $H=H({\underline w})$ is given in Eqn \eqref{III.12}. With $\tau$ and $\chi$ identified in this way we will use the notation \begin{equation} F_\rho\equiv F_{\tau,\chi_\rho}. \label{IV.2} \end{equation} The following lemma shows that the domain of this map contains the following polydisc in $\mathcal{W}_{op}^{\mu,s}$: \begin{eqnarray} \mathcal{D}^{\mu,s}(\alpha,\beta,\gamma) & := & \Big\{ H({\underline w}) \in \mathcal{W}_{op}^{\mu,s} \ \Big| \ | w_{0,0}[0]| \leq \alpha \comma \\ \nonumber & & \sup_{r \in [0,\infty)}| w_{0,0}'[r] - 1 | \leq \beta \comma \hspace{4mm} \| {\underline w}_1 \|_{\mu,s, \xi} \leq \gamma \Big\} \comma \label{IV-3}\end{eqnarray} for appropriate $\alpha, \beta, \gamma >0$. Here ${\underline w}_1 :=(w_{m,n})_{m+n \geq 1}$. \begin{lemma} \label{lem-III-2-2} Fix $0 < \rho < 1$, $\mu > 0$, and $0 < \xi < 1$.
Then it follows that the polydisc $\mathcal{D}^{\mu,1}(\rho/8, 1/8, \rho/8)$ is in the domain of the Feshbach map $F_\rho$. \end{lemma} \Proof Let $H({\underline w}) \in \mathcal{D}^{\mu,1}(\rho/8, 1/8, \rho/8)$. We remark that $W:= H({\underline w})-W_{0,0}[{\underline w}]$ defines a bounded operator on $\mathcal{F}$, and we only need to check the invertibility of $H({\underline w})_{\tau, {\overline{\chi}}_\rho}$ on $\Ran \,{\overline{\chi}}_\rho$. Now the operator $W_{0,0}[{\underline w}]$ is invertible on $\Ran \,{\overline{\chi}}_\rho$ since for all $r \in [3\rho/4, \infty)$ \begin{eqnarray} \label{eq-III-2-16} \rRe w_{0,0}[r] & \geq & r \, - \, | w_{0,0}[r] - r | \nonumber \\ & \geq & r \big( 1 \, - \, \sup_{r} | w_{0,0}'[r] - 1 | \big) \: - \: |w_{0,0}[0]| \nonumber \\ & \geq & \frac{3 \, \rho}{4} ( 1 - 1/8 ) \: - \: \frac{\rho}{8} \ \geq \ \frac{ \rho}{2} \ . \end{eqnarray} On the other hand, by \eqref{III.11}, $\big\| W \|\leq \xi\rho/8 \leq \rho/8$. Hence $\rRe (W_{0,0}[{\underline w}] + W) \geq \frac{\rho}{3}$ on $\Ran \,{\overline{\chi}}_\rho$, i.e. $H({\underline w})_{\tau, {\overline{\chi}}_\rho}$ is invertible on $\Ran \,{\overline{\chi}}_\rho$. \QED We introduce the \textit{scaling transformation} $S_\rho: \mathcal{B}[\mathcal{F}] \to \mathcal{B}[\mathcal{F}]$, by \begin{equation} \label{IV.5} S_\rho(\mathbf{1}) \ := \ \mathbf{1} \comma \hspace{5mm} S_\rho (a^\#(k)) := \ \rho^{-d/2} \, a^\#( \rho^{-1} k) \comma \end{equation} where $a^\#(k)$ is either $a(k)$ or $a^*(k)$, and $k \in \RR^d$. On the domain of the decimation map $F_\rho$ we define the renormalization map $\mathcal{R}_\rho$ as \begin{equation} \label{IV.6} \mathcal{R}_\rho:= \rho^{-1} S_\rho\circ F_\rho. \end{equation} \begin{remark} \label{remIV-2} The renormalization map above is different from the one defined in \cite{BachChenFroehlichSigal2003}. The map in \cite{BachChenFroehlichSigal2003} contains an additional change of the spectral parameter $\lambda:= -\langle H\rangle_\Omega$.
\end{remark} We mention here some properties of the scaling transformation. It is easy to check that $S_\rho (H_f) = \rho H_f$, and hence \begin{equation} \label{eq-III-2-3} S_\rho ( \chi_\rho) = \ \chi_1 \hspace{5mm} \mbox{and} \hspace{6mm} \rho^{-1} S_\rho \big( H_f \big) \ = \ H_f \comma \end{equation} which means that the operator $H_f$ is a \emph{fixed point} of $\rho^{-1} S_\rho$. Further note that $E \cdot \mathbf{1}$ \emph{is expanded} under the scaling map, $\rho^{-1} S_\rho(E \cdot \mathbf{1}) = \rho^{-1} E \cdot \mathbf{1}$, at a rate $\rho^{-1}$. (To control this expansion it is necessary to suitably restrict the spectral parameter.) Now we show that the interaction $W$ contracts under the scaling transformation. To this end we remark that the scaling map $S_\rho$ restricted to $\mathcal{W}_{op}^{\mu,s}$ induces a scaling map $s_\rho$ on $\mathcal{W}^{\mu,s}$ by \begin{equation} \label{eq-III-2-5} \rho^{-1} S_\rho \big( H({\underline w}) \big) \ =: \ H \big( s_\rho({\underline w}) \big) \comma \end{equation} where $s_\rho({\underline w}):=(s_\rho(w_{m,n}) )_{m+n \geq 0} $, and it is easy to verify that, for all $(m,n) \in \NN_0^2$, \begin{equation} \label{eq-III-2-6} s_\rho(w_{m,n}) \big[ r , k_{(m,n)} \big] \ = \ \rho^{m+n - 1} \: w_{m,n}\big[ \rho \, r \; , \; \rho \, k_{(m,n)} \big] \period \end{equation} We note that by Theorem~\ref{thm-III.1}, the operator norm of $W_{m,n} \big[ s_\rho(w_{m,n}) \big]$ is controlled by the norm $$ \| s_\rho(w_{m,n}) \|_{\mu} =$$ \begin{eqnarray}\nonumber & \max_j &\sup_{r \in I, k \in B_1^{m+n}} \ \rho^{m+n - 1} \: \frac{\big|w_{m,n}[\rho \, r \; , \; \rho \, k_{(m,n)}] \big|}{| k_j|^{\mu}} \\ \nonumber & \leq & \rho^{m+n +\mu- 1} \, \| w_{m,n} \|_{\mu}. 
\end{eqnarray} Hence, for $m+n \geq 1$, we have \begin{equation} \label{eq-III-2-8} \| s_\rho(w_{m,n}) \|_{\mu} \leq \ \; \rho^{\mu} \, \| w_{m,n} \|_{\mu}. \end{equation} Since $\mu >0$, this estimate shows that $S_\rho$ contracts $\| w_{m,n} \|_{\mu}$ by at least a factor of $\rho^{\mu} < 1$. The next result shows that this contraction is actually a dominating property of the renormalization map $\mathcal{R}_\rho$ along the `stable' directions. Below, recall, $\chi_{1}$ is the cut-off function introduced at the beginning of Section III. Define the constant \begin{equation} \label{VIII.21} C_\chi:=\frac{4}{3}\big(\sum_{n=0}^2 \sup | \partial_r^n \chi_1| + \sup |\partial_r \chi_1 |^2 \big) \le 200. \end{equation} \begin{theorem} \label{thm-III-2-5} Let $\epsilon_0:H\rightarrow \langle H\rangle_\Omega$ and $\mu>0$ (see \eqref{IV-3}). Then for the absolute constant $C_\chi$ given in \eqref{VIII.21} and for any $s \ge 1,\ 0<\rho< 1/2, \alpha,\beta \le \frac{\rho}{8}$ and $\gamma \le \frac{\rho}{8C_\chi}$ we have \begin{equation} \mathcal{R}_\rho-\rho^{-1}\epsilon_0:\mathcal{D}^{\mu,s}(\alpha,\beta,\gamma)\rightarrow \mathcal{D}^{\mu,s}(\alpha',\beta',\gamma'), \label{eqn:23} \end{equation} continuously, with $\xi:=\frac{\sqrt{\rho}}{4C_\chi}$ (in the definition of the corresponding norms) and \begin{equation} \alpha'=3C_\chi\left(\gamma^2/2\rho\right), \quad \beta'=\beta+3C_\chi\left(\gamma^2/2\rho\right), \quad \gamma'=128C_\chi^2\rho^\mu\gamma. \label{eqn:24} \end{equation} \end{theorem} With some modifications, this theorem follows from \cite{BachChenFroehlichSigal2003}, Theorem 3.8 and its proof, especially Equations (3.104), (3.107) and (3.109). For the norms \eqref{III.5} with $q=0$ it is presented in \cite{Sigal}, Appendix I. A generalization to the $q>0$ case is straightforward. \begin{remark} \label{remIV-4} Subtracting the term $\rho^{-1}\epsilon_0$ from $\mathcal{R}_\rho$ allows us to control the expanding direction during the iteration of the map $\mathcal{R}_\rho$.
In \cite{BachChenFroehlichSigal2003} such a control was achieved by using the change of the spectral parameter $\lambda$ which controls $\langle H\rangle_\Omega$ (see remark in Appendix I). \end{remark} \begin{proposition} \label{prop-LAP} Let $\Delta$ be an open interval in $\mathbb{R}$, $\mu>0$ and let $\rho$ and $\xi$ be as in Theorem \ref{thm-III-2-5}. Then for $H(\lambda)\in C^\infty(\Delta, \mathcal{D}^{\mu,1}(\alpha,\beta,\gamma))$, with $\alpha,\gamma<\frac{\rho}{8}, \beta \le \frac{1}{8}$, the following is true for $1 \ge \theta > 0$ and $\nu \ge 0$ \begin{equation} B_\theta(\mathcal{R}_\rho(H(\lambda))-i0)^{-1}B_\theta\in C^\nu(\Delta)\Rightarrow B_\theta(H(\lambda)-i 0)^{-1}B_\theta\in C^\nu(\Delta) \label{LAP}. \end{equation} \end{proposition} \Proof By Theorem \ref{thm-III-2-5}, $\forall \lambda \in \Delta, H(\lambda) \in \mathrm{dom} \big(\mathcal{R}_\rho\big)$. Then Lemma \ref{LAPprepar} and the invariance of the operator $B_\theta$ under the rescaling $S_\rho$ imply the result. \qed \secct{Renormalization Group} \label{sec-RG} In this section we describe some dynamical properties of the renormalization group $\{\mathcal{R}_\rho^n\}_{n \ge 1}$ generated by the renormalization map $\mathcal{R}_\rho$. A closely related iteration scheme is used in \cite{BachChenFroehlichSigal2003}. First, we observe that $\forall w \in \mathbb{C}, \mathcal{R}_\rho(wH_f) = wH_f$ and $ \mathcal{R}_\rho(w\textbf{1}) =\frac{1}{\rho} w\textbf{1}$. Hence we define $\cM_{fp}:=\mathbb{C}H_f$ and $\cM_{u}:=\mathbb{C}\textbf{1}$ as candidates for a manifold of fixed points of $\mathcal{R}_\rho$ and an unstable manifold for $\cM_{fp}$, respectively. The next theorem identifies the stable manifold of $\cM_{fp}$, which turns out to be of (complex) codimension $1$ and is foliated by the (complex) codimension-$2$ stable manifolds of the individual fixed points in $\cM_{fp}$.
This implies in particular that in a vicinity of $\cM_{fp}$ there are no other fixed points and that $\cM_{u}$ is the entire unstable manifold of $\cM_{fp}$. We introduce some definitions. As an initial set of operators we take $\mathcal{D}:=\mathcal{D}^{\mu,2}(\alpha_0,\beta_0,\gamma_0)$ with $\alpha_0, \beta_0,\gamma_0 \ll 1$. (The choice $s=2$ of the smoothness index in the definition of the polydiscs is dictated by the needs of the Mourre theory applied in the next section.) We also let $\mathcal{D}_s:=\mathcal{D}^{\mu,2}(0,\beta_0,\gamma_0)$ (the subindex s stands for `stable', not to be confused with the smoothness index $s$ which in this section is taken to be 2). We fix the scale $\rho$ so that \begin{equation} \alpha_0, \beta_0,\gamma_0 \ll\rho\le \min(\frac{1}{2}, C_\chi^2) \label{rho} \end{equation} where, recall, the constant $C_\chi$ appears in Theorem \ref{thm-III-2-5} and is defined in \eqref{VIII.21}. Below we will use the $n$-th iteration of the numbers $\alpha_0, \beta_0$ and $\gamma_0$ under the map \eqref{eqn:24}: $$\alpha_n:=c\left(\rho^{-1}(c\rho^\mu)^{n-1}\gamma_0\right)^2,$$ \begin{equation*} \beta_n=\beta_0+\sum_{j=1}^{n-1} c\left(\rho^{-1}(c\rho^\mu)^j\gamma_0\rho\right)^2, \end{equation*} $$\gamma_n=(c\rho^\mu)^n\gamma_0.$$ For $H \in \mathcal{D}$ we denote $H_u:=\langle H\rangle_\Omega$ and $H_s:=H - \langle H\rangle_\Omega\ \mathbf{1}$ (the unstable- and stable-central-space components of $H$, respectively). Note that $H_s \in \mathcal{D}_s$. Recall that a complex function $f$ on an open set $\mathcal{D}$ in a complex Banach space $\mathcal{W}$ is said to be \textit{analytic} if $\forall \xi \in \mathcal{W},\ f(H+ \tau \xi)$ is analytic in the complex variable $\tau$ for $|\tau|$ sufficiently small (see \cite{Berger}). Our analysis uses the following result from \cite{FroehlichGriesemerSigal2009}: \begin{theorem} \label{stable-manif} Let $\delta_n :=\nu_n\rho^{n}$ with $4 \alpha_n \leq \nu_{n} \leq\frac{1}{18}$.
There is an analytic map $e:\mathcal{D}_s \rightarrow \mathbb{C}$ s.t. $e(H) \in \mathbb{R}$ for $H=H^*$ and \begin{equation} U_{\delta_n} \subset dom(\mathcal{R}_\rho^{n})\ \mbox{and}\ \mathcal{R}_\rho^{n}(U_{\delta_n}) \subset \mathcal{D}^{\mu,2}(\rho/8,\beta_{n},\gamma_{n}) \label{eqn:30aaa} \end{equation} where $U_\delta:= \{H \in \mathcal{D}|\ |e(H_s)+ H_u| \leq \delta\ \}.$ Moreover, $\forall H \in U_{\delta_n}$ and $\forall n \geq 1$, there are $E_{n} \in \mathbb{C}$ and $w_n(r)\in \mathbb{C}$ s.t. $|E_{n}| \leq 2\nu_n$, $|w_n(r)-1| \leq \beta_n$, $ w_n$ is $C^2$, \begin{equation} \mathcal{R}_\rho^n(H)=E_{n}+w_n(H_f)H_f +O_{\mathcal{W}_{op}^s}(\gamma_n), \label{eqn:30b} \end{equation} and $E_{n}$ and $w_n(r)$ are real if $H$ is self-adjoint. \end{theorem} Moreover, one can show that, as $ n \rightarrow \infty$, the $w_n(r) $ converge in $L^\infty$ to some number (constant function) $w\in \mathbb{C}$ (\cite{FroehlichGriesemerSigal2009}). This theorem implies that $\cM_{fp}:=\mathbb{C}H_f$ is (locally) a manifold of the fixed points of $\mathcal{R}_\rho$ and $\cM_{u}:=\mathbb{C}\textbf{1}$ is an unstable manifold and the set \begin{equation} \cM_s:=\bigcap_n U_{\delta_n}= \{H\in \mathcal{D} |\ e(H_s)=-H_u\}\label{eqn:30a} \end{equation} is a local stable manifold for the fixed point manifold $\cM_{fp}$ in the sense that $\forall H \in \cM_s\ \exists w \in \mathbb{C}$ s.t. \begin{equation}\mathcal{R}_\rho^n(H)\rightarrow wH_f\ \mbox{in the sense of}\ \mathcal{W}_{op}^s \label{eqn:30aa} \end{equation} as $ n \rightarrow \infty$. Moreover, $\cM_s$ is an invariant manifold for $\mathcal{R}_\rho$: $\cM_s \subset \mathrm{dom}(\mathcal{R}_\rho)$ and $\mathcal{R}_\rho(\cM_s) \subset \cM_s$, though we do not need this property here and therefore we do not show it. The next result reveals the spectral significance of the map $e$: \begin{theorem} Let $H\in \mathcal{D}$. Then the number $E:=e(H_s)+H_u$ is an eigenvalue of the operator $H$.
Moreover, if $H$ is self-adjoint, then it is the ground state energy of $H$. \end{theorem} Theorems V.2 and V.3 were proven in \cite{Sigal} for somewhat simpler Banach spaces which do not contain the derivatives $(k\partial_k)^q$. However, an extension to the Banach spaces which are used in this paper is straightforward and is omitted here. \secct{Mourre Estimate} \label{sec-VII} In this section we prove the Mourre estimate for the operator-family $H^{(n)}(\lambda):= \mathcal{R}_{\rho}^n(H)$ with $\lambda:=-H_u$. This gives the limiting absorption principle for $H^{(n)}(\lambda)$. The latter is then transferred with the help of Theorem \ref{LAPtransfer} to the limiting absorption principle for the operator $H$. In Section \ref{sec-VIII} this limiting absorption principle will be connected to the limiting absorption principle for the family $H_g-\lambda$, where $H_g$ is either $H^{PF}_g$ or $H^{N}_g$. \begin{theorem} \label{thm-VII-1} Let $H(\lambda)=H(\lambda)^*\in C^\infty(\Delta,\mathcal{D}^{\mu,2}(\alpha,\beta,\gamma)),$ where $\Delta$ is an open interval in $\mathbb{R}$, and $\Delta^\delta:=[\delta,\infty)$. If $\delta\gg \gamma$ and $\beta \leq \frac{1}{3}$, then \begin{equation} B_\theta(H(\lambda)-i0)^{-1}B_\theta\in C^\nu(\Delta\cap E^{-1}(\Delta^\delta)), \label{eqn:67} \end{equation} where $E:\lambda\rightarrow E(\lambda)$ with $E(\lambda):=\langle H(\lambda)\rangle_\Omega$, for any $1/2<\theta \le 1$ and $\nu<\theta-\frac{1}{2}$. \end{theorem} \Proof In what follows we omit the argument $\lambda$. Let $E:=w_{0,0}[0], T:=w_{0,0}[H_f] - w_{0,0}[0]\ \mbox{and}\ W:=\sum_{m+n \geq 1}\chi_1 W_{m,n}[{\underline w}]\chi_1$, so that $H=E\mathbf{1} + T + W$. Let $H_1:=H-E=T+W$. We write $i \comm{H_1}{B}=\tT+\tW$, where $\tT:=i \comm{T}{B}=T'(H_f)H_f$ and $\tW:=i \comm{W}{B}$. By relation (III.28) we have for $s=2$ $$\|\tW\|_{\mathcal{W}_{op}^{\mu,s-1}}\le c\gamma,$$ where the $\xi$-parameter in the norm on the l.h.s.
should be taken slightly smaller than the $\xi$-parameter in the Banach space $\mathcal{W}_{op}^{\mu,s}$ for $W$. The shift in the smoothness index from $s$ to $s-1$ is due to the fact that the coupling functions for the operator $i \comm{W}{B}$ are $(k \cdot \nabla_k + \frac{3(m+n)}{2})w_{m,n}(r,k)$, where $k:=k^{(m,n)}$, and therefore lose one derivative compared to the coupling functions, $w_{m,n}(r,k)$, of $W$. We write \begin{equation*} i\comm{H_1}{B}=\frac{1}{2}H_1+\tT-\frac{1}{2} T+\tW-\frac{1}{2}W. \end{equation*} Remembering that the operator norm is dominated by the $\mathcal{W}^{\mu,0}_{op}-$ norm we see that the last two terms are bounded as \begin{equation} \|\tW-\frac{1}{2}W\|\le C\gamma. \label{eqn:69} \end{equation} Furthermore using the estimate $|T'(r) - 1| < \beta$ and the definition of $\tT$ we find \begin{equation*} \tT(r)-\frac{1}{2}T(r)\ge (1-\beta)r-\frac{1}{2}(1+\beta)r=\frac{1}{2}(1-3\beta)r \end{equation*} and therefore $$\tT-\frac{1}{2}T\ge\inf_{0\le r\le \infty}\left( \tT(r)-\frac{1}{2}T(r) \right) $$ \begin{equation*} \ge\inf_{0\le r\le \infty}\frac{1}{2}(1-3\beta)r=0.\label{eqn:70} \end{equation*} This gives $i\comm{H_1}{B}\ge\frac{1}{2}H_1-c\gamma$ and therefore for $\Delta':=\left(\frac{1}{2}\delta,\infty\right)$, $\delta\gg\gamma$, \begin{equation} E_{\Delta'}(H_1)i\comm{H_1}{B}E_{\Delta'}(H_1)\ge\frac{1}{4}\delta E_{\Delta'}(H_1)^2. \label{eqn:71} \end{equation} This proves the Mourre estimate for the operator $H_1\equiv H_1(\lambda)$. Moreover, since $H(\lambda)\in C^\infty(\Delta,\mathcal{D}^{\mu,2}(\alpha,\beta,\gamma))$, we have that the commutators $[H_1, B]$ and $[[H_1, B], B]$ are bounded relative to the operator $H_1$ (this is guaranteed by taking the index $s=2$ for the polydisc $\mathcal{D}^{\mu,s}(\alpha,\beta,\gamma)$). Hence the standard Mourre theory is applicable and gives H\"{o}lder continuity in the spectral parameter $\sigma$ as well as in the ``operator $H_1(\lambda)$'', i.e.
in $\lambda$ (see \cite{HunzikerSigal}): \begin{equation} B_\theta R_1(\lambda,\sigma)E_{\Delta'}(H_1(\lambda))B_\theta\in C^\nu(\Delta\times\RR), \label{eqn:72} \end{equation} where $\nu < \theta -1/2$, we have restored the argument $\lambda$ in our notation, and $R_1(\lambda,\sigma):=(H_1(\lambda)-\sigma)^{-1}$. Since $$B_\theta R_1(\lambda,\sigma)B_\theta=B_\theta R_1(\lambda,\sigma)E_{\Delta'}(H_1(\lambda))B_\theta$$ \begin{equation} +B_\theta R_1(\lambda,\sigma)(\mathbf{1}-E_{\Delta'}(H_1(\lambda)))B_\theta \label{eqn:73} \end{equation} and since the last term on the right hand side is $C^\nu(\Delta)$ in $\lambda$ and $C^\infty(\Delta^\delta)$ in $\sigma$ we conclude from \eqref{eqn:72} that \begin{equation} B_\theta R_1(\lambda,\sigma) B_\theta\in C^\nu(\Delta\times\Delta^\delta). \label{eqn:74} \end{equation} Now take $\sigma = E(\lambda)+i0$. Since by the condition of the theorem $E(\lambda):=\langle H(\lambda)\rangle_\Omega\in C^\infty(\Delta)$ we conclude that \eqref{eqn:67} holds.\qed In the previous section the parameter $\delta_n$ was allowed to change in a certain range (see Theorem V.2). In this section we make a particular choice of $\delta_n$, namely $\delta_n :=\frac{1}{18}\rho^{n}$. Recall the definition of the set $U_{\delta}$ in Theorem \ref{stable-manif}. \begin{theorem} \label{thm-VII-2} Assume \eqref{rho}. Let $ n \geq 1$, $\delta_n :=\frac{1}{18}\rho^{n}$ and let $ H=H^* \in U_{\delta_n} $ and $\Delta_{\delta_n}:= [e(H_s)+\frac{\rho}{2}\delta_n,e(H_s)+\delta_n]$. Then \begin{equation} B_\theta(H_s-\lambda-i0)^{-1}B_\theta\in C^\nu(\Delta_{\delta_n}) \label{eqn:75} \end{equation} for any $1/2<\theta \le 1$ and $\nu<\theta-\frac{1}{2}$. \end{theorem} \Proof Let $D_n$ be the disc of radius $\delta_n$ centered at $e(H_s)$. Since, by \eqref{eqn:30aaa}, $U_{\delta_n} \subset D(\mathcal{R}_\rho^{n})$, the operator $H^{(n)}(\lambda):=\mathcal{R}_\rho^{n}(H)$, with $\lambda:=-H_u$, is well defined.
By \eqref{eqn:30aaa}, $D_n\ni\lambda\rightarrow H^{(n)}(\lambda)\in \mathcal{D}^{\mu,2}(\frac{1}{8}\rho,\beta_{n-1},\gamma_{n-1})$ is $C^\infty$. Moreover, $H^{(n)}(\lambda)=H^{(n)}(\lambda)^*,\ \forall \lambda \in D_n \cap \mathbb{R}$. Hence, since $\rho\gg\gamma_{n-1}$, $\beta_{n-1} \leq \frac{1}{3}$, by \eqref{rho}, and $\Delta_{\delta_n} \subset D_n$, we have by Theorem \ref{thm-VII-1} that \begin{equation*} B_\theta(H^{(n)}(\lambda)-i0)^{-1} B_\theta\in C^\nu(D_n\cap E_n^{-1}(\Delta^{\frac{1}{50}\rho})), \end{equation*} where $0 \le \nu < \theta - 1/2$ and, as before, $E_{n}(\lambda)\equiv E_{n}(\lambda, H_s):=\big( H^{(n)}(\lambda)\big)_u$, which, by the above conclusion, is $C^\infty$. We need the following proposition to describe the set $E_n^{-1}(\Delta^{\frac{1}{50}\rho})$. \begin{proposition} \label{prop-VII-3} Let $n\geq0$, $\delta_n :=\frac{1}{18}\rho^{n}$ and $A_{\delta_n}:= \{\frac{\rho}{2}\delta_n \leq |\lambda -e(H_s)| \leq \delta_n \}$. For $H \in U_{\delta_n} $ we denote $E_{n}(\lambda, H_s):=\big( \mathcal{R}_{\rho}^n(H)\big)_u\equiv\langle \mathcal{R}_{\rho}^n(H)\rangle_\Omega,\ \lambda =-H_u$. Then \begin{equation} |E_{n}(\lambda, H_s)|\ge \frac{1}{50}\rho\ \,\ \mbox{for}\ \lambda\in A_{\delta_n}. \label{eqn:63} \end{equation} \end{proposition} \Proof In this proof we do not display the argument $H_s$. Let $\lambda \in A_{\delta_n}$ with $\delta_n$ given in the proposition. Define $E_{0 n}(\lambda)$ by the equation \begin{equation} E_{n}(\lambda)=\rho^{-n}(E_{0 n}(\lambda)-\lambda). \label{eqn:55} \end{equation} The following estimate is shown in \cite{Sigal} (see Eqn (V.27) of the latter paper): \begin{equation} |E_{0 n}(\lambda)-e| \le\frac{1}{5}|\lambda-e|+ (1-\rho)^{-1}\rho^{n+1} \alpha_{n+1}.
\end{equation} This inequality and the definition of $\alpha_n$ imply \begin{align} |E_{0 n}(\lambda)-\lambda|&\ge|\lambda-e|-|E_{0 n}(\lambda)-e|\nonumber\\ &\ge \frac{4}{5}|\lambda-e|-2\gamma_0^2 c(c^2\rho^{2\mu+1})^n\rho^{-1}.\label{eqn:64} \end{align} Due to $2\gamma_0^2 c\rho^{-1}(c^2\rho^{2\mu+1})^{n}\ll \rho^{n},$ \eqref{eqn:64} gives \begin{equation} |E_{0 n}(\lambda)-\lambda| \ge \frac{1}{50}\rho^{n+1}. \label{eqn:65} \end{equation} Due to \eqref{eqn:55} this implies the statement of the proposition.\qed Proposition \ref{prop-VII-3} says that \begin{equation} E_n:\Delta_{\delta_n}\ni\lambda\rightarrow E_{n}(\lambda)\in\ \Delta^{\frac{1}{50}\rho}. \end{equation} Hence $E_{n}^{-1}(\Delta^{\frac{1}{50}\rho}) \supset \Delta_{\delta_n} $. Since $\Delta_{\delta_n} \subset D_n$, we have that \begin{equation} B_\theta(H^{(n)}(\lambda)-i0)^{-1}B_\theta\in C^\nu(\Delta_{\delta_n}) \label{eqn:75a} \end{equation} which, due to Proposition \ref{prop-LAP}, gives \eqref{eqn:75}. \qed \secct{Initial Conditions for the Renormalization Group} \label{sec-IC} Now we turn to the operator families $H_g-\lambda$ in which we are interested. Here the operator $H_g=H_0+gI$ is given either by \eqref{Hn} or by \eqref{Hpf}. These operators do not belong to the Banach spaces defined above. We define an additional renormalization transformation which acts on such operators and maps them into the disc $\mathcal{D}^{\mu,s}(\alpha_0,\beta_0,\gamma_0)$ for some appropriate $\alpha_0$, $\beta_0$, $\gamma_0$. Let $H_{pg}$ denote either $H^{PF}_{p}$ or $H^{N}_{p}$ and let $e^{(p)}_{0}< e^{(p)}_{1} < ...$ be the eigenvalues of $H_{pg}$, so that $e^{(p)}_{0}$ is its ground state energy. Let $P_{p}$ be the orthogonal projection onto the eigenspace corresponding to $e^{(p)}_0$.
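The smooth Feshbach--Schur (decimation) map $F_{\tau\pi}$ entering the initial transformation defined below is isospectral in the sense of Theorem \ref{thm-II-1}. A hedged finite-dimensional caricature of this property (a random symmetric matrix in place of a Hamiltonian, coordinate projections in place of $\pi_0$, and $\tau$ omitted):

```python
import numpy as np

# Finite-dimensional caricature of the Feshbach-Schur map: for a projection
# pi (here onto the first two coordinates) and a self-adjoint H such that
# pibar (H - lam) pibar is invertible on ran(pibar), lam is an eigenvalue of
# H iff 0 is an eigenvalue of
#   F = pi (H - lam) pi - pi H pibar (pibar (H - lam) pibar)^{-1} pibar H pi
# restricted to ran(pi).  The matrix H below is arbitrary test data.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2                       # a self-adjoint "Hamiltonian"
lam = np.linalg.eigvalsh(H)[0]          # its ground state energy

Hlam = H - lam * np.eye(4)
K = Hlam[2:, 2:]                        # pibar (H - lam) pibar on ran(pibar)
F = Hlam[:2, :2] - H[:2, 2:] @ np.linalg.inv(K) @ H[2:, :2]
print(np.linalg.eigvalsh(F))            # one eigenvalue vanishes (isospectrality)
```

The same mechanism, with the spectral projection $P_p\otimes\chi_{H_f\le\rho_0}$ in place of the coordinate projection, underlies the map $\mathcal{R}^{(0)}_{\rho_0}$ below.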
On Hamiltonians acting on $\mathcal{H}_{p}\otimes\mathcal{H}_f$ which were described above, we define the map \begin{equation} \mathcal{R}_{\rho_0}^{(0)}=\rho_0^{-1} S_{\rho_0}\circ F_{\tau_0 \pi_0}, \label{eqn:25} \end{equation} where $ \rho_0 \in (0, \e^{(p)}_{gap}]$ is an initial photon energy scale (recall that $\e^{(p)}_{gap}:= \e^{(p)}_1-\e^{(p)}_0$ and $\e^{(p)}_j$ are the eigenvalues of $H_p$) and where \begin{equation} \tau_0(H_g-\lambda)=H_{0g}-\lambda\ \mbox{and}\ \pi_0 \equiv \pi_0[H_f]:=P_{p}\otimes\chi_{H_f\le\rho_0} \label{pi_0} \end{equation} for any $\lambda \in \mathbb{C}$. Recall the convention $\bar{\pi}_0 := \textbf{1} - \pi_0$. Define the set \begin{equation}I_0:=\{z\in \mathbb{C}\ |\ \mathrm{Re}\, z \leq e^{(p)}_0 + \frac{1}{2}\rho_0\}.\label{eqn:26a} \end{equation} We assume $\rho_0\gg g^2$. To simplify the notation we assume that the ground state energy, $\e^{(p)}_0$, of the operator $H_{p}$ is simple (otherwise we would have to deal with matrix-valued operators on $\mathcal{H}_f$). We have \begin{theorem} \label{thm-V.1} Let $H_g$ be the Hamiltonian given either by \eqref{Hpf} or by \eqref{Hn} and let $\rho_0\gg g^2,\ \mu > -1/2$ and $\lambda\in I_0$. Then \begin{equation} H_g-\lambda\in \mathrm{dom}(\mathcal{R}_{\rho_0}^{(0)}). \label{V.4} \end{equation} Furthermore, define the family of operators $H_{\lambda}^{(0)}:=\mathcal{R}_{\rho_0}^{(0)}(H_g -\lambda) \mid_{ \Ran {P}_{p}\otimes \textbf{1}}$. Then $H_{\lambda}^{(0)}=H_{\lambda}^{(0)*}$, for $\lambda\in I_0\bigcap \mathbb{R} $, and \begin{equation} H_{\lambda}^{(0)}-\rho_0^{-1}(e^{(p)}_{0}-\lambda)\in \mathcal{D}^{\mu,2}(\alpha_0,\beta_0,\gamma_0), \label{V.5} \end{equation} where, with $\mu$ as in Eqn \eqref{I.7}, $\alpha_0=O(g^2 \rho_0^{-1})$, $\beta_0=O(g^2)$, and $\gamma_0=O(g\rho_0^\mu)$, for $\lambda\in I_0$. Moreover, $\mathcal{R}_{\rho_0}^{(0)}(H_g-\lambda)$ is analytic in $\lambda\in I_0$.
In particular, these results apply to the Pauli-Fierz and Nelson Hamiltonians by taking $\mu=1/2$ and $\mu>0$, respectively. \end{theorem} Note that if $\psi^{(p)}$ is a ground state of $H_{pg}$ with the energy $e^{(p)}_{0}$ and $\psi_0=\psi^{(p)}\otimes\Omega$, then we have \begin{equation} e^{(p)}_{0}-\lambda=\langle H-\lambda\rangle_{\psi_0}. \end{equation} Theorem \ref{thm-V.1} is proven in \cite{Sigal}, Appendix II, for somewhat simpler Banach spaces which do not contain the derivatives $(k\partial_k)^q$. However, an extension to the Banach spaces which are used in this paper is straightforward and is omitted here. Note that $ K:= \mathcal{R}_{\rho_0}^{(0)}(H_g-\lambda)\mid _{\Ran ({\bar{P}}_{pj}\otimes\ \mathbf{1})}= (H_{0g}-\lambda)\mid_{ \Ran ({\bar{P}}_{pj}\otimes\ \mathbf{1})} $ and therefore $\forall \lambda \in I_0 \cap \mathbb{R},\ \sigma(K) = \sigma(H_{pg})\setminus\{\lambda_j\} +[0, \infty) - \lambda$. Hence $$\forall \lambda \in I_0 \cap \mathbb{R}, K \ge e^{(p)}_1-e^{(p)}_0 - \frac{1}{8}\rho_0\ge \frac{7}{8}(e^{(p)}_1-e^{(p)}_0).$$ Therefore $0 \notin \sigma(K)$. This, the relation $\sigma (\mathcal{R}_{\rho_0}^{(0)}(H_g-\lambda)) = \sigma(H_{\lambda}^{(0)}) \cup \sigma(K)$ and Theorem \ref{thm-II-1} imply that $H_{\lambda}^{(0)}$ is isospectral to $H_g-\lambda$ in the sense of Theorem \ref{thm-II-1}. Moreover, similarly to Proposition IV.3, and using the relation \begin{equation} \mathcal{R}_{\rho_0}^{(0)}(H_g-\lambda)^{-1}=H_{\lambda}^{(0)-1}(P_{pj}\otimes\ \mathbf{1}) +(H_{0g}-\lambda)^{-1}({\bar{P}}_{pj}\otimes\ \mathbf{1}), \label{VIII.7a} \end{equation} one shows the following result. \begin{proposition} Let $\mu>0$, $\rho_0 \gg g^2$ and $\Delta_0\subseteq I_0\bigcap \mathbb{R} $. If $H_g$ is given in either \eqref{Hpf} or \eqref{Hn}, then \begin{equation} \label{V.19} B_s(H_{\lambda}^{(0)}-i0)^{-1}B_s\in C^\nu(\Delta_0)\Rightarrow B_s(H_g-\lambda-i0)^{-1}B_s\in C^\nu(\Delta_0).
\end{equation} \end{proposition} \secct{Proof of Theorem I.1} \label{sec-Thm I.1} Let $H_g$ be a Hamiltonian given in either ~\eqref{Hpf} or ~\eqref{Hn}. Recall the definition \begin{equation} \label{eqn:31} H^{(0)}_\mu:=\mathcal{R}_{\rho_0}^{(0)}(H_g-\mu)\mid_{ \Ran {P}_{p}\otimes\ \textbf{1}},\ \mu\in I_0. \end{equation} The r.h.s. is well defined according to Theorem~\ref{thm-V.1}. By Equation \eqref{V.5}, if $\mu\in I_0$, then \begin{equation} H^{(0)}_\mu-\rho_0^{-1}(e^{(p)}_0-\mu)\in \mathcal{D}^{\mu,2}(\alpha_0,\beta_0,\gamma_0), \label{eqn:35} \end{equation} where $\alpha_0,\beta_0$ and $\gamma_0$ are given in Theorem \ref{thm-V.1}. The condition \eqref{rho} is satisfied if \begin{equation} g^2\rho_0^{-1},\ g\rho_0^\mu\ll\rho\le \frac{1}{2}, \label{eqn:32} \end{equation} which can be arranged since by our assumption $g \ll 1$ and $\rho_0$ can be fixed anywhere in the interval $(0, \e^{(p)}_{gap}]$. Let $H_{\mu s} : = (H^{(0)}_{\mu})_s=H^{(0)}_{\mu} - \langle H^{(0)}_{\mu}\rangle_\Omega\ \mathbf{1}$ and $H_{\mu u} : = (H^{(0)}_{\mu})_u = \langle H^{(0)}_{\mu}\rangle_\Omega $, the stable-central and unstable components of the operator $H^{(0)}_{\mu}$, respectively (see Section \ref{sec-RG}), and let $e:\mathcal{D}_s \rightarrow \mathbb{C}$ be the map introduced in Theorem \ref{stable-manif}. We introduce the subsets: \begin{equation} \label{VII.20} D_{\delta}: = \{ \mu \in I_0 | |e (H_{\mu s}) + H_{\mu u} | \le \delta \} \end{equation} and \begin{equation} \label{VII.21} E^{\mu}_{\delta} : = \{ \lambda \in \mathbb{R}\ |\ {\rho \over 8} \delta \le | \lambda - e (H_{\mu s}) | \le \delta \}.\end{equation} Recall, $\delta_n = {1 \over 18} \rho^n$ for $n \ge 0$. Let $\theta > {1 \over 2}$ and $0 < \nu < \theta - {1 \over 2}$. Then Theorem~\ref{thm-VII-2}, with $H_{ s} =H_{\mu s} $, implies that \begin{equation*} B_\theta (H_{\mu s} - \lambda - i 0)^{-1} B_\theta \in C^\nu (\{ (\mu, \lambda) \in (I_0 \cap \mathbb{R}) \times E^{\mu}_{\delta_n}\}). 
\end{equation*} Since $D_{\delta_n} \setminus D_{{\rho \over 8} \delta_n} \owns \mu \rightarrow - H_{\mu u} \in E^{\mu}_{\delta_n},$ the latter equation yields, in turn, that $$B_\theta (H^{(0)}_\mu - i 0)^{-1} B_\theta \in C^\nu (D_{\delta_n} \setminus D_{{\rho \over 2} \delta_n}),$$ which, due to Proposition VII.4, yields \begin{equation} \label{VII.22} B_\theta (H_g - \mu - i 0)^{-1} B_\theta \in C^\nu (D_{\delta_n} \setminus D_{{\rho \over 2} \delta_n}). \end{equation} Let $\e_g$ be the solution to the equation $e(H_{\mu s}) = - H_{\mu u}$ for $\mu$. By Theorem V.3, $0=e(H_{\e_g s})+H_{\e_g u}$ is the ground state energy of the operator $H^{(0)}_{\e_g}$ and therefore, by Theorem II.1, $\e_g$ is the ground state energy of the operator $H_g$. In the lemma below we show that for $g$ sufficiently small \begin{equation} \label{VII.23} D_{\delta_n} \setminus D_{{\rho \over 8} \delta_n},\ \forall n \ge 0,\ \mbox{cover}\ (\e_g, \e_g + {1 \over 18} \rho_0). \end{equation} This together with \eqref{VII.22} implies the statement of Theorem I.1. \qed \begin{lemma} \label{lemma-VII-4}For $g$ sufficiently small, \eqref{VII.23} holds. \end{lemma} \begin{proof} We claim that for $g$ sufficiently small and for $n \ge 0$ \begin{equation} \label{VII.24} D (\e_g, {\rho_0 \over 4} \delta_n) \subset D_{\delta_n}. \end{equation} We prove this claim by induction in $n$. We assume it is true for $n \le j - 1$ and prove it for $n = j$. For $j = 0$, the induction assumption is absent and so our proof of the induction step yields also the first step. We introduce the notation $e (\mu) : = e (H_{\mu s}).$ First we use the relation $e(\e_g ) = - H_{\e_g u}$ to obtain \begin{equation} \label{VII.25} | e (\mu) + H_{\mu u} | \le | e (\mu) - e (\e_g) | + | H_{\e_g u} - H_{\mu u} |.
\end{equation} Next, let $\Delta_0 E (\mu)$ be defined by the relation $H_{\mu u} = : \rho^{-1}_0 (\mu - e_0) - \Delta_0 E (\mu).$ Then by \eqref{eqn:35} and analyticity of $\Delta_0 E (\mu)$ in $I_0$, $| \partial_{\mu} \Delta_0 E (\mu) | \le {\alpha_0/ \rho_0}$. The last two relations imply $$| H_{\e_g u} - H_{\mu u} | = | \rho^{-1}_0 (\e_g - \mu) + \Delta_0 E (\e_g)$$ \begin{equation} \label{VII.26} - \Delta_0 E (\mu) | \le \rho^{-1}_0 (1 + \alpha_0) | \e_g - \mu|. \end{equation} Recall the definition $E_{n}(\lambda, H_{\mu s}):=\big( \mathcal{R}_{\rho}^n(H_{\mu})\big)_u\equiv\langle \mathcal{R}_{\rho}^n(H_{\mu})\rangle_\Omega,\ \lambda =-H_{\mu u}$. Now we estimate the first term on the r. h. s. of \eqref{VII.25}. Define \begin{equation} \Delta_n E(\lambda, H_{\mu s}):=E_{n}(\lambda, H_{\mu s})-\rho^{-1}E_{n-1}(\lambda, H_{\mu s}). \label{eqn:53} \end{equation} It is shown in \cite{Sigal}, Eqns (V.24)-(V.25) that $e(H_s)$ satisfies the equation \begin{equation} e(H_s)= \sum_{i=1}^\infty\rho^i\Delta_i E(e(H_s), H_s), \label{eqn:60} \end{equation} where the series on the right hand side converges absolutely by the estimate \begin{equation} |\partial_{\lambda}^m \Delta_n E(\lambda)|\le\alpha_n (\frac{1}{12} \rho^{n+1})^{-m}\ \mbox{for}\ n\le j\ \mbox{and}\ m= 0,1, \label{eqn:54} \end{equation} shown in \cite{FroehlichGriesemerSigal2009}. The relation \eqref{eqn:60} together with the definitions $e (\mu) : = e (H_{\mu s})$ and $e (\e_g) : = e(H_{\e_g s} ) = - H_{\e_g u}$ implies \begin{equation} e(\mu)= \sum_{i=1}^\infty\rho^i\Delta_i E(e(\mu), H_{\mu s}) \label{eqn:60a} \end{equation} and \begin{equation} e(\e_g)= \sum_{i=1}^\infty\rho^i\Delta_i E(e(\e_g), H_{\e_g s}). \label{eqn:60b} \end{equation} We estimate the difference between these series. It follows from the analyticity of $E_n (\lambda, H_{s})$ in $H_s$, see \cite{FroehlichGriesemerSigal2009}, Proposition V.3, that $\Delta_i E (\lambda, H_{\mu s})$ are analytic in $\mu \in D_{\delta_i}, i \le j-1$.
Now, by the induction assumption $D_{\delta_i} \supset D(\e_g, {\rho_0 \over 4} \delta_i)$ for $i \le j-1$. Hence using the Cauchy formula we conclude from \eqref{eqn:54} that for $i \le j-1$ $$| \partial_\mu \Delta_i E (\lambda, H_{\mu s}) | \le \frac{4 \alpha_i}{(1 - \rho) \rho_0 \delta_i}\ \mbox{on}\ D (\e_g, {\rho_0 \over 4} \delta_{i}).$$ The latter estimate together with \eqref{eqn:54} gives $$\sum\limits_{i = 1}^\infty \rho^i | \Delta_i E (e (\mu), H_{\mu s}) - \Delta_i E (e (\e_g), H_{\e_g s}) |$$ $$ \le \sum\limits_{i = 1}^{j - 1} \rho^i ({\alpha_i \over \delta_i} | e (\mu) - e (\e_g) | + \frac{4 \alpha_i}{(1 - \rho) \rho_0 \delta_i}| \mu - \e_g |) + 2 \sum\limits_{i = j}^\infty \rho^i \alpha_i $$ $$\le 20 \alpha_1 | e (\mu) - e (\e_g) |+ {80\alpha_1 \over (1 - \rho) \rho_0} | \mu - \e_g | + 4 \alpha_j \rho^j $$ on $D (\e_g, {\rho_0 \over 4} \delta_j)$, where we used that $\delta_j = {1 \over 18} \rho^{j}$. This estimate together with the relations \eqref{eqn:60a} and \eqref{eqn:60b} gives \begin{equation} \label{VII.27}| e (\mu) - e (\e_g) | \le {40\alpha_1 \over 1 - \rho} \delta_j + 160 \alpha_{j} \delta_j \end{equation} in $D (\e_g, {\rho_0 \over 4} \delta_j)$, provided $\alpha_1 \le {1 \over 40}$. This estimate together with \eqref{VII.25} and \eqref{VII.26} and the definition of $D_{\delta_n}$ implies \eqref{VII.24} with $n=j$, provided \begin{equation} \label{VII.28}{1 + \alpha_0 \over 4} +{40 \alpha_1 \over 1 - \rho} + 160 \alpha_0 \le 1. \end{equation} Remembering the definition of $\alpha_j$, we see that the latter conditions can be easily arranged by taking $g$ sufficiently small. This proves \eqref{VII.24}. Next we show that for $g$ sufficiently small \begin{equation} \label{VII.29}D_{\tau \delta_n} \subset D (\e_g, 1.5 \rho_0 \tau \delta_n),\ \mbox{where}\ \tau = O(1).\end{equation} The proof of this embedding proceeds by induction in $n$ along the same lines as the proof of \eqref{VII.24} given above.
We have $$| e (\mu) + H_{\mu u} | \ge | H_{\e_g u} - H_{\mu u} | - | e (\mu) - e (\e_g) |.$$ Again using the equality in \eqref{VII.26} and the estimates $| \partial_\mu \Delta_0 E (\mu) | \le \rho^{-1}_0 \alpha_0$ and \eqref{VII.27}, we find $$| e (\mu) + H_{\mu u} | \ge \rho^{-1}_0 (1 -\alpha_0 - {160\alpha_1 \over \rho (1 - \rho)}) | \mu - \e_g | - 80 \rho^{-1} \alpha_j \delta_j$$ in $D_{\delta_j}$, provided $\alpha_1 \le {1 \over 40}$. Let $\mu \in D_{\tau \delta_j}$. Then $|e (\mu) + H_{\mu u} | \le \tau \delta_j$, which together with the previous estimate gives $$ | \mu - \e_g | \le \rho_0 (1 - \alpha_0 - {160 \alpha_1 \over \rho (1 - \rho)})^{-1} (\tau + {80 \alpha_j \over \rho}) \delta_j.$$ This yields \eqref{VII.29}, provided $g$ is sufficiently small. Embeddings \eqref{VII.24} and \eqref{VII.29} with $\tau = {\rho \over 8}$ imply that $$D_{\delta_n} \setminus D_{{\rho \delta_n \over 8}} \supset D (\e_g, {\rho_0 \over 4}\delta_n) \setminus D (\e_g, {3\rho_0 \rho \over 16} \delta_n).$$ Since $\forall n, {\rho_0 \over 4} \delta_n > {3\rho_0 \rho \over 16} \delta_{n - 1}$, the sets on the r.h.s. cover the interval $(\e_g, \e_g + {1 \over 18} \rho_0)$ and therefore so do the sets on the l.h.s. Hence the lemma follows. \end{proof} \secct{Supplement: Background on the Fock space, etc} \label{sect-SA} Let $ \fh$ be either $ L^2 (\RR^3, \mathbb{C}, d^3 k)$ or $ L^2 (\RR^3, \mathbb{C}^2, d^3 k)$. In the first case we consider $ \fh$ as the Hilbert space of one-particle states of a scalar Boson or a phonon, and in the second case, of a photon. The variable $k\in\RR^3$ is the wave vector or momentum of the particle. (Recall that throughout this paper, the velocity of light, $c$, and Planck's constant, $\hbar$, are set equal to 1.)
The Bosonic Fock space, $\mathcal{F}$, over $\fh$ is defined by \begin{equation} \label{eq-I.10} \mathcal{F} \ := \ \bigoplus_{n=0}^{\infty} \mathcal{S}_n \, \fh^{\otimes n} \comma \end{equation} where $\mathcal{S}_n$ is the orthogonal projection onto the subspace of totally symmetric $n$-particle wave functions contained in the $n$-fold tensor product $\fh^{\otimes n}$ of $\fh$; and $\mathcal{S}_0 \fh^{\otimes 0} := \CC $. The vector $\Om:=1 \bigoplus_{n=1}^{\infty}0$ is called the \emph{vacuum vector} in $\mathcal{F}$. Vectors $\Psi\in \mathcal{F}$ can be identified with sequences $(\psi_n)^{\infty}_{n=0}$ of $n$-particle wave functions, which are totally symmetric in their $n$ arguments, and $\psi_0\in\CC$. In the first case these functions are of the form, $\psi_n(k_1, \ldots, k_n)$, while in the second case, of the form $\psi_n(k_1, \lambda_1, \ldots, k_n, \lambda_n)$, where $\lambda_j \in \{-1, 1\}$ are the polarization variables. In what follows we present some key definitions in the first case only limiting ourselves to remarks at the end of this appendix on how these definitions have to be modified for the second case. The scalar product of two vectors $\Psi$ and $\Phi$ is given by \begin{equation} \label{eq-I.11} \langle \Psi \, , \; \Phi \rangle \ := \ \sum_{n=0}^{\infty} \int \prod^n_{j=1} d^3k_j \; \overline{\psi_n (k_1, \ldots, k_n)} \: \vphi_n (k_1, \ldots, k_n) \period \end{equation} Given a one particle dispersion relation $\omega(k)$, the energy of a configuration of $n$ \emph{non-interacting} field particles with wave vectors $k_1, \ldots,k_n$ is given by $\sum^{n}_{j=1} \omega(k_j)$. We define the \emph{free-field Hamiltonian}, $H_f$, giving the field dynamics, by \begin{equation} \label{eq-I.17a} (H_f \Psi)_n(k_1,\ldots,k_n) \ = \ \Big( \sum_{j=1}^n \omega(k_j) \Big) \: \psi_n (k_1, \ldots, k_n) , \end{equation} for $n\ge1$ and $(H_f \Psi)_n =0$ for $n=0$. Here $\Psi=(\psi_n)_{n=0}^{\infty}$ (to be sure that the r.h.s. 
makes sense we can assume that $\psi_n=0$, except for finitely many $n$, for which $\psi_n(k_1,\ldots,k_n)$ decrease rapidly at infinity). Clearly, the operator $H_f$ has a single eigenvalue, $0$, with eigenvector $\Om$, while the rest of its spectrum is absolutely continuous. With each function $\vphi \in \fh$ one associates an \emph{annihilation operator} $a(\vphi)$ defined as follows. For $\Psi=(\psi_n)^{\infty}_{n=0}\in \mathcal{F}$ with the property that $\psi_n=0$, for all but finitely many $n$, the vector $a(\vphi) \Psi$ is defined by \begin{equation} \label{eq-I.12} (a(\vphi) \Psi)_n (k_1, \ldots, k_n) \ := \ \sqrt{n+1 \,} \, \int d^3 k \; \overline{\vphi(k)} \: \psi_{n+1}(k, k_1, \ldots, k_n). \end{equation} This formula defines a closable operator $a(\vphi)$ whose closure is also denoted by $a(\vphi)$. Eqn \eqref{eq-I.12} implies the relation \begin{equation} \label{eq-I.13} a(\vphi) \Om \ = \ 0 \period \end{equation} The creation operator $a^*(\vphi)$ is defined to be the adjoint of $a(\vphi)$ with respect to the scalar product defined in Eq.~(\ref{eq-I.11}). Since $a(\vphi)$ is anti-linear, and $a^*(\vphi)$ is linear in $\vphi$, we write formally \begin{equation} \label{eq-I.14} a(\vphi) \ = \ \int d^3 k \; \overline{\vphi(k)} \, a(k) \comma \hspace{8mm} a^*(\vphi) \ = \ \int d^3 k \; \vphi(k) \, a^*(k) \comma \end{equation} where $a(k)$ and $a^*(k)$ are unbounded, operator-valued distributions. The latter are well-known to obey the \emph{canonical commutation relations} (CCR): \begin{equation} \label{eq-I.15} \big[ a^{\#}(k) \, , \, a^{\#}(k') \big] \ = \ 0 \comma \hspace{8mm} \big[ a(k) \, , \, a^*(k') \big] \ = \ \delta^3 (k-k') \comma \end{equation} where $a^{\#}= a$ or $a^*$.
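A hedged numerical illustration of the CCR \eqref{eq-I.15} for a single mode: on an $N$-level truncation of the Fock space of one oscillator, the matrix annihilation operator satisfies $[a,a^*]=\mathbf 1$ away from the truncation level, and $a^*\omega a$ reproduces the spectrum $\omega n$ of a one-mode free-field Hamiltonian. (The truncation level $N$ and the value of $\omega$ are illustrative choices.)

```python
import numpy as np

# One bosonic mode, truncated to N Fock levels: a|n> = sqrt(n)|n-1>.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
adag = a.T                                    # creation operator

comm = a @ adag - adag @ a                    # should be the identity ...
print(np.diag(comm))                          # ... except at the cutoff level

omega = 2.0                                   # illustrative mode energy
Hf = omega * adag @ a                         # one-mode analogue of (Hfa)
print(np.diag(Hf))                            # spectrum omega * (0, 1, ..., N-1)

vac = np.zeros(N); vac[0] = 1.0               # vacuum vector Omega
print(np.linalg.norm(a @ vac))                # a Omega = 0, as in (I.13)
```

The defect $-(N-1)$ in the last diagonal entry of the commutator is a pure truncation artifact; it disappears in the limit $N\to\infty$.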
Now, using this one can rewrite the quantum Hamiltonian $H_f$ in terms of the creation and annihilation operators, $a$ and $a^*$, as \begin{equation} \label{Hfa} H_f \ = \ \int d^3 k \; a^*(k)\; \omega(k) \; a(k) \comma \end{equation} acting on the Fock space $ \mathcal{F}$. More generally, for any operator, $t$, on the one-particle space $ \fh$ we define the operator $T$ on the Fock space $\mathcal{F}$ by the following formal expression $T: = \int a^*(k) t a(k) dk$, where the operator $t$ acts on the $k-$variable ($T$ is the second quantization of $t$). The precise meaning of the latter expression can be obtained by using a basis $\{\phi_j\}$ in the space $ \fh$ to rewrite it as $T: = \sum_{j} a^*(\phi_j) a(t^* \phi_j)$. To modify the above definitions to the case of photons, one replaces the variable $k$ by the pair $(k, \lambda)$ and adds to the integrals in $k$ also the sums over $\lambda$. In particular, the creation and annihilation operators now have two variables: $a_ \lambda^\#(k)\equiv a^\#(k, \lambda)$; they satisfy the commutation relations \begin{equation} \label{eq-I.15a} \big[ a_{\lambda}^{\#}(k) \, , \, a_{\lambda'}^{\#}(k') \big] \ = \ 0 \comma \hspace{8mm} \big[ a_{\lambda}(k) \, , \, a_{\lambda'}^*(k') \big] \ = \ \delta_{\lambda, \lambda'} \delta^3 (k-k') . \end{equation} One can also introduce the operator-valued transverse vector fields by $$a^\#(k):= \sum_{\lambda \in \{-1, 1\}} e_{\lambda}(k) a_{\lambda}^\#(k),$$ where $e_{\lambda}(k) \equiv e(k, \lambda)$ are polarization vectors, i.e. orthonormal vectors in $\mathbb{R}^3$ satisfying $k \cdot e_{\lambda}(k) =0$. Then in order to reinterpret the expressions in this paper for the vector (photon) case one either adds the variable $\lambda$ as was mentioned above or replaces, in appropriate places, the usual product of scalar functions or scalar functions and scalar operators by the dot product of vector-functions or vector-functions and operator valued vector-functions.
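The polarization vectors $e_{\pm 1}(k)$ are determined only up to a rotation about $k$; one concrete (and by no means canonical) choice can be sketched numerically:

```python
import numpy as np

# One concrete choice of transverse polarization vectors for k != 0:
# complete k/|k| to an orthonormal triad by Gram-Schmidt and a cross product.
def polarization_vectors(k):
    khat = np.asarray(k, dtype=float)
    khat = khat / np.linalg.norm(khat)
    seed = np.array([1.0, 0.0, 0.0])
    if abs(khat @ seed) > 0.9:                # avoid a seed (nearly) parallel to k
        seed = np.array([0.0, 1.0, 0.0])
    e1 = seed - (seed @ khat) * khat          # project out the k-direction
    e1 = e1 / np.linalg.norm(e1)
    e2 = np.cross(khat, e1)                   # completes the orthonormal pair
    return e1, e2

k = np.array([0.3, -1.2, 2.0])
e1, e2 = polarization_vectors(k)
print(e1 @ e2, e1 @ k, e2 @ k)                # all ~ 0: orthonormality and transversality
```

Any rotation of $(e_1,e_2)$ about $k$, or the complex (helicity) combinations $(e_1\pm i e_2)/\sqrt 2$, would serve equally well.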
\vspace{3mm} \noindent {\bf Acknowledgements:} A part of this work was done while the third author was visiting ETH Z\"urich, ESI Vienna and IAS Princeton. He is grateful to these institutions for hospitality. \vspace{3mm}
\section{Introduction} Let $\mathbb{F}_{q^m}$ be a finite field with $q^m$ elements, which contains a subfield $\mathbb{F}_q$ with $q$ elements. Let $\mathcal{S}=(s_0,s_1,\ldots,s_n,\ldots)$ be a linear recurring sequence over $\mathbb{F}_{q^m}$. The monic polynomial $f(x)=a_0+a_1x+\cdots+a_{n-1}x^{n-1}+x^n \in \mathbb{F}_{q^m}[x]$ is called a characteristic polynomial over $\mathbb{F}_{q^m}$ of $\mathcal{S}$ if \[ a_0s_k+a_1s_{k+1}+a_2s_{k+2}+\cdots+a_{n-1}s_{k+n-1}+s_{k+n}=0,\ \ \ \mbox{for all}\ k\geq 0. \] If the characteristic polynomial $f(x)$ is a polynomial over $\mathbb{F}_q$, that is, all $a_i \in \mathbb{F}_q$, we call $f(x)$ a characteristic polynomial over $\mathbb{F}_q$ of $\mathcal{S}$. Since the linear recurring sequence $\mathcal{S}$ over $\mathbb{F}_{q^m}$ is ultimately periodic, a characteristic polynomial over $\mathbb{F}_q$ of $\mathcal{S}$ does exist. The minimal polynomial over $\mathbb{F}_{q^m}$ (resp. $\mathbb{F}_q$) of $\mathcal{S}$ is the uniquely determined characteristic polynomial over $\mathbb{F}_{q^m}$ (resp. $\mathbb{F}_q$) of $\mathcal{S}$ with least degree. The linear complexity over $\mathbb{F}_{q^m}$ (resp. $\mathbb{F}_q$) of $\mathcal{S}$ is the degree of the minimal polynomial over $\mathbb{F}_{q^m}$ (resp. $\mathbb{F}_q$) of $\mathcal{S}$. Let $h(x)$ be the minimal polynomial over $\mathbb{F}_{q^m}$ of $\mathcal{S}$. It is known that $h(x)|f(x)$ for any characteristic polynomial $f(x)$ over $\mathbb{F}_{q^m}$ of $\mathcal{S}$. Similarly, if $H(x)$ is the minimal polynomial over $\mathbb{F}_{q}$ of $\mathcal{S}$, then $H(x)|f(x)$ for any characteristic polynomial $f(x)$ over $\mathbb{F}_{q}$ of $\mathcal{S}$. Note that a characteristic polynomial $f(x)$ over $\mathbb{F}_{q}$ of $\mathcal{S}$ is also a characteristic polynomial over $\mathbb{F}_{q^m}$ of $\mathcal{S}$. Hence, $h(x)|f(x)$ for any characteristic polynomial $f(x)$ over $\mathbb{F}_{q}$ of $\mathcal{S}$. In particular, $h(x)|H(x)$.
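The divisibility relations above can be checked on a toy example with $q=2$ (a hedged sketch; encoding polynomials over $\mathbb{F}_2$ as bit masks is merely an implementation convenience): the period-$3$ sequence $0,1,1,0,1,1,\ldots$ has minimal polynomial $h(x)=x^2+x+1$, while $f(x)=x^3+1$ is another characteristic polynomial (since $s_{k+3}=s_k$), so $h(x)\mid f(x)$.

```python
# Polynomials over F_2 encoded as bit masks: bit i is the coefficient of x^i.
def gf2_mod(f, h):
    """Remainder of f modulo h over F_2 (h nonzero)."""
    dh = h.bit_length() - 1
    while f and f.bit_length() - 1 >= dh:
        f ^= h << (f.bit_length() - 1 - dh)   # cancel the leading term of f
    return f

h = 0b111                                     # h(x) = x^2 + x + 1
f = 0b1001                                    # f(x) = x^3 + 1
print(gf2_mod(f, h))                          # 0: h(x) divides f(x)

# both really are characteristic polynomials of s = 0,1,1,0,1,1,...
s = [0, 1, 1] * 4
assert all(s[k] ^ s[k + 1] ^ s[k + 2] == 0 for k in range(len(s) - 2))
assert all(s[k] ^ s[k + 3] == 0 for k in range(len(s) - 3))
```

Over $\mathbb{F}_2$ one has $x^3+1=(x+1)(x^2+x+1)$, consistent with the computed remainder.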
Similarly, for any $m$-fold multisequence ${\bf S}^{(m)}=(S_1,S_2,\ldots, S_m)$ over $\mathbb{F}_q$, the monic polynomial $g(x)\in\mathbb{F}_q[x]$ is called a joint characteristic polynomial of ${\bf S}^{(m)}$ if $g(x)$ is a characteristic polynomial of $S_j$ for each $1\leq j\leq m$. The joint minimal polynomial of ${\bf S}^{(m)}$ is the uniquely determined joint characteristic polynomial of ${\bf S}^{(m)}$ with least degree, and the joint linear complexity of ${\bf S}^{(m)}$ is the degree of the joint minimal polynomial of ${\bf S}^{(m)}$. Since $\mathbb{F}_{q^m}$ and $\mathbb{F}_{q}^m$ are isomorphic vector spaces over the finite field $\mathbb{F}_q$, a linear recurring sequence $\mathcal{S}$ over $\mathbb{F}_{q^m}$ is identified with an $m$-fold multisequence ${\bf S}^{(m)}$ over $\mathbb{F}_q$. It is well known that the joint minimal polynomial and joint linear complexity of the $m$-fold multisequence ${\bf S}^{(m)}$ are the minimal polynomial and linear complexity over $\mathbb{F}_q$ of $\mathcal{S}$ respectively. The linear complexity of sequences is one of the important security measures for stream cipher systems (see \cite{cdr}, \cite{dxs}, \cite{ru1}, \cite{ru2}). For a general introduction to the theory of linear feedback shift register sequences, we refer the reader to \cite[Chapter 8]{ln} and the references therein. The linear complexity of sequences has been extensively studied by many researchers. For a recent survey paper, see Niederreiter \cite{n1}. The notion of linear complexity over $\mathbb{F}_q$ of linear recurring sequences over $\mathbb{F}_{q^m}$ was introduced by Ding, Xiao and Shan in \cite{dxs}, and discussed by some authors, for example, see \cite{cm}, \cite{kl}, \cite{mei}-\cite{mo2}, \cite{mus}, \cite{n1}, \cite{nv}. 
Recently, in the study of vectorized stream cipher systems, the joint linear complexity of multisequences has been extensively investigated (see \cite{diy}, \cite{ds}, \cite{fd}-\cite{hr}, \cite{mei}-\cite{nw2}, \cite{wn}-\cite{xi}). In this paper, we study the minimal polynomial and linear complexity over $\mathbb{F}_q$ of a linear recurring sequence $\mathcal{S}$ over $\mathbb{F}_{q^m}$ with minimal polynomial $h(x)$ over $\mathbb{F}_{q^m}$. If the canonical factorization of $h(x)$ in $\mathbb{F}_{q^m}[x]$ is known, we determine the minimal polynomial and linear complexity over $\mathbb{F}_q$ of the linear recurring sequence $\mathcal{S}$ over $\mathbb{F}_{q^m}$. The rest of the paper is organized as follows. In Section \ref{lrs} we introduce and give some results on linear recurring sequences that will be used in this paper. In Section \ref{pra} we introduce a ring automorphism of the polynomial ring $\mathbb{F}_{q^m}[x]$. We derive some results on this polynomial ring automorphism that are crucial for establishing the main results in this paper. In Section \ref{mp} we determine the minimal polynomial and linear complexity over $\mathbb{F}_q$ of a linear recurring sequence $\mathcal{S}$ over $\mathbb{F}_{q^m}$ with minimal polynomial $h(x)$ over $\mathbb{F}_{q^m}$. In Section \ref{lbmo} we give a new proof for the lower bound of Meidl and \"Ozbudak \cite{mo1} on the linear complexity over $\mathbb{F}_{q^m}$ of a linear recurring sequence $\mathcal{S}$ over $\mathbb{F}_{q^m}$ with given minimal polynomial $g(x)$ over $\mathbb{F}_q$. We show that this lower bound is tight if and only if the minimal polynomial over $\mathbb{F}_{q^m}$ of $\mathcal{S}$ is of a certain form. \section{Linear Recurring Sequences} \label{lrs} Let $f(x)$ be a monic polynomial over $\mathbb{F}_q$. Denote by $\mathcal{M}(f(x))$ the set of all linear recurring sequences over $\mathbb{F}_q$ with characteristic polynomial $f(x)$.
Note that $\mathcal {M}(f(x))$ is a vector space over $\mathbb{F}_q$ with dimension $\mbox{deg}(f(x))$. We need the following results on linear recurring sequences from \cite{ln}: \begin{Theorem} \label{th1} {\rm \cite[Theorem 8.55]{ln}}\quad Let $f_1(x),\ldots, f_k(x)$ be monic polynomials over $\mathbb{F}_q$. If $f_1(x),\ldots, f_k(x)$ are pairwise relatively prime, then the vector space\\ $\mathcal{M}(f_1(x)\cdots f_k(x))$ is the direct sum of the subspaces $\mathcal{M}(f_1(x)),\cdots, \mathcal{M}(f_k(x))$, that is \[ \mathcal{M}(f_1(x)\cdots f_k(x))=\mathcal{M}(f_1(x))\dotplus \cdots \dotplus \mathcal{M}(f_k(x)). \] \end{Theorem} \begin{Theorem} \label{th2} {\rm \cite[Theorem 8.57]{ln}}\quad Let $S_1, S_2, \ldots, S_k$ be linear recurring sequences over $\mathbb{F}_{q}$. The minimal polynomials over $\mathbb{F}_{q}$ of $S_1, S_2, \ldots, S_k$ are $h_1(x), h_2(x), \ldots, h_k(x)$ respectively. If $h_1(x), h_2(x), \ldots, h_k(x)$ are pairwise relatively prime, then the minimal polynomial over $\mathbb{F}_{q}$ of $\sum_{i=1}^{k}S_i$ is the product of $h_1(x),h_2(x),\ldots, h_k(x)$. \end{Theorem} It is easy to extend this result to the following case: \begin{Lemma} \label{lemma1} Let $\mathcal{S}_1, \mathcal{S}_2, \ldots, \mathcal{S}_k$ be linear recurring sequences over $\mathbb{F}_{q^m}$. The minimal polynomials over $\mathbb{F}_{q}$ of $\mathcal{S}_1, \mathcal{S}_2, \ldots, \mathcal{S}_k$ are $H_1(x), H_2(x), \ldots, H_k(x)$ respectively. If $H_1(x), H_2(x), \ldots, H_k(x)$ are pairwise relatively prime over $\mathbb{F}_{q}$, then the minimal polynomial over $\mathbb{F}_{q}$ of $\sum_{i=1}^{k}\mathcal{S}_i$ is the product of $H_1(x),H_2(x),\ldots, H_k(x)$. \end{Lemma} Now we establish the following lemma which will be used in this paper: \begin{Lemma} \label{lemma2} Let $S$ be a linear recurring sequence over $\mathbb{F}_{q}$. 
The minimal polynomial over $\mathbb{F}_{q}$ of $S$ is given by $h(x)=h_1(x)h_2(x)\cdots h_k(x)$ where $h_1(x), h_2(x), \ldots, h_k(x)$ are monic polynomials over $\mathbb{F}_{q}$. If $h_1(x), h_2(x), \ldots, h_k(x)$ are pairwise relatively prime, then there uniquely exist sequences $S_1, S_2, \ldots, S_k$ over $\mathbb{F}_{q}$ such that \[ S=S_1+S_2+\cdots+S_k \] and the minimal polynomials over $\mathbb{F}_{q}$ of $S_1,S_2,\ldots,S_k$ are $h_1(x), h_2(x), \ldots, h_k(x)$ respectively. \end{Lemma} \begin{proof} By Theorem \ref{th1}, we have \[ \mathcal{M}(h(x))=\mathcal{M}(h_1(x))\dotplus \cdots \dotplus \mathcal{M}(h_k(x)). \] Then, there uniquely exist sequences $S_1,S_2,\ldots,S_k$ over $\mathbb{F}_{q}$ such that $S_j\in \mathcal{M}(h_j(x))$ and \[ S=S_1+S_2+\cdots+S_k. \] Assume that the minimal polynomial over $\mathbb{F}_{q}$ of $S_j$ is $h_j^{'}(x)$ which is a divisor of $h_j(x)$ for $1\leq j\leq k$. By Theorem \ref{th2}, the minimal polynomial over $\mathbb{F}_{q}$ of $S$ is $\prod_{j=1}^{k}h_j^{'}(x)$. Thus, \[ h_1^{'}(x)h_2^{'}(x)\cdots h_k^{'}(x)=h_1(x)h_2(x)\cdots h_k(x). \] Since $h_j^{'}(x)| h_j(x)$ for $1\leq j\leq k$, we have \[ h_j^{'}(x)=h_j(x), \;\; 1\leq j\leq k, \] which completes the proof. \end{proof} \section{Polynomial Ring Automorphism} \label{pra} We define $\sigma$ to be a mapping from the polynomial ring $\mathbb{F}_{q^m}[x]$ to itself as follows: For $f(x)=a_0+a_1x+\cdots+a_nx^n\in \mathbb{F}_{q^m}[x]$, \[\sigma: \mathbb{F}_{q^m}[x]\longrightarrow \mathbb{F}_{q^m}[x],\] \[f(x)\longrightarrow \sigma(f(x))\] where $\sigma(f(x))=a_0^q+a_1^qx+\cdots+a_n^qx^n$. It is easy to see that $\sigma$ is a ring automorphism of $\mathbb{F}_{q^m}[x]$. Throughout the paper, we will use the fact that \[ \sigma(f(x)g(x))=\sigma(f(x))\sigma(g(x)), \;\;\mbox{for any} \; f(x), g(x)\in \mathbb{F}_{q^m}[x]. \] Denote $\sigma^{(k)}$ the $k$th usual composition of $\sigma$. Note that $\sigma^{(0)}$ is the identity mapping. 
Since $a^{q^m}=a$ for any $a\in \mathbb{F}_{q^m}$, we have $\sigma^{(m)}(f(x))=f(x)$. Denote by $k(f)$ the least positive integer $k$ such that $\sigma^{(k)}(f(x))=f(x)$. \begin{Lemma} \label{lemma3} For any $f(x)\in \mathbb{F}_{q^m}[x]$ and positive integer $l$, $\sigma^{(l)}(f(x))=f(x)$ if and only if $k(f)|l$. \end{Lemma} \begin{proof} It is easy to see that $\sigma^{(l)}(f(x))=f(x)$ if $k(f)|l$. On the other hand, if $\sigma^{(l)}(f(x))=f(x)$, we assume that $l=k(f)w+r$ and $0\leq r<k(f)$. Then \[ f(x)=\sigma^{(l)}(f(x))=\sigma^{(r)}(\sigma^{(k(f)w)}(f(x)))=\sigma^{(r)}(f(x)). \] Hence, $r=0$ by the definition of $k(f)$. Therefore, $k(f)|l$. \end{proof} Now we define an equivalence relation $\stackrel{\sigma}{\sim}$ on $\mathbb{F}_{q^m}[x]$: $f(x)\stackrel{\sigma}{\sim} g(x)$ if and only if there exists a positive integer $j$ such that $\sigma^{(j)}(f(x))=g(x)$. The equivalence classes induced by this equivalence relation $\stackrel{\sigma}{\sim}$ are called $\sigma$-equivalence classes. \begin{Lemma} \label{lemma4} Let $f(x)$ be a polynomial over $\mathbb{F}_{q^m}$. Then $\sigma(f(x))$ is irreducible over $\mathbb{F}_{q^m}$ if and only if $f(x)$ is irreducible over $\mathbb{F}_{q^m}$. \end{Lemma} \begin{proof} Since $f(x)\in\mathbb{F}_{q^m}[x]$, we have $f(x)=\sigma^{(m)}(f(x))$. Then, we only need to prove that $\sigma(f(x))$ is irreducible over $\mathbb{F}_{q^m}$ if $f(x)$ is irreducible over $\mathbb{F}_{q^m}$. Assume that $\sigma(f(x))$ is not irreducible over $\mathbb{F}_{q^m}$, that is, there exist two nonconstant polynomials $r_1(x),r_2(x)$ in $\mathbb{F}_{q^m}[x]$ such that $\sigma(f(x))=r_1(x)r_2(x)$. Therefore, \[f(x)=\sigma^{(m)}(f(x))=\sigma^{(m-1)}(\sigma(f(x)))=\sigma^{(m-1)}(r_1(x))\sigma^{(m-1)}(r_2(x))\] where $\sigma^{(m-1)}(r_1(x)), \sigma^{(m-1)}(r_2(x))$ are nonconstant polynomials over $\mathbb{F}_{q^m}$, which contradicts the fact that $f(x)$ is irreducible over $\mathbb{F}_{q^m}$.
Hence, $\sigma(f(x))$ is irreducible over $\mathbb{F}_{q^m}$. \end{proof} The following theorem is crucial for establishing the main results in this paper. \begin{Theorem} \label{th3} Let $f(x)$ be an irreducible polynomial in $\mathbb{F}_{q^m}[x]$, then the product \[ f(x)\sigma(f(x))\sigma^{(2)}(f(x))\cdots \sigma^{(k(f)-1)}(f(x)) \] is an irreducible polynomial in $\mathbb{F}_{q}[x]$. \end{Theorem} \begin{proof} Let $\mbox{deg}(f(x))=n$. Then, by \cite[Chapter 2, Theorem 2.14]{ln} there exists $\alpha\in \mathbb{F}_{q^{mn}}$ such that \begin{eqnarray} \label{pr1} f(x)=(x-\alpha)(x-\alpha^{q^m})(x-\alpha^{q^{2m}})\cdots(x-\alpha^{q^{(n-1)m}}) \end{eqnarray} where $\alpha,\alpha^{q^m},\ldots,\alpha^{q^{(n-1)m}}$ are the distinct roots of $f(x)$. Let $g(x)$ be the minimal polynomial of $\alpha\in\mathbb{F}_{q^{mn}}$ over $\mathbb{F}_q$. By \cite[Chapter 2, Theorem 2.14]{ln}, $g(x)$ is an irreducible polynomial over $\mathbb{F}_q$ and \begin{eqnarray} \label{pr2} g(x)=(x-\alpha)(x-\alpha^{q})(x-\alpha^{q^2})\cdots(x-\alpha^{q^{d-1}}) \end{eqnarray} where $d$ is the least positive integer such that $\alpha^{q^d}=\alpha$. Since $\alpha^{q^{mn}}=\alpha$ and $\alpha,\alpha^{q^m},\ldots,\alpha^{q^{(n-1)m}}$ are distinct, we have $d\mid mn$ but $d\nmid im$ for $1\leq i\leq n-1$. Then, we claim that $d$ must be a multiple of $n$. Otherwise, we have $\gcd(d,n)<n$. Since $d\mid mn$, then we have $\frac{d}{\gcd(d,n)}\mid\frac{mn}{\gcd(d,n)}$. Since $\frac{d}{\gcd(d,n)}$ and $\frac{n}{\gcd(d,n)}$ are relatively prime, we have $\frac{d}{\gcd(d,n)}\mid m$. Then, $d\mid \gcd(d,n)m$. This gives a contradiction since $\gcd(d,n)<n$. Therefore, $d$ is a multiple of $n$. Let $k$ be the positive integer such that $d=nk$. Since $d\mid mn$, then $k\mid m$. Let $s$ be the positive integer such that $m=sk$. Then, we claim that $s$ and $n$ are relatively prime. Otherwise, we have $\frac{n}{\gcd(n,s)}<n$.
Since $n\mid \frac{ns}{\gcd(n,s)}$, then $kn\mid \frac{kns}{\gcd(n,s)}$, that is $d\mid m\frac{n}{\gcd(n,s)}$. This gives a contradiction since $\frac{n}{\gcd(n,s)}<n$. Therefore, $s$ and $n$ are relatively prime. Thus, $\{js | j=0,1,\ldots, n-1\}$ is a complete residue system modulo $n$, i.e., there exists $(i_0,i_1,\ldots,i_{n-1})$, a permutation of $(0,1,2,\ldots,n-1)$, such that $js\equiv i_j \;\;({\rm mod} \ n)$. So we have $kjs\equiv ki_j \;\;({\rm mod} \ kn)$, i.e., $jm\equiv ki_j \;\;({\rm mod} \ d)$. Hence, $\alpha^{q^{jm}}=\alpha^{q^{ki_j}}$ for $0\leq j\leq n-1$. Therefore, it follows from (\ref{pr1}) that \begin{eqnarray} f(x)&=&(x-\alpha^{q^{ki_{0}}})(x-\alpha^{q^{ki_{1}}})(x-\alpha^{q^{ki_{2}}})\cdots (x-\alpha^{q^{ki_{n-1}}})\nonumber\\ &=&(x-\alpha)(x-\alpha^{q^{k}})(x-\alpha^{q^{2k}})\cdots(x-\alpha^{q^{(n-1)k}}). \label{pr3} \end{eqnarray} By (\ref{pr3}) and the definition of $\sigma$, we have \begin{eqnarray} \label{pr4} \sigma^{(i)}(f(x))=(x-\alpha^{q^i})(x-\alpha^{q^{k+i}})(x-\alpha^{q^{2k+i}})\cdots(x-\alpha^{q^{(n-1)k+i}}). \end{eqnarray} By (\ref{pr2}), (\ref{pr3}), (\ref{pr4}), and noting that $d=nk$, we have \[ g(x)=f(x)\sigma(f(x))\cdots\sigma^{(k-1)}(f(x)) \] and \[ \sigma^{(k)}(f(x))=(x-\alpha^{q^k})(x-\alpha^{q^{2k}})(x-\alpha^{q^{3k}})\cdots(x-\alpha^{q^{nk}})=f(x). \] Since $d$ is the least positive integer such that $\alpha^{q^d}=\alpha$ and $d=nk$, we have that $f(x),\sigma(f(x)),\ldots,\sigma^{(k-1)}(f(x))$ are pairwise distinct. Hence, $k=k(f)$. Therefore, \[ g(x)=f(x)\sigma(f(x))\cdots\sigma^{(k(f)-1)}(f(x)). \] Since $g(x)$ is an irreducible polynomial over $\mathbb{F}_q$, this completes the proof. \end{proof} Let $f(x)$ be an irreducible polynomial in $\mathbb{F}_{q^m}[x]$. It is known from Lemma \ref{lemma4} that $f(x), \sigma(f(x)), \ldots, \sigma^{(k(f)-1)}(f(x))$ are irreducible polynomials in $\mathbb{F}_{q^m}[x]$. Denote \[ R(f(x))=f(x)\sigma(f(x))\cdots\sigma^{(k(f)-1)}(f(x)).
\] By Theorem \ref{th3}, $R(f(x))$ is irreducible in $\mathbb{F}_{q}[x]$. Note that $R(f(x))$ is a multiple of $f(x)$ in $\mathbb{F}_{q^m}[x]$. Using Theorem \ref{th3}, we can give a refined version of \cite[Chapter 3, Theorem 3.46]{ln} as follows: \begin{Theorem} \label{th4} Let $f(x)$ be a monic irreducible polynomial over $\mathbb{F}_{q}$ and $n=\deg(f(x))$. Let $m$ be a positive integer. Denote $u= {\gcd}(n,m)$. Then the canonical factorization of $f(x)$ into monic irreducibles over $\mathbb{F}_{q^m}$ is given by \[ f(x)=h(x)\sigma(h(x))\cdots \sigma^{(k(h)-1)}(h(x)) \] where $h(x)$ is a monic irreducible polynomial over $\mathbb{F}_{q^m}$ and $k(h)=u$. \end{Theorem} \begin{proof} By \cite[Chapter 3, Theorem 3.46]{ln}, the canonical factorization of $f(x)$ into monic irreducibles over $\mathbb{F}_{q^m}$ is given by \[ f(x)=f_1(x)f_2(x)\cdots f_u(x) \] where $f_1(x),f_2(x),\ldots,f_u(x)\in \mathbb{F}_{q^m}[x]$ are distinct irreducible polynomials with the same degree. Let $h(x)=f_1(x)$. By Theorem \ref{th3}, $R(h(x))$ is an irreducible polynomial in $\mathbb{F}_{q}[x]$. Since $f(x)$ and $R(h(x))$ have a common factor $h(x)$ in $\mathbb{F}_{q^m}[x]$, $f(x)$ and $R(h(x))$ are not relatively prime in $\mathbb{F}_{q}[x]$. Note that $f(x)$ and $R(h(x))$ are monic irreducible polynomials in $\mathbb{F}_{q}[x]$. So, $f(x)=R(h(x))$. By Lemma \ref{lemma4}, $h(x),\sigma(h(x)),\ldots ,\sigma^{(k(h)-1)}(h(x))$ are all irreducible polynomials over $\mathbb{F}_{q^m}$. Therefore, the canonical factorization of $f(x)$ into monic irreducibles over $\mathbb{F}_{q^m}$ is given by \[ f(x)=h(x)\sigma(h(x))\cdots \sigma^{(k(h)-1)}(h(x)) \] and $k(h)=u$. \end{proof} In a certain sense, Theorem \ref{th4} can be considered as the converse of Theorem \ref{th3}.
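The objects of this section, the automorphism $\sigma$, the order $k(f)$, and the product $R(f)$, are easy to experiment with numerically. The following sketch, not taken from the paper, works over $\mathbb{F}_4=\mathbb{F}_{2^2}$ (so $q=2$, $m=2$), with the elements $0,1,\alpha,\alpha+1$ encoded as the integers $0,1,2,3$ and $\alpha^2=\alpha+1$; polynomials are coefficient lists, lowest degree first.

```python
# GF(4) multiplication table for the encoding 0, 1, 2 = alpha, 3 = alpha + 1;
# addition in this encoding is bitwise XOR.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]
SQ = [MUL[a][a] for a in range(4)]  # Frobenius a -> a^q with q = 2

def sigma(f):
    """The ring automorphism: apply the Frobenius to each coefficient."""
    return [SQ[c] for c in f]

def k_of(f):
    """k(f): least k >= 1 with sigma^(k)(f) = f; by Lemma 3 it divides m = 2."""
    g, k = sigma(f), 1
    while g != f:
        g, k = sigma(g), k + 1
    return k

def pmul(f, g):
    """Product of two polynomials over GF(4), coefficient lists low degree first."""
    r = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] ^= MUL[a][b]
    return r

def R(f):
    """R(f) = f * sigma(f) * ... * sigma^(k(f)-1)(f); by Theorem 3 the result
    is irreducible over GF(2), so all coefficients land in {0, 1}."""
    prod, g = f[:], sigma(f)
    while g != f:
        prod = pmul(prod, g)
        g = sigma(g)
    return prod
```

For instance, $f(x)=x+\alpha$ is encoded as `[2, 1]`, and `R([2, 1])` returns `[1, 1, 1]`, i.e. $x^2+x+1\in\mathbb{F}_2[x]$, with $k(f)=2$.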
\section{Minimal Polynomials over $\mathbb{F}_{q}$ and $\mathbb{F}_{q^m}$} \label{mp} Now we determine the minimal polynomial and linear complexity over $\mathbb{F}_q$ of a linear recurring sequence $\mathcal{S}$ over $\mathbb{F}_{q^m}$ with minimal polynomial $h(x)\in \mathbb{F}_{q^m}[x]$. \begin{Theorem} \label{th5} Let $\mathcal{S}$ be a linear recurring sequence over $\mathbb{F}_{q^m}$ with minimal polynomial $h(x)\in \mathbb{F}_{q^m}[x]$. Assume that the canonical factorization of $h(x)$ in $\mathbb{F}_{q^m}[x]$ is given by \[ h(x)=\prod_{j=1}^{l}P_{j0}^{e_{j0}}P_{j1}^{e_{j1}}\cdots P_{ji_j}^{e_{ji_j}} \] where $\{P_{uv}\}$ are distinct monic irreducible polynomials in $\mathbb{F}_{q^m}[x]$, $P_{j0},P_{j1},\ldots, P_{ji_j}$ are in the same $\sigma$-equivalence class, and $P_{uv}$, $P_{tw}$ are in different $\sigma$-equivalence classes when $u\neq t$. Then the minimal polynomial over $\mathbb{F}_q$ of $\mathcal{S}$ is given by \[ H(x)=\prod_{j=1}^{l}R(P_{j0})^{e_j} \] where $e_j=\max\{e_{j0},e_{j1},\ldots,e_{ji_j}\}$ for $1\leq j\leq l$. \end{Theorem} \begin{proof} By Lemma \ref{lemma2}, there uniquely exist sequences $\mathcal{S}_1,\mathcal{S}_2,\ldots,\mathcal{S}_l$ over $\mathbb{F}_{q^m}$ such that \[ \mathcal{S}=\mathcal{S}_1+\mathcal{S}_2+\cdots+\mathcal{S}_l \] and the minimal polynomial over $\mathbb{F}_{q^m}$ of $\mathcal{S}_j$ is $P_{j0}^{e_{j0}}P_{j1}^{e_{j1}}\cdots P_{ji_j}^{e_{ji_j}}$ for $1\leq j\leq l$. Let $H_j(x)$ be the minimal polynomial over $\mathbb{F}_{q}$ of $\mathcal{S}_j$. Since $P_{j0},P_{j1},\ldots ,P_{ji_j}$ are in the same $\sigma$-equivalence class, $R(P_{j0})^{e_j}$ is a multiple of $P_{j0}^{e_{j0}}P_{j1}^{e_{j1}}\cdots P_{ji_j}^{e_{ji_j}}$. So, by Theorem \ref{th3}, $R(P_{j0})^{e_j}$ is a characteristic polynomial over $\mathbb{F}_q$ of $\mathcal{S}_j$. Hence, $H_j(x)$ divides $R(P_{j0})^{e_j}$ in $\mathbb{F}_q[x]$.
Since, by Theorem \ref{th3}, $R(P_{j0})$ is irreducible over $\mathbb{F}_q$, we have $H_j(x)=R(P_{j0})^{e'_j}$ where $e'_j\leq e_j$. By the definition of $e_j$, there exists $e_{ju_j}$ such that $e_{ju_j}=e_j$ where $0\leq u_j\leq i_j$. If $e'_j<e_j$, then $P_{ju_j}^{e_{ju_j}}$ cannot divide $H_j(x)$. However, $H_j(x)$ is a multiple of $P_{j0}^{e_{j0}}P_{j1}^{e_{j1}}\cdots P_{ji_j}^{e_{ji_j}}$ in $\mathbb{F}_{q^m}[x]$ since $H_j(x)$ is also a characteristic polynomial over $\mathbb{F}_{q^m}$ of $\mathcal{S}_j$. This gives a contradiction. Therefore, $e'_j=e_j$, i.e., $H_j(x)=R(P_{j0})^{e_j}$. For any $1\leq u\neq v\leq l$, we claim that $R(P_{u0})^{e_u}$ and $R(P_{v0})^{e_v}$ are relatively prime. Suppose on the contrary that there exist $R(P_{u0})^{e_u}$ and $R(P_{v0})^{e_v}$, where $u\neq v$, which are not relatively prime. Since $R(P_{u0})$ and $R(P_{v0})$ are monic irreducible polynomials over $\mathbb{F}_q$, then we have $R(P_{u0})=R(P_{v0})$. Hence, $P_{u0}$ divides $R(P_{v0})$ in $\mathbb{F}_{q^m}[x]$. By Theorem \ref{th4}, the canonical factorization of $R(P_{v0})$ in $\mathbb{F}_{q^m}[x]$ is given by \[ R(P_{v0})=P_{v0}\sigma(P_{v0})\cdots\sigma^{(k(P_{v0})-1)}(P_{v0}). \] Since $P_{u0}$ is irreducible over $\mathbb{F}_{q^m}$, there exists a positive integer $j$ such that $P_{u0}=\sigma^{(j)}(P_{v0})$. This contradicts the fact that $P_{u0}$ and $P_{v0}$ are in different $\sigma$-equivalence classes. Therefore, $R(P_{u0})^{e_u}$ and $R(P_{v0})^{e_v}$ are relatively prime. Then, $H_1(x),H_2(x),\ldots,H_l(x)$ are pairwise relatively prime. By Lemma \ref{lemma1}, the minimal polynomial over $\mathbb{F}_{q}$ of $\mathcal{S}=\sum_{j=1}^{l}\mathcal{S}_j$ is the product of $H_1(x),H_2(x),\ldots, H_l(x)$. Therefore, we have \[ H(x)=\prod_{j=1}^{l}R(P_{j0})^{e_j} \] which completes the proof.
\end{proof} \begin{Corollary} \label{cor1} Under the notation of Theorem \ref{th5}, the linear complexity over $\mathbb{F}_q$ of $\mathcal{S}$ is given by \[ L_{\mathbb{F}_q}(\mathcal{S})=\sum_{j=1}^{l}e_jk(P_{j0})\deg(P_{j0}) \] where $k(f)$ is defined in Section \ref{pra}. \end{Corollary} Using Theorem \ref{th5}, we can also give a refinement of \cite[Proposition 2.1]{mo2}: \begin{Theorem} \label{th6} Let $f(x)$ be a polynomial over $\mathbb{F}_{q}$ with $\deg(f)\geq 1$. Suppose that \begin{eqnarray} \label{mf1} f=r_1^{e_1}r_2^{e_2}\cdots r_l^{e_l},\mbox{~~~}e_1, e_2, \ldots, e_l>0 \end{eqnarray} is the canonical factorization of $f$ into monic irreducibles over $\mathbb{F}_{q}$. Denote $n_i=\deg(r_i)$. Suppose by Theorem \ref{th4} that the canonical factorization of $r_i(x)$ into monic irreducibles over $\mathbb{F}_{q^m}$ is given by \begin{eqnarray} \label{mf2} r_i(x)=P_{i}(x)\sigma^{(1)}(P_{i}(x))\cdots \sigma^{(u_i-1)}(P_{i}(x)) \end{eqnarray} where $u_i= \gcd(n_i,m)=k(P_{i}(x))$. Let $\mathcal{S}$ be a linear recurring sequence over $\mathbb{F}_{q^m}$. Then, the minimal polynomial over $\mathbb{F}_{q}$ of $\mathcal{S}$ is $f(x)$ if and only if the minimal polynomial $h(x)$ over $\mathbb{F}_{q^m}$ of $\mathcal{S}$ is of the following form: \begin{equation} \label{mf3} h(x)=\prod_{i=1}^{l}P_{i}^{e_{i0}}\sigma^{(1)}(P_{i})^{e_{i1}}\cdots \sigma^{({u_i-1})}(P_{i})^{e_{iu_i-1}} \end{equation} where $0\leq e_{ij}\leq e_i$ and $\max\{e_{i0},e_{i1},\ldots,e_{iu_i-1}\}=e_i$ for every $i=1,2,\ldots,l$. \end{Theorem} \begin{proof} It follows from Theorem \ref{th5} that the minimal polynomial over $\mathbb{F}_{q}$ of $\mathcal{S}$ is $f(x)$ if the minimal polynomial $h(x)$ over $\mathbb{F}_{q^m}$ of $\mathcal{S}$ is given by (\ref{mf3}). Conversely, suppose that the minimal polynomial over $\mathbb{F}_{q}$ of $\mathcal{S}$ is $f(x)$.
Then, $h(x)$ is a factor of $f(x)$ in $\mathbb{F}_{q^m}[x]$ since $f(x)$ is also a characteristic polynomial over $\mathbb{F}_{q^m}$ of $\mathcal{S}$. By (\ref{mf1}) and (\ref{mf2}), the canonical factorization of $f(x)$ into monic irreducibles over $\mathbb{F}_{q^m}$ is given by \[ f(x)=\prod_{i=1}^{l}P_{i}^{e_{i}}\sigma^{(1)}(P_{i})^{e_{i}}\cdots \sigma^{({u_i-1})}(P_{i})^{e_{i}}. \] So $h(x)$ must be of the form \[ h(x)=\prod_{i=1}^{l}P_{i}^{e_{i0}}\sigma^{(1)}(P_{i})^{e_{i1}}\cdots \sigma^{({u_i-1})}(P_{i})^{e_{iu_i-1}} \] where $0\leq e_{ij}\leq e_i$ for every $i=1,2,\ldots,l$. By Theorem \ref{th5}, the minimal polynomial over $\mathbb{F}_{q}$ of $\mathcal{S}$ is given by \[ H(x)=\prod_{i=1}^{l}R(P_{i})^{e'_i}=\prod_{i=1}^{l}r_i(x)^{e'_i} \] where $e'_i=\max\{e_{i0},e_{i1},\ldots,e_{iu_i-1}\}$. Due to the uniqueness of the minimal polynomial over $\mathbb{F}_{q}$ of $\mathcal{S}$, we have $H(x)=f(x)$. Hence, $e'_i=e_i$. Therefore, the minimal polynomial $h(x)$ over $\mathbb{F}_{q^m}$ of $\mathcal{S}$ is of the form (\ref{mf3}). This completes the proof. \end{proof} At the end of this section, we give an example to illustrate Theorem \ref{th5} and Corollary \ref{cor1}. \begin{Example} Let $\mathbb{F}_2\subseteq \mathbb{F}_4$ and let $\alpha$ be a root of $x^2+x+1$ in $\mathbb{F}_4$. So, $\mathbb{F}_4 =\{0,1, \alpha, 1+\alpha \}$. Let $\mathcal{S}$ be a periodic sequence over $\mathbb{F}_4$ with the least period $15$. The first period terms of $\mathcal{S}$ are given by \[ \alpha^2,\alpha,\alpha,\alpha^2,\alpha^2,\alpha^2,0,\alpha,\alpha^2,\alpha,0,\alpha,0,0,1. \] The minimal polynomial over $\mathbb{F}_{4}$ of $\mathcal{S}$ is $x^3+\alpha^2x^2+\alpha^2$. We first factor $x^3+\alpha^2x^2+\alpha^2$ into irreducible polynomials over $\mathbb{F}_4$: \[ x^3+\alpha^2x^2+\alpha^2=(x+\alpha)(x^2+x+\alpha). 
\] Note that \[ \sigma(x+\alpha)=x+\alpha^2,\;\; \sigma^{(2)}(x+\alpha)=x+\alpha, \] \[ \sigma(x^2+x+\alpha)=x^2+x+\alpha^2, \;\; \sigma^{(2)}(x^2+x+\alpha)=x^2+x+\alpha. \] So we have \[k(x+\alpha)=2, \;\; k(x^2+x+\alpha)=2.\] Then, by Theorem \ref{th5} and Corollary \ref{cor1}, the minimal polynomial over $\mathbb{F}_{2}$ of $\mathcal{S}$ is \begin{eqnarray*} && (x+\alpha)\sigma(x+\alpha)(x^2+x+\alpha)\sigma(x^2+x+\alpha) \\ &=&(x^2+x+1)(x^4+x+1)=x^6+x^5+x^4+x^3+1 \end{eqnarray*} and the linear complexity over $\mathbb{F}_{2}$ of $\mathcal{S}$ is \[ L=1\times k(x+\alpha)\times \deg(x+\alpha)+1\times k(x^2+x+\alpha)\times \deg(x^2+x+\alpha)=2+2\times2=6. \] \end{Example} \section{Remarks on the Lower Bound of Meidl and \"Ozbudak} \label{lbmo} Meidl and \"Ozbudak \cite{mo1} derived a lower bound on the linear complexity over $\mathbb{F}_{q^m}$ of a linear recurring sequence $\mathcal{S}$ over $\mathbb{F}_{q^m}$ with given minimal polynomial $g(x)$ over $\mathbb{F}_q$. In this section, using Theorem \ref{th6} we give a new proof for the lower bound of Meidl and \"Ozbudak and show that this lower bound is tight if and only if the minimal polynomial over $\mathbb{F}_{q^m}$ of $\mathcal{S}$ is of a certain form. \begin{Corollary} \label{cor2} Let $f(x)$ be a monic polynomial in $\mathbb{F}_{q}[x]$ with the canonical factorization into irreducible polynomials over $\mathbb{F}_{q}$ given by \begin{eqnarray} \label{lb1} f=r_1^{e_1}r_2^{e_2}\cdots r_k^{e_k}, \;\;\; e_1, e_2, \ldots, e_k>0. \end{eqnarray} Suppose that $\mathcal{S}$ is a linear recurring sequence over $\mathbb{F}_{q^m}$ and the minimal polynomial over $\mathbb{F}_{q}$ of $\mathcal{S}$ is $f(x)$. Then, the linear complexity $L_{\mathbb{F}_{q^m}}(\mathcal{S})$ over $\mathbb{F}_{q^m}$ of $\mathcal{S}$ is lower bounded by \begin{eqnarray} \label{lb2} L_{\mathbb{F}_{q^m}}(\mathcal{S})\geq \sum_{i=1}^{k}e_i\frac{n_i}{\gcd(n_i,m)} \end{eqnarray} where $n_i=\deg(r_i)$ for $i=1,2,\ldots, k$.
Furthermore, suppose by Theorem \ref{th4} that the canonical factorization of $r_i(x)$ into monic irreducibles over $\mathbb{F}_{q^m}$ is given by \begin{eqnarray} \label{lb3} r_i(x)=P_{i}(x)\sigma^{(1)}(P_{i}(x))\cdots \sigma^{(u_i-1)}(P_{i}(x)) \end{eqnarray} where $u_i= \gcd(n_i,m)$ for $i=1,2,\ldots, k$. Then, the lower bound is tight if and only if the minimal polynomial $h(x)$ over $\mathbb{F}_{q^m}$ of $\mathcal{S}$ is of the following form: \[ h(x)= \prod _{i=1}^{k} \sigma^{(j_i)}(P_{i})^{e_i} \] where $0\leq j_i\leq u_i -1$ for $i=1,2,\ldots, k$. \end{Corollary} \begin{proof} It follows from (\ref{lb1}) and (\ref{lb3}) and Theorem \ref{th6} that the minimal polynomial $h(x)$ over $\mathbb{F}_{q^m}$ of $\mathcal{S}$ is of the form: \begin{equation} \label{lb4} h(x)=\prod_{i=1}^{k}P_{i}^{e_{i0}}\sigma^{(1)}(P_{i})^{e_{i1}}\cdots \sigma^{({u_i-1})}(P_{i})^{e_{iu_i-1}} \end{equation} where $0\leq e_{ij}\leq e_i$ and $\max\{e_{i0},e_{i1},\ldots,e_{iu_i-1}\}=e_i$ for every $i=1,2,\ldots,k$. Note from (\ref{lb3}) that $\deg(P_{i}(x))=n_i/u_i$. Hence, by (\ref{lb4}), \begin{eqnarray*} L_{\mathbb{F}_{q^m}}(\mathcal{S})=\deg(h(x)) \geq \sum_{i=1}^{k}e_i \deg(P_{i}(x)) = \sum_{i=1}^{k}e_i\frac{n_i}{\gcd(n_i,m)} \end{eqnarray*} and the equality holds if and only if \[ h(x)= \prod _{i=1}^{k} \sigma^{(j_i)}(P_{i})^{e_i} \] where $0\leq j_i\leq u_i -1$ for $i=1,2,\ldots, k$. This completes the proof. \end{proof} \begin{Remark} Meidl and \"Ozbudak \cite[Proposition 3]{mo1} showed that there exists a linear recurring sequence over $\mathbb{F}_{q^m}$ such that the lower bound (\ref{lb2}) is tight. We give in Corollary \ref{cor2} the necessary and sufficient condition under which the lower bound (\ref{lb2}) is tight. \end{Remark}
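The right-hand side of the bound (\ref{lb2}) is elementary to evaluate once the factorization of $f$ over $\mathbb{F}_q$ is known. A small illustrative sketch; the function name and the $(n_i,e_i)$ input encoding are ours, not from the paper.

```python
from math import gcd

def lc_lower_bound(factors, m):
    """Meidl-Ozbudak lower bound: sum of e_i * n_i / gcd(n_i, m) over the
    irreducible factors r_i^{e_i} of f over GF(q), with n_i = deg(r_i)."""
    return sum(e * n // gcd(n, m) for n, e in factors)

# f = (x^2 + x + 1)(x^4 + x + 1) over GF(2), extension degree m = 2:
bound = lc_lower_bound([(2, 1), (4, 1)], m=2)  # 2/2 + 4/2 = 3
```

For this $f$ the bound is $3$, and it is attained in the example of Section \ref{mp}: the minimal polynomial over $\mathbb{F}_4$ there, $x^3+\alpha^2x^2+\alpha^2=(x+\alpha)(x^2+x+\alpha)$, has degree $3$ and is of the form $\sigma^{(j_1)}(P_1)\sigma^{(j_2)}(P_2)$ required by Corollary \ref{cor2}.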
\section{Introduction} Platinum (Pt) is a suitable material to be used as a spin-current to charge-current converter due to its strong spin-orbit coupling.\citep{AndoIEEE2010} A spin current injected into a Pt film will generate a transverse charge current by the Inverse Spin-Hall Effect (ISHE), which can then be electrically detected. The ISHE has been used to detect, for example, spin pumping into Pt from various materials such as permalloy\citep{SaitohAPL2006} (Py) and yttrium iron garnet (YIG).\citep{ISHESaitohJAP,Kurebayashi2011nmat,CastelPRB} For the opposite effect, to use Pt as a spin current injector, a charge current is sent through the Pt, creating a transverse spin accumulation by the Spin-Hall Effect (SHE).\citep{SHESaitohPRL,SHEBuhrmanPRL,AzevedoAPL2011} Recently, Weiler et al.\citep{WeilerPRL2012} and Huang et al.\citep{ChienPRL2012} observed magnetoresistance (MR) effects in Pt on YIG and related those effects to magnetic proximity. These MR effects have been further investigated by Nakayama et al.,\citep{BauerSMR} who found and explained a new magnetoresistance, called Spin-Hall Magnetoresistance (SMR).\citep{BauerSMR,BauerTheorySMR} A change in resistance due to SMR can be explained by a combination of the SHE and the ISHE, acting simultaneously. When a charge current $\vec{J_e}$ is sent through a Pt strip, a transverse spin current $\vec{J_s}$ is generated by the SHE following $\vec{J_e} \propto \vec{\sigma}\times\vec{J_s}$,\citep{SHE,NiFePtdep,AzevedoPRB2011,Kajiwara2010nature} where $\vec{\sigma}$ is the polarization direction of the spin current. Part of this created spin current is directed towards the YIG/Pt interface. At this interface the electrons in the Pt will interact with the localized moments in the YIG as is shown in Fig. \ref{fig:Fig1}.
Depending on the magnetization direction of the YIG, electron spins will be absorbed ($\vec{M} \perp \vec{\sigma}$) or reflected ($\vec{M} \parallel \vec{\sigma}$). By changing the direction of the magnetization of the YIG, the polarization direction of the reflected spins, and thus the direction of the additionally created charge current, can be controlled. A charge current with a component in the direction perpendicular to $\vec{J_e}$ can also be created, which generates a transverse voltage. In this paper, the angular dependence of the SMR in Pt on YIG is investigated for different Pt thicknesses (3, 4, 8 and 35nm) and different deposition techniques (e-beam evaporation and dc sputtering), for applied in-plane as well as out-of-plane magnetic field sweeps, revealing the full magnetization behaviour of the YIG.\footnote{Nakayama et al.\citep{BauerSMR} also investigated the out-of-plane behavior of the SMR, but only for saturated magnetization directions, which are fully aligned to the applied field.} All measurements are performed at room temperature. The magnitude of the SMR is shown to be dependent on the magnetization direction of the YIG, as well as on the Pt thickness, indicating its relation to the spin diffusion length. The deposition technique used is also found to be an important factor for the magnitude of the measured signals. \begin{figure}[b] \includegraphics[width=8.5cm]{Fig1SMR} \caption{\label{fig:Fig1} Schematic drawing explaining the SMR in a YIG/Pt system. (a) When the magnetization $\vec{M}$ of YIG is perpendicular to the spin polarization $\vec{\sigma}$ of the spin accumulation created in the Pt by the SHE, the spin accumulation will be absorbed ($\vec{J}_{abs}$) by the localized moments in the YIG. (b) For $\vec{M}$ parallel to $\vec{\sigma}$, the spin accumulation cannot be absorbed, which results in a reflected spin current back into the Pt, where an additional charge current $\vec{J}_{refl}$ will be created by the ISHE.
(c) For $\vec{M}$ in any other direction, the component of $\vec{\sigma}$ perpendicular to $\vec{M}$ will be absorbed and the component parallel to $\vec{M}$ will be reflected, resulting in a current $\vec{J}_{refl}$ which is not collinear with the initially applied current $\vec{J_e}$. } \end{figure} \section{Sample characteristics} Pt Hall bars with thicknesses of 3, 4, 8, and 35nm were deposited on YIG by dc sputtering. Similar Pt Hall bars were also deposited on a Si/SiO$_2$ substrate, as a reference. Finally a sample was fabricated where a layer of Pt (5nm) was deposited on YIG by e-beam evaporation. Fig. \ref{fig:Fig2}(a) shows the dimensions of the Hall bars. The thickness of the deposited Pt layers was measured by atomic force microscopy with an accuracy of $\pm$0.5nm. The YIG used is a single crystal with a thickness of 200nm, grown by liquid phase epitaxy on a (111) Gd$_3$Ga$_5$O$_{12}$ (GGG) substrate. Using a vibrating sample magnetometer, the magnetic field dependence of the magnetization was determined, as shown in Fig. \ref{fig:Fig2}(b). The magnetic field dependence shows the same magnetization behaviour for all in-plane directions, indicating isotropic behaviour of the magnetization in the film plane, with a low coercive field of only 0.06mT. To saturate the magnetization of this YIG sample in the out-of-plane direction, an external magnetic field higher than the saturation field ($\mu_0M_s=0.176$T)\citep{CastelPRB} has to be applied. \begin{figure}[h] \includegraphics[width=8.5cm]{Fig2Dimensions} \caption{\label{fig:Fig2} (a) Schematic of the Pt Hall bar geometry used. (b) In-plane magnetic field dependence of the magnetization $M$ of the bare single-crystal YIG. $B_c$ indicates the coercive field of 0.06mT.
} \end{figure} \section{Results and Discussion} \subsection{In-plane magnetic field dependence} First, the longitudinal resistance of the Pt strip was measured (using a current $I=100\upmu$A) while sweeping an externally applied in-plane magnetic field. For subsequent measurements the magnetic field was applied at different in-plane angles $\alpha$, as defined in Fig. \ref{fig:Fig3}(a). As the in-plane magnetization of YIG shows isotropic behaviour with a coercive field $B_c$ of only 0.06mT, its magnetization will easily align with the applied in-plane magnetic field. It was observed that the measured longitudinal resistance depends on the direction of the applied magnetic field, and thus on the magnetization direction of the YIG, as can be seen in Fig. \ref{fig:Fig3}(c) for the YIG/Pt [4nm] sample. For clarity, a background resistance $R_0$ of 1007-1008$\Omega$ was subtracted in the plots (the small change in $R_0$ between different measurements occurred due to thermal drift). A maximum in resistance was observed when the magnetic field was applied parallel to the direction of the charge current $J_e$ ($\alpha=0^{\circ}$). The resistance was minimized for the case where $B$ and $J_e$ were perpendicular ($\alpha=90^{\circ}$). These results are consistent with the SMR as described by Fig. \ref{fig:Fig1} and as observed by Nakayama et al.\citep{BauerSMR}. The measured resistivity for the longitudinal configuration can be formulated as\citep{BauerSMR} \begin{equation} \label{eq:Rlong} \rho_L = \rho_0 - \Delta{\rho} {m_y}^2 \end{equation} where $\rho_0$ is a constant resistivity offset, $\Delta{\rho}$ is the magnitude of the resistivity change, which can be calculated from the measurements, giving $\Delta{\rho}=2\times 10^{-10}\Omega$m, and $m_y$ is the component of the magnetization in the $y$-direction.
\begin{figure} \includegraphics[width=8.5cm]{Fig3Inplane} \caption{\label{fig:Fig3} Results of the in-plane magnetic field dependence of the resistance of the Pt strip with a thickness of 4nm. Configuration for (a) longitudinal and (b) transverse resistance measurements. (c) and (d) show the measured resistance of the Pt strip while applying an in-plane magnetic field at different angles $\alpha$, for the longitudinal and transverse configuration, respectively. $R_0$ has a magnitude of 1007--1008$\Omega$. (e) Thickness dependence of the measured magnetoresistance for YIG/Pt and SiO$_2$/Pt samples. $\Delta{R_L}$ is defined as the maximum difference in longitudinal resistance ($R_L(\alpha=0^{\circ})-R_L(\alpha=90^{\circ})$) and $R_0$ is $R_L$($\alpha=0^{\circ}$). The solid red line is a theoretical fit.\citep{BauerSMR,BauerTheorySMR} } \end{figure} The same experiments were repeated for the transverse resistance, where the resistance was measured perpendicular to the current path as shown in Fig. \ref{fig:Fig3}(b). In this configuration, too, the measured resistance was found to depend on the direction of the applied in-plane magnetic field, as shown in Fig. \ref{fig:Fig3}(d) for the YIG/Pt [4nm] sample. Here a maximum resistance is observed for $\alpha=45^{\circ}$, and a minimum for $\alpha=135^{\circ}$. The observed SMR resistivity for the transverse configuration can be formulated as\citep{BauerSMR} \begin{equation} \label{eq:Rtrans} \rho_{T} = \Delta{\rho} m_x m_y \end{equation} where $m_x$ is the component of the magnetization in the $x$-direction. From these measurements, a ratio $\Delta{R_L}/\Delta{R_T}\approx7$ is found, which is close to the expected ratio of 8 following from equations (\ref{eq:Rlong}) and (\ref{eq:Rtrans}). For both the longitudinal and the transverse configuration, a peak and/or dip is observed around +$B_c$ in all measurements. This can also be explained by the SMR described above.
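Equation (\ref{eq:Rtrans}) gives $\rho_T\propto m_x m_y=\tfrac{1}{2}\sin 2\alpha$ for an in-plane field, which places the extrema at $\alpha=45^{\circ}$ and $135^{\circ}$, as observed. A minimal numeric check (the amplitude is an illustrative value):

```python
import math

def r_trans(alpha_deg, delta_r=0.12):
    """Transverse SMR: R_T ~ dR * m_x * m_y = (dR/2) * sin(2*alpha)."""
    a = math.radians(alpha_deg)
    return delta_r * math.cos(a) * math.sin(a)

# Extremum search over integer angles; expected at 45 and 135 degrees.
angles = range(180)
print(max(angles, key=r_trans), min(angles, key=r_trans))
```

Note that the transverse amplitude is half the longitudinal one in resistivity, so the remaining factor in the expected $\Delta R_L/\Delta R_T$ ratio comes from the Hall-bar geometry.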
While sweeping the magnetic field (here from negative to positive $B$), the magnetization of the YIG will change direction when passing +$B_c$ (see Fig. \ref{fig:Fig2}(b)). Due to its in-plane shape anisotropy, the magnetization of the YIG will rotate fully in-plane towards $B$. This rotation of $M$ results in a change in measured resistance, passing the maximum and/or minimum possible resistance, which is observed as a peak and/or dip around +$B_c$ (when sweeping the field from positive to negative $B$, a peak/dip will occur at -$B_c$). Similar features were not observed by Huang et al.\citep{ChienPRL2012} and Nakayama et al.\citep{BauerSMR}. They do observe some peaks and dips, but these do not cover the maximum and minimum possible resistances, and thus do not show the full rotation of the magnetization in the plane. The absence of the full peaks and dips can be explained by the different magnetization behaviour of their YIG samples, which show higher coercive fields and magnetization switching that is probably dominated by non-uniform reversal processes. The resistance measurements for the in-plane magnetic fields were repeated for all different samples. A summary of these measurements is shown in Fig. \ref{fig:Fig3}(e). Here $\Delta{R_L}$ is defined as the difference between the maximum ($\alpha=0^{\circ}$) and minimum ($\alpha=90^{\circ}$) measured longitudinal resistance and $R_0$ is $R_L(\alpha=0^{\circ})$. The thickness-dependent measurements are in agreement with the data published by Huang et al.\citep{ChienPRL2012}, although those authors do not relate their results to SMR. The red line shows a theoretical fit\citep{BauerSMR,BauerTheorySMR} of the SMR signal. The position and width of the peak are mostly determined by the spin relaxation length $\lambda$ of Pt, and the magnitude of the signal by a combination of the spin-Hall angle $\theta_{SH}$ and the spin-mixing conductance $G_{\uparrow\downarrow}$ of the YIG/Pt interface.
For the shown fit, $\lambda=1.5$nm, $\theta_{SH}=0.08$, $G_{\uparrow\downarrow}=1.2\times10^{14}\Omega^{-1}$m$^{-2}$ and a thickness-dependent electrical conductivity as used in Ref.~\citep{CastelThicknessPt} were used. When YIG is replaced by SiO$_2$, the SMR signal disappears completely, showing that the effect is indeed caused by the magnetic YIG layer. More notably, the e-beam evaporated Pt layer on YIG showed only a very weak SMR signal ($\approx10^{-5}$). This suggests that the spin-mixing conductance (which is determined by the interface)\citep{HeinrichInterface} is an important parameter for the occurrence of SMR. \subsection{Out-of-plane magnetic field dependence} To further investigate the characteristics of the Pt layer, the transverse resistance was also measured while applying an out-of-plane magnetic field, as shown in Fig. \ref{fig:Fig4}(a). The Pt layers on the Si/SiO$_2$ substrate showed linear behaviour with transverse Hall resistances of 1.3, 0.9 and 0.3 $\pm0.05$m$\Omega$ for Pt thicknesses of 4, 8 and 35nm, respectively, at $B=300$mT. These results, due to the normal Hall effect, are in agreement with the theoretical description $R_{Hall}=R_HB/d$, where $R_H=-0.23\times10^{-10}$m$^3$/C is the Hall coefficient of Pt\citep{Hurd} and $d$ is the Pt thickness. For the YIG/Pt samples, results of the out-of-plane measurements are shown in Fig. \ref{fig:Fig4}(b). At fields lower than the saturation field, a large magnetic field dependence is observed. The magnitude of this dependence decreases with Pt thickness and disappears for the thickest Pt layer of 35nm. The occurrence of this magnetic field dependence can be explained by the SMR, using the results of the in-plane measurements shown in Fig. \ref{fig:Fig3}(d), because for applied fields lower than the saturation field, the magnetization of the YIG will still have an in-plane component.
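The normal-Hall-effect estimate $R_{Hall}=R_HB/d$ can be evaluated directly from the quoted Hall coefficient; the sketch below does so for the three thicknesses, giving values of the same order as the measured 1.3, 0.9 and 0.3m$\Omega$ (exact agreement is not expected when using a literature value of $R_H$).

```python
# Normal Hall resistance R_Hall = R_H * B / d for Pt at B = 300 mT.
R_H = -0.23e-10   # Hall coefficient of Pt, m^3/C (value quoted in the text)
B = 0.3           # applied field, T

for d_nm in (4, 8, 35):
    d = d_nm * 1e-9                 # thickness in metres
    r = abs(R_H) * B / d            # |R_Hall| in ohms
    print('%2d nm: %.2f mOhm' % (d_nm, r * 1e3))
```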
To investigate its effect on the transverse resistance measurements, the direction of the in-plane magnetization in the YIG should be known. To achieve this, the external magnetic field was applied with a small intended deviation $\phi$ from the out-of-plane $z$-direction towards the -$y$-direction, as defined in Fig. \ref{fig:Fig4}(a). This small deviation results in a small in-plane component of the applied field, which controls the magnetization direction of the YIG. Using this configuration, the sign of the signal due to the SMR can be checked against Fig. \ref{fig:Fig3}(d) by varying the direction of the in-plane component of the applied magnetic field. Fig. \ref{fig:Fig4}(c) shows the results of applying an external field with $\phi$ fixed at $-1^{\circ}$ for various angles $\theta$, where $\theta$ is an additional rotation from the $z$- towards the $x$-direction. According to the theory of the SMR, and comparing with the results shown in Fig. \ref{fig:Fig3}(d), a maximum additional resistance due to SMR is expected for an in-plane magnetic field with $\alpha=45^{\circ}$, which is the direction of the in-plane component when the field is applied with $\phi=-1^{\circ}$ and $\theta=1^{\circ}$. Similarly, for $\phi=\theta=-1^{\circ}$, the in-plane component of the field will be at $\alpha=135^{\circ}$, resulting in a minimum additional resistance. The results shown in Fig. \ref{fig:Fig4}(c) confirm that the sign and magnitude of the magnetic field dependence are consistent with the SMR observed for in-plane fields. The shape of the curve can be explained by the dependence of the resistance on the direction of $M$, as only the component of ${\sigma}$ parallel to $M$ (${\sigma_M}$) will be reflected. For out-of-plane applied fields, ${\sigma_M}$ is given by ${\sigma_M}={\sigma}\cos{\beta}\cos{\alpha}$, where $\beta$ is the angle by which $M$ is tilted out of the $x/y$-plane.
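This geometry can be sketched numerically. The snippet below (an illustration, not a fit to the data) combines the projection $\sigma_M=\sigma\cos\beta\cos\alpha$ with the Stoner--Wohlfarth relation $\beta=\arcsin(B/B_s)$ derived just below, giving a transverse SMR amplitude proportional to $\cos^2\beta=1-b^2$:

```python
import math

def rho_t_oop(b, delta_rho=1.0, sign=+1):
    """Transverse SMR resistivity for a reduced field b = B/B_s applied along z
    (with a small in-plane tilt selecting alpha = 45 or 135 degrees).

    beta = arcsin(b) is the out-of-plane tilt of M (Stoner-Wohlfarth); the SMR
    amplitude picks up cos(beta) twice, once from sigma_M and once from its
    in-plane projection, giving rho_T = +/- (1/2) * drho * (1 - b^2).
    """
    beta = math.asin(min(abs(b), 1.0))
    return sign * 0.5 * delta_rho * math.cos(beta) ** 2

# Full SMR amplitude at zero field, vanishing at saturation (b = 1).
print(rho_t_oop(0.0), rho_t_oop(0.5), rho_t_oop(1.0))
```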
Using the Stoner--Wohlfarth model,\citep{StonerModel} it can be derived that, for an applied field in the $z$-direction, $\beta=\arcsin(b)$, where $b=B/B_s$ and $B_s$ is the saturation field. Assuming that the transverse resistivity change due to SMR scales linearly with the in-plane component of ${\sigma_M}$ ($\sigma_{M,in-plane}=\sigma_M\cos{\beta}$), this gives (for applied fields close to the $z$-direction and $\phi=\theta=\pm 1^{\circ}$) \begin{equation} \rho_{T}=\pm \frac{1}{2} \Delta{\rho} (1-b^2) \end{equation} Two fits using this equation are shown in Fig. \ref{fig:Fig4}(d). For both fitted curves, an assumed linear background resistance, as indicated by the dotted red line, is also added. The derived fits are in good agreement with the measured data for applied fields below the saturation field, which confirms the presence of SMR and its dependence on the magnetization direction,\citep{BauerTheorySMR,AlthammerSMR,KleinSMR} also for out-of-plane applied fields. \begin{figure} \includegraphics[width=8.5 cm]{Fig4Outplane} \caption{\label{fig:Fig4} Results of the out-of-plane magnetic field dependence of the transverse resistance. (a) Configuration for the transverse resistance measurements. $\phi$ is defined as a rotation from the $z$- towards the -$y$-direction, whereas $\theta$ gives a rotation from the $z$- towards the $x$-direction. (b) Magnetic field dependence of the transverse resistance for different thicknesses of Pt on top of YIG, for $\phi=-1^{\circ}$ and $\theta=1^{\circ}$. (c) Dependence of the transverse resistance on $\theta$, with $\phi$ fixed at $-1^{\circ}$, showing the effect of the direction of the in-plane component of the applied magnetic field on the observed signal. (d) Theoretical fits of the SMR signal for out-of-plane applied fields lower than the saturation field, assuming a linear background resistance, as shown by the dotted red line. For all shown measurements, a constant background resistance of 10--900m$\Omega$ is subtracted.
} \end{figure} For the out-of-plane measurements, too, a peak and/or dip is observed at zero applied field. These peaks and dips have the same origin as those observed for the in-plane measurements, namely the rotation of the magnetization in the plane towards the new magnetic field direction. For applied magnetic fields above the saturation field no in-plane component of $M$ is left, but still a small magnetic field dependence is observed. At $B=300$mT, transverse resistances of 10.1, 5.1, 1.5 and 0.3 $\pm0.05$m$\Omega$ were measured for Pt thicknesses of 3, 4, 8 and 35nm, respectively. Thus for thin Pt layers, at applied fields above the saturation field, an increased transverse resistance is observed compared to the SiO$_2$/Pt sample. Possible origins of this difference might be related to the imaginary part of the spin-mixing conductance, or to the (spin-) anomalous Hall effect. \subsection{Comparison of e-beam evaporated and dc sputtered Pt} In addition to the thickness and angular dependence of the SMR signal, the difference in signal between two deposition techniques, e-beam evaporation and dc sputtering, was investigated. It was observed that the e-beam evaporated Pt layer showed much weaker SMR signals than the sputtered layers. For comparison, Fig. \ref{fig:Fig5}(a) shows the out-of-plane transverse measurement for both the sputtered [4nm] and evaporated [5nm] Pt layers. The value of the signal at applied fields higher than the saturation field is the same, but the additional signal, which is ascribed to SMR, is smaller by a factor of 7. As the evaporated Pt layer showed lower SMR signals than the sputtered Pt layers, the effect of using a different deposition technique on the spin pumping/ISHE signal was also investigated. By using an rf magnetic field, the magnetization of the YIG was brought into resonance. During resonance, a spin current will be pumped into the Pt layer, where it will be converted into a charge current by the ISHE.
A more detailed description of the measurement technique can be found in Ref.~\citep{CastelPRB}. Fig. \ref{fig:Fig5}(b) shows a measurement of the spin pumping voltage for both e-beam evaporated Pt and dc sputtered Pt on YIG. An rf frequency of 1.4GHz and a power of 10mW were used to excite the magnetization precession in the YIG. The same measurement was repeated for different rf frequencies between 0.6 and 4GHz, all at a power of 10mW (not shown). For all measurements, the spin pumping signal of the evaporated Pt layer was found to be a factor of 12 smaller than the signal of the sputtered layer. This difference in signal magnitude reflects a difference in the YIG/Pt interface between the two deposition techniques, implying a probable difference in the spin-mixing conductance. As e-beam evaporation is a much softer deposition technique than dc sputtering, the spin-mixing conductance at the YIG/Pt interface might be lower in the case of evaporation, resulting in less spin pumping.\citep{HeinrichInterface} The structure of the Pt layers might also be different, resulting in different spin-Hall angles and/or different spin diffusion lengths. \begin{figure}[h] \includegraphics[width=8.5 cm]{Fig5SpinPumping} \caption{\label{fig:Fig5} Comparison of (a) transverse resistance for an out-of-plane applied magnetic field, and (b) spin pumping/ISHE signal (using an rf frequency of 1.4GHz at a power of 10mW) for Pt on top of YIG, deposited by e-beam evaporation (E) and dc sputtering (S). } \end{figure} \section{Summary} In summary, the SMR in Pt layers with different thicknesses [3, 4, 8 and 35nm], deposited on top of YIG, was investigated for both in-plane and out-of-plane applied magnetic fields. In-plane magnetic field scans clearly show the presence of SMR for the transverse as well as the longitudinal configuration. Out-of-plane measurements present a magnetic field dependence which can also be assigned to the SMR.
The sign and magnitude of the SMR signal are shown to be determined by the magnetization direction of the YIG. Further, thickness-dependence experiments show that the SMR signal decreases in magnitude with increasing Pt thickness. No SMR signals were observed for SiO$_2$/Pt samples. For Pt layers deposited by e-beam evaporation, instead of dc sputtering, the observed SMR signals are reduced by a factor of 7. Spin pumping experiments also show reduced signals for e-beam evaporated Pt compared to sputtered Pt. The differences in spin pumping and SMR signals show the possible importance of the YIG/Pt interface, and, connected to this, the spin-mixing conductance, for this kind of experiment. \section*{Acknowledgements} We would like to acknowledge B. Wolfs, M. de Roosz and J. G. Holstein for technical assistance and prof. dr. ir. G. E. W. Bauer for useful comments regarding the explanation of the measurements. This work is part of the research program (Magnetic Insulator Spintronics) of the Foundation for Fundamental Research on Matter (FOM) and is supported by NanoNextNL, a micro and nanotechnology consortium of the Government of the Netherlands and 130 partners, by NanoLab NL and the Zernike Institute for Advanced Materials.
\section{Background and overview} The \fbp, that is, the problem of classifying \sgps\ according to the finite basability of their identities, has been intensively explored since the 1960s. Since the 1970s, the same problem has been investigated for \sgps\ endowed with an additional unary operation $x\mapsto x^*$; such structures are commonly called \emph{unary \sgps}. If $\mathcal{S}=\langle S,\cdot,{}^*\rangle$ is a unary \sgp, then the (plain) \sgp\ $\langle S,\cdot\rangle$ is called the (\emph{\sgp}) \emph{reduct} of $\mathcal{S}$. It is quite natural to ask how the answer to the \fbp\ for a given unary \sgp\ relates to the finite basability of the identities of its reduct. The question turns out to be somewhat delicate. On the one hand, when we enhance the vocabulary of an equational language by adding a unary operation, the expressive power of the language increases. Hence $\mathcal{S}$ usually has more identities than $\langle S,\cdot\rangle$, so that the former may be more likely to be \nfb. On the other hand, the inference power of the language increases too. Hence one can imagine a situation in which some identity of $\langle S,\cdot\rangle$ does not follow from an identity system $\Sigma$ as a plain identity but follows from $\Sigma$ when treated as a unary identity. This indicates that $\mathcal{S}$ may be \fb\ even if $\langle S,\cdot\rangle$ is not. The cumulative effect of the trade-off between increased expressivity and increased inference power is hard to predict in general, and both possible outcomes indeed occur. This means that there exist unary \sgps, even groups $\mathcal{G}=\langle G;\cdot,{}^{-1}\rangle$ with inversion as the unary operation, such that $\mathcal{G}$ is \fb\ [\nfb] as a group while its reduct $\langle G;\cdot\rangle$ is \nfb\ [respectively, \fb] as a plain \sgp. See \cite[Section~2]{Volkov:2001} for concrete examples (known since the 1970s), references and a discussion.
Much attention has been paid to the restriction of the \fbp\ to the class of \fss, in the plain as well as the unary setting, see, e.g., the survey \cite{Volkov:2001}. Therefore it appears a bit surprising that the above question about the relation between the finite basability of a unary \sgp\ and of its reduct has not been systematically explored in the realm of \fss. To the best of our knowledge, the first example of a \nfb\ \fus\ whose reduct is \fb\ was constructed only in~1998, see~\cite{Lawrence&Willard:1998}. The unary operation used in~\cite{Lawrence&Willard:1998} was rather ad hoc, and similar examples with well behaved unary operations (including an example of a \nfb\ \fis\ with \fb\ reduct) have only recently appeared in \cite{Jackson&Volkov:2010}. Examples of the `opposite' kind (of \fb\ \fuss\ with \nfb\ reducts) are not yet known. For \fss, the following strengthening of the property of being \nfb\ has been successfully studied. Recall that a variety $\mathbf{V}$ of [unary] \sgps\ is called \emph{locally finite} if every \fg\ [unary] \sgp\ in $\mathbf{V}$ is finite. A finite [unary] \sgp\ is said to be \emph{\infb} (INFB for short) if it is not contained in any locally finite \fb\ variety of [unary] \sgps. Since the variety generated by a finite [unary] \sgp\ is known to be locally finite, an INFB [unary] \sgp\ certainly is \nfb. In fact, the property of being INFB is much stronger than the property of being \nfb\ and also behaves more regularly, see \cite{Volkov:2001} for details. Sapir~\cite{Sapir:1987} has given an efficient (in the algorithmic sense of the word) description of INFB \sgps. INFB unary \sgps\ have been investigated in~\cite{Dolinka:2010,ADV:2012} where some sufficient and some necessary conditions for a \fis\ to be INFB have been found. Again, in this situation it is quite natural to ask what happens when one passes from a \fus\ to its reduct. 
The aforementioned example of~\cite{Lawrence&Willard:1998} is in fact INFB so that in general an INFB unary \sgp\ may have a \fb\ reduct. This is however impossible for a \fis; indeed, it is easy to verify (see Lemma~\ref{easy} below) that the reduct of an INFB \is\ must be INFB. The converse is not true as first observed in~\cite{Sapir:1993}, and it is this circumstance that gives rise to the specific question addressed in the present paper: when does an involution $x\mapsto x^*$ defined on an INFB \sgp\ $\langle S,\cdot\rangle$ preserve the property of being INFB in the sense that the resulting \is\ $\mathcal{S}=\langle S,\cdot,{}^*\rangle$ is INFB as a unary \sgp? We show (Theorem~\ref{twisted}) that this is the case whenever the variety generated by $\mathcal{S}$ contains a certain 3-element \is\ $\TSL$ (twisted semilattice). This result has several applications: first, we give new, easy and uniform proofs for some examples of INFB \iss\ found in~\cite{ADV:2012}; second, we exhibit a further series of INFB \iss. We also show (Corollary~\ref{characterization}) that if $\langle S,\cdot\rangle$ is a finite regular \sgp, then the presence of the 3-element twisted semilattice $\TSL$ in the variety generated by $\mathcal{S}$ is not only sufficient but also necessary for the property of being INFB to persist. Combined with Sapir's result, this leads to an efficient description of regular INFB \iss. \section{Preliminaries} \label{sec:preliminaries} We assume the reader's acquaintance with basic concepts of universal algebra such as the notion of a variety and the HSP-theorem, see, e.g., \cite[Chapter~II]{BuSa81}. Section~\ref{sec:regular} also requires some knowledge of Green's relations, cf.~\cite[Chapter~2]{how}. 
A unary \sgp\ $\mathcal{S}=\langle S,\cdot,{}^*\rangle$ is called an \emph{\is} if it satisfies the identities \begin{equation} \label{eq:invlution} (xy)^*=y^*x^* \quad \text{ and }\quad (x^*)^*=x, \end{equation} in other words, if the unary operation $x\mapsto x^*$ is an involutory anti-automorphism of the reduct $\langle S,\cdot\rangle$. The \emph{free \is} $\FI(X)$ on a given alphabet $X$ can be constructed as follows. Let $\overline{X}=\{x^*\mid x\in X\}$ be a disjoint copy of $X$. We refer to the elements of $X$ as \emph{plain letters} and to the elements of $\overline{X}$ as \emph{starred letters}. Define $(x^*)^*=x$ for all $x^*\in \overline{X}$. Then $\FI(X)$ is the free \sgp\ $(X\cup\overline{X})^+$ endowed with the involution defined by $$(x_1\cdots x_m)^* = x_m^*\cdots x_1^*$$ for all $x_1,\dots,x_m\in X\cup \overline{X}$. We refer to elements of $\FI(X)$ as \emph{involutory words over $X$} while elements of the free semigroup $X^+$ will be referred to as \emph{plain words over $X$}. If an \is\ $\mathcal{T}=\langle T,\cdot,{}^*\rangle$ is generated by a set $Y\subseteq T$, then every element in $\mathcal{T}$ can be represented by an involutory word over $Y$ and thus by a plain word over $Y\cup\overline{Y}$ where $\overline{Y}=\{y^*\mid y\in Y\}$. Hence the reduct $\langle T,\cdot\rangle$ is generated by the set $Y\cup\overline{Y}$; in particular, $\mathcal{T}$ is \fg\ if and only if so is $\langle T,\cdot\rangle$. This observation immediately leads to the following fact already mentioned in the introduction. \begin{Lemma} \label{easy} If an \is\ $\mathcal{S}=\langle S,\cdot,{}^*\rangle$ is \infb, then so is its reduct $\langle S,\cdot\rangle$. \end{Lemma} \begin{proof} Arguing by contradiction, assume that $\langle S,\cdot\rangle$ is not INFB. Then $\langle S,\cdot\rangle$ belongs to a locally finite plain \sgp\ variety defined by a finite identity system $\Sigma$. 
Consider the variety $\mathbf{V}$ of \iss\ defined by the identities \eqref{eq:invlution} and $\Sigma$. Clearly, $\mathbf{V}$ is \fb\ and $\mathcal{S}\in\mathbf{V}$. If $\mathcal{T}=\langle T,\cdot,{}^*\rangle$ is a \fg\ \is\ from $\mathbf{V}$ then the reduct $\langle T,\cdot\rangle$ is a \fg\ plain \sgp\ by the observation preceding the formulation of the lemma. Since the reduct satisfies the identities in $\Sigma$ and $\Sigma$ defines a locally finite plain \sgp\ variety, we conclude that the base set $T$ is finite. Hence the variety $\mathbf{V}$ is locally finite and $\mathcal{S}$ belongs to a locally finite \fb\ variety, a contradiction. \end{proof} As mentioned, the converse of Lemma~\ref{easy} is not true in general. For an example, consider the well known \emph{Brandt monoid} $\langle B_2^1,\cdot\rangle$, where $B_2^1$ is the set of the following six integer $2\times 2$-matrices: $$\begin{pmatrix} 0 & 0\\ 0 & 0\end{pmatrix},\ \begin{pmatrix} 1 & 0\\ 0 & 0\end{pmatrix},\ \begin{pmatrix} 0 & 1\\ 0 & 0\end{pmatrix},\ \begin{pmatrix} 0 & 0\\ 1 & 0\end{pmatrix},\ \begin{pmatrix} 0 & 0\\ 0 & 1\end{pmatrix},\ \begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix},$$ and the binary operation $(A_1,A_2)\mapsto A_1\cdot A_2$ is the usual matrix multiplication. It is known \cite[Corollary~6.1]{sapirburnside} that the Brandt monoid is INFB (this was in fact the very first example of an INFB \sgp). The Brandt monoid admits a natural involution, namely, the usual matrix transposition $A\mapsto A^T$. However, the \is\ $\langle B_2^1,\cdot,{}^T\rangle$ is not INFB as shown in~\cite{Sapir:1993}. 
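As a small illustration (not part of the original argument), the closure of $B_2^1$ under matrix multiplication and transposition can be checked directly from the six integer matrices listed above:

```python
import itertools

# The six matrices of the Brandt monoid B_2^1, as tuples of rows.
B = [((0, 0), (0, 0)), ((1, 0), (0, 0)), ((0, 1), (0, 0)),
     ((0, 0), (1, 0)), ((0, 0), (0, 1)), ((1, 0), (0, 1))]

def mul(a, b):
    """Ordinary 2x2 matrix multiplication."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def transpose(a):
    return tuple(tuple(a[j][i] for j in range(2)) for i in range(2))

# B_2^1 is closed under matrix multiplication ...
assert all(mul(a, b) in B for a, b in itertools.product(B, B))
# ... and under transposition, so <B_2^1, ., ^T> is an involutory monoid.
assert all(transpose(a) in B for a in B)
```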
Further examples can be found in~\cite{ADV:2012}: if $\mathcal{K}$ is a finite field and $\mathrm{M}_n(\mathcal{K})$ stands for the set of all $n\times n$-matrices over $\mathcal{K}$, then the \sgp\ $\langle\mathrm{M}_n(\mathcal{K}),\cdot\rangle$ is INFB for any $n\ge 2$ by \cite[Corollary~6.2]{sapirburnside} while the \is\ $\langle\mathrm{M}_2(\mathcal{K}),\cdot,{}^T\rangle$ is not INFB if the number of elements in $\mathcal{K}$ leaves remainder~3 when divided by~4. Thus, not every involution defined on an INFB \sgp\ preserves the property of being INFB, and we are working towards a classification of `INFB-preserving' involutions. Our main tool is the following result from~\cite{ADV:2012}. Recall the notions that appear in its formulation. Let $x_1,x_2,\dots,x_n,\dots$ be a sequence of letters. The sequence $\{Z_n\}_{n=1,2,\dots}$ of \emph{Zimin words} is defined inductively by $Z_1=x_1$, $Z_{n+1}=Z_nx_{n+1}Z_n$. We say that an involutory word $v$ is an \emph{involutory isoterm for a unary semigroup $\mathcal{S}$} if the only involutory word $v'$ such that $\mathcal{S}$ satisfies the \is\ identity $v=v'$ is the word $v$ itself. \begin{Thm}[{\mdseries\cite[Theorem~2.3]{ADV:2012}}] \label{Theorem 2.2} Let $\mathcal{S}$ be a \fis. If all Zimin words are involutory isoterms for $\mathcal{S}$, then $\mathcal{S}$ is \infb. \end{Thm} \section{Main result and its applications} \label{sec:main} Recall that \sgps\ satisfying both $xy=yx$ and $x^2=x$ are called \emph{semilattices}. An \is\ $\mathcal{S}=\langle S,\cdot,{}^*\rangle$ whose reduct $\langle S,\cdot\rangle$ is a semilattice with 0 is said to be a \emph{twisted semilattice} if 0 is the only fixed point of the involution $x\mapsto x^*$. This class of \iss\ was first considered in~\cite{Fajt}.
It is easy to see that the minimum non-trivial object in this class is the 3-element twisted semilattice $\TSL=\langle\{e,f,0\},\cdot,{}^*\rangle$ in which $e^2=e$, $f^2=f$ and all other products are equal to $0$, while the unary operation is defined by $e^*=f$, $f^*=e$, and $0^*=0$. If $ \mathcal{S}$ is an \is, we denote by $\var\mathcal{S}$ the variety generated by $\mathcal{S}$. \begin{Thm}\label{sufficientINFB} \label{twisted} Let $\mathcal{S}=\langle S,\cdot,{}^*\rangle$ be a \fis\ such that $\TSL\in\var\mathcal{S}$. If the reduct $\langle S,\cdot\rangle$ is \infb, then so is $\mathcal{S}$. \end{Thm} \begin{proof} By Theorem~\ref{Theorem 2.2} we only have to show that $\mathcal{S}$ satisfies no non-trivial \is\ identity of the form $Z_n=z$. If $z$ is a plain word, we can refer to~\cite[Proposition~7]{sapirburnside} according to which the INFB \sgp\ $\langle S,\cdot\rangle$ satisfies no non-trivial plain \sgp\ identity of the form $Z_n=z$. Now suppose that $\mathcal{S}$ satisfies an identity $Z_n=z$ such that the involutory word $z$ is not a plain word. This means that $z$ contains a starred letter. Since $\TSL\in\var\mathcal{S}$, the identity $Z_n=z$ holds in $\TSL$. Substitute the element $e$ of $\TSL$ for all plain letters occurring in $Z_n$ and $z$. Since $e^2=e$, the value of the word $Z_n$ under this substitution equals $e$. On the other hand, since $z$ contains a starred letter, the value of $z$ is a product involving $e^*=f$, and every such product is equal to either $f$ or 0. This is a contradiction. \end{proof} As for applications of Theorem~\ref{twisted}, we first give simplified and uniform proofs for two important results from \cite{ADV:2012}. 
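Before turning to these applications, the substitution argument in the proof of Theorem~\ref{twisted} can be replayed concretely. The sketch below uses an illustrative string encoding of $\TSL$ and generates Zimin words by the inductive definition $Z_1=x_1$, $Z_{n+1}=Z_nx_{n+1}Z_n$:

```python
from functools import reduce

# Multiplication and involution of the 3-element twisted semilattice T:
# e^2 = e, f^2 = f, all other products equal 0; e* = f, f* = e, 0* = 0.
def mul(a, b):
    return a if (a == b and a != '0') else '0'

def star(a):
    return {'e': 'f', 'f': 'e', '0': '0'}[a]

def zimin(n):
    """Zimin words as lists of letters: Z_1 = x1, Z_{n+1} = Z_n x_{n+1} Z_n."""
    w = ['x1']
    for k in range(2, n + 1):
        w = w + ['x%d' % k] + w
    return w

# Substituting e for every letter of Z_n yields e, since e is idempotent ...
assert reduce(mul, ['e'] * len(zimin(4))) == 'e'
# ... while any product containing the starred value f = e* collapses to f or 0,
# so T violates every identity Z_n = z in which z contains a starred letter.
assert reduce(mul, ['e', star('e'), 'e']) == '0'
```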
To start with, consider the \emph{twisted Brandt monoid} $\TB=\langle B_2^1,\cdot,{}^D\rangle$ arising when one endows the Brandt monoid $\langle B_2^1,\cdot\rangle$ with the unary operation $A\mapsto A^D$ that fixes the matrices $\left(\begin{smallmatrix} 0 & 0\\ 0 & 0\end{smallmatrix}\right),\ \left(\begin{smallmatrix} 0 & 1\\ 0 & 0\end{smallmatrix}\right),\ \left(\begin{smallmatrix} 0 & 0\\ 1 & 0\end{smallmatrix}\right),\ \left(\begin{smallmatrix} 1 & 0\\ 0 & 1\end{smallmatrix}\right)$ and swaps each of the matrices $\left(\begin{smallmatrix}1 & 0\\ 0 & 0\end{smallmatrix}\right),\ \left(\begin{smallmatrix} 0 & 0\\ 0 & 1\end{smallmatrix}\right)$ with the other one. We notice that this unary operation is just the reflection with respect to the secondary diagonal (from the top right to the bottom left corner). The reflection (called the \emph{skew transposition}) makes sense for every square matrix and is in fact an involution of the \sgp\ $\langle\mathrm{M}_n(\mathcal{K}),\cdot\rangle$; this follows from the observation that for every matrix $A\in\mathrm{M}_n(\mathcal{K})$, one has $A^D=JA^TJ$, where $J$ is the $n\times n$-matrix with 1s on the secondary diagonal and 0s elsewhere. Moreover suppose that the field $\mathcal{K}$ is such that there exists a matrix $R\in\mathrm{M}_n(\mathcal{K})$ satisfying $R^T=R$ and $R^2=J$ (this happens, e.g., when the characteristic of $\mathcal{K}$ is not $2$ and square roots of ${-1}$ and ${2}$ do exist in $\mathcal{K}$). Then the conjugation map $A\mapsto A\psi:= R^{-1}AR$ satisfies $(A^D)\psi=(A\psi)^T$ and hence is an isomorphism between the \iss\ $\langle\mathrm{M}_n(\mathcal{K}),\cdot,{}^D\rangle$ and $\langle\mathrm{M}_n(\mathcal{K}),\cdot,{}^T\rangle$. Clearly, the set $B_2^1$ can be considered as a subset of $\mathrm{M}_2(\mathcal{K})$, and as such it is closed under both the usual transposition and the skew one. 
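The identity $A^D=JA^TJ$ and the anti-automorphism property of the skew transposition can be verified with a small sketch (illustrative only, using random integer matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# J has 1s on the secondary diagonal and 0s elsewhere.
J = np.fliplr(np.eye(n, dtype=int))

def skew_t(a):
    """Skew transposition: reflection in the secondary diagonal."""
    return np.fliplr(np.flipud(a)).T

A = rng.integers(0, 10, (n, n))
B = rng.integers(0, 10, (n, n))

# A^D = J A^T J ...
assert np.array_equal(skew_t(A), J @ A.T @ J)
# ... and (AB)^D = B^D A^D with (A^D)^D = A, so A -> A^D is an
# involutory anti-automorphism of the matrix semigroup.
assert np.array_equal(skew_t(A @ B), skew_t(B) @ skew_t(A))
assert np.array_equal(skew_t(skew_t(A)), A)
```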
Therefore it appears a bit surprising that the involutory sub\sgps\ $\TB=\langle B_2^1,\cdot,{}^D\rangle$ and $\langle B_2^1,\cdot,{}^T\rangle$ of the (isomorphic) \iss\ $\langle\mathrm{M}_2(\mathcal{K}),\cdot,{}^D\rangle$ and $\langle\mathrm{M}_2(\mathcal{K}),\cdot,{}^T\rangle$, respectively, turn out to be so different. Indeed, $\langle B_2^1,\cdot,{}^T\rangle$ is not INFB (see Section~\ref{sec:preliminaries}) while $\TB$ is, as the following corollary reveals. \begin{Cor}[{\mdseries\cite[Corollary 2.7]{ADV:2012}}] \label{twisted Brandt} The twisted Brandt monoid $\TB$ is inherently \nfb. \end{Cor} \begin{proof} As already mentioned, the reduct $\langle B_2^1,\cdot\rangle$ of $\TB$ is INFB by~\cite[Corollary~6.1]{sapirburnside}. The matrices $\left(\begin{smallmatrix}1 & 0\\ 0 & 0\end{smallmatrix}\right)$, $\left(\begin{smallmatrix}0 & 0\\ 0 & 1\end{smallmatrix}\right)$, and $\left(\begin{smallmatrix}0 & 0\\ 0 & 0\end{smallmatrix}\right)$ form an involutory sub\sgp\ in $\TB$ and, obviously, this sub\sgp\ is isomorphic to the 3-element twisted semilattice $\TSL$. Thus, Theorem~\ref{twisted} applies. \end{proof} Now consider the matrix \iss\ $\langle\mathrm{M}_n(\mathcal{K}),\cdot,{}^T\rangle$ where $\mathcal{K}$ is a finite field. \begin{Cor}[{\mdseries\cite[Theorems 3.9 and 3.10]{ADV:2012}}] \label{matrices} The \is\ $\langle\mathrm{M}_n(\mathcal{K}),\cdot,{}^T\rangle$, where $\mathcal{K}$ is a finite field, is \infb\ if $n\ge 3$ or if $n=2$ and the number of elements in $\mathcal{K}$ is not of the form $4k+3$. \end{Cor} \begin{proof} The reduct $\langle\mathrm{M}_n(\mathcal{K}),\cdot\rangle$ is INFB for each $n\ge2$ and each finite field $\mathcal{K}$ by~\cite[Corollary~6.2]{sapirburnside}. To invoke Theorem~\ref{twisted}, it only remains to show that, under the conditions of the corollary, the 3-element twisted semilattice $\TSL$ belongs to the variety $\var\langle\mathrm{M}_n(\mathcal{K}),\cdot,{}^T\rangle$. First let $n\ge3$.
By the Chevalley--Warning theorem \cite[Corollary~2 in \S1.2]{Serre}, the field $\mathcal{K}$ contains some elements $x,y$ satisfying $1+x^2+y^2=0$. Then the $n\times n$-matrices $$e=\begin{pmatrix} 1 & 0 & 0 & \cdots & 0\\ x & 0 & 0 & \cdots & 0\\ y & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots &\ddots & \vdots\\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix},\ \ f= \begin{pmatrix} 1 & x & y & \cdots & 0\\ 0 & 0 & 0 & \cdots & 0\\ 0 & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots &\ddots & \vdots\\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix},\ g= \begin{pmatrix} 1 & x & y & \cdots & 0\\ x & x^2 & xy & \cdots & 0\\ y & xy & y^2 & \cdots & 0\\ \vdots & \vdots & \vdots &\ddots & \vdots\\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix}$$ satisfy $$e^2=e,\ f^2=f,\ ef=g,\ fe=0,\ e^T=f,\ f^T=e,\ \text{ and } g^T=g.$$ Therefore the set $\{e,f,g,0\}$ forms an involutory sub\sgp\ in $\langle\mathrm{M}_n(\mathcal{K}),\cdot,{}^T\rangle$, the set $\{g,0\}$ is an ideal of this sub\sgp\ and is closed under transposition. It remains to observe that the Rees quotient of the \is\ $\langle\{e,f,g,0\},\cdot,{}^T\rangle$ over the ideal $\{g,0\}$ is isomorphic to the 3-element twisted semilattice $\TSL$. Now let $n=2$ and let the number of elements in $\mathcal{K}$ be not of the form $4k+3$. Then the field $\mathcal{K}$ contains a square root of ${-1}$, see, e.g., \cite[Theorem~3.75]{LidlNiederreiter}. Now the argument of the previous paragraph applies, with the $2\times 2$-matrices $$e'=\begin{pmatrix} 1 & 0 \\ x & 0 \end{pmatrix},\ \ f'= \begin{pmatrix} 1 & x\\ 0 & 0 \end{pmatrix},\ \text{ and }\ g'= \begin{pmatrix} 1 & x \\ x & -1 \end{pmatrix}$$ in the roles of $e,f$, and $g$, respectively, where $x$ denotes some fixed square root of $-1$. \end{proof} We could have continued along the same lines to show that in fact all examples of INFB \iss\ found in~\cite{Dolinka:2010,ADV:2012} can be similarly deduced from Theorem~\ref{twisted}. 
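The matrix relations used in the proof can be checked numerically. The sketch below is an illustration only, taking $\mathcal{K}=\mathrm{GF}(3)$, $n=3$ and $x=y=1$ (indeed $1+1^2+1^2=3\equiv 0 \bmod 3$):

```python
# Verify e^2 = e, f^2 = f, fe = 0, ef = g and the transposition relations
# over GF(3) with n = 3 and x = y = 1.
P, N = 3, 3
x, y = 1, 1

def mul(a, b):
    """Matrix multiplication modulo P."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(N)) % P
                       for j in range(N)) for i in range(N))

def T(a):  # ordinary transposition
    return tuple(tuple(a[j][i] for j in range(N)) for i in range(N))

zero = tuple(tuple(0 for _ in range(N)) for _ in range(N))
col = (1, x, y)
e = tuple(tuple(col[i] if j == 0 else 0 for j in range(N)) for i in range(N))
f = T(e)
g = mul(e, f)
g_formula = tuple(tuple(v % P for v in row)
                  for row in ((1, x, y), (x, x * x, x * y), (y, x * y, y * y)))

assert mul(e, e) == e and mul(f, f) == f          # e, f are idempotent
assert mul(f, e) == zero                          # fe = 0 needs 1 + x^2 + y^2 = 0
assert g == g_formula                             # ef matches the matrix g above
assert T(e) == f and T(f) == e and T(g) == g      # transposition swaps e and f
```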
However, we think that Corollaries~\ref{twisted Brandt} and~\ref{matrices} are representative enough. Now we present a new application. Let $\mathcal{K}$ be a finite field and let $\mathrm{T}_n(\mathcal{K})$ stand for the set of all upper-triangular $n\times n$-matrices over $\mathcal{K}$. The set $\mathrm{T}_n(\mathcal{K})$ forms an \is\ under the usual matrix multiplication and the skew transposition. The following result classifies all INFB \iss\ of the form $\langle\mathrm{T}_n(\mathcal{K}),\cdot,{}^D\rangle$. \begin{Thm} \label{triangular} The \is\ $\langle\mathrm{T}_n(\mathcal{K}),\cdot,{}^D\rangle$, where $\mathcal{K}$ is a finite field, is \infb\ if and only if $n\ge 4$ and $\mathcal{K}$ contains at least $3$ elements. \end{Thm} \begin{proof} In~\cite{Goldberg&Volkov:2003} it is shown that the reduct $\langle\mathrm{T}_n(\mathcal{K}),\cdot\rangle$ is INFB if and only if $n\ge 4$ and $\mathcal{K}$ contains at least $3$ elements. Therefore, the `only if' part of our theorem follows from Lemma~\ref{easy}, and the `if' part will follow from Theorem~\ref{twisted} once we verify that $\TSL\in\var\langle\mathrm{T}_n(\mathcal{K}),\cdot,{}^D\rangle$. Indeed, for every $n\ge2$ the matrix units $e_{11}$ and $e_{nn}$ belong to $\mathrm{T}_n(\mathcal{K})$ and satisfy $$e_{11}^2=e_{11},\ e_{nn}^2=e_{nn},\ e_{11}e_{nn}=e_{nn}e_{11}=0,\ e_{11}^D=e_{nn},\ \text{ and } e_{nn}^D=e_{11}.$$ Hence the set $\{e_{11},e_{nn},0\}$ forms an involutory sub\sgp\ in $\langle\mathrm{T}_n(\mathcal{K}),\cdot,{}^D\rangle$ and this involutory sub\sgp\ is isomorphic to the 3-element twisted semilattice $\TSL$. \end{proof} Observe that in~\cite{Goldberg&Volkov:2003} it is shown that for any $n$ and $\mathcal{K}$, the Brandt monoid $\langle B_2^1,\cdot\rangle$ does not belong to the \sgp\ variety generated by the \sgp\ $\langle\mathrm{T}_n(\mathcal{K}),\cdot\rangle$.
Hence the twisted Brandt monoid $\TB$ does not belong to the \is\ variety $\var\langle\mathrm{T}_n(\mathcal{K}),\cdot,{}^D\rangle$. Thus, Theorem~\ref{triangular} provides a series of examples of INFB \iss\ whose varieties do not contain $\TB$. No such examples were previously known. \section{Regular semigroups} \label{sec:regular} In this section we show that the presence of the 3-element twisted semilattice $\TSL$ in the variety generated by a \fis\ $\mathcal{S}$ is (not only sufficient but also) necessary for $\mathcal{S}$ to inherit the property of being INFB from its \sgp\ reduct, provided that $\mathcal{S}$ is regular. As a preliminary result we present a criterion for whether or not $\TSL$ belongs to $\var\mathcal{S}$ (Corollary~\ref{TSLinvarS}). We shall use two classical results concerning Green's relations, the first of which is often referred to as the Lemma of Miller and Clifford (see \cite[Proposition~2.3.7]{how}), while the property formulated in the second one is usually called the \emph{stability} of Green's relations (see \cite[Proposition~3.1.4 (2)]{pin}). \begin{Lemma} \label{Miller&Clifford} \begin{enumerate} \item Let $a,b$ be elements of a $\mathscr{D}$-class of an arbitrary \sgp. Then $ab\in R_a\cap L_b$ if and only if $L_a\cap R_b$ contains an idempotent. \item Let $S$ be a finite semigroup and $a,b\in S$. Then $a\Jc ab$ implies $a\Rc ab$ and $b\Jc ab$ implies $b \Lc ab$. \end{enumerate} \end{Lemma} The above-mentioned criterion for membership of $\TSL$ in $\var\mathcal{S}$ is clarified by the following key result. \begin{Prop}\label{alternative} For a finite involutory semigroup $\mathcal{S}$ exactly one of the following two assertions is true. \begin{enumerate} \item[(A)] There exists an idempotent $e$ of $\mathcal{S}$ satisfying $e\mathrel{{>}_{\!\!\!\Jc}} e^*e$. \item[(B)] There exists a positive integer $N$ such that $\mathcal{S}$ satisfies the identity \begin{equation}\label{identityfor(B)} x^N=(x^N(x^N)^*)^Nx^N.
\end{equation} \end{enumerate} \end{Prop} \begin{proof} It is clear that the conditions (A) and (B) exclude each other. Let us assume that the assertion (A) does not hold for $\mathcal{S}=\langle S,\cdot,{}^*\rangle$. We have to prove that $\mathcal{S}$ satisfies (B). For each idempotent $e$ of $\mathcal{S}$ we have $e\Jc e^*e$ and therefore $e\Lc e^*e$ by Lemma~\ref{Miller&Clifford}~(2). Since the involution ${}^*$ is an anti-automorphism, we also have $e^*\Rc e^*e$ for each idempotent $e$. Swapping the roles of $e$ and $e^*$ we also get $e\Rc ee^*\Lc e^*$. In other words, $$e\Rc ee^*\Lc e^*\Rc e^*e\Lc e$$ holds for each idempotent $e$ of $\mathcal{S}$. By the `only if' part of Lemma~\ref{Miller&Clifford} (1), the fact that the product $e^*e$ belongs to $L_e\cap R_{e^*}$ implies that the $\mathscr{H}$-class $H_{ee^*}=R_e\cap L_{e^*}$ contains an idempotent $g$ and hence, by Green's theorem~\cite[Theorem~2.2.5]{how}, this class is a subgroup of $\langle S,\cdot\rangle$ having $g$ as its identity element. Since $g\Rc e$ we have that $ge=e$. Now take any common multiple $n$ of the exponents of all subgroups of $\mathcal{S}$; then $(ee^*)^n=g$ and hence $(ee^*)^ne=e$. Finally, choose a positive integer $N$ for which $\mathcal{S}$ satisfies the identity $x^N=x^{2N}$. Then $N$ is a common multiple of the exponents of all subgroups of $\mathcal{S}$ and each element of the form $x^N$ is idempotent. Consequently $\mathcal{S}$ satisfies the identity \eqref{identityfor(B)}. \end{proof} It is now easy to see that $\TSL\in\var\mathcal{S}$ if and only if $\mathcal{S}$ is of type (A). Indeed suppose that $\mathcal{S}$ has an idempotent $e$ satisfying $e\mathrel{{>}_{\!\!\!\Jc}} e^*e$ and let $\mathcal{T}=\langle T,\cdot,{}^*\rangle$ be the involutory subsemigroup of $\mathcal{S}$ generated by $e$; then $e\ne e^*$ and neither of the idempotents $e$ and $e^*$ is contained in the ideal $I:=Tee^*T\cup Te^*eT$. 
It follows that $\TSL$ is isomorphic to the Rees quotient $\mathcal{T}/I$. In other words, $\TSL$ is a homomorphic image of an involutory subsemigroup of $\mathcal{S}$, that is, $\TSL$ \emph{divides} $\mathcal{S}$ and in particular $\TSL\in \var\mathcal{S}$. Conversely, if $\mathcal{S}$ is of type (B) then $\mathcal{S}$ satisfies the identity (\ref{identityfor(B)}) for some positive integer $N$. Obviously, $\TSL$ does not satisfy this identity and, hence, it does not belong to $\var\mathcal{S}$. Altogether we have proved: \begin{Cor}\label{TSLinvarS} For a finite involutory semigroup $\mathcal{S}$, the following are equivalent: \begin{enumerate} \item $\mathcal{S}$ is of type (A). \item $\TSL$ divides $\mathcal{S}$. \item $\TSL\in \var\mathcal{S}$. \end{enumerate} \end{Cor} A finite involutory semigroup $\mathcal{S}$ of type (A) with INFB semigroup reduct $\langle S,\cdot\rangle$ is INFB as an involutory semigroup (Theorem \ref{sufficientINFB}). At the time of writing, the authors were not (yet) aware of an example of an INFB involutory semigroup of type (B). We note that finite involutory semigroups of type (B) can be characterized in various ways; for example, as those in which each regular element admits a Moore--Penrose inverse, and likewise, as those in which each regular ${\Lc}$-class (and/or each regular ${\Rc}$-class) contains a \emph{projection} (that is, an idempotent fixed under the involution) \cite{NP}. Another equivalent condition is that each involutory subsemigroup $\langle g\rangle$ generated by a single idempotent $g$ is completely simple. Moreover, the class of all finite involutory semigroups of type (B) forms a pseudovariety of involutory semigroups, namely the one defined by the pseudoidentity $$x^\omega=(x^\omega(x^\omega)^*)^\omega x^\omega.$$ As usual, $s\mapsto s^\omega$ denotes the unary operation that assigns to each element $s$ of a finite semigroup the unique idempotent in the cyclic subsemigroup generated by $s$. 
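To make the dichotomy concrete, one can check by brute force that $\TSL$ fails the identity \eqref{identityfor(B)} for every $N$. The following sketch (our own illustration, with the three elements of $\TSL$ encoded as strings) verifies this for small exponents:

```python
# The 3-element twisted semilattice TSL: elements e, f, 0 with
# e^2 = e, f^2 = f, ef = fe = 0, and the involution * swapping e and f.
E, F, Z = "e", "f", "0"

def mul(a, b):
    # e*e = e and f*f = f; every other product is 0
    return a if a == b and a != Z else Z

def star(a):
    # the involution swaps e and f, and fixes 0
    return {E: F, F: E, Z: Z}[a]

def power(a, n):
    out = a
    for _ in range(n - 1):
        out = mul(out, a)
    return out

def rhs(x, n):
    # right-hand side of identity (B): (x^N (x^N)^*)^N x^N
    xn = power(x, n)
    return mul(power(mul(xn, star(xn)), n), xn)

# TSL violates identity (B) at x = e for every N: e^N = e, but
# (e^N (e^N)^*)^N e^N = (e f)^N e = 0.
for n in range(1, 8):
    assert power(E, n) == E
    assert rhs(E, n) == Z
print("TSL fails x^N = (x^N (x^N)^*)^N x^N for N = 1..7")
```

This is exactly why, for an involutory semigroup of type (B), $\TSL$ cannot lie in the generated variety.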
Recall that an element $x$ of a \sgp\ $\langle S,\cdot\rangle$ is said to be \emph{regular} if $x=xyx$ for some $y\in S$. A [unary] \sgp\ is \emph{regular} if all of its elements are regular. We shall refine the proof of Proposition \ref{alternative} and show that a \textbf{regular} involutory semigroup $\mathcal{S}$ of type (B) satisfies an identity that guarantees that $\mathcal{S}$ is \textbf{not} INFB, thanks to the following result from~\cite{ADV:2012}. \begin{Prop}[{\mdseries\cite[Proposition~2.9]{ADV:2012}}] \label{NINFB} Let $\mathcal{S}=\langle S,\cdot,{}^*\rangle$ be a \fis. If there exists an involutory word $\iota\!(x)$ in one variable $x$ such that $\mathcal{S}$ satisfies the identity $x=x\iota\!(x)x$, then $\mathcal{S}$ is not \infb. \end{Prop} We get the following consequence: \begin{Cor}\label{regular} A finite regular involutory semigroup $\mathcal{S}$ of type (B) is not inherently nonfinitely based. \end{Cor} \begin{proof} We are going to sharpen the proof of Proposition~\ref{alternative}. Let $\mathcal{S}$ be an involutory semigroup of type (B) (not necessarily regular at this point). Fix an arbitrary regular element $x\in S$ and take an element $y\in S$ such that $x=xyx$. Then $e=xy$ and $f=yx$ are idempotents and $e\Rc x\Lc f$. Since the involution is an anti-automorphism of $\langle S,\cdot\rangle$, we also have $e^*\Lc x^*\Rc f^*$. We have already seen in the proof of Proposition \ref{alternative} that $ee^*\Rc e\Lc e^*e\Rc e^*\Lc ee^*$ and $ff^*\Rc f\Lc f^*f\Rc f^*\Lc ff^*$. All listed relations are graphically represented in Fig.~\ref{fig:D-class} that shows an appropriate fragment of the eggbox picture for the $\mathscr{D}$-class of $x$ and $x^*$. 
\begin{figure}[th] \begin{center} {\large \begin{tabular}{|c|c|c|c|} \hline $f^*f$\rule[-5pt]{0pt}{16pt} & & $f^*$ & $x^*$ \\ \hline \rule[-5pt]{0pt}{16pt}& $e^*e$ & & $e^*$\\ \hline $f$\rule[-5pt]{0pt}{16pt} & \phantom{$f^*f$} & $ff^*$ & \phantom{$f^*f$}\\ \hline $x$\rule[-5pt]{0pt}{16pt} & $e$ & & $ee^*$\\ \hline \end{tabular}} \caption{A fragment of the eggbox picture for the $\mathscr{D}$-class of the elements $x$ and $x^*$}\label{fig:D-class} \end{center} \end{figure} As in the proof of Proposition~\ref{alternative}, the $\mathscr{H}$-class $H_{ee^*}=R_e\cap L_{e^*}$ contains an idempotent $g$ and again this class is a subgroup of $\langle S,\cdot\rangle$ having $g$ as its identity element. Observe that $R_e=R_x$ and $L_{e^*}=L_{x^*}$ whence $H_{ee^*}=R_x\cap L_{x^*}$. Also observe that $gx=x$ since $g$ is an idempotent and $g\Rc x$. Similarly, $ff^*\in R_f\cap L_{f^*}$ implies that $H_{f^*f}=R_{f^*}\cap L_f$ contains an idempotent. Moreover, $R_{f^*}=R_{x^*}$ and $L_f=L_x$. Now, by the `if' part of Lemma~\ref{Miller&Clifford} (1), the fact that $L_x\cap R_{x^*}=H_{f^*f}$ contains an idempotent implies that the product $xx^*$ lies in $R_x\cap L_{x^*}=H_{ee^*}$. Let $n$ be the least common multiple of the exponents of the subgroups of $\langle S,\cdot\rangle$. We then have $(xx^*)^n=g$ whence $x=gx=(xx^*)^nx$. Consequently, if $\mathcal{S}$ is regular then $\mathcal{S}$ satisfies the identity $x=(xx^*)^nx$ and hence $x=x\iota\!(x)x$ for $\iota\!(x)=x^*(xx^*)^{n-1}$, which by Proposition \ref{NINFB} implies that $\mathcal{S}$ is not INFB. \end{proof} We can now easily deduce various characterizations of regular INFB \iss. \renewcommand{\labelenumi}{\emph{(\roman{enumi})}} \begin{Cor} \label{characterization} Let $\mathcal{S}=\langle S,\cdot,{}^*\rangle$ be a finite regular \is.
Then the following are equivalent: \begin{enumerate} \item $\mathcal{S}$ is \infb; \item the reduct $\langle S,\cdot\rangle$ is \infb\ and the $3$-element twisted semilattice $\TSL$ belongs to $\var\mathcal{S}$; \item the reduct $\langle S,\cdot\rangle$ is \infb\ and there exists an idempotent $e$ satisfying $e\mathrel{{>}_{\!\!\!\Jc}} e^*e$; \item all Zimin words are involutory isoterms for $\mathcal{S}$. \end{enumerate} \end{Cor} \begin{proof} (i) $\to$ (ii) follows from Lemma~\ref{easy}, Proposition \ref{alternative} and Corollary~\ref{regular}. (ii) $\to$ (iv) has been verified in the course of the proof of Theorem~\ref{twisted}. (iv) $\to$ (i) is Theorem~\ref{Theorem 2.2}. (ii) $\leftrightarrow$ (iii) follows from Proposition \ref{alternative} and Corollary \ref{TSLinvarS}. \end{proof} We observe that the condition (iii) in Corollary~\ref{characterization} is algorithmically verifiable. Indeed, given a finite regular \is\ $\mathcal{S}$, we can check whether or not its reduct is INFB by using Sapir's algorithm from~\cite{Sapir:1987}, and the condition on the idempotents is obviously decidable. \begin{Cor} There exists an algorithm which decides, when given a finite regular \is\ $\mathcal{S}$, whether or not $\mathcal{S}$ is \infb. \end{Cor} \paragraph*{\textbf{Acknowledgement}.} The authors are indebted to an anonymous referee for several inspiring remarks that have led to an improved presentation of Section 4.
\section{Introduction} In July 2012, both the ATLAS and CMS collaborations at the LHC announced the discovery of a new boson with mass around 125 GeV \cite{1207atlas,1207cms}. The combined data at the LHC indicate that its properties are quite compatible with those of the Higgs boson in the Standard Model (SM) \cite{13atlas,13cms}. However, whether the new boson is the Higgs boson predicted by the SM or belongs to some new physics model still needs to be confirmed by the LHC experiments at high luminosity. So far, various new physics models, such as the low energy supersymmetric models, can give reasonable interpretations of the properties of this SM-like Higgs boson around 125 GeV \cite{125-our,125-other-MSSM,natural-susy-125,NMSSM-125,cmssm-125}. Moreover, the discovery of the SM-like Higgs boson is not the end of the story. The next challenge for the experiments is to precisely measure its properties, including all the possible production and decay channels. As a rare production channel, Higgs pair production can be used to effectively test the Higgs self-couplings \cite{self-coupling}, which play an essential role in reconstructing the Higgs potential. Higgs pair production in the SM at the LHC proceeds through the gluon fusion process $gg\to hh$. At the leading order, the main contributions come from the heavy quark loops, through the box diagrams and the triangle diagrams involving the Higgs self-coupling. Due to the weak Yukawa couplings and Higgs self-coupling, as well as the cancellations between these two types of diagrams, the cross section in the SM is too small to be detected with the current integrated luminosity. Even at $\sqrt{s} =$ 14 TeV with high luminosity, it is still difficult to detect this process.
The discovery potential of the LHC for this production process has been investigated in \cite{hh-detect,Barger:2013jfa,hh-other}; the most promising channel is $gg\to hh\to b \bar b \gamma\gamma$, while other signal channels such as $h h \to b \bar{b} \tau^+ \tau^-$ are swamped by the reducible backgrounds \cite{Barger:2013jfa}. Compared with the prediction in the SM, the rate of SM-like Higgs pair production in new physics models can be enhanced significantly due to relatively large additional couplings of the SM-like Higgs boson with the newly introduced particles, such as squarks in supersymmetric models \cite{hh-susy} or other colored particles \cite{color-scalar}. Therefore, Higgs pair production can be a sensitive probe of new physics beyond the SM. In this paper we investigate the effects of color-octet scalars in the Manohar-Wise (MW) model \cite{MW-Model} on Higgs pair production at the LHC. The Manohar-Wise model is a special type of two-Higgs-doublet model that predicts a family of color-octet scalars, which can have sizable couplings with the Higgs boson, since the sign of the Higgs coupling to gluons is usually opposite to the SM prediction \cite{cao-octet}. Given also the different amplitude structures of single and pair Higgs production, the cross section of Higgs pair production in the Manohar-Wise model may deviate significantly from the SM prediction. This paper is structured as follows. In Sec.~II we briefly introduce the Manohar-Wise model. In Sec.~III we present the numerical results and discussions of Higgs pair production in the Manohar-Wise model. Finally, the conclusions are presented in Sec.~IV. \section{Model with color-octet scalars --- the Manohar-Wise model} In the SM, the scalar sector contains only one Higgs scalar doublet, which is responsible for the electroweak symmetry breaking.
Extensions of the scalar sector are restricted by the principle of minimal flavor violation (MFV). Motivated by this principle, the Manohar-Wise model extends the SM by adding one family of color-octet scalars with $SU(3)_C \times SU(2)_L \times U(1)_Y$ quantum numbers $(8,2)_{1/2}$ \cite{MW-Model}, \begin{eqnarray} S^{A}=\left( \begin{array}{c} S^A_+ \\ S^A_0 \end{array} \right), \end{eqnarray} where $A=1,...,8$ denotes the color index, $S^A_+$ and $S^A_0$ are the electrically charged and neutral color-octet scalar fields respectively, and \begin{eqnarray} S^A_0=\frac{1}{\sqrt{2}} (S^A_R+ i S^A_I) \end{eqnarray} with $S^A_{R, I}$ denoting the neutral CP-even and CP-odd color-octet scalar fields. In accordance with MFV, the Yukawa couplings to the SM fermions are parameterized as \begin{eqnarray} {\cal L} = -\eta_{U} Y_{ij}^U \bar{u}_R^i T^A S^A Q_L^j - \eta_D Y_{ij}^D \bar{d}_R^i T^A (S^A)^\dagger Q_L^j + h.c., \end{eqnarray} where $Y^{U,D}_{ij}$ are the SM Yukawa matrices, $i,j$ denote flavor indices, and $\eta_{U,D}$ are flavor universal constants. The most general renormalizable scalar potential is given by \begin{eqnarray} V&=& \frac{\lambda}{4} \big (H^{\dagger i} H_i - \frac{v^2}{2}\big )^2 + 2m_S^2 \text{Tr} (S^{\dagger i}S_i) + \lambda_1 H^{\dagger i}H_i \text{Tr} (S^{\dagger j}S_j) + \lambda_2 H^{\dagger i}H_j \text{Tr} (S^{\dagger j}S_i) \nonumber\\ && + \big [ \lambda_3 H^{\dagger i} H^{\dagger j} \text{Tr}(S_iS_j) + \lambda_4 H^{\dagger i} \text{Tr}(S^{\dagger j}S_j S_i) + \lambda_5 H^{\dagger i} \text{Tr}(S^{\dagger j}S_i S_j) + h.c.
\big ] \nonumber\\ && + \lambda_6 \text{Tr} (S^{\dagger i}S_i S^{\dagger j} S_j) + \lambda_7 \text{Tr}(S^{\dagger i}S_j S^{\dagger j} S_i) + \lambda_8 \text{Tr} (S^{\dagger i}S_i)\text{Tr}(S^{\dagger j}S_j) \nonumber\\ && + \lambda_9 \text{Tr} (S^{\dagger i} S_j) \text{Tr}(S^{\dagger j}S_i) + \lambda_{10} \text{Tr} (S_i S_j) \text{Tr}(S^{\dagger i}S^{\dagger j}) +\lambda_{11} \text{Tr}(S_iS_j S^{\dagger j} S^{\dagger i}), \end{eqnarray} where $H$ is the usual $(1,2)_{1/2}$ Higgs doublet, the traces are over color indices with $S= S^A T^A$, $i,j$ denote $SU(2)_L$ indices, and all $\lambda_i$ ($i=1,..., 11$) except $\lambda_4$ and $\lambda_5$ are real parameters. Note that the convention $\lambda_3 >0$ is allowed by an appropriate phase rotation of the $S$ fields. After the electroweak symmetry breaking, the mass spectrum of the scalars depends on the parameters in the scalar potential, and at tree level is given by \begin{eqnarray} m_\pm^2 &=& m_S^2 + \lambda_1 \frac{v^2}{4} \equiv m_S^2 + \lambda_\pm \frac{v^2}{4}, \nonumber \\ m_R^2 &=& m_S^2 + (\lambda_1 + \lambda_2 + 2\lambda_3) \frac{v^2}{4} \equiv m_S^2 + 2\lambda_R \frac{v^2}{4}, \nonumber\\ m_I^2 &=& m_S^2 +(\lambda_1 + \lambda_2 - 2\lambda_3) \frac{v^2}{4} \equiv m_S^2 + 2\lambda_I \frac{v^2}{4}. \label{mass} \end{eqnarray} The interactions of these scalars with the Higgs boson (labeled $h$, denoting the SM Higgs boson) are as follows \cite{hgg-NLO}, \begin{eqnarray} g_{hS^{A\ast}_i S^B_i} = \frac{v}{2} \lambda_i \delta^{AB},~~~~~~~ g_{hhS^{A\ast}_i S^B_i} = \frac{1}{2}\lambda_i \delta^{AB} \label{hss} \end{eqnarray} with $i=\pm, R, I$, and we take $v=$ 246 GeV. \section{Calculations and numerical results} \begin{figure}[thbp] \includegraphics[width=15cm]{fig1.eps} \vspace{-0.5cm} \caption{Feynman diagrams for the pair production of the Higgs boson via gluon fusion in the Manohar-Wise model, with $S^A_i$ ($i=\pm, R, I$) denoting the color-octet scalars in the model.
The diagrams with initial gluons or final Higgs bosons interchanged are not shown here. Due to the large Yukawa couplings, we only consider the contributions from the third generation quarks.} \label{fig1} \end{figure} In the Manohar-Wise model, Higgs pair production at the LHC mainly proceeds through gluon fusion, as shown in Fig.\ref{fig1}. Compared with the SM, the Manohar-Wise model predicts additional color-octet scalars $S^A_i (i=\pm, R, I)$, which couple to the Higgs boson $h$. Therefore, the pair production of $h$ in the Manohar-Wise model receives additional contributions from the loops of the color-octet scalars $S^A_i (i=\pm, R, I)$, besides the contributions from the heavy quark loops present in the SM, as shown in Fig.\ref{fig1}. Since the additional contributions arise at the same perturbative order as those in the SM, the cross section of Higgs pair production in the Manohar-Wise model may significantly deviate from the SM prediction. In the numerical calculations we take $m_t=173$ GeV, $m_b=4.2$ GeV, $m_W=80.0$ GeV, $m_Z=91.0$ GeV, and $\alpha=1/128$ \cite{PDG}, and fix the collision energy of the LHC and the mass of the Higgs boson to be 14 TeV and 125.6 GeV respectively. We use CT10 \cite{CT10} for the parton distribution functions, with the factorization scale $\mu_F$ and the renormalization scale $\mu_R$ chosen to be $2 m_h$. We check that the cross section of Higgs pair production in the SM is 18.7 fb, which is consistent with the result in \cite{sm-lo}. In this work, following our previous work \cite{cao-octet}, we scan over the parameter space of the Manohar-Wise model considering the following theoretical and experimental constraints: (i) the constraints from unitarity; (ii) the constraints from electroweak precision data (EWPD); (iii) the constraints from the LHC searches for exotic scalars through dijet-pair events.
Based on 4.6 fb$^{-1}$ of data for dijet-pair events collected by the ATLAS collaboration at the 7 TeV LHC, the lower bound on the scalar mass has been set at 287 GeV at 95\% confidence level \cite{dijet}. The lower bound from the four-top channel is much higher, but it is based on certain assumptions; e.g., the bound is 500 GeV (630 GeV) for a neutral scalar decaying into a top pair with a branching ratio of 50\% (100\%) \cite{four-top}. Since the latter constraint can be evaded by adjusting $\eta_U$, we only require the color-octet scalars to be heavier than 300 GeV. We note that in future runs of the LHC the lower bound from dijet-pair events may become higher. According to \cite{GoncalvesNetto:2012nt}, for a color-octet scalar of 350 GeV (500 GeV), the pair production rate can reach 84.6 pb (11.4 pb) at the 14 TeV LHC. Under the above constraints, we perform fits of this model to the latest Higgs data, using the ATLAS data and the CMS data respectively. The details of the fits can be found in our previous works \cite{cao-octet, LH-hfit}. From the fits we select the 1$\sigma$ (68\% confidence level or $\chi^2_{min} \leq \chi^2 \leq \chi^2_{min} + 2.3$) and 2$\sigma$ (95\% confidence level or $\chi^2_{min} + 2.3 < \chi^2 \leq \chi^2_{min} + 6.18$) samples, which correspond to $5.63 \leq \chi^2 \leq 7.93$ and $7.93 < \chi^2 \leq 11.81$ for the fit to the ATLAS data, and $2.47 \leq \chi^2 \leq 4.77$ and $4.77 < \chi^2 \leq 8.65$ for the fit to the CMS data. With these samples we then calculate the cross section of Higgs pair production in the Manohar-Wise model and define $R$ as the ratio normalized to its SM value, \begin{eqnarray} R\equiv \sigma_{MW}(gg\to hh)/\sigma_{SM}(gg\to hh) \end{eqnarray} \begin{figure} \includegraphics[width=15cm]{fig2.ps} \vspace{-0.5cm} \caption{The scatter plots of the surviving samples, showing the normalized ratio $R$ as a function of $m_S$.
The red circles '$\circ$' denote the 1$\sigma$ surviving samples, and the sky blue stars '$\star$' denote the 2$\sigma$ samples.} \label{fig2} \end{figure} \begin{figure} \includegraphics[width=15cm]{fig3.ps} \vspace{-0.5cm} \caption{Same as Fig.\ref{fig2}, but showing the ratio of the cross section in the Manohar-Wise model to that in the SM (i.e. $R$) as a function of $\lambda_I/m_I$.} \label{fig3} \end{figure} In Fig.\ref{fig2} we project the $1\sigma$ and $2\sigma$ samples on the plane of the normalized ratio $R$ versus $m_S$. The left panel displays the surviving samples from the fit to the ATLAS Higgs data, and the right panel those from the fit to the CMS data. In the figure, the red circles denote the 1$\sigma$ surviving samples, and the sky blue stars denote the 2$\sigma$ samples. From this figure we can clearly see that the cross section of Higgs pair production in the Manohar-Wise model can significantly deviate from the SM prediction, and the normalized production rate $R$ can reach up to $10^3$. The figure also shows that, for $m_S\gtrsim$ 1 TeV, the ratio $R$ is relatively small, usually smaller than 10, which reflects the decoupling effect. We now give an analytical explanation of the deviation of the cross section in the Manohar-Wise model shown in Fig.\ref{fig2}. The diagrams in Fig.\ref{fig1} can be divided into five parts: (1)+(2), (3)+(4), (5), (6)+(7) and (8)+(9)+(10), and each part is UV finite. We numerically check their relative sizes and find that the contributions to the cross section from the diagrams (3)+(4) and (5) are quite large. This is because the amplitude of these diagrams can be written as \begin{eqnarray} \mathcal{M} &\sim & c_1\frac{g^2_{hS^{A\ast}_\pm S^A_\pm}}{m^2_\pm} + c_2\frac{g^2_{hS^{A\ast}_R S^A_R}}{m^2_R} + c_3\frac{g^2_{hS^{A\ast}_I S^A_I}}{m^2_I} \label{eq1} \end{eqnarray} where $c_i$ ($i=1, 2, 3$) are ${\cal{O}}(1)$ coefficients.
Considering the couplings shown in Eq.(\ref{hss}), we rewrite Eq.(\ref{eq1}) as \begin{eqnarray} \mathcal{M} &\sim & (c_1\frac{\lambda^2_\pm}{m^2_\pm} + c_2\frac{\lambda^2_R}{m^2_R} + c_3\frac{\lambda^2_I}{m^2_I})\frac{v^2}{4} \label{eq2} \end{eqnarray} The values of $\lambda_i$ ($i=\pm, R, I$) are usually required to be large by the Higgs data \cite{cao-octet}. In contrast, the amplitudes of the other diagrams, such as (6)-(10), are not enhanced by $\lambda^2_i$ and are roughly proportional to $(C_{hgg}/SM)_i$, whose sum cannot deviate much from the SM value, since $|C_{hgg}/SM|\simeq 1$ according to the current Higgs data (Fig. 2 in \cite{cao-octet}). Besides, we find that there is a strong cancellation between the diagrams (3)+(4) and (5). In our calculation, we find that the term involving $\lambda^2_I/m^2_I$ is usually much larger than the $\lambda^2_{\pm}/m^2_{\pm}$ and $\lambda^2_R/m^2_R$ terms in Eq.(\ref{eq2}). The reason can be understood as follows. Firstly, the surviving samples prefer negative $\lambda_I$, and $|\lambda_I|$ is usually much larger than $\lambda_\pm$ and $\lambda_R$ (see Figure 1 in \cite{cao-octet}). Secondly, Eq.(\ref{mass}) shows that, for fixed $m_S$ and negative $\lambda_i$ ($i=\pm, R, I$), the larger $|\lambda_i|$, the smaller $m_i$. Therefore, the contribution of the third term dominates in Eq.(\ref{eq2}), that is, the contributions from the loops mediated by the scalar $S^A_I$ are much larger than those mediated by the scalars $S^A_\pm$ and $S^A_R$. As an illustration, in Fig.\ref{fig3} we show the ratio $R$ versus $\lambda_I/m_I$. The figure clearly shows that a larger $|\lambda_I/m_I|$ usually predicts a larger value of the ratio $R$. We checked that, for samples with $R\gtrsim 100$, the CP-odd octet scalars are not very light ($300\lesssim m_I \lesssim 600$ GeV), but the coupling $\lambda_I$ must be very large ($-25 \lesssim \lambda_I \lesssim -8$), which can also be understood from Figure 1 in \cite{cao-octet}.
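To illustrate numerically why the $S^A_I$ term tends to dominate Eq.(\ref{eq2}), the following sketch evaluates the three terms for one illustrative parameter point. The coupling and mass values, and the choice $c_1=c_2=c_3=1$, are assumptions of this sketch (chosen in the ballpark of the surviving samples described above), not fit results:

```python
# Illustrative comparison of the three terms in the rewritten amplitude:
# lambda_i^2 / m_i^2 * v^2 / 4, with all c_i set to 1 (an assumption).
v = 246.0  # GeV

def term(lam, m):
    # dimensionless contribution of one scalar species to the amplitude
    return lam**2 / m**2 * v**2 / 4.0

t_pm = term(2.0, 400.0)     # assumed lambda_pm = 2,   m_pm = 400 GeV
t_R  = term(3.0, 450.0)     # assumed lambda_R  = 3,   m_R  = 450 GeV
t_I  = term(-15.0, 450.0)   # assumed lambda_I  = -15, m_I  = 450 GeV

# With |lambda_I| much larger than lambda_pm and lambda_R, the S_I^A term
# exceeds the other two by more than an order of magnitude.
print(t_pm, t_R, t_I)
```

With these inputs the $S^A_I$ term is roughly 16 times the sum of the other two, consistent with the dominance argued above.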
These large-$R$ samples also satisfy the perturbativity requirement, which suggests $|\lambda_i| \lesssim 8\pi$ ($i=\pm, R, I$) \cite{perturbation}. Fig.\ref{fig3} also shows that for some special samples with $|\lambda_I/m_I|\sim 0$ in the left panel, the cross section in the Manohar-Wise model can still be enhanced up to 10 times the SM prediction. For these samples, $|\lambda_R/m_R|$ is near 0.02 and the contributions from Eq.(\ref{eq2}) can still be large, comparable to those of the samples with $|\lambda_I/m_I|\sim 0.02$. That can be understood from Figure 3 in \cite{cao-octet}. \begin{figure} \includegraphics[width=15cm]{fig4.ps} \vspace{-0.5cm} \caption{Same as Fig.\ref{fig2}, but showing the normalized ratio $R$ as a function of $S/\sqrt{B}$, which is calculated at an integrated luminosity of 3000 fb$^{-1}$, also marking the corresponding luminosity for $S/\sqrt{B}=5$.} \label{fig4} \end{figure} Finally, we investigate the discovery potential of Higgs pair production at the LHC14. In Fig.\ref{fig4}, we project the samples on the plane of the significance $S/\sqrt{B}$ versus the normalized ratio $R$. In calculating $S/\sqrt{B}$, we utilize the Monte Carlo (MC) simulation result of $gg\to hh \to b\bar{b}\gamma\gamma$ in the SM \cite{Yao:2013ika}. We assume that in the Manohar-Wise model the $\sigma\times Br$ and acceptance of the background, as well as the acceptance of the signal, are the same as in the SM, while the $\sigma\times Br$ of the signal is calculated by us, which can be expressed as \begin{eqnarray} (\sigma\cdot Br)_{MW} &=& \sigma_{SM} \times R \times Br(h\to b\bar{b}) \times Br(h\to \gamma\gamma) \nonumber \\ &\simeq& (\sigma\cdot Br)_{SM} \times R \times (C_{h\gamma\gamma}/SM)^2, \end{eqnarray} thus $S/\sqrt{B}$ in the Manohar-Wise model is proportional to $R\times (C_{h\gamma\gamma}/SM)^2$. Combined with Fig.2 and Fig.3 in \cite{cao-octet}, this explains why there are mainly three linear relations in each panel of Fig.
\ref{fig4} in this paper. Since $S/\sqrt{B}$ is also proportional to $\sqrt{\emph{L}}$, in this figure we also mark the lines of $S/\sqrt{B}=5$ for other values of the luminosity; samples above these lines can be discovered with the corresponding luminosity. For example, with an integrated luminosity of 100 fb$^{-1}$ at the 14 TeV LHC, when the cross section of Higgs pair production in the Manohar-Wise model is enhanced to 10 times the SM prediction, i.e. $R=10$, this process may be detected. Owing to the highly enhanced Higgs pair production rate, many samples in the Manohar-Wise model can be tested very soon after the LHC resumes running. \section{Summary and Conclusion} Motivated by the principle of minimal flavor violation, the Manohar-Wise model introduces one family of color-octet scalars, which can have large couplings with the Higgs boson. Since the properties of the SM-like Higgs boson around 125 GeV need to be precisely scrutinized, in this work we studied Higgs pair production considering the effects of the color-octet scalars. Following our previous work \cite{cao-octet}, we first scanned over the parameter space of the Manohar-Wise model considering the theoretical and experimental constraints, and performed fits of the model to the latest Higgs data using the ATLAS and CMS data separately. Then we calculated the Higgs pair production rate and investigated its discovery potential at the LHC14. Based on our calculations and analysis, we draw the following conclusions: \begin{itemize} \item Under the current constraints, including the Higgs data after Run I of the LHC, the cross section of Higgs pair production in the Manohar-Wise model can be enhanced by up to $10^3$ times the SM prediction. \item Moreover, the sizable enhancement comes from the contributions of the CP-odd color-octet scalar $S^A_I$. For a lighter scalar $S^A_I$ and larger values of $|\lambda_I|$, the cross section of Higgs pair production can be much larger.
\item After the LHC resumes running at 14 TeV, most of the parameter space of the Manohar-Wise model can be tested. For an integrated luminosity of 100 fb$^{-1}$ at the LHC14, when the normalized ratio $R=10$, the process of Higgs pair production can be detected. \end{itemize} \section*{Acknowledgement} We thank Prof. Junjie Cao for helpful discussions. This work was supported in part by the National Natural Science Foundation of China (NNSFC) under grant Nos. 11247268 and 11305050, and by the Specialized Research Fund for the Doctoral Program of Higher Education under grant No. 20124104120001.
\section{Introduction} \IEEEPARstart{T}{he} investigation of the complexity of real-world signals has witnessed a boom over the last decades. Now recognized to be as important as the properties in the time and frequency domains, the complexity of a data set is a unique feature that can be utilized to understand a signal generating mechanism via nonlinear analytical tools. Studies of complexity have covered a wide spectrum, from the fault diagnosis of rotating machinery \cite{Ref48,Ref51,Ref56} through to the early detection of disease in humans \cite{Ref8,Ref37,Ref68,Ref81}. Indeed, biological signals exhibit high degrees of irregularity and complex dynamical behaviour \cite{Ref80}. Such nonlinear dynamics result from the interactions between the human body (organism) and its peripheral environment, and manifest themselves as continuous fluctuations in the time domain \cite{Ref79}. Complexity Loss Theory (CLT) postulates a relationship between the complexity of physiological signals and the health of an individual, whereby a higher degree of complexity indicates a healthier condition \cite{Ref78}. However, recent developments have shown that pathology may also exhibit an increase in complexity based on structure; that is, a decrease in self-correlated complexity may also be observed in a healthy body \cite{Ref1}. Although the definition of structural complexity is not unique \cite{Ref27}, there are several commonly used methods to quantify the degree of dynamics, among which entropy-based methodologies are particularly popular. Compared to other methods for estimating the complexity of nonlinear systems, such as fractal dimension \cite{Ref80} and recurrence plots \cite{Ref79}, entropy analysis holds the advantages of simplicity and noise robustness \cite{Ref77}.
The features of the loss of complexity (LOC) manifest themselves through, for example, an increase in randomness, less regularity, a breakdown of long-term correlations, multiscale variability, and time irreversibility \cite{Ref27}. To this end, a large number of entropy algorithms have been proposed to quantify the different features of complexity, or more precisely, the degree of complexity under different definitions. The two early entropy algorithms that have been widely used are Approximate Entropy (ApEn) \cite{Ref84} and Sample Entropy (SampEn) \cite{Ref22}, proposed in 1991 and 2000, respectively. As a modification of ApEn, SampEn reduces the bias experienced by ApEn by removing self-matches, and at the same time depends less on the data length, giving relatively higher consistency \cite{Ref22}. Both ApEn and SampEn were developed to quantify the randomness and irregularity of a system; generally speaking, the lower the value of SampEn, the less complex the system. However, truly complex signals exhibit varying structures across multiple time scales, and long-range correlations cannot be observed by single-scale Sample Entropy analysis. To this end, Costa \textit{et al}. introduced a `coarse-graining' procedure into the Sample Entropy methodology to unveil the structural complexity hidden at higher scales, referred to as Multiscale Sample Entropy (MSE) \cite{Ref23}. This, in turn, further spurred the development of MSE algorithms, including Composite Multiscale Sample Entropy \cite{Ref62} and Refined Composite Multiscale Sample Entropy \cite{Ref64}. However, due to the `coarse-graining' procedure, the requirement of a long data length becomes even more problematic and is hard to satisfy in most practical situations. In 2011, Multivariate Multiscale Sample Entropy (MMSE) was introduced, which successfully combines data from multiple channels to estimate the dynamics of a system more accurately and with a shorter data length \cite{Ref24}.
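The `coarse-graining' procedure referred to above replaces the signal by averages over non-overlapping windows of length $\tau$. A minimal NumPy sketch (the function name is our choice, not from \cite{Ref23}):

```python
import numpy as np

def coarse_grain(x, tau):
    """Average non-overlapping windows of length tau; at scale tau the
    series of length N is reduced to floor(N / tau) points."""
    x = np.asarray(x, dtype=float)
    n_t = len(x) // tau                      # number of coarse-grained points
    return x[:n_t * tau].reshape(n_t, tau).mean(axis=1)
```

For instance, `coarse_grain([1, 2, 3, 4, 5, 6], 2)` yields `[1.5, 3.5, 5.5]`; at $\tau = 1$ the original series is returned unchanged.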
The key improvement of MMSE is the formation of a composite delay vector, which combines and reconstructs data segments from multiple channels, whereby the inner correlations among the diverse signals are preserved \cite{Ref24}. The existing multivariate multiscale entropies to date include: \begin{itemize} \item Multivariate Multiscale Sample Entropy (MMSE) \cite{Ref24}, a method which performs joint multivariate analysis of physiological signals recorded over multiple channels. \item Multivariate Multiscale Permutation Entropy (MMPE) \cite{Ref37}, an extension of standard Permutation Entropy (PE) \cite{Ref36} which inherits the desirable properties of PE, such as fast computation and simple implementation. \item Multivariate Multiscale Fuzzy Entropy (MMFE) \cite{Ref58}, which combines composite delay vectors and Fuzzy Entropy \cite{Ref90}, and exhibits more stable and smoother estimates than MMSE. \item Variational Embedding Multiscale Diversity Entropy (veMDE) \cite{Ref96}, a method developed on the basis of Diversity Entropy \cite{Ref48} that combines angular distance and relative probability, and exhibits a low computational load with similar performance to MMPE. \end{itemize} Despite these successes, the inherent shortcomings of amplitude-based entropy calculation still remain a major obstacle towards more widespread use. Other issues with current multivariate entropy methods include: \begin{enumerate} \item The rule of thumb is that the required data length is between $10^m$ and $30^m$ samples, where $m$ refers to the embedding dimension \cite{Ref71}. Hence, the choice of the embedding dimension is limited by the available sample points. \item The `coarse-graining' process further emphasizes the drawback of a limited data size, causing inaccurate or undefined estimates in high-scale analysis. \item Amplitude-based distances are sensitive to outliers such as noise and artifacts. \item Poor quality of any single channel has a large impact on the multivariate performance.
\item Excessive computational load is required when implementing multi-channel analyses. \end{enumerate} Recently, Wang \textit{et al}. \cite{Ref96} introduced a new way to combine data from multiple channels within a single entropy estimate, termed Variational Embedding Multiscale Diversity Entropy. Here, inspired by \cite{Ref96}, a new multivariate entropy method based on Sample Entropy is proposed, named Variational Embedding Multiscale Sample Entropy (veMSE). The new method offers the following advantages: \begin{enumerate} \item Complexity estimates at higher embedding dimensions are well defined, even for a limited data size. \item The requirement on the number of sample points is lower than for current Sample Entropy-based methods. \item Strong noise robustness is exhibited across the scales. \item The overall performance of the multivariate estimate does not depend on the quality of any single channel within a dataset. \item Less computational time is needed, owing to a straightforward and efficient implementation. \end{enumerate} The remainder of the paper is organized as follows: In Section \uppercase\expandafter{\romannumeral2}, the new veMSE algorithm is outlined. Section \uppercase\expandafter{\romannumeral3} demonstrates the operation of veMSE on simulated signals, to give an initial insight into the choice of parameters. Based on the parameter settings suggested in Section \uppercase\expandafter{\romannumeral3}, Section \uppercase\expandafter{\romannumeral4} then considers and discusses the properties of veMSE, including noise robustness, directionality, and computational efficiency. Next, veMSE is applied to real-world signals, namely wind dynamics and heart rate variability, in Section \uppercase\expandafter{\romannumeral5}, and compared with univariate MSE and MMSE. Finally, conclusions are given.
\section{Variational Embedding Multiscale Sample Entropy} \begin{algorithm}[htb] \caption{Variational Embedding Multiscale Sample Entropy} \label{veMSE} \begin{algorithmic} \Statex Assume that there are $P$ channels measured from a system, where the signal recorded from the $c^{th}$ channel is denoted by $x_c(i)$ and is of length $N$, with $1 \leq c \leq P, 1 \leq i \leq N$. The parameters involved in the veMSE algorithm are the tolerance quotient $(r)$, embedding dimension $(m)$, scale factor $(\tau)$ and time lag $(L)$. The detailed steps of veMSE are given below. \begin{enumerate}[ 1)] \item The coarse-graining procedure is first applied to the original data in all channels. The scaled time series are calculated as \begin{equation} y^{(\tau)}(j) = \frac{1}{\tau} \sum_{i=(j-1)\tau+1 }^{j\tau}x(i),\quad 1 \leq j \leq N_t,\;N_t = \frac{N}{\tau}. \end{equation} \item For each channel, the embedding dimension is set as a variable. The dimension for the $c^{th}$ channel is calculated as $m(c) = m+c-1$, as listed in Table \ref{Tab:varied_m}. Therefore, combined with the process of time delay, the embedding delay vector of the data $y^{(\tau)}(i),\; (1 \leq i \leq N_t)$ for channel $c$, designated as the template $Y_c^{(\tau)}(i)$, is calculated as \begin{equation} Y_c^{(\tau)}(i) = \lbrack y^{(\tau)}(i), y^{(\tau)}(i+L)\; ,\dots, \; y^{(\tau)}(i+n_c)\rbrack,\; n_c = (m(c)-1)L. \end{equation} \item Compute the Chebyshev distance between the templates $Y_c^{(\tau)}$, defined through the amplitudes of the embedding vectors as \begin{equation} \begin{split} d = \max_{k} \big\vert y^{(\tau)}(i+kL) - y^{(\tau)}(j+kL) \big\vert, \\ \text{where} \; 0 \leq k \leq m(c)-1,\; 1 \leq i, j \leq N_t - n_c,\; i \neq j. \end{split} \end{equation} \item For each channel, the number of templates, $Y_c^{(\tau)}(j)$, within the tolerance level $r$ of $Y_c^{(\tau)}(i)$, is recorded as $B_c(i)$.
In other words, $B_c(i)$ is the number of template matches, that is, the number of similar patterns in the dataset, where the matching boundary is defined by the tolerance, $r$. Therefore, the local probability of the occurrence of a template match for channel $c$ is \begin{equation} R_c(i) = \frac{B_c(i)}{(N_t - n_c-1)} \end{equation} \item Then, the global probability of the occurrence of a template match for channel $c$ is calculated as \begin{equation} \Phi (c) = \frac{1}{(N_t - n_c)}\sum_{i=1}^{N_t - n_c}{R_c(i)} \end{equation} \item Compute the sum of the global probabilities over all the channels as \begin{equation} \Phi(m) = \sum_{c=1}^P{\Phi(c)} \end{equation} Recall that, unlike in the MMSE algorithm (see the Appendix), the embedding dimension, $m(c)$, varies with the index of the channel, $c$. \item Increase the embedding dimension to $(m(c)+1)$. Hence, $n_c$ is adjusted to $m(c)\times L$, and Steps 2-6 are repeated to obtain the global probability at the increased dimension, $\Phi(m+1)$. \item The Variational Embedding Multiscale Sample Entropy is finally obtained as \begin{equation} veMSE = -\ln\frac{\Phi(m+1)}{\Phi(m)}. \end{equation} \end{enumerate} \end{algorithmic} \end{algorithm} The steps of the proposed veMSE are given in Algorithm \ref{veMSE}. The key improvement of veMSE is that it allows for a different embedding dimension for each channel of a multichannel signal, as illustrated in Table \ref{Tab:varied_m}, while simultaneously maintaining the information within each channel. Compared to MMSE (see Algorithm \ref{MMSE} in the Appendix), although different embedding values can also be assigned to each channel by MMSE, veMSE focuses more on the information between the data channels. Indeed, the composite delay vector in MMSE successfully combines the embedding vectors of multichannel signals.
However, there is a discrepancy, and hence a bias, between the similarity among the recombined composite delay vectors and that among the embedding vectors of the original data. The proposed veMSE estimates the complexity information of signals in multiple channels without influencing the correlation within each individual signal. Secondly, owing to the varying embedding dimensions, veMSE assigns a weighted contribution to the multiple channels. That is, the probability distribution of each channel is diverse, which serves as an amplification of the chaotic features in the measured signals of multichannel systems. Thirdly, since the probability of occurrence of similar patterns is processed multiple times, once per channel, and the summation is performed before the logarithm operation in the last step, veMSE is theoretically able to unveil the complexity properties at a higher embedding dimension for the same data length, in comparison with other algorithms based on Sample Entropy. \begin{table} [htbp] \small \caption{Relation between the embedding dimension ($m$) and the index of the channel ($c$)} \label{Tab:varied_m} \begin{center} \begin{tabular}{|p{5cm}<{\centering}||p{1cm}<{\centering}|p{1cm}<{\centering}|c|p{1.5cm}<{\centering}|} \hline Index of channel $(c)$ & 1 & 2 & $\dots$ & $c$\\ \hline Embedding dimension $(m(c))$ & $m$ & $m+1$ & $\dots$ & $m+c-1$\\ \hline \end{tabular} \end{center} \end{table} To show the performance of the newly proposed veMSE, both synthetic signals and real physical datasets are considered in the following sections. As one of the most popular multichannel entropy algorithms, MMSE is employed as a benchmark for veMSE under the same conditions.
\section{Results of veMSE on simulated signals} In this section, synthetic signals generated from five models are utilized to illustrate the performance of veMSE: Gaussian white noise, flicker noise (coloured, $1/f$, noise), and the autoregressive (AR) models AR$(1)$, AR$(2)$ and AR$(3)$. The standard deviations of all generated signals were set to $\sigma=1$. The coefficients of the AR models are given in Table \ref{Tab:AR_model}. The parameters discussed in the following subsections are the embedding dimension, data length, tolerance, and scale factor. To avoid confounding influences and to control the variables, the time lag was set to $L = 1$ for all operations. \begin{table} [htbp] \small \caption{Coefficients of the AR models} \label{Tab:AR_model} \begin{center} \begin{tabular}{|p{3cm}<{\centering}||p{2cm}<{\centering}|p{2cm}<{\centering}|p{2cm}<{\centering}|} \hline Coefficients & $a_1$ & $a_2$ & $a_3$ \\ \hline AR $(1)$ & $0.5$ & $-$ & $-$\\ \hline AR $(2)$ & $0.5$ & $0.25$ & $-$\\ \hline AR $(3)$ & $0.5$ & $0.25$ & $0.125$\\ \hline \end{tabular} \end{center} \end{table} The figures in each subsection are presented in pairs, to show the results for the five models. The upper panels give the curves for white noise and flicker noise, while the bottom panels depict the results for the AR models, contrasted with white noise. The complexity curves of the entropy values are plotted as error-bar graphs, averaged over the outcomes of 20 realizations of each model. \subsection{Varied Embedding Dimension (m)} Usually, for the implementation of SampEn-based algorithms, the embedding dimension and the data length are two interdependent and mutually coupled parameters. As a rule of thumb, the data size is restricted to between $10^m$ and $30^m$ samples \cite{Ref82}. In the real world, recorded signals do not have infinite length and are generally limited by operation time and memory space. Therefore, the embedding dimension is commonly set to $m=2$ or $m=3$ for a signal with 1000 samples \cite{Ref26}.
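For reference, the AR test signals with the coefficients of Table \ref{Tab:AR_model} can be generated as follows. This is a minimal sketch: the seeding, the burn-in handling, and the post-hoc standardization to $\sigma = 1$ are our choices, and the generation of flicker noise is omitted.

```python
import numpy as np

def ar_signal(coeffs, n, rng):
    """Generate x[t] = a1*x[t-1] + ... + ap*x[t-p] + w[t] with Gaussian
    innovations w, then standardize the series to zero mean and sigma = 1."""
    p = len(coeffs)
    x = np.zeros(n + p)                      # first p samples act as burn-in
    w = rng.standard_normal(n + p)
    for t in range(p, n + p):
        x[t] = np.dot(coeffs, x[t - p:t][::-1]) + w[t]
    x = x[p:]
    return (x - x.mean()) / x.std()

rng = np.random.default_rng(0)
models = {"AR(1)": [0.5], "AR(2)": [0.5, 0.25], "AR(3)": [0.5, 0.25, 0.125]}
signals = {name: ar_signal(a, 1000, rng) for name, a in models.items()}
```

White Gaussian noise is simply `rng.standard_normal(n)`; the higher-order models add progressively longer-range correlation to the same innovation process.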
Higher values of the embedding dimension, combined with a shortage of data, cause unstable estimates for standard MMSE, as shown in Figure \ref{fig:vari_m_MMSE}. \begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[width=3.4in]{figure/vari_m_MMSE.jpg} \label{fig:vari_m_MMSE} } \subfigure[]{ \includegraphics[width=3.4in]{figure/vari_m.jpg} \label{fig:vari_m_veMSE}} \caption{Operation of single-scale a) MMSE, and b) veMSE as a function of the embedding dimension, m.} \label{fig:vari_m} \end{figure} The results of veMSE as a function of the embedding dimension are shown in Figure \ref{fig:vari_m_veMSE}. Each entropy value was calculated from signals in two channels. Apart from the independent variable, $m$, the other parameters, the scale factor and the data length, were set to constant values (1 and 1000, respectively). The tolerance varies according to the total variance of the covariance matrix of the processed data sets, as $r\times tr(S)$, and here the tolerance quotient was fixed at $r = 0.15$. Figure \ref{fig:vari_m_veMSE}, where the embedding dimension is varied from 1 to 9, shows that, unlike for MMSE, signals with a complex correlated structure give a defined entropy value even at high embedding dimensions, e.g. AR(3) at a dimension of 7. Also, signals with higher randomness are more likely to become unstable as the embedding dimension increases. However, even in the case of Gaussian white noise, which has the highest randomness, veMSE with an embedding dimension of 5 was able to process the data successfully and stably. On the other hand, traditional Multiscale Sample Entropy methods fail to give a defined value for embedding dimensions higher than 3 under the same conditions \cite{Ref1}. Therefore, from the viewpoint of estimation stability, veMSE exhibits a marked improvement when it comes to complex information in high dimensions.
\subsection{Varied Data Length (N)} The data length of the signal is another limitation, in addition to the embedding dimension, when implementing entropy-based calculations, particularly for real-world processes. Amplitude-based entropy algorithms, such as Multiscale Sample Entropy (MSE) and Multiscale Fuzzy Entropy (MFE), require at least 1000 data points to guarantee a consistent estimate \cite{Ref90}. However, for real-world data, as in the analysis of heart rate variability, for example, obtaining the required number of R-R intervals needs a minimum of 5 minutes of raw electrocardiogram (ECG) recording. In practice, such long recordings in a controlled state are hard to obtain. Compared to amplitude-based entropy methods, distance-based entropy algorithms, such as Cosine Similarity Entropy, impose fewer restrictions on the data length, with a minimum of 700 samples required \cite{Ref1}. \begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[width=3.4in]{figure/vari_N_MMSE.jpg} \label{fig:vari_N_MMSE} } \subfigure[]{ \includegraphics[width=3.4in]{figure/vari_N_veMSE.jpg} \label{fig:vari_N_veMSE}} \caption{Operation of single-scale a) MMSE, and b) veMSE as a function of the data length, N.} \label{fig:vari_N} \end{figure} Figure \ref{fig:vari_N_veMSE} illustrates the performance of single-scale veMSE as a function of the data length, on a logarithmic scale. The embedding dimension was set to $m=2$ and the choice of the tolerance was the same as before. The values of veMSE for white noise and $1/f$ noise were not defined below $N$ = 40 samples, while veMSE for AR(2) was not defined for $N$ smaller than 30. The smallest sample sizes for AR(1) and AR(3) were also $N$ = 40. The standard deviation of the entropy estimates gradually narrows with an increase in data length, while the range of the error bars increases from the top panel to the bottom one.
As a result, a system with more structure (AR(3)) exhibits a larger standard deviation. In addition, the consistency of the estimation is guaranteed, as evidenced by the relative positions of the curves in each graph remaining unchanged as the data length, $N$, increases. More importantly, when analysing the white noise and flicker noise in the top graph, the estimate at a length of $N$ = 100 samples could successfully separate the complexity degrees of the two signals, while in the bottom graph, once the data length reaches $N$ = 400 samples, there is no intersection among the entropy values of the different models. This illustrates that the data length required by veMSE is much lower than for other entropy methods; for example, MMSE in Figure \ref{fig:vari_N_MMSE} requires $N$ = 1300 samples to separate the four models. This property paves the way to revealing complexity information at high scales, with a limited data length but a stable estimation, and thus better serves the balance between the embedding dimension and the data size. \subsection{Varied tolerance (r)} \begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[width=3.4in]{figure/vari_r_MMSE.jpg} \label{fig:vari_r_MMSE} } \subfigure[]{ \includegraphics[width=3.4in]{figure/vari_r.jpg} \label{fig:vari_r_veMSE}} \caption{Operation of single-scale a) MMSE, and b) veMSE as a function of the tolerance, r.} \label{fig:vari_r} \end{figure} The tolerance, $r$, can be interpreted as the boundary on the degree of similarity between the compared templates. SampEn-based algorithms impose the tolerance as a hard threshold, via a Heaviside function related to the standard deviation of the original data. However, for a multivariate case with multichannel data sets, only a single tolerance value is used in the algorithm. As in \cite{Ref24}, the tolerance of veMSE depends on the total variance of the covariance matrix, \textbf{$S$}, of the analysed data sets.
Therefore, the tolerance was set to $r\times tr(S)$. Figure \ref{fig:vari_r_veMSE} illustrates the single-scale entropy estimate as a function of the tolerance quotient, $r$, varied from 0.1 to 1.5 in steps of 0.1. The data length and the embedding dimension were fixed at $N$ = 1000 and $m$ = 2, respectively, to isolate the influence of the varying tolerance. Observe from the figures that, for all curves, an increase in the tolerance quotient results in a monotonic decrease of the complexity estimate, the same behaviour as for MMSE in Figure \ref{fig:vari_r_MMSE}. All models can be clearly distinguished from one another below $r$ = 1 in veMSE; the values beyond $r$ = 1 are too small to be differentiated. The gaps among the different complexity estimates in the two figures clearly narrow beyond $r$ = 0.5, therefore supporting a choice of the tolerance quotient below $r$ = 0.5. \subsection{Varied Scale Factor} As noted by Costa \textit{et al}. in \cite{Ref23}, multiscale analysis via the consecutive coarse-graining procedure is of importance in the processing of signals with a hidden correlation structure. Based on the above analysis of the parameters involved in veMSE, the embedding dimension was set to $m$ = 2 and the tolerance was chosen as $r$ = 0.15 multiplied by the total variance of the covariance matrix. To assess the multiscale performance, graphs of the multichannel entropy results are presented for the scale factor varying from $\tau$ = 1 to $\tau$ = 40. Dual-channel data with $N$ = 3000 sample points for each model were considered.
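In all of these experiments, the single tolerance value shared by the channels is derived from the total variance (the trace of the covariance matrix) of the multichannel data; a minimal sketch (the function name is ours):

```python
import numpy as np

def multichannel_tolerance(channels, r=0.15):
    """Tolerance r x tr(S), with S the P x P covariance matrix of the
    P channels, so that one threshold serves all channels jointly."""
    S = np.cov(np.vstack(channels))          # rows are channels
    return r * np.trace(S)
```

For unit-variance channels, $tr(S)$ is simply the number of channels $P$, so the effective threshold grows with the number of channels combined.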
\begin{figure}[htbp] \centering \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/MMSE_22.jpg} \caption{Operation of Multivariate Multiscale Sample Entropy \cite{Ref24} with the embedding dimension set as [2 2].} \label{fig:MMSE_22} \end{minipage} \hspace{0.1in} \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/MMSE_23.jpg} \caption{Operation of Multivariate Multiscale Sample Entropy \cite{Ref24} with the embedding dimension set as [2 3].} \label{fig:MMSE_23} \end{minipage} \end{figure} \begin{figure}[htbp] \centering \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/veMDE.jpg} \caption{Operation of variational embedding Multiscale Diversity Entropy \cite{Ref96} with the embedding dimension set as 2.} \label{fig:veMDE} \end{minipage} \hspace{0.1in} \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/veMSE_3000.jpg} \caption{Operation of the proposed variational embedding Multiscale Sample Entropy with the embedding dimension set as 2.} \label{fig:veMSE} \end{minipage} \end{figure} To further elucidate the extent of the improvements of veMSE over Multivariate Multiscale Sample Entropy (MMSE) and Variational Embedding Multiscale Diversity Entropy (veMDE), their performances were compared with the proposed veMSE method. For the same data size, owing to the variational-dimension feature of veMSE, two different embedding-dimension settings were applied for MMSE, with the results shown in Figures \ref{fig:MMSE_22} and \ref{fig:MMSE_23}. The performance of veMDE is given in Figure \ref{fig:veMDE}, with the embedding dimension set to $m$ = 2, while the results of veMSE are presented in Figure \ref{fig:veMSE}. These figures demonstrate that complex structure hidden at higher dimensions is hard to unveil with MMSE, and that the standard deviation over the 20 realizations grows steadily as the scale factor increases.
Overall, the dimension setting $[2,2]$ gives a better performance for MMSE under the considered restricted data length. However, even with this optimal dimension setting, as in Figure \ref{fig:MMSE_22}, the complexities of AR(3) (in purple) and AR(2) (in yellow) fail to be distinguished in the multiscale case. As for veMDE in Figure \ref{fig:veMDE}, Diversity Entropy is based on angular distance and distribution probability \cite{Ref48}. As can be seen in the graph, veMDE gives a consistent estimate for each system, which reflects short-term correlation. Nevertheless, the long-term structural complexity of each model fails to be revealed by veMDE. Moreover, due to this inability to estimate long-term correlation, veMDE for the AR models, which contain a highly correlated structure, cannot be distinguished from white noise at large scales. On the other hand, consistent with the theoretical discussion above, the merits of veMSE can be clearly seen in Figure \ref{fig:veMSE}. To better quantify the improvement, the optimal dimension setting [2 2] of MMSE in Figure \ref{fig:MMSE_22} was used for comparison with veMSE. Observe from the upper graphs for the two types of noise that, although both algorithms were able to distinguish between the two models, the complexity of white noise decreased while that of flicker noise maintained a certain level despite the increasing scale. The range of the error bars for flicker noise obtained with veMSE was much narrower than with MMSE, especially at large scales. Secondly, in the bottom graph, the values for AR(2) and AR(3) (in yellow and purple) fail to be fully separated by MMSE at high scales, as stated above, while for the same data length the AR models of different orders are successfully separated by veMSE.
It is critical to apply the entropy calculation in a multiscale setting, since the long-range correlation of a system is largely ignored in low-scale analysis. Next, it can be observed that only a minor difference exists between the complexity estimates of the two least structured models, white noise and AR(1) (blue and red lines in the bottom graph). Instead, the enhancement produced by veMSE is particularly pronounced in the analysis of highly correlated and structured signals, that is, systems with higher structural complexity. Overall, according to the comparison of veMSE with MMSE and veMDE on the above five models, veMSE provides a more stable estimate that better reflects complex temporal fluctuations. In addition, veMSE is especially suitable for the multiscale analysis of highly correlated signals which exhibit variations of spatio-temporal patterns over a range of scales. \section{Properties of veMSE} We now elaborate on three properties of the proposed veMSE algorithm: noise robustness, directionality, and computational efficiency. The parameter settings were: data size $N$ = 3000, embedding dimension $m$ = 2, tolerance $r$ = 0.15, and scale factor $\tau$ = 1, and a bivariate system was considered in the analysis. The results of the noise and directionality analyses based on the proposed veMSE are presented in Figures \ref{fig:veMSE_noise} and \ref{fig:veMSE_dire}, respectively, and the corresponding performance of MMSE is shown in Figures \ref{fig:MMSE_noise} and \ref{fig:MMSE_dire}. The computation-time curves of veMSE and MMSE are given in Figure \ref{fig:time}. \subsection{Noise Robustness} Robustness to noise and artifacts is of critical importance in any estimation.
Given that it is infeasible to avoid the noise associated with recording equipment, and given the ubiquity of artifacts in biological signals (for instance, muscle and electromagnetic artifacts pervasively corrupt EEG-based monitoring \cite{Ref137}), the noise-robustness property was tested by comparing the complexity estimates for AR models with and without noise. In Figure \ref{fig:veMSE_noise}, the first (top) panel presents the curves for uncorrelated white Gaussian noise (WGN), correlated flicker noise ($1/f$ noise), and a mixture containing both WGN and $1/f$ noise. It is clear that the three systems with different degrees of correlation can be separated by veMSE. Adding white noise enhances the short-term correlation, as shown at the beginning of the top graph, where the yellow line (\texttt{1/f + WGN}) is as high as the blue line (\texttt{WGN}), while the long-term correlation is lower as the scale factor increases (\texttt{1/f} $>$ \texttt{1/f}+\texttt{WGN} $>$ \texttt{WGN}). The first graph confirms that veMSE gives a correct complexity estimate, in line with the theoretical analysis, for both uncorrelated and correlated noise. The second and third panels of Figure \ref{fig:veMSE_noise} present the results of veMSE for the AR models contaminated with uncorrelated white noise and correlated flicker noise, respectively, to be contrasted with the outcomes for the pure AR signals in Figure \ref{fig:veMSE}. The amplitude of the added noise was set to $20\%$ of that of the AR signals. Compared to Figure \ref{fig:veMSE}, the gaps between the complexity curves for the AR models of varying order decrease in the presence of noise. However, even though the gaps among the distinct models narrow, separation is still achieved up to high scales in these $20\%$-noise scenarios. In the case of MMSE, in contrast, noisy AR signals of different complexities could not be well separated, and the impact of noise is clearly visible in Figure \ref{fig:MMSE_noise}.
Given these points, the performance of the veMSE-based complexity estimation is consistent with the noise-free case. This is a unique feature of veMSE, not present in the other MSE algorithms, showing its potential for application to practically recorded data sets. \begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[width=3.4in]{figure/noise_MMSE.jpg} \label{fig:MMSE_noise} } \subfigure[]{ \includegraphics[width=3.4in]{figure/noise_veMSE.jpg} \label{fig:veMSE_noise}} \caption{Illustration of noise robustness of a) standard MMSE and b) the proposed veMSE.} \label{fig:noise} \end{figure} \subsection{Directionality} The next feature of veMSE to be discussed is directionality. For multivariate analysis, one problem is that the optimal ordering of the input channels is unknown. Yet, without prior knowledge of the optimal channel order, the performance of the estimation can be affected. To this end, the directionality of veMSE was analysed for bivariate systems. Figure \ref{fig:veMSE_dire} shows two graphs, each containing three pairs of curves. In the top graph, the pairs are white noise with AR(1), AR(1) with AR(2), and AR(2) with AR(3), with the order of the inputs shown in the legend (first listed, first processed). As can be seen from this figure, the lower-scale estimate is mainly influenced by the first input signal. For example, the blue line, \texttt{[WGN AR(1)]}, is clearly lower than the red one, \texttt{[AR(1) WGN]}, at the beginning, especially in the single-scale case. As the scale increases, the two lines approach the same level. A similar trend can be seen for the other two pairs.
\begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[width=3.4in]{figure/dire_MMSE.jpg} \label{fig:MMSE_dire} } \subfigure[]{ \includegraphics[width=3.4in]{figure/dire_veMSE.jpg} \label{fig:veMSE_dire}} \caption{Illustration of directionality of a) standard MMSE and b) the proposed veMSE.} \label{fig:dire} \end{figure} In the second graph of Figure \ref{fig:veMSE_dire}, the analysed signals are AR(1), AR(2) and AR(3), with one of the signals in each pair contaminated by white noise. The legend \texttt{[AR AR+WGN]} refers to the case where the noise-free signal is the first variate, followed by the noisy signal, and vice versa for \texttt{[AR+WGN AR]}. This setting of the inputs was used to simulate real scenarios when dealing with multichannel signals, whereby one of the channels represents a poor, noisy recording. Considering the noise-robustness property of veMSE, the amplitude of the noise in this subsection was enlarged to the same level as that of the AR signals, to demonstrate a clear difference when the input order is modified. As shown in the figure, the inverted input orders are reflected in different starting levels, while the curves then approach each other and end with similar estimates. Therefore, regardless of the input order, the separation of the complexity levels of the AR models can be successfully achieved. As observed in Figure \ref{fig:MMSE_dire}, for MMSE the inverted input exhibited no influence on the resulting curves at small scales, with similar behaviour to veMSE at larger scales. As demonstrated in Figure \ref{fig:veMSE_dire}, the reversed order has a limited influence on the estimates at high scales, as all the paired curves approach the same three regions, respectively, and despite the modified order, the three models can be separated.
However, as the top graph shows a similar phenomenon, varying the order of the input signals will generate different entropy values in small-scale analysis when the input signals have distinct structures. Therefore, the input order needs to be carefully considered in small-scale analysis, whereas this consideration can be ignored in high-scale analysis with identical system measurements. \subsection{Computational Complexity} Admittedly, entropy analysis of multichannel signals is inherently more time-consuming than single-variate estimation. Nevertheless, for potential future applications such as real-time processing, calculation efficiency is one of the critical factors that needs to be carefully considered. Therefore, in this subsection, the time consumption of veMSE is discussed and compared with that of the commonly used MMSE; the results are given below. \begin{figure}[htp] \centering \includegraphics[width=7in]{figure/time_consumption.jpg} \caption{Computation time for MMSE and the proposed veMSE with modified parameters for white Gaussian input.} \label{fig:time} \end{figure} Figure.\ref{fig:time} shows the processing time as a function of various parameters when implementing veMSE (blue line) and MMSE (red line). All the curves are given as an average over 10 independent realizations. Each graph reflects the behaviour for only one modified parameter; the independent variables in the graphs are, from the left- to the right-hand side, the scale factor, the data length, the number of channels and the embedding dimension. All the entropy calculations were set to bivariate processing by default. When the data length and embedding dimension were not the varied parameter, they were fixed to $N$ = 5000 and $m$ = 2. Overall, the red line, representing the computational time of MMSE, lies above the blue line, that of veMSE, in all the scenarios.
Indeed, the increase of the scale factor and the decrease of the data length both show that when the data size after the ‘coarse-graining’ procedure falls below 1000 samples, the times needed for the two calculations are similar; this is seen in the first graph when the scale factor is higher than 5, and in the second graph when the data length is shorter than 1000. As for the influence of the channel number, demonstrated in the third graph, it is reasonable that MMSE needs more time as the channel number increases. The key step in Sample Entropy is the ratio of conditional probabilities of similar patterns between the embedding dimension, $m$, and its increment, $m+1$; the number of possible ways to increase to the ($m+1$) dimension is equal to the number of channels involved when forming the composite delay vector in MMSE. Therefore, the calculation with an increased embedding dimension is repeated $c$ times in MMSE, where $c$ denotes the number of channels. Finally, regarding the relationship between an increased dimension and the computation time, the time difference roughly maintains a fixed value in spite of the changing dimension in the last graph. In general, with reference to the widely applied MMSE, the time needed for the same amount of data with veMSE is shorter. This shows that the calculation efficiency of veMSE is higher than that of MMSE, which is of high potential interest for implementing real-time monitoring of human states in the future. \section{Performance of veMSE on real signals} \subsection{Wind Dynamics} Having illustrated how veMSE performs on synthetic signals, we now examine the performance of veMSE on real-world systems. First, wind dynamics were examined. The long-term correlations in wind dynamics have been revealed in previous studies using detrended fluctuation analysis (DFA) \cite{Ref275} and standard Multivariate Multiscale Sample Entropy (MMSE) \cite{Ref24}.
Here, the proposed veMSE was utilized to demonstrate improved characterization of the different dynamical complexities of wind regimes. The wind database employed was recorded by 3D ultrasonic anemometers at a sampling frequency of 50 Hz. The recordings were made at the Institute of Industrial Science of the University of Tokyo. Three channels were obtained, in the east-west, north-south, and vertical directions, respectively. The wind regimes containing different dynamics were defined as low, medium, and high according to the magnitude of the wind speed, with examples shown in Figure.\ref{fig:wind}. The parameters involved in the veMSE analysis were set to $m$ = 2, $L$ = 1, $r = 0.2\times tr(S)$, and $N$ = 3000. The entropy results were averaged over 10 trials for each channel and are exhibited as error bars. In addition, shuffled wind data sets, generated from the recorded wind signals but with a random order, were also tested and compared with the obtained wind dynamics. In this way, the possible correlations within the signals were broken down while the statistical properties were preserved. \begin{figure}[htp] \centering \includegraphics[width=5in]{figure/wind_example.jpg} \caption{Magnitude of the wind signal. The wind segments are defined as low, medium, and high regimes.} \label{fig:wind} \end{figure} Before the comparison between veMSE and MMSE, univariate Multiscale Sample Entropy (MSE) was first applied to give an insight into the complexity of the dynamical system to be processed. As shown in Figure.\ref{fig:MSE_wind}, the randomized data sets exhibit white-noise-like behaviour in terms of structural complexity, which is expected as the correlations were destroyed by shuffling. The MSE results for the wind dynamics assigned the highest complexity to the medium regime, which is expected since the medium wind speed holds the fewest constraints.
Yet, the complexity of the low wind regime was quantified as higher than that of the high regime, which is counter-intuitive, as the high wind regime also contains components with medium speed. Besides, each wind regime exhibited lower complexity than its randomized series, wrongly suggesting the absence of long-term correlations within the wind data sets. The same trials were analyzed by Multivariate Multiscale Sample Entropy, as shown in Figure.\ref{fig:MMSE_wind}, which revealed the relation of the three wind regimes in the expected way: the medium wind regime exhibited the highest complexity, followed by the high and low dynamic regimes. Further, all three wind regimes exhibited long-range correlations, with complexity higher than that of their surrogate data sets. However, overlapping areas can be found across all the scales. In Figure.\ref{fig:veMSE_wind}, observe that, as desired, the long-range correlations of the different wind speeds were detected by the proposed veMSE, with more apparent separation on the basis of the correct analysis compared to MSE and MMSE. \begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[width=2.2in]{figure/MSE2.jpg} \label{fig:MSE_wind} } \subfigure[]{ \includegraphics[width=2.2in]{figure/MMSE_wind.jpg} \label{fig:MMSE_wind}} \subfigure[]{ \includegraphics[width=2.2in]{figure/veMSE_wind.jpg} \label{fig:veMSE_wind}} \caption{Results of a) standard single-variate MSE, b) standard multi-channel MMSE, c) proposed veMSE based on wind dynamics and randomized dynamics.} \label{fig:wind_entropy} \end{figure} \subsection{Physiological Database} Next, physiological data sets were employed to exhibit the performance of entropy-based complexity quantification. The physiological signals utilized are from the Fantasia database \cite{Ref40}, which includes R-R intervals (RRI) and interbreath intervals (IBI) extracted from electrocardiograph (ECG) and respiration signals, respectively.
The structure of long-term correlations in heart rate variability and respiratory dynamics has been examined by traditional methods such as Detrended Fluctuation Analysis (DFA) \cite{Ref277} and standard MMSE \cite{Ref24}. The Fantasia database was recorded from 20 young people (aged 21-34) and 20 elderly participants (aged 68-85) who were rigorously screened for 120 minutes. Ten subjects were chosen from each group. The ECG and respiration signals were recorded at a sampling frequency of 250 Hz. Then, the RRI and IBI signals were extracted and aligned. Surrogate series with randomised order were analyzed along with the RRI and IBI signals in each methodology, to verify the effectiveness of the complexity estimation. The parameters in the entropy analysis were set to $m$ = 2, $L$ = 1, $r = 0.15\times tr(S)$, and $N$ = 4000. First, to give an overall view of the signals in each channel, univariate Multiscale Sample Entropy was applied to each single channel separately, as exhibited in Figure.\ref{fig:MSE_RRI} and Figure.\ref{fig:MSE_IBI}. As the Complexity Loss Theory \cite{Ref254} states, the adaptive capacity of bio-systems is damaged by disease and aging. The complexity of the RRI and IBI signals recorded from young people is therefore expected to be higher than that from elderly subjects when considering long-term correlations. With respect to the short-term correlations of RRI, MSE exhibited a correct estimation at certain low scales. However, MSE on the basis of either RRI or IBI is unable to reveal the correct relation between the dynamics of elderly individuals and young people over the long range.
\begin{figure}[htbp] \centering \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/MSE_RRI.jpg} \caption{Operation of Univariate Multiscale Sample Entropy based on R-R interval.} \label{fig:MSE_RRI} \end{minipage} \hspace{0.1in} \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/MSE_IBI.jpg} \caption{Operation of Univariate Multiscale Sample Entropy based on interbreath interval (IBI).} \label{fig:MSE_IBI} \end{minipage} \end{figure} Next, as observed in Figure.\ref{fig:MMSE_Fan}, MMSE was applied to the bivariate channels. The resulting error bars indicated higher dynamical complexity in young participants than in elderly individuals, which conforms to the complexity loss theory with aging \cite{Ref254}. Moreover, both physiological data sets showed higher long-range correlations than the randomized surrogate series, as expected; yet, as the scale increases, overlapping areas can be observed due to the lack of sample length after the coarse-graining process. In contrast to MMSE, the performance of veMSE in Figure.\ref{fig:veMSE_Fan} maintained the correct and clear estimation, but with higher stability, particularly at large scales. In practical scenarios, the size of recorded signals is mostly limited. Hence, veMSE can better facilitate the analysis of physical and physiological signals when identifying dynamical differences based on nonlinear features.
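The randomized surrogate series used in both the wind and the physiological experiments can be generated by a simple shuffle, which destroys temporal correlations while preserving the amplitude distribution; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def shuffled_surrogate(x, rng):
    # Random permutation: the sample values (and hence the mean, variance and
    # histogram) are preserved, but any temporal ordering is destroyed
    return rng.permutation(np.asarray(x))

rng = np.random.default_rng(7)
x = np.cumsum(rng.standard_normal(1000))   # strongly correlated toy series
s = shuffled_surrogate(x, rng)
```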
\begin{figure}[htbp] \centering \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/MMSE_Fan.jpg} \caption{Operation of Multivariate Multiscale Sample Entropy.} \label{fig:MMSE_Fan} \end{minipage} \hspace{0.1in} \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/veMSE_Fan.jpg} \caption{Operation of Variational Embedding Multiscale Sample Entropy.} \label{fig:veMSE_Fan} \end{minipage} \end{figure} \section{Conclusion} To conclude, the Variational Embedding Multiscale Sample Entropy (veMSE) method has been introduced for enhanced complexity analysis of real-world data. The results of this paper indicate that veMSE is capable of exhibiting the complex features of a system at large scales and with higher embedding dimensions than MMSE and veMDE. Besides, multivariate analysis via veMSE guarantees an improvement over single-variate analysis, regardless of the quality of the recorded signals in the sub-channels. Moreover, veMSE exhibits strong noise robustness and, at the same time, its time consumption is lower than that of MMSE under the same conditions; this improvement is particularly apparent as the number of available channels increases. The higher calculation efficiency is of high potential interest for applying entropy analysis in scenarios that require real-time processing or synchronized monitoring in future applications. Nevertheless, it should be noted that this method is restricted to amplitude-based distances. Future research will focus on angular distance-based measures combined with the variational embedding dimension methodology. \appendices \section{Algorithm of Multivariate Multiscale Sample Entropy} \begin{algorithm}[htb] \caption{Multivariate Multiscale Sample Entropy } \label{MMSE} \begin{algorithmic} \Statex The steps of the standard Multivariate Multiscale Sample Entropy are given below.
For a multi-variate data set $\{x_{c,i}\}_{i=1}^N,\,1\leq c\leq P$ with length $N$ and number of channels $P$, the manually selected parameters are the embedding dimension ($M = [m_1, m_2,\dots, m_P]$), the tolerance ($r$), the time delay ($L = [l_1, l_2,\dots, l_P]$) and the scale factor ($\tau$): \begin{enumerate} \item Normalize the original multi-variate data sets by subtracting the mean and dividing by the standard deviation. \item Perform the coarse-graining process to obtain the scaled multi-channel time series $\{y_{c}^{(\tau)}(j)\}_{j=1}^{N/\tau}$ following the equation\newline \begin{equation} y_{c}^{(\tau)}(j) = \frac{1}{\tau}\sum^{j\tau}_{i = (j-1)\tau+1}{x_{c}(i)},\quad 1\leq j\leq \frac{N}{\tau},\,c = 1,2,\dots,P \nonumber \end{equation} \item Form the Composite Delay Vectors (CDV) $\textbf{Y}_M(i)$ according to $M$ and $L$ as \begin{align*} \textbf{Y}_M(i) =[& y_{1,i},y_{1,i+l_1},\dots,y_{1,i+(m_1-1)l_1},\\ & y_{2,i},y_{2,i+l_2},\dots,y_{2,i+(m_2-1)l_2},\\ &\qquad \qquad \qquad \vdots \\ & y_{P,i},y_{P,i+l_P},\dots,y_{P,i+(m_P-1)l_P}] \end{align*} \item Compute the similarity for all pairwise CDVs ($Y_M(i)\,\&\, Y_M(j)$) based on the Chebyshev distance as \begin{equation} D(i,j) = \max\{ |Y_M(i+k)-Y_M(j+k)|\,:\, 0\leq k\leq (\sum_{c=1}^{P}{m_c})-1,\,i\neq j\} \nonumber \end{equation} \item Calculate the number of matching patterns, defined as the similar pairs $B(i)$ that satisfy the criterion $D(i,j)\leq r$.
\item Compute the local probability $C(i)$ and the global probability $\Phi$ of $B(i)$ as \newline \begin{equation} C(i) = \frac{B(i)}{N-n-1},\quad \Phi=\frac{\sum_{i=1}^{N-n}{{C(i)}}}{N-n},\,n = \max(M)\times \max(L) \nonumber \end{equation} \item Repeat Steps 3-6 with the embedding dimension modified to ($m_c$+1) and obtain the updated global probability as\newline \begin{equation} \Phi^*=\frac{\sum_{i=1}^{N-n}{{C^*(i)}}}{N-n},\, n = \max(M^*)\times \max(L) \nonumber \end{equation} Recall that there are $P$ ways to increase the embedding dimension; the modified global probability, $\Phi^*$, is the averaged result. \item The Multivariate Multiscale Sample Entropy is defined as\newline \begin{equation} MMSE= -\ln{[\frac{\Phi^*}{\Phi}]} \nonumber \end{equation} \end{enumerate} \end{algorithmic} \end{algorithm} \section*{Acknowledgment} We wish to thank the Institute of Industrial Science of the University of Tokyo for providing the wind data sets used in this article. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{ieeetr} \section{Introduction} \IEEEPARstart{T}{he} investigation of the complexity of real-world signals has witnessed a boom over the last decades. Now recognized to be as important as properties in the time and frequency domains, the complexity of a data set is a unique feature that can be utilized to understand a signal generating mechanism via nonlinear analytical tools. Studies of complexity have covered a wide spectrum, from the fault diagnosis of rotating machines \cite{Ref48,Ref51,Ref56} through to the early detection of disease and sickness in humans \cite{Ref8,Ref37,Ref68,Ref81}. Indeed, biological signals exhibit high degrees of irregularity and complex dynamical behaviours \cite{Ref80}. Such nonlinear dynamics result from the interactions between the human body (organisms) and the peripheral environment, and exhibit continuous fluctuations in the time domain \cite{Ref79}.
Complexity Loss Theory (CLT) states a potential relationship between the complexity of physiological signals and the health of an individual, whereby a higher degree of complexity indicates a healthier condition of the individual \cite{Ref78}. However, new developments have shown that pathology can also exhibit an increase in complexity in terms of structure; that is, a decrease in self-correlated complexity may also be observed in a healthy body \cite{Ref1}. Although the definition of structural complexity is inconsistent \cite{Ref27}, there are several commonly used methods to quantify the degree of dynamics, among which entropy-based methodologies are popular. Compared to other methods for estimating the complexity of nonlinear systems, such as fractal dimension \cite{Ref80} and recurrence plots \cite{Ref79}, entropy analysis holds the advantages of simplicity and noise robustness \cite{Ref77}. The features of the loss of complexity (LOC) manifest themselves through, for example, an increase in randomness, less regularity, a breakdown of long-term correlations, multiscale variability, and time irreversibility \cite{Ref27}. To this end, a large number of entropy algorithms have been proposed to quantify the different features of complexity, or more precisely, the degree of complexity based on different definitions. The two early entropy algorithms that have been widely used are the Approximate Entropy (ApEn) \cite{Ref84} and the Sample Entropy (SampEn) \cite{Ref22}, proposed in 1991 and 2000, respectively. As a modification of ApEn, SampEn reduces the bias experienced by ApEn by removing self-matching, and simultaneously has less dependency on the data length, giving relatively higher consistency \cite{Ref22}. Both ApEn and SampEn were developed to quantify the randomness and irregularity of a system. Generally speaking, the lower the value of SampEn, the less complex the system.
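As an illustration of the single-scale Sample Entropy just described, a minimal NumPy sketch could read as follows (the function name and the common tolerance convention $r = 0.2\sigma$ are illustrative choices, and the brute-force pairwise comparison is for clarity rather than speed):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # SampEn: negative logarithm of the conditional probability that
    # templates similar for m samples remain similar for m+1 samples,
    # with self-matches excluded.
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def match_count(dim):
        # All overlapping templates of length `dim`
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        # Pairwise Chebyshev (maximum-coordinate) distances
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (d <= tol).sum() - len(templates)   # subtract self-matches

    return -np.log(match_count(m + 1) / match_count(m))
```

A lower value indicates a more regular, less complex series, consistent with the interpretation given above.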
However, truly complex signals exhibit varying structures across multiple time scales, and long-range correlations fail to be observed by single-scale Sample Entropy analysis. To this end, Costa \textit{et al}. introduced a ‘coarse-graining’ procedure into the Sample Entropy methodology to verify the structural complexity hidden at high scales, referred to as the Multiscale Sample Entropy (MSE) \cite{Ref23}. This, in turn, further spurred the development of MSE algorithms, including the Composite Multiscale Sample Entropy \cite{Ref62} and the Refined Composite Multiscale Sample Entropy \cite{Ref64}. However, due to the ‘coarse-graining’ procedure, the requirement for a long data length becomes even more problematic and hard to satisfy in most practical situations. In 2011, the Multivariate Multiscale Sample Entropy (MMSE) was introduced, which successfully combined data sets from multiple channels to estimate the dynamics of the system more accurately and with a shorter data length \cite{Ref24}. The key improvement of MMSE is the formation of the composite delay vector, which involves and reconstructs data segments from multiple channels, whereby the inner correlations among the diverse signals are preserved \cite{Ref24}. The existing multivariate multiscale entropies to date include: \begin{itemize} \item Multivariate Multiscale Sample Entropy (MMSE) \cite{Ref24}, a method which performs joint multivariate analysis of physiological signals associated with multiple channels. \item Multivariate Multiscale Permutation Entropy (MMPE) \cite{Ref37}, an extension of the standard Permutation Entropy \cite{Ref36} which inherits the desirable properties of PE, such as fast computation and simple implementation. \item Multivariate Multiscale Fuzzy Entropy (MMFE) \cite{Ref58}, which combines Composite Delay Vectors and Fuzzy Entropy \cite{Ref90}, and exhibits more stable and smoother estimates than MMSE.
\item Variational Embedding Multiscale Diversity Entropy (veMDE) \cite{Ref96}, a method developed on the basis of Diversity Entropy \cite{Ref48} that combines angular distance and relative probability, and exhibits a low computational load with similar performance to MMPE. \end{itemize} Despite their success, the inherent shortcomings of amplitude-based entropy calculation still remain a major obstacle towards their more widespread use. Other issues with current multivariate entropy methods include: \begin{enumerate} \item The rule of thumb is that the required data length is around $10^m$ to $30^m$, where $m$ refers to the embedding dimension \cite{Ref71}. Hence, the choice of the embedding dimension is limited by the available sample points. \item The ‘coarse-graining’ process further emphasizes the drawback of limited data size, which causes inaccurate and undefined estimation in high-scale analysis. \item Amplitude-based distances are always sensitive to outliers such as noise and artifacts. \item The poor quality of any single channel has a large impact on the multivariate performance. \item An excessive computational load is required when implementing multi-channel analyses. \end{enumerate} Recently, Wang \textit{et al}. \cite{Ref96} introduced a new way to combine data sets from multiple channels into one entropy estimation, termed the Variational Embedding Multiscale Diversity Entropy. Here, inspired by \cite{Ref96}, a new multivariate entropy method based on Sample Entropy is proposed, named the Variational Embedding Multiscale Sample Entropy (veMSE). This new method offers the following advantages: \begin{enumerate} \item Complexity estimates at a higher embedding dimension are better defined for a limited data size. \item The requirement on the number of sample points is lower than for current Sample Entropy-based methods. \item Strong noise-robustness is exhibited across the scales.
\item The overall performance of the multivariate estimate does not depend on the quality of any single channel within a data set. \item Less computational time is needed, owing to a straightforward and efficient implementation. \end{enumerate} The remainder of the paper is organized as follows: In Section \uppercase\expandafter{\romannumeral2}, the new veMSE algorithm is outlined. Section \uppercase\expandafter{\romannumeral3} demonstrates the operation of veMSE on simulated signals, to give an initial insight with regard to the choice of parameters. Then, based on the parameter settings suggested in Section \uppercase\expandafter{\romannumeral3}, Section \uppercase\expandafter{\romannumeral4} considers and discusses the properties of veMSE, including noise robustness, directionality, and calculation efficiency. Next, in Section \uppercase\expandafter{\romannumeral5}, veMSE is applied to real-world signals, namely wind dynamics and heart rate variability, and compared with the performance of univariate MSE and MMSE. Finally, conclusions are given. \section{Variational Embedding Multiscale Sample Entropy} \begin{algorithm}[htb] \caption{Variational Embedding Multiscale Sample Entropy} \label{veMSE} \begin{algorithmic} \Statex Assume that there are $P$ channels measured from a system, where a signal recorded from the $c^{th}$ channel is denoted by $x_c(i)$ and is of length $N$, where $1 \leq c \leq P, 1 \leq i \leq N$. The parameters involved in the veMSE algorithm are the tolerance quotient $(r)$, the embedding dimension $(m)$, the scale factor $(\tau)$ and the time lag $(L)$. The detailed steps of veMSE are given below. \begin{enumerate}[ 1)] \item The coarse-graining procedure is first applied to the original data sets for all the channels. The scaled time series are calculated as \begin{equation} y^{(\tau)}(j) = \frac{1}{\tau} \sum_{i=(j-1)\tau+1 }^{j\tau}x(i),\quad 1 \leq j \leq N_t,\;N_t = \frac{N}{\tau}. \end{equation} \item For each channel, the embedding dimension is set as a variable.
The dimension for the $c^{th}$ channel is calculated as $m(c) = m+c-1$, as listed in Table. \ref{Tab:varied_m}. Therefore, combined with the process of time delay, the embedding delay vector of the data $y^{(\tau)}(i),\; (1 \leq i \leq N_t)$ for channel $c$ is designated as a template, $Y_c^{(\tau)}(i)$, and calculated as \begin{equation} Y_c^{(\tau)}(i) = \lbrack y^{(\tau)}(i), y^{(\tau)}(i+L)\; ,\dots, \; y^{(\tau)}(i+n_c)\rbrack,\; n_c = (m(c)-1)L. \end{equation} \item Compute the Chebyshev distance between the templates $Y_c^{(\tau)}$, where the distance is defined according to the amplitude of the embedding vector as \begin{equation} \begin{split} d = \max \lbrace |Y_c^{(\tau)}(i+k) - Y_c^{(\tau)}(j+k)| \rbrace, \\ \text{where} \; 0 \leq k \leq m(c)-1,\; 1 \leq i, j \leq N_t - n_c,\; i \neq j \end{split} \end{equation} \item For each channel, the number of segments, $Y_c^{(\tau)}(j)$, within the tolerance level $r$ of $Y_c^{(\tau)}(i)$, is recorded as $B_c(i)$. In other words, $B_c(i)$ is the number of template matches, or the number of similar patterns in the data set, where the boundary of similarity is defined by the tolerance, $r$. Therefore, the local probability of the occurrence of a template match for channel $c$ is \begin{equation} R_c(i) = \frac{B_c(i)}{(N_t - n_c-1)} \end{equation} \item Then, the global probability of the occurrence of a template match for channel $c$ is calculated as \begin{equation} \Phi (c) = \frac{1}{(N_t - n_c)}\sum_{i=1}^{N_t - n_c}{R_c(i)} \end{equation} \item Compute the sum of the global probabilities over all the channels as \begin{equation} \Phi(m) = \sum_{c=1}^P{\Phi(c)} \end{equation} Recall that, unlike in the MMSE algorithm (see Algorithm \ref{MMSE} in the Appendix), the embedding dimension, $m(c)$, varies with the index of the channel, $c$. \item Modify the embedding dimension to $(m(c)+1)$. Hence, $n_c$ is adjusted to $m(c)\times L$ and Steps 2-6 are repeated to obtain the global probability with increased dimension, $\Phi(m+1)$.
\item The Variational Embedding Multiscale Sample Entropy is finally obtained as \begin{equation} veMSE = -\ln\frac{\Phi(m+1)}{\Phi(m)}. \end{equation} \end{enumerate} \end{algorithmic} \end{algorithm} The steps of the proposed veMSE are given in Algorithm. \ref{veMSE}. The key improvement of veMSE is that it allows a varying setting of the embedding dimension for multi-channel signals, as illustrated in Table. \ref{Tab:varied_m}, while simultaneously maintaining the information within each channel. Compared to MMSE (see Algorithm. \ref{MMSE} in the Appendix), despite the fact that different embedding values can also be achieved for each channel by MMSE, veMSE focuses more on the information between the data channels. Indeed, the composite delay vector in MMSE successfully combines the embedding vectors of multichannel signals. However, there is a discrepancy and bias between the similarity among the recombined composite delay vectors and that among the embedding vectors of the original data sets. The proposed veMSE estimates the complexity information of signals in multiple channels without influencing the correlations within each individual signal. Secondly, due to the varying embedding dimensions, veMSE yields a weighted contribution from the multiple channels. That is, the probability distribution of each channel is diverse, which serves as an amplification of the chaotic features in measured signals of multi-channel systems. Thirdly, since the probability of occurrence of similar patterns is processed multiple times, according to the number of channels, and the summation is performed before the logarithm operation in the last step, veMSE is theoretically able to unveil the complexity properties at a higher embedding dimension for the same data length, in comparison with other algorithms based on Sample Entropy.
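To make the steps above concrete, a minimal NumPy sketch of veMSE follows (function and variable names are illustrative, the channels are assumed to be normalised beforehand, and the brute-force pairwise distance computation is for clarity rather than speed):

```python
import numpy as np

def coarse_grain(x, tau):
    # Step 1: non-overlapping averages of length tau
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def match_probability(y, dim, r, L=1):
    # Steps 3-5: global probability of template matches for one
    # channel at embedding dimension `dim` (self-matches excluded)
    n_c = (dim - 1) * L
    templates = np.array([y[i:i + n_c + 1:L] for i in range(len(y) - n_c)])
    d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
    B = (d <= r).sum(axis=1) - 1
    return np.mean(B / (len(y) - n_c - 1))

def veMSE(channels, m=2, r_quot=0.15, tau=1, L=1):
    # channels: list of 1-D arrays, one per channel, already normalised
    ys = [coarse_grain(np.asarray(x, dtype=float), tau) for x in channels]
    r = r_quot * np.trace(np.cov(np.vstack(ys)))      # tolerance r * tr(S)
    # Step 2: channel c uses embedding dimension m + c - 1 (c = 1..P)
    phi_m = sum(match_probability(y, m + c, r, L) for c, y in enumerate(ys))
    # Step 7: repeat with every channel's dimension increased by one
    phi_m1 = sum(match_probability(y, m + c + 1, r, L) for c, y in enumerate(ys))
    return -np.log(phi_m1 / phi_m)                    # Step 8
```

Note that, as in the algorithm, the per-channel probabilities are summed before the logarithm, rather than forming a single composite delay vector as in MMSE.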
\begin{table} [htbp] \small \caption{Relation between the embedding dimension ($m$) and the index of channel ($c$)} \label{Tab:varied_m} \begin{center} \begin{tabular}{|p{5cm}<{\centering}||p{1cm}<{\centering}|p{1cm}<{\centering}|c|p{1.5cm}<{\centering}|} \hline Index of channel $(c)$ & 1 & 2 & $\dots$ & $c$\\ \hline Embedding dimension $(m(c))$ & $m$ & $m+1$ & $\dots$ & $m+c-1$\\ \hline \end{tabular} \end{center} \end{table} To show the performance of the newly proposed veMSE, both synthetic signals and real physical data sets are considered in the following sections. As one of the popular multichannel entropy algorithms, MMSE will be employed as a benchmark for veMSE under the same conditions. \section{Results of veMSE on simulated signals} In this section, synthetic signals generated from five models are utilized to illustrate the performance of veMSE: Gaussian white noise, flicker noise (coloured noise), and the autoregressive (AR) models AR$(1)$, AR$(2)$ and AR$(3)$. The standard deviations of all the generated signals were set to $\sigma=1$. The coefficients of the AR models are given in Table.\ref{Tab:AR_model}. The parameters discussed in the following subsections include the embedding dimension, data length, tolerance, and scale factor. To control the variables and avoid unknown influences, the time lag was set to $L = 1$ for all operations. \begin{table} [htbp] \small \caption{Coefficients of the AR models} \label{Tab:AR_model} \begin{center} \begin{tabular}{|p{3cm}<{\centering}||p{2cm}<{\centering}|p{2cm}<{\centering}|p{2cm}<{\centering}|} \hline Coefficients & $a_1$ & $a_2$ & $a_3$ \\ \hline AR $(1)$ & $0.5$ & $-$ & $-$\\ \hline AR $(2)$ & $0.5$ & $0.25$ & $-$\\ \hline AR $(3)$ & $0.5$ & $0.25$ & $0.125$\\ \hline \end{tabular} \end{center} \end{table} Figures in each subsection are presented in pairs to show the results for the five models.
The upper panels give the curves for white noise and flicker noise, while the bottom panels depict the results for the AR models in contrast to white noise. The complexity curves of the entropy values are plotted as error-bar figures, averaged over the outcomes of 20 realizations for each model. \subsection{Varied Embedding Dimension (m)} Usually, for the implementation of SampEn-based algorithms, the embedding dimension and the data length are two parameters that are interdependent and mutually coupled. As a rule of thumb, the data size is restricted to between $10^m$ and $30^m$ \cite{Ref82}. In the real world, recorded signals do not have infinite length and are generally limited by operation time and memory space. Therefore, the embedding dimension is commonly set to $m=2$ or $m=3$ for a signal with 1000 samples \cite{Ref26}. Higher values of the embedding dimension with a shortage of data will cause unstable estimation, as shown for standard MMSE in Figure.\ref{fig:vari_m_MMSE}. \begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[width=3.4in]{figure/vari_m_MMSE.jpg} \label{fig:vari_m_MMSE} } \subfigure[]{ \includegraphics[width=3.4in]{figure/vari_m.jpg} \label{fig:vari_m_veMSE}} \caption{Operation of single-scale a) MMSE, and b) veMSE as a function of the embedding dimension, m.} \label{fig:vari_m} \end{figure} The results of veMSE as a function of the embedding dimension are shown in Figure.\ref{fig:vari_m_veMSE}. Each entropy value is calculated based on signals from two channels. Apart from the independent variable, $m$, the other parameters, the scale factor and the data length, are set to constant values (1 and 1000, respectively). The tolerance varies according to the total variance of the covariance matrix of the processed data sets, as $r\times tr(S)$, and here the tolerance quotient was fixed to $r = 0.15$.
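For reproducibility, the synthetic AR test signals with the coefficients of Table.\ref{Tab:AR_model} can be generated as in the following sketch (the function name, burn-in length and random seed are illustrative choices):

```python
import numpy as np

def ar_signal(coeffs, n, rng, burn_in=100):
    # x[t] = a1*x[t-1] + ... + ap*x[t-p] + w[t], with w ~ N(0, 1);
    # a burn-in segment is discarded so the series reaches its stationary regime
    x = np.zeros(n + burn_in)
    for t in range(len(coeffs), len(x)):
        x[t] = sum(a * x[t - k - 1] for k, a in enumerate(coeffs)) + rng.standard_normal()
    x = x[burn_in:]
    return (x - x.mean()) / x.std()     # standardise to unit standard deviation

rng = np.random.default_rng(42)
models = {"AR(1)": [0.5], "AR(2)": [0.5, 0.25], "AR(3)": [0.5, 0.25, 0.125]}
signals = {name: ar_signal(c, 1000, rng) for name, c in models.items()}
```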
Figure.\ref{fig:vari_m_veMSE}, where the embedding dimension is modified from 1 to 9, shows that, unlike MMSE, signals with a complex correlated structure give a defined entropy value even at high embedding dimensions, e.g. AR(3) at a dimension of 7. Also, signals with higher randomness are more likely to become unstable as the embedding dimension increases. However, even in the case of Gaussian white noise, with the highest randomness, veMSE with an embedding dimension of 5 was able to process the data successfully and stably. On the other hand, traditional Multiscale Sample Entropy methods fail to give a defined value for embedding dimensions higher than 3 under the same conditions \cite{Ref1}. Therefore, from the viewpoint of estimation stability, veMSE exhibits a marked improvement when it comes to complex information in high dimensions. \subsection{Varied Data Length (N)} The data length of the signal is another limitation, in addition to the embedding dimension, when implementing entropy-based calculations, particularly for real-world processes. Amplitude-based entropy algorithms, such as Multiscale Sample Entropy (MSE) and Multiscale Fuzzy Entropy (MFE), require at least 1000 data points to guarantee a consistent estimation \cite{Ref90}. However, for real-world data, as in the analysis of heart rate variability for example, to obtain the required data size for the R-R intervals, a minimum of 5 minutes of the raw electrocardiograph (ECG) signal is needed. In practice, the implementation of such a long recording in a controlled state is hard to achieve. Compared to amplitude-based entropy methods, space distance-based entropy algorithms, such as Cosine Similarity Entropy, impose fewer restrictions on the data length, with a minimum of 700 samples required \cite{Ref1}.
\begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[width=3.4in]{figure/vari_N_MMSE.jpg} \label{fig:vari_N_MMSE} } \subfigure[]{ \includegraphics[width=3.4in]{figure/vari_N_veMSE.jpg} \label{fig:vari_N_veMSE}} \caption{Operation of single-scale a) MMSE, and b) veMSE as a function of the data length, N.} \label{fig:vari_N} \end{figure} Figure~\ref{fig:vari_N_veMSE} illustrates the performance of single-scale veMSE as a function of the data length, on a logarithmic scale. The embedding dimension was set to $m=2$ and the choice of tolerance was the same as before. The values of veMSE for white noise and $1/f$ noise were not defined before $N$ = 40, while for AR(2) veMSE was not defined when $N$ was smaller than 30. The smallest sample sizes for AR(1) and AR(3) were also $N$ = 40. The standard deviation of the entropy estimates narrows gradually as the data length increases, while the error-bar ranges grow from the top panel to the bottom one; that is, a system with more structure (AR(3)) exhibits a larger standard deviation. In addition, the consistency of the estimation is guaranteed, as evidenced by the relative positions of the curves in each graph remaining unchanged as the data length, $N$, increases. More importantly, for the white noise and flicker noise in the top graph, an estimate from only $N$ = 100 samples could successfully separate the complexity degrees of the two signals, while in the bottom graph, once the data length reaches $N$ = 400 samples, there is no intersection region among the entropy values of the different models. This illustrates that the data-length requirement of veMSE is much lower than for other entropy methods; for example, for MMSE in Figure~\ref{fig:vari_N_MMSE}, $N$ = 1300 samples are required to separate the four models.
This property therefore opens the way to revealing complexity information at high scales from limited data lengths, with stable estimation, and thus better balances the embedding dimension against the data size. \subsection{Varied tolerance (r)} \begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[width=3.4in]{figure/vari_r_MMSE.jpg} \label{fig:vari_r_MMSE} } \subfigure[]{ \includegraphics[width=3.4in]{figure/vari_r.jpg} \label{fig:vari_r_veMSE}} \caption{Operation of single-scale a) MMSE, and b) veMSE as a function of the tolerance, r.} \label{fig:vari_r} \end{figure} The tolerance, $r$, can be interpreted as the boundary of the degree of similarity between compared templates. SampEn-based algorithms impose the tolerance as a hard threshold, via a Heaviside function, relative to the standard deviation of the original data. For the multivariate case with multichannel data sets, however, only a single value is needed in the algorithm. As in \cite{Ref24}, the choice of tolerance for veMSE depends on the total variance of the covariance matrix, $S$, of the analysed data sets; the tolerance was therefore set as $r\times tr(S)$. Figure~\ref{fig:vari_r_veMSE} illustrates the single-scale entropy estimate as a function of the tolerance quotient, $r$, varying from 0.1 to 1.5 in increments of 0.1. The data length and embedding dimension were fixed at $N$ = 1000 and $m$ = 2, respectively, to isolate the influence of the tolerance setting. Observe that, for all curves, increasing the tolerance quotient results in a monotonic decrease of the complexity estimate, as is also the case for MMSE in Figure~\ref{fig:vari_r_MMSE}. In veMSE, all models can be clearly distinguished from one another for $r < 1$; the values beyond $r$ = 1 are too small to be differentiated. The gaps between the complexity estimates in both figures narrow markedly beyond $r$ = 0.5, supporting a choice of tolerance quotient below $r$ = 0.5.
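The hard-threshold (Heaviside) template matching underlying all SampEn-based methods discussed above can be illustrated with a minimal univariate sketch; this is our generic illustration under the standard definition, not the authors' multivariate implementation:

```python
import numpy as np

def sampen(x, m=2, r=0.15):
    """Minimal univariate Sample Entropy sketch: the negative log of the
    ratio of template matches at dimension m+1 to those at dimension m,
    under a hard (Heaviside) threshold r*std(x) on the Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def matches(dim):
        t = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        hits = 0
        for i in range(len(t) - 1):
            # Chebyshev distance of template i to all later templates
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            hits += int(np.sum(d <= tol))
        return hits

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

A perfectly regular signal produces many matches at both dimensions and hence an entropy near zero, while white noise loses most matches when the dimension grows and yields a high value.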
\subsection{Varied Scale Factor} As noted by Costa \textit{et al}. in \cite{Ref23}, multiscale analysis via a consecutive coarse-graining procedure is of importance in the processing of signals with hidden correlation structure. Based on the foregoing analysis of the parameters involved in veMSE, the embedding dimension was set to $m$ = 2 and the tolerance was chosen as $r$ = 0.15 multiplied by the total variance of the covariance matrix. To assess the multiscale performance, graphs of the multichannel entropy results are presented for the scale factor varying from $\tau = 1$ to $\tau = 40$. Dual-channel data with $N$ = 3000 samples per model were considered. \begin{figure}[htbp] \centering \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/MMSE_22.jpg} \caption{Operation of Multivariate Multiscale Sample Entropy \cite{Ref24} with the embedding dimension set as [2 2].} \label{fig:MMSE_22} \end{minipage} \hspace{0.1in} \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/MMSE_23.jpg} \caption{Operation of Multivariate Multiscale Sample Entropy \cite{Ref24} with the embedding dimension set as [2 3].} \label{fig:MMSE_23} \end{minipage} \end{figure} \begin{figure}[htbp] \centering \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/veMDE.jpg} \caption{Operation of variational embedding Multiscale Diversity Entropy \cite{Ref96} with the embedding dimension set as 2.} \label{fig:veMDE} \end{minipage} \hspace{0.1in} \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/veMSE_3000.jpg} \caption{Operation of the proposed variational embedding Multiscale Sample Entropy with the embedding dimension set as 2.} \label{fig:veMSE} \end{minipage} \end{figure} To further elucidate the extent of the improvements of veMSE over the Multivariate Multiscale Sample Entropy (MMSE) and the Variational Embedding Multiscale Diversity Entropy (veMDE),
their performances were compared with the proposed veMSE method. With the same data size, and owing to the variational-dimension feature of veMSE, two different settings of the embedding-dimension parameter of MMSE were applied, as shown in Figure~\ref{fig:MMSE_22} and Figure~\ref{fig:MMSE_23}. The performance of veMDE is given in Figure~\ref{fig:veMDE}, with the embedding dimension set to $m$ = 2, while the results of veMSE are presented in Figure~\ref{fig:veMSE}. These figures demonstrate that complex structure hidden in higher dimensions is difficult for MMSE to unveil. The standard deviation over the 20 realizations grows steadily as the scale factor increases. Overall, the dimension-setting pair $[2,2]$ gives the better MMSE performance for the restricted data length considered. However, even under this optimal dimension setting, as in Figure~\ref{fig:MMSE_22}, the complexities of AR(3) (in purple) and AR(2) (in yellow) cannot be distinguished in the multiscale case. As for veMDE in Figure~\ref{fig:veMDE}, Diversity Entropy is built on angular distance and distribution probability \cite{Ref48}. As can be seen in the graph, veMDE yields a consistent estimate for each system, exhibiting short-term correlation. Nevertheless, the long-term structural complexity of each model is not revealed by veMDE. Moreover, owing to this inability to estimate long-term correlation, veMDE cannot distinguish the AR models, which contain highly correlated structure, from white noise at large scales. On the other hand, consistent with the theoretical discussion above, the merits of veMSE can be clearly seen in Figure~\ref{fig:veMSE}. To better specify the improvement, the optimal dimension setting [2 2] of MMSE in Figure~\ref{fig:MMSE_22} was used for the comparison with veMSE.
Observe from the upper graphs for the two types of noise that, although both algorithms were able to distinguish between the two models, the complexity of white noise decreased with increasing scale while that of flicker noise maintained a certain level. The error-bar range for flicker noise under veMSE was much narrower than under MMSE, especially at large scales. Secondly, in the bottom graph, the values for AR(2) and AR(3) (in yellow and purple) are not fully separated by MMSE at high scales, as stated above, while with the same data length the AR models of different orders are successfully separated by veMSE. It is critical to apply the entropy calculation in a multiscale setting, since the long-range correlation of the system is largely ignored in low-scale analysis. Next, it can be observed that only a minor difference exists between the complexity estimates of white noise and AR(1) (blue and red lines in the bottom graph); instead, the enhancement produced by veMSE shows particularly in the analysis of highly correlated, structured signals, that is, systems with higher structural complexity. Overall, based on the comparison of veMSE with MMSE and veMDE over the above five models, veMSE provides a more stable estimation that better captures complex temporal fluctuations. In addition, veMSE is especially suitable for the multiscale analysis of highly correlated signals which exhibit variation of spatial-temporal patterns over a range of scales. \section{Properties of veMSE} We now elaborate on three properties of the proposed veMSE algorithm: noise robustness, directionality, and calculation efficiency. The parameter settings were: data size $N$ = 3000; embedding dimension $m$ = 2; tolerance quotient $r$ = 0.15; and scale factor $\tau$ = 1. A bivariate system was considered in the analysis.
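Before turning to these properties, the coarse-graining step that generates each scale factor $\tau$ in all the multiscale results above can be sketched in its standard non-overlapping form (a generic sketch of the classical procedure, not the authors' implementation):

```python
import numpy as np

def coarse_grain(x, tau):
    """Standard coarse-graining: average non-overlapping windows of
    length tau, shortening an N-sample series to floor(N/tau) samples."""
    x = np.asarray(x, dtype=float)
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)
```

At scale $\tau = 1$ the series is unchanged, while large $\tau$ rapidly shortens the series, which is the origin of the sample-shortage effects discussed throughout.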
The noise-robustness and directionality results for the proposed veMSE are presented in Figure~\ref{fig:veMSE_noise} and Figure~\ref{fig:veMSE_dire}, respectively, and the corresponding performance of MMSE is exhibited in Figure~\ref{fig:MMSE_noise} and Figure~\ref{fig:MMSE_dire}. The computation-time curves of veMSE and MMSE are given in Figure~\ref{fig:time}. \subsection{Noise Robustness} Robustness towards noise and artifacts is of critical importance in any estimation. Since noise from the recording equipment is unavoidable and artifacts are ubiquitous in biological signals (for instance, muscle and electromagnetic artifacts pervade EEG-based monitoring \cite{Ref137}), the noise-robustness property was tested by comparing the complexity estimates for AR models with and without noise. In Figure~\ref{fig:veMSE_noise}, from top to bottom, the first panel presents the curves for uncorrelated white Gaussian noise (WGN), correlated flicker noise ($1/f$ noise), and coloured noise containing both WGN and $1/f$ noise. It is clear that the three systems with different degrees of correlation can be separated by veMSE. Adding white noise enhances the short-term randomness, as shown at the beginning of the top graph, where the yellow line (\texttt{1/f + WGN}) is as high as the blue line (\texttt{WGN}), while the long-term correlation is lower as the scale factor increases (\texttt{1/f} $>$ \texttt{1/f}+\texttt{WGN} $>$ \texttt{WGN}). The first graph thus shows that veMSE gives a correct complexity estimate, in line with the theoretical analysis, for both uncorrelated and correlated noise. The second and third panels of Figure~\ref{fig:veMSE_noise} present the results of veMSE for the AR models with added uncorrelated white noise and correlated flicker noise, for contrast with the outcomes for the pure AR signals in Figure~\ref{fig:veMSE}. The amplitude of the added noise was set to $20\%$ of that of the AR signals.
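The $20\%$ noise mixing described above can be sketched as follows; this is a hypothetical helper, and measuring "amplitude" by the standard deviation is our assumption, as the paper does not state its convention:

```python
import numpy as np

def add_scaled_noise(x, noise, fraction=0.2):
    """Mix `noise` into `x` at `fraction` of the amplitude of x,
    with amplitude measured by the standard deviation (assumed)."""
    x = np.asarray(x, dtype=float)
    noise = np.asarray(noise, dtype=float)
    scale = fraction * np.std(x) / np.std(noise)
    return x + scale * noise
```

The same helper, with `fraction=1.0`, reproduces the equal-amplitude setting used later in the directionality experiments.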
Compared to Figure~\ref{fig:veMSE}, the gaps between the complexity curves of the AR models of different orders decrease in the presence of noise. However, even though the gaps between the models narrow, separation is still achieved up to high scales in the $20\%$-noise scenario. In the case of MMSE, by contrast, noisy AR signals of different complexity cannot be well separated, and the impact of noise is clearly visible in Figure~\ref{fig:MMSE_noise}. Given these points, the complexity estimation of veMSE remains consistent with the noise-free case, a feature not present in the other MSE algorithms, which shows its potential for application to practically recorded data sets. \begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[width=3.4in]{figure/noise_MMSE.jpg} \label{fig:MMSE_noise} } \subfigure[]{ \includegraphics[width=3.4in]{figure/noise_veMSE.jpg} \label{fig:veMSE_noise}} \caption{Illustration of noise robustness of a) standard MMSE and b) the proposed veMSE.} \label{fig:noise} \end{figure} \subsection{Directionality} The next feature of veMSE to be discussed is directionality. A problem in multivariate analysis is that the optimal ordering of the input channels is unknown; without prior knowledge of the optimal channel order, the performance of the estimation may be affected. To this end, the directionality of veMSE was analysed for bivariate systems. Figure~\ref{fig:veMSE_dire} shows two graphs, each containing three pairs of curves. The top graph pairs white noise with AR(1), AR(1) with AR(2), and AR(2) with AR(3), with the input order indicated in the legend (listed first, processed first). As can be seen from this figure, the lower-scale estimate is mainly influenced by the first input signal. For example, the blue line, \texttt{[WGN AR(1)]}, is clearly lower than the red one, \texttt{[AR(1) WGN]}, at the beginning, especially in the single-scale case.
As the scale increases, the two lines approach the same level; a similar trend can be seen for the other two pairs. \begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[width=3.4in]{figure/dire_MMSE.jpg} \label{fig:MMSE_dire} } \subfigure[]{ \includegraphics[width=3.4in]{figure/dire_veMSE.jpg} \label{fig:veMSE_dire}} \caption{Illustration of directionality of a) standard MMSE and b) the proposed veMSE.} \label{fig:dire} \end{figure} In the second graph of Figure~\ref{fig:veMSE_dire}, the analysed signals are AR(1), AR(2), and AR(3), with one of the signals in each pair contaminated with white noise. The legend \texttt{[AR AR+WGN]} refers to cases where the noise-free signal is the first variate, followed by the noisy signal, and vice versa for \texttt{[AR+WGN AR]}. This setting of the inputs was used to simulate real scenarios in multichannel recordings, whereby one channel represents a poor recording contaminated with noise. Considering the noise-robustness property of veMSE, the amplitude of the noise in this subsection was enlarged to the same level as that of the AR signals, in order to demonstrate a clear difference when the input order is modified. As shown in the figure, the inverted input orders are reflected in different starting levels, while the curves then approach each other and end with similar estimates. Therefore, regardless of the input order, the separation of the complexity levels of the AR models is successfully achieved. As observed in Figure~\ref{fig:MMSE_dire}, for MMSE the inverted input has no influence on the resulting curves at small scales, with performance similar to veMSE at larger scales. As demonstrated in Figure~\ref{fig:veMSE_dire}, the reversed order has limited influence on the estimation at high scales, as all the paired curves approach the same three regions, respectively, and in spite of the modified order the three models can still be separated.
However, as the top graph similarly shows, varying the order of the input signals generates entropy values of different magnitudes in small-scale analysis when the input signals have distinct structures. Therefore, the input order needs to be carefully considered in small-scale analysis, whereas this consideration can be ignored in high-scale analysis with identical system measurements. \subsection{Computational Complexity} Admittedly, entropy analysis of multichannel signals is, unavoidably, more time-consuming than univariate estimation. Nevertheless, for potential future applications such as real-time processing, calculation efficiency is one of the critical factors that need to be carefully considered. In this subsection, therefore, the time consumption of veMSE is discussed and compared with the commonly used MMSE, with the results given below. \begin{figure}[htp] \centering \includegraphics[width=7in]{figure/time_consumption.jpg} \caption{Computation time for MMSE and the proposed veMSE with modified parameters for white Gaussian input.} \label{fig:time} \end{figure} Figure~\ref{fig:time} shows the processing time as a function of various parameters when implementing veMSE (blue line) and MMSE (red line). All curves are averages over 10 realizations. Each graph reflects the behaviour for a single modified parameter, with the independent variables, from the left- to the right-hand side, being the scale factor, the data length, the number of channels, and the embedding dimension. All entropy calculations were set as bivariate processing by default, except where the number of channels was varied. When not the independent variable, the data length and embedding dimension were fixed to $N$ = 5000 and $m$ = 2. Overall, the red line, representing the computational time of MMSE, lies above the blue line, that of veMSE, in all scenarios.
In fact, the increase of the scale factor and the decrease of the data length both show that when the data size after the `coarse-graining' procedure falls below 1000, the times needed for the two calculations are similar, as seen in the first graph when the scale factor is higher than 5, and in the second graph when the data length is shorter than 1000. As for the influence of the channel number, demonstrated in the third graph, it is to be expected that MMSE needs more time as the channel number increases: the key step of Sample Entropy is the ratio of the conditional probabilities of similar patterns between the embedding dimension, $m$, and its increment, $m+1$, and in MMSE the number of possible ways to extend to dimension $(m+1)$ when forming the composite delay vector equals the number of channels involved. Therefore, the calculation with the increased embedding dimension is repeated $c$ times in MMSE, where $c$ denotes the number of channels. Finally, regarding the relationship between the embedding dimension and the computation time, the time difference remains roughly constant as the dimension changes in the last graph. In general, with reference to the widely applied MMSE, the time needed by veMSE for the same amount of data is shorter. This shows that the calculation efficiency of veMSE is higher than that of MMSE, which is of high potential interest for implementing real-time monitoring of human states in the future. \section{Performance of veMSE on real signals} \subsection{Wind Dynamics} Having illustrated how veMSE performs on synthetic signals, we now examine its performance on real-world systems. First, wind dynamics were examined. The long-term correlations in wind dynamics have been revealed in previous studies using detrended fluctuation analysis (DFA) \cite{Ref275} and standard Multivariate Multiscale Sample Entropy (MMSE) \cite{Ref24}.
Here, the proposed veMSE was utilized to demonstrate improved characterization of the dynamical complexity of different wind regimes. The wind database was recorded by 3D ultrasonic anemometers at a sampling frequency of 50 Hz, at the Institute of Industrial Science of the University of Tokyo. Three channels were obtained, in the east-west, north-south, and vertical directions, respectively. Wind regimes containing different dynamics were defined as low, medium, and high by the magnitude of the wind speed, as exemplified in Figure~\ref{fig:wind}. The parameters in the veMSE analysis were set to $m$ = 2, $L$ = 1, $r = 0.2\times tr(S)$, and $N$ = 3000. The entropy results were averaged over 10 trials for each channel and displayed as error bars. In addition, shuffled wind data sets, generated from the recorded wind signals by randomly permuting their order, were also tested and compared with the measured wind dynamics; in this way, any correlations within the signals were broken down while the statistical properties were preserved. \begin{figure}[htp] \centering \includegraphics[width=5in]{figure/wind_example.jpg} \caption{Magnitude of the wind signal. The wind segments are defined as low, medium, and high regimes.} \label{fig:wind} \end{figure} Before the comparison between veMSE and MMSE, univariate Multiscale Sample Entropy (MSE) was first applied to give an insight into the complexity of the dynamical system under analysis. As shown in Figure~\ref{fig:MSE_wind}, the randomized data sets exhibit white-noise-like behaviour in terms of structural complexity, as expected since the correlations were destroyed by shuffling. The MSE results assigned the highest complexity to the medium wind regime, which is expected since the medium wind speed holds the fewest constraints.
Yet the complexity of the low wind regime was quantified as higher than that of the high regime, which is counterintuitive, as the high wind regime also contains components of medium speed. Moreover, each wind regime exhibited lower complexity than its randomized series, wrongly suggesting an absence of long-term correlations in the wind data sets. The same trials were analyzed with Multivariate Multiscale Sample Entropy, as shown in Figure~\ref{fig:MMSE_wind}, which revealed the relation of the three wind regimes in the expected way: the medium wind exhibited the highest complexity, followed by the high and low regimes. Furthermore, all three wind regimes exhibited long-range correlations above those of their surrogate data sets. However, overlapping regions can be found across all scales. In Figure~\ref{fig:veMSE_wind}, in contrast, observe that, as desired, the long-range correlations of the different wind speeds were detected by the proposed veMSE, with a more apparent separation, on the basis of a correct analysis, than MSE and MMSE. \begin{figure}[htbp] \centering \subfigure[]{ \includegraphics[width=2.2in]{figure/MSE2.jpg} \label{fig:MSE_wind} } \subfigure[]{ \includegraphics[width=2.2in]{figure/MMSE_wind.jpg} \label{fig:MMSE_wind}} \subfigure[]{ \includegraphics[width=2.2in]{figure/veMSE_wind.jpg} \label{fig:veMSE_wind}} \caption{Results of a) standard single-variate MSE, b) standard multi-channel MMSE, c) proposed veMSE based on wind dynamics and randomized dynamics.} \label{fig:wind_entropy} \end{figure} \subsection{Physiological Database} Next, physiological data sets were used to assess the performance of entropy-based complexity quantification. The physiological signals were taken from the Fantasia database \cite{Ref40}, which includes R-R intervals (RRI) and interbreath intervals (IBI) extracted from electrocardiograph (ECG) and respiration signals, respectively.
The structure of long-term correlations in heart rate variability and respiratory dynamics has been examined by traditional methods such as Detrended Fluctuation Analysis (DFA) \cite{Ref277} and standard MMSE \cite{Ref24}. The Fantasia database was recorded from 20 young subjects (aged 21-34) and 20 elderly subjects (aged 68-85), all rigorously screened, over 120 minutes. Ten subjects were chosen from each group. The ECG and respiration signals were recorded at a sampling frequency of 250 Hz; the RRI and IBI signals were then extracted and aligned. Surrogate series with randomised order were analyzed along with the RRI and IBI signals in each method, to verify the effectiveness of the complexity estimation. The parameters of the entropy analysis were set to $m$ = 2, $L$ = 1, $r = 0.15\times tr(S)$, and $N$ = 4000. First, to give an overall view of the signals, univariate Multiscale Sample Entropy was applied to each channel separately, as exhibited in Figure~\ref{fig:MSE_RRI} and Figure~\ref{fig:MSE_IBI}. As Complexity Loss Theory \cite{Ref254} states, the adaptive capacity of biological systems is degraded by disease and aging; the complexity of the RRI and IBI signals recorded from young people is therefore expected to be higher than that from elderly subjects when considering long-term correlations. With respect to the short-term correlation of RRI, MSE gave a correct estimate at certain low scales. However, MSE on both RRI and IBI was unable to reveal the correct long-range relation between the dynamics of elderly and young individuals.
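The randomly ordered surrogate series used in both the wind and physiological experiments can be generated with a simple shuffling sketch (our illustration, not the authors' code):

```python
import numpy as np

def shuffled_surrogate(x, seed=0):
    """Randomly permute the samples of x: the amplitude distribution is
    preserved exactly, while any temporal correlation is destroyed."""
    rng = np.random.default_rng(seed)
    return rng.permutation(np.asarray(x))
```

Because shuffling preserves the sample values exactly, any drop in estimated complexity relative to the surrogate can be attributed to temporal structure rather than to amplitude statistics.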
\begin{figure}[htbp] \centering \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/MSE_RRI.jpg} \caption{Operation of Univariate Multiscale Sample Entropy based on R-R interval.} \label{fig:MSE_RRI} \end{minipage} \hspace{0.1in} \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/MSE_IBI.jpg} \caption{Operation of Univariate Multiscale Sample Entropy based on interbreath interval (IBI).} \label{fig:MSE_IBI} \end{minipage} \end{figure} Next, as observed in Figure~\ref{fig:MMSE_Fan}, MMSE was applied to the bivariate channels. The resulting error bars indicate higher dynamical complexity in the young participants than in the elderly individuals, which conforms to the theory of complexity loss with aging \cite{Ref254}. Moreover, both physiological data sets showed higher long-range correlation than the randomized surrogate series, as expected; yet, as the scale increases, overlapping regions can be observed, owing to the shortage of samples after the coarse-graining process. In contrast to MMSE, veMSE in Figure~\ref{fig:veMSE_Fan} retained a correct and clear estimation, with higher stability, particularly at large scales. In practical scenarios, the size of the recorded signals is usually limited. Hence, veMSE can better facilitate the analysis of physical and physiological signals when identifying dynamical differences based on nonlinear features.
\begin{figure}[htbp] \centering \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/MMSE_Fan.jpg} \caption{Operation of Multivariate Multiscale Sample Entropy.} \label{fig:MMSE_Fan} \end{minipage} \hspace{0.1in} \begin{minipage}[t]{3.3in} \centering \includegraphics[width=3.1in]{figure/veMSE_Fan.jpg} \caption{Operation of Variational Embedding Multiscale Sample Entropy.} \label{fig:veMSE_Fan} \end{minipage} \end{figure} \section{Conclusion} To conclude, the Variational Embedding Multiscale Sample Entropy (veMSE) method has been introduced for enhanced complexity analysis of real-world data. The results of this paper indicate that veMSE is capable of exhibiting the complex features of a system at large scales and higher embedding dimensions, compared with MMSE and veMDE. Moreover, multivariate analysis via veMSE guarantees an improvement over univariate analysis, regardless of the quality of the signals recorded in the individual channels. veMSE also exhibits strong noise robustness, while its time consumption is lower than that of MMSE under the same conditions, an improvement that becomes particularly apparent as the number of available channels increases. This higher calculation efficiency is of high potential interest for applying entropy analysis in future scenarios that require real-time processing or synchronized monitoring. Nevertheless, it should be noted that the method is restricted to amplitude-based distances; future research may focus on angular distance-based measures combined with the variational embedding dimension methodology. \appendices \section{Algorithm of Multivariate Multiscale Sample Entropy} \begin{algorithm}[htb] \caption{Multivariate Multiscale Sample Entropy } \label{MMSE} \begin{algorithmic} \Statex The steps of the standard Multivariate Multiscale Sample Entropy are given below.
Consider a multivariate data set $\{x_{c,i}\}_{i=1}^N,\,1\leq c\leq P$, of length $N$ with $P$ channels. The manually selected parameters are the embedding dimension ($M = [m_1, m_2,\dots, m_P]$), tolerance ($r$), time delay ($L = [l_1, l_2,\dots, l_P]$), and scale factor ($\tau$): \begin{enumerate} \item Normalize the original multivariate data sets by subtracting the mean and dividing by the standard deviation. \item Perform the coarse-graining process to obtain the scaled multichannel time series $\{y_{c}^{(\tau)}(j)\}_{j=1}^{N/\tau}$ following the equation\newline \begin{equation} y_{c}^{(\tau)}(j) = \frac{1}{\tau}\sum^{j\tau}_{i = (j-1)\tau + 1}{x_{c}(i)},\quad 1\leq j\leq \frac{N}{\tau},\,c = 1,2,\dots,P \nonumber \end{equation} \item Form the Composite Delay Vectors (CDV) $\textbf{Y}_M(i)$ according to $M$ and $L$ in the form \begin{align*} \textbf{Y}_M(i) =[& y_{1,i},y_{1,i+l_1},\dots,y_{1,i+(m_1-1)l_1},\\ & y_{2,i},y_{2,i+l_2},\dots,y_{2,i+(m_2-1)l_2},\\ &\qquad \qquad \qquad \vdots \\ & y_{P,i},y_{P,i+l_P},\dots,y_{P,i+(m_P-1)l_P}] \end{align*} \item Compute the similarity of all pairwise CDVs ($\textbf{Y}_M(i)\,\&\,\textbf{Y}_M(j)$) based on the Chebyshev distance \begin{equation} D(i,j) = \max\{\, |Y_M(i+k)-Y_M(j+k)| \;:\; 0\leq k\leq (\textstyle\sum_{c=1}^{P}{m_c})-1 \,\},\quad i\neq j \nonumber \end{equation} \item Calculate the number of matching patterns, defined as the number of similar pairs $B(i)$ that satisfy the criterion $D(i,j)\leq r$.
\item Compute the local probability $C(i)$ and the global probability $\Phi$ from $B(i)$ by \newline \begin{equation} C(i) = \frac{B(i)}{N-n-1},\quad \Phi=\frac{\sum_{i=1}^{N-n}{{C(i)}}}{N-n},\quad n = \max(M)\cdot\max(L) \nonumber \end{equation} \item Repeat Steps 3-6 with the embedding dimension increased to ($m_c$+1) and obtain the updated global probability as\newline \begin{equation} \Phi^*=\frac{\sum_{i=1}^{N-n}{{C^*(i)}}}{N-n},\quad n = \max(M^*)\cdot\max(L) \nonumber \end{equation} Recall that there are $P$ ways to increase the embedding dimension, and the modified global probability, $\Phi^*$, is the averaged result. \item The Multivariate Multiscale Sample Entropy is defined as\newline \begin{equation} MMSE= -\ln{\left[\frac{\Phi^*}{\Phi}\right]} \nonumber \end{equation} \end{enumerate} \end{algorithmic} \end{algorithm} \section*{Acknowledgment} We wish to thank the Institute of Industrial Science of the University of Tokyo for providing the wind data sets used in this article. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{ieeetr}
\section{Introduction} \subsection{Regular projectively Anosov flows} In \cite{Mi}, Mitsumatsu introduced {\it a bi-contact structure} on a three-dimensional manifold, {\it i.e.}, a pair of mutually transverse positive and negative contact structures. He observed that a three-dimensional Anosov flow naturally induces a bi-contact structure whose intersection as a pair of plane fields is tangent to the flow. In general, the intersection of a bi-contact structure does not define an Anosov flow. In fact, he showed that a bi-contact structure corresponds to {\it a projectively Anosov flow}, which is a generalization of an Anosov flow. In \cite{ET}, Eliashberg and Thurston also studied bi-contact structures and projectively Anosov flows ({\it conformally Anosov flows} in their book) from the viewpoint of confoliation theory. They observed that a bi-contact structure naturally appears in a linear deformation of a foliation into contact structures. A flow $\Phi=\{\Phi^t\}_{t \in \mathbb{R}}$ on a three-dimensional manifold $M$ is called a {\it projectively Anosov} flow (or a $\mbox{$\mathbb{P}$\rm{A}}$ flow) if it has no stationary points and admits a decomposition $TM=E^u +E^s$ by continuous plane fields such that \begin{itemize} \item $E^u(z) \cap E^s(z)=T \Phi(z)$ for any $z \in M$, where $T\Phi$ is the line field tangent to the orbits of $\Phi$, \item $D\Phi^t(E^\sigma(z))=E^\sigma(\Phi^t(z))$ for any $\sigma \in \{u,s\}$, $z \in M$, and $t \in \mathbb{R}$, and \item there exist two constants $C>0$ and $\lambda >1$ such that \begin{displaymath} \label{eq:PA def} \|N\Phi^t|_{(E^s/T\Phi)(z)}\| \cdot \|(N\Phi^t|_{(E^u/T\Phi)(z)})^{-1}\| \leq C\lambda^{-t} \end{displaymath} for any $z \in M$ and $t \geq 0$, where $N\Phi=\{N\Phi^t\}_{t \in \mathbb{R}}$ is the flow on $TM/T\Phi$ induced by $\Phi$. \end{itemize} We call the decomposition $TM=E^u+E^s$ {\it a $\mbox{$\mathbb{P}$\rm{A}}$ splitting}. 
If it satisfies the stronger inequalities \begin{displaymath} \label{eq:Anosov def} \|N\Phi^t|_{(E^s/T\Phi)(z)}\|\leq C\lambda^{-t},\; \|(N\Phi^t|_{(E^u/T\Phi)(z)})^{-1}\| \leq C\lambda^{-t} \end{displaymath} for any $z \in M$ and $t \geq 0$, then the flow is called an {\it Anosov flow} and the splitting is called a {\it weak Anosov splitting} \footnote{It is different from but equivalent to the common definition of an Anosov flow as pointed out by Doering \cite[Proposition 1.1]{Do}.}. We remark that a $\mbox{$\mathbb{P}$\rm{A}}$ splitting is a {\it dominated splitting} on the whole manifold. Such a splitting plays an important role in the modern theory of dynamical systems. See \cite{BDV} for example. It is known that a $\mbox{$\mathbb{P}$\rm{A}}$ splitting is always integrable. However, the splitting is not smooth in general\footnote{A $\mbox{$\mathbb{P}$\rm{A}}$ flow with a smooth $\mbox{$\mathbb{P}$\rm{A}}$ splitting is called {\it regular}. However, we do not use the term `regular' in this sense since we use this term in another context below.}. In fact, any orientable closed three-dimensional manifold admits a smooth $\mbox{$\mathbb{P}$\rm{A}}$ flow, but no $\mbox{$\mathbb{P}$\rm{A}}$ flow on the three-dimensional sphere admits a $C^1$ $\mbox{$\mathbb{P}$\rm{A}}$ splitting. See Theorems 4.2.6 and 4.3.1 in \cite{Mi2}. From the viewpoint of confoliation theory, a $\mbox{$\mathbb{P}$\rm{A}}$ flow with a smooth $\mbox{$\mathbb{P}$\rm{A}}$ splitting corresponds to a linear deformation of a smooth foliation into contact structures whose derivative generates another smooth foliation (see \cite[Proposition 2.2.3]{ET}). In \cite{Gh}, Ghys classified three-dimensional Anosov flows with a smooth weak Anosov splitting.
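The gap between the two sets of inequalities can be seen on linear models: the $\mbox{$\mathbb{P}$\rm{A}}$ condition only compares the two factors of the splitting, so a flow that contracts both $E^s$ and $E^u$, at sufficiently different rates, is projectively Anosov without being Anosov. The following Python sketch (all rates and matrices are illustrative choices, not taken from the text) checks both situations at integer times.

```python
import math

# Model 1: suspension of the cat map A = [[2, 1], [1, 1]] -- Anosov, hence PA.
lam_u = (3.0 + math.sqrt(5.0)) / 2.0   # expanding eigenvalue of A
lam_s = 1.0 / lam_u                    # contracting eigenvalue of A

# Model 2: both directions contracted, at different rates -- PA but NOT Anosov.
mu_s, mu_u = math.exp(-2.0), math.exp(-0.5)

for t in range(1, 8):
    # PA inequality  ||N Phi^t|E^s|| * ||(N Phi^t|E^u)^{-1}|| <= C * lambda^{-t}:
    assert lam_s**t * lam_u**(-t) <= lam_u ** (-2 * t) + 1e-12      # model 1, lambda = lam_u^2
    assert mu_s**t * (1.0 / mu_u**t) <= math.exp(-1.5 * t) + 1e-12  # model 2, lambda = e^{3/2}
# Model 2 fails the Anosov inequalities: ||(N Phi^t|E^u)^{-1}|| = e^{t/2} grows.
assert 1.0 / mu_u**7 > 1.0
```

In the first model the $\mbox{$\mathbb{P}$\rm{A}}$ inequality holds with $C=1$ and $\lambda=\lambda_u^2$; in the second, $\|(N\Phi^t|_{E^u})^{-1}\|=e^{t/2}$ grows, so the second Anosov inequality fails.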
We say an Anosov flow $\Phi$ is {\it algebraic} if there exists a Lie group $G$, its cocompact lattice $\Gamma$, and a one-parameter subgroup $\{a^t\}_{t \in \mathbb{R}}$ of $G$ such that $\Phi$ is a flow on $\Gamma \backslash G$ given by $\Phi^t(\Gamma g)=\Gamma(g \cdot a^t)$. It is known that there are only two choices of $G$ up to covering: \begin{enumerate} \item The universal covering group $\widetilde{\mbox{PSL}}(2,\mathbb{R})$ of the special linear group of $\mathbb{R}^2$. \item The semi-direct product $\mathbb{R} \ltimes \mathbb{R}^2$ associated to a homomorphism $H:\mathbb{R} {\rightarrow} \mbox{GL}(2,\mathbb{R})$ given by $H(t)(x,y)= (e^t x, e^{-t} y)$. \end{enumerate} In the former case, the algebraic Anosov flow can be identified with the geodesic flow on a closed surface with a hyperbolic metric up to finite cover. In the latter case, the algebraic Anosov flow can be identified with the suspension flow of a hyperbolic toral automorphism. \begin{theo} [\cite{Gh}] If an Anosov flow on a closed three-dimensional manifold admits a $C^2$ $\mbox{$\mathbb{P}$\rm{A}}$ splitting, it is smoothly equivalent to an algebraic Anosov flow. \end{theo} It is natural to ask whether any $\mbox{$\mathbb{P}$\rm{A}}$ flow with a smooth $\mbox{$\mathbb{P}$\rm{A}}$ splitting is equivalent to an algebraic model or not. In \cite{No}, Noda showed that if a $\mbox{$\mathbb{P}$\rm{A}}$ flow on a $\mathbb{T}^2$-bundle over $S^1$ admits a smooth $\mbox{$\mathbb{P}$\rm{A}}$ splitting and has an invariant torus, then it must be represented as a finite union of so-called {\it $\mathbb{T}^2 \times I$-models}. Roughly speaking, a $\mathbb{T}^2 \times I$-model is a flow on $\mathbb{T}^2 \times [0,1]$ which is transverse to $\mathbb{T}^2 \times \{z\}$ for any $z \in (0,1)$ and is equivalent to a linear flow on each boundary. See \cite{No} for the precise definition. In a series of papers, he and Tsuboi gave a classification for certain manifolds, which is summarized as follows.
\begin{theo}[\cite{No,No2,NT,Ts}] If a $\mbox{$\mathbb{P}$\rm{A}}$ flow on a Seifert manifold or a $\mathbb{T}^2$-bundle over $S^1$ admits a smooth $\mbox{$\mathbb{P}$\rm{A}}$ splitting, then it is either an Anosov flow or represented as a finite union of $\mathbb{T}^2\times I$-models. \end{theo} The author of this paper also approached the classification from another direction. In \cite{As}, he showed that if a $\mbox{$\mathbb{P}$\rm{A}}$ flow on {\it any} closed three-dimensional manifold admits a smooth $\mbox{$\mathbb{P}$\rm{A}}$ splitting and all periodic orbits are hyperbolic, then it is equivalent to one of the above. In \cite{No2}, Noda conjectured that the above is the complete list of three-dimensional $\mbox{$\mathbb{P}$\rm{A}}$ flows with a smooth $\mbox{$\mathbb{P}$\rm{A}}$ splitting. The goal of this paper is an affirmative solution to the conjecture. \begin{theo} \label{thm:main theorem} If a $\mbox{$\mathbb{P}$\rm{A}}$ flow on a closed, connected, and three-dimensional manifold admits a $C^2$ $\mbox{$\mathbb{P}$\rm{A}}$ splitting, then it is either an Anosov flow or represented as a finite union of $\mathbb{T}^2\times I$-models. \end{theo} The theorem immediately gives a solution to a conjecture posed by Mitsumatsu (Conjecture 4.3.3 in \cite{Mi2}). \begin{coro} Any bi-contact structure associated to a $\mbox{$\mathbb{P}$\rm{A}}$ flow with a smooth $\mbox{$\mathbb{P}$\rm{A}}$ splitting consists of tight contact structures. \end{coro} We give the proof of Theorem \ref{thm:main theorem} in Sections \ref{sec:dichotomy} and \ref{sec:transitive}. In Section \ref{sec:dichotomy}, we show a dichotomy on dynamics of a $\mbox{$\mathbb{P}$\rm{A}}$ flow with a $C^2$ $\mbox{$\mathbb{P}$\rm{A}}$ splitting. Namely, either the flow is topologically transitive or the non-wandering set is the union of invariant tori with rotational dynamics. It is not so hard to see that the latter implies that the flow is represented by $\mathbb{T}^2 \times I$-models.
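For orientation, a $\mathbb{T}^2 \times I$-model can be pictured as a vector field on $\mathbb{T}^2 \times [0,1]$ whose third component vanishes exactly on the two boundary tori. The sketch below uses the illustrative choice $f(z)=z(1-z)$ and constant slopes on the torus factor; it is a caricature of the definition in \cite{No}, not the definition itself.

```python
# Sketch of a T^2 x I-model vector field on T^2 x [0,1]: linear directions on the
# torus factor, and a z-component f(z) = z(1 - z) vanishing only on the boundary.
# (The specific f and the slopes a, b are illustrative choices.)
def f(z):
    return z * (1.0 - z)

def vector_field(x, y, z, a=1.0, b=2.0**0.5):
    return (a, b, f(z))            # (dx/dt, dy/dt, dz/dt) with x, y taken mod 1

zs = [i / 50.0 for i in range(51)]
assert all(f(z) > 0.0 for z in zs if 0.0 < z < 1.0)   # transverse to every interior torus
assert f(0.0) == 0.0 and f(1.0) == 0.0                # linear flow on each boundary torus
```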
In Section \ref{sec:transitive}, we show that the former implies that the flow is Anosov. It is done by proving the hyperbolicity of all periodic orbits. \subsection{Foliations with a tangentially contracting flow} Let $\mathcal{F}$ be a codimension-one foliation on a three-dimensional manifold $M$. We say a flow $\Phi$ is {\it tangentially contracting} with respect to $\mathcal{F}$ if there exist $C>0$ and $\lambda>1$ such that $\|N\Phi^t|_{(T\mathcal{F}/T\Phi)(z)}\| \leq C\lambda^{-t}$ for any $z \in M$ and $t \geq 0$. We apply the method developed in this paper to a classification of foliations which admit a tangentially contracting flow. \begin{theo} \label{thm:TC} Let $M$ be a closed three-dimensional manifold and $\mathcal{F}$ a $C^r$ codimension-one foliation on $M$ with $r \geq 2$. Suppose that $\mathcal{F}$ admits a $C^r$ tangentially contracting flow $\Phi$. Then, $\Phi$ is Anosov and $\mathcal{F}$ is $C^r$-diffeomorphic to the weak stable foliation of an algebraic Anosov flow. \end{theo} We give two examples of group actions which naturally induce a foliation with a tangentially contracting flow. The above theorem implies the rigidity of such actions. \paragraph{Locally free actions of the affine group} Let ${\mbox{GA}}$ be the group of orientation-preserving affine transformations of the real line $\mathbb{R}$. It is generated by two one-parameter subgroups $\{a^t\}_{t \in \mathbb{R}}$ and $\{b^x\}_{x \in \mathbb{R}}$ with a relation $b^x \cdot a^t = a^t \cdot b^{\exp(-t)x}$. We say an action $\rho:M \times {\mbox{GA}} {\rightarrow} M$ on a manifold $M$ is {\it locally free} if the isotropy subgroup $\{g \in {\mbox{GA}} {\;|\;} \rho(p,g)=p\}$ is discrete for any $p \in M$. By $\mathcal{O}(p,\rho)$, we denote the $\rho$-orbit $\{\rho(p,g) {\;|\;} g \in {\mbox{GA}}\}$ of $p \in M$. If $\rho$ is of class $C^1$ and $M$ is closed, then the partition $\mathcal{O}_\rho=\{\mathcal{O}(p,\rho) {\;|\;} p \in M\}$ is a foliation.
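The relation $b^x \cdot a^t = a^t \cdot b^{\exp(-t)x}$ can be checked in the standard faithful representation of ${\mbox{GA}}$ by $2 \times 2$ matrices, where the affine map $u \mapsto e^t u + x$ corresponds to the matrix with rows $(e^t, x)$ and $(0, 1)$. A short numeric sketch (the values of $t$ and $x$ are arbitrary):

```python
import math

def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def a(t):   # u -> e^t * u
    return [[math.exp(t), 0.0], [0.0, 1.0]]

def b(x):   # u -> u + x
    return [[1.0, x], [0.0, 1.0]]

t, x = 0.7, 2.5
lhs = mat_mul(b(x), a(t))                       # b^x . a^t
rhs = mat_mul(a(t), b(math.exp(-t) * x))        # a^t . b^{exp(-t) x}
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```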
The flow $\{\rho(\cdot,a^t)\}_{t \in \mathbb{R}}$ is tangentially contracting with respect to $\mathcal{O}_\rho$. In \cite{Gh2}, Ghys classified $C^r$ locally free actions of ${\mbox{GA}}$ on closed three-dimensional manifolds for $r \geq 2$ up to $C^r$ conjugacy assuming the existence of an invariant volume. Applying Theorem \ref{thm:TC} to $\mathcal{O}_\rho$, we obtain a classification of {\it the orbit foliation} of actions {\it without} the assumption on an invariant volume. \begin{theo} Let $\rho$ be a $C^r$ locally free action of ${\mbox{GA}}$ on a closed three-dimensional manifold with $r \geq 2$. Then, the orbit foliation of $\rho$ is $C^r$ diffeomorphic to the weak stable foliation of an algebraic Anosov flow. \end{theo} In the forthcoming paper \cite{As0}, we will give a classification of the actions of ${\mbox{GA}}$ {\it up to smooth conjugacy}. \paragraph{Actions of Fuchsian groups on the circle} Let $\Gamma_g$ be the fundamental group of the oriented closed surface of genus $g \geq 2$. We identify the circle $S^1$ with the real projective line. This induces a projective structure on the circle. We call an action $\Phi$ of $\Gamma_g$ on the circle {\it projective} if it preserves the projective structure. We say two actions $\Phi_1$ and $\Phi_2$ of $\Gamma_g$ are $C^r$-conjugate if there exists a $C^r$ diffeomorphism $H$ (or a homeomorphism if $r=0$) of $S^1$ such that $H(\Phi_1(\gamma,p))=\Phi_2(\gamma,H(p))$ for any $\gamma \in \Gamma_g$ and $p \in S^1$. In \cite{Gh}, Ghys proved the rigidity of projective actions. \begin{theo} [\cite{Gh}] \label{thm:Fuchsian} Let $\Phi:\Gamma_g \times S^1 {\rightarrow} S^1$ be a $C^r$ action with $r \geq 3$. Suppose that $\Phi$ is $C^0$-conjugate to a projective action. Then, it is $C^r$-conjugate to a projective action. \end{theo} His proof can be divided into two steps. The first step is to show that the suspension foliation of the action admits a tangentially contracting flow.
The second step is to construct a transverse projective structure of the foliation using the flow obtained in the first step. The first step can be done even for $r=2$. The second step can also be done for $r=2$ if the flow obtained in the first step is Anosov, as Ghys mentioned in Section 5 of \cite{Gh}. Hence, Theorem \ref{thm:TC} implies the following improvement of the above theorem. \begin{theo} Theorem \ref{thm:Fuchsian} holds even for $r=2$. \end{theo} It is known that the theorem does not hold for $r=1$. See \cite{Gh}. \paragraph{Acknowledgments} The author would like to thank Professor Takashi Inaba for introducing the author to the stability theory and the level theory of Cantwell and Conlon, which are necessary to prove Lemma \ref{lemma:semi-proper}. This paper was partially written when the author stayed at Unit\'e de Math\'ematiques Pures et Appliqu\'ees, \'Ecole Normale Sup\'erieure de Lyon. He is grateful to the members of UMPA, especially to Professor \'Etienne Ghys for their warm hospitality. He would also like to thank an anonymous referee for many comments that improved the paper. \section{A dichotomy on dynamics} \label{sec:dichotomy} \label{sec:section2} In the rest of the article, we fix an orientable, closed, connected, and three-dimensional manifold $M$. Let $\Phi$ be a $C^2$ $\mbox{$\mathbb{P}$\rm{A}}$ flow on $M$ with a continuous $\mbox{$\mathbb{P}$\rm{A}}$ splitting $TM=E^u + E^s$. For a compact $\Phi$-invariant set $\Lambda$, we define {\it the stable set} $W^s(\Lambda)$ and {\it the unstable set} $W^u(\Lambda)$ by \begin{eqnarray*} W^s(\Lambda) &=& \left\{z \in M {\;|\;} \lim_{t {\rightarrow} + \infty}d(\Phi^t(z),\Lambda) = 0\right\} \end{eqnarray*} and $W^u(\Lambda)=W^s(\Lambda;\Phi^{-1})$, where $\Phi^{-1}$ is the time-reverse of $\Phi$. We say a $\Phi$-invariant torus $T$ is {\it normally attracting} if there exist $C>0$ and $\lambda>1$ such that $\|N\Phi^t|_{(E^s/T\Phi)(z)}\| \leq C\lambda^{-t}$ for any $z \in T$ and $t \geq 0$.
By the existence of a $\mbox{$\mathbb{P}$\rm{A}}$ splitting, our definition coincides with the usual definition. It is known that if $T$ is a normally attracting invariant torus, then $W^s(T)$ is an open neighborhood of $T$ and is diffeomorphic to $\mathbb{T}^2 \times \mathbb{R}$. We say an invariant torus $T'$ is {\it normally repelling} if $T'$ is normally attracting with respect to the time-reverse $\Phi^{-1}$. Let $\Omega_*$ be the union of invariant embedded tori on which the restriction of $\Phi$ is topologically equivalent to a linear flow. For $\rho \in \{u,s\}$, let $\Omega^\rho_*$ be the union of tori in $\Omega_*$ tangent to $E^\rho$. By the linearity of the flow on tori and the domination property of the splitting, we have $\Omega_*=\Omega^u_* \cup \Omega^s_*$, $\Omega^u_*$ is a union of normally attracting $\Phi$-invariant tori, and $\Omega^s_*$ is a union of normally repelling $\Phi$-invariant tori. Exactly the same argument as in Proposition 3.9 of \cite{AR} shows that $\Omega_*$ consists of finitely many tori. The aim of this section is to show the following dichotomy. \begin{prop} \label{prop:dichotomy} If $\Phi$ admits a $C^2$ $\mbox{$\mathbb{P}$\rm{A}}$ splitting, then either \begin{enumerate} \item $\Phi$ is topologically transitive, or \item $M=W^s(\Omega^u_*) \cup \Omega^s_* =W^u(\Omega^s_*) \cup \Omega^u_*$. \end{enumerate} \end{prop} The latter implies that $\Phi$ is equivalent to one of the known models. \begin{prop} \label{prop:T2*I} In the latter case of Proposition \ref{prop:dichotomy}, $\Phi$ is represented by a finite union of $\mathbb{T}^2 \times I$-models. \end{prop} \begin{proof} Fix connected components $T_0$ of $\Omega^s_*$ and $U$ of $W^u(T_0) \- T_0$. Take an embedding $\psi:\mathbb{T}^2 \times [0,1] {\rightarrow} M$ such that $\psi(\mathbb{T}^2 \times 0)=T_0$, $\mbox{\rm{Im} } \psi \subset U \cup T_0$, and $\psi(\mathbb{T}^2 \times 1)$ is transverse to the flow. Put $T'=\psi(\mathbb{T}^2 \times 1)$.
Since $W^s(\Omega^u_*)$ is a disjoint union of the stable sets of connected components of $\Omega^u_*$, we have $T' \subset W^s(T_1)$ for some connected component $T_1$ of $\Omega^u_*$. Let $U_1$ be the connected component of $W^s(T_1) \- T_1$ which contains $T'$. Since $T_1$ is normally attracting, there exists an embedding $\psi_1:\mathbb{T}^2 \times [0,1] {\rightarrow} M$ such that $\mbox{\rm{Im} } \psi_1 \subset T_1 \cup (U_1 \- \mbox{\rm{Im} } \psi)$, $\psi_1(\mathbb{T}^2 \times 0)=T_1$, $\psi_1(\mathbb{T}^2 \times 1)$ is transverse to the flow, and each $\Phi$-orbit contained in $U_1$ intersects with $\psi_1(\mathbb{T}^2 \times 1)$ exactly once. Then, we can take a smooth positive function $\tau$ on $T'$ such that $\Phi^{\tau(z)}(z) \in \psi_1(\mathbb{T}^2 \times 1)$ for any $z \in T'$. It implies that \begin{equation*} \cl{U}=\cl{U_1} =\mbox{\rm{Im} } \psi \cup \mbox{\rm{Im} } \psi_1 \cup \{\Phi^t(z) {\;|\;} z \in T',t \in [0,\tau(z)]\} \end{equation*} is diffeomorphic to $\mathbb{T}^2 \times [0,1]$ and its boundary is $T_0 \cup T_1$. Inductively, we obtain sequences $(T_n)_{n \geq 0}$ and $(B_n)_{n \geq 0}$ of subsets of $M$ such that $T_n$ is a connected component of $\Omega_*$, $B_n$ is diffeomorphic to $\mathbb{T}^2 \times [0,1]$, $\partial B_n=T_n \cup T_{n+1}$, and $B_n \cap B_{n+1}=T_{n+1}$ for any $n$. Since $\Omega_*$ contains only finitely many tori, we have $T_n=T_m$ for some $n \neq m$. It implies that $M$ is a $\mathbb{T}^2$-bundle over $S^1$. By Noda's classification \cite{No}, $\Phi$ is represented by a finite union of $\mathbb{T}^2 \times I$-models. \end{proof} The rest of this section is devoted to the proof of Proposition \ref{prop:dichotomy}. In Subsection \ref{sec:hyp-like}, we show the existence of the stable and unstable manifolds of invariant sets without irregular periodic points. In Subsection \ref{sec:reduced dichotomy}, we prove the dichotomy when all periodic points outside $\Omega_*$ are regular.
In both of these subsections, we assume only a weaker condition on the regularity of the $\mbox{$\mathbb{P}$\rm{A}}$ splitting since the $C^2$ regularity of the splitting is too strong when we apply the results to a foliation with a tangentially contracting flow. Lastly, in Subsection \ref{sec:local dynamics}, we prove that all periodic points are regular for a $\mbox{$\mathbb{P}$\rm{A}}$ flow with a $C^2$ $\mbox{$\mathbb{P}$\rm{A}}$ splitting. \subsection{Hyperbolic-like behavior} \label{sec:hyp-like} Let $\Phi$ be a $C^2$ $\mbox{$\mathbb{P}$\rm{A}}$ flow and $TM=E^u + E^s$ its $\mbox{$\mathbb{P}$\rm{A}}$ splitting. In this subsection, we do not assume that the splitting is of class $C^2$. For $z \in M$, we define {\it the orbit} $\mathcal{O}(z)$, {\it the $\alpha$-limit set} $\alpha(z)$, and {\it the $\omega$-limit set} $\omega(z)$ by \begin{align*} \mathcal{O}(z) & =\{\Phi^t(z) {\;|\;} t \in \mathbb{R}\},\\ \alpha(z) & =\bigcap_{T>0}\cl{\{\Phi^t(z) {\;|\;} t \leq -T\}},\\ \omega(z) & =\bigcap_{T>0}\cl{\{\Phi^t(z) {\;|\;} t \geq T\}}. \end{align*} We say a point $z \in M$ is {\it periodic} if there exists $T>0$ such that $\Phi^T(z)=z$. The minimum of $\{t>0 {\;|\;} \Phi^t(z)=z\}$ is called {\it the period} of $z$. We denote the set of periodic points of $\Phi$ by $\mbox{\rm{Per}}(\Phi)$, and the non-wandering set of $\Phi$ by $\Omega(\Phi)$. We say a periodic point $z_0$ is {\it $s$-regular} when there exists an embedded closed annulus $A$ tangent to $E^s$ such that $\Phi^t(A) \subset \mbox{\rm{Int}}\; A$ for any $t>0$ and $\bigcap_{t>0} \Phi^t(A)=\mathcal{O}(z_0)$. Similarly, we say a periodic point $z_0$ is {\it $u$-regular} when there exists an embedded closed annulus $A$ tangent to $E^u$ such that $\Phi^{-t}(A) \subset \mbox{\rm{Int}}\; A$ for any $t>0$ and $\bigcap_{t>0} \Phi^{-t}(A)=\mathcal{O}(z_0)$. We also say $z_0$ is {\it $\rho$-irregular} for $\rho \in \{u,s\}$ if $z_0$ is not $\rho$-regular.
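The limit sets just defined are easiest to see on a toy example: for the planar flow $\dot r = r(1-r)$, $\dot\theta = 1$ (which, unlike the flows considered in this paper, has a stationary point at the origin), the $\omega$-limit set of every nonzero point is the periodic orbit $r=1$. A crude Euler-integration sketch (step size and initial point are illustrative):

```python
# Planar flow in polar coordinates: dr/dt = r(1 - r), dtheta/dt = 1.
# For any z != 0 the omega-limit set is the unit circle, a periodic orbit.
def step(r, theta, h=1e-3):
    return r + h * r * (1.0 - r), theta + h

r, theta = 0.05, 0.0
for _ in range(20000):            # integrate up to time t = 20
    r, theta = step(r, theta)
assert abs(r - 1.0) < 1e-3        # the forward orbit accumulates on the circle r = 1
```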
Let $\mbox{\rm{Per}}^\rho_{{\mbox{\rm{\scriptsize irr}}}}(\Phi)$ be the set of $\rho$-irregular periodic points. Put $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}(\Phi)=\mbox{\rm{Per}}^s_{\mbox{\rm{\scriptsize irr}}}(\Phi) \cup \mbox{\rm{Per}}^u_{\mbox{\rm{\scriptsize irr}}}(\Phi)$. The aim of this subsection is to show the existence of the unstable manifolds for a compact invariant set which does not intersect with $\Omega_* \cup \cl{\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}(\Phi)}$. Fix a continuous family $\{\phi_z\}_{z \in M}$ of $C^2$ embeddings of $[-1,1]^2$ into $M$ such that $\mbox{\rm{Im} } \phi_z$ is transverse to $T\Phi$ and $\phi_z(0,0)=z$ for any $z \in M$. We call $\{\phi_z\}_{z \in M}$ {\it a family of local cross sections}. Let $r_z^t$ be the holonomy map of the orbit foliation of $\Phi$ between $\mbox{\rm{Im} } \phi_z$ and $\mbox{\rm{Im} } \phi_{\Phi^t(z)}$ along the path $\{\Phi^{t'}(z) {\;|\;} t' \in [0,t]\}$. We call $\{r_z^t\}_{(z,t) \in M \times \mathbb{R}}$ {\it the family of local returns} associated to $\{\phi_z\}_{z \in M}$. For $\Delta>0$, put $D_\Delta(z)=\{z' \in \mbox{\rm{Im} } \phi_z {\;|\;} d(z,z') \leq \Delta\}$, where $d(z,z')$ is the distance between $z,z' \in M$. By the continuity of the family $\{\phi_z\}_{z \in M}$, there exists $\Delta_\phi>0$ such that $r_z^t$ is well-defined on $D_{\Delta_\phi}(z)$ for any $z \in M$ and $t \in [-1,1]$. The splitting $TM/T\Phi=(E^s/T\Phi) \oplus (E^u/T\Phi)$ defines projections $\pi^s$ and $\pi^u$ from $TM$ to $E^s/T\Phi$ and $E^u/T\Phi$ respectively. For $\alpha>0$, we say an embedded interval $I$ in $M$ is {\it an $(E^s,\alpha)$-transversal} if $\|\pi^s(v)\| \leq \alpha\|\pi^u(v)\|$ for any $z \in I$ and $v \in T_z I$. Similarly, we say an embedded interval $I$ in $M$ is {\it an $(E^u,\alpha)$-transversal} if $\|\pi^u(v)\| \leq \alpha\|\pi^s(v)\|$ for any $z \in I$ and $v \in T_z I$.
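In a local chart in which $E^s/T\Phi$ and $E^u/T\Phi$ are the two coordinate axes, being an $(E^s,\alpha)$-transversal is a cone condition on tangent vectors. A minimal sketch (the chart and the sampled tangent vectors are illustrative assumptions):

```python
# Model splitting in a local chart: E^s/TPhi = span(e1), E^u/TPhi = span(e2),
# so pi^s(v) = v[0] and pi^u(v) = v[1].
def is_Es_transversal(tangents, alpha):
    """Check ||pi^s(v)|| <= alpha * ||pi^u(v)|| for sampled tangent vectors."""
    return all(abs(v[0]) <= alpha * abs(v[1]) for v in tangents)

# A curve whose tangents stay in the cone around E^u is an (E^s, 1)-transversal:
steep = [(0.3, 1.0), (-0.5, 1.0), (0.9, 1.0)]
flat = [(1.0, 0.4)]
assert is_Es_transversal(steep, 1.0)
assert not is_Es_transversal(flat, 1.0)
```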
For $\Delta>0$, an interval $I$ is called {\it a $(\Delta,E^s)$-interval} if it is an $(E^s,1)$-transversal and $r_z^t(I) \subset D_\Delta(\Phi^t(z))$ for any $t \geq 0$. Similarly, an interval $I$ is called {\it a $(\Delta,E^u)$-interval} if it is an $(E^u,1)$-transversal and $r_z^{-t}(I) \subset D_\Delta(\Phi^{-t}(z))$ for any $t \geq 0$. The next lemma is a variant of ``the Denjoy property'', which was proved by Arroyo and Rodrigues-Hertz in \cite{AR} for flows without non-hyperbolic periodic points. \begin{lemm} \label{lemma:Denjoy} There exists $\Delta_0>0$ such that \begin{enumerate} \item the interior of any $(\Delta_0,E^s)$-interval contains a point $z$ such that $\omega(z)$ is a periodic orbit in $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}^u(\Phi)$ or is a torus in $\Omega_*^u$, and \item the interior of any $(\Delta_0,E^u)$-interval contains a point $z$ such that $\alpha(z)$ is a periodic orbit in $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}^s(\Phi)$ or is a torus in $\Omega_*^s$. \end{enumerate} \end{lemm} \begin{proof} We will show the former assertion since the latter can be obtained in the same way. In the proof of Proposition 4.2 in \cite{AR}, Arroyo and Rodrigues-Hertz used the hyperbolicity of all periodic points only in the proofs of Lemmas 4.3 and 4.4, and the other part of the proof works even if there are non-hyperbolic periodic points. Hence, it is sufficient to see how to recover the proof of Lemmas 4.3 and 4.4 for our case. Fix a $(\delta,E^s)$-interval $I$ which contains $z$. Put $I_t=r_z^t(I)$ for $t \geq 0$. Let $\{J_s\}_{s \geq 0}$ be the family of $(\Delta,E^s)$-intervals in the proof of Proposition 4.2 of \cite{AR}, {\it i.e.}, the maximal one among families of $(\Delta,E^s)$-intervals satisfying $I_s \subset J_s$ and $r_z^{t-s}(J_s) \subset J_t$ for any $t \geq s \geq 0$.
As shown in Lemma 4.1 of \cite{AR} (its proof does not require the hyperbolicity of periodic orbits), there is a uniform lower bound on the length of the local stable manifold of each point of $J_s$. Let $J_s^\epsilon$ be the union of the local stable manifolds of the points of $J_s$. Lemma 4.3 of \cite{AR} deals with the case that $r_z^t(J_s^\epsilon) \subset J_s^\epsilon$ for some $z \in M$ and $t>0$. In this case, exactly the same argument as in Lemma 4.3 of \cite{AR} shows that the forward orbit of some point in $\mbox{\rm{Int}}\; J_0$ converges to a $u$-irregular periodic orbit. Lemma 4.4 of \cite{AR} deals with the case that $\limsup|J_s|>0$ and $r_z^t(J_s^\epsilon) \cap \mbox{\rm{Per}}(\Phi) \neq \emptyset$ for some $s$ and $t>0$. In this case, we need to show the inclination lemma for a $(\Delta,E^s)$-interval $J_s$ and a periodic point which is sufficiently close to $J_s$. However, it is an easy consequence of the existence of the local stable manifolds with uniform length (Lemma 4.1 in \cite{AR}). \end{proof} Let $\Lambda$ be a compact $\Phi$-invariant set such that $\Lambda \cap (\Omega_* \cup \cl{\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}(\Phi)})=\emptyset$. In the rest of the subsection, we will show that the stable and unstable manifolds are well-defined for any point of $\Lambda$. For a subset $S$ of $M$ and $\delta>0$, we denote {\it the $\delta$-neighborhood} $\{p \in M {\;|\;} \inf_{q \in S} d(p,q) \leq \delta\}$ by ${\mathcal N}_\delta(S)$. Fix $0<\Delta_1<\Delta_0$ such that ${\mathcal N}_{\Delta_1}(\Lambda) \cap \left(\Omega_* \cup \cl{\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}(\Phi)}\right) = \emptyset$.
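For a finite sample of points, membership in the $\delta$-neighborhood ${\mathcal N}_\delta(S)$ is simply a minimum of distances. A one-dimensional sketch mirroring the definition (the set $S$ and the metric are illustrative):

```python
def dist_to_set(p, S, d):
    """inf_{q in S} d(p, q) for a finite sample S of the set."""
    return min(d(p, q) for q in S)

def in_neighborhood(p, S, delta, d):
    """Membership test for the delta-neighborhood N_delta(S)."""
    return dist_to_set(p, S, d) <= delta

d = lambda a, b: abs(a - b)        # metric on the real line, for illustration
S = [0.0, 1.0]
assert in_neighborhood(0.1, S, 0.25, d)
assert not in_neighborhood(0.5, S, 0.25, d)
```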
By the center-unstable manifold theorem, there exist constants $0<\delta_1<\delta_2<\Delta_1$ and a continuous family $\{W^{cu}_{loc}(z)\}_{z \in M}$ of $C^2$ $(E^s,1)$-transversals such that \begin{itemize} \item $z \in W^{cu}_{loc}(z) \subset D_{\delta_2}(z)$ and $\partial W^{cu}_{loc}(z) \subset \partial D_{\delta_2}(z)$ for any $z \in M$, \item $W^{cu}_\delta(z)=W^{cu}_{loc}(z) \cap D_\delta(z)$ is an interval for any $0 <\delta<\delta_2$ and $z \in M$, and \item $r_z^{-t}(W^{cu}_{\delta_1}(z)) \subset W^{cu}_{\delta_2}(\Phi^{-t}(z))$ for any $0 \leq t \leq 1$ and $z \in M$. \end{itemize} \begin{prop} \label{prop:stable manifolds 1} For any given $\delta>0$, there exists $\epsilon_1>0$ such that \begin{enumerate} \item $r_z^{-t}(W^{cu}_{\epsilon_1}(z)) \subset W^{cu}_{\delta}(\Phi^{-t}(z))$ for any $z \in \Lambda$ and $t \geq 0$, and \item $\lim_{t {\rightarrow} \infty} \left(\sup_{z \in M}|r_z^{-t}(W^{cu}_{\epsilon_1}(z))|\right)=0$. \end{enumerate} \end{prop} \begin{proof} Without loss of generality, we may assume $\delta<\delta_1$. If the proposition does not hold, then there exist sequences $\{\epsilon_k>0\}_{k \geq 1}$, $\{t_k >0\}_{k \geq 1}$, and $\{z_k \in \Lambda\}_{k \geq 1}$ such that \begin{itemize} \item $\lim_{k {\rightarrow} \infty}\epsilon_k=0$, \item $r_{z_k}^{-t}(W^{cu}_{\epsilon_k}(z_k)) \subset W^{cu}_{\delta}(\Phi^{-t}(z_k))$ for any $k \geq 1$ and $0 \leq t \leq t_k$, and \item $\limsup_{k {\rightarrow} \infty}|r_{z_k}^{-t_k}(W^{cu}_{\epsilon_k}(z_k))|>0$. \end{itemize} By the first and the last items, we have $\lim_{k {\rightarrow} \infty} t_k=\infty$. By taking subsequences, we may assume that $\Phi^{-t_k}(z_k)$ converges to a point $z_*$ of $\Lambda$ and $r_{z_k}^{-t_k}(W^{cu}_{\epsilon_k}(z_k))$ converges to an interval $I \subset W^{cu}_\delta(z_*)$ with positive length. Then, $I$ is a $(\delta,E^s)$-interval.
Since $z_*$ is a point of $I \cap \Lambda$, we have $\omega(z) \subset {\mathcal N}_\delta(\Lambda)$ for any $z \in I$, and hence, $\omega(z) \cap \left(\Omega_*(\Phi) \cup \cl{\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}(\Phi)}\right) =\emptyset$. However, it contradicts Lemma \ref{lemma:Denjoy}. \end{proof} \begin{coro} \label{cor:W-cu tangent} For any sufficiently small $\epsilon>0$, $W^{cu}_{\epsilon}(z)$ is tangent to $E^u$ for any $z \in \Lambda$. \end{coro} \begin{proof} Fix $\delta>0$ and take $\epsilon_1>0$ in Proposition \ref{prop:stable manifolds 1}. For $z \in \Lambda$, $z' \in W^{cu}_{\epsilon_1}(z)$, and $t \geq 0$, let $\alpha(z,z',t)$ be the angle between $T_{r_z^{-t}(z')}r_z^{-t}(W^{cu}_{\epsilon_1}(z))$ and $E^u(r_z^{-t}(z'))$. By the domination property of the $\mbox{$\mathbb{P}$\rm{A}}$ splitting, there exist $C>0$ and $\lambda>1$ such that $\alpha(z,z',t) \geq C\lambda^t\alpha(z,z',0)$. On the other hand, the continuity of the family $\{W^{cu}_{\epsilon_1}(z)\}_{z \in M}$ and the proposition implies that $\alpha$ is bounded as a function of $z,z'$ and $t$. Hence, we have $\alpha(z,z',0)=0$ for any $z$ and $z'$. \end{proof} \begin{lemm} \label{lemma:E-u interval} There exists $\Delta_2 \in (0,\Delta_1/2)$ which satisfies the following property: If an $(E^u,1)$-transversal $I$ is contained in $D_{\Delta_2}(z)$ for some $z \in M$ and satisfies $r_z^{-t}(\partial I) \subset D_{\Delta_2}(\Phi^{-t}(z))$ for any $t \geq 0$, then it is a $(\Delta_1/2,E^u)$-interval. \end{lemm} \begin{proof} Since $TM=E^s +E^u$ is a $\mbox{$\mathbb{P}$\rm{A}}$ splitting, there exists $\alpha>0$ such that if $I$ is an $(E^u,1)$-transversal and $r_z^{-t}(I)$ is well-defined for some $z \in M$ and $t \geq 0$, then $r_z^{-t}(I)$ is an $(E^u,\alpha)$-transversal. 
By the uniform transversality of $(E^u,\alpha)$-transversals to $E^u$, we can take $\delta \in (0,\Delta_1/2)$ and $\beta>1$ such that $|J| \leq \beta \cdot \mbox{\rm{diam}}(\partial J)$ for any $z \in M$ and any $(E^u,\alpha)$-transversal $J$ with $J \subset D_{\delta}(z)$. Put $\Delta_2=\delta/(4\beta)$. We remark that $\Delta_2<\delta/4<\Delta_1/8$. Let $I$ be an $(E^u,1)$-transversal contained in $D_{\Delta_2}(z)$ for some $z \in M$ such that $r_z^{-t}(\partial I) \subset D_{\Delta_2}(\Phi^{-t}(z))$ for any $t \geq 0$. It is sufficient to show that \begin{equation*} t_0=\sup\{t_1 \geq 0 {\;|\;} r_z^{-t}(I) \subset D_{\delta}(\Phi^{-t}(z)), |r_z^{-t}(I)| \leq \delta \mbox{ for any } t \in [0,t_1]\} \end{equation*} is infinite. Suppose that $t_0$ is a finite number. Since $r_z^{-t}(I)$ is an $(E^u,\alpha)$-transversal and $r_z^{-t}(\partial I) \subset D_{\Delta_2}(\Phi^{-t}(z))$ for $0 \leq t \leq t_0$, we have \begin{equation} \label{eqn:E-u interval 1} |r_z^{-{t_0}}(I)| \leq \beta \cdot \mbox{\rm{diam}} (r_z^{-{t_0}}(\partial I)) \leq 2 \beta \Delta_2 = \delta/2. \end{equation} It implies $|r_z^{-t}(I)| <\delta$ for any $t$ sufficiently close to $t_0$. By the inclusion $r_z^{-t_0}(\partial I) \subset D_{\Delta_2}(\Phi^{-t_0}(z))$ again, the inequality (\ref{eqn:E-u interval 1}) implies \begin{equation} \label{eqn:E-u interval 2} r_z^{-t_0} (I) \subset D_{(\delta/2)+\Delta_2}(\Phi^{-{t_0}}(z)) \subset D_{(3/4)\delta}(\Phi^{-{t_0}}(z)). \end{equation} Hence, $r_z^{-t}(I) \subset D_{\delta}(\Phi^{-t}(z))$ for any $t$ sufficiently close to $t_0$. It contradicts the choice of $t_0$. \end{proof} \begin{prop} \label{prop:stable manifolds 2} There exists $\epsilon_2>0$ such that \begin{equation*} \bigcap_{t \geq 0}r_z^t(D_\epsilon(\Phi^{-t}(z))) \subset W^{cu}_\epsilon(z) \end{equation*} for any $z \in \Lambda$ and $0<\epsilon<\epsilon_2$.
\end{prop} \begin{proof} Let $\epsilon_1>0$ be the constant obtained by applying Proposition \ref{prop:stable manifolds 1} for $\delta=\Delta_2$. By the uniform transversality of $(\Delta_1,E^s)$-intervals to $E^s$, we can take a constant $\epsilon_2 \in (0,\epsilon_1)$ which satisfies the following property: For any $z \in \Lambda$ and $z' \in D_{\epsilon_2}(z) \- W^{cu}_{\epsilon_1}(z)$, there exists an $(E^u,1)$-transversal $J$ in $D_{\Delta_2}(z)$ and $z_J \in J \cap W^{cu}_{\epsilon_1}(z)$ such that $\partial J=\{z',z_J\}$. Suppose that the proposition does not hold. Then, there exist $z \in \Lambda$ and $z' \in \bigcap_{t \geq 0}r_z^t(D_{\epsilon_2}(\Phi^{-t}(z))) \setminus W^{cu}_{\epsilon_2}(z)$. Take an $(E^u,1)$-transversal $J$ in $D_{\Delta_2}(z)$ and $z_J \in J \cap W^{cu}_{\epsilon_1}(z)$ such that $\partial J=\{z',z_J\}$. Since both $r_z^{-t}(z')$ and $r_z^{-t}(z_J)$ are contained in $D_{\Delta_2}(\Phi^{-t}(z))$ for any $t \geq 0$, $J$ is a $(\Delta_1/2,E^u)$-interval by Lemma \ref{lemma:E-u interval}. Hence, we have \begin{equation*} r_z^{-t}(J) \subset D_{(\Delta_1/2)+{\Delta_2}}(\Phi^{-t}(z)) \subset D_{\Delta_1}(\Phi^{-t}(z)). \end{equation*} By Lemma \ref{lemma:Denjoy}, the set $\Sigma=\cl{\bigcup_{t \geq 0}r_z^{-t}(J)}$ intersects with $\Omega_*^s \cup \mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}^s(\Phi)$. However, $\Sigma$ is contained in the $\Delta_1$-neighborhood of $\Lambda$. It contradicts the choice of $\Delta_1$. \end{proof} We define a family $\{V^u(z)\}_{z \in \Lambda}$ of subsets of $M$ by \begin{equation*} V^u(z)=\bigcup_{t>0}\bigcup_{z' \in \mathcal{O}(z)} \Phi^t(W^{cu}_\epsilon(z')). \end{equation*} It is a continuous family of $C^2$ open immersed surfaces tangent to $E^u$ by Corollary \ref{cor:W-cu tangent}. By Proposition \ref{prop:stable manifolds 1}, $V^u(z)$ does not depend on the choice of sufficiently small $\epsilon>0$.
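The mechanism behind Corollary \ref{cor:W-cu tangent}, namely that a direction whose angle to $E^u$ stays bounded under an angle-expanding dynamics must start at angle zero, is visible already for a single hyperbolic matrix: iteration aligns any vector with a nonzero unstable component to the unstable eigendirection at an exponential rate. A numeric sketch (the matrix is an illustrative choice):

```python
import math

A = [[2.0, 1.0], [1.0, 1.0]]          # a hyperbolic matrix
lam_u = (3.0 + math.sqrt(5.0)) / 2.0  # its expanding eigenvalue
e_u = (1.0, lam_u - 2.0)              # unstable eigendirection

def apply(M, v):
    return (M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1])

def angle(v, w):
    dot = v[0]*w[0] + v[1]*w[1]
    return math.acos(max(-1.0, min(1.0, dot / (math.hypot(*v) * math.hypot(*w)))))

v = (1.0, 0.0)
angles = []
for _ in range(12):
    angles.append(angle(v, e_u))
    v = apply(A, v)
assert angles[-1] < 1e-4 and angles[-1] < angles[0]   # exponential alignment with e_u
```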
It is easy to see that \begin{itemize} \item $V^u(z_0)$ is diffeomorphic to $S^1 \times \mathbb{R}$ for any periodic point $z_0 \in \Lambda$, and \item $V^u(z_1) \cap V^u(z_2) \neq \emptyset$ for $z_1,z_2 \in \Lambda$ implies $V^u(z_1)=V^u(z_2)$. \end{itemize} Similar to $\{W^{cu}_\delta(z)\}_{z \in \Lambda}$, we can take a family $\{W^{cs}_\delta(z)\}_{z \in \Lambda}$ of $(E^u,1)$-transversals. We define a family $\{V^s(z)\}_{z \in \Lambda}$ by \begin{displaymath} V^s(z)=\bigcup_{t>0}\bigcup_{z' \in \mathcal{O}(z)} \Phi^{-t}(W^{cs}_\epsilon(z')) \end{displaymath} for any small $\epsilon>0$. It has analogous properties to $\{V^u(z)\}_{z \in \Lambda}$. For $\Phi$-invariant compact subsets $\Lambda_1$ and $\Lambda_2$ of $\Lambda$, we write $\Lambda_1 \preceq \Lambda_2$ if $W^s(\Lambda_1) \cap W^u(\Lambda_2) \neq \emptyset$. \begin{prop} \label{prop:spectral decomposition} Suppose that $\Lambda$ is locally maximal, {\it i.e.}, there exists a neighborhood $U$ of $\Lambda$ such that $\Lambda=\bigcap_{t \in \mathbb{R}} \Phi^t(U)$. Then, \begin{itemize} \item $W^s(\Lambda)=\bigcup_{z \in \Lambda \cap \Omega(\Phi)} V^s(z)$, \item there exists a decomposition $\Lambda \cap \Omega(\Phi)=\bigcup_{i=1}^m \Lambda_i$ into mutually disjoint topologically transitive compact invariant subsets, and \item $\preceq$ is a partial order on $\{\Lambda_1,\cdots \Lambda_m\}$. \end{itemize} \end{prop} \begin{proof} By Propositions \ref{prop:stable manifolds 1} and \ref{prop:stable manifolds 2}, $\Lambda$ has the shadowing property (see {\it e.g.} \cite{Sh}). We can show the required properties by the same argument as the case of locally maximal hyperbolic sets. \end{proof} The partially ordered set $(\{\Lambda_1,\cdots, \Lambda_m\}, \preceq)$ is called {\it the spectral decomposition} of $\Lambda \cap \Omega(\Phi)$. 
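The shadowing property invoked above can be illustrated in the simplest expanding model, the doubling map on the circle: a $\delta$-pseudo-orbit is shadowed by the genuine orbit obtained by pulling the final pseudo-orbit point back through the inverse branches nearest the pseudo-orbit. This is a toy discrete-time stand-in for the flow statement; all constants are illustrative.

```python
import random

def E(x):
    """Doubling map on the circle R/Z."""
    return (2.0 * x) % 1.0

def circle_dist(a, b):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

random.seed(0)
delta = 1e-3
pseudo = [0.2]
for _ in range(30):                      # delta-pseudo-orbit: |x_{i+1} - E(x_i)| <= delta
    pseudo.append((E(pseudo[-1]) + random.uniform(-delta, delta)) % 1.0)

y = pseudo[-1]
for x in reversed(pseudo[:-1]):          # pull back through the branch of E^{-1} nearest x_i
    y = min((y / 2.0, y / 2.0 + 0.5), key=lambda p: circle_dist(p, x))

orbit = [y]
for _ in range(30):
    orbit.append(E(orbit[-1]))
assert all(circle_dist(a, b) <= 2.0 * delta for a, b in zip(orbit, pseudo))
```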
We say a point $z$ of a topological space $X$ is {\it accessible} from a subset $A$ of $X$ if there exists a continuous map $l:[0,1] {\rightarrow} X$ such that $l(1)=z$ and $l(t) \in A$ for any $t \in [0,1)$. \begin{lemm} \label{lemma:boundary point} Let $\Lambda'$ be a topologically transitive compact invariant subset of $\Lambda$ such that $W^s(\Lambda') \cap W^u(\Lambda')=\Lambda'$. If $z \in \Lambda'$ is accessible from $V^s(z) \-\Lambda'$, then $V^u(z)$ contains a periodic point $z_* \in \Lambda'$ which is accessible from $V^s(z_*) \- \Lambda'$. Similarly, if $z \in \Lambda'$ is accessible from $V^u(z) \-\Lambda'$, then $V^s(z)$ contains a periodic point $z_*$ which is accessible from $V^u(z_*) \- \Lambda'$. \end{lemm} \begin{proof} The same argument as the proof of Proposition 1 of \cite{NP} shows that $V^u(z)$ contains a periodic point $z_*$. Since the $\alpha$-limit set of $z$ coincides with the orbit of $z_*$, the invariance of $\Lambda'$ implies that $z_* \in \Lambda'$. By the accessibility of $z$ from $V^s(z) \- \Lambda'$, there exists a curve $I \subset V^s(z)$ transverse to $E^s$ such that $I \cap \Lambda'=\{z\}$. Suppose that $z_*$ is not accessible from $V^s(z_*) \- \Lambda'$. By the continuity of $V^u(z')$ with respect to $z' \in \Lambda$, there exists $z_1 \in \Lambda' \cap V^u(z_*)$ such that $V^u(z_1) \cap (I \- \{z\}) \neq \emptyset$. Since $W^s(\Lambda') \cap W^u(\Lambda')=\Lambda'$ and $V^\sigma(z') \subset W^\sigma(\Lambda')$ for any $z' \in \Lambda'$ and $\sigma=s,u$, we have $I \cap V^u(z_1) \subset \Lambda' $. However, it contradicts the choice of $I$. Therefore, $z_*$ is accessible from $V^s(z_*) \- \Lambda'$. We obtain the latter assertion from the former by reversing time. \end{proof} \subsection{Dichotomy under regularity of periodic orbits} \label{sec:reduced dichotomy} The aim of this subsection is to show Proposition \ref{prop:dichotomy} under some additional assumptions.
\begin{prop} \label{prop:reduced dichotomy} Let $\Phi$ be a $C^2$ $\mbox{$\mathbb{P}$\rm{A}}$ flow and $TM=E^s + E^u$ be its $\mbox{$\mathbb{P}$\rm{A}}$ splitting. Suppose that $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}(\Phi) \subset \Omega_*$ and $E^s$ generates a $C^2$ foliation. Then, either \begin{enumerate} \item $\Omega_*=\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}(\Phi)=\emptyset$ and $\Phi$ is topologically transitive, or \item $M= W^u(\Omega^s_*) \cup \Omega^u_* = W^s(\Omega^u_*) \cup \Omega^s_*$. \end{enumerate} \end{prop} Remark that we do not assume the $C^2$-regularity of $E^u$. Put $\Omega_h=M\-(W^u(\Omega^s_*) \cup W^s(\Omega^u_*))$. \begin{lemm} \label{lemma:locally maximal} $\Omega_h$ is a locally maximal closed invariant set. \end{lemm} \begin{proof} Since $\Omega_*^s$ is normally repelling, there exists a compact neighborhood $K^s$ of $\Omega^s_*$ such that $\Phi^{-t}(K^s) \subset K^s$ for any $t>0$, $\bigcap_{t \geq 0} \Phi^{-t}(K^s)=\Omega^s_*$, and $\bigcup_{t \geq 0}\Phi^t(K^s)=W^u(\Omega^s_*)$. Similarly, there exists a compact neighborhood $K^u$ of $\Omega^u_*$ such that $\Phi^t(K^u) \subset K^u$ for any $t >0$, $\bigcap_{t \geq 0} \Phi^t(K^u)=\Omega^u_*$, and $\bigcup_{t \geq 0}\Phi^{-t}(K^u)=W^s(\Omega^u_*)$. Then, $U=M \- (K^u \cup K^s)$ is a neighborhood of $\Omega_h$ such that $\bigcap_{t \in \mathbb{R}}\Phi^t(U)=\Omega_h$. \end{proof} It is easy to see that $\alpha(z) \cup \omega(z) \subset \Omega^s_* \cup \Omega^u_* \cup\Omega_h$ for any $z \in M$. Since $\Omega^u_*$ is normally attracting and $\Omega^s_*$ is normally repelling, we have \begin{equation*} M=W^u(\Omega_h) \cup W^u(\Omega_*^s) \cup \Omega_*^u =W^s(\Omega_h) \cup W^s(\Omega_*^u) \cup \Omega_*^s. \end{equation*} We assume $\Omega_h \neq \emptyset$ and show that $M=\Omega_h$ and $\Phi$ is topologically transitive. It implies that $\Omega_*^u=\Omega_*^s=\emptyset$, and hence, $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}(\Phi)=\emptyset$ by the assumption.
By $\mathcal{G}(z)$, we denote the leaf of a foliation $\mathcal{G}$ that contains a point $z$. Let $\mathcal{F}^s$ be the $C^2$ foliation generated by $E^s$. \begin{lemm} \label{lemma:leaf} $\mathcal{F}^s(z)=V^s(z)$ for any $z \in \Omega_h$. \end{lemm} \begin{proof} Since $V^s(z')$ is tangent to $E^s$ for any $z' \in \Omega_h$, it is a connected open subset of $\mathcal{F}^s(z')$. Since $\mathcal{F}^s(z) \subset M\- \Omega^s_* =W^s(\Omega_h) \cup W^s(\Omega^u_*)$, we have a decomposition \begin{equation*} \mathcal{F}^s(z)=(\mathcal{F}^s(z) \cap W^s(\Omega^u_*)) \cup \bigcup_{z' \in \Omega_h \cap \mathcal{F}^s(z)} V^s(z') \end{equation*} of $\mathcal{F}^s(z)$ into mutually disjoint open subsets. It implies that $V^s(z)$ coincides with $\mathcal{F}^s(z)$. \end{proof} By Proposition \ref{prop:spectral decomposition} and Lemma \ref{lemma:locally maximal}, the invariant set $\Omega_h \cap \Omega(\Phi)$ admits a spectral decomposition $(\{\Lambda_1,\cdots,\Lambda_m\},\preceq)$. Take a maximal element $\Lambda_+$ with respect to $\preceq$. By the same argument as in the hyperbolic case, we have $W^s(\Lambda_+) \cap W^u(\Lambda_+) \subset \Omega(\Phi)$. The maximality implies that $W^s(\Lambda_+) \subset \Lambda_+ \cup W^u(\Omega_*^s)$. Since $W^u(\Lambda_+) \cap W^u(\Omega_*^s)=\emptyset$, we have $W^s(\Lambda_+) \cap W^u(\Lambda_+) = \Lambda_+$. Recall that a subset $\Lambda$ of $M$ is called {\it a saturated set} of $\mathcal{F}^s$ if $\mathcal{F}^s(z) \subset \Lambda$ for any $z \in \Lambda$. \begin{lemm} \label{lemma:saturated} $\Lambda_+$ is a closed saturated set of $\mathcal{F}^s$. \end{lemm} \begin{proof} We will show $W^s(\Lambda_+) \subset \Lambda_+$. It completes the proof of the lemma since $\mathcal{F}^s(z)=V^s(z) \subset W^s(\Lambda_+)$ for any $z \in \Lambda_+$. Suppose that $W^s(\Lambda_+) \not\subset \Lambda_+$. Then, there exists $z_* \in \Lambda_+$ which is accessible from $V^s(z_*) \- \Lambda_+$.
Since $W^s(\Lambda_+) \cap W^u(\Lambda_+)=\Lambda_+$, we can apply Lemma \ref{lemma:boundary point} to $\Lambda_+$. Hence, we may assume that $z_*$ is a periodic point. Accessibility implies that a connected component $L$ of $V^s(z_*)\- \mathcal{O}(z_*)$ is a subset of $W^s(\Lambda_+) \- \Lambda_+$, and hence, is contained in $W^u(\Omega^s_*)$. Take a simple closed curve $\gamma \subset L$ which is homotopic to $\mathcal{O}(z_*)$ in $\mathcal{F}^s(z_*)$. Since $z_*$ is a $u$-regular periodic point, the holonomy of $\mathcal{F}^s$ along $\gamma$ is non-trivial. Since $W^u(T)$ is a connected open subset of $M$ for any torus $T$ in $\Omega^s_*$, there exists a torus $T_*$ in $\Omega^s_*$ such that $L \subset W^u(T_*)$. Take an embedding $\psi:\mathbb{T}^2 \times [-1,1] {\rightarrow} W^u(T_*)$ such that $\psi(\mathbb{T}^2 \times 0)=T_*$ and $\psi(\mathbb{T}^2 \times \{-1,1\})$ is transverse to the flow. There exists $t>0$ such that $\Phi^{-t}(\gamma) \subset \psi(\mathbb{T}^2 \times (-1,1))$. Let $\mathcal{G}$ be the restriction of $\mathcal{F}^s$ to $\mbox{\rm{Im} } \psi$. Since $T_*$ is the unique compact leaf of $\mathcal{G}$, a classification theorem of $C^2$ foliations on $\mathbb{T}^2 \times [0,1]$ due to Moussu and Roussarie \cite{MR} implies that $T_*$ is the only leaf of $\mathcal{G}$ that has non-trivial holonomy. It contradicts that the holonomy of $\mathcal{F}^s$ along $\Phi^{-t}(\gamma)$ is non-trivial although $\Phi^{-t}(\gamma)$ is not contained in $T_*$. \end{proof} Recall that a leaf of a codimension-one foliation is called {\it semi-proper} when it accumulates to itself from at most one side. We also say a leaf is {\it proper} when it does not accumulate to itself from either side. \begin{lemm} \label{lemma:semi-proper} Let $\mathcal{G}$ be a $C^2$ codimension-one foliation of a closed three-dimensional manifold. Then, any semi-proper leaf of $\mathcal{G}$ diffeomorphic to $S^1 \times \mathbb{R}$ is proper and it has trivial holonomy.
\end{lemm} \begin{proof} Let $L$ be a leaf of $\mathcal{G}$ which is diffeomorphic to $S^1 \times \mathbb{R}$. By the level theory of Cantwell and Conlon \cite{CC}, $L$ is either proper or contained in an exceptional local minimal set. Duminy's theorem \cite{CC2} implies that the end set of a semi-proper leaf in an exceptional local minimal set must be a Cantor set. Since the end set of $L$ consists of two points, the leaf $L$ is proper. By a stability theorem of proper leaves with finite ends due to Cantwell and Conlon \cite[Theorem 1]{CC1}, $L$ has trivial holonomy. \end{proof} Now, we prove Proposition \ref{prop:reduced dichotomy}. Let $\Phi$ be a $\mbox{$\mathbb{P}$\rm{A}}$ flow satisfying the assumptions of the proposition. Then, $\mathcal{F}^s$ is a $C^2$ foliation and all periodic points in $\Omega_h$ are $u$-regular. Suppose that the proposition does not hold. Let $\Lambda_+$ be a maximal element of the spectral decomposition of $\Omega_h \cap \Omega(\Phi)$. By Lemma \ref{lemma:saturated}, it is a closed saturated set of $\mathcal{F}^s$. Since the restriction of $\Phi$ to $\Lambda_+$ is topologically transitive, the assumption implies $\Lambda_+ \neq M$. In particular, $\Lambda_+$ contains a semi-proper leaf $L$ of $\mathcal{F}^s$. By Lemma \ref{lemma:boundary point}, $L$ contains a periodic point $q$ in $\Lambda_+$, and hence, it is diffeomorphic to $S^1 \times \mathbb{R}$. Lemma \ref{lemma:semi-proper} implies that the holonomy of $\mathcal{F}^s$ along the orbit of $q$ is trivial. In particular, $q$ is a $u$-irregular periodic point. However, it contradicts that all periodic points in $\Omega_h$ are $u$-regular. \subsection{Local dynamics at periodic points} \label{sec:local dynamics} Let $\Phi$ be a $C^2$ $\mbox{$\mathbb{P}$\rm{A}}$ flow with a $\mbox{$\mathbb{P}$\rm{A}}$ splitting $TM=E^u + E^s$. In this subsection, we suppose that $E^u$ and $E^s$ generate $C^2$ foliations $\mathcal{F}^u$ and $\mathcal{F}^s$, respectively.
Remark that $\Omega^\rho_*$ is a union of closed leaves of $\mathcal{F}^\rho$ for $\rho=s,u$. The main aim of this subsection is to show the following proposition, which completes the proof of Proposition \ref{prop:dichotomy} by combining with Proposition \ref{prop:reduced dichotomy}. \begin{prop} \label{prop:regular point} $\mbox{\rm{Per}}^u_{\mbox{\rm{\scriptsize irr}}}(\Phi) \subset \Omega^u_*$ and $\mbox{\rm{Per}}^s_{\mbox{\rm{\scriptsize irr}}}(\Phi) \subset \Omega^s_*$. \end{prop} Fix a family $\{\phi_z:[-1,1]^2 {\rightarrow} M\}_{z \in M}$ of $C^2$ local cross sections so that $\phi_z(0,0)=z$, $\phi_z([-1,1] \times y)$ is tangent to $E^s$, and $\phi_z(x \times [-1,1])$ is tangent to $E^u$ for any $(x,y) \in [-1,1]^2$. Let $\{r_z^t\}$ be the family of local returns associated to $\{\phi_z\}_{z \in M}$. Recall that $D_\delta(z)$ is the $\delta$-ball in $\mbox{\rm{Im} } \phi_z$ centered at $z$. Let $\Delta>0$ be the constant obtained in Lemma \ref{lemma:Denjoy}. For $0<\delta<\Delta$, put \begin{align*} I^s_\delta(z) & = D_\delta(z) \cap \phi_z([-1,1] \times 0),\\ I^u_\delta(z) & = D_\delta(z) \cap \phi_z(0 \times [-1,1]). \end{align*} By replacing $\Delta$ with a smaller one, we may assume that $I^u_\delta(z)$ and $I^s_\delta(z)$ are intervals for any $z \in M$ and $0<\delta <\Delta$. \begin{lemm} \label{lemma:Denjoy 2} Suppose that sequences $(z_n \in M)_{n \geq 1}$, $(\delta_n>0)_{n \geq 1}$, and $(t_n>0)_{n \geq 1}$ satisfy the following properties: \begin{itemize} \item $\lim_{n {\rightarrow} \infty}\delta_n=0$. \item $r_{z_n}^t(I^s_{\delta_n}(z_n))$ is well-defined for any $n \geq 1$ and $0 \leq t \leq t_n$. \item $\limsup_{n {\rightarrow} \infty}|r_{z_n}^{t_n}(I^s_{\delta_n}(z_n))|>0$. \end{itemize} Then, any accumulation point of $\{z_n\}_{n \geq 1}$ is contained in $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}^s(\Phi) \cup \Omega_*^s$. \end{lemm} \begin{proof} Take an accumulation point $z_*$ of $(z_n)_{n \geq 1}$.
By taking subsequences if necessary, we may assume that $z_n$ converges to $z_*$, $\Phi^{t_n}(z_n)$ converges to a point $z_\infty$, and $r_{z_n}^{t_n}(I^s_{\delta_n}(z_n))$ converges to an interval $I_\infty \subset I^s_\Delta(z_\infty)$. Remark that $t_n$ goes to infinity. In fact, $|r_{z_n}^T(I^s_{\delta_n}(z_n))|$ converges to zero for any given $T>0$ since $\delta_n$ goes to zero. The interval $I_\infty$ is a $C^2$ $(\Delta, E^u)$-interval. By Lemma \ref{lemma:Denjoy}, there exists $z' \in \mbox{\rm{Int}}\; I_\infty$ such that its $\alpha$-limit set $\alpha(z')$ is a periodic orbit in $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}^s(\Phi)$ or an embedded torus in $\Omega_*^s$. In each case, $N\Phi^{-t}|_{E^u/T\Phi}$ is uniformly contracting on $\alpha(z')$. Hence, there exists a compact neighborhood $V$ of $z'$ in $\mathcal{F}^u(z')$ such that $\bigcap_{t>0}\cl{\bigcup_{t'>t}\Phi^{-t'}(V)}=\alpha(z')$. For any sufficiently large $n \geq 1$, the interval $r_{z_n}^{t_n}(I^s_{\delta_n}(z_n))$ contains a point $z'_n$ of $V$. Then, $z_*=\lim_{n {\rightarrow} \infty} (r_{z_n}^{t_n})^{-1}(z'_n)$ is contained in $\alpha(z')$. Hence, $z_*$ is contained in $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}^s(\Phi)$ or $\Omega_*^s$. \end{proof} \begin{lemm} \label{lemma:open} Let $z_0$ be an $s$-regular periodic point. Then, the following holds: \begin{itemize} \item There exists $\delta>0$ and $\tau: W^s(\mathcal{O}(z_0)) {\rightarrow} \mathbb{R}$ such that $I^s_\delta(\Phi^{\tau(z)}(z)) \subset W^s(\mathcal{O}(z_0))$ for any $z \in W^s(\mathcal{O}(z_0))$. \item For any $z \in W^s(\mathcal{O}(z_0))$, $\mathcal{F}^s(z) \cap W^s(\mathcal{O}(z_0))$ is an open subset of $\mathcal{F}^s(z)$ with respect to the leafwise topology.
\item If $A$ is any open annulus such that $\mathcal{O}(z_0) \subset A \subset \mathcal{F}^s(z_0) \cap W^s(\mathcal{O}(z_0))$, then $\bigcup_{t \geq 0} \Phi^{-t}(A)$ is a connected component of $\mathcal{F}^s(\mathcal{O}(z_0)) \cap W^s(\mathcal{O}(z_0))$. \end{itemize} \end{lemm} \begin{proof} Let $T$ be the period of $z_0$. There exist a closed interval $I \subset [-1,1]$ and $C^2$ maps $f,g:I {\rightarrow} [-1,1]$ such that $0 \in f(I) \subset \mbox{\rm{Int}}\; I$, $r_{z_0}^T \circ \phi_{z_0}(x,y)=\phi_{z_0}(f(x),g(y))$, and $\bigcap_{n \geq 0} f^n(I)=\{0\}$. Put $\Lambda^u=\bigcap_{n \geq 0}g^{-n}(I)$ and $\Lambda^u_0=\{y \in \Lambda^u {\;|\;} \lim_{n {\rightarrow} \infty} g^n(y)=0 \}$. For any $(x,y) \in \mbox{\rm{Int}}\; I \times \Lambda^u$, we have $r_{z_0}^{nT} \circ \phi_{z_0}(x,y)=\phi_{z_0}(f^n(x),g^n(y))$. By the compactness of $\Lambda^u$, there exists $\delta>0$ such that \begin{equation} \label{eqn:open 1} I^s_\delta(\phi_{z_0}(x,y)) \subset \phi_{z_0}(\mbox{\rm{Int}}\; I \times y) \end{equation} for any $(x,y) \in f(I) \times \Lambda^u$. If $(x,y) \in \mbox{\rm{Int}}\; I \times \Lambda^u_0$, then $(f^n(x),g^n(y))$ converges to $(0,0)$ as $n$ goes to infinity. For any $z \in W^s(\mathcal{O}(z_0))$, its positive orbit $\{\Phi^t(z) {\;|\;} t \geq 0\}$ intersects with $\phi_{z_0}(f(I) \times \Lambda^u_0)$. Hence, we have \begin{equation*} W^s(\mathcal{O}(z_0))=\bigcup_{t \geq 0} \Phi^{-t} \circ \phi_{z_0}(f(I) \times \Lambda^u_0) =\bigcup_{t \geq 0} \Phi^{-t} \circ \phi_{z_0}(\mbox{\rm{Int}}\; I \times \Lambda^u_0). \end{equation*} Together with the inclusion (\ref{eqn:open 1}), this completes the proof of the first assertion of the lemma. The second assertion is an immediate consequence of the first. Put \begin{align*} V_1 & = \bigcup_{t \geq 0} \Phi^{-t} \circ \phi_{z_0}(\mbox{\rm{Int}}\; I \times 0),\\ V_2 & = \bigcup_{t \geq 0} \Phi^{-t} \circ \phi_{z_0} (\mbox{\rm{Int}}\; I \times (\Lambda^u_0 \- \{0\})).
\end{align*} Since $f(I) \subset \mbox{\rm{Int}}\; I$, $g(0)=0$, and $g(\Lambda^u_0\-\{0\}) \subset \Lambda^u_0\-\{0\}$, the set $\mathcal{F}^s(z_0) \cap W^s(\mathcal{O}(z_0))$ is a disjoint union of its open subsets $V_1$ and $V_2 \cap \mathcal{F}^s(z_0)$. Hence, $V_1$ is a connected component of $\mathcal{F}^s(z_0) \cap W^s(\mathcal{O}(z_0))$. Let $A$ be the annulus in the third assertion of the lemma. Then, $\phi_{z_0}(f^m(I) \times 0) \subset A$ for some large $m \geq 1$. It implies \begin{equation*} V_1=\bigcup_{t \geq 0} \Phi^{-t} \circ \phi_{z_0}(f^m(I) \times 0) \subset \bigcup_{t \geq 0}\Phi^{-t}(A) \subset V_1. \end{equation*} \end{proof} \begin{lemm} \label{lemma:annulus} Let $z_0$ and $z_1$ be periodic points of $\Phi$. \begin{enumerate} \item If $z_0$ is $s$-regular and $z_1$ is accessible from a connected component $V$ of $W^s(\mathcal{O}(z_0)) \cap \mathcal{F}^s(z_1)$, then $\mathcal{F}^s(z_0)=\mathcal{F}^s(z_1)$, the orbits of $z_0$ and $z_1$ are homotopic in $\mathcal{F}^s(z_0)$ as unoriented curves, and $V$ contains $\mathcal{O}(z_0)$. \item If $z_0$ is attracting and $z_1$ is accessible from $W^s(\mathcal{O}(z_0)) \cap \mathcal{F}^u(z_1)$, then $\mathcal{F}^u(z_0)=\mathcal{F}^u(z_1)$ and the orbits of $z_0$ and $z_1$ are homotopic in $\mathcal{F}^u(z_0)$ as unoriented curves. \end{enumerate} \end{lemm} \begin{proof} First, we show the former assertion of the lemma. Suppose that $z_0$ is $s$-regular. Let $T$ be the period of $z_0$. There exist a closed interval $I$ and $C^2$ maps $f,g: I {\rightarrow} [-1,1]$ such that $0 \in f(I) \subset \mbox{\rm{Int}}\; I$, $\bigcap_{n \geq 0} f^n(I)=\{0\}$, $\phi_{z_0}(I \times I) \cap \mathcal{O}(z_0)=\{z_0\}$, and $r_{z_0}^T \circ \phi_{z_0}(x,y)=\phi_{z_0}(f(x),g(y))$ for any $(x,y) \in I \times I$. Put $U=\bigcup_{t=0}^T r_{z_0}^t(I \times I)$ and let $\mathcal{G}(y)$ be the connected component of $\mathcal{F}^s(\phi_{z_0}(0,y)) \cap U$ which contains $\phi_{z_0}(0,y)$.
Then, we can see the following properties of $\mathcal{G}$ and $U$: \begin{itemize} \item $\mathcal{G}(y)$ is not contractible if and only if $g(y)=y$. \item If $z=\phi_{z_0}(x,y)$ is a point of $W^s(\mathcal{O}(z_0))$ and $\Phi^t(z) \in U$ for any $t \geq 0$, then $g^n(y)$ converges to $0$ as $n$ tends to infinity. \end{itemize} Suppose that a periodic point $z_1$ is accessible from a connected component $V$ of $\mathcal{F}^s(z_1) \cap W^s(\mathcal{O}(z_0))$. Then, there exists a simple closed curve $\gamma$ in $V$ which is homotopic to $\mathcal{O}(z_1)$. By the Poincar\'e-Bendixson theorem, $\mathcal{O}(z_1)$ and $\gamma$ are not null-homotopic in $\mathcal{F}^s(z_1)$. We can take $t_1>0$ such that $\Phi^t(\gamma) \subset U$ for any $t \geq t_1$. It implies that $\Phi^{t_1}(\gamma) \subset \mathcal{G}(y)$ for some $y \in \bigcap_{n \geq 0} g^{-n}(I)$ with $\lim_{n {\rightarrow} \infty}g^n(y)=0$. Since the closed curve $\Phi^{t_1}(\gamma)$ is not null-homotopic in $\mathcal{G}(y) \subset \mathcal{F}^s(z_1)$, we have $y=0$. Hence, $\Phi^{t_1}(\gamma)$ is homotopic to $\mathcal{O}(z_0)$ in $\mathcal{F}^s(z_0)$ as an unoriented curve and the set $V$ intersects the connected component of $\mathcal{F}^s(z_0) \cap W^s(\mathcal{O}(z_0))$ which contains $\mathcal{O}(z_0)$. Therefore, we have $\mathcal{F}^s(z_1)=\mathcal{F}^s(z_0)$, $\mathcal{O}(z_0)$ and $\mathcal{O}(z_1)$ are homotopic in $\mathcal{F}^s(z_0)$ as unoriented curves, and $\mathcal{O}(z_0) \subset V$. Next, we show the latter assertion of the lemma. Suppose that $z_0$ is attracting. Let $T$ be the period of $z_0$. There exist a closed interval $I'$ and $C^2$ maps $f_1,g_1: I' {\rightarrow} [-1,1]$ such that $0 \in g_1(I') \subset \mbox{\rm{Int}}\; I'$, $\bigcap_{n \geq 0} g_1^n(I')=\{0\}$, $0$ is the unique fixed point of $f_1|_{I'}$, $\phi_{z_0}(I' \times I') \cap \mathcal{O}(z_0)=\{z_0\}$, and $r_{z_0}^T \circ \phi_{z_0}(x,y)=\phi_{z_0}(f_1(x),g_1(y))$ for any $(x,y) \in I' \times I'$.
Put $U'=\bigcup_{t=0}^T r_{z_0}^t(I' \times I')$ and let $\mathcal{G}'(x)$ be the connected component of $\mathcal{F}^u(\phi_{z_0}(x,0)) \cap U'$ which contains $\phi_{z_0}(x,0)$. Then, we can see the following properties of $\mathcal{G}'$ and $U'$: \begin{itemize} \item $\mathcal{G}'(x)$ is not contractible if and only if $x=0$. \item If $z=\phi_{z_0}(x,y)$ is a point of $W^s(\mathcal{O}(z_0))$ and $\Phi^t(z) \in U'$ for any $t \geq 0$, then $f_1^n(x)$ converges to $0$ as $n$ tends to infinity. \end{itemize} Now, the same argument as above, where we replace $g$, $\mathcal{G}$, and $\mathcal{F}^s$ with $f_1$, $\mathcal{G}'$ and $\mathcal{F}^u$, respectively, shows the latter assertion of the lemma. \end{proof} \begin{lemm} \label{lemma:stable set} The following holds for any $s$-regular periodic point $z_0$: \begin{enumerate} \item $\mathcal{F}^s(z) \subset W^s(\mathcal{O}(z_0))$ for any $z \in W^s(\mathcal{O}(z_0)) \- \mathcal{F}^s(z_0)$. \item $\mathcal{F}^s(z_0) \cap W^s(\mathcal{O}(z_0))$ is homeomorphic to $S^1 \times \mathbb{R}$. \item If $\mathcal{F}^s(z_0) \not\subset W^s(\mathcal{O}(z_0))$, then $\mathcal{F}^s(z_0)$ contains an $s$-irregular periodic point $z_1$ such that the orbits of $z_0$ and $z_1$ are homotopic as unoriented closed curves in $\mathcal{F}^s(z_0)$. \end{enumerate} \end{lemm} \begin{proof} Since $z_0$ is $s$-regular, there exists an embedded closed annulus $A_0 \subset \mathcal{F}^s(z_0)$ such that $\Phi^t(A_0) \subset \mbox{\rm{Int}}\; A_0$ for any $t>0$ and $\bigcap_{t >0}\Phi^t(A_0)=\mathcal{O}(z_0)$. Put $V_0=\bigcup_{t \geq 0} \Phi^{-t}(A_0)$. It is diffeomorphic to $S^1 \times \mathbb{R}$ and is a connected component of $W^s(\mathcal{O}(z_0)) \cap \mathcal{F}^s(z_0)$ by the third item of Lemma \ref{lemma:open}. Fix a leaf $L$ of $\mathcal{F}^s$ and a connected component $V$ of $L \cap W^s(\mathcal{O}(z_0))$. We suppose that $V \neq L$ and show $V=V_0$. Take $z_1 \in L \- V$ which is accessible from $V$.
There exist sequences $(z'_n \in V)_{n \geq 1}$ and $(\delta_n>0)_{n \geq 1}$ such that $z_1 \in I^s_{\delta_n}(z'_n)$ for any $n \geq 1$ and $\delta_n$ converges to zero. By Lemma \ref{lemma:open}, we can choose $\delta>0$ and $(T_n>0)_{n \geq 1}$ such that $I^s_{\delta}(\Phi^{T_n}(z'_n)) \subset W^s(\mathcal{O}(z_0))$ for any $n \geq 1$. Since $\mathcal{O}(z_1) \cap W^s(\mathcal{O}(z_0))= \emptyset$, there exists a sequence $(t_n \in (0,T_n))_{n \geq 1}$ such that $r_{z_n'}^t(I^s_{\delta_n}(z'_n))$ is well-defined for any $t \in [0,t_n]$ and $|r_{z_n'}^{t_n}(I^s_{\delta_n}(z'_n))|=\delta$. By Lemma \ref{lemma:Denjoy 2}, $z_1$ is a point of $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}^s(\Phi) \cup \Omega_*^s$. Since $\mathcal{F}^s(z_1)$ contains the periodic point $z_1$ and some non-periodic points in $V$, we have $\mathcal{F}^s(z_1) \not\subset \Omega_*^s$. Therefore, $z_1$ is an $s$-irregular periodic point. Since $z_1$ is accessible from $V \subset W^s(\mathcal{O}(z_0))$, we can apply the former part of Lemma \ref{lemma:annulus}. It implies that $\mathcal{F}^s(z_1)=\mathcal{F}^s(z_0)$, $V=V_0$, and the orbits of $z_0$ and $z_1$ are homotopic as unoriented closed curves in $\mathcal{F}^s(z_0)$.
By replacing each $z_n$ with a point on its orbit, we may assume that there exists a sequence $(\delta_n>0)_{n \geq 1}$ such that $\lim_{n {\rightarrow} \infty}\delta_n=0$ and $I^s_{\delta_n}(z_n)$ contains $z_*$ for any $n \geq 1$. Let $T_n$ be the period of $z_n$ and $\Delta>0$ be a constant such that $D_\Delta(z) \subset \mbox{\rm{Im} } \phi_z$ for any $z \in M$. First, we suppose that there exists $n_0 \geq 1$ such that $|r_{z_{n_0}}^t(I^s_{\delta_{n_0}}(z_{n_0}))| \leq \Delta$ for any $t \in [0,T_{n_0}]$. Then, $r_{z_{n_0}}^{T_{n_0}}(I^s_{\delta_{n_0}}(z_{n_0}))$ is well-defined. Since $z_*$ is a limit point of the sequence $(z_n)_{n \geq 1}$ of periodic points with respect to the leafwise topology, we can choose a sequence $(z_n')_{n \geq 1}$ of periodic points of $\Phi$ such that $\lim_{n {\rightarrow} \infty}z_n'=z_*$ and $z_n' \in I^s_{\delta_{n_0}}(z_{n_0})$ for any $n$. Since $r_{z_{n_0}}^{T_{n_0}}(z_{n_0})=z_{n_0}$ and $r_{z_{n_0}}^{T_{n_0}}$ is an orientation preserving homeomorphism on an interval, we have $r_{z_{n_0}}^{T_{n_0}}(z_n')=z_n'$ for any $n$. It implies that $r_{z_{n_0}}^{T_{n_0}}(z_*)=z_*$. Therefore, $z_*$ is an $s$-irregular periodic point of $\Phi$. Now, it is sufficient to consider the case that there exists a sequence $(t_n \in [0,T_n])_{n \geq 1}$ such that $|r_{z_n}^t(I^s_{\delta_n}(z_n))|\leq \Delta$ for any $t \in [0,t_n]$ and $|r_{z_n}^{t_n}(I^s_{\delta_n}(z_n))|=\Delta$. By Lemma \ref{lemma:Denjoy 2}, $z_*=\lim_{n {\rightarrow} \infty}z_n$ is a point of $\mbox{\rm{Per}}_{{\mbox{\rm{\scriptsize irr}}}}^s(\Phi) \cup \Omega_*^s$. If $z_* \in \Omega_*^s$, then $\mathcal{F}^s(z_*)$ is an embedded torus contained in $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}^s(\Phi)$ since $\{z_n\} \subset \mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}^s(\Phi) \cap \mathcal{F}^s(z_*)$. Hence, $z_*$ is contained in $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}^s(\Phi)$ in both cases.
\end{proof} \begin{lemm} \label{lemma:trivial holonomy 2} For any $z_0 \in \mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}^u(\Phi)$, the leaf $\mathcal{F}^s(z_0)$ is proper, has trivial holonomy, and is contained in $W^s(\mathcal{O}(z_0))$. \end{lemm} \begin{proof} The periodic point $z_0$ is $s$-regular. Hence, there exists a closed annulus $A^s$ in $\mathcal{F}^s(z_0) \cap W^s(\mathcal{O}(z_0))$ such that $\Phi^t(A^s) \subset \mbox{\rm{Int}}\; A^s$ for any $t>0$ and $\bigcap_{t \geq 0}\Phi^t(A^s)=\mathcal{O}(z_0)$. Put $V=\bigcup_{t \geq 0}\Phi^{-t}(A^s)$. Lemmas \ref{lemma:open} and \ref{lemma:stable set} imply that $V=\mathcal{F}^s(z_0) \cap W^s(\mathcal{O}(z_0))$ and $\mathcal{F}^s(z) \subset W^s(\mathcal{O}(z_0))$ for any $z \in W^s(\mathcal{O}(z_0)) \- V$. Since $z_0$ is $u$-irregular, there exists a closed annulus $A^u_0$ in $\mathcal{F}^u(z_0)$ such that $\mathcal{O}(z_0) \subset \partial A^u_0$, $A^u_0 \cap A^s=\mathcal{O}(z_0)$, and $\Phi^t(A^u_0) \subset A^u_0$ for any $t \geq 0$. It is easy to see that $V \cap A^u_0=\mathcal{O}(z_0)$. First, we show $V=\mathcal{F}^s(z_0)$. Suppose that it does not hold. By Lemma \ref{lemma:stable set}, $\mathcal{F}^s(z_0)$ contains an $s$-irregular periodic point $z_1$ such that the orbits of $z_0$ and $z_1$ are homotopic as unoriented closed curves in $\mathcal{F}^s(z_0)$. For $i=0,1$, let $T_i$ be the period of $z_i$. Put $\lambda^u_i=\|D\Phi^{T_i}|_{E^u(z_i)}\|$ and $\lambda^s_i=\|D\Phi^{T_i}|_{E^s(z_i)}\|$. Since $z_0$ is $u$-irregular, $z_1$ is $s$-irregular, and $TM=E^s+ E^u$ is a dominated splitting, we have \begin{equation} \label{eqn:trivial holonomy 2-1} \lambda_0^s<\lambda_0^u \leq 1 \leq \lambda_1^s < \lambda_1^u. \end{equation} Since $\lambda^u_i$ is the absolute value of the linear holonomy of $\mathcal{F}^s$ along the orbit of $z_i$ for each $i$ and the orbits of $z_0$ and $z_1$ are homotopic in $\mathcal{F}^s(z_0)$ as unoriented curves, we have $\lambda_0^u=(\lambda_1^u)^{\pm 1}$.
Hence, the inequality (\ref{eqn:trivial holonomy 2-1}) implies \begin{equation} \label{eqn:trivial holonomy 2-2} \lambda_0^u=(\lambda_1^u)^{-1} <1. \end{equation} In particular, $\mathcal{O}(z_0)$ is an attracting periodic orbit. Since $W^s(\mathcal{O}(z_0))$ is an open subset of $M$, there exists a closed annulus $A^u_1 \subset \mathcal{F}^u(z_1)$ such that $\mathcal{O}(z_1) \subset \partial A^u_1$ and $\mathcal{F}^s(z)$ intersects with $(A^u_0\- \mathcal{O}(z_0)) \cap W^s(\mathcal{O}(z_0))$ for any $z \in A^u_1 \- \mathcal{O}(z_1)$. Now, we recall that $V \cap A^u_0=\mathcal{O}(z_0)$ and $\mathcal{F}^s(z) \subset W^s(\mathcal{O}(z_0))$ for any $z \in W^s(\mathcal{O}(z_0)) \- V$. They imply that $A^u_1 \- \mathcal{O}(z_1)$ is contained in $W^s(\mathcal{O}(z_0))$. In particular, $z_1$ is accessible from $\mathcal{F}^u(z_1) \cap W^s(\mathcal{O}(z_0))$. By Lemma \ref{lemma:annulus}, we have $\mathcal{F}^u(z_1)=\mathcal{F}^u(z_0)$ and the orbits of $z_0$ and $z_1$ are homotopic in $\mathcal{F}^u(z_0)$ as unoriented closed curves. Since $\lambda^s_i$ is the absolute value of the linear holonomy of $\mathcal{F}^u$ along the orbit of $z_i$, we have $\lambda^s_0=(\lambda^s_1)^{\pm 1}$. Hence, the inequality (\ref{eqn:trivial holonomy 2-1}) implies \begin{equation*} \lambda_0^s=(\lambda_1^s)^{-1}. \end{equation*} However, it contradicts the inequalities (\ref{eqn:trivial holonomy 2-1}) and (\ref{eqn:trivial holonomy 2-2}). Therefore, we have $V=\mathcal{F}^s(z_0)$. Since $V \cap A^u_0=\mathcal{O}(z_0)$, the leaf $\mathcal{F}^s(z_0)=V$ is semi-proper. By Lemma \ref{lemma:semi-proper}, $\mathcal{F}^s(z_0)$ is a proper leaf with trivial holonomy. \end{proof} Now, we prove Proposition \ref{prop:regular point}. We claim that $\mbox{\rm{Per}}^u_{\mbox{\rm{\scriptsize irr}}}(\Phi) \subset \Omega^u_*$. Once it is done, we can show that $\mbox{\rm{Per}}^s_{\mbox{\rm{\scriptsize irr}}}(\Phi) \subset \Omega^s_*$ by applying the claim to the time-reverse of $\Phi$.
Take a leaf $L^u$ of $\mathcal{F}^u$ which intersects with $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}^u(\Phi)$. By Lemmas \ref{lemma:trivial holonomy} and \ref{lemma:trivial holonomy 2}, $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}^u(\Phi) \cap L^u$ is a closed and open subset of $L^u$. Hence, $L^u$ is a subset of $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}^u(\Phi)$. It is sufficient to show that $L^u$ is a closed leaf. If it is not, then there exists a transversal $J$ of $\mathcal{F}^u$ such that $J$ is contained in a leaf $L^s$ of $\mathcal{F}^s$ and $J \cap L^u$ is not a relatively compact subset of $L^u$. Take a point $z_* \in L^u \cap J$. By Lemma \ref{lemma:trivial holonomy 2}, $J \subset L^s$ is contained in $W^s(\mathcal{O}(z_*))$. Since $L^u \subset \mbox{\rm{Per}}(\Phi)$, it implies that $L^u \cap J$ is contained in $\mathcal{O}(z_*)$. However, it contradicts that $L^u \cap J$ is not a relatively compact subset of $L^u$. \section{Topologically transitive regular $\mbox{$\mathbb{P}$\rm{A}}$ flows} \label{sec:transitive} \label{sec:section3} The aim of this section is the following proposition, which completes the proof of the main theorem by combining with Propositions \ref{prop:dichotomy} and \ref{prop:T2*I}. \begin{prop} \label{prop:Anosov} If a $C^2$ topologically transitive $\mbox{$\mathbb{P}$\rm{A}}$ flow on a closed three-dimensional manifold admits a $C^2$ $\mbox{$\mathbb{P}$\rm{A}}$ splitting, then it is an Anosov flow. \end{prop} When the $\mbox{$\mathbb{P}$\rm{A}}$ flow $\Phi$ admits a global cross section, the proof of Proposition \ref{prop:Anosov} is reduced to the observation that the distortion of a holonomy map of a one-dimensional foliation on a surface can be estimated by the area of the rectangle swept out by the holonomy. We refer the reader to \cite{As2} for the details.
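To indicate the idea (a schematic version of the estimate, not the precise statement in \cite{As2}): if a holonomy map is a composition $h_n \circ \cdots \circ h_0$ of $C^2$ maps between intervals such that $|D(\log|Dh_m|)| \leq C$ for every $m$, and we put $I_0=I$ and $I_m=h_{m-1} \circ \cdots \circ h_0(I)$, then
\begin{equation*}
\mbox{\rm{dist}}(h_n \circ \cdots \circ h_0,I)
\leq \sum_{m=0}^n \mbox{\rm{dist}}(h_m,I_m)
\leq C\sum_{m=0}^n |I_m|,
\end{equation*}
where $\mbox{\rm{dist}}(h,I)=\sup_{x,y \in I}(\log|Dh(x)|-\log|Dh(y)|)$ denotes the distortion (see Subsection \ref{sec:one-dim}), and the sum $\sum_{m=0}^n |I_m|$ is comparable to the area of the rectangle swept out by the intermediate intervals.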
If a $\mbox{$\mathbb{P}$\rm{A}}$ flow admits invariant one-dimensional subbundles of $E^s$ and $E^u$ which are transverse to the flow, then we can apply the proof in \cite{As2} with a small modification. However, a $\mbox{$\mathbb{P}$\rm{A}}$ flow admits no invariant one-dimensional subbundles transverse to the flow in general. This is the main technical difficulty in the proof. The structure of the section is as follows: In Subsections \ref{sec:one-dim} and \ref{sec:Markov}, we show the (possibly non-uniform) contraction along the direction transverse to $E^u$ by the standard argument using a Markov partition and a theorem due to Ma\~n\'e. In Subsection \ref{sec:non-expansion}, in order to overcome the above technical difficulty, we show that the diameter of $\Phi^t(D^s)$ is uniformly bounded for any small disk $D^s$ tangent to $E^s$ and $t \geq 0$ after we replace the original flow $\Phi$ by a suitable time-change. This condition enables us to apply the method in \cite{As2}; this is done in Subsection \ref{sec:hyp}. \subsection{One-dimensional topological Markov maps} \label{sec:one-dim} In this subsection, we prove some results about piecewise $C^2$ Markov maps on a finite union of compact intervals, which we use later. Let $I_*$ be a finite union of compact intervals in $\mathbb{R}$ and $\Lambda_*$ be a finite subset of $\mbox{\rm{Int}}\; I_*$. We say a map $F:I_* {\rightarrow} I_*$ is {\it a $C^2$ pre-Markov map} with {\it the set of discontinuity $\Lambda_*$} if $F(\Lambda_*) \subset \partial I_*$, $\cl{F(I_* \- \Lambda_*)}=I_*$, and for each connected component $J$ of $I_* \- \Lambda_*$, there exists a connected component $I$ of $I_*$ such that $F|_J$ extends to a $C^2$ diffeomorphism from $\cl{J}$ to $I$. Put $I_*^n=I_* \-\bigcup_{n'=0}^nF^{-n'}(\Lambda_*)$ and $I_*^\infty=\bigcap_{n \geq 0} I_*^n$. For $x \in I_*^\infty$ and $n \geq 0$, let $I_*^n(x)$ be the connected component of $I_*^n$ that contains $x$.
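As a concrete illustration of these definitions (a standard example, not used in the sequel), take $I_*=[0,1]$ and $\Lambda_*=\{1/2\}$, and let $F:I_* {\rightarrow} I_*$ be given by $F(x)=2x$ for $x \in [0,1/2]$ and $F(x)=2x-1$ for $x \in (1/2,1]$. Then $F(\Lambda_*)=\{1\} \subset \partial I_*$ and the restriction of $F$ to each connected component of $I_* \- \Lambda_*$ extends to an affine diffeomorphism onto $I_*$, so $F$ is a $C^2$ pre-Markov map with the set of discontinuity $\Lambda_*$. Each connected component of $I_*^n$ is a dyadic interval of length $2^{-(n+1)}$, and hence $\bigcap_{n \geq 1}\cl{I_*^n(x)}$ consists of exactly one point for any $x \in I_*^\infty$; in the terminology introduced below, $F$ is a $C^2$ topologically Markov map, and every periodic point $x$ of period $n$ satisfies $|DF^n(x)|=2^n \geq 1$.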
Then, the restriction of $F^m$ to $\cl{I_*^n(x)}$ extends to a $C^2$ diffeomorphism onto $\cl{I_*^{n-m}(F^m(x))}$ for any $x \in I_*^n$ and $n \geq m \geq 0$. For $n \geq 1$, let $\mbox{\rm{Per}}_n(F)$ be the set of all periodic points of period $n$. Since $F(\Lambda_* \cup \partial I_*) \subset \partial I_*$ and $\partial I_* \cap \Lambda_*=\emptyset$, the set $\mbox{\rm{Per}}_n(F)$ is a subset of $I_*^\infty$ for any $n \geq 1$. We say that a pre-Markov map $F$ is {\it a $C^2$ topological Markov map} if $\bigcap_{n \geq 1}\cl{I_*^n(x)}$ consists of exactly one point for any $x \in I_*^\infty$. \begin{lemm} \label{lemma:Markov 1} Let $F$ be a $C^2$ topological Markov map on $I_*$. Then, $\mbox{\rm{Per}}_n(F)$ is finite for any $n \geq 1$ and any periodic point $x \in \mbox{\rm{Per}}_n(F)$ satisfies $|DF^n(x)|\geq 1$. \end{lemm} \begin{proof} It is an immediate consequence of the equations $\bigcap_{k \geq 0}\cl{I_*^{kn}(x)}=\{x\}$ and \begin{equation*} F^n\left(\cl{I_*^{(k+1)n}(x)}\right) =\cl{I_*^{kn}(x)} \supset \cl{I_*^{(k+1)n}(x)} \end{equation*} for any $x \in \mbox{\rm{Per}}_n(F)$. \end{proof} We recall the definition and some properties of the distortion of a one-dimensional map. For a $C^2$ map $h:I {\rightarrow} I'$ between intervals $I$ and $I'$, we define {\it the distortion} $\mbox{\rm{dist}}(h,I)$ by \begin{displaymath} \mbox{\rm{dist}}(h,I) =\sup_{x,y \in I}\left(\log|Dh(x)|-\log|Dh(y)|\right). \end{displaymath} It is easy to verify \begin{equation} \label{eqn:distortion 2} \mbox{\rm{dist}}(h_n \circ \cdots \circ h_0,I) \leq \sum_{m=0}^n \mbox{\rm{dist}}(h_m,h_{m-1} \circ \cdots \circ h_0(I)). \end{equation} By the Mean Value Theorem applied to $h$ and $\log|Dh|$, we also have \begin{align} \label{eqn:distortion 3} \mbox{\rm{dist}}(h,I) & \geq \sup_{x \in I}\left|\log |Dh(x)| -\log\frac{|h(I)|}{|I|}\right|,\\ \label{eqn:distortion 1} \mbox{\rm{dist}}(h,I) & \leq |I| \cdot \sup_{x \in I}|D(\log|Dh|)(x)|.
\end{align} Let $\mbox{\rm{Per}}_*(F)$ be the set of non-hyperbolic periodic points. The following proposition is a variant of Ma\~n\'e's theorem (\cite{Ma}). \begin{prop} \label{prop:Mane} Let $F$ be a $C^2$ topological Markov map on $I_*$ and $\Lambda_*$ be the set of discontinuity of $F$. Then, \begin{equation} \label{eqn:Mane 1} \lim_{n {\rightarrow} \infty} \left( \inf\{|DF^n(x)| {\;|\;} x \in \mbox{\rm{Per}}_n(F)\}\right)=\infty \end{equation} and $\mbox{\rm{Per}}_*(F)$ consists of only finitely many points. Moreover, for any given neighborhood $U$ of $\mbox{\rm{Per}}_*(F)$, \begin{equation} \label{eqn:Mane 2} \lim_{n {\rightarrow} \infty}\left(\inf\{|DF^n(x)| {\;|\;} x \in I_*^n\-F^{-n}(U) \}\right) =+\infty. \end{equation} \end{prop} \begin{proof} Equation (\ref{eqn:Mane 1}) is a consequence of Theorem 5.1 of \cite{Ho}, which is a version of Ma\~n\'e's theorem for piecewise $C^2$ maps. Combined with Lemma \ref{lemma:Markov 1}, it implies the finiteness of the set of non-hyperbolic periodic points. Since $I_*^n$ consists of finitely many connected components for any $n$ and $\bigcap_{n \geq 0}\cl{I_*^n(x)}=\{x\}$ for any $x \in I_*^\infty$, there exist sequences $(K_n)_{n=0}^\infty$ and $(K'_n)_{n=0}^\infty$ such that $\lim_{n {\rightarrow} \infty}K_n=\lim_{n {\rightarrow} \infty}K'_n=0$ and $K_n \leq |I_*^n(x)| \leq K'_n$ for any $n \geq 0$ and $x \in I_*^n$. Fix a neighborhood $U$ of $\mbox{\rm{Per}}_*(F)$. By the finiteness of $\mbox{\rm{Per}}_*(F)$, there exists $N \geq 1$ such that $I_*^N(x_*) \subset U$ for any $x_* \in \mbox{\rm{Per}}_*(F)$. Remark that $I_*^N(x) \cap \mbox{\rm{Per}}_*(F)=\emptyset$ if $x \not\in U$. Put \begin{align*} c_1 & =|I_*| \cdot \sup\{|D(\log|DF|)|(x) {\;|\;} x \in I_* \- \Lambda_*\},\\ c_2 & =\inf\{|DF(x)| {\;|\;} x \in I_* \- \Lambda_*\}. \end{align*} We say that an interval $J \subset I_*$ is $(\lambda,m)$-compatible for $\lambda>1$ and $m \geq 1$ if $J \subset I_*^m$ and $\sum_{i=0}^m|F^i(J)|<\frac{\lambda}{\lambda-1} |I_*|$.
The properties of distortion imply \begin{equation*} \inf_{x \in J}|DF^m(x)| \geq e^{-\frac{c_1\lambda}{\lambda-1}} \cdot \frac{|F^m(J)|}{|J|} \end{equation*} for any $(\lambda,m)$-compatible interval $J$. By the argument in Section III.5 of \cite{MS}, it can be shown that there exist $\lambda>1$ and $n_0 \geq 1$ such that $I_*^{n+N}(x)$ is a $(\lambda,n-n_0)$-compatible interval if $x \in I_*^{n+N}$ satisfies $I_*^N(F^n(x)) \cap \mbox{\rm{Per}}_*(F)=\emptyset$. It implies that \begin{align*} |DF^n(x)| & \geq |DF^{n_0}(F^{n-n_0}(x))| \cdot e^{-\frac{c_1 \lambda}{\lambda-1}} \frac{|I_*^{N+n_0}(F^{n-n_0}(x))|}{|I_*^{n+N}(x)|}\\ & \geq \frac{c_2^{n_0}e^{-\frac{c_1 \lambda}{\lambda-1}} K_{N+n_0}}{ K'_n} \end{align*} for any $n \geq 1$ and $x \in I_*$ with $I_*^N(F^n(x)) \cap \mbox{\rm{Per}}_*(F)=\emptyset$. Since $\lim_{n {\rightarrow} \infty}K'_n=0$, this completes the proof. \end{proof} \subsection{Markov partitions} \label{sec:Markov} Fix a closed three-dimensional Riemannian manifold $M$. Let $\{(\phi_k,\tau_k)\}_{k=1}^m$ be a family of pairs of a continuous embedding of $[0,1]^2$ into $M$ and a continuous positive-valued function on $[0,1]^2$ such that the map $(w,t) \mapsto \Phi^t \circ \phi_k(w)$ is an embedding of $\{(w,t) {\;|\;} w \in [0,1]^2, t \in [0,\tau_k(w)]\}$ into $M$ for each $k$. We define a family $\{\phi'_k\}_{k=1}^m$ of embeddings of $[0,1]^2$ into $M$ by $\phi'_k(w)=\Phi^{\tau_k(w)} \circ \phi_k(w)$. Put $R_k=\mbox{\rm{Im} } \phi_k$, $R'_k=\mbox{\rm{Im} } \phi'_k$, and $P_k=\{\Phi^t \circ \phi_k(w) {\;|\;} w \in [0,1]^2, t \in [0,\tau_k(w)]\}$. We say that the family $\{(\phi_k,\tau_k)\}$ determines {\it a Markov partition} $\{P_k\}_{k=1}^m$ associated with the flow $\Phi$ if it satisfies the following properties: \begin{enumerate} \item $\phi_k([0,1] \times y)$ and $\phi_k(x \times [0,1])$ are intervals tangent to $E^s$ and $E^u$ respectively, for any $(x,y) \in [0,1]^2$ and $k=1,\ldots,m$. \item $M=\bigcup_{k=1}^m P_k$.
\item For each pair $(k,l)$, $\partial (P_k \cap P_l)=\partial P_k \cap \partial P_l$ and there exist subintervals $I_{k,l}$ and $J_{k,l}$ of $[0,1]$, which may be empty sets, such that \begin{displaymath} R'_k \cap R_l =\phi'_k([0,1] \times I_{k,l}) =\phi_l(J_{k,l} \times [0,1]). \end{displaymath} \end{enumerate} Remark that $\{\mbox{\rm{Int}}\; I_{k,l'}\}_{l'=1}^m$ and $\{\mbox{\rm{Int}}\; J_{k',l}\}_{k'=1}^m$ are partitions of $[0,1]$ up to a finite set for any fixed $(k,l)$. Put $I_*= [0,1] \times \{1,\ldots,m\}$. The family $\{(\phi_k,\tau_k)\}_{k=1}^m$ induces a piecewise continuous map $F:I_* {\rightarrow} I_*$ by $F(y,k)=(y',l)$ if $\phi_l(J_{k,l} \times y')=\phi'_k([0,1] \times y)$. We call the map $F$ {\it the reduced return map}. We say that a $\mbox{$\mathbb{P}$\rm{A}}$ flow $\Phi$ is {\it $E^s$-fine} if \begin{enumerate} \item both $\Omega_*$ and $\mbox{\rm{Per}}_{\mbox{\rm{\scriptsize irr}}}(\Phi)$ are empty, and \item $\Phi$ admits a $\mbox{$\mathbb{P}$\rm{A}}$ splitting $TM=E^u + E^s$ such that $E^s$ is a $C^2$ subbundle. \end{enumerate} Remark that $\Phi$ is topologically transitive by Proposition \ref{prop:reduced dichotomy}. \begin{lemm} \label{lemma:Markov} Let $\Phi$ be a $C^2$ $E^s$-fine $\mbox{$\mathbb{P}$\rm{A}}$ flow. For any given $\epsilon>0$, $\Phi$ admits a Markov partition $\{P_k\}_{k=1}^m$ such that the reduced return map is a $C^2$ topological Markov map and the diameter of each $P_l$ is smaller than $\epsilon$. \end{lemm} \begin{proof} As we saw in Section \ref{sec:dichotomy}, the flow $\Phi$ has the shadowing property on $M$. Hence, we can obtain a Markov partition $\{P_k\}_{k=1}^m$ associated with $\Phi$ such that the diameter of each $P_k$ is smaller than a given constant $\epsilon>0$ by the same argument as in the hyperbolic case (see {\it e.g.} \cite{Ra} for the hyperbolic case). By the $C^2$-regularity of $E^s$, we can choose $\{\phi_k\}$ so that $\pi_y \circ \phi_k^{-1}$ is of class $C^2$, where $\pi_y(x,y)=y$.
It implies that $F$ is a piecewise $C^2$ map. It is easy to check that $F$ is a pre-Markov map with the set of discontinuity $\left(\bigcup_{k,l=1}^m \partial J_{k,l}\right) \- \partial I_*$. By Proposition \ref{prop:stable manifolds 1}, $F$ is a topological Markov map. \end{proof} For $\sigma=s,u$, let $\mbox{\rm{Per}}^\sigma_*(\Phi)$ be the set of periodic points $z_*$ such that $\|N\Phi^{t_*}|_{(E^\sigma/T\Phi)(z_*)}\|=1$, where $t_*$ is the period of $z_*$. The following is an immediate consequence of the above lemma and Proposition \ref{prop:Mane}. \begin{prop} \label{prop:expansion} If a $C^2$ $\mbox{$\mathbb{P}$\rm{A}}$ flow $\Phi$ is $E^s$-fine, then $\mbox{\rm{Per}}^u_*(\Phi)$ consists of finitely many orbits. Moreover, for any given neighborhood $U$ of $\mbox{\rm{Per}}^u_*(\Phi)$, there exists $T=T(U)>0$ such that \begin{equation*} \sup\left\{\|N\Phi^{-t}|_{(E^u/T\Phi)(z)}\| {\;|\;} t \geq T, z \in M \- U \right\} \leq \frac{1}{2}. \end{equation*} \end{prop} \begin{coro} \label{cor:Anosov} If a $C^2$ $\mbox{$\mathbb{P}$\rm{A}}$ flow $\Phi$ is $E^s$-fine and $\mbox{\rm{Per}}^u_*(\Phi)$ is empty, then there exist $C>0$ and $\lambda>1$ such that $\|N\Phi^{-t}|_{(E^u/T\Phi)(z)}\|<C\lambda^{-t}$ for any $z \in M$ and $t >0$. \end{coro} \subsection{Non-expansion property} \label{sec:non-expansion} Let $M$ be a closed three-dimensional Riemannian manifold and $\Phi$ be a $C^2$ $E^s$-fine $\mbox{$\mathbb{P}$\rm{A}}$ flow. As we saw in Subsection \ref{sec:hyp-like}, there exists a family $\{V^u(z)\}_{z \in M}$ of $C^2$ immersed submanifolds of $M$ which are tangent to $E^u$. For $z \in M$ and $\delta>0$, let $B(z,\delta)$ be the closed $\delta$-ball centered at $z$ and $D^u(z,\delta)$ the connected component of $V^u(z) \cap B(z,\delta)$ that contains $z$.
By the definition of $V^u(z)$, we can see that \begin{itemize} \item $V^u(\Phi^t(z))=\Phi^t(V^u(z))$ for any $z \in M$ and $t \in \mathbb{R}$, \item $D^u(z,\delta)$ is a $C^2$ embedded disk which varies continuously with respect to $z$ if $\delta$ is sufficiently small. \end{itemize} We say that $\Phi$ is {\it $u$-bounded} if there exists a positive-valued function $\bar{\delta}$ on $\mathbb{R}$ such that \begin{equation*} \Phi^{-t}(D^u(z,\bar{\delta}(\epsilon))) \subset D^u(\Phi^{-t}(z),\epsilon) \end{equation*} for any $z \in M$, $t \geq 0$, and $\epsilon>0$. Similarly, we can define a $C^2$ disk $D^s(z,\delta)$ which is tangent to $E^s$ for any $z \in M$ and any sufficiently small $\delta>0$. We say that $\Phi$ is {\it $s$-bounded} if there exists a positive-valued function $\bar{\delta}'$ on $\mathbb{R}$ such that \begin{equation*} \Phi^t(D^s(z,\bar{\delta}'(\epsilon))) \subset D^s(\Phi^t(z),\epsilon) \end{equation*} for any $z \in M$, $t \geq 0$, and $\epsilon>0$. If $E^u$ admits a continuous $D\Phi$-invariant splitting $E^u=T\Phi \oplus E^{uu}$, then we can show that $\Phi$ is $u$-bounded by Proposition \ref{prop:stable manifolds 1}. However, a $\mbox{$\mathbb{P}$\rm{A}}$ flow is not $u$-bounded in general. In fact, if there exists a $\Phi$-invariant embedded annulus tangent to $E^u$ on which $\Phi^t$ is conjugate to the map $(x,y) \mapsto (x+(1+y)t,y)$ on $S^1 \times [0,1]$, then $\Phi$ is not $u$-bounded. We say that $\Phi$ admits a local invariant foliation $\mathcal{G}$ transverse to a compact $\Phi$-invariant set $\Lambda$ if $\mathcal{G}$ is a $C^2$ two-dimensional foliation on an open neighborhood $U_*$ of $\Lambda$ which is transverse to the orbits of $\Phi$ and satisfies $D\Phi^t(T\mathcal{G}(z))=T\mathcal{G}(\Phi^t(z))$ for any $t \geq 0$ and $z \in \bigcap_{t' \in [0,t]}\Phi^{-t'}(U_*)$.
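Remark that the annulus example above can be verified by a direct computation (a minimal sketch, in which we identify the annulus with $S^1 \times [0,1]$, $S^1=\mathbb{R}/\mathbb{Z}$, via the conjugacy): since $\Phi^{-t}(x,y)=(x-(1+y)t,y)$ in these coordinates, the first coordinates of $\Phi^{-t}(x,y)$ and $\Phi^{-t}(x,y+\Delta)$ differ by $\Delta t$ modulo $1$ for any $\Delta>0$. Hence, at any time $t$ with
\begin{displaymath}
\Delta t \in \frac{1}{2}+\mathbb{Z},
\end{displaymath}
the two image points are at distance $1/2$ in the $S^1$-direction of the model, and hence at a distance bounded from below by some constant $c>0$ in $M$ which does not depend on $\Delta$, because the conjugacy is a homeomorphism of a compact set. Since every disk $D^u(z,\delta)$ with $\delta>0$ contains points whose second coordinates differ by some $\Delta>0$, and since such times $t$ exist for every $\Delta>0$, the inclusion $\Phi^{-t}(D^u(z,\bar{\delta}(\epsilon))) \subset D^u(\Phi^{-t}(z),\epsilon)$ fails for $\epsilon<c/2$ at suitable times, and no function $\bar{\delta}$ as in the definition of $u$-boundedness can exist.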
Remark that a suitable time-change of $\Phi$ admits a local invariant foliation transverse to $\mbox{\rm{Per}}^u_*(\Phi)$ if $\mbox{\rm{Per}}^u_*(\Phi)$ consists of a finite number of orbits. The aim of this subsection is to show the $u$-boundedness of $\Phi$ under a mild assumption. \begin{prop} \label{prop:non-expansion} If a $C^2$ $E^s$-fine $\mbox{$\mathbb{P}$\rm{A}}$ flow $\Phi$ admits a local invariant foliation transverse to $\mbox{\rm{Per}}^u_*(\Phi)$, then $\Phi$ is $u$-bounded. \end{prop} Fix a local invariant foliation $\mathcal{G}$ which is transverse to $\mbox{\rm{Per}}^u_*(\Phi)$ on a neighborhood $U_*$ of $\mbox{\rm{Per}}^u_*(\Phi)$. \begin{lemm} \label{lemma:non-expansion 1} For any given neighborhood $U$ of $\mbox{\rm{Per}}^u_*(\Phi)$, \begin{equation*} \sup\left\{\|D\Phi^{-t}|_{E^u(z)}\| {\;|\;} t \geq 0, z \in M \- U \right\} <\infty. \end{equation*} \end{lemm} \begin{proof} By taking a finite covering, we may assume that $E^u$ is orientable without loss of generality. By $X$, we denote the vector field generating $\Phi$. Fix a continuous vector field $Y^u$ such that $\{X(z),Y^u(z)\}$ spans $E^u(z)$ for any $z \in M$ and $Y^u(z)$ is tangent to $\mathcal{G}(z)$ if $z \in U_*$. We define functions $\eta$ and $\lambda$ on $M \times \mathbb{R}$ by \begin{equation*} D\Phi^{-t}(Y^u(z))=\eta(z,t) X(\Phi^{-t}(z))+\lambda(z,t) Y^u(\Phi^{-t}(z)). \end{equation*} Since $D\Phi^{-t}(X(z))=X(\Phi^{-t}(z))$, the following identities hold: \begin{align*} \eta(z,t+t') &= \eta(z,t)+\lambda(z,t)\cdot \eta(\Phi^{-t}(z),t'),\\ \lambda(z,t+t') & =\lambda(z,t) \cdot \lambda(\Phi^{-t}(z),t'). \end{align*} By the local invariance of $\mathcal{G}$, if $\Phi^{-t}(z) \in U_*$ for any $0 < t <t_0$, then $\eta(z,t_0)=0$. Without loss of generality, we may assume that $U$ is a subset of $U_*$. By Proposition \ref{prop:expansion}, there exists $T>0$ such that $|\lambda(z,t)|<1/2$ for any $t \geq T$ and $z \in M \- U$.
Take a constant $K_0>1$ such that $|\lambda(z,t)|+|\eta(z,t)| \leq K_0-1$ for any $z \in M$ and any $0 \leq t \leq T$. It is sufficient to show that $|\lambda(z,t)|+|\eta(z,t)| \leq 4 K_0$ for any $z \in M \- U$ and $t >0$. We define a function $\tau:M \- U {\rightarrow} [T,\infty]$ by \begin{equation*} \tau(z)=\left\{ \begin{array}{ll} \inf\{t \geq T {\;|\;} \Phi^{-t}(z) \not\in U\} & \mbox{ if } z \not\in \bigcap_{t \geq T}\Phi^t(U)\\ \infty & \mbox{otherwise}. \end{array} \right. \end{equation*} We claim that $|\lambda(z,t)|+|\eta(z,t)| \leq K_0$ for any $z \in M \- U$ and $0 \leq t \leq \tau(z)$. It is trivial if $t \leq T$. Suppose that $T < t \leq \tau(z)$. Then, $\Phi^{-t'}(z)$ is contained in $U$ for any $T \leq t'<t$, and hence, $\eta(\Phi^{-T}(z),t-T)=0$. Since $|\lambda(z,t)| < 1/2$, we have \begin{align*} |\lambda(z,t)|+|\eta(z,t)| & = |\lambda(z,t)| +\left|\eta(z,T) + \lambda(z,T) \cdot \eta(\Phi^{-T}(z),t-T)\right|\\ & < \frac{1}{2} +|\eta(z,T)| \leq K_0. \end{align*} This completes the proof of the claim. Fix $z \in M \- U$. Take a sequence $(t_i)_{i \geq 0}$ in $[0,\infty]$ such that $t_0=0$ and $t_{i+1}=t_i+\tau(\Phi^{-t_i}(z))$ for any $i \geq 0$. We claim \begin{equation} \label{eqn:non-expansion 1} |\lambda(z,t_i)| \leq 2^{-i},{\hspace{5mm}} |\eta(z,t_i)| \leq 2K_0\left(1-2^{-i}\right) \end{equation} for any $i \geq 0$. The proof is by induction. The inequalities for $i=0$ are trivial. Suppose that they hold for $i$. Since $T \leq t_{i+1}-t_i =\tau(\Phi^{-t_i}(z))$ and $\Phi^{-t_i}(z) \not\in U$, we have $|\lambda(\Phi^{-t_i}(z),t_{i+1}-t_i)| \leq 1/2$. The first claim also implies $|\eta(\Phi^{-t_i}(z),t_{i+1}-t_i)| \leq K_0$.
By the induction hypothesis, \begin{align*} |\lambda(z,t_{i+1})| & =|\lambda(z,t_i)|\cdot |\lambda(\Phi^{-t_i}(z),t_{i+1}-t_i)| \leq 2^{-i} \cdot 2^{-1} =2^{-(i+1)},\\ |\eta(z,t_{i+1})| & \leq |\eta(z,t_i)|+|\lambda(z,t_i)| \cdot |\eta(\Phi^{-t_i}(z),t_{i+1}-t_i)|\\ &\leq 2K_0\left(1-2^{-i}\right)+2^{-i}K_0 =2K_0\left(1-2^{-(i+1)}\right). \end{align*} Therefore, the inequalities (\ref{eqn:non-expansion 1}) hold for $i+1$. The claim is proved. Take $t>0$. There exists $i \geq 0$ such that $t_i \leq t <t_{i+1}$. Since $0 \leq t-t_i <\tau(\Phi^{-t_i}(z))$, the above claims imply \begin{align*} |\lambda(z,t)| & =|\lambda(z,t_i)|\cdot |\lambda(\Phi^{-t_i}(z),t-t_i)| \leq 1 \cdot K_0 = K_0,\\ |\eta(z,t)| & \leq |\eta(z,t_i)|+|\lambda(z,t_i)| \cdot |\eta(\Phi^{-t_i}(z),t-t_i)|\\ &\leq 2K_0+1 \cdot K_0 =3K_0. \end{align*} \end{proof} \begin{lemm} \label{lemma:non-expansion 2} There exists a neighborhood $U_1$ of $\mbox{\rm{Per}}^u_*(\Phi)$ and a positive-valued function $\bar{\delta}_1$ on $\mathbb{R}$ such that \begin{equation*} \Phi^{-t}(D^u(z,\bar{\delta}_1(\epsilon))) \subset D^u(\Phi^{-t}(z),\epsilon) \end{equation*} for any $\epsilon>0$, $t >0$, and $z \in \bigcap_{t' \in [0,t]}\Phi^{t'}(U_1)$. \end{lemm} \begin{proof} For an interval $J \subset \mathbb{R}$ and a subset $S$ of $M$, we denote the set $\{\Phi^t(z) {\;|\;} z \in S, t \in J \}$ by $\Phi^J(S)$. Fix a neighborhood $U_1$ of $\mbox{\rm{Per}}^u_*(\Phi)$ and a constant $0<\epsilon'<\epsilon/2$ such that ${\mathcal N}_{\epsilon'}(U_1) \cap {\mathcal N}_{\epsilon'}(M \- U_*)=\emptyset$. Take a family $\{\phi_z\}_{z \in M}$ of local cross-sections so that $\mbox{\rm{Im} } \phi_z \subset \mathcal{G}(z)$ for any $z \in U_1$. Let $\{r_z^t\}$ be the family of returns and put $I^u_\delta(z)=D^u(z,\delta) \cap \mbox{\rm{Im} } \phi_z$. Since $\{D^u(z,\delta)\}$ is a continuous family of $C^2$-disks, $\{I^u_\delta(z)\}$ is a continuous family of $C^2$-intervals.
By Proposition \ref{prop:stable manifolds 1}, there exists $\delta'>0$ such that $r_z^{-t}(I^u_{\delta'}(z)) \subset I^u_{\epsilon'}(\Phi^{-t}(z))$ for any $t \geq 0$ and $z \in M$. Take an open interval $J \subset \mathbb{R}$ containing $0$ such that $\Phi^J(I_{\epsilon'}^u(z)) \subset D^u(z,2\epsilon')$ for any $z \in M$. By the invariance of $\mathcal{G}$, we have $r_z^{-t}(I^u_{\delta'}(z))=\Phi^{-t}(I^u_{\delta'}(z))$ for any $z \in \bigcap_{t' \in [0,t]}\Phi^{t'}(U_1)$. We also take $\delta>0$ so that $D^u(z,\delta) \subset \Phi^J(I^u_{\delta'}(z))$ for any $z \in M$. Then, we have \begin{align*} \Phi^{-t}(D^u(z,\delta)) & \subset \Phi^{-t}(\Phi^J(I^u_{\delta'}(z))) =\Phi^J (\Phi^{-t}(I^u_{\delta'}(z))) =\Phi^J(r_z^{-t}(I^u_{\delta'}(z)))\\ & \subset \Phi^J(I^u_{\epsilon'}(\Phi^{-t}(z))) \subset D^u(\Phi^{-t}(z), 2\epsilon') \subset D^u(\Phi^{-t}(z), \epsilon) \end{align*} for any $t>0$ and $z \in \bigcap_{t' \in [0,t]}\Phi^{t'}(U_1)$. \end{proof} Now, we prove Proposition \ref{prop:non-expansion}. Fix $\epsilon>0$. Let $U_1$ and $\bar{\delta}_1$ be the neighborhood of $\mbox{\rm{Per}}^u_*(\Phi)$ and the function given by Lemma \ref{lemma:non-expansion 2}. There exists a neighborhood $U_2$ of $\mbox{\rm{Per}}^u_*(\Phi)$ and a constant $0 <\epsilon_2 <\epsilon$ such that ${\mathcal N}_{\epsilon_2}(U_2) \cap {\mathcal N}_{\epsilon_2}(M \- U_1)=\emptyset$. By Lemma \ref{lemma:non-expansion 1}, \begin{equation*} K=1+\sup\left\{\|D\Phi^{-t}|_{E^u(z)}\| {\;|\;} t \geq 0, z \in M \- U_2 \right\} \end{equation*} is finite. Put $\delta_2=\bar{\delta}_1(\epsilon_2K^{-1})$ and $\delta=\min\{\delta_2,\epsilon_2K^{-1}\}$. It is sufficient to show the inclusion \begin{equation} \label{eqn:non-expansion *} \Phi^{-t}(D^u(z,\delta)) \subset D^u(\Phi^{-t}(z),\epsilon) \end{equation} for any $z \in M$ and $t \geq 0$.
For $z \not\in U_1$, we have $D^u(z,\epsilon_2K^{-1}) \cap U_2=\emptyset$, and hence, \begin{equation} \label{eqn:non-expansion 3} \Phi^{-t}(D^u(z,\epsilon_2 K^{-1})) \subset D^u(\Phi^{-t}(z),\epsilon_2) \subset D^u(\Phi^{-t}(z),\epsilon) \end{equation} for any $t \geq 0$. It implies the inclusion (\ref{eqn:non-expansion *}) for $z \not\in U_1$. If $z \in U_1$, put $T=\inf\{t>0 {\;|\;} \Phi^{-t}(z) \not\in U_1\} \in (0,\infty]$. For $0 \leq t <T$, we have \begin{equation*} \Phi^{-t}(D^u(z,\delta_2)) \subset D^u(\Phi^{-t}(z),\epsilon_2 K^{-1}) \subset D^u(\Phi^{-t}(z),\epsilon). \end{equation*} It implies the inclusion (\ref{eqn:non-expansion *}) for the case $T=\infty$ or $0< t <T$. If $T$ is finite, then $\Phi^{-T}(D^u(z,\delta_2)) \subset D^u(\Phi^{-T}(z),\epsilon_2K^{-1})$. Since $\Phi^{-T}(z) \not\in U_1$, we have $\Phi^{-(T+t')}(D^u(z,\delta_2)) \subset D^u(\Phi^{-(T+t')}(z),\epsilon)$ for any $t'\geq 0$ by (\ref{eqn:non-expansion 3}). It implies the inclusion (\ref{eqn:non-expansion *}) for the case $z \in U_1$ and $t \geq T$. \subsection{Hyperbolicity of periodic orbits} \label{sec:hyp} The following proposition is the last piece of the proof of Proposition \ref{prop:Anosov}. \begin{prop} \label{prop:hyperbolicity} If a $C^2$ $E^s$-fine $\mbox{$\mathbb{P}$\rm{A}}$ flow $\Phi$ is $s$- and $u$-bounded, then $\mbox{\rm{Per}}^u_*(\Phi)$ is empty. \end{prop} \begin{proof} [Proof of Proposition \ref{prop:Anosov}] Let $\Phi$ be a topologically transitive $\mbox{$\mathbb{P}$\rm{A}}$ flow with a $C^2$ $\mbox{$\mathbb{P}$\rm{A}}$ splitting. Topological transitivity implies that $\Omega_*$ is empty. By Proposition \ref{prop:regular point}, all periodic points are $s$- and $u$-regular. In particular, $\Phi$ and $\Phi^{-1}$ are $E^s$-fine. By Proposition \ref{prop:expansion} applied to $\Phi$ and $\Phi^{-1}$, we see that $\mbox{\rm{Per}}_*(\Phi)$ consists of a finite number of orbits.
Take a time-change $\Phi_1$ of $\Phi$ which admits a local invariant foliation transverse to $\mbox{\rm{Per}}_*(\Phi)$. Remark that both $\Phi_1$ and $\Phi_1^{-1}$ are $E^s$-fine. We apply Propositions \ref{prop:non-expansion} and \ref{prop:hyperbolicity} to $\Phi_1$ and $\Phi_1^{-1}$. The former implies that $\Phi_1$ is $s$- and $u$-bounded. The latter for $\Phi_1$ implies that $\mbox{\rm{Per}}_*^u(\Phi_1)$ is empty, and the same for $\Phi_1^{-1}$ implies that $\mbox{\rm{Per}}_*^s(\Phi_1)$ is empty. Hence, $\Phi_1$ is an Anosov flow by Corollary \ref{cor:Anosov}. Since $\Phi$ is a time-change of $\Phi_1$, it is an Anosov flow as well. \end{proof} The rest of the subsection is devoted to the proof of Proposition \ref{prop:hyperbolicity}. Fix a $C^2$ $E^s$-fine $\mbox{$\mathbb{P}$\rm{A}}$ flow $\Phi$ on a closed three-dimensional manifold $M$ which is $s$- and $u$-bounded. Let $TM=E^u +E^s$ be a $\mbox{$\mathbb{P}$\rm{A}}$ splitting of $\Phi$ such that $E^s$ generates a $C^2$ foliation $\mathcal{F}^s$. Remark that $D^s(z,\delta) \subset \mathcal{F}^s(z)$ for any $z \in M$ and $\delta>0$. Take a family $\{\psi_z\}_{z \in M}$ of $C^2$ embeddings from $[-1,1]^3$ to $M$ such that $\psi_z(0,0,0)=z$, $\psi_z([-1,1]^2 \times y) \subset \mathcal{F}^s(\psi_z(0,0,y))$ for any $y \in [-1,1]$, and the map $(z,w) \mapsto \psi_z(w)$ from $M \times [-1,1]^3$ to $M$ is of class $C^2$. By $B(z,\delta)$, we denote the closed ball of radius $\delta$ which is centered at $z$. There exists $\epsilon_0>0$ such that $B(z,8\epsilon_0) \subset \mbox{\rm{Im} } \psi_z$ for any $z \in M$. Since $\Phi$ is $s$- and $u$-bounded, we can take $\delta_0>0$ so that $\Phi^t(D^s(z,\delta_0)) \subset D^s(\Phi^t(z),\epsilon_0)$ and $\Phi^{-t}(D^u(z,\delta_0)) \subset D^u(\Phi^{-t}(z),\epsilon_0)$ for any $z \in M$ and $t \geq 0$. Suppose that $\mbox{\rm{Per}}^u_*(\Phi)$ is non-empty and contains a point $p$.
There exists a continuous injective map $H:(-1,1)^2 {\rightarrow} M$ such that \begin{enumerate} \item $H(0,0)=p$, \item $\mbox{\rm{Im} } H \subset \mbox{\rm{Im} } \psi_p$, \item $H(x,\cdot)$ is of class $C^2$ and $H(x \times (-1,1)) \subset D^u(H(x,0),\delta_0)$ for any $x \in (-1,1)$, and \item $H((-1,1) \times y) \subset D^s(H(0,y),\delta_0)$ for any $y \in (-1,1)$. \end{enumerate} We put $V=\bigcup_{x \in (-1,1)}D^u(H(x,0),\delta_0)$. By Proposition \ref{prop:expansion}, $\mbox{\rm{Per}}^u_*(\Phi)$ consists of finitely many orbits. Since $\Phi$ is topologically transitive, the union of periodic orbits is a dense subset of $M$. Hence, $H((-1,1)^2)$ contains a hyperbolic periodic point $q$. Put $(x_*,y_*)=H^{-1}(q)$. Replacing $H$ by $(x,y) \mapsto H(\pm x,\pm y)$ if necessary, we may assume that $x_* > 0$ and $y_* > 0$. For $x \in [0,x_*]$ and $t \geq 0$, we put $J^u(x,t)=\Phi^{-t}(H(x \times [0,y_*]))$. Let $V(x,t)$ be the arcwise connected component of $\Phi^{-t}(V) \cap B(\Phi^{-t}(H(x,0)),3\epsilon_0)$ which contains $\Phi^{-t}(H(x,0))$. Since \begin{equation*} J^u(x,t) \subset \Phi^{-t}(D^u(H(x,0),\delta_0)) \subset D^u(\Phi^{-t}(H(x,0)),\epsilon_0), \end{equation*} we have $J^u(x,t) \subset V(x,t)$. Let $\pi_y:\mathbb{R}^3 \to \mathbb{R}$ be the map defined by $\pi_y(w,x,y)=y$. Put $I(x,t)=\pi_y \circ \psi_{\Phi^{-t}(H(x,0))}^{-1}(J^u(x,t))$. We define a map $h_{x,t}:[0,y_*] {\rightarrow} I(x,t)$ by \begin{equation*} h_{x,t}(y)=\pi_y \circ \psi_{\Phi^{-t}(H(x,0))}^{-1}(\Phi^{-t}(H(x,y))). \end{equation*} Remark that $h_{x,t}$ is a $C^2$ diffeomorphism. The map $h_{x_*,t} \circ h_{0,0}^{-1}$ can be decomposed in two ways: \begin{align} \label{eqn:distortion} h_{x_*,t} \circ h_{0,0}^{-1} & = \left(h_{x_*,t} \circ h_{x_*,0}^{-1}\right) \circ \left(h_{x_*,0} \circ h_{0,0}^{-1}\right) \\ & = \left(h_{x_*,t} \circ h_{0,t}^{-1}\right) \circ \left(h_{0,t} \circ h_{0,0}^{-1}\right).\nonumber \end{align} We estimate the distortion of each decomposition. This will lead us to a contradiction.
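Remark that the contradiction will be derived from the following elementary two-sided estimate, which follows from (\ref{eqn:distortion 2}) together with the identity $\mbox{\rm{dist}}(f^{-1},f(J))=\mbox{\rm{dist}}(f,J)$: if $g$ is a $C^2$ diffeomorphism on an interval $I$ and $f$ is a $C^2$ diffeomorphism on $g(I)$, then
\begin{displaymath}
\left|\mbox{\rm{dist}}(f \circ g,I)-\mbox{\rm{dist}}(g,I)\right| \leq \mbox{\rm{dist}}(f,g(I)).
\end{displaymath}
In particular, applying this to the last decomposition in (\ref{eqn:distortion}) with $g=h_{0,t} \circ h_{0,0}^{-1}$ and $f=h_{x_*,t} \circ h_{0,t}^{-1}$, if $\mbox{\rm{dist}}(h_{x_*,t} \circ h_{0,t}^{-1},I(0,t))$ is bounded with respect to $t$ while $\mbox{\rm{dist}}(h_{0,t} \circ h_{0,0}^{-1},I(0,0))$ is unbounded, then $\mbox{\rm{dist}}(h_{x_*,t} \circ h_{0,0}^{-1},I(0,0))$ is unbounded.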
\begin{lemm} \label{lemma:distortion 1} $\{\mbox{\rm{dist}}(h_{x_*,t} \circ h_{x_*,0}^{-1},I(x_*,0)) {\;|\;} t \geq 0\}$ is bounded. \end{lemm} \begin{proof} Let $T$ be the period of $q$ and put $h=h_{x_*,T} \circ h_{x_*,0}^{-1}$. Then, $I(x_*,T) \subset I(x_*,0)$ and the map $h$ is $C^2$ conjugate to the local return map of $\Phi^{-1}$ on $J^u(x_*,0)$. Since $q$ is a hyperbolic periodic point, there exist $C>0$ and $\lambda \in (0,1)$ such that $|h^n(I(x_*,T))|<C\lambda^n$ for any $n \geq 0$. Take $K>0$ so that $|D(\log |Dh|)(y)|<K$ for any $y \in I(x_*,0)$. Since $h^m(I(x_*,0))=h^{m-1}(I(x_*,T))$ for $m \geq 1$, we have \begin{align*} \mbox{\rm{dist}}(h_{x_*,nT} \circ h_{x_*,0}^{-1},I(x_*,0)) & = \mbox{\rm{dist}}(h^n,I(x_*,0)) \\ & = \sum_{m=0}^{n-1} \mbox{\rm{dist}}(h, h^m(I(x_*,0)))\\ & \leq K|I(x_*,0)| +\sum_{m=1}^{n-1} K\cdot C\lambda^{m-1} \leq K|I(x_*,0)|+KC(1-\lambda)^{-1}. \end{align*} Since $\mbox{\rm{dist}}(h_{x_*,t} \circ h_{x_*,0}^{-1},I(x_*,0))$ is continuous with respect to $t$, it is bounded on $[0,T]$. Hence, the lemma follows from the formula (\ref{eqn:distortion 2}). \end{proof} We will show that the distortion of the last term of (\ref{eqn:distortion}) is unbounded. Once this is shown, it contradicts Lemma \ref{lemma:distortion 1}, and hence, $\mbox{\rm{Per}}_*^u(\Phi)$ is empty. \begin{lemm} \label{lemma:distortion 2} $\{\mbox{\rm{dist}}(h_{0,t} \circ h_{0,0}^{-1},I(0,0)) {\;|\;} t \geq 0\}$ is unbounded. \end{lemm} \begin{proof} Let $T$ be the period of $p=H(0,0)$. Put $h=h_{0,T} \circ h_{0,0}^{-1}$. Since $h$ is $C^2$ conjugate to the return map of $\Phi^{-1}$ on $J^u(0,0)$, we have $I(0,T)=h(I(0,0)) \subset I(0,0)$, $\bigcap_{n \geq 1}h^n(I(0,0))=\{0\}$, $h(0)=0$, and $|Dh(0)|=1$. By the formula (\ref{eqn:distortion 3}), \begin{align*} \mbox{\rm{dist}}(h_{0,nT} \circ h_{0,0}^{-1},I(0,0)) & =\mbox{\rm{dist}}(h^n,I(0,0))\\ & \geq \left|\log|Dh^n(0)|-\log|h^n(I(0,0))|+\log|I(0,0)|\right|\\ & =\left|-\log|h^n(I(0,0))|+\log|I(0,0)|\right|.
\end{align*} The last term goes to infinity as $n$ tends to infinity since $\lim_{n {\rightarrow} \infty}|h^n(I(0,0))|=0$. \end{proof} To estimate the distortion of $h_{x_*,t} \circ h_{0,t}^{-1}$, we need some preparations. \begin{lemm} \label{lemma:distortion 3} If $V(x_1,t) \cap V(x_2,t) \neq \emptyset$ for $0 \leq x_1<x_2 \leq x_*$ and $t \geq 0$, then \begin{equation*} \bigcup_{x \in [x_1,x_2]}J^u(x,t) \subset \mbox{\rm{Im} } \psi_{\Phi^{-t}(H(x_1,0))}. \end{equation*} \end{lemm} \begin{proof} Since $V(x_1,t) \cup V(x_2,t)$ is arcwise connected, we can take a continuous map $L:[0,1] {\rightarrow} V(x_1,t) \cup V(x_2,t)$ such that $L(0)=\Phi^{-t}(H(x_1,0))$ and $L(1)=\Phi^{-t}(H(x_2,0))$. It defines a continuous map $l:[0,1] {\rightarrow} (-1,1)$ such that $L(\xi) \in \Phi^{-t}(D^u(H(l(\xi),0),\delta_0))$ for any $\xi \in [0,1]$. The diameter of $V(x_1,t) \cup V(x_2,t)$ is not greater than $6\epsilon_0$ and \begin{equation*} J^u(l(\xi),t) \cup \{L(\xi)\} \subset \Phi^{-t}(D^u(H(l(\xi),0),\delta_0)) \subset D^u(\Phi^{-t}(H(l(\xi),0)), \epsilon_0). \end{equation*} Hence, $J^u(l(\xi),t)$ is contained in $B(\Phi^{-t}(H(x_1,0)),8\epsilon_0)$ for any $\xi \in [0,1]$. Since $[x_1,x_2] \subset \mbox{\rm{Im} } l$ and $B(\Phi^{-t}(H(x_1,0)),8\epsilon_0) \subset \mbox{\rm{Im} } \psi_{\Phi^{-t}(H(x_1,0))}$, the proof is complete. \end{proof} Let $\mbox{\rm{Vol}}(\cdot)$ be the volume on $M$ associated with the fixed Riemannian metric of $M$. \begin{lemm} \label{lemma:distortion 4} There exists a constant $K_*>0$ such that \begin{equation*} |I(x,t)| \leq K_* \mbox{\rm{Vol}}(V(x,t)) \end{equation*} for any $x \in [0,x_*]$ and $t \geq 0$. \end{lemm} \begin{proof} Since $H([0,x_*] \times [0,y_*])$ is contained in $\mbox{\rm{Int}}\; V$, there exists $\epsilon_1>0$ such that $D^s(z,\epsilon_1) \subset V$ for any $z \in H([0,x_*] \times [0,y_*])$.
Since $\Phi$ is $s$- and $u$-bounded, we can take $\delta_1>0$ such that $\Phi^t(D^s(z,\delta_1)) \subset D^s(\Phi^t(z),\epsilon_1)$ and $\Phi^{-t}(D^u(z,\delta_1)) \subset D^u(\Phi^{-t}(z),\epsilon_1)$ for any $z \in M$ and $t \geq 0$. Put $C(x,t)=\bigcup_{z \in J^u(x,t)}D^s(z,\delta_1)$. For $z=\Phi^{-t}(H(x,y)) \in J^u(x,t)$, $\Phi^t(D^s(z,\delta_1))$ is contained in $D^s(\Phi^t(z),\epsilon_1) = D^s(H(x,y),\epsilon_1)$, and hence, in $V$. Since \begin{equation*} J^u(x,t) \subset \Phi^{-t}(D^u(H(x,0),\delta_0)) \subset D^u(\Phi^{-t}(H(x,0)),\epsilon_0), \end{equation*} we have \begin{align*} C(x,t) & \subset \Phi^{-t}(V) \cap B(\Phi^{-t}(H(x,0)),\epsilon_0+\delta_1) \subset V(x,t). \end{align*} By the $C^2$ smoothness of the map $(z,w,x,y) \mapsto \psi_z(w,x,y)$, there exists $K_0>0$ such that $K_0^{-1}\|v\| \leq \|D\psi_z^{-1}(v)\| \leq K_0\|v\|$ for any $z \in M$, $z' \in \mbox{\rm{Im} } \psi_z$, and $v \in T_{z'}M$. Let $\mbox{Leb}_n$ be the Lebesgue measure on $\mathbb{R}^n$. Since \begin{equation*} \mbox{Leb}_2\left(\psi_{\Phi^{-t}(H(x,0))}^{-1} \left(D^s(z,\delta_1)\right)\right) \geq \pi \delta_1^2K_0^{-2} \end{equation*} for any $z \in J^u(x,t)$, we have \begin{align} \label{eqn:lemma-distortion 4} |I(x,t)| &=|\pi_y \circ \psi_{\Phi^{-t}(H(x,0))}^{-1}(J^u(x,t))| \\ & \leq \frac{K_0^2}{\pi\cdot \delta_1^2} \cdot \mbox{Leb}_3\left(\psi_{\Phi^{-t}(H(x,0))}^{-1}(C(x,t))\right) \nonumber\\ & \leq \frac{K_0^5}{\pi\cdot \delta_1^2} \cdot \mbox{\rm{Vol}}\left(C(x,t)\right) \leq \frac{K_0^5}{\pi\cdot \delta_1^2} \cdot \mbox{\rm{Vol}}\left(V(x,t)\right). \nonumber \end{align} \end{proof} Now, we estimate the distortion of $h_{x_*,t} \circ h_{0,t}^{-1}$. \begin{lemm} \label{lemma:distortion 5} $\{\mbox{\rm{dist}}(h_{x_*,t} \circ h_{0,t}^{-1},I(0,t)) {\;|\;} t \geq 0\}$ is bounded.
\end{lemm} \begin{proof} For $z,z' \in M$, the map $\psi_z^{-1} \circ \psi_{z'}$ can be written as \begin{equation*} \psi_z^{-1} \circ \psi_{z'}(w,x,y) =(f_{z,z'}(w,x,y),g_{z,z'}(y)) \end{equation*} for some map $f_{z,z'}$ valued in $\mathbb{R}^2$ and some function $g_{z,z'}$. Take $K_1>0$ so that $|D(\log|Dg_{z,z'}|)(y)|\leq K_1$ for any $z,z' \in M$ and any $y$ in the domain of $g_{z,z'}$. By Lemma \ref{lemma:distortion 3}, if $V(x_1,t) \cap V(x_2,t) \neq \emptyset$ for $0 \leq x_1 < x_2 \leq x_*$, then $\pi_y \circ \psi_{\Phi^{-t}(H(x_1,0))}^{-1}(J^u(x,t))=I(x_1,t)$ for any $x \in [x_1,x_2]$. It implies that $h_{x_2,t} \circ h_{x_1,t}^{-1}=g_{z_2(t),z_1(t)}$, where $z_i(t)=\Phi^{-t}(H(x_i,0))$. Hence, by (\ref{eqn:distortion 1}), we have \begin{equation*} \mbox{\rm{dist}}(h_{x_2,t} \circ h_{x_1,t}^{-1},I(x_1,t)) \leq K_1|I(x_1,t)|. \end{equation*} Fix $t \geq 0$. Let $\mathcal{S}$ be the set of sequences $(x_i)_{i=0}^m$ that satisfy $x_0=0$, $x_m=x_*$, and $V(x_{i+1},t) \cap V(x_i,t) \neq \emptyset$ for any $i=0,\cdots,m-1$. It is non-empty by the compactness of $[0,x_*]$. Take $(x_i)_{i=0}^m \in \mathcal{S}$ such that $m$ is minimal. For any $z \in M$, the minimality of $m$ implies that the number of the sets $V(x_i,t)$ containing $z$ is at most two. Let $K_*$ be the constant given by Lemma \ref{lemma:distortion 4}. Then, we have \begin{align*} \mbox{\rm{dist}}(h_{x_*,t} \circ h_{0,t}^{-1},I(0,t)) & \leq \sum_{i=0}^{m-1} \mbox{\rm{dist}}(h_{x_{i+1},t} \circ h_{x_i,t}^{-1},I(x_i,t))\\ & \leq K_1 \sum_{i=0}^{m-1}|I(x_i,t)|\\ & \leq K_* K_1 \sum_{i=0}^{m-1}\mbox{\rm{Vol}}(V(x_i,t))\\ & \leq 2K_*K_1 \mbox{\rm{Vol}}(M). \end{align*} Since $K_1$ and $K_*$ do not depend on $t$, the lemma is proved. \end{proof} Since the second factor of the middle term of (\ref{eqn:distortion}) does not depend on $t$, Lemma \ref{lemma:distortion 1} implies that the distortion of the middle term is bounded with respect to $t$.
It contradicts Lemmas \ref{lemma:distortion 2} and \ref{lemma:distortion 5}, which imply that the distortion of the last term of (\ref{eqn:distortion}) is unbounded with respect to $t$. Therefore, $\mbox{\rm{Per}}_*^u(\Phi)$ is empty. Now, the proof of Proposition \ref{prop:hyperbolicity} is finished. \section{Foliations with tangentially contracting flows} \label{sec:GA} \label{sec:TC} In this section, we prove Theorem \ref{thm:TC}. Let $\mathcal{F}$ be a $C^2$ foliation on a closed three-dimensional manifold $M$. Suppose that $\mathcal{F}$ admits a $C^2$ tangentially contracting flow $\Phi$. Let $C>0$ and $\lambda>1$ be constants such that $\|N\Phi^t|_{(T\mathcal{F}/T\Phi)(z)}\| \leq C\lambda^{-t}$ for any $z \in M$ and $t \geq 0$. \begin{lemm} \label{lemma:TC is PA} There exists a continuous subbundle $E^u$ of $TM$ such that $\Phi$ is a $\mbox{$\mathbb{P}$\rm{A}}$ flow with a $\mbox{$\mathbb{P}$\rm{A}}$ splitting $TM=T\mathcal{F} + E^u$. \end{lemm} \begin{proof} The proof is almost identical to Lemme IV.1.1 in \cite{Gh2}. The differential of the flow $\Phi$ induces a flow $N_\mathcal{F}\Phi$ on $TM/T\mathcal{F}$. Let $S_*$ be the set of points $z \in M$ that satisfies \begin{equation} \label{eqn:TC PA} \limsup_{t {\rightarrow} \infty}\frac{1}{t} \log \|N_\mathcal{F}\Phi^t_z\|\leq -\frac{\log\lambda}{3}. \end{equation} We will show that $S_*$ must be empty. Once it is shown, we have \begin{equation*} \liminf_{t {\rightarrow} \infty} \frac{\|N\Phi^t_{(T\mathcal{F}/T\Phi)(z)}\|}{\|N_\mathcal{F}\Phi^t_z\|} =0 \end{equation*} for any $z \in M$. By a standard argument (see {\it e.g. } Proposition 2.3 of \cite{As3}), we can show that there exists a continuous subbundle $E^u$ of $TM$ such that $TM=T\mathcal{F} + E^u$ is a $\mbox{$\mathbb{P}$\rm{A}}$ splitting for $\Phi$. Suppose that $S_*$ is not empty. Take a point $z_0$ in $S_*$. First, we claim that $\Phi$ admits an attracting periodic point. 
As an accumulation point of the uniform measures on $\{\Phi^t(z_0) {\;|\;} t \in [0,T]\}$ with $T {\rightarrow} \infty$, we obtain a $\Phi$-invariant Borel probability measure $m_*$ such that \begin{displaymath} \int_M \left(\left.\frac{d}{dt} \log\|N_\mathcal{F}\Phi^t_z\|\right|_{t=0}\right) dm_*(z) \leq -\frac{\log\lambda}{3}. \end{displaymath} It implies that there exists a Borel subset $U_*$ of $M$ such that $m_*(U_*)>0$ and all Lyapunov exponents of $\Phi$ are negative on $U_*$. By Pesin theory, the $\omega$-limit set of any point of $U_*$ is an attracting periodic point. Therefore, the claim is proved. Suppose that $z_a$ is an attracting periodic point of $\Phi$. Since $\Phi$ is tangentially contracting, there exists a compact embedded annulus $A$ in $\mathcal{F}(z_a)$ such that $\Phi^t(A) \subset \mbox{\rm{Int}}\; A$ for any $t>0$, $\bigcap_{t \geq 0}\Phi^t(A)=\mathcal{O}(z_a)$, and $\bigcup_{t \geq 0}\Phi^{-t}(A)=\mathcal{F}(z_a)$. In particular, the leaf $\mathcal{F}^s(z_a)$ is diffeomorphic to $S^1 \times \mathbb{R}$. Since $z_a$ is attracting, we can take a compact neighborhood $U$ of $\mathcal{O}(z_a)$ in $M$ such that $\partial A \cap U=\emptyset$ and $\Phi^t(U) \subset \mbox{\rm{Int}}\; U$ for any $t > 0$. By the choice of $A$, we have $U \cap \mathcal{F}(z_a)=U \cap A$. It implies that $\mathcal{F}(z_a)$ is a proper leaf. By Lemma \ref{lemma:semi-proper}, $\mathcal{F}^s(z_a)$ has trivial holonomy. However, it contradicts that $\mathcal{O}(z_a)$ is an attracting periodic orbit. \end{proof} \begin{lemm} \label{lemna:TC transitive} The $\mbox{$\mathbb{P}$\rm{A}}$ flow $\Phi$ is $E^s$-fine. \end{lemm} \begin{proof} By the strong stable manifold theorem, each leaf of $\mathcal{F}$ is diffeomorphic to $\mathbb{R}^2$ or $S^1 \times \mathbb{R}$. In particular, $\mathcal{F}$ has no closed leaves. By Duminy's theorem, there exists no exceptional minimal set of $\mathcal{F}$. Hence, each leaf of $\mathcal{F}$ is dense in $M$. 
By the same argument as in the above lemma, if $z_0$ is a $u$-irregular periodic point, then $\mathcal{F}(z_0)$ is semi-proper. However, it contradicts that each leaf of $\mathcal{F}$ is dense in $M$. Therefore, any periodic point is $u$-regular. Since $\Phi$ is tangentially contracting, any periodic point is $s$-regular. By Proposition \ref{prop:reduced dichotomy}, either $\Omega_*$ is empty or $M=W^u(\Omega_*^s) \cap \Omega_*^u$. Since $\mathcal{F}$ has no closed leaves, $\Omega_*^s$ is empty. It implies that the latter case cannot occur. Therefore, $\Omega_*$ is empty. \end{proof} \begin{prop} The flow $\Phi$ is an Anosov flow. \end{prop} \begin{proof} Let $TM=T\mathcal{F} +E^u$ be a $\mbox{$\mathbb{P}$\rm{A}}$ splitting for $\Phi$. Since $\Phi$ is tangentially contracting with respect to $\mathcal{F}$, we have $\mbox{\rm{Per}}^s_*(\Phi)=\emptyset$. In particular, $\mbox{\rm{Per}}_*(\Phi)=\mbox{\rm{Per}}_*^u(\Phi)$. By Proposition \ref{prop:expansion}, $\mbox{\rm{Per}}^u_*(\Phi)$ consists of finitely many non-hyperbolic periodic orbits if it is not empty. Any time-change of $\Phi$ preserves each leaf of $\mathcal{F}$ and is tangentially contracting with respect to $\mathcal{F}$. Hence, we may assume that $\Phi$ admits a local invariant foliation transverse to $\mbox{\rm{Per}}_*(\Phi)=\mbox{\rm{Per}}_*^u(\Phi)$ (see the beginning of Subsection \ref{sec:hyp}). By Proposition \ref{prop:non-expansion}, $\Phi$ is $u$-bounded. By the same argument as in the hyperbolic case in \cite{Do}, there exists a continuous $D\Phi$-invariant splitting $T\mathcal{F}=T\Phi \oplus E^{ss}$ and constants $C>0$ and $\lambda>1$ such that $\|D\Phi^t|_{E^{ss}(z)}\| \leq C\lambda^{-t}$ for any $z \in M$ and $t \geq 0$. It implies that $\{\|D\Phi^t|_{T\mathcal{F}(z)}\| {\;|\;} z \in M, t \geq 0\}$ is bounded. Hence, $\Phi$ is $s$-bounded.
Since $\Phi$ is a $C^2$ $E^s$-fine $\mbox{$\mathbb{P}$\rm{A}}$ flow, Proposition \ref{prop:hyperbolicity} implies $\mbox{\rm{Per}}_*(\Phi)=\mbox{\rm{Per}}_*^u(\Phi)=\emptyset$. By Corollary \ref{cor:Anosov}, $\Phi$ is an Anosov flow. \end{proof} Now, we prove Theorem \ref{thm:TC}. By Th\'eor\`eme 4.1 of \cite{Gh}, $\mathcal{F}$ admits a $C^r$ transverse projective structure. By Th\'eor\`eme 5.1 of \cite{Ba} ({\it cf.} Th\'eor\`eme 4.7 of \cite{Gh}), $\Phi$ is topologically equivalent to an algebraic Anosov flow. Therefore, $\mathcal{F}$ is homeomorphic to the weak stable foliation of an algebraic Anosov flow. Such a $C^r$ foliation with $r \geq 2$ has been classified completely by Ghys \cite{Gh} and Ghys and Sergiescu \cite{GS}. Their result implies that $\mathcal{F}$ is $C^r$-diffeomorphic to the weak stable foliation of an algebraic Anosov flow.
\section{Introduction} Social media has become an important tool in election campaigns. Political actors receive a variety of benefits from using social media; for example, influencing voting behavior~\cite{kovic2020brute}, attracting new party members~\cite{gibson2018friend}, and provoking political debate~\cite{paul2017compass}. Consequently, they increasingly expect their messages on social media to have the above-mentioned effects. General users are also often exposed to political topics on social media. On Twitter, the U.S. election alone was the second most tweeted event in 2016~\cite{independent_news}. Thus, how social media is used has a strong influence on politics and election topics. It remains unclear, however, how electoral candidates handle social media depending on their chances of winning an election. Many existing studies addressing political communication on social media focus on binary opposition, such as the two-party system, i.e., ruling and opposition parties~\cite{heiss2019drives,keller2018followers,bobba2019social}. Little research has been conducted on fringe candidates' political communication on social media because they have little influence and are unlikely to affect the overall outcome of an election in a two-party system like that of the U.S. In recent years, however, fringe candidates have often stimulated political discussions on social media and turned out their followers, running as candidates even though they have low chances of winning, for purposes such as fusion voting~\cite{fusion} and issue awareness~\cite{issue, kitunzi2016influence}. The freshness of their slogans and their unconventional movements sometimes succeed in attracting people's attention and contribute to their wins; for example, in Japan, the Trumpian-inspired party (Sanseito) won a seat in the House of Councilors in 2022~\cite{sanseito}.
Their increasing presence makes it important to understand their behaviors and strategies on social media, which cannot be covered by an analysis of the existing two-party system shaped against the backdrop of U.S. society. Likewise, it is not well understood how leading candidates, who rarely lose elections, use social media in comparison. In this work, we aim to deepen our understanding of how candidates' social media strategies during elections differ according to their chances of winning. To this aim, we collect the candidates' posts and user information in the Japanese Twitter-sphere leading up to the 2022 Upper House election and characterize the candidates classified into four groups according to their chances of winning (the almost winning, even, almost losing, and proportional representation groups). We tackle the following research questions through comparisons among the four groups. \noindent \textbf{RQ1: What are the characteristics of the frequency of tweets and user information?} We examine statistical differences between the four groups in basic tweet behavior and user information. This provides useful insight into how each group deals with social media. \noindent \textbf{RQ2: What kind of content does each group post during the election period?} We analyze what kind of content is likely to be posted during election periods through a topic model. By identifying differences in the content that each group is most likely to discuss, it becomes clear what election issues they want the public to pay attention to and what they want to claim. We expect to see differences in social media strategies based on the chances of winning. \noindent \textbf{RQ3: What type of content affects user engagement?} We analyze which content is more likely to gain user engagement through regression analysis. How candidates encourage their followers to share their content has so far not been a focus of extensive study.
We seek to fill the gap by understanding how users are likely to respond to each group's content in Twitter communication during an election. This also helps parties effectively promote participatory democracy and their campaign policies globally. \noindent \textbf{RQ4: Is there a difference in the way they communicate with other users on social media?} One of the most efficient ways for candidates to communicate directly with voters and other candidates is to utilize reply functions. We attempt to elucidate how the electoral situation makes a difference in the way candidates communicate with them. By answering these research questions, we make the following contributions: \begin{itemize} \item We revealed that the number of followers, i.e., popularity on social media, does not necessarily increase the chances of winning an election. Nonetheless, our analysis also finds that candidates with little chance of winning are more aggressive in their social media strategies than other candidates (Chapter 4). \item Candidates in a state of close competition tend to tweet more about the neighborhoods where their constituents live, while candidates with little chance of winning the election tend to post tweets asking all of their followers to vote or share (Chapter 5). \item Tweets asking all of their followers to vote or share, which candidates with little chance of winning the election frequently post, were unpopular in terms of user engagement (Chapter 6). \item Candidates who are more likely to win use social media for broadcasting rather than for interacting with voters. Also, unlike the findings of existing studies, candidates in a state of close competition have fewer interactions with other users (Chapter 7). \end{itemize} To the best of our knowledge, this is the first study to examine candidates' behavior on social media according to the odds of winning an election.
Our study benefits from exploring how variations in candidates' situations may directly affect the way they engage with voters through social media, offering insight into how each candidate approaches social media communication in an election context. \section{Related Work} \subsection{Political communication on social media} The impact of social media on political communication has been a significant research topic~\cite{haq2020survey}. Before the advent of social media, political communication based on democracy was mainly mediated by traditional mass media. The emergence of social media has profoundly changed the form of political communication by providing new spaces for conversation and social interaction~\cite{papacharissi2004democracy}. This change brought benefits such as revitalized political debate and increased diversity~\cite{chadwick2008web}. On the other hand, social media also brings negative aspects, such as political filter bubbles or echo chambers~\cite{barbera2015tweeting}. Despite these advantages and disadvantages, there is no doubt that social media is currently the most important source of political information for voters and an important forum for political actors to disseminate information~\cite{knoll2020social,pew_research}. Much research has been conducted on how public users participate in political communication on social media~\cite{stier2020populist,blassnig2019populist}. In particular, public users with a strong voice in the domain of politics have received attention in studies of political behavior. Even though they are not politicians, they have many followers and influence the behavior of other users~\cite{bode2016politics,vaccari2015follow}. On the other hand, their views do not necessarily represent the views of the groups to which they belong~\cite{barbera2015understanding}.
For political actors, the use of social media plays an important role because it can serve various purposes; they can inform a broader audience, interact with voters, or mobilize their followers~\cite{magin2017campaigning}. They attempt to gain support and spread their claims through political communication on social media platforms~\cite{klinger2015emergence}. They also try to obtain a lot of user engagement, e.g., liking, commenting, and sharing with other users, for success in social media communication~\cite{popa2020informing}. The amount of user engagement depends on many factors; profile characteristics~\cite{keller2018followers,vaccari2015follow}, post content~\cite{xenos2017understanding}, sentiment and style characteristics~\cite{blassnig2021popularity,heiss2019drives}, attached images~\cite{farkas2021images}, and polarization rhetoric~\cite{ballard2022dynamics}. Political actors are (consciously or unconsciously) concerned about what content to show and how to show it in order to gain user engagement. Differences in political communication on social media also emerge depending on the position and affiliation of political actors; political party~\cite{keller2018followers,blassnig2021popularity}, right-wing or left-wing~\cite{morstatter2018alt}, political leader~\cite{vaccari2015follow,jain2021twitter}, populist or not~\cite{bobba2019social,blassnig2019populist}, and famous or not~\cite{graham2013between}. These studies have shown that position and affiliation are strongly related to the content of posts, the ease of gaining engagement, and the manner of replying. However, it is still unclear how political communication on social media depends on the chance of winning an election. \subsection{Election campaigns} Elections are a centerpiece of democracy. They tend to increase the volume of posts related to politics and are a catalyst for political discussion on social media~\cite{ahmed2014my,jungherr2016twitter}.
Most political actors are naturally interested in the outcome of elections. In research as well, social media, in its role as a social sensor, has been considered an alternative to polls or a possible predictor of election outcomes~\cite{tumasjan2010predicting,kulshrestha2017politically,burnap2016140}. However, it is currently considered difficult to predict the outcome of elections from social media because of the complex relationship between post engagement and electoral success~\cite{jungherr2017digital}. The political discussion among politicians and general users during the election period is very active, making it a useful subject for analysis. While most studies have focused on presidential and congressional elections in the U.S.~\cite{paul2017compass,bovet2018validation}, some studies analyze the relationship between social media and elections in countries outside the U.S., since the relationship between social media and political communication is similar for elections in most countries; the U.K.~\cite{burnap2016140}, Germany~\cite{jurgens2011small}, Belgium~\cite{boireau2014determining}, and Egypt~\cite{elghazaly2016political}. We focus on the political communication of candidates during Japan's elections, as in \cite{yoshida2018information, usui2018analysis}. \section{Data} \subsection{2022 Japan Election} We employ tweet data of candidates running for the 2022 Upper House of Councilors Election in Japan, which was announced on June 22 and held on July 10, 2022, to elect 125 members of the upper house of the National Diet, as the subject of our analysis. There are several reasons why this election is a desirable case study for our aim. First, in the 2022 election, candidates could choose between two ways to run; a proportional representation system or a constituency system in each prefecture.
A proportional representation system reflects the overall distribution of public support for each political party and ensures minority groups have a measure of representation proportionate to their electoral support. A constituency system selects one or several representatives, depending on the size of the electoral district, in proportion to the number of individual votes for candidates. In effect, two electoral systems run in parallel during one election period, which allows us to analyze social media strategies from various perspectives. Second, Twitter is quite popular in Japan, with approximately 60 million users, roughly the same number of daily active users as in the U.S.~\cite{nhk_news}. It is reported that 83\% of the candidates in the election also campaigned through Twitter, a higher percentage than for any other social media service~\cite{senkyo_dot}. For these reasons, looking at political communication on Twitter in Japan is useful in terms of post volume and multidimensional analysis when analyzing social media strategies during an election term. As a result of the election, the ruling Liberal Democratic Party (LDP) increased its seats, and the largest percentage of women so far was elected. It is interesting to note that a new party, the Trump-inspired Party (Sanseito), won seats. Two days before the election, the assassination of former prime minister Shinzo Abe caused great turmoil. \begin{table}[t] \centering \small \caption{Basic stats of four groups; almost Winning (W), Even (E), almost Losing (L), and Proportional Representation (PR) group. It shows the number of candidates, the percentage of those winning the election, and the percentage belonging to the ruling parties.
The number in parentheses is the value when including users who do not have a Twitter account.} \begin{tabular}{crrr} \toprule \multirow{2}{*}{Group} & \multicolumn{1}{c}{Number of} & \multicolumn{1}{c}{\% of winning} & \multicolumn{1}{c}{\% of the} \\ & \multicolumn{1}{c}{the candidates} & \multicolumn{1}{c}{the election} & \multicolumn{1}{c}{ruling party}\\ \midrule W & 60 (63) & 98.33\% (98.41\%) & 65.00\% (65.07\%)\\ E & 23 (23) & 47.82\% (47.82\%) & 39.13\% (39.13\%)\\ L & 217 (281) & 0.92\% (0.71\%) & 1.38\% (1.06\%)\\ PR & 124 (178) & 32.23\% (28.09\%) & 25.00\% (28.08\%)\\ \bottomrule \end{tabular} \label{basic_info} \end{table} \subsection{Data collection} We identified the Twitter accounts of the candidates in the 2022 House of Councilors Election. In all, 433 of the 545 candidates had Twitter accounts, and we crawled their user profiles and posts using the Twitter Academic API~\cite{pfeffer2022sample}. We treat their tweets as the subject of analysis, dividing them into two periods based on the election announcement date; the period from the election announcement to the election date, called the \textit{Election} term, and the two months prior to the announcement date, called the \textit{Pre-election} term, which contains almost the same number of tweets as the \textit{Election} term.
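The two analysis windows can be reproduced with a small date helper. This is a sketch under our own naming (the paper publishes no code); only the three dates come from the text — the announcement on June 22, the election on July 10, and a Pre-election start of April 22 (two months before the announcement):

```python
from datetime import date

# Term boundaries stated in the text: announcement Jun 22, election day
# Jul 10, 2022; the Pre-election term is the two months before the
# announcement. The names below are ours, not the paper's.
PRE_START = date(2022, 4, 22)
ANNOUNCE = date(2022, 6, 22)
ELECTION_DAY = date(2022, 7, 10)

def assign_term(d):
    """Return which analysis term a tweet posted on date `d` falls into."""
    if PRE_START <= d < ANNOUNCE:
        return "pre-election"
    if ANNOUNCE <= d <= ELECTION_DAY:
        return "election"
    return None  # outside both analysis windows
```

Each collected tweet can then be bucketed by passing its posting date through `assign_term`.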
\begin{figure*}[t] \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=\textwidth]{fig/follow} \subcaption{Number of following} \label{user_character_a} \end{subfigure} \hfill \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=\textwidth]{fig/follower} \subcaption{Number of followers} \label{user_character_b} \end{subfigure} \hfill \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=\textwidth]{fig/tweet} \subcaption{Number of tweets} \label{user_character_c} \end{subfigure} \hfill \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=\textwidth]{fig/year} \subcaption{Account age} \label{user_character_d} \end{subfigure} \caption{ Comparison of the four types of user information for each group, represented by box plots; (a) Number of following, (b) Number of followers, (c) Number of tweets, and (d) Account age. We test whether two groups are likely to derive from the same distribution using Mann Whitney U (MWU) tests. We express the degree of statistical significance in stars; ns: $0.05 <$ p-value $\leq 1.00$, *: p-value $\leq 0.05$, **: p-value $\leq 0.01$, ***: p-value $\leq 0.001$, and ****: p-value $\leq 0.0001$.} \label{user_character} \end{figure*} Our study focuses on how political communication on social media differs depending on candidates' chances of winning the election. We divided the candidates' accounts into four groups; almost Winning (W), Even (E), almost Losing (L), and Proportional Representation (PR). In other words, constituency candidates are assigned to three groups depending on their chances of winning, and proportional-representation candidates belong to the fourth group because whether they win the election depends largely on the popularity of the party to which they belong.
The assignment of each candidate to a group was based on the survey of the electoral situation published three days after the announcement of the election by the Asahi Shimbun, which is the third largest newspaper in the world~\cite{asahi_shin}. Their surveys report whether each candidate takes the lead in the election or not, based on their assessment of the situation and interviews. We assign the candidates judged to be superior or slightly superior to the ``almost Winning (W)'' group, those judged to be in a state of close competition to the ``Even (E)'' group, and those judged to be inferior or not mentioned to the ``almost Losing (L)'' group. The basic stats are shown in Table~\ref{basic_info}. The percentage of Groups W and E with Twitter accounts is near 100\%, while those of Groups L and PR are 77.2\% and 69.6\%, respectively. Groups that are likely to win or are unsure of their chances of winning appear to be more active in engaging in social media strategy. Also, since the percentage of candidates winning the election decreases in order from group W to group L, the survey used as the basis for the group assignment appears to be reasonable. What is the relationship between group assignments and the ruling and opposition parties? The ratio of ruling parties shows an extreme bias in the ``almost Losing (L)'' group. This is because the ruling Liberal Democratic Party (LDP) is very selective in its candidates, while some opposition parties field large numbers of candidates. The ``almost Losing (L)'' group is mostly composed of members of the opposition parties, but the other groups are unbiased mixtures of both the ruling and opposition parties. Note that our study is based on the perspective of the chance of winning an election and offers a new perspective on political communication on social media that differs from the dichotomous perspective of ruling party versus opposition.
\section{RQ1: What are the characteristics of the frequency of tweets and user information?} This section characterizes the frequency of tweets and user information for the four groups we have defined, to understand the differences among their social media strategies. \subsection{User characteristics} We examine the differences among the groups for four types of user characteristics; the number of following, the number of followers, the number of tweets, and account age. The results are shown in Figure~\ref{user_character}. The comparison is expressed by box plots, and we use Mann Whitney U tests~\cite{mann1947test} for statistical significance. The number of following does not differ significantly between any of the groups, except the pair of W and PR, as shown in Figure~\ref{user_character} (a). Although group E appears to follow fewer users than the other groups, it is apparent that the majority of users follow only 100 -- 1,000 users. The number of followers shows that group L is significantly different from the other groups, as shown in Figure~\ref{user_character} (b). The lower bar of the box plot in group L spreads downwards, suggesting that some of the users in group L have not fully gained popularity on social media. For example, the median number of followers in group W is 31,835, while that in group L is 9,849. On the other hand, several candidates in group L have more than 100,000 followers. This implies that while more followers (i.e., popularity on social media) do not necessarily increase the chance of winning an election, a certain number of followers is necessary to have a certain chance of winning, i.e., to join group W or E. \begin{figure}[t] \centering \includegraphics[width = \linewidth]{fig/time_figure} \caption{The average number of (a) all tweets, (b) replies, and (c) retweets from Apr 22 to Jul 10, 2022. Each colored line represents each group.
We set the two months prior to the date of the election announcement as the \textit{Pre-election} term and the term from the election announcement to the election date as the \textit{Election} term. } \label{time_series_tweets} \end{figure} For the number of tweets, the result in Figure~\ref{user_character} (c) shows that W and PR are significantly different, the same as for the number of following. It indicates that how leading candidates face social media differs depending on the electoral system they run under. Both the number of followers and the number of tweets are higher for the PR group, to which candidates running in the proportional representation system belong, showing that they differ in their engagement with other public users. Turning to account age, group L is significantly different from the other groups, as shown in Figure~\ref{user_character} (d). Both the number of followers and account age in group L are smaller than in other groups, reflecting the fact that the group includes some first-time candidates. Also, the number of tweets in group L is almost the same as in the other groups despite the young age of their accounts, indicating that they are actively working on their social media strategy.
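The pairwise group comparisons above rely on Mann Whitney U tests. As a self-contained illustration of the test being applied, here is a minimal pure-Python version using the normal approximation (in practice one would use a library routine such as SciPy's `mannwhitneyu`; tie correction and exact small-sample p-values are omitted for brevity):

```python
from math import erf, sqrt

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation.
    Ties get average ranks; no tie correction. Returns (U1, p-value)."""
    n1, n2 = len(x), len(y)
    data = sorted(list(x) + list(y))
    # average 1-based rank for each distinct value (handles ties)
    rank_of = {}
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        rank_of[data[i]] = (i + j + 1) / 2
        i = j
    r1 = sum(rank_of[v] for v in x)        # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2            # U statistic of the first sample
    mu = n1 * n2 / 2                       # mean of U under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return u1, p
```

For example, comparing two completely separated samples such as `[1, 2, 3]` and `[4, 5, 6]` gives $U_1 = 0$ with a p-value just under 0.05 under this approximation.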
\begin{figure}[t] \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=\textwidth]{fig/Tweets_B} \subcaption{\scriptsize{Number of tweets}} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=\textwidth]{fig/RPs_B} \subcaption{\scriptsize{Number of replies}} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=\textwidth]{fig/RTs_B} \subcaption{\scriptsize{Number of retweets}} \end{subfigure} \caption{Number of tweets for each group during the \textit{Pre-election} term.} \label{pre_election_tweet} \end{figure} \begin{figure}[t] \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=\textwidth]{fig/Tweets} \subcaption{\scriptsize{Number of tweets}} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=\textwidth]{fig/RPs} \subcaption{\scriptsize{Number of replies}} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=\textwidth]{fig/RTs} \subcaption{\scriptsize{Number of retweets}} \end{subfigure} \caption{Number of tweets for each group during the \textit{Election} term.} \label{election_tweet} \end{figure} \subsection{Time series in the number of tweets} We investigate whether there is a difference in the number of tweets by each group during the \textit{Pre-Election} and \textit{Election} terms. The time series of the average number of tweets is shown in Figure~\ref{time_series_tweets}, and the comparisons of the number of tweets in each term are shown in Figures~\ref{pre_election_tweet} and \ref{election_tweet}. The number of tweets during the \textit{Election} term tends to increase relative to the \textit{Pre-Election} term because of the activation of electoral campaigns on social media. During the \textit{Pre-Election} term, the numbers of tweets, replies, and retweets are significantly lower for group W than for the other groups.
Candidates who have already gained popularity use social media in smaller amounts. On the other hand, group L makes use of replies more than any other group during the \textit{Pre-Election} term, indicating that it is attentive to dialogue with other users even before the election. During the \textit{Election} term, group W posts significantly fewer tweets, replies, and retweets, similar to its behavior during the \textit{Pre-Election} term. Group E tends to post more tweets but fewer replies than other groups. It suggests that the competitive state against their rivals has led to an increase in their own election campaign tweets. Group L focuses on interacting with other users through the reply function, the same as during the \textit{Pre-Election} term. Interestingly, group PR retweets more during the \textit{Election} term than other groups. We consider that the candidates in group PR frequently retweet their party's propaganda because a rise in the popularity of their party directly leads to their electoral triumph, due to the electoral system under which they run. \section{RQ2: What kind of content does each group post during the election period?} In this section, to better understand the topics to which each group tends to refer, we use a topic modeling approach to group tweets into meaningful topics. Identifying the content that each group talks about makes clear what election issues they want the public to pay attention to and what they want to claim.
\begin{figure*}[t] \begin{subfigure}[b]{0.45\linewidth} \centering \includegraphics[width=\textwidth]{fig/TOPIC-pre} \subcaption{\textit{Pre-Election} term} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\linewidth} \centering \includegraphics[width=\textwidth]{fig/TOPIC} \subcaption{\textit{Election} term} \end{subfigure} \caption{Fraction of groups in each topic.} \label{fraction} \end{figure*} \subsection{Topic model} We use a topic model to group all 211,495 tweets except retweets posted by candidates belonging to each group from Apr 22 to Jul 10, 2022 (the \textit{Election} and \textit{Pre-Election} terms) into clusters and to describe their properties. We chose the Biterm Topic Model (BTM)~\cite{yan2013biterm} as our clustering method. This model is a derivative of Latent Dirichlet Allocation (LDA)~\cite{blei2003latent} and is known to extract topics with high accuracy from short texts. Specifically, this method assigns topics based on word sets with high co-occurrence rates among word pairs in a single tweet. The input to BTM is the set of word pairs for each tweet. Before creating these word pairs, as a text preprocessing step, we remove particles, auxiliaries, and stop words, and replace certain words with placeholder tokens. Each candidate frequently uses their own name and party affiliation in tweets for their election campaign. To mitigate the impact of individual and party names on the topic model, we replace each individual name with \textit{PERSONAL\_NAME} and each party name with \textit{PARTY\_NAME}. To determine the number of clusters, we searched for the appropriate number of topics by coherence score in increments of 5 in the range of 10 to 100, and chose 35 as the initial number of topics. We clustered the preprocessed tweets into 35 topics using BTM. Then, we merged pairs of topics that were similar among the estimated set of topics.
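This merging step, made concrete in the next paragraph (pairs of topics whose top-50 word lists overlap by more than 20\% become one topic), can be sketched with a small union-find. Note that using `top_n` as the overlap denominator is our assumption; the paper does not pin the denominator down:

```python
def merge_similar_topics(topic_words, top_n=50, threshold=0.2):
    """Merge topics whose top-`top_n` word sets overlap by more than
    `threshold` (as a fraction of top_n); returns groups of topic indices.
    `topic_words` is a list of word lists, ranked by within-topic weight."""
    tops = [set(words[:top_n]) for words in topic_words]
    parent = list(range(len(tops)))

    def find(i):
        # union-find root lookup with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(tops)):
        for j in range(i + 1, len(tops)):
            if len(tops[i] & tops[j]) / top_n > threshold:
                pi, pj = find(i), find(j)
                if pi != pj:
                    parent[pj] = pi  # merge the two topic clusters

    groups = {}
    for i in range(len(tops)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Merging transitively through union-find means a chain of pairwise-similar topics collapses into one merged topic, which matches reducing 35 estimated topics to a smaller labeled set.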
Concretely, the top 50 words in each topic were extracted according to $\phi$, which represents the distribution of words within a topic, and any pair of topics with more than 20\% overlap between these word lists was merged into one topic. This left 28 topics; one of the co-authors reviewed them, inspecting the words as well as the context in which they appeared, and assigned a label to each topic: Free, Economy, Emperor, Poster, Win, Family, Life, COVID-19, Reform, Tax, Think, Diplomacy, Income, Vaccination, Thank, Battle, Region, Proportional Representation (PR), Childcare, Constitution, Streaming, Reschedule, Schedule, Soapbox, Assassin, Please, Campaign, and Expression. The percentage of each group for each topic is shown in Figure~\ref{fraction}. \begin{table}[t] \caption{Topics with a large and small number of tweets in each group} \label{topic_number} \centering \footnotesize \scalebox{0.8}{ \begin{tabular}{ccllll} \toprule & Ranking & W & E & L & PR\\ \midrule & 1 & Schedule & Schedule & Schedule & Schedule\\ & 2 & Diplomacy & Diplomacy & Diplomacy & Diplomacy\\ & 3 & Please & Please & Campaign & Campaign\\ \textit{Pre-} & \vdots & & & & \\ \textit{Election} & 26 & Constitution & Expression & Income & Income\\ & 27 & Battle & Constitution & Reschedule & Reschedule\\ & 28 & Reschedule & Win & Battle & Win\\ \midrule & 1 & Please & Region & Please & Please\\ & 2 & Schedule & Diplomacy & Campaign & Diplomacy\\ & 3 & Streaming & Streaming & Diplomacy & Schedule\\ \textit{Election} & \vdots & & & & \\ & 26 & Childcare & Constitution & Childcare & Childcare\\ & 27 & Income & Childcare & Economy & Income\\ & 28 & Constitution & Income & Income & Constitution\\ \bottomrule \end{tabular} } \end{table} \subsection{Results} First, we examine the number of tweets belonging to each topic to identify popular and unpopular topics.
In all tweets, the topic with the most tweets is Please (11.88\%), which includes tweets asking users to do something (e.g., to vote), and the topic with the fewest tweets is Constitution (0.87\%), related to constitutional amendments. The topics with high and low tweet counts in each group are shown in Table~\ref{topic_number}. The topics with a large number of tweets are similar across all groups. During the \textit{Pre-Election} term, tweet counts are high for four topics: Schedule, Diplomacy, Please, and Campaign. Before the election announcement, there were many tweets on election-related topics such as Schedule, which reports upcoming schedules, and Campaign, which reports the schedule of campaign speeches, suggesting that preparations for the election were being made early on. In addition, the tense situation in Russia and Ukraine led to many tweets about Diplomacy. During the \textit{Election} term, in groups W and E, tweets on Streaming, concerning television and internet broadcasts, increase. Candidates with a high chance of winning have more opportunities to appear on TV, which they then tweet about. Topics with few tweets during the \textit{Pre-Election} term naturally include subjects that are likely to be posted after the start of the election, such as Reschedule, Win, and Battle. During the \textit{Election} term, the number of tweets on topics that are usually central to political discussions, such as Childcare, Income, Constitution, and Economy, is small. It is apparent that during the election period, candidates do not discuss political issues but focus on promoting themselves and their political party. Observations of popular and unpopular topics showed no large differences between the groups. Therefore, we introduce a new index, similar in form to Pearson's chi-square statistic ($\chi^2$)~\cite{greenwood1996guide}, to discover topics that are distinctive to each group.
This index quantifies the degree to which each group deviates upward from the expected probability on each topic; we call it Dev\_Score. It is defined by the following equations: \begin{align} \alpha^{i}_{j} &= \frac{Tweets^{\text{group}\ i}_{\text{topic}\ j}}{\sum_{k = 1}^{\text{N\ of\ topics}}Tweets^{\text{group}\ i}_{\text{topic}\ k}} \nonumber \\ \mu^{\setminus i}_{j} &= \frac{1}{\text{N\ of\ groups} - 1}\sum_{k = 1; k \neq i}^{\text{N\ of\ groups}}\alpha^{k}_{j} \nonumber \\ \text{Dev\_Score}^{i}_{j} &= \frac{\alpha^{i}_{j} - \mu^{\setminus i}_{j}}{\mu^{\setminus i}_{j}} \nonumber \end{align} where $\alpha^{i}_{j}$ represents the fraction of topic $j$ in group $i$, and $\mu^{\setminus i}_{j}$ represents the average of $\alpha^{k}_{j}$ over all groups except group $i$. The deviation score $\text{Dev\_Score}^{i}_{j}$ indicates the degree of specificity of topic $j$ in group $i$ compared to the other groups. Table~\ref{topic_kai} shows the top three topics by Dev\_Score for each group. Group W tends to focus on Free, such as freedom of expression, more than the other groups. For group E, the intensity of the election appears to have triggered tweets related to the areas in which candidates are running (Region). They also tweeted extensively on Streaming, about their own appearances on TV and the Internet, to increase their visibility and attention. In addition, in the \textit{Election} term, they posted more on Assassin, the topic covering tweets about former Prime Minister Abe's assassination, than any other group, indicating a tendency to mention sensational incidents. Group L actively posted Please tweets, urging people to vote and support the campaign in both terms, suggesting that their campaigns are in a tough state. In group PR, during the \textit{Election} term, candidates often tweet about the election itself, with topics such as Please and Battle.
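A minimal sketch of the Dev\_Score computation defined above, with toy counts rather than the study's data:

```python
# Dev_Score: relative upward deviation of a group's topic share from
# the mean share of that topic in the remaining groups.
# counts[group][topic] holds tweet counts per group and topic.

def topic_fraction(counts, group, topic):
    """alpha_j^i: fraction of topic j among group i's tweets."""
    return counts[group][topic] / sum(counts[group].values())

def dev_score(counts, group, topic):
    """(alpha - mu) / mu, where mu averages alpha over the other groups."""
    others = [topic_fraction(counts, g, topic) for g in counts if g != group]
    mu = sum(others) / len(others)
    return (topic_fraction(counts, group, topic) - mu) / mu

# Toy example: group W devotes 30% of tweets to "Free" versus 10% for
# the other groups, giving Dev_Score = (0.3 - 0.1) / 0.1 = 2.0.
counts = {"W": {"Free": 30, "Please": 70},
          "E": {"Free": 10, "Please": 90},
          "L": {"Free": 10, "Please": 90}}
```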
During the \textit{Pre-Election} term, by contrast, they are more likely to mention political policies such as Reform and Childcare or to express their own ideas (Think), suggesting that they are less focused on the election beforehand than other groups because their chances of winning depend largely on the popularity of their political party. \begin{table}[t] \caption{Top 3 topics by Dev\_Score for each group} \label{topic_kai} \centering \footnotesize \scalebox{0.8}{ \begin{tabular}{ccllll} \toprule & Ranking & W & E & L & PR\\ \midrule \multirow{2}{*}{\textit{Pre-}} & 1 & Free & Diplomacy & Please & Reform\\ \multirow{2}{*}{\textit{Election}}& 2 & Poster & Streaming & Campaign & Think\\ & 3 & Economy & Poster & PR & Childcare\\ \midrule & 1 & Life & Region & Please & Please\\ \textit{Election} & 2 & Free & Assassin & Campaign & Battle\\ & 3 & PR & Streaming & Expression & Emperor\\ \bottomrule \end{tabular} } \end{table} \section{RQ3: What type of content affects user engagement?} \begin{figure*}[t] \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=\textwidth]{fig/m0_} \subcaption{Group W in \textit{Pre-Election}} \end{subfigure} \hfill \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=\textwidth]{fig/m1_} \subcaption{Group E in \textit{Pre-Election}} \end{subfigure} \hfill \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=\textwidth]{fig/m2_} \subcaption{Group L in \textit{Pre-Election}} \end{subfigure} \hfill \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=\textwidth]{fig/m3_} \subcaption{Group PR in \textit{Pre-Election}} \end{subfigure} \\ \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=\textwidth]{fig/m0} \subcaption{Group W in \textit{Election}} \end{subfigure} \hfill \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=\textwidth]{fig/m1} \subcaption{Group E in \textit{Election}} \end{subfigure} \hfill
\begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=\textwidth]{fig/m2} \subcaption{Group L in \textit{Election}} \end{subfigure} \hfill \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=\textwidth]{fig/m3} \subcaption{Group PR in \textit{Election}} \end{subfigure} \caption{All topics in each group and term, sorted by the effect size ($\beta_{1}$) that predicts user engagement (retweets) in Equation~\ref{equ}.} \label{regression2} \end{figure*} \subsection{Regression model} What type of content is likely to gain user engagement in each group? This section examines the relationship between tweet topics and user engagement using a linear mixed-effects model~\cite{gelman2006data,bates2014fitting}. The data are analyzed at the tweet level, while mixed effects account for variability across users. Concretely, we set the number of user engagements, measured by the number of retweets\footnote{We also performed a regression analysis with the number of likes as the dependent variable, but the results were similar to those for the number of retweets shown in Figure~\ref{regression2}. This section therefore reports only the results for retweets.}, as the dependent variable and apply a log transformation to reduce the influence of extreme values. We set the topic of the tweet, obtained in Section 5, as explanatory dummy variables. Moreover, we set a random intercept per candidate, because follower counts are highly correlated with user engagement~\cite{uysal2011user}. We use political party as a control variable to mitigate its influence~\cite{keller2018followers,blassnig2021popularity}.
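The regression just described can be sketched with the standard library alone if the per-candidate random intercept is approximated by fixed per-candidate dummies (a deliberate simplification; a full mixed-effects fit would use a package such as lme4 or statsmodels). All names and data below are toy illustrations, not the study's data or code.

```python
import math

# Simplified sketch of the engagement regression: log(retweets + 1)
# regressed on topic dummies. The per-candidate random intercept is
# approximated by fixed per-candidate dummies, and the party control
# is omitted for brevity. All data are toy values.

def design_row(topic, cand, topics, cands):
    """Global intercept + one-hot topic and candidate dummies
    (first level of each factor dropped as the baseline)."""
    row = [1.0]
    row += [1.0 if topic == t else 0.0 for t in topics[1:]]
    row += [1.0 if cand == c else 0.0 for c in cands[1:]]
    return row

def fit_ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(A[k][i]))  # partial pivot
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * n
    for i in reversed(range(n)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]
    return beta

def fit_engagement(rows):
    """rows: (retweets, topic, candidate) triples -> coefficient dict."""
    topics = sorted({t for _, t, _ in rows})
    cands = sorted({c for _, _, c in rows})
    X = [design_row(t, c, topics, cands) for _, t, c in rows]
    y = [math.log(rt + 1) for rt, _, _ in rows]  # log(Engagement + 1)
    beta = fit_ols(X, y)
    names = (["intercept"] + [f"topic:{t}" for t in topics[1:]]
             + [f"cand:{c}" for c in cands[1:]])
    return dict(zip(names, beta))
```

The fitted topic coefficients play the role of the effect sizes $\beta_{1}$ reported below; the candidate dummies absorb per-candidate baselines such as follower counts.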
The regression model is defined as follows: \begin{align} \label{equ} \log (Engagement + 1) = \beta_{1} * Topic \nonumber \\ + \beta_{2} * Party + \phi + \epsilon \end{align} where $Engagement$ is the number of retweets, $\phi$ is the random effect for each candidate, and $\epsilon$ is the error term. We report the effect sizes $\beta_{1}$, the coefficients of all topics, for each group in each term by fitting the model to their tweets. \subsection{Results} We present all topics sorted by effect size in each group in Figure~\ref{regression2}. During the \textit{Pre-Election} term, topics regarding political policies tend to get more retweets: Tax, Expression (the topic on the regulation of expression), Economy, Childcare, and Emperor (the topic on the Emperor System). On the Constitution topic, which sparks national debate, tweets in groups L and PR tend to get retweets, while those in group W are less likely to be retweeted. The candidates in group W, who have already gained popularity with the public, may be less likely to post tweets on sensational topics that generate public interest, for fear of a social media firestorm. During the \textit{Election} term, topics regarding political policies again tend to get more retweets, similar to before the election announcement. Even during the election period, election-related topics such as Campaign (the topic of campaign speeches) and Schedule (the topic of reporting upcoming events) do not necessarily attract many retweets. On the other hand, tweets from group E, whose election outcomes are uncertain, tend to get more retweets on Soapbox (the topic of their own soapbox oratory) and PR (the topic of support for proportional representation), suggesting that they are making an effort to spread information about the election. Group W gets retweets especially for tweets about Assassin, the topic covering tweets about former Prime Minister Abe's assassination.
They are better than other groups at expressing sympathy and anger over sensational incidents and have gained support by doing so. Interestingly, Please, a distinctive topic in groups L and PR (Table~\ref{topic_kai}), is not a topic that attracts retweets. Candidates in these groups frequently ask their followers to vote and retweet, but such tweets are unlikely to elicit a response from followers. \section{RQ4: Is there a difference in the way they communicate with other users on social media?} Figures~\ref{pre_election_tweet} and \ref{election_tweet} in Section 4 showed a significant difference between groups in how actively the reply function is used. This section analyzes in more detail the replies through which candidates communicate directly with other users and political actors. \noindent \textbf{Reply target users}: Figure~\ref{reply_target} (a) and (b) show the number of users to whom candidates in each group replied during the \textit{Pre-Election} and \textit{Election} terms. As with the number of replies, groups W and E have far fewer reply targets than groups L and PR. Groups W and E had fewer than 10 users conversing via the reply function during the election period, while some candidates in group L conversed with more than 10 users. Figure~\ref{reply_target} (c) shows the cumulative distribution of the number of followers, a proxy for popularity on social media, of reply target users in each group. The tendency to converse with users who have many followers decreases in the order of groups W, E, PR, and L. Concretely, 68.30\% of the users conversed with by candidates in group W have fewer than 10,000 followers, compared with 75.31\% for group L. In addition, verified accounts make up 39.54\%, 33.00\%, 18.51\%, and 17.7\% of reply target users in groups W, E, L, and PR, respectively.
The percentage of candidates who have had at least one conversation with another candidate among their reply target users is 70.00\%, 73.91\%, 78.77\%, and 72.58\% in groups W, E, L, and PR, respectively. These results indicate that candidates in groups W and E, which have at least a certain chance of winning, focus more on conversations with verified users and users with many followers than on general users. In other words, they use social media for broadcasting rather than for interacting with voters~\cite{graham2013between}. The finding that group E, which is in a state of close competition, has fewer interactions with other users than groups L and PR differs from the finding of a previous study~\cite{kahn2022spectacle} that the more intense the election campaign, the stronger the interaction with voters. \begin{figure}[t] \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[width=\textwidth]{fig/preelect_type} \subcaption{Number of reply target users during \textit{Pre-Election} term} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[width=\textwidth]{fig/elect_type} \subcaption{Number of reply target users during \textit{Election} term} \end{subfigure} \\ \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\textwidth]{fig/follower_distribution} \subcaption{The cumulative distribution of number of followers in the reply target} \end{subfigure} \caption{Statistics on target users conversed with through each candidate's reply} \label{reply_target} \end{figure} \begin{figure}[t] \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\textwidth]{fig/sentiment_reply.pdf} \subcaption{Time series of sentiment score} \end{subfigure} \\ \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=\textwidth]{fig/toxity_reply.pdf} \subcaption{Time series of toxicity score} \end{subfigure} \caption{Time series of the average of (a) sentiment and (b) toxicity score of replies in each group.
Each day's average score is shown as a single point; days with fewer than 5 tweets are omitted.} \label{time_reply} \end{figure} \noindent \textbf{Sentiment and toxicity score in reply tweets}: The form of a reply, such as whether it is positive or negative, also significantly affects voters' impressions~\cite{ferrucci2020civic}. We investigate each group's replies from two perspectives: sentiment and toxicity. For the sentiment analysis, we use Asari~\cite{asari}, which returns a sentiment value between 0 and 1 (the closer to 1, the more positive) for an input Japanese sentence. Asari, an open-source SVM-based sentiment model, is reported to achieve accuracy comparable to BERT-based models while being faster. For the toxicity analysis, we use Jigsaw's Perspective API~\cite{perspective}, which is widely used for hate speech detection and measures text toxicity using machine learning. Figure~\ref{time_reply} (a) shows the time series of the average sentiment score of replies. The average score increases from the \textit{Pre-Election} to the \textit{Election} term for all groups; for example, the average score of replies in group W increased from 0.786 to 0.821. This means that candidates make more positive replies during the election period than usual. Comparing groups, the average score is high in the order of groups PR, L, W, and E. For example, during the \textit{Election} term the difference is remarkable, with an average score of 0.897 for group PR compared to 0.820 for group E. Interestingly, candidates who are more likely to be elected make less positive replies than those running under proportional representation or with little chance of winning the election. Figure~\ref{time_reply} (b) shows the time series of the average toxicity score of replies.
The change in toxicity scores across the two terms varies by group: in groups W and E, the toxicity score increases from the \textit{Pre-Election} to the \textit{Election} term ($0.022 \rightarrow 0.027$ and $0.019 \rightarrow 0.030$), while in groups L and PR it decreases ($0.042 \rightarrow 0.034$ and $0.031 \rightarrow 0.023$). That replies became more toxic in groups W and E runs counter to the intuition that candidates would be cautious when replying to other users during an election, and requires further investigation. \section{Limitation and future work} This research attempts to shed light on the relationship between the chance of winning an election and candidates' political communication on social media. However, several limitations must be acknowledged. First, it is unclear whether our findings, which target candidates in a Japanese election, generalize to candidates in other countries; applying our analysis to countries with different electoral systems and social backgrounds may lead to new findings. Second, we used data from the three months prior to the election, but it is not clear whether similar conclusions hold for periods far from an election or for other events. To clarify this point, it is essential to work with large datasets over longer periods in the future. Finally, there is still much room to analyze tweet content. Our research focused on the topics, toxicity, and sentiment of tweets; by further utilizing natural language processing techniques such as rhetorical analysis, stance detection, or discourse framing, we expect to find clues to the candidates' intentions in the text of their tweets.
\looseness=-1 \section{Conclusions} Our study advances our understanding of how the chance of winning an election affects political communication on social media. We grouped election candidates into four groups according to that chance and characterized their social media strategies in terms of users, topics, and reply behavior. Our analysis showed that the attitude with which candidates engage in political communication and the topics they discuss differ depending on their chance of winning. Furthermore, we found that as their chances of winning increase, candidates narrow their communication targets from the general public to their electoral districts and specific individuals. Our findings, which highlight candidate behavior from a new perspective, can inform future election strategies. \looseness=-1 \section*{Ethical Considerations} The data in this paper are derived from publicly accessible user-generated content. We pay the utmost attention to the privacy of individuals in this study. When sharing our Twitter data, we will publish only a list of tweet IDs. \looseness=-1
\section{Introduction \& Background} When a star orbits close to a massive black hole ($M_{\rm BH} \ga 10^{4}$\,M$_{\odot}$) such that its periastron distance is $\lesssim R_{\ast}(M_{\rm BH}/m_{\ast})^{1/3}$ (where $R_{\ast}$ and $m_{\ast}$ are the radius and the mass of the star, respectively), it will be disrupted and cause what is commonly referred to as a tidal disruption event (Hills 1975; Rees 1988). A fraction (roughly 50\%) of the stellar debris escapes while the rest is placed in a highly eccentric orbit around the black hole, triggering the accretion process (e.g., Evans \& Kochanek 1989; Lodato \& Rossi 2011). These events are unique in the sense that they provide a one-time opportunity to study the onset of accretion and the formation of accretion disks and jets, which are currently only poorly understood. For disrupting black holes with masses $\la 4 \times 10^{7}$\,M$_{\odot}$, the initial accretion rate can exceed the Eddington limit by a factor of a few tens (e.g., Giannios \& Metzger 2011). Numerical studies suggest that such high accretion rates should produce outflows/jets driven by strong radiation-pressure forces (e.g., Ohsuga et al. 2005). Although the precise jet-launching mechanism is still highly debated (see Tchekhovskoy et al. 2010, and references therein), we know from X-ray and radio observations of black hole binaries and active galactic nuclei that jets and accretion are mutually dependent (e.g., Merloni et al. 2003; Falcke et al. 2004; Plotkin et al. 2012). Therefore, accretion initiated by the tidal disruption of a star is anticipated to be a natural site for producing jets. Given that black hole jet directions are uniformly distributed over the sky, most of the jetted events will be offset from our line of sight owing to collimation.
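To make the disruption condition quoted above concrete, an order-of-magnitude sketch for a Sun-like star (illustrative constants and masses, not values taken from this paper):

```python
# Order-of-magnitude sketch of the tidal disruption condition
# r_t ~ R_*(M_BH / m_*)^(1/3), compared with the horizon scale, for a
# Sun-like star. CGS constants; all numbers are illustrative.

G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10        # speed of light [cm s^-1]
M_sun = 1.989e33    # solar mass [g]
R_sun = 6.957e10    # solar radius [cm]

def tidal_radius(m_bh, m_star=M_sun, r_star=R_sun):
    """Approximate tidal disruption radius [cm]."""
    return r_star * (m_bh / m_star) ** (1.0 / 3.0)

def schwarzschild_radius(m_bh):
    """Horizon scale 2GM/c^2 [cm]."""
    return 2.0 * G * m_bh / c**2

def eddington_luminosity(m_bh):
    """L_Edd ~ 1.26e38 (M/M_sun) erg s^-1 for hydrogen accretion."""
    return 1.26e38 * (m_bh / M_sun)

# For M_BH = 1e6 M_sun, r_t / r_s is roughly 24, so a Sun-like star is
# disrupted well outside the horizon; the ratio scales as M^(-2/3) and
# approaches unity near 1e8 M_sun, beyond which such a star would be
# swallowed whole rather than disrupted.
ratio = tidal_radius(1e6 * M_sun) / schwarzschild_radius(1e6 * M_sun)
```

The Eddington luminosity sets the accretion-rate scale, $\dot{M}_{\rm Edd} = L_{\rm Edd}/\eta c^{2}$, against which the early fallback rate is compared.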
Theoretical studies suggest that off-axis relativistic jets, although initially unobservable because of Doppler beaming, should be detectable after a few years when the ejecta slow down to mildly relativistic speeds (Giannios \& Metzger 2011). However, recent radio follow-up studies of tidal disruption events (TDEs) spanning 1--22\,yr after the initial disruption have detected radio emission from only $\la 17$\% of the sample (see Tables 1 \& 2 of van Velzen et al. 2013), suggesting that perhaps only a specific subset of events --- those requiring special conditions --- produce relativistic jets (Bower et al. 2013; van Velzen et al. 2013). {\it Swift} J164449.3+573451 (Sw~J1644+57, hereafter) is the first and the best-studied relativistic TDE (one accompanied by a relativistic outflow; e.g., Levan et al. 2011; Bloom et al. 2011; Burrows et al. 2011; Zauderer et al. 2011, 2013). The main observed properties of this source are as follows. (1) Long-lived ($\Delta t \approx 1$\,yr), luminous ($L_{\rm X,iso} \approx 10^{47}$\,erg\,s$^{-1}$), rapidly variable X-ray emission with a power-law secular decline; (2) self-absorbed radio emission indicative of relativistic ejecta; (3) a location consistent with the nucleus of a compact, mildly star-forming galaxy at redshift $z = 0.354$; and (4) significant ($\sim 7$\%) near-infrared (NIR) polarization, strongly favoring an on-axis viewing angle (Wiersema et al. 2012). Observations at late times ($\ga 100$\,d) have both reinforced and complicated this picture. The overall trend of Sw~J1644+57's X-ray light curve, neglecting the short-timescale variability, can be described by a more or less constant plateau stage in the first 10\,d (rest frame)\footnote{All of the durations quoted in this paper will be accompanied by a qualifier indicating whether they were calculated in the rest frame or the observer frame. For instances where a qualifier is not given, it should be assumed that the values are in the observer frame.} followed by a power-law decline with an index consistent with both $-5/3$ and $-2.2$, corresponding to a complete and a partial disruption of the star\footnote{The disruption is partial if the mass lost by the star is $\la 50$\%, while it is referred to as complete if the star loses more than 50\% of its mass (Guillochon \& Ramirez-Ruiz 2013).}, respectively (see Figure 1 of Tchekhovskoy et al. 2014; see also the gray data points in the top panel of Figure 1 of this paper). The X-ray intensity of the source drops abruptly by a factor of $\sim 170$ over a timescale of $\Delta t/t \la 0.2$ roughly a year after the disruption (see Figure 4 of Zauderer et al. 2013). This has been attributed to jet turnoff when the mass accretion rate dropped below the Eddington value, $\dot{M}_{\rm Edd} = L_{\rm Edd}/\eta c^{2}$, where $L_{\rm Edd}$, $\eta$, and $c$ are the Eddington luminosity, radiative efficiency, and speed of light, respectively (Zauderer et al. 2013). Radio emission was detected from the source $\sim 0.9$\,d (rest frame) after its discovery in the hard X-rays (Zauderer et al. 2011). This early-stage radio emission has been argued to represent relativistic jetted emission directly pointed along our line of sight (Zauderer et al. 2011). A follow-up radio campaign showed that the radio emission brightened starting about one month after discovery (observer frame; Berger et al. 2012). Berger et al. (2012) interpret this rebrightening as energy injection by slower ejecta catching up with the forward shock at late times, although other explanations also exist (e.g., Barniol Duran \& Piran 2013). Sw~J1644+57 is an exceptional TDE with signatures of a strong jet. Unfortunately, its host galaxy has a large line-of-sight extinction (Levan et al.
2011), making it challenging to study the evolution of the accretion disk expected to be observable in the ultraviolet/optical/infrared (UVOIR; e.g., Strubbe \& Quataert 2009; Lodato \& Rossi 2011). Although we have learned a great deal from Sw~J1644+57, the question of what makes it conducive to producing a relativistic jet remains. To answer this question, one approach would be to build a census of Sw~J1644+57-like sources. It has also been suggested that Sw~J1644+57 could represent the tidal disruption of a white dwarf by a member of the long-sought intermediate-mass black holes (IMBHs; mass range of a few $\times 10^{2-5}$\,M$_{\odot}$; Krolik \& Piran 2011). With only a handful of strong cases of such black holes known thus far (e.g., ESO HLX X-1: Farrell et al. 2009; Webb et al. 2012; M82 X-1: Kaaret et al. 2009; Pasham, Strohmayer, \& Mushotzky 2014), studying such systems could provide a means of weighing, and hence identifying, these unique objects. Soon after the 2011 March 25 discovery of Sw~J1644+57, {\it Swift} discovered another transient, {\it Swift} J2058.4+0516 (hereafter Sw~J2058+05), on 2011 May 17 (Cenko et al. 2012; hereafter C12). An early-time ($\la 2$ months since discovery), multiwavelength study showed a number of similarities with Sw~J1644+57 (C12). More specifically, Sw~J2058+05 occupied the same location in the X-ray versus optical luminosity plot as Sw~J1644+57, and its early-phase (20\,d after outburst; rest frame) radio, UVOIR, and X-ray spectral energy distribution (SED) was similar to that of Sw~J1644+57 (see Figures 4 \& 5 of C12). More importantly, strong radio emission coincident with the X-rays was detected $\sim 20$\,d after the initial trigger, suggesting relativistic ejecta (C12). Unlike Sw~J1644+57, Sw~J2058+05 shows no evidence for line-of-sight extinction (C12), so we can study the system at UVOIR wavelengths in more detail.
If it can be established that Sw~J2058+05 behaves analogously to Sw~J1644+57, then we can start to gain confidence that there is a class of such relativistic TDEs. This paper is a follow-up to C12 in which we address the remaining questions. (1) How does Sw~J2058+05 evolve on longer timescales? (2) Assuming the UVOIR emission can be modeled with a single-temperature blackbody, how do the properties of the putative blackbody evolve on these timescales? (3) Is the emission consistent with originating from the center of the host galaxy? (4) What is the mass of the disrupting black hole? The paper is arranged as follows. In \S 2, we discuss the details of our X-ray, UVOIR, and radio observations. The results and the analysis are described in \S 3, while in \S 4 we discuss the similarity between Sw~J2058+05 and Sw~J1644+57 and estimate the black hole mass. We give the main conclusions of this study in \S 5. Throughout this paper, we adopt a standard $\Lambda$CDM cosmology with $H_{0} = 71$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m} = 0.27$, and $\Omega_{\Lambda} = 1 - \Omega_{m} = 0.73$ (Spergel et al. 2007). \section{Data Primer} \subsection{X-ray Data} The X-ray data of Sw~J2058+05 used in this study were acquired with three different instruments: the X-Ray Telescope (XRT; Burrows et al. 2005) on the {\it Swift} Gamma-Ray Burst Explorer (Gehrels et al. 2004), the European Photon Imaging Camera (EPIC; Str{\"u}der et al. 2001; Turner et al. 2001) on the {\it XMM-Newton} Observatory (Jansen et al. 2001), and the Advanced CCD Imaging Spectrometer (ACIS; Garmire et al. 2003) on the {\it Chandra} X-ray observatory (Weisskopf et al. 2002). We describe the data from each of these facilities below. \subsubsection{{\it Swift}/XRT Observations} Sw~J2058+05 was discovered by the BAT (Barthelmy et al. 2005) onboard {\it Swift} on 2011 May 17 (Krimm et al. 2011). Soon after the BAT detection, starting 2011 May 27, a Target of Opportunity (ToO) program was initiated to monitor this source.
Between 2011 May 27 and 2012 July 19 (a temporal baseline of 419\,d), {\it Swift} observed the source on 32 occasions for about 2--3\,ks per observation. The data from the first 60\,d of this monitoring program have already been reported by C12. They find that the source's hard X-ray flux falls below the BAT detection limits soon after reaching its peak luminosity (see the top panel of Figure 1 of C12). Here we extend the analysis to late times, using only the XRT (0.3--10\,keV) data from {\it Swift}. We started our data analysis with the raw, level-1 XRT data products. Using the latest HEASARC calibration database (CALDB version 20140709) files, we ran {\it xrtpipeline} to extract the level-2 (scientific) event files. As suggested by the XRT data-analysis guide\footnote{See \url{http://www.swift.ac.uk/analysis/xrt/lccorr.php}.}, we extracted the exposure maps to take into account bad pixels and columns ({\it xrtpipeline} with the qualifier {\it createexpomap=yes}). These exposure maps were then used to correct the ancillary response files (arfs: effective area, using {\it xrtmkarf}) of each of the 32 observations. Twenty-two of the monitoring observations were taken in the photon-counting (PC) mode, with the remainder in the windowed-timing (WT) mode (see Table 1 for more details). As recommended by the XRT user guide, we only used events with grades 0--12 in the case of PC-mode observations and grades 0--2 for WT-mode data. We then used {\it XSELECT} to extract energy spectra from each of the individual observations. For the PC-mode data we extracted the source spectra from a circular region centered on J2000.0 coordinates $\alpha = 20^{\rm h}58^{\rm m}19.90^{\rm s}$ and $\delta = +05^{\circ}13'32\farcs0$, as derived by C12 using the {\it Chandra}/High-Resolution Camera (HRC) data.
We chose an extraction radius of 47\farcs1 to include roughly 90\% (at 1.5\,keV) of the light from the source (as estimated from XRT's fractional encircled energy function). A background region free of point sources was extracted from a nearby area. Given the low count rates, we chose a radius twice that of the source region in order to better estimate the background. For the WT-mode source region we chose a square box of width 94\farcs3 and oriented along the roll angle of the spacecraft --- that is, parallel to the WT-mode readout streak. Background was estimated from two square regions (width = 94\farcs3) on either side of the source region. The orientation of the square regions (both the source and the background) was adjusted between individual observations to align with the roll angle of the spacecraft during that particular exposure. \subsubsection{{\it XMM-Newton} Observations} {\it XMM-Newton} observed Sw~J2058+05 on three occasions (177, 179, and 340\,d after the BAT detection; see Table 1 for further details). For the current study, we used only data acquired by EPIC, combining both the ``pn'' and MOS cameras to achieve a higher signal-to-noise ratio (SNR). We started our analysis with the raw observation data files (ODF) and reprocessed them using the {\it XMM-Newton} Science Analysis System (SAS) tools {\it epproc} and {\it emproc} for the pn and the MOS data, respectively. The standard data filters of {\it (PATTERN$\leq$4)} and {\it (PATTERN$\leq$12)} were used for the pn and the MOS data, respectively, and we only considered events in the energy range 0.3--10.0\,keV. All the time intervals of prominent background flaring were excluded from the analysis. The source events were extracted from a circular region of radius $33''$. This choice was made to include roughly 90\% of the light from the source as estimated from the fractional encircled energy of the EPIC instruments.
A background region of similar radius was chosen from a nearby uncontaminated region. \subsubsection{{\it Chandra}/ACIS Observations} {\it Chandra} observed Sw~J2058+05 on four occasions. One of the observations was during the early phase of the outburst (C12), while the others were carried out on days 685, 896, and 899 after the initial BAT detection. Since we are interested in the late-time properties of the source, we only utilized the last three observations taken with ACIS. More details about these observations can be found in Table 1. Similar to the XRT and the EPIC data, we started our analysis with level-1 (secondary) data and reprocessed them using {\it Chandra}'s data-analysis system (CIAO) task {\it chandra\_repro} to account for any calibration changes that may have occurred since the epochs of these observations. Standard data filters were used for reprocessing. All further analysis was carried out on these level-2 event files. \subsection{Ground-Based Optical Photometry Data} Soon after discovery, we started a campaign to carry out multiband photometry of Sw~J2058+05 in the UV, optical, and NIR wavebands using multiple instruments. These include the High Acuity Wide field K-band-Imager (HAWK-I; Pirard et al. 2004) and the FOcal Reducer and Spectrograph 2 (FORS2; Appenzeller et al. 1998) on the 8.2\,m Very Large Telescope (VLT), and the Gemini Multi-Object Spectrograph (GMOS; Hook et al. 2004) mounted on the 8\,m Gemini-South telescope. VLT data were reduced via the standard instrument pipelines for FORS and HAWK-I in {\it esorex}, while Gemini data were processed using the {\it gemini IRAF}\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation (NSF).} package. Photometric calibration was performed relative to nearby point sources from SDSS (optical) and 2MASS (NIR). 
The resulting photometry, all in the AB magnitude system, is presented in Table 2. The reported magnitudes are not corrected for foreground Galactic extinction along the line of sight to Sw~J2058+05 [$E(B-V) = 0.095$ mag; Schlafly \& Finkbeiner 2011], but such corrections were applied before all subsequent analysis. The observations prior to 2011 August 12 can be found in Table 1 of C12. \subsection{HST Observations} We observed the location of Sw~J2058+05 with the Wide Field Camera 3 (WFC3) on the {\it Hubble Space Telescope} (\textit{HST}) in three separate epochs: 2011 Aug. 30, 2011 Nov. 30 (Proposal GO-12686; PI Cenko) and 2013 Dec. 10 (Proposal GO-13479; PI Levan). Observations were obtained with the $F160W$ filter through the IR channel in all three epochs, as well as with the $F475W$ filter through the UVIS channel in the first two epochs. An additional epoch of imaging was obtained on 2014 Aug. 31 in the $F606W$ filter with the Wide Field Camera (WFC) detector on the Advanced Camera for Surveys (ACS). These data were downloaded after on-the-fly processing from the \textit{HST} archive, and subsequently drizzled using \textit{astrodrizzle} (Fruchter \& Hook 2002) to final pixel scales of 65\,mas ($F160W$), 30\,mas ($F475W$), and 33\,mas ($F606W$). We performed aperture photometry at the location of Sw~J2058+05 in all images using the prescriptions from the various \textit{HST} handbooks. The resulting photometry, all corrected to the AB system, is displayed in Table 2. \subsection{Optical and Near-Infrared Spectra} We obtained optical and NIR spectra of Sw~J2058+05 with the Low-Resolution Imaging Spectrometer (LRIS; Oke et al. 1995) on the 10\,m Keck I telescope, FORS2 on the 8\,m VLT UT1 (Antu), and the XSHOOTER (Vernet et al. 2011) spectrograph on the 8\,m VLT UT2. Details of the configuration for each spectrum are provided in Table 3. 
For the Keck/LRIS and FORS2 data, one-dimensional spectra were optimally extracted, a wavelength solution was generated from observations of lamps, and flux calibration was performed via spectrophotometric standards. The XSHOOTER spectra were processed through the \textit{reflex} environment. For all spectra the slit was oriented at the parallactic angle to minimize losses caused by atmospheric dispersion (Filippenko 1982). \subsection{Radio Data} We obtained a single epoch of imaging of Sw~J2058+05 with the National Radio Astronomy Observatory's (NRAO\footnote{The National Radio Astronomy Observatory is a facility of the NSF operated under cooperative agreement by Associated Universities, Inc.}) Very Long Baseline Array (VLBA) to search for spatially extended radio emission (project code BC0199). Observations were obtained on 2011 Aug. 12 ($\Delta t \approx 40$\,d after discovery, in the rest frame) at central frequencies of 8.4 and 22\,GHz with a recording rate of 512\,Mb\,s$^{-1}$. All 10 stations (SC, HN, NL, FD, LA, PT, KP, OV, BR, and MK) were planned for both frequencies; however, the NL station was lost for our 8\,GHz observation (owing to a receiver problem). Initial data processing was performed using the AIPS software package (Greisen 2003). J2101+0341 was used for primary phase and astrometric calibration, while J2050+0407 and J2106+0231 were used as secondary calibrators and for evaluation and correction of tropospheric effects on astrometry. The resulting images achieved an angular resolution of $\sim 1$\,mas at 8.4\,GHz and 0.3\,mas at 22\,GHz. A faint ($f_{\nu} = 350 \pm 70$\,$\mu$Jy), unresolved source is detected in the 8.4\,GHz image at the J2000.0 position $\alpha = 20^{\mathrm{h}} 58^{\mathrm{m}} 19.897282^{\mathrm{s}} \pm 0.000006^{\mathrm{s}}$, $\delta = +05^{\circ} 13\arcmin 32\farcs24306 \pm 0\farcs00016$\footnote{The reported uncertainties in both right ascension and declination are the statistical errors obtained from fits to the VLBA data.}.
No emission is detected at this location in the 22\,GHz image to a 3$\sigma$ upper limit of $f_{\nu} < 580$\,$\mu$Jy. Both measurements suggest a decline in radio luminosity by a factor of a few from VLA observations of Sw~J2058+05 presented by C12 ($\sim 20$\,d rest frame). \section{Analysis} This section is divided into five parts: (1) we show the long-term X-ray light curve of Sw~J2058+05 and compare its behavior with that of Sw~J1644+57; (2) we carry out astrometry using {\it HST} and VLBA to pin down Sw~J2058+05's location within its host galaxy; (3) we study the evolution of the UVOIR SED; (4) we analyze the late-time optical spectra; and (5) we consider limits on the size of the radio-emitting region. \subsection{Long-term and Short-term X-ray Light Curves} The individual {\it Swift}/XRT observations do not have enough counts to constrain the source's spectral parameters. Hence, we extracted an average energy spectrum by combining all of the XRT PC-mode data\footnote{We excluded the WT-mode data to avoid any systematics caused by the low-energy spectral residuals as described in the {\it Swift} XRT digest at \url{http://www.swift.ac.uk/analysis/xrt/digest\_cal.php}.}. This was achieved by first extracting a source spectrum and a background energy spectrum from each of the 22 observations (most of these observations were at epochs 25--86\,d after discovery, rest frame) and then combining them all using the ftool {\it sumpha}. Similarly, we combined all of the individual ancillary response files, weighted by total counts per observation, using the ftool {\it addarf}. The redistribution matrix files (RMFs) were averaged using the ftool {\it addrmf}. The combined spectrum was then rebinned using the {\it grppha} tool to have a minimum of 25 counts per spectral bin.
With the latest version of the X-ray spectral fitting package {\it XSPEC} v12.8.2 (Arnaud 1996), we then fitted this combined 0.3--10\,keV energy spectrum with a power-law model modified by absorption ({\it phabs$*$zwabs$*$pow} in XSPEC). The Galactic column density was fixed at $0.088 \times 10^{22}$\,cm$^{-2}$ (Kalberla et al. 2005; Willingale et al. 2013)\footnote{See \url{http://www.swift.ac.uk/analysis/nhtot/index.php}.}, while the power-law index and the intrinsic absorption column at $z = 1.1853$ (C12) were free to vary. The best-fit power-law index and intrinsic absorption column density were $\Gamma = 1.47 \pm 0.08$ and $n_{\rm H} = (0.30 \pm 0.15) \times 10^{22}$\,atoms\,cm$^{-2}$, respectively (with a reduced $\chi^2 = 0.74$ for 102 degrees of freedom). We then used these best-fit power-law model parameters (fixing the power-law index and the absorbing column density but keeping the power-law normalization free) and extracted the source flux from each of the individual observations. We only considered observations with more than 50 total counts; where an observation had fewer, we averaged it with neighboring observations. The best-fit absorbed power-law model (with fixed power-law index and absorbing column) yielded a reduced $\chi^2$ in an acceptable range of 0.5--1.3 for these individual epochs. We then fitted each of the three {\it XMM-Newton}/EPIC (both pn and MOS simultaneously) X-ray spectra of Sw~J2058+05 using the same model as above ({\it phabs$*$zwabs$*$pow}). We generated the EPIC response files using the {\it arfgen} and {\it rmfgen} tools, which are part of {\it XMM-Newton}'s SAS software. Given that each of these observations had total counts in excess of 1600, we left all the model parameters free to vary except for the redshift and the Milky Way column density. The best-fit model parameters are indicated in Table 4.
It is interesting to note that while the best-fit absorbing column densities are consistent with the value derived from the combined {\it Swift} XRT data acquired at early times, the power-law indices are slightly steeper at late times. The luminosity values derived from modeling the {\it XMM-Newton} spectra are indicated by the magenta squares in Figure 1. The source was not detectable in the {\it Chandra}/ACIS images by visual inspection. Nevertheless, using the CIAO task {\it srcflux}, we estimated an upper limit to the 0.3--10\,keV X-ray flux using Poisson statistics. In doing so, we assumed that the source spectrum is defined by an absorbed power-law model with the parameters estimated from the {\it XMM-Newton} data (see Table 4). The power-law index and the intrinsic absorption column density were set to 1.79 and $0.19 \times 10^{22}$\,atoms\,cm$^{-2}$, respectively (mean of the {\it XMM-Newton} values). The isotropic luminosity upper limits are indicated by the blue squares in Figure 1. In addition, we studied the short-term variability of the source on timescales of a few hundred to a few thousand seconds using the {\it XMM-Newton} data. We first extracted a combined EPIC (pn and MOS) light curve from each of the three {\it XMM-Newton} observations. One such light curve (black) along with the background (red) binned with a time resolution of 500\,s is shown in Figure 2. It is clear that the source varies significantly, with the most drastic variation around 32,000\,s when the overall count rate changes by a factor of 2.5 within a timescale of $\la 1000$\,s. To further confirm the variability, we modeled the light curve with a constant. The best-fit model gave 0.073 counts\,s$^{-1}$ with a reduced $\chi^2$ of 2.3 ($\chi^2 = 236$ for 102 degrees of freedom). Thus, a constant count-rate model is strongly disfavored.
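As a cross-check, the significance of this poor constant-rate fit can be sketched from the quoted statistics alone; the normal approximation to the $\chi^2$ tail used below is our choice of method, not necessarily the one used in the analysis:

```python
import math

# Goodness of fit for the constant count-rate model (values from the text):
chi2 = 236.0   # best-fit chi-square
dof = 102      # degrees of freedom

red_chi2 = chi2 / dof   # reduced chi-square, ~2.3 as quoted

# Classical normal approximation to the chi-square tail,
# z = sqrt(2*chi2) - sqrt(2*dof - 1), valid for large dof;
# z >> 3 means the constant model is rejected at high significance.
z = math.sqrt(2.0 * chi2) - math.sqrt(2.0 * dof - 1.0)
```

The approximation gives a rejection at roughly the 7$\sigma$ level, consistent with the statement that a constant count rate is strongly disfavored.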
Rapid X-ray variability on similar timescales has also been observed from Sw~J1644+57 (e.g., Krolik \& Piran 2011) and from nonrelativistic TDE candidates such as SDSS~J120136.02+300305.5 (see Figure 5 of Saxton et al. 2012). Finally, to test for any possible coherent oscillations in the X-rays (0.3--10\,keV), we extracted a power-density spectrum using the longest {\it XMM-Newton} observation (ObsID: 0694830201) with an effective exposure of roughly 48\,ks. We find that the power spectrum is flat (white noise) and is consistent with being featureless (see Figure 3). \subsection{HST Astrometry} Dynamical friction within a galaxy ensures that supermassive black holes sink to the center within a few Gyr after formation (e.g., Equation 4 of Miller \& Colbert 2004). Therefore, if Sw~J2058+05 is an event caused by a supermassive black hole, it should arise from the center of the host galaxy. To constrain the (projected) offset between the transient emission from Sw~J2058+05 and its underlying host, we took three approaches. First, we compared the VLBA position for Sw~J2058+05 (\S2.5) with the host localization derived from \textit{HST}. We used the $F606W$ image from 2014 for this purpose (as opposed to the $F160W$ images obtained in Dec. 2013), owing to its higher SNR and smaller native pixel scale. While the VLBA position for Sw~J2058+05 is the most precise available, the dominant source of uncertainty results from alignment of the \textit{HST} images onto the FK5 reference grid using common point sources from 2MASS (60\,mas in each coordinate). After alignment, we measured a position for the host centroid in the \textit{HST} images of $\alpha = 20^{\mathrm{h}} 58^{\mathrm{m}} 19.898^{\mathrm{s}}$, $\delta = +05^{\circ} 13\arcmin 32\farcs30$. As this is offset from the VLBA position by 58\,mas, we conclude that the radio position is consistent with the host nucleus, within our uncertainties.
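The consistency statement can be quantified with a minimal sketch, assuming the 60\,mas per-coordinate 2MASS tie uncertainty dominates and that the radial offset therefore follows a Rayleigh distribution:

```python
import math

offset_mas = 58.0   # measured VLBA-to-HST centroid offset (from the text)
sigma_mas = 60.0    # astrometric-tie uncertainty per coordinate (from the text)

# For independent Gaussian errors of equal sigma in each coordinate, the
# radial offset r is Rayleigh distributed, and the chance of an offset at
# least this large arising from alignment noise alone is:
p_chance = math.exp(-offset_mas**2 / (2.0 * sigma_mas**2))
```

The chance probability is of order 60\%, i.e., an offset of this size is entirely unremarkable given the tie errors, supporting the nuclear association.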
Next, we performed digital image subtraction on our $F160W$ images obtained on 2011 Aug. 30 (top-left panel of Figure 4) and 2013 Dec. 10 (top-right panel of Figure 4) to more precisely constrain the \textit{relative} transient-host offset (e.g., Levan et al. 2011). The resulting subtraction image is displayed in the bottom-left panel of Figure 4. Assuming that the flux in the final epoch of imaging is dominated by the host galaxy, we measured a radial offset between the transient emission and the host centroid of 0.34 pixels (i.e., 22\,mas). Including contributions to the relative astrometric uncertainty from image alignment (0.18 pixel in each coordinate) and measurement of the host centroid (0.10 pixel in each coordinate), we find a 27\% probability of measuring such an offset by chance (assuming a Rayleigh distribution for the radial offset). Thus, we conclude that the transient emission is consistent with the host nucleus at this level of precision, as well. Finally, we measured the relative offset between the 2011 $F475W$ images of Sw~J2058+05 (dominated by transient emission) and our 2014 $F606W$ image of the field (presumed to be host dominated). We find that the centroids in the two images are offset by 10\,mas, while our alignment uncertainty is only 15\,mas in each coordinate. This method offers the most precise constraint on the relative transient-host offset, and we formally place a 90\% confidence limit of $\Delta \theta \lesssim 45$\,mas, corresponding to a projected distance of $\lesssim 400$\,pc at $z = 1.1853$. This is comparable to the limits on the transient-host offset derived for Sw~J1644+57 [$d < 150$\,pc (1$\sigma$) at $z = 0.354$; Levan et al. 2011]. \subsection{Temperature and Radius Evolution of the Blackbody} The UVOIR SED of Sw~J2058+05 at early times ($\Delta t \lesssim 1$\,yr, observer frame) is quite blue, significantly more so than one would expect from simple forward-shock models (e.g., Granot \& Sari 2002).
Motivated by the observed SEDs in nonrelativistic TDEs (e.g., Gezari et al. 2012), we fit, wherever possible, the UVOIR SEDs with a single-temperature blackbody. The best-fit model parameters from the six epochs are indicated in Table 5. Including host-galaxy extinction as an additional free parameter in modeling these SEDs did not improve the fits. Formally, we limit the host extinction to $A_V \la 0.2$ mag (90\% confidence), assuming it has an extinction law similar to that in the Milky Way (Pei 1992). All of the SEDs along with the best-fit model are shown in the top panel of Figure 5. We show the evolution of the temperature and the radius of the blackbody in the bottom-left and bottom-right panels, respectively. There is clear evidence for a decrease in the blackbody temperature at late times before the X-ray flux drops off, and marginal evidence for an increase in the radius. However, given the large error bars on the radii, we cannot strongly rule out the possibility that the radius remains constant throughout. We also note, however, that with reduced $\chi^{2}$ values as low as we find in several epochs, the quoted uncertainties should be treated with some degree of caution. Regardless, it is clear that the emission has become much redder in our final epoch, with a largely flat SED in $\nu L_{\nu}$. \subsection{Optical/Near-Infrared Spectra} Our highest-SNR spectrum, obtained with Keck/LRIS on 2011 Aug. 28, is plotted in Figure 6. We also fit our Keck/LRIS spectra to single-temperature blackbody models, and find $T_{\mathrm{BB}} = (1.8 \pm 0.2) \times 10^{4}$\,K on 2011 Aug. 2 and $T_{\mathrm{BB}} = (2.3 \pm 0.1) \times 10^{4}$\,K on 2011 Aug. 28 (solid green line in Figure 6). These results are largely consistent with the values derived from our broadband photometry, providing additional confidence in the above analysis. For comparison, in Figure 6 we also plot the composite quasar (QSO) spectrum from SDSS (Vanden Berk et al.
2001), and a spectrum taken near maximum light of the TDE PS1-10jh (Gezari et al. 2012). In all cases the spectra of Sw~J2058+05 are dominated by a blue, featureless continuum. No obvious emission or absorption features are detected in any spectra, with the exception of the initial spectrum from 2011 June 1 presented in C12, from which the redshift of $z$ = 1.1853 was derived from narrow \ion{Mg}{2} and \ion{Fe}{2} absorption lines. Clearly, the strong, broad emission lines that dominate QSOs in the near-UV (e.g., \ion{Mg}{2}, \ion{C}{3}], and \ion{C}{4}) are not present in our spectra of Sw~J2058+05. In addition to a hot ($T_{\mathrm{BB}} \approx 3 \times 10^{4}$\,K) blackbody continuum, PS1-10jh displayed high-ionization \ion{He}{2} $\lambda$4686 and $\lambda$3203 emission lines. Our Keck/LRIS spectra do not probe sufficiently far into the rest-frame optical to cover the stronger \ion{He}{2} $\lambda$4686 feature. We see no evidence for broad emission at this location in our XSHOOTER spectra; however, the SNR is quite low in these data. A number of optically discovered TDEs also display broad H$\alpha$ emission (Arcavi et al. 2014; Holoien et al. 2014), although it is unclear if the presence/absence of H is due to properties of the disrupted star (Gezari et al. 2012) or the radial extent of the newly formed accretion disk (Guillochon et al. 2014). Again, we detect no evidence for broad emission lines at rest-frame H$\alpha$ (or any other Balmer lines, for that matter), but are limited by the low SNR at these wavelengths. We can also limit the presence of narrow, nebular emission lines from the underlying host galaxy. In particular, we do not detect either [\ion{O}{2}] $\lambda$3727 or H$\alpha$. If we assume unresolved emission lines at these wavelengths, we calculate limiting flux values of $f$(\ion{O}{2}) $< 4.8 \times 10^{-17}$\,erg\,s$^{-1}$\,cm$^{-2}$ and $f$(H$\alpha$) $< 6.5 \times 10^{-17}$\,erg\,s$^{-1}$\,cm$^{-2}$. 
Using the relations from Kennicutt (1998) between emission-line luminosity and star-formation rate, we limit the presence of recent star formation in the host of Sw~J2058+05 to be $\lesssim 5$\,M$_{\odot}$\,yr$^{-1}$ (uncorrected for extinction). This is consistent with an estimate of the star-formation rate derived from the UV ($F606W$) luminosity of the host galaxy, for which we find 0.8\,M$_{\odot}$\,yr$^{-1}$ (using the calibration from Kennicutt 1998). \subsection{Size of the Radio-Emitting Region} The detection of radio emission from Sw~J2058+05 confirms the presence of nonthermal electrons in the circumnuclear ejecta. We can apply standard equipartition arguments (Readhead 1994; Kulkarni et al. 1998) to place a lower limit on the size of the radio-emitting region. Using the formulation valid for relativistic outflows from Barniol Duran et al. (2013), our VLBA detection at $\Delta t \approx 40$\,d (rest frame) implies $R_{\mathrm{eq}} \gtrsim 7 \times 10^{16}$\,cm. Similarly, these observations, though not as constraining as those presented by C12\footnote{Applying the same formulation to the VLA data from C12, we find $R_{\mathrm{eq}} \gtrsim 6 \times 10^{16}$\,cm, $\Gamma_{\mathrm{eq}} \gtrsim 1.5$, and $E_{\mathrm{eq}} \gtrsim 4 \times 10^{49}$\,erg.}, imply at least transrelativistic expansion ($\Gamma_{\mathrm{eq}} \gtrsim 0.6$) from an energetic outflow ($E_{\mathrm{eq}} \gtrsim 3 \times 10^{49}$\,erg). The above limit on the physical size of the radio-emitting region corresponds to a lower limit on the angular size of $\Theta \gtrsim 3 \Psi$\,$\mu$as, where $\Psi$ is the jet opening angle. For any feasible jet opening angle, this result is consistent with the unresolved nature of the source in the VLBA imaging ($\Theta \lesssim 1$\,mas). 
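Several of the estimates in this section (the H$\alpha$-based star-formation limit and the angular size of the equipartition radius) rest on the luminosity and angular-diameter distances at $z = 1.1853$. A minimal sketch follows; the flat $\Lambda$CDM parameters ($H_0 = 70$\,km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_m = 0.3$) are our assumption, since the adopted cosmology is not quoted in this excerpt, and the Kennicutt (1998) H$\alpha$ coefficient is the standard $7.9 \times 10^{-42}$:

```python
import math

H0, OM = 70.0, 0.3          # assumed flat LambdaCDM parameters
C_KMS = 2.998e5             # speed of light, km/s
MPC_CM = 3.086e24           # cm per Mpc
Z = 1.1853                  # redshift of Sw J2058+05 (from the text)

def E(z):
    """Dimensionless Hubble parameter for flat LambdaCDM."""
    return math.sqrt(OM * (1.0 + z)**3 + (1.0 - OM))

# Comoving distance via trapezoidal integration of (c/H0) dz / E(z).
n = 10000
dz = Z / n
integral = sum(0.5 * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * dz
               for i in range(n))
d_c = (C_KMS / H0) * integral      # comoving distance, Mpc
d_l = (1.0 + Z) * d_c              # luminosity distance, Mpc (~8 Gpc)
d_a = d_c / (1.0 + Z)              # angular-diameter distance, Mpc

# Star-formation-rate limit from the H-alpha flux limit in the text,
# SFR [Msun/yr] = 7.9e-42 L(H-alpha) [erg/s] (Kennicutt 1998).
f_ha = 6.5e-17                     # erg/s/cm^2
l_ha = 4.0 * math.pi * (d_l * MPC_CM)**2 * f_ha
sfr = 7.9e-42 * l_ha               # ~4 Msun/yr, matching the <~5 limit

# Angular size of the equipartition radius R_eq >~ 7e16 cm.
r_eq = 7e16                        # cm
theta_uas = r_eq / (d_a * MPC_CM) * 206264.8e6   # microarcseconds
```

The resulting angular scale of a few $\mu$as is far below the $\sim 1$\,mas VLBA beam, consistent with the unresolved detection.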
\section{Discussion} \subsection{Radiation Mechanisms and the Broadband SED} To better understand the nature of Sw~J2058+05, we first consider the origin of the emission in the three regimes probed here: radio, UVOIR, and X-ray. We derived a robust lower limit on the size of the radio-emitting region (based solely on equipartition arguments in \S3.5), $R_{\mathrm{radio}} \gtrsim 7 \times 10^{16}$\,cm. Together with more stringent limits on the bulk Lorentz factor from C12 ($\Gamma \gtrsim 1.5$), we conclude that the radio emission is generated by the forward shock of a newly formed, at least mildly relativistic jet. An identical conclusion was reached by several authors (e.g., Zauderer et al. 2011; Bloom et al. 2011) in the case of Sw~J1644+57. The X-rays, on the other hand, must clearly have a distinct origin. The rapid variability on a rest-frame timescale of $\lesssim 500$\,s requires the size of the X-ray-emitting region to be $R_{\mathrm{X-ray}} \lesssim c\, \delta t \approx 2 \times 10^{13}$\,cm. This clearly rules out a forward-shock origin. However, the tremendous peak X-ray luminosity, many orders of magnitude above Eddington for any feasible black hole, suggests some association with the newly formed jet (as does the rapid turnoff; see below). One possibility is that the X-rays are generated in the base of the jet (e.g., Bloom et al. 2011; Zauderer et al. 2011), though the process by which this occurs remains a mystery. Again, the analogy with Sw~J1644+57 holds well. Finally, we have demonstrated that the UVOIR data, both photometry and spectra, are well fit by a single-temperature blackbody with $T_{\mathrm{BB}} \approx$ few $\times 10^{4}$\,K. The inferred blackbody radius, which appears to remain roughly constant, is $R_{\mathrm{opt}} \approx 10^{15}$\,cm. Together with the long-lived blue colors, the radius also seems to disfavor a forward-shock origin for the UVOIR component.
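The causality limit on the X-ray-emitting region, and its contrast with the blackbody radius, amount to simple arithmetic; a minimal sketch using the rest-frame timescale and radius quoted above:

```python
C_CM_S = 3.0e10      # speed of light, cm/s
dt_rest = 500.0      # rest-frame variability timescale, s (from the text)

# Light-crossing (causality) limit on the X-ray-emitting region:
r_xray = C_CM_S * dt_rest      # ~1.5e13 cm, of order the quoted 2e13 cm

# Compare with the inferred UVOIR blackbody radius:
r_opt = 1e15                   # cm (from the text)
ratio = r_opt / r_xray         # the optical photosphere is ~2 dex larger
```

The two size scales differ by nearly two orders of magnitude, reinforcing the conclusion that the X-ray and UVOIR components have distinct origins.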
Similarly, the derived blackbody spectrum severely underpredicts the observed X-ray flux. Instead, these values are consistent with spectral studies of nonrelativistic TDE candidates in the literature with apparent blackbody temperatures and radii in the range of (1--10) $\times 10^{4}$\,K and (0.1--20) $\times 10^{15}$\,cm, respectively (e.g., Gezari et al. 2009b, 2012; Armijo \& de Freitas Pacheco 2013; Guillochon et al. 2014; Chornock et al. 2014; Cenko et al. 2012b; Holoien et al. 2014; Arcavi et al. 2014), although there are some TDE candidates that tend to show higher disk temperatures of $\ga 10^{5}$\,K accompanied by smaller emitting regions of size $\la 10^{13}$\,cm (e.g., Gezari et al. 2008). However, for any plausible black hole mass, the blackbody radius is orders of magnitude larger than the radius at which disruption should occur. Such large radii have been attributed to reprocessing in some external region (see, for example, the numerical simulations of Guillochon et al. 2014). It is important to note here that while Sw~J1644+57 lacked detectable UV and optical emission, the high degree of polarization observed in the NIR was attributed to jetted emission from the forward shock (Wiersema et al. 2012), and not from the (presumably largely isotropic) accretion disk. Naively, unless the reprocessing region was nonisotropic, we would expect a low degree of optical polarization from Sw~J2058+05 if this simplistic picture is correct. For future relativistic TDE candidates, polarization observations would be an important test of this model. \subsection{Energetics} Using the best-fit blackbody luminosities and integrating the resulting light curve (using the trapezoidal rule) in the rest frame between epochs 5.7 and 181.4\,d, we estimate the total UVOIR energy radiated to be $\sim 5 \times 10^{51}$\,erg. Similarly, we integrated the X-ray light curve (top panel of Figure 1) and estimate the total isotropic energy to be $\sim 4 \times 10^{53}$\,erg.
Assuming an opening angle of $\sim 0.1$ rad, similar to what has been estimated for Sw~J1644+57 (Zauderer et al. 2013; Metzger et al. 2012), we measure the total, beaming-corrected X-ray energy output to be $\sim 4 \times 10^{51}$\,erg. The bolometric luminosity is, however, expected to be a factor of a few higher than the X-ray luminosity. Assuming the bolometric value is a factor of 3 (similar to that of Sw~J1644+57; Burrows et al. 2011), one can estimate the total accreted mass onto the black hole using Equation 5 of the supplemental information of Burrows et al. (2011). We find this value to be $\sim 0.1$\,M$_{\odot}$, which is comparable to Sw~J1644+57's 0.2\,M$_{\odot}$ (Burrows et al. 2011; Zauderer et al. 2013), both appropriate for disruption of a $\sim 1$\,M$_{\odot}$ star. \subsection{Nature of the Rapid X-ray Dropoff} The X-ray emission from Sw~J2058+05 drops abruptly between days 200 and 300 (rest frame), consistent with what was seen for Sw~J1644+57 (top panel of Figure 1). More specifically, Sw~J2058+05's intensity decreases by a factor of $\ga 160$ within a span of $\Delta t/t \le 0.95$ compared to Sw~J1644+57's factor of $\sim 170$ decline over a span of $\Delta t/t \la 0.2$ (Levan \& Tanvir 2012; Sbarufatti et al. 2012; Zauderer et al. 2013). Interestingly, in both of these sources, the X-ray dimming occurs on a comparable timescale after disruption. In the case of Sw~J1644+57, Zauderer et al. (2013) interpreted this sudden decrease in the flux as an accretion-mode transition from a super-Eddington to a sub-Eddington state. This is consistent with the transition timescale predicted from numerical simulations (see Figure 4 of Evans \& Kochanek 1989 and Figure 2 of De Colle et al. 2012). Assuming the same process is responsible for the abrupt flux change in Sw~J2058+05, we can attempt to estimate the mass of the black hole by equating the luminosity at turnoff to the Eddington luminosity.
From the X-ray light curve (see Tables 1 \& 4), it is evident that the isotropic X-ray luminosity drops from $1.3 \times 10^{45}$\,erg\,s$^{-1}$ to less than $8.4 \times 10^{42}$\,erg\,s$^{-1}$, suggesting an Eddington luminosity somewhere between these two limits. Using these two values and assuming radiative efficiency, beaming angle, and bolometric correction values of 0.1, 0.1 rad, and 30\%, respectively (similar to Sw~J1644+57; Burrows et al. 2011), we constrained the black hole mass $M_{\rm BH}$ to be $10^{4}$\,M$_{\odot} \la M_{\rm BH} \la 2 \times 10^{6}$\,M$_{\odot}$. Furthermore, numerical simulations suggest that the time to dropoff (transition from super-Eddington to sub-Eddington) since the disruption is shorter for more massive black holes (see Figure 2 of De Colle et al. 2012). The X-ray dropoff in Sw~J2058+05 occurs $\sim 100$\,d earlier than that in Sw~J1644+57, suggesting that its black hole may be more massive. However, it is interesting to note that even the optical light curves of Sw~J2058+05 undergo an abrupt change during an epoch roughly consistent with the X-ray dimming (see the bottom panel of Figure 1). We find that the optical flux, for instance in the $r$ band, drops by a factor of at least 5 within a narrow span of $\Delta t /t \approx 0.16$. We speculate, within the context of the following simple model, that the X-rays are coming from the base of the jet and the optical originates from the reprocessed UV/soft-X-ray disk photons in the ambient medium. In such a scenario, the proposed super-Eddington to sub-Eddington accretion transition would presumably change the accretion-disk structure to lower its emission, thus explaining the reduction in the amount of the reprocessed light. Obviously, the true situation is more complicated, with specific details about the radiative efficiency, beaming, and other factors. It can be better understood with more detailed modeling, but this is beyond the scope of the current paper.
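The order-of-magnitude numbers in this and the preceding subsection can be reproduced with a short sketch. The values are those quoted in the text; the exact correction chain behind the quoted Eddington-mass bracket is not fully spelled out in this excerpt, so the Eddington masses below (computed from the isotropic luminosities alone) should be read as upper envelopes:

```python
MSUN_G = 1.989e33                 # solar mass, g
C_CM_S = 3.0e10                   # speed of light, cm/s
L_EDD_PER_MSUN = 1.26e38          # Eddington luminosity, erg/s per Msun

# Beaming-corrected X-ray energy for a two-sided jet with half-opening
# angle ~0.1 rad (numbers taken from the text):
e_iso = 4e53                      # erg, isotropic-equivalent X-ray energy
theta = 0.1                       # rad
f_beam = theta**2 / 2.0           # per-jet solid-angle fraction (small angle)
e_jet = e_iso * 2.0 * f_beam      # ~4e51 erg for both jets

# Total accreted mass, with a bolometric correction of 3 and radiative
# efficiency eta = 0.1, as assumed in the text:
eta = 0.1
e_bol = 3.0 * e_jet
m_acc = e_bol / (eta * C_CM_S**2) / MSUN_G    # ~0.1 Msun

# Eddington masses implied by the isotropic turnoff luminosities; the
# paper's beaming and bolometric corrections reduce this bracket to the
# quoted ~1e4--2e6 Msun.
m_edd = [l_iso / L_EDD_PER_MSUN for l_iso in (8.4e42, 1.3e45)]
```

The accreted mass of $\sim 0.1$\,M$_{\odot}$ and the jet energy of $\sim 4 \times 10^{51}$\,erg follow directly, matching the text.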
On the other hand, the long-term X-ray light curve of Sw~J2058+05 does not exhibit the numerous sudden dips observed in Sw~J1644+57 (see Figure 1). In the case of Sw~J1644+57, it has been argued that the X-ray dips originate from jet precession and nutation, which causes it briefly to move out of our line of sight (e.g., Saxton et al. 2012). We speculate, based on the lack of such dips in Sw~J2058+05, that its jet may be more stable compared to Sw~J1644+57. However, given the poor sampling of the X-ray light curve, the current data cannot completely rule out the presence of dips in Sw~J2058+05. \subsection{Other $M_{\mathrm{BH}}$ Estimates} The mass limits derived above based on the X-ray turnoff are consistent with other methods of estimating $M_{\mathrm{BH}}$. First, we can use the X-ray variability timescale to place an upper limit on black hole mass. Equating the limit on the size of the X-ray-emitting region with a Schwarzschild radius suggests a compact object of mass less than $5 \times 10^{7}$\,M$_{\odot}$. Also, assuming that the optical flux in our final two {\it HST} epochs is dominated by the host galaxy (and not transient emission), we can constrain the mass of the central supermassive black hole using the well-known bulge luminosity vs. black hole mass relations (e.g., Lauer et al. 2007). Neglecting for the moment K-corrections [aside from the cosmological $-2.5 \log(1+z)$ factor], the distance modulus at $z = 1.1853$ implies an absolute magnitude of $-18.7$ from the $F606W$ observation (approximately rest-frame $U$ band) and $-19.4$ from the $F160W$ observation (approximately rest-frame $I$ band). Both suggest $M_V \approx -19$\,mag, or an inferred supermassive black hole mass of $M_{\rm BH} \lesssim 3 \times 10^{7}$\,M$_{\odot}$. While there is significant scatter in the bulge luminosity vs.
black hole mass relation, our limits are conservative in the sense that they assume \textit{all} of the observed luminosity derives from the bulge (and none from, say, a disk). At the very least, we can robustly conclude that $M_{\rm BH} < 10^{8}$\,M$_{\odot}$, the limit above which a nonspinning black hole cannot tidally disrupt a solar-mass main-sequence star (Rees 1988). \section{Conclusions} The goal of this work is to use multiwavelength data to study the long-term ($\sim 1$\,yr) behavior of the candidate relativistic TDE Sw~J2058+05. Our main conclusions are as follows. (1) The long-term X-ray turnoff and the host-galaxy nuclear association of Sw~J2058+05 strengthen the similarity between Sw~J2058+05 and Sw~J1644+57. (2) Rapid X-ray variability on a timescale $\la 500$\,s at late times (before the X-ray dropoff) suggests that X-ray photons originate near the black hole and not from a forward shock. If the X-rays were to come from the forward shock, they would vary on much longer timescales. (3) Based on the blackbody modeling of the optical data of Sw~J2058+05 (in ways not possible with Sw~J1644+57 because of the large host-galaxy extinction), we find that the optical originates from farther out ($\sim 10^{15}$\,cm) than the X-rays. Also, the UVOIR SED modeling severely underpredicts the X-ray emission. Lastly, the early-time optical data did not show variability on timescales of a few thousand seconds, suggesting again an emitting region larger than a few thousand light-seconds. However, the X-rays originate very close to the black hole. We conclude based on these lines of evidence that the optical and the X-rays have distinct origins. (4) The size of the optically emitting region of Sw~J2058+05 suggests that it originates from reprocessing. The fact that reprocessing is seen in X-ray-selected events (as well as optical ones) suggests it is a relatively common phenomenon.
(5) In Sw~J2058+05-like events, the X-ray dropoff (both the flux and the timescale measurements) could be a probe of the black hole mass. (6) These observations imply the need for improved modeling to better understand Sw~J2058+05-like events. {\it Facilities:} \facility{Swift (XRT)}, \facility{XMM (EPIC)}, \facility{Chandra (ACIS)}, \facility{VLT (FORS2, HAWK-I, XSHOOTER)}, \facility{Keck (LRIS)}, \facility{Gemini:South (GMOS-S)}, \facility{HST (WFC3, ACS)}, \facility{VLBA} \acknowledgments We thank the \textit{XMM-Newton} and \textit{HST} teams, in particular Project Scientist N.~Schartel and STScI director M.~Mountain, for the approval and prompt scheduling of our DD requests. We are also grateful to James Guillochon and Ryan Chornock for valuable discussions. D.R.P. is grateful for helpful discussions with Sjoert van Velzen and Nick Stone. S.B.C. thanks the Aspen Center for Physics and NSF Grant \#1066293 for hospitality during the preparation of this manuscript. K.L.P. acknowledges support from the UK Space Agency. Support for this work was provided by the National Aeronautics and Space Administration (NASA) through Chandra Award Number GO3-14107X issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. D.R.P. and S.B.C. also acknowledge support from \textit{HST} grant GO-13611-006-A. The work of A.V.F. was made possible by NSF grant AST-1211916, the TABASGO Foundation, and the Christopher R. Redlich Fund. A.V.F. and S.B.C. also acknowledge the support of Gary and Cynthia Bengier. Finally, we would like to thank the referee for his/her careful comments and suggestions.
The scientific results reported in this article are based in part on observations made by the {\it Chandra} X-ray Observatory; the NASA/ESA {\it Hubble Space Telescope}, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555; and {\it XMM-Newton}, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA; the observatory was made possible by the generous financial support of the W. M. Keck Foundation. Observations were also obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil) and Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina). This work is also based on observations made with ESO Telescopes at the La Silla and Paranal Observatories. We acknowledge the use of public data from the Swift data archive. \newpage \begin{figure} \begin{center} \vspace{-.25in} \includegraphics[width=5.5in, height=7.in, angle=0]{Figure1.ps} \end{center} {\textbf{Figure 1:} {\it Top panel:} Comparison of the long-term X-ray (0.3--10\,keV) light curve of Sw~J2058+05 (filled circles and squares) with that of Sw~J1644+57 (gray; adapted from Burrows et al. 2011). An abrupt decline in the X-ray luminosity seen in Sw~J1644+57 (Zauderer et al. 2013) is also evident in Sw~J2058+05.
The magenta squares are flux estimates of Sw~J2058+05 from {\it XMM-Newton}/EPIC, while the blue symbols are the upper limits from {\it Chandra}/ACIS (see Table 1). The discovery time of Sw~J2058+05 is not precisely constrained, but we refer to the time of discovery as 00:00:00 on 2011 May 17 (MJD = 55698) as per Cenko et al. (2012). The rest-frame time was thus estimated as (time $-$ 55698)/$(1+z)$, where $z = 1.1853$. {\it Bottom panel:} The long-term light curve of Sw~J2058+05 in various optical bands (data available in Table 2), showing a sharp decline similar to that seen in the X-rays. } \label{fig:figure1} \end{figure} \clearpage \begin{figure} \begin{center} \includegraphics[width=6.5in, height=4.5in, angle=0]{Figure2.ps} \end{center} {\textbf{Figure 2:} {\it XMM-Newton}/EPIC (both pn and MOS) X-ray (0.3--10\,keV) light curve of Sw~J2058+05 (filled black circles), highlighting X-ray variability on timescales of $\sim 500$\,s. The light curve was derived from the longest good time interval of 50\,ks from the {\it XMM-Newton} observation with ID 0694830201 and binned at 500\,s. The background during the same time is shown in red. The source light curve was fit to a constant-flux model and shows clear variability ($\chi^2 = 236$ for 102 degrees of freedom). } \label{fig:figure2} \end{figure} \clearpage \begin{figure} \begin{center} \includegraphics[width=5.5in, height=4.25in, angle=0]{Figure3.ps} \end{center} {\textbf{Figure 3:} {\it XMM-Newton}/EPIC-pn (ObsID: 0694830201) X-ray (0.3--10\,keV) power density spectrum of Sw~J2058+05. The power spectrum is Leahy normalized (Leahy 1983) with a Poisson noise level of 2 (dashed horizontal line). The frequency resolution is 7.8\,mHz and each bin is an average of 188 independent power spectral measurements. The confidence limits ($3\sigma$/99.73\% and $3.9\sigma$/99.99\%) are indicated by the two horizontal dotted lines. The spectrum is featureless and consistent with being flat (white noise).
} \label{fig:figure3} \end{figure} \clearpage \begin{figure} \begin{center} \includegraphics[width=6.5in, height=4.15in, angle=0]{Figure4.ps} \end{center} {\textbf{Figure 4:} {\it Top-left panel:} {\it HST}/WFC3 $F160W$ image of the location of Sw~J2058+05 obtained on 2011 Aug. 30 (two months after Sw~J2058+05 reached its peak luminosity). {\it Top-right panel:} An image of Sw~J2058+05 with the identical instrument configuration from 2013 Dec. 10 (long after the outburst when the optical emission is dominated by the host galaxy). {\it Bottom-left panel: } Digital image subtraction of the two $F160W$ frames. To within measurement uncertainties, the location of the resulting transient emission is consistent with the centroid of the host galaxy. {\it Bottom-right panel: } Zoomed-in image of the host galaxy in the $F606W$ filter. The centroid of the host galaxy is indicated by the white cross. Our most precise astrometric constraints come from aligning this $F606W$ image from 2014 Aug 31 with a previous HST image of the transient from 2011 Aug 30 in the $F475W$ filter, for which the 68\% confidence uncertainty in the astrometric tie between the two frames is 23 mas in radius (blue circle). The VLBA position for Sw~J2058+05, along with the uncertainty in connecting the VLBA position to the HST astrometric frame (68\% confidence radius of 90 mas) is indicated by the red circle. } \label{fig:figure4} \end{figure} \clearpage \begin{figure} \begin{center} \includegraphics[width=6.in, height=5.85in, angle=0]{Figure5.ps} \end{center} {\textbf{Figure 5:} {\it Top panel:} UVOIR SEDs of Sw~J2058+05 at various epochs (filled circles). The best-fit single-temperature blackbodies (solid curves) are also shown. $\Delta$t$_{rest}$ refers to days in rest frame since discovery. {\it Bottom-left panel:} The blackbody temperature as a function of the rest-frame time since discovery. {\it Bottom-right panel:} The blackbody radius as a function of the rest-frame time since discovery. 
All of the error bars indicate 90\% confidence limits. } \label{fig:figure5} \end{figure} \clearpage \begin{figure} \begin{center} \includegraphics[width=6.in, height=4.85in, angle=0]{Figure6.ps} \end{center} {\textbf{Figure 6:} Optical and NIR spectra of Sw~J2058+05 (black) taken with Keck/LRIS on 2011 Aug. 28 ($\sim 47$\,d after discovery, measured in the rest frame). The solid green line shows a fit to a single blackbody with $T_{\mathrm{BB}} = (2.3 \pm 0.1) \times 10^{4}$\,K. For comparison, the composite SDSS spectrum of quasars (Vanden Berk et al. 2001) and the spectrum of the TDE PS 1-10jh taken at its peak luminosity (Gezari et al. 2012) are shown in red and blue, respectively. The spectrum of Sw~J2058+05 does not contain any apparent absorption or emission lines at this stage.} \label{fig:figure6} \end{figure} \clearpage \begin{table} \caption{Summary of X-ray Spectral Modeling of Sw J2058+05} \begin{center} {\footnotesize \begin{tabular}{ccccc} \toprule \toprule \\ {\bf Instrument} & {\bf ObsID} & {\bf MJD Date}$\dagger$$\dagger$ & {\bf X-ray Flux$^{\ast}$} & {\bf Notes}$\dagger$ \\ \\ \midrule {\it Swift}/XRT & 00032004001 & 55708.915 & 48.12$^{+1.61}_{-1.70}$ & PC Mode \\ {\it Swift}/XRT & 00032004002 & 55711.582 & 64.49$^{+2.20}_{-2.19}$ & WT Mode \\ {\it Swift}/XRT & 00032004003 & 55714.412 & 59.37$^{+2.24}_{-2.08}$ & WT Mode \\ {\it Swift}/XRT & 00032004004 & 55717.879 & 48.78$^{+1.62}_{-1.51}$ & WT Mode \\ {\it Swift}/XRT & 00032004005 & 55720.568 & 31.78$^{+1.66}_{-1.59}$ & WT Mode \\ {\it Swift}/XRT & 00032004007 & 55726.045 & 12.61$^{+0.90}_{-0.88}$ & WT Mode \\ {\it Swift}/XRT & 00032004008 & 55729.110 & 15.17$^{+1.08}_{-1.02}$ & WT Mode \\ {\it Swift}/XRT & 00032004009 & 55735.539 & 9.58$^{+0.84}_{-0.78}$ & WT Mode \\ {\it Swift}/XRT & 00032004010 & 55738.548 & 8.01$^{+0.79}_{-0.89}$ & WT Mode \\ {\it Swift}/XRT & 00032026001 & 55743.760 & 3.33$^{+0.74}_{-0.64}$ & WT Mode \\ {\it Swift}/XRT & 00032026002 & 55748.457 & 2.85$^{+0.38}_{-0.43}$ & 
WT Mode \\ {\it Swift}/XRT & 00032026003 & 55753.531 & 1.59$^{+0.38}_{-0.34}$ & PC Mode \\ {\it Swift}/XRT & 00032026004 & 55760.699 & 1.90$^{+0.36}_{-0.34}$ & PC Mode \\ {\it Swift}/XRT & 00032026005 & 55763.907 & 2.69$^{+0.64}_{-0.64}$ & PC Mode \\ {\it Swift}/XRT & 00032026006 & 55768.723 & 0.99$^{+0.25}_{-0.23}$ & PC Mode \\ {\it Swift}/XRT & 00032026007 & 55773.203 & 1.30$^{+0.28}_{-0.32}$ & PC Mode \\ {\it Swift}/XRT & 00032026009 & 55783.373 & 1.05$^{+0.24}_{-0.24}$ & PC Mode \\ {\it Swift}/XRT & 00032026010-012 & 55806.229 & 0.64$^{+0.17}_{-0.15}$ & PC Mode \\ {\it Swift}/XRT & 00032026013-015 & 55853.296 & 0.37$^{+0.09}_{-0.11}$ & PC Mode \\ {\it Swift}/XRT & 00032026016-021 & 55885.110 & 0.14$^{+0.05}_{-0.05}$ & PC Mode \\ {\it XMM-Newton}/EPIC & 0679380801 & 55885.635 & 0.19$^{+0.02}_{-0.02}$ & Exposure: 23 ks \\ {\it XMM-Newton}/EPIC & 0679380901 & 55887.787 & 0.16$^{+0.02}_{-0.02}$ & Exposure: 29 ks \\ {\it XMM-Newton}/EPIC & 0694830201 & 56049.048 & 0.17$^{+0.01}_{-0.01}$ & Exposure: 55 ks \\ {\it Chandra}/ACIS & 14975 & 56383.806 & $\le$ 1.05$\times$10$^{-3}$ & Exposure: 30 ks\\ {\it Chandra}/ACIS & 16498 & 56594.972 & $\le$ 1.76$\times$10$^{-3}$ & Exposure: 20 ks\\ {\it Chandra}/ACIS & 14976 & 56597.639 & $\le$ 1.63$\times$10$^{-3}$ & Exposure: 30 ks\\ \bottomrule \bottomrule \\ \end{tabular}\\ {\textsuperscript{$\dagger$$\dagger$}The source was discovered on MJD 55698. \textsuperscript{$\ast$}The X-ray fluxes were estimated in the bandpass of 0.3--10\,keV and have units of $10^{-12}$\,erg\,s$^{-1}$\,cm$^{-2}$. These represent the values just outside our Galaxy. The X-ray luminosities in the top panel of Figure 1 were estimated as flux $\times 4\pi D^{2}$, where $D$ is 8200\,Mpc. See text for details on the modeling. \textsuperscript{$\dagger$}PC refers to photon counting and WT to windowed timing. 
} } \end{center} \end{table} \clearpage \begin{table} \begin{center} \caption{A Summary of UV/Optical/IR observations of Sw J2058+05} {\scriptsize \begin{tabular}{cccccc} \toprule \toprule \\ {\bf UTC} & {\bf MJD Date} & {\bf Telescope} & {\bf Filter} & {\bf Exposure} & {\bf AB Magnitude$^{\ast}$} \\ {\bf date} & & & & {\bf (seconds)} & \\ \\ \midrule 2011 Aug 12.05 & 55785.05 & VLT - HAWK-I & J & 1020 & $22.72 \pm 0.33 $ \\ 2011 Aug 12.07 & 55785.07 & VLT - HAWK-I & K & 1080 & $>21.6$ \\ 2011 Aug 20.07 & 55793.07 & VLT - FORS 2 & u & 840.0 & $22.86 \pm 0.13$ \\ 2011 Aug 20.07 & 55793.07 & VLT - FORS 2 & g & 120.0 & $22.69 \pm 0.11$ \\ 2011 Aug 20.07 & 55793.07 & VLT - FORS 2 & r & 120.0 & $23.07 \pm 0.11$ \\ 2011 Aug 20.08 & 55793.08 & VLT - FORS 2 & i & 200.0 & $22.84 \pm 0.11$ \\ 2011 Aug 20.08 & 55793.08 & VLT - FORS 2 & z & 720.0 & $22.79 \pm 0.14$ \\ 2011 Aug 30.56 & 55803.56 & HST - WFC3 & F160W (H band)$^{\dagger\dagger}$ & 1196.9 & $23.36 \pm 0.02$ \\ 2011 Aug 30.58 & 55803.58 & HST - WFC3 & F475W (SDSS g)$^{\dagger}$ & 1110.0 & $23.06 \pm 0.02$ \\ 2011 Sept 2.21 & 55806.21 & VLT - HAWK-I & J & 1020 & $ 22.22 \pm 0.22 $ \\ 2011 Sept 2.21 & 55806.21 & VLT - HAWK-I & K & 1080 & $ 21.87 \pm 0.25 $ \\ 2011 Sept 22.06 & 55826.06 & VLT - FORS 2 & r & 120.0 & $22.98 \pm 0.10 $ \\ 2011 Sept 22.07 & 55826.07 & VLT - FORS 2 & u & 840.0 & $23.11 \pm 0.11$ \\ 2011 Sept 22.07 & 55826.07 & VLT - FORS 2 & g & 120.0 & $22.97 \pm 0.13$ \\ 2011 Sept 22.08 & 55826.07 & VLT - FORS 2 & i & 200.0 & $23.10 \pm 0.14$ \\ 2011 Sept 22.08 & 55826.07 & VLT - FORS 2 & z & 720.0 & $23.32 \pm 0.08$ \\ 2011 Sept 24.99 & 55828.99 & VLT - HAWK-I & J & 1020 & $22.46 \pm 0.16 $ \\ 2011 Sept 25.01 & 55829.01 & VLT - HAWK-I & K & 1080 & $21.60 \pm 0.20 $ \\ 2011 Nov 20.02 & 55885.02 & Gemini-S - GMOS & u & 300.5 & $>24.5$ \\ 2011 Nov 20.02 & 55885.02 & Gemini-S - GMOS & g & 100.5 & $23.93 \pm 0.21 $ \\ 2011 Nov 20.03 & 55885.02 & Gemini-S - GMOS & r & 100.5 & $23.53 \pm 0.16 $ \\ 2011 Nov 
30.96 & 55894.96 & HST - WFC3 & F160W (H band)$^{\dagger\dagger}$ & 1196.9 & $23.56 \pm 0.02$ \\ 2011 Nov 30.99 & 55894.99 & HST - WFC3 & F475W (SDSS g)$^{\dagger}$ & 1110.0 & $23.89 \pm 0.02$ \\ 2012 June 16.32 & 56094.32 & VLT - FORS 2 & r & 400.0 & $24.24 \pm 0.17 $ \\ 2012 June 16.33 & 56094.33 & VLT - FORS 2 & g & 400.0 & $24.78 \pm 0.15$ \\ 2012 June 16.34 & 56094.34 & VLT - FORS 2 & u & 840.0 & $25.14 \pm 0.38$ \\ 2012 June 16.35 & 56094.35 & VLT - FORS 2 & i & 240.0 & $24.32 \pm 0.17$ \\ 2012 June 16.35 & 56094.35 & VLT - FORS 2 & z & 720.0 & $23.99 \pm 0.23$ \\ 2012 July 18.27 & 56126.27 & VLT - FORS2 & u & 840 & 25.39 $\pm$ 0.26\\ 2012 July 18.26 & 56126.26 & VLT - FORS2 & g & 400 & 24.97 $\pm$ 0.14\\ 2012 July 18.26 & 56126.26 & VLT - FORS2 & r & 400 & 24.48 $\pm$ 0.14\\ 2012 July 18.28 & 56126.28 & VLT - FORS2 & i & 240 & 24.20 $\pm$ 0.15 \\ 2012 July 18.29 & 56126.29 & VLT - FORS2 & z & 720 & 23.81 $\pm$ 0.25 \\ 2012 Aug 22.09 & 56161.09 & VLT - FORS2 & u & 840 & $>25.8$ \\ 2012 Aug 22.08 & 56161.08 & VLT - FORS2 & g & 400 & $>26.4$\\ 2012 Aug 22.08 & 56161.08 & VLT - FORS2 & r & 400 & $>26.0$\\ 2012 Aug 22.10 & 56161.10 & VLT - FORS2 & i & 240 & $>24.9$ \\ 2012 Aug 22.10 & 56161 & VLT - FORS2 & z & 720 & $>25.2$\\ 2012 Oct 09.01 & 56209.01 & VLT - FORS2 & u & 840 &$>26.0$ \\ 2012 Oct 09.01 & 56209.01 & VLT - FORS2 & g & 400 & $>26.3$ \\ 2012 Oct 09.00 & 56209.00 & VLT - FORS2 & r & 400 & $>26.2$ \\ 2012 Oct 09.02 & 56209.02 & VLT - FORS2 & i & 240 & $>24.8$ \\ 2012 Oct 09.03 & 56209.03 & VLT - FORS2 & z & 720 & $>25.2$ \\ 2013 Dec 10.58 & 56636.58 & HST - WFC3 & F160W (H band)$^{\dagger\dagger}$ & 2611.8 & $25.99 \pm 0.08$ \\ 2014 Aug 31.48 & 56900.48 & HST/ACS - WFC & F606W & 5236.0 & $26.78 \pm 0.10$ \\ \bottomrule \bottomrule \\ \end{tabular}\\ {\textsuperscript{$\ast$}Reported magnitudes have not been corrected for Galactic extinction (E(B - V) = 0.095 mag; Schlafly \& Finkbeiner 2011). Upper limits represent 3$\sigma$ uncertainties. 
\textsuperscript{$\dagger$}{\it HST/F475W} filter has a bandpass similar to SDSS's g band. \textsuperscript{$\dagger\dagger$}{\it HST/F160W} filter has a bandpass similar to the standard H band.} } \end{center} \end{table} \clearpage \begin{table} \begin{center} \caption{Optical/Near-IR Spectra of Sw J2058+05} \begin{tabular}{ccccc} \toprule \toprule \\ {\bf Date} & {\bf Telescope/Instrument} & {\bf Wavelength} & {\bf Exposure} & {\bf SNR$^{\ast}$} \\ {\bf (UT)} & & {\bf (\AA)} & {\bf (s)} & \\ \\ \midrule 2011 Aug 2.41 & Keck/LRIS (blue) & 3360--5600 & 1800.0 & 3.4 \\ 2011 Aug 2.41 & Keck/LRIS (red) & 5600--10,200 & 1800.0 & 2.0 \\ 2011 Aug 4.16 & VLT/FORS & 3400--6100 & 4800.0 & 2.4 \\ 2011 Aug 28.47 & Keck/LRIS (blue) & 3360--5600 & 1800.0 & 5.3 \\ 2011 Aug 28.47 & Keck/LRIS (red) & 5600--10,200 & 1800.0 & 2.9 \\ 2011 Sep 2.04 & VLT/XSHOOTER (UV) & 3000--5560 & 3600.0 & 0.5 \\ 2011 Sep 2.04 & VLT/XSHOOTER (VIS) & 5300--10,200 & 3600.0 & 0.2 \\ 2011 Sep 2.04 & VLT/XSHOOTER (NIR) & 9900--24,800 & 3600.0 & 0.1 \\ \bottomrule \bottomrule \\ \end{tabular}\\ {\textsuperscript{$\ast$}Per resolution element.} \end{center} \end{table} \clearpage \begin{table} \begin{center} \caption{Summary of {\it XMM-Newton} X-ray (0.3--10\,keV) Spectral Modeling of Sw J2058+05} \begin{tabular}{cccccc} \toprule \toprule \\ {\bf ObsID$^{a}$} & {\bf Absorbing} & {\bf Power-law} & {\bf Power-law} & {\bf $\chi^2$/dof} & {\bf X-ray Flux$^{d}$} \\ & {\bf column$^{b}$} & {\bf index$^{c}$} & {\bf Normalization} & & \\ \\ \midrule 0679380801 & 0.23$^{+0.15}_{-0.13}$ & 1.89$^{+0.15}_{-0.13}$ & 3.6$^{+0.4}_{-0.4}$ & 53/48 & 0.19$^{+0.02}_{-0.02}$ \\ 0679380901 & 0.15$^{+0.18}_{-0.16}$ & 1.81$^{+0.18}_{-0.16}$ & 2.8$^{+0.4}_{-0.3}$ & 55/64 & 0.16$^{+0.02}_{-0.02}$ \\ 0694830201 & 0.19$^{+0.13}_{-0.12}$ & 1.67$^{+0.10}_{-0.10}$ & 2.5$^{+0.2}_{-0.2}$ & 86/94 & 0.17$^{+0.01}_{-0.01}$ \\ \bottomrule \bottomrule \\ \end{tabular}\\ {\textsuperscript{a}{\it XMM-Newton} assigned observation ID. 
\textsuperscript{b}Units of $10^{22}$\,atoms\,cm$^{-2}$. \textsuperscript{c}The X-ray spectra were modeled with {\it phabs$*$zwabs$*$pow} in XSPEC. The Galactic column ({\it phabs}) was fixed at $0.088 \times 10^{22}$\,atoms\,cm$^{-2}$ (Kalberla et al. 2005; Willingale et al. 2013) and the redshift in {\it zwabs} was fixed at 1.1853 (C12). \textsuperscript{d}The X-ray fluxes were estimated in the bandpass of 0.3--10\,keV and have units of $10^{-12}$\,erg\,s$^{-1}$\,cm$^{-2}$. } \end{center} \end{table} \clearpage \begin{table} \begin{center} \caption{Summary of UVOIR SED Modeling of Sw J2058+05$^{a}$} \begin{tabular}{ccccc} \toprule \toprule \\ {\bf UTC} & {\bf MJD Date} & {\bf Blackbody} & {\bf Blackbody} & {\bf $\chi^2$/dof} \\ & {\bf (rest-frame days since discovery)} & {\bf temperature$^{b}$} & {\bf radius$^{c}$} & \\ \\ \midrule 2011 May 29 & 55710.41 & 2.9$\pm$0.5 & 66.6$\pm$12.4 & 0.3/2 \\ & (5.67) & & & \\ 2011 June 3 & 55715.40 & 2.9$\pm$0.5 & 65.4$\pm$12.9 & 0.3/2 \\ & (7.96) & & & \\ 2011 June 10 & 55722.26 & 4.9$\pm$1.1 & 41.3$\pm$8.1 & 3.8/3 \\ & (11.10) & & & \\ 2011 Aug 20 & 55793.07 & 2.6$\pm$0.2 & 70.3$\pm$8.2 & 12.3/3 \\ & (43.51) & & & \\ 2011 Sept 22 & 55826.07 & 2.6$\pm$0.2 & 65.2$\pm$5.4 & 0.2/3 \\ & (58.60) & & & \\ 2012 June 16 & 56094.34 & 1.5$\pm$0.2 & 71.3$\pm$15.5 & 2.5/3 \\ & (181.37) & & & \\ 2012 July 18 & 56126.27 & 1.4$\pm$0.1 & 88.1$\pm$15.0 & 4.7/3 \\ & (196.0) & & & \\ \bottomrule \bottomrule \\ \end{tabular}\\ {\textsuperscript{a}Data for the first three epochs were acquired by C12, while the rest are from Table 2. \textsuperscript{b}Units of 10,000\,K. \textsuperscript{c}Units of AU (astronomical unit). The SEDs were modeled with a single-temperature blackbody. } \end{center} \end{table} \clearpage \vfill\eject \newpage
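As a back-of-the-envelope check on the variability-based mass limit quoted in the text, one can equate the light-crossing size of the X-ray-emitting region with the Schwarzschild radius. The arithmetic below is a sketch assuming a variability timescale $\Delta t \approx 500$\,s (Figure 2):

```latex
% Light-crossing size of the X-ray-emitting region for Delta t ~ 500 s:
\begin{equation*}
  R \;\lesssim\; c\,\Delta t
    \;\approx\; (3\times10^{10}~\mathrm{cm\,s^{-1}})(500~\mathrm{s})
    \;\approx\; 1.5\times10^{13}~\mathrm{cm}.
\end{equation*}
% Setting R equal to the Schwarzschild radius R_S = 2GM_BH/c^2 gives
\begin{equation*}
  M_{\rm BH} \;\lesssim\; \frac{c^{3}\,\Delta t}{2G}
    \;\approx\; 5\times10^{7}~{\rm M}_{\odot}.
\end{equation*}
```

The same reasoning, run in reverse, is why the late-time rapid variability argues for an emission site near the black hole rather than a distant forward shock.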
\section{Introduction: file preparation and submission} The \verb"iopart" \LaTeXe\ article class file is provided to help authors prepare articles for submission to IOP Publishing journals. This document gives advice on preparing your submission, and specific instructions on how to use \verb"iopart.cls" to follow this advice. You do not have to use \verb"iopart.cls"; articles prepared using any other common class and style files can also be submitted. It is not necessary to mimic the appearance of a published article. The advice on \LaTeX\ file preparation in this document applies to the journals listed in table~\ref{jlab1}. If your journal is not listed please go to the journal website via \verb"http://iopscience.iop.org/journals" for specific submission instructions. \begin{table} \caption{\label{jlab1}Journals to which this document applies, and macros for the abbreviated journal names in {\tt iopart.cls}. Macros for other journal titles are listed in appendix\,A.} \footnotesize \begin{tabular}{@{}llll} \br Short form of journal title&Macro name&Short form of journal title&Macro name\\ \mr 2D Mater.&\verb"\TDM"&Mater. Res. Express&\verb"\MRE"\\ Biofabrication&\verb"\BF"&Meas. Sci. Technol.$^c$&\verb"\MST"\\ Bioinspir. Biomim.&\verb"\BB"&Methods Appl. Fluoresc.&\verb"\MAF"\\ Biomed. Mater.&\verb"\BMM"&Modelling Simul. Mater. Sci. Eng.&\verb"\MSMSE"\\ Class. Quantum Grav.&\verb"\CQG"&Nucl. Fusion&\verb"\NF"\\ Comput. Sci. Disc.&\verb"\CSD"&New J. Phys.&\verb"\NJP"\\ Environ. Res. Lett.&\verb"\ERL"&Nonlinearity$^{a,b}$&\verb"\NL"\\ Eur. J. Phys.&\verb"\EJP"&Nanotechnology&\verb"\NT"\\ Inverse Problems&\verb"\IP"&Phys. Biol.$^c$&\verb"\PB"\\ J. Breath Res.&\verb"\JBR"&Phys. Educ.$^a$&\verb"\PED"\\ J. Geophys. Eng.$^d$&\verb"\JGE"&Physiol. Meas.$^{c,d,e}$&\verb"\PM"\\ J. Micromech. Microeng.&\verb"\JMM"&Phys. Med. Biol.$^{c,d,e}$&\verb"\PMB"\\ J. Neural Eng.$^c$&\verb"\JNE"&Plasma Phys. Control. Fusion&\verb"\PPCF"\\ J. Opt.&\verb"\JOPT"&Phys. Scr.&\verb"\PS"\\ J. 
Phys. A: Math. Theor.&\verb"\jpa"&Plasma Sources Sci. Technol.&\verb"\PSST"\\ J. Phys. B: At. Mol. Opt. Phys.&\verb"\jpb"&Rep. Prog. Phys.$^{e}$&\verb"\RPP"\\ J. Phys: Condens. Matter&\verb"\JPCM"&Semicond. Sci. Technol.&\verb"\SST"\\ J. Phys. D: Appl. Phys.&\verb"\JPD"&Smart Mater. Struct.&\verb"\SMS"\\ J. Phys. G: Nucl. Part. Phys.&\verb"\jpg"&Supercond. Sci. Technol.&\verb"\SUST"\\ J. Radiol. Prot.$^a$&\verb"\JRP"&Surf. Topogr.: Metrol. Prop.&\verb"\STMP"\\ Metrologia&\verb"\MET"&Transl. Mater. Res.&\verb"\TMR"\\ \br \end{tabular}\\ $^{a}$UK spelling is required; $^{b}$MSC classification numbers are required; $^{c}$titles of articles are required in journal references; $^{d}$Harvard-style references must be used (see section \ref{except}); $^{e}$final page numbers of articles are required in journal references. \end{table} \normalsize Any special submission requirements for the journals are indicated with footnotes in table~\ref{jlab1}. Journals which require references in a particular format will need special care if you are using BibTeX, and you might need to use a \verb".bst" file that gives slightly non-standard output in order to supply any extra information required. It is not necessary to give references in the exact style of references used in published articles, as long as all of the required information is present. Also note that there is an incompatibility between \verb"amsmath.sty" and \verb"iopart.cls" which cannot be completely worked around. If your article relies on commands in \verb"amsmath.sty" that are not available in \verb"iopart.cls", you may wish to consider using a different class file. Whatever journal you are submitting to, please look at recent published articles (preferably articles in your subject area) to familiarize yourself with the features of the journal. 
We do not demand that your \LaTeX\ file closely resembles a published article---a generic `preprint' appearance of the sort commonly seen on \verb"arXiv.org" is fine---but your submission should be presented in a way that makes it easy for the referees to form an opinion of whether it is suitable for the journal. The generic advice in this document---on what to include in an abstract, how best to present complicated mathematical expressions, and so on---applies whatever class file you are using. \subsection{What you will need to supply} Submissions to our journals are handled via the ScholarOne web-based submission system. When you submit a new article to us you need only submit a PDF of your article. When you submit a revised version, we ask you to submit the source files as well. Upon acceptance for publication we will use the source files to produce a proof of your article in the journal style. \subsubsection{Text.}When you send us the source files for a revised version of your submission, you should send us the \LaTeX\ source code of your paper with all figures read in by the source code (see section \ref{figinc}). Articles can be prepared using almost any version of \TeX\ or \LaTeX{}, not just \LaTeX\ with the class file \verb"iopart.cls". You may split your \LaTeX\ file into several parts, but please show which is the `master' \LaTeX\ file that reads in all of the other ones by naming it appropriately. The `master' \LaTeX\ file must read in all other \LaTeX\ and figure files from the current directory. {\it Do not read in files from a different directory, e.g. \verb"\includegraphics{/figures/figure1.eps}" or \verb"\include{../usr/home/smith/myfiles/macros.tex}"---we store submitted files all together in a single directory with no subdirectories}. 
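For concreteness, a minimal `master' file of the kind described above might look like the following sketch (the file and section names are illustrative, not prescribed):

```latex
% master.tex -- run LaTeX on this file; it reads in all other parts
% from the current directory, as required.
\documentclass[12pt]{iopart}
\begin{document}
\title{An example article}
\author{A N Author}
\maketitle
\input{introduction}   % introduction.tex in the same directory
\input{results}        % results.tex
\input{conclusions}    % conclusions.tex
\end{document}
```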
\begin{itemize} \item {\bf Using \LaTeX\ packages.} Most \LaTeXe\ packages can be used if they are available in common distributions of \LaTeXe; however, if it is essential to use a non-standard package then any extra files needed to process the article must also be supplied. Try to avoid using any packages that manipulate or change the standard \LaTeX\ fonts: published articles use fonts in the Times family, but we prefer that you use \LaTeX\ default Computer Modern fonts in your submission. The use of \LaTeX\ 2.09, and of plain \TeX\ and variants such as AMSTeX is acceptable, but a complete PDF of your submission should be supplied in these cases. \end{itemize} \subsubsection{Figures.} Figures should ideally be included in an article as encapsulated PostScript files (see section \ref{figinc}) or created using standard \LaTeX\ drawing commands. Please name all figure files using the guidelines in section \ref{fname}. We accept submissions that use pdf\TeX\ to include PDF or bitmap figures, but please ensure that you send us a PDF that uses PDF version 1.4 or lower (to avoid problems in the ScholarOne system). You can do this by putting \verb"\pdfminorversion=4" at the very start of your TeX file. \label{fig1}All figures should be included within the body of the text at an appropriate point or grouped together with their captions at the end of the article. A standard graphics inclusion package such as \verb"graphicx" should be used for figure inclusion, and the package should be declared in the usual way, for example with \verb"\usepackage{graphicx}", after the \verb"\documentclass" command. Authors should avoid using special effects generated by including verbatim PostScript code in the submitted \LaTeX\ file. Wherever possible, please try to use standard \LaTeX\ tools and packages. \subsubsection{References.\label{bibby}} You can produce your bibliography in the standard \LaTeX\ way using the \verb"\bibitem" command. 
Alternatively you can use BibTeX: our preferred \verb".bst" styles are: \begin{itemize} \item For the numerical (Vancouver) reference style we recommend that authors use \verb"unsrt.bst"; this does not quite follow the style of published articles in our journals but this is not a problem. Alternatively \verb"iopart-num.bst" created by Mark A Caprio produces a reference style that closely matches that in published articles. The file is available from \verb"http://ctan.org/tex-archive/biblio/bibtex/contrib/iopart-num/" . \item For alphabetical (Harvard) style references we recommend that authors use the \verb"harvard.sty" in conjunction with the \verb"jphysicsB.bst" BibTeX style file. These, and accompanying documentation, can be downloaded from \penalty-10000 \verb"http://www.ctan.org/tex-archive/macros/latex/contrib/harvard/". Note that the \verb"jphysicsB.bst" bibliography style does not include article titles in references to journal articles. To include the titles of journal articles you can use the style \verb"dcu.bst" which is included in the \verb"harvard.sty" package. The output differs a little from the final journal reference style, but all of the necessary information is present and the reference list will be formatted into journal house style as part of the production process if your article is accepted for publication. \end{itemize} \noindent Please make sure that you include your \verb".bib" bibliographic database file(s) and any \verb".bst" style file(s) you have used. \subsection{\label{copyright}Copyrighted material and ethical policy} If you wish to make use of previously published material for which you do not own the copyright then you must seek permission from the copyright holder, usually both the author and the publisher. It is your responsibility to obtain copyright permissions and this should be done prior to submitting your article. 
If you have obtained permission, please provide full details of the permission granted---for example, copies of the text of any e-mails or a copy of any letters you may have received. Figure captions must include an acknowledgment of the original source of the material even when permission to reuse has been obtained. Please read our ethical policy before writing your article. \subsection{Naming your files} \subsubsection{General.} Please name all your files, both figures and text, as follows: \begin{itemize} \item Use only characters from the set a to z, A to Z, 0 to 9 and underscore (\_). \item Do not use spaces or punctuation characters in file names. \item Do not use any accented characters such as \'a, \^e, \~n, \"o. \item Include an extension to indicate the file type (e.g., \verb".tex", \verb".eps", \verb".txt", etc). \item Use consistent upper and lower case in filenames and in your \LaTeX\ file. If your \LaTeX\ file contains the line \verb"\includegraphics{fig1.eps}" the figure file must be called \verb"fig1.eps" and not \verb"Fig1.eps" or \verb"fig1.EPS". If you are on a Unix system, please ensure that there are no pairs of figures whose names differ only in capitalization, such as \verb"fig_2a.eps" and \verb"fig_2A.eps", as Windows systems will be unable to keep the two files in the same directory. \end{itemize} When you submit your article files, they are manipulated and copied many times across multiple databases and file systems. Including non-standard characters in your filenames will cause problems when processing your article. \subsubsection{\label{fname}Naming your figure files.} In addition to the above points, please give each figure file a name which indicates the number of the figure it contains; for example, \verb"figure1.eps", \verb"figure2a.eps", etc. If the figure file contains a figure with multiple parts, for example figure 2(a) to 2(e), give it a name such as \verb"figure2a_2e.eps", and so forth. 
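Combining the figure-inclusion advice of section \ref{fig1} with the naming convention above, a multi-part figure might be coded as in the following sketch (the file name \verb"figure2a_2e.eps" is illustrative, for a figure with parts (a) to (e)):

```latex
\usepackage{graphicx}   % in the preamble, after \documentclass

% in the body of the article:
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{figure2a_2e.eps}
\end{center}
\caption{\label{fig2}Caption describing parts (a)--(e).}
\end{figure}
```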
\subsection{How to send your files} Please send your submission via the ScholarOne submission system. Go to the journal home page, and use the `Submit an article' link on the right-hand side. \section{Preparing your article} \subsection{Sample coding for the start of an article} \label{startsample} The code for the start of a title page of a typical paper in the \verb"iopart.cls" style might read: \small\begin{verbatim} \documentclass[12pt]{iopart} \begin{document} \title[The anomalous magnetic moment of the neutrino]{The anomalous magnetic moment of the neutrino and its relation to the solar neutrino problem} \author{P J Smith$^1$, T M Collins$^2$, R J Jones$^3$\footnote{Present address: Department of Physics, University of Bristol, Tyndalls Park Road, Bristol BS8 1TS, UK.} and Janet Williams$^3$} \address{$^1$ Mathematics Faculty, Open University, Milton Keynes MK7~6AA, UK} \address{$^2$ Department of Mathematics, Imperial College, Prince Consort Road, London SW7~2BZ, UK} \address{$^3$ Department of Computer Science, University College London, Gower Street, London WC1E~6BT, UK} \ead{williams@ucl.ac.uk} \begin{abstract} ... \end{abstract} \keywords{magnetic moment, solar neutrinos, astrophysics} \submitto{\jpg} \maketitle \end{verbatim} \normalsize At the start of the \LaTeX\ source code please include commented material to identify the journal, author, and (if you are sending a revised version or a resubmission) the reference number that the journal has given to the submission. The first non-commented line should be \verb"\documentclass[12pt]{iopart}" to load the preprint class file. The normal text will be in the Computer Modern 12pt font. It is possible to specify 10pt font size by passing the option \verb"[10pt]" to the class file. Although it is possible to choose a font other than Computer Modern by loading external packages, this is not recommended. The article text begins after \verb"\begin{document}". 
Authors of very long articles may find it convenient to separate their article into a series of \LaTeX\ files each containing one section, and each of which is called in turn by the primary file. The files for each section should be read in from the current directory; please name the primary file clearly so that we know to run \LaTeX\ on this file. Authors may use any common \LaTeX\ \verb".sty" files. Authors may also define their own macros and definitions either in the main article \LaTeX\ file or in a separate \verb".tex" or \verb".sty" file that is read in by the main file, provided they do not overwrite existing definitions. It is helpful to the production staff if complicated author-defined macros are explained in a \LaTeX\ comment. The article class \verb"iopart.cls" can be used with other package files such as those loading the AMS extension fonts \verb"msam" and \verb"msbm", which provide the blackboard bold alphabet and various extra maths symbols as well as symbols useful in figure captions. An extra style file \verb"iopams.sty" is provided to load these packages and provide extra definitions for bold Greek letters. \subsection{\label{dblcol}Double-column layout} The \verb"iopart.cls" class file produces single-column output by default, but a two-column layout can be obtained by using \verb"\documentclass[10pt]" at the start of the file and \verb"\ioptwocol" after the \verb"\maketitle" command. Two-column output will begin on a new page (unlike in published double-column articles, where the two-column material starts on the same page as the abstract). In general we prefer to receive submissions in single-column format even for journals published in double-column style; however, the \verb"\ioptwocol" option may be useful to test figure sizes and equation breaks for these journals. When setting material in two columns you can use the asterisked versions of \LaTeX\ commands such as \verb"\begin{figure*} ... 
\end{figure*}" to set figures and tables across two columns. If you have any problems or any queries about producing two-column output, please contact us at \verb"submissions@iop.org". \section{The title and abstract page} If you use \verb"iopart.cls", the code for setting the title page information is slightly different from the normal default in \LaTeX. If you are using a different class file, you do not need to mimic the appearance of an \verb"iopart.cls" title page, but please ensure that all of the necessary information is present. \subsection{Titles and article types} The title is set using the command \verb"\title{#1}", where \verb"#1" is the title of the article. The first letter of the title should be capitalized with the rest in lower case. The title appears in bold case, but mathematical expressions within the title may be left in light-face type. If the title is too long to use as a running head at the top of each page (apart from the first) a short form can be provided as an optional argument (in square brackets) before the full title, i.e.\ \verb"\title[Short title]{Full title}". For article types other than papers, \verb"iopart.cls" has a generic heading \verb"\article[Short title]{TYPE}{Full title}" and some specific definitions given in table~\ref{arttype}. In each case (apart from Letters to the Editor and Fast Track Communications) an optional argument can be used immediately after the control sequence name to specify the short title; where no short title is given, the full title will be used as the running head. Not every article type has its own macro---use \verb"\article" for any not listed. A full list of the types of articles published by a journal is given in the submission information available via the journal home page. The generic heading could be used for articles such as those presented at a conference or workshop, e.g. 
\small\begin{verbatim} \article[Short title]{Workshop on High-Energy Physics}{Title} \end{verbatim}\normalsize Footnotes to titles may be given by using \verb"\footnote{Text of footnote.}" immediately after the title. Acknowledgment of funding should be included in the acknowledgments section rather than in a footnote. \begin{table} \caption{\label{arttype}Types of article defined in the {\tt iopart.cls} class file.} \footnotesize\rm \begin{tabular*}{\textwidth}{@{}l*{15}{@{\extracolsep{0pt plus12pt}}l}} \br Command& Article type\\ \mr \verb"\title{#1}"&Paper (no surtitle on first page)\\ \verb"\ftc{#1}"&Fast Track Communication\\ \verb"\review{#1}"&Review\\ \verb"\topical{#1}"&Topical Review\\ \verb"\comment{#1}"&Comment\\ \verb"\note{#1}"&Note\\ \verb"\paper{#1}"&Paper (no surtitle on first page)\\ \verb"\prelim{#1}"&Preliminary Communication\\ \verb"\rapid{#1}"&Rapid Communication\\ \verb"\letter{#1}"&Letter to the Editor\\ \verb"\article{#1}{#2}"&Other articles\\\ & (use this for any other type of article; surtitle is whatever is entered as {\tt \#1})\\ \br \end{tabular*} \end{table} \subsection{Authors' names and addresses} For the authors' names type \verb"\author{#1}", where \verb"#1" is the list of all authors' names. Western-style names should be written as initials then family name, with a comma after all but the last two names, which are separated by `and'. Initials should {\it not} be followed by full stops. First (given) names may be used if desired. Names in Chinese, Japanese and Korean styles should be written as you want them to appear in the published article. Authors in all IOP Publishing journals have the option to include their names in Chinese, Japanese or Korean characters in addition to the English name: see appendix B for details. If the authors are at different addresses a superscripted number, e.g. $^1$, \verb"$^1$", should be used after each name to reference the author to his/her address. 
If an author has additional information to appear as a footnote, such as a permanent address, a normal \LaTeX\ footnote command should be given after the family name and address marker with this extra information. The authors' affiliations follow the list of authors. Each address is set by using \verb"\address{#1}" with the address as the single parameter in braces. If there is more than one address then the appropriate superscripted number, followed by a space, should come at the start of the address. E-mail addresses are added by inserting the command \verb"\ead{#1}" after the postal address(es) where \verb"#1" is the e-mail address. See section~\ref{startsample} for sample coding. For more than one e-mail address, please use the command \verb"\eads{\mailto{#1}, \mailto{#2}}" with \verb"\mailto" surrounding each e-mail address. Please ensure that, at the very least, you state the e-mail address of the corresponding author. \subsection{The abstract} The abstract follows the addresses and should give readers concise information about the content of the article and indicate the main results obtained and conclusions drawn. It should be self-contained---there should be no references to figures, tables, equations, bibliographic references etc. It should be enclosed between \verb"\begin{abstract}" and \verb"\end{abstract}" commands. The abstract should normally be restricted to a single paragraph of around 200 words. \subsection{Subject classification numbers} We no longer ask authors to supply Physics and Astronomy Classification System (PACS) classification numbers. For submissions to {\it Nonlinearity}\/ we ask that you should supply Mathematics Subject Classification (MSC) codes. MSC numbers are included after the abstract using \verb"\ams{#1}". The command \verb"\submitto{#1}" can be inserted, where \verb"#1" is the journal name written in full or the appropriate control sequence as given in table~\ref{jlab1}. 
This command is not essential to the running of the file and can be omitted. \subsection{Keywords} Keywords are required for all submissions. Authors should supply a minimum of three (maximum seven) keywords appropriate to their article as a new paragraph starting \verb"\noindent{\it Keywords\/}:" after the end of the abstract. \subsection{Making a separate title page} To keep the header material on a separate page from the body of the text insert \verb"\maketitle" (or \verb"\newpage") before the start of the text. If \verb"\maketitle" is not included the text of the article will start immediately after the abstract. \section{The text} \subsection{Sections, subsections and subsubsections} The text of articles may be divided into sections, subsections and, where necessary, subsubsections. To start a new section, end the previous paragraph and then include \verb"\section" followed by the section heading within braces. Numbering of sections is done {\it automatically} in the headings: sections will be numbered 1, 2, 3, etc, subsections will be numbered 2.1, 2.2, 3.1, etc, and subsubsections will be numbered 2.3.1, 2.3.2, etc. Cross references to other sections in the text should, where possible, be made using labels (see section~\ref{xrefs}) but can also be made manually. See section~\ref{eqnum} for information on the numbering of displayed equations. Subsections and subsubsections are similar to sections but the commands are \verb"\subsection" and \verb"\subsubsection" respectively. Sections have a bold heading, subsections an italic heading and subsubsections an italic heading with the text following on directly. \small\begin{verbatim} \section{This is the section title} \subsection{This is the subsection title} \end{verbatim}\normalsize The first section is normally an introduction, which should state clearly the object of the work, its scope and the main advances reported, with brief references to relevant results by other workers. 
In long papers it is helpful to indicate the way in which the paper is arranged and the results presented. Footnotes should be avoided whenever possible and can often be included in the text as phrases or sentences in parentheses. If required, they should be used only for brief notes that do not fit conveniently into the text. The use of displayed mathematics in footnotes should be avoided wherever possible and no equations within a footnote should be numbered. The standard \LaTeX\ macro \verb"\footnote" should be used. Note that in \verb"iopart.cls" the \verb"\footnote" command produces footnotes indexed by a variety of different symbols, whereas in published articles we use numbered footnotes. This is not a problem: we will convert symbol-indexed footnotes to numbered ones during the production process. \subsection{Acknowledgments} Authors wishing to acknowledge assistance or encouragement from colleagues, special work by technical staff or financial support from organizations should do so in an unnumbered `Acknowledgments' section immediately following the last numbered section of the paper. In \verb"iopart.cls" the command \verb"\ack" sets the acknowledgments heading as an unnumbered section. Please ensure that you include all of the sources of funding and the funding contract reference numbers that you are contractually obliged to acknowledge. We often receive requests to add such information very late in the production process, or even after the article is published, and we cannot always do this. Please collect all of the necessary information from your co-authors and sponsors as early as possible. \subsection{Appendices} Technical detail that it is necessary to include, but that interrupts the flow of the article, may be consigned to an appendix. Any appendices should be included at the end of the main text of the paper, after the acknowledgments section (if any) but before the reference list. 
\section{Introduction} In the presence of a magnetic field, the thermoelectric response acquires an off-diagonal component. This is the Nernst coefficient, which is particularly large in dilute metals with highly mobile carriers~\cite{Behnia2016}. An emblematic case is elemental bismuth. The peak Nernst signal in (large and dislocation-free crystals of) bismuth approaches 1 V/K at liquid-helium temperature and a magnetic field of 1 T~\cite{Galev1981}. Expressed in the natural unit for thermoelectric response ($k_B/e = 8.6 \times 10^{-5}$ V/K), this is a very large number. The Nernst response is often much weaker than this natural unit; however, even when it is as small as a few nV/K, it can be reliably measured. The focus of the present paper is the Nernst signal generated by vortex motion in a type II superconductor. This phenomenon was discovered in the 1960s~\cite{LOWELL1967,solomon1967,HUEBENER1967947} and became the subject of several contemporaneous theoretical investigations \cite{Stephen1966,Clem1968,Maki1968}. A few decades later, it was also observed in superconducting cuprates \cite{Huebener_1995}, which host a remarkably broad vortex liquid state \cite{Blatter1994}. Despite this long history, a recent experimental study by Rischau \textit{et al.}~\cite{Rischau2021} demonstrated that the vortex Nernst signal cannot be quantitatively understood with the available theories. This is in surprising contrast with the case of the Nernst signal generated by superconducting fluctuations above the critical temperature. In the latter case, the theory originally formulated by Ussishkin, Sondhi and Huse \cite{Ussishkin} has proven to be a great success.
It was confirmed and extended by other theorists~\cite{Serbyn2009,Michaeli2009,Levchenko2011} and was experimentally verified in both conventional \cite{Pourret2006,Pourret2007,Spathis2008} and cuprate \cite{kokanovic2009,Chang2012,Tafti2014} superconductors (for reviews, see \cite{Pourret2009,Behnia2016,Cyr2018}). Ironically, the exploration of the Nernst effect in this century was initially driven by attributing to vortex-like excitations an unexplained Nernst signal above the critical temperature in underdoped cuprates \cite{Xu2000}. Thanks to the numerous experiments that followed this report, we now have a reasonable understanding of the amplitude of the quasi-particle Nernst signal in metals and of the fluctuating Nernst signal in the normal state of superconductors \cite{Behnia2016}. The rough amplitude of the anomalous Nernst signal of magnets can also be estimated from their anomalous Hall conductivity \cite{Xu2020}. In contrast, we lack an established quantitative account of how mobile vortices give rise to a Nernst response. The present paper aims to summarize what is known and what is yet to be understood about the measured vortex liquid Nernst signal in various superconductors. It identifies a link between an upper bound to the amplitude of the Nernst signal and a lower bound to the viscosity of the vortex liquid, and highlights a distinction between two concepts of the entropy of a superconducting vortex: the one statically stored in its core and the one dynamically carried by the mobile flux line. \section{The amplitude of S$_{xy}$ and its surprising invariability among superconductors} \begin{figure*} \centering \includegraphics[width=16cm]{Fig-Sxy-superconductors.pdf} \caption{\textbf{The amplitude of the Nernst signal in various superconductors:} Temperature dependence of the largest Nernst signal in several superconductors.
Despite three orders of magnitude variation in their critical temperatures, these superconductors display a roughly comparable vortex Nernst signal, which ranges between 2 and 11 $\mu$V/K. In contrast, the quasi-particle Nernst signal in metals can be many orders of magnitude larger ($\approx$ mV/K) or many orders of magnitude smaller ($\approx$ nV/K), according to the mobility and the Fermi energy of the metal in question.} \label{fig:var-sup} \end{figure*} Experimentally, the Nernst effect is measured by applying a thermal gradient along the crystal (let us call it the x-axis) and measuring an electric field in the lateral direction (the y-axis) in the presence of a magnetic field perpendicular to both (along the z-axis). The Nernst signal is defined as: \begin{equation} S_{xy} =\frac{E_y}{\nabla_x T} \label{Sxy} \end{equation} A striking fact about the amplitude of S$_{xy}$ in superconducting crystals and thin films measured over the past half-century is that it peaks at a value in the range of $\mu$V/K. This feature, first highlighted in ref. \cite{Rischau2021}, is illustrated in Fig.~\ref{fig:var-sup}. Despite the fact that the superconducting critical temperature is $\approx 0.3$~K in strontium titanate and $\approx 100$~K in Bi2212, their Nernst peaks, occurring at widely different temperatures and magnetic fields, are comparable in magnitude. This is to be contrasted with the magnitude of S$_{xy}$ in metals, where it can vary by many orders of magnitude \cite{Behnia2009,Behnia2016}. The S$_{xy}$ peak in bismuth, a dilute semi-metal hosting high-mobility carriers, is five to eight orders of magnitude larger \cite{Sugihara,Mangez1976,Behnia2007,Galev1981} than in dense metallic perovskites \cite{Xu2008,Jin2021}. Let us examine a possible link between this observation and a general property of liquids.
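As a back-of-the-envelope illustration (a sketch of ours, using standard constants and the 2--11~$\mu$V/K range quoted in the caption of Fig.~\ref{fig:var-sup}), the peak vortex Nernst signals amount to a few percent of the natural thermoelectric unit $k_B/e$:

```python
# Illustrative check: the natural thermoelectric unit k_B/e, and the
# quoted 2-11 microvolt/K vortex Nernst peaks expressed in that unit.
k_B = 1.380649e-23   # Boltzmann constant, J/K
e = 1.602176634e-19  # elementary charge, C

natural_unit = k_B / e             # ~8.6e-5 V/K, i.e. ~86 microvolts/K
print(natural_unit)

for S_xy in (2e-6, 11e-6):         # range of vortex Nernst peaks, V/K
    print(S_xy / natural_unit)     # a few percent of k_B/e
```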
\subsection{A lower bound to the viscosity of liquids} The shear viscosity of a fluid, $\eta$, quantifies the tangential stress, $\sigma_{xy}$, required to generate a velocity gradient in the fluid \cite{falkovich2011fluid}: \begin{equation} \sigma_{xy} =\eta \frac{dv}{dz} \label{eta} \end{equation} $\eta$ is the dynamic viscosity, expressed in Pa\,s (or, in CGS units, Poise; 1~Pa\,s $=$ 10~P). Divided by the particle mass, $m$, and the particle density, $n$, of the fluid, it yields the kinematic viscosity, $\nu=\frac{\eta}{m n}$, which is equivalent to a momentum diffusivity. Viscosity is always positive. It also happens that its amplitude cannot be arbitrarily small \cite{Purcell,Trachenko2021}. Indeed, the magnitude of the kinematic viscosity is bounded by fundamental constants \cite{Trachenko2021}. Trachenko and Brazhkin have demonstrated that, while the amplitude of the viscosity of a fluid depends on the microscopic interactions among its constituent particles, it always exceeds a minimal bound set by $m$ and the electron mass, $m_e$ \cite{Trachenko2020}: \begin{equation} \nu_{min} = \frac{1}{4 \pi}\frac{\hbar}{\sqrt{m_e m}} \label{nu} \end{equation} The temperature dependence of the viscosity of common fluids displays a minimum, which approaches (while still exceeding) $\nu_{min}$. The minimum arises from the competition between a liquid-like regime (where warming diminishes viscosity by enabling particles to hop across energy barriers) and a gas-like regime (where warming enhances viscosity by increasing the thermal velocity of the particles) \cite{Trachenko2020}. In supercritical fluids, this minimum in momentum diffusivity is accompanied by a similar minimum in thermal diffusivity \cite{Trachenko2021c}. This minimum has attracted recent attention in the context of the debate on a possible bound to the ratio of viscosity to entropy density in fluids \cite{Cremonini2011}. This gives us an opportunity to compare vortex liquids with common liquids.
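A minimal numerical sketch of the bound in Eq.~\ref{nu} (an illustration of ours; the water molecule is taken as $m = 18$ atomic mass units):

```python
import math

# Evaluating the Trachenko-Brazhkin lower bound on kinematic viscosity,
# nu_min = hbar / (4*pi*sqrt(m_e*m)), for a water molecule (m = 18 u).
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m_e = 9.1093837e-31     # electron mass, kg
u = 1.66053907e-27      # atomic mass unit, kg

def nu_min(m):
    """Lower bound on the kinematic viscosity (m^2/s) for particle mass m (kg)."""
    return hbar / (4 * math.pi * math.sqrt(m_e * m))

print(nu_min(18 * u))   # ~5e-8 m^2/s
```

The measured kinematic-viscosity minimum of a real fluid approaches this bound from above, as discussed in the text.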
\subsection{The viscosity-to-entropy-density ratio} Dynamic viscosity, $\eta$, can be expressed in units of the Planck constant per unit volume, and entropy density, $s$, in units of the Boltzmann constant per unit volume. Therefore, their ratio, $\frac{\eta}{s}$, can be expressed in units of $\frac{\hbar}{k_B}$ for any fluid. It has been noticed that in common fluids this ratio approaches unity but remains above it (see Fig.~\ref{fig:eta-min}(a)) \cite{Cremonini2012}. This observation was made in the context of the debate following the discovery of a theoretical holographic bound to $\frac{\eta}{s}$ \cite{Policastro2001,Kovtun2005}. Eq.~\ref{nu} defines an explicit minimum for the kinematic viscosity. One can see that it also implies a minimum in $\frac{\eta}{s}$. Within a numerical factor of the order of unity, the entropy density of a classical liquid is of the order of $k_B$ times its number density, $s \approx n k_B$. The dynamic viscosity is the kinematic viscosity times the mass density of the liquid, $\eta = \nu m n$, so that $\frac{\eta}{s} \approx \frac{\nu m}{k_B}$; since the particle mass $m$ cannot be smaller than the proton mass, $m_p$, Eq.~\ref{nu} leads to: \begin{equation} \frac{\eta}{s} \geq \frac{1}{4 \pi}\sqrt{\frac{m_p}{m_e}}\frac{\hbar}{k_B} \label{etas} \end{equation} Here, $\frac{m_p}{m_e} \simeq 1836$ is the proton-to-electron mass ratio, so that the bound is numerically $\approx 3.4\,\frac{\hbar}{k_B}$. Thus, this inequality accounts for what is seen in Fig.~\ref{fig:eta-min}(a). \subsection{The peak in S$_{xy}$ as a minimum in $\frac{\eta}{s}$} Let us come back to the vortex liquid. The ratio of the quantum of magnetic flux, $\Phi_0=\frac{h}{2e}$, to the Nernst signal, $S_{xy}$, can be expressed in units of $\frac{\hbar}{k_B}$. Let us show that it can be identified with the ratio of the viscosity to the entropy density of the vortex liquid.
\begin{figure*} \centering \includegraphics[width=14cm]{Fig-etaS.pdf} \caption{\textbf{The viscosity-to-entropy-density ratio:} a) The ratio of the dynamic viscosity, $\eta$, to the entropy density, $s$, in two fluids (H$_2$O and helium) at two different pressures (25 and 100 MPa). The data were extracted from the National Institute of Standards and Technology (NIST) reference database \cite{nist}. Note the presence of a minimum, which is larger than, but of the order of, $\hbar/k_B$. b) The ratio of the quantum of magnetic flux ($\Phi_0=\frac{h}{2e}$) to the Nernst signal ($S_{xy}=\frac{E_y}{\nabla_x T}$) in several superconductors. This quantity is equivalent to the viscosity-to-entropy-density ratio of a vortex liquid (see text). Expressed in units of $\hbar/k_B$, it has a minimum which is an order of magnitude larger than what is seen in classical fluids.} \label{fig:eta-min} \end{figure*} The vortex Nernst signal arises from the drift velocity of the magnetic flux lines, $v_x$. Taking the number of vortices per unit area to be $n_V$, the Josephson equation yields the electric field: \begin{equation} E_y= n_V \Phi_0 v_x \label{Josephson} \end{equation} This drift velocity along the x-axis is caused by the thermal force exerted by the thermal gradient, which is balanced by a viscous force: \begin{equation} \nabla_x T s = n_V \eta v_x \label{n_vs} \end{equation} Comparing this equation with Eq.~\ref{eta}, note that the assumption here is that $\eta$ is a genuine shear viscosity and that the drift velocity of the vortices can be assimilated to a gradient in local fluid velocity generated by thermal stress. Combining these two equations leads to the following expression for the Nernst signal: \begin{equation} S_{xy}\equiv\frac{E_y}{\nabla_x T}= \Phi_0\frac{s}{\eta} \label{N eta s} \end{equation} Therefore, the ratio of the Nernst signal to the quantum of magnetic flux in a given vortex liquid equals the inverse of the ratio of the viscosity to the entropy density.
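To get a feel for the numbers (a sketch; $S_{xy} = 5~\mu$V/K is an assumed value, representative of the mid-range of the peaks in Fig.~\ref{fig:var-sup}):

```python
# Expressing Phi_0/S_xy in units of hbar/k_B for a representative
# vortex Nernst peak (assumed value, mid-range of the observed 2-11 uV/K).
Phi_0 = 2.067833848e-15  # magnetic flux quantum h/2e, Wb
hbar = 1.054571817e-34   # reduced Planck constant, J*s
k_B = 1.380649e-23       # Boltzmann constant, J/K

S_xy = 5e-6              # assumed peak Nernst signal, V/K

eta_over_s = (Phi_0 / S_xy) / (hbar / k_B)
print(eta_over_s)        # a few tens of hbar/k_B
```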
Plotting $\frac{\Phi_0}{S_{xy}}$ for a few of the superconductors shown in Fig. \ref{fig:var-sup}, we can see that the viscosity-to-entropy-density ratio in different vortex liquids presents a minimum of comparable amplitude, which, in units of $\hbar/k_B$, exceeds what is seen in common liquids by one order of magnitude (Fig. \ref{fig:eta-min}(b)). Note the difference in the origin of the minimum in the two cases: in common liquids, the non-monotonic temperature dependence is due to the viscosity; in vortex liquids, it is mostly due to the difference in entropy between the two components of the fluid. Nevertheless, the comparison suggests that the observed similarity of the amplitude of S$_{xy}$ across superconductors may be a consequence of a bound, set by fundamental constants, to the ratio of viscosity to entropy density in any liquid of superconducting vortices. \section{Extracting the mobile entropy of vortices from $\alpha_{xy}$} \begin{figure*} \centering \includegraphics[width=16cm]{Fig-cuprates-v2.pdf} \caption{\textbf{Nernst effect and resistivity in cuprates:} Published figures of the Nernst and resistivity data in three cuprate superconductors. Top: Nd$_{2-x}$Ce$_x$CuO$_4$ (NCCO) \cite{Gollnik1998}; bottom left: YBa$_2$Cu$_3$O$_{7-\delta}$ (YBCO) \cite{Li1994}; bottom right: La$_{1.92}$Sr$_{0.08}$CuO$_4$ (LSCO) \cite{Capan2002}. Reprinted with permission; Copyright (1994, 1998, 2002) by the American Physical Society. In the case of NCCO, the negative sign of the Nernst coefficient is due to the choice of a different sign convention.} \label{fig:cuprates} \end{figure*} While the Nernst signal, S$_{xy}$, is what is immediately measured in a Nernst experiment, more extensive attention has been given to $\alpha_{xy}$, sometimes called the Nernst conductivity.
It is the off-diagonal component of the thermoelectric conductivity tensor defined by $\overline{\rho}\,\overline{\alpha}= \overline{S}$. In general, one has: \begin{equation} S_{xy}=\rho_{xx}\alpha_{xy}+\rho_{xy}\alpha_{xx} \end{equation} However, in our context, the second term can be safely neglected in most cases. Then $\alpha_{xy}$ can be extracted from the measured S$_{xy}$ and the measured $\rho_{xx}$. Note that in three dimensions, where resistivity is expressed in $\Omega\,$m, the unit of $\alpha_{xy}$ is $\mathrm{A\,K^{-1}\,m^{-1}}$. But the length scale disappears in two dimensions, and $\alpha_{xy}$ can be expressed in a natural unit: $\frac{ek_B}{\hbar}\approx 21$~nA/K \cite{Pourret2006}. In the flux-flow regime, the electrical resistivity is set by the balance between the Lorentz force and the viscous force. With the viscosity defined as in Eq.~\ref{n_vs}, this leads to: \begin{equation} \rho_{xx}=\Phi_0^2 \frac{n_V}{\eta} \label{fFR} \end{equation} By assuming that the viscous force on the vortices is the same in a flux-flow experiment and in a Nernst experiment, one can link the amplitude of $\alpha_{xy}$ to the mobile entropy of a vortex per unit length, $S_d$ \cite{Huebener1979}: \begin{equation} \alpha_{xy}= \frac{S_{xy}}{\rho_{xx}}=\frac{S_d}{\Phi_0} \end{equation} In three dimensions, $S_d$ is expressed in units of $\mathrm{J\,K^{-1}\,m^{-1}}$. Multiplying this quantity by a length leads to $S_d^{sheet}$, a quantity which can be expressed in units of the Boltzmann constant. \subsection{Experimentally observed $\alpha_{xy}$ in superconductors} Rischau \textit{et al.} observed that the peak $\alpha_{xy}$, extracted by dividing the maximum $S_{xy}$ in the vortex state by the resistivity at the same temperature and magnetic field, reveals a general, and yet to be understood, trend. Here is a brief review of this point.
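Before turning to specific compounds, the natural unit quoted above can be checked directly (a trivial numerical aside using standard constants):

```python
# Check of the natural two-dimensional unit of alpha_xy, e*k_B/hbar.
e = 1.602176634e-19     # elementary charge, C
k_B = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s

unit = e * k_B / hbar   # A/K
print(unit * 1e9)       # ~21 nA/K
```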
\textbf{Cuprates} - Numerous studies have been devoted to quantifying either the Ettingshausen~\cite{Palstra1990} or the Nernst~\cite{Li1994,Gollnik1998,Wang2001,Wang2002,Capan2002,Wang2006,Rullier2006,Balci2003} effect in the vortex state of the cuprates. Fig. \ref{fig:cuprates} reproduces the data from three different studies, where both the Nernst signal and the resistivity were reported. As seen in Table \ref{Tab1}, which summarizes the situation for four different cuprates, if one takes the c-axis lattice parameter to pass from $S_d$ to $S_d^{sheet}$, one finds that its magnitude varies between 0.7~$k_B$ in LSCO and 4~$k_B$ in Bi2212. In other words, $S_d^{sheet}$ is of the order of the Boltzmann constant, despite the fact that, in each CuO$_2$ plane, thousands of quasi-particles are associated with the normal core of a vortex. Note that the relatively large value of $S_d^{sheet}$ in Bi2212 is a consequence of its relatively longer c-axis parameter. Instead of the latter, one may take the distance between copper-oxygen planes (of the order of 0.6 to 0.8 nm) or the superconducting coherence length (between 2 and 7 nm) as the third-dimension distance. Such choices will lead to different magnitudes of $S_d^{sheet}$, which will remain comparable to $k_B$ in order of magnitude. Nevertheless, in all these cases, the difference between compounds appears to be larger than the experimental uncertainty (for a recent discussion of the cuprate data, following ref. \cite{Rischau2021}, see \cite{HUEBENER20211353975}).
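The $S_d^{sheet}$ column of Table \ref{Tab1} can be reproduced from the other listed quantities via $S_d^{sheet} = (N^{peak}/\rho^{peak})\,\Phi_0\,c$ (a sketch; the input numbers are those of the table, the rounding is ours):

```python
# Reproducing the entropy per vortex per sheet from Table 1:
# S_d_sheet = (N_peak / rho_peak) * Phi_0 * c.
Phi_0 = 2.067833848e-15  # magnetic flux quantum, Wb

# (N_peak [V/K], rho_peak [Ohm*m], c [m]), values as listed in Table 1
cuprates = {
    "YBCO":   (2.6e-6, 50e-8,  1.19e-9),
    "Bi2212": (4.1e-6, 50e-8,  3.1e-9),
    "NCCO":   (1.6e-6, 20e-8,  1.2e-9),
    "LSCO":   (9.1e-6, 260e-8, 1.2e-9),
}

S_sheet = {name: (N / rho) * Phi_0 * c for name, (N, rho, c) in cuprates.items()}
for name, value in S_sheet.items():
    print(name, value)   # ~1e-23 J/K in each case, i.e. of order k_B
```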
\begin{table} \begin{tabular}{|cccccc|} \hline Compound & $T_{c}$ & N$^{peak}$& $\rho^{peak}$ & c & S$_{d}^{sheet}$ \\ & [K] & [$\mu$V/K] & [$\mu \Omega$ cm] & [nm] & [10$^{-23}$J/K] \\ \hline YBa$_2$Cu$_3$O$_{7-\delta}$ (12 T)\cite{Li1994} & 92 & 2.6 & 50 & 1.19 & 1.3 \\ Bi$_2$Sr$_2$CaCu$_2$O$_{8+x}$ (12 T) \cite{Li1994}& 95 & 4.1 & 50 & 3.1 & 5.1\\ Nd$_{2-x}$Ce$_x$CuO$_4$ (1 T) \cite{Gollnik1998}& 24 & 1.6 & 20 & 1.2 & 1.9 \\ La$_{1.92}$Sr$_{0.08}$CuO$_4$ (12 T) \cite{Capan2002}& 29 & 9.1 & 260 & 1.2 & 0.88 \\ \hline \end{tabular} \caption{\textbf{Extracting the vortex entropy per sheet in cuprates.} The maximum Nernst signal, the resistivity at the temperature and magnetic field of the Nernst peak, the c-axis lattice parameter, and the extracted entropy per vortex per sheet in four cuprate superconductors.} \label{Tab1} \end{table} \begin{figure*} \centering \includegraphics[width=16cm]{Fig-3glos.pdf} \caption{\textbf{Nernst effect and resistivity in three superconductors:} Temperature dependence of the resistivity and the Nernst signal in an organic superconductor \cite{Logvenov1996}, in an iron-based superconductor \cite{Pourret2011} and in strontium titanate \cite{Rischau2021}. In all cases, the Nernst signal peaks in the vortex liquid state at a fraction of the critical temperature and a fraction of the zero-temperature upper critical field. These data have been used for the values listed in table \ref{Tab2}.} \label{fig:three-super} \end{figure*} \textbf{Other uncommon superconductors} - The Nernst effect in the $\kappa$-(BEDT-TTF)$_2$X family of organic superconductors was studied by two different groups~\cite{Logvenov1996,Nam2007}. Logvenov \textit{et al.}~\cite{Logvenov1996} also reported the resistivity of their samples.
In the iron-based superconductor FeTe$_{0.6}$Se$_{0.4}$, a Nernst peak in the vortex liquid state was detected by Pourret \textit{et al.} \cite{Pourret2011}, who also reported the concomitant flux-flow resistivity. Finally, the vortex Nernst signal and the concomitant resistivity were reported in the case of superconducting strontium titanate by Rischau \textit{et al.} \cite{Rischau2021}. These three data sets are reproduced in Fig.~\ref{fig:three-super} and summarized in table \ref{Tab2}. Here also, one may contest the choice of the lattice parameter as the relevant distance to pass from 3D to 2D. Note that the c-axis superconducting coherence length is more than one order of magnitude longer in cubic strontium titanate than in the cuprates. Therefore, the similarity would disappear if one took it as the relevant perpendicular distance. \begin{table} \begin{tabular}{|cccccc|} \hline Compound & $T_{c}$ & N$^{peak}$& $\rho^{peak}$ & c & S$_{d}^{sheet}$ \\ & [K] & [$\mu$V/K] & [$\mu \Omega$ cm] & [nm] & [10$^{-23}$J/K] \\ \hline $\kappa$-(ET)$_2$Cu[N(CN)$_2$]Br (2 T)\cite{Logvenov1996} & 11 & 6.1 & 380 & 2.9 & 0.96\\ FeTe$_{0.6}$Se$_{0.4}$ (24 T) \cite{Pourret2011}& 14 & 4 & 48 & 0.58 & 0.96\\ SrTi$_{1-x}$Nb$_x$O$_3$ (0.06 T) \cite{Rischau2021}& 0.35 & 11 & 100 & 0.39 & 0.89 \\ \hline \end{tabular} \caption{\textbf{Extracting the vortex entropy per sheet in three other superconductors.} The maximum Nernst signal, the resistivity at the temperature and magnetic field of the Nernst peak, the c-axis lattice parameter, and the extracted entropy per vortex per sheet in the three superconductors.} \label{Tab2} \end{table} \textbf{Historical Nernst data on superconducting niobium and its alloys} - We saw that in many superconductors explored during the last few decades, the amplitude of $\alpha_{xy}$ leads to an extracted entropy per vortex per sheet of the order of $k_B$.
However, as one can see in tables \ref{Tab1} and \ref{Tab2}, the resistivity of the normal state in these superconductors is relatively large. One may wonder whether the observation holds for superconductors with a higher conductivity. Most historical explorations of the Nernst effect did not explicitly report the measured amplitude of S$_{xy}$. They focused on the measured transverse voltage and often extracted $S_d$. In the case of niobium, Huebener and Seher \cite{Huebener1969} studied high-purity foils and found a Nernst signal as large as S$_{xy}\sim \mu$V/K in samples with a residual resistivity as low as 20 n$\Omega\,$cm. This would imply an entropy per vortex per sheet exceeding $k_B$ by orders of magnitude. In another study, de Lange and Otter \cite{deLange1975} quantified the magnitude of the vortex transport entropy, $S_d$, in a niobium alloy (Nb$_{80}$Mo$_{20}$) and found that it peaks at $S_d\simeq 10^{-11}$~$\mathrm{J\,K^{-1}\,m^{-1}}$. Combined with a lattice parameter of 0.3 nm, this yields a sheet entropy, S$_{d}^{sheet}$, exceeding 200 $k_B$. Thus, niobium differs from the superconductors listed in the two tables. This suggests that S$_{d}^{sheet}\approx k_B$ is a lower bound. On the other hand, in order of magnitude, the amplitude of S$_{xy}$ in this low-resistivity superconductor is comparable to those listed above. \subsection{Theory of the vortex transport entropy} As mentioned in the introduction, the vortex Nernst effect became a subject of theoretical investigation soon after its discovery \cite{Stephen1966,Clem1968,Maki1968}. In 2010, Sergeev and co-workers~\cite{Sergeev_2010}, reviewing these earlier theories, argued that magnetization currents do not transfer thermal energy and that, therefore, the vortex transport entropy is much smaller than previously believed.
Sergeev \textit{et al.}~\cite{Sergeev_2010} contested the validity of the following expression for the vortex transport entropy derived by Stephen~\cite{Stephen1966}: \begin{equation} S^{EM}_{d} = -\frac{\phi_0}{4\pi}\frac{\partial H_{c1}}{\partial T} \label{s1} \end{equation} This expression assumes that the entropy of the vortex is set by the temperature derivative of the energy cost of introducing a vortex at the lower critical field, $H_{c1}$. But, according to Sergeev \textit{et al.}~\cite{Sergeev_2010}, such an assumption implies that supercurrents transport entropy. They derived the following expression for the vortex transport entropy: \begin{equation} S^{core}_{d} \simeq -\pi \xi^2 \frac{\partial }{\partial T}\frac{H_c^2}{8\pi} \label{s2} \end{equation} This theoretical expression for S$_d$ fails to explain the case of a BCS superconductor with a vortex liquid regime and well-established material-dependent parameters. Indeed, SrTi$_{1-x}$Nb$_x$O$_3$ at optimal doping (x=0.01) is such a superconductor. It is an s-wave superconductor \cite{Lin2014_multiple,Lin2015} in which both the lower and the upper critical fields have been experimentally measured~\cite{Collignon2017} ($H_{c1}$(0)= 4.8 Oe; $H_{c2}$(0)= 240). Inserting these numbers in Eq. \ref{s2}, one finds $S^{core}_d \approx 5.2 \times 10^{-12}$ J/(K m). However, the experimental $S_{d}$ is $\approx 2.3 \times 10^{-14}$ J/(K m) (equal to the ratio of S$_{d}^{sheet}$ to c in Table \ref{Tab2}). Thus, there is a discrepancy of more than two orders of magnitude between theory and experiment. Compared to Eq. \ref{s1}, Eq. \ref{s2} leads to a downward revision of the amount of entropy carried by a vortex \cite{Sergeev_2010}. Nevertheless, what it yields is still more than one order of magnitude larger than what the experiment finds. This discrepancy, together with the empirical observation that S$_{d}^{sheet}\approx k_B$, motivates us to explore other reasons for a theoretical overestimation of the vortex transport entropy.
This is the subject of the next section \footnote{For alternative proposals following the experimental observation reported in ref. \cite{Rischau2021}, see \cite{Segeev2021} and \cite{diamantini2022}.}. \section{Distinguishing between vortex entropy at rest and in motion} The confrontation between the experimental Nernst data and the theoretical expectation (Eq. \ref{s2}) follows a chain of reasoning which contains a contestable assumption: that the entropy stocked in the core remains intact when transported by a mobile vortex. In general, it is true that the magnitude of $\alpha_{xy}$ quantifies the ratio of the entropy to the magnetic flux of a mobile carrier \cite{Behnia2016} (See also \cite{Bergman2010}). When the two opposing forces exerted on the carrier (the thermal force on its entropy and the Lorentz force on its magnetic flux) cancel each other, there is a finite $\alpha_{xy}$. It represents the ratio of a charge density flow to a perpendicular thermal gradient. This picture gives an account of the expressions for $\alpha_{xy}$ generated by electronic quasi-particles or fluctuating Cooper pairs, given the amount of entropy or magnetic flux they carry with themselves \cite{Behnia2016}. Now, a superconducting vortex is bound to a magnetic flux and its core has an excess of entropy. Therefore, one would naively expect it to generate a finite $\alpha_{xy}$ corresponding to the ratio of the entropy inside the core to the quantum of magnetic flux. However, this overlooks the fact that in this mesoscopic entity, the entropy and the magnetic flux are not necessarily bound to each other. \begin{figure*} \centering \includegraphics[width=16cm]{Fig-vortex-movement_v2.pdf} \caption{\textbf{Vortex core and normal fluid in the presence of a thermal gradient:} a) A Cooper pair in the immediate vicinity of a vortex core breaks up on the hot side of the vortex. b) As the vortex drifts under the influence of a thermal gradient, quasi-particles inside the core mix with those outside.
c) When the vortex moves a distance of the order of its core radius, it can be totally stripped of its stocked entropy as a consequence of the exchange of inside and outside quasi-particles.} \label{fig:core} \end{figure*} Fig.\ref{fig:core}a is a sketch of a superconducting vortex in the presence of a thermal gradient. Around the vortex core, Cooper pairs whirl around the magnetic field with a velocity which increases as the core is approached. The core begins where this velocity becomes unsustainable and the pairs break up. In the presence of a thermal gradient, because of the temperature dependence of the superconducting gap, depairing is favored on the hot side. A vortex drifting from the hot side to the cold side of the sample thus has a leaking tail. There is a steady transformation of peripheral Cooper pairs into core quasi-particles. Moreover, quasi-particles are not restricted to the normal core. At finite temperature, a fraction of the total electronic density outside the vortex core is the normal fluid, which also diffuses heat in response to the thermal gradient. The failure of Eq.\ref{s2} can be traced to the fact that, because of the slow drift of the vortices, the mobile magnetic flux is stripped of its entropy over a distance of the order of the core diameter (i.e. the superconducting coherence length, $\xi$) (See Fig.\ref{fig:core}b, c). Let us make a rough estimate of how slowly a vortex should move in order to be stripped of its stocked entropy. The first relevant quantity is the tunneling rate between the inside and the outside of the vortex core. This rate is $\approx \frac{\Delta^2}{\hbar E_F}$. The second is the number of quasi-particles stocked inside the core. In the most naive approach, the vortex core can be assimilated to a normal-state cylinder of radius $\xi$ and the number of quasi-particles will be $\approx k_F^2 \xi^2$.
In a more sophisticated approach, what the core contains are $\approx k_F \xi$ Andreev bound states \cite{Kopnin1991,Stone1996}. Therefore, one can estimate the time it takes to entirely reconfigure the vortex core to be of the order of $k_F \xi \times \frac{ \hbar E_F}{\Delta^2}$. A vortex drifting with a velocity $v_L$ will see its core content replaced if the drift time along a distance of $\xi$ is longer than this time. This will happen if: \begin{equation} \frac{\xi}{v_L} > k_F \xi \times \frac{ \hbar E_F}{\Delta^2} \label{s3} \end{equation} Neglecting numerical factors of the order of unity, inequality \ref{s3} leads to: \begin{equation} \frac{v_L}{v_F} < (\frac{\Delta}{E_F})^2 \label{s4} \end{equation} Therefore, the vortex core will be stripped of its quasi-particles and entropy if the drift velocity is much smaller than the Fermi velocity, v$_F$, multiplied by the square of the ratio of the superconducting gap to the Fermi energy, $\frac{\Delta}{E_F}$. Now, let us consider the numbers. The measured electric field in a Nernst experiment is of the order of $\approx 10^{-3}$ Vm$^{-1}$ in a magnetic field of $\approx 1$ T. Therefore, the flux lines drift with a velocity of $10^{-3}$ m/s. The Fermi velocity, on the other hand, is of the order of $10^5$ m/s. This is a discrepancy of 8 orders of magnitude. It allows inequality \ref{s4} to hold, provided that $\frac{\Delta}{E_F}$ does not fall below a very small number. Interestingly, all superconductors listed in tables \ref{Tab1} and \ref{Tab2} have relatively large $\frac{\Delta}{E_F}$ ratios (in the range of $10^{-1}$ to $10^{-2}$). On the other hand, in an elemental superconductor such as Nb, $\frac{\Delta}{E_F}$ can be as small as $10^{-4}$. In the latter case, the vortex may not be totally stripped of the entropy stocked in its core.
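The orders of magnitude quoted above can be put side by side in a short numerical sketch (all values are the representative magnitudes from the text, not measured numbers for a specific sample):

```python
# Order-of-magnitude check of the criterion v_L / v_F < (Delta / E_F)^2,
# using the representative numbers quoted in the text.
E_field = 1e-3   # measured Nernst electric field [V/m]
B = 1.0          # magnetic field [T]
v_L = E_field / B   # flux-line drift velocity [m/s], ~1e-3
v_F = 1e5           # typical Fermi velocity [m/s]

ratio = v_L / v_F   # ~1e-8: the 8-orders-of-magnitude discrepancy

# Superconductors of tables 1 and 2: Delta/E_F in the 1e-1 .. 1e-2 range
strongly_coupled = (1e-2) ** 2   # 1e-4 -> inequality easily satisfied
# Elemental niobium: Delta/E_F can be as small as 1e-4
niobium = (1e-4) ** 2            # 1e-8 -> inequality only marginal

print(ratio, strongly_coupled, niobium)
```

For the strongly coupled superconductors the inequality holds by four orders of magnitude, while for niobium the two sides are comparable, consistent with the argument that the niobium vortex keeps its stocked entropy.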
\subsection{An irreducible entropy per vortex ?} Is there a lower boundary to the entropy of a moving vortex? If the whole content of the vortex core gets replaced during a movement as short as $\xi$, why should there be any mobile entropy at all? In search of a possible answer to this question, let us recall what Volovik \cite{Volovik2003} pointed out about vortices in fermionic superfluids. He stated that analogs of the black hole horizon can occur in `\textit{liquids moving with velocities exceeding the local maximum attainable speed of quasiparticles. Then an inner observer, who uses only quasiparticles as a means of transferring the information, finds that some regions of space are not accessible for observation. For this observer, who lives in the quantum liquid, these regions are black holes.}'\cite{Volovik2003} It is tempting to speculate that a lower boundary of $k_B\ln 2$ arises as a consequence of the event horizon surrounding a vortex. An information barrier between observers inside and outside would ensure the survival of this last bit of information when the vortex constituents are totally replaced. \section{Concluding remarks} The transport properties of vortex liquids are often more complex than they seem at first sight. This is true not only for the vortex Nernst response, but also for the flux-flow resistivity \cite{Narayan_2003} and the Hall response in the vortex liquid regime \cite{Auerbach2020}. A satisfactory theory of the Nernst response in the vortex liquid is yet to be formulated. This is in contrast with the case of the Nernst signal in the normal state of a superconductor, where the theory has been tested by multiple experimental studies \cite{Behnia2016,jotzu2021}. Here, I argued that the rough amplitude of the Nernst signal can be understood provided that the vortex liquid respects a bound to $\frac{\eta}{s}$ (the viscosity-to-entropy-density ratio) observed in other liquids.
Moreover, the entropy inside a vortex core does not remain bound to the magnetic flux line during the drift if the ratio of the superconducting gap to the Fermi energy is above a reasonable threshold. This may explain why the entropy per vortex is much smaller in superconductors with a larger $\frac{\Delta}{E_F}$ ratio than in superconducting niobium. The subject deserves further theoretical and experimental investigation. \section{Acknowledgements} I am grateful to Beno\^it Fauqu\'e, M. V. Feigel'man, S. A. Hartnoll, A. Kapitulnik, S. A. Kivelson, K. Trachenko, C.W. Rischau, and G. E. Volovik for discussions. This work was supported by the Agence Nationale de la Recherche (ANR-18-CE92-0020-01; ANR-19-CE30-0014-04). \newline \bibliographystyle{unsrt}
\section{Introduction} Sound source localization (SSL) has been one of the most classic and consistently researched topics of microphone array signal processing \cite{brandstein2001microphone}, with wide-ranging applications from acoustic scene analysis \cite{politis2020overview} and acoustic monitoring \cite{valenzise2007scream} to speech enhancement \cite{dibiase2001robust} and spatial audio rendering \cite{pulkki2018parametric}. SSL methods usually focus on providing the direction-of-arrival (DOA) of a single source or of multiple concurrent sources, while temporal smoothing of a single DOA and association of multiple DOA estimates over time form the topic of sound source tracking (SST) \cite{dibiase2001robust}. Recently, the field, traditionally dominated by geometric or statistical model-based approaches, has seen a surge of data- and learning-based SSL proposals using deep neural network (DNN) architectures \cite{wang2018robust, adavanne2018direction, adavanne2018sound, perotin2019crnn, chakrabarty2019multi, nguyen2020robust, diaz2020robust, bianco2020semi}. A deep-learning paradigm for SSL opens up a few interesting research questions, such as basic spectrogram \cite{adavanne2018sound, chakrabarty2019multi} versus refined spatial \cite{perotin2019crnn, nguyen2020robust} multichannel input features, coupling the network architecture to SSL effectively \cite{chakrabarty2019multi, krause2021comparison}, choosing appropriate training source signals for generalization \cite{chakrabarty2019multi, vargas2021improved}, strong versus weak supervision \cite{bianco2020semi}, and posing SSL as a classification \cite{adavanne2018direction, chakrabarty2019multi, perotin2019crnn, nguyen2020robust} or regression \cite{adavanne2018sound, diaz2020robust, perotin2019regression} problem.
The latter division was already present in earlier attempts at single-source deep-learning SSL, such as classification in \cite{xiao2015learning} and regression in \cite{vesperini2016neural}. In classification-based SSL, the range of possible DOAs is discretized into distinct DOA classes, with the classifier having as many outputs as the number of classes. Classification-based SSL has certain advantages: it can serve as a simultaneous source activity detector and it can handle multiple sources with a single network architecture. On the other hand, the gridding determines the effective resolution, errors are higher at boundaries between grid points, and coarse resolutions cannot accommodate moving-source scenarios well. Additionally, for full 3D DOA estimation in azimuth and elevation, even moderate resolutions require hundreds of classes, posing challenges in obtaining adequate training data and training effectively. Classification-based SSL was the dominant paradigm until recently, when studies such as \cite{adavanne2018sound} brought increased attention to regression, with similar performance to classification further validated, e.g., in \cite{perotin2019regression}. Regression-based SSL has its own advantages: a single regressor on DOA vectors or angles can handle the whole DOA domain for a single source with one to three outputs, estimation is continuous, and moving-source scenarios are handled naturally \cite{adavanne2019localization, adavanneThesis}. However, some auxiliary activity detection is required to gate the constant stream of DOAs during inference \cite{diaz2020robust}. Furthermore, in the multi-source case, as many regressors as the presumed maximum number of sources are needed, posing problems of permutations between sources and regression outputs, preventing effective training and increasing localization errors during inference \cite{Cao2020}.
Regression-based SSL is popular in the context of joint sound event localization and detection (SELD), e.g., in the submissions of the DCASE 2019 and DCASE 2020 challenges \cite{politis2020overview}, where participants could use simultaneous event classification information to infer activity and disentangle permutation issues. However, in a classical multi-source SSL setting independent of source signal type, not much work has been done in addressing the above issues. In this study, we propose a training strategy for multi-source regression-based SSL that circumvents all the aforementioned issues. More specifically, a) instead of optimizing only spatial localization errors, as is commonly done, source detection terms are included in the loss, improving overall performance, b) permutation errors are avoided by integrating tracking-inspired loss terms, and c) the method provides an end-to-end training strategy that can handle dynamically changing conditions with a variable number of sources, suitable for real-life annotated recordings. \section{Localization and tracking metrics} Considering a recording with a maximum number $N_\mathrm{max}$ of sound sources active over its duration, not necessarily simultaneously, we can define the predictions of an SSL system as $\tilde{\mathbf{X}}_t = [\tilde{\mathbf{x}}_1(t), ...,\tilde{\mathbf{x}}_i(t), ..., \tilde{\mathbf{x}}_{M_t}(t)]$, where $\tilde{\mathbf{x}} = [\tilde{x},\tilde{y},\tilde{z}]$ is the estimated DOA or position vector of a single source, and $M_t$ is the number of predictions at the $t$-th frame. At the same time, $N_t\leq N_\mathrm{max}$ ground truth sources and their locations are denoted by $\mathbf{X}_t = [\mathbf{x}_1(t), ...,\mathbf{x}_j(t), ..., \mathbf{x}_{N_t}(t)]$. The combinations of predictions and references form the $M_t\times N_t$ distance matrix $\mathbf{D}_t$ with an appropriate spatial distance measure for the application; e.g.
the angular distance $d_{ij} = \arccos( \tilde{\mathbf{x}}_i\cdot \mathbf{x}_j / ||\tilde{\mathbf{x}}_i|| ||\mathbf{x}_j|| )$ when DOAs are considered. Based on $\mathbf{D}$, we can also consider an optimal association of references and predictions, in a minimum-cost sense, expressed by an $M_t \times N_t$ binary association matrix $\mathbf{A}_t = \mathcal{H}(\mathbf{D}_t)$, where $\mathcal{H}(\cdot)$ is the Hungarian algorithm \cite{Hungarian}. The association matrix $\mathbf{A}$ allows an optimal frame-wise \emph{localization error} (LE) to be computed between the $K_t=\min(M_t,N_t)$ associated prediction-reference pairs, as \begin{equation} LE_t = \frac{1}{K_t}\sum_{i,j} a_{ij}(t) d_{ij}(t) = \frac{|| \mathbf{A}_t \odot \mathbf{D}_t||_1}{||\mathbf{A}_t||_1}, \label{eq:le} \end{equation} with $d_{ij} = [\mathbf{D}]_{ij}$, $a_{ij} = [\mathbf{A}]_{ij}$, $||\cdot||_1$ being the $L_{1,1}$ entrywise matrix norm, and $\odot$ the entrywise matrix product. Complementary to LE, the association matrix $\mathbf{A}$ indicates hits/true positives (TP) $TP_t = K_t$, false alarms/false positives (FP) $FP_t = \max(0, M_t-N_t)$, and misses/false negatives (FN) $FN_t = \max(0, N_t-M_t)$. From those, detection metrics such as the \emph{localization recall} (LR), \emph{localization precision} (LP), and a \emph{localization F1-score} (LF1) can be computed \cite{politis2020overview}. The above SSL metrics reveal the performance of the system in detecting and accurately localizing the sources in the scene, but not how well the estimates are maintained across time, which is the task of tracking. Tracking metrics for multiple objects or sources are still an open field of research.
Some established ones, such as OSPA \cite{schuhmacher2008consistent}, favour trajectory consistency, while others, like the CLEAR Multiple Object Tracking (MOT) metrics \cite{bernardin2008evaluating}, try to balance good localization performance against consistent identities between estimates from frame to frame, penalizing \emph{identity switches} (IDS). Two complementary MOT metrics are proposed in \cite{bernardin2008evaluating}, the MOT-precision (MOTp) and MOT-accuracy (MOTa) \begin{align} MOTp &= \frac{\sum_t || \mathbf{A}_t \odot \mathbf{D}_t||_1}{\sum_t K_t} \\ MOTa &= 1 - \frac{\sum_t FP_t+FN_t+IDS_t}{\sum_t N_t}. \end{align} As is evident, MOTp is actually equivalent to LE averaged across all frames. IDS can be computed by comparing the current and previous frame association matrices $\mathbf{A}_t, \mathbf{A}_{t-1}$, given knowledge of the source ID for every column of $\mathbf{A}$ across frames, e.g. as in \cite{xu2020train}. MOTa itself is a combination of detection metrics with an additional tracking penalty expressed by IDS. \begin{figure}[!tp] \centerline{\includegraphics[width=\linewidth, height=8cm,keepaspectratio]{images/Differentiable_Tracking-Based_Training.pdf}} \vspace{-10pt} \caption{Block diagram of Differentiable Tracking-Based Training.} \label{fig:doanet} \vspace{-10pt} \end{figure} \section{Proposed method} The proposed method is strongly inspired by the work of \cite{xu2020train} on training video object detectors with an additional network plugged onto the end of the object detector, directly optimizing the MOT metrics through a differentiable soft approximation of them. To the best of our knowledge, this strategy has not been attempted before on SSL problems, and its effects on multi-source regression have not been studied. Our proposal follows the training of \cite{xu2020train} with certain modifications.
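As a toy illustration of the metrics of the previous section, the sketch below builds the distance matrix per frame, finds the minimum-cost association by brute force (equivalent to the Hungarian algorithm for the small $N_\mathrm{max}=2$ used later in this paper), and accumulates MOTp and MOTa over two hypothetical frames; IDS counting is omitted (set to zero) for brevity, and the DOA values are invented:

```python
# Toy computation of per-frame association, LE/MOTp and MOTa.
import math
from itertools import permutations

def ang(u, v):
    """Angular distance arccos(u . v) in degrees, for unit DOA vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def associate(preds, refs):
    """Minimum-cost association over K = min(M, N) pairs (brute force)."""
    K = min(len(preds), len(refs))
    if K == 0:
        return [], 0.0
    cost, pairs = min(
        (sum(ang(preds[i], refs[j]) for i, j in zip(rows, cols)),
         list(zip(rows, cols)))
        for rows in permutations(range(len(preds)), K)
        for cols in permutations(range(len(refs)), K)
    )
    return pairs, cost

# (predictions, references) per frame, unit DOA vectors (invented values)
frames = [
    ([(1, 0, 0), (0, 1, 0)], [(0.9962, 0, -0.0872), (0, 0.9848, 0.1736)]),
    ([(1, 0, 0)],            [(0.9962, 0, -0.0872), (0, 1, 0)]),  # one miss
]

sum_d = sum_tp = sum_fp = sum_fn = sum_n = 0
for preds, refs in frames:
    pairs, cost = associate(preds, refs)
    sum_d += cost
    sum_tp += len(pairs)                       # TP_t = K_t
    sum_fp += max(0, len(preds) - len(refs))   # FP_t
    sum_fn += max(0, len(refs) - len(preds))   # FN_t
    sum_n += len(refs)                         # N_t

motp = sum_d / sum_tp                       # == frame-averaged LE
mota = 1 - (sum_fp + sum_fn + 0) / sum_n    # IDS = 0 in this toy example
print(motp, mota)
```

Here the first frame matches both predictions at roughly 5 and 10 degrees, and the second frame contributes one match and one miss, so MOTp is about 6.7 degrees and MOTa is 0.75.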
The overall block diagram is shown in Fig.~\ref{fig:doanet}, consisting of the localization network, termed herein \emph{DOAnet}, and a deep Hungarian network (\emph{Hnet}) taking as input the distance matrix $\mathbf{D}$ computed from the DOAnet outputs and predicting an association matrix $\tilde{\mathbf{A}}$. The $\tilde{\cdot}$ indicates a (soft) differentiable approximation of the underlying quantity. A series of differentiable matrix manipulations follow that provide further soft approximations $\tilde{LE}, \tilde{TP}, \tilde{FP}, \tilde{FN}$, and $\tilde{IDS}$. From those approximations, the differentiable $dMOTp$ and $dMOTa$ are constructed, and their combination serves as the overall training objective. A difference with the video-based work of \cite{xu2020train} is that, contrary to video object detectors, the localization regressors are constantly active. Hence, we introduce an additional track-activity output branch in the localizer, contributing a third loss term to the overall loss. During inference, the DOA and track-activity outputs are combined to form consistent DOA trajectories. \begin{figure}[!tp] \centerline{\includegraphics[width=\linewidth, height=5cm,keepaspectratio]{images/Hungarian_Net.pdf}} \vspace{-10pt} \caption{Block diagram of Hungarian network.} \label{fig:hnet} \vspace{-10pt} \end{figure} \subsection{Hungarian network (Hnet)} \label{sec:hnet} The Hnet is the fundamental block of the proposed differentiable tracking-based training strategy. It estimates the association matrix $\tilde{\mathbf{A}}$ with dimensions identical to those of the input distance matrix $\mathbf{D}$. In comparison to the deep Hungarian network proposed in~\cite{xu2020train}, we employ a simplified architecture, as shown in Fig.~\ref{fig:hnet}, with three losses to train Hnet swiftly and efficiently.
We use a gated recurrent unit (GRU) input layer with 128 units, which treats one of the two dimensions of the input matrix $\mathbf{D}$ as the time sequence and the other as the feature length. The output time sequence of the GRU is fed to a single-head self-attention network~\cite{vaswani2017attention} to identify the time steps with correct associations. The output of the self-attention layer is processed by a fully-connected network with a sigmoid non-linearity, which estimates $\tilde{\mathbf{A}}$ as a multiclass multilabel classification task. Additionally, to guide the network to predict a maximum of one association per row and column, as expected for associations resulting from the Hungarian algorithm, we perform a max operation on the output of the fully-connected network (before the sigmoid non-linearity used to compute $\tilde{\mathbf{A}}$) along both the temporal ($\bf{max_T()}$) and the feature ($\bf{max_F()}$) axes. We employ sigmoid non-linearities on these outputs, since more than one class can be active in an output instance. Finally, the Hnet is trained in a multi-task framework with a weighted combination of three losses, each computed using binary cross-entropy between the predictions and the target labels of $\mathbf{A}$, $\bf{max_T(\mathbf{A})}$, and $\bf{max_F(\mathbf{A})}$, respectively. \subsection{Differentiable direction of arrival network (DOAnet)} \label{sec:doanet} Regarding the DOAnet, we propose a convolutional recurrent neural network (CRNN) architecture, following an updated version of SELDnet~\cite{adavanne2018sound} used as the baseline of DCASE 2020~\cite{politis2020dataset}. The detailed architecture is shown in Fig.~\ref{fig:doanet}. Based on the chosen array type, we employ different multichannel acoustic features. For the first-order Ambisonics (FOA) format, we extract 4 channel-wise mel-band energies and 3 channels of acoustic active intensity vectors \cite{pulkki2018parametric} representing their $(x, y, z)$ vector components, resulting in a total of 7 features.
All features are computed using 64 mel bands, resulting in a total feature dimension of $7\times T\times 64$, where $T$ is the number of temporal input frames. Similarly, for the MIC array we compute 4 channel-wise mel energies and GCC-PHAT curves between the 6 channel pairs, resulting in a total feature dimension of $10\times T\times 64$. The network is identical for both spatial formats. Three convolutional layers, with 128 units each, are employed to learn shift-invariant features from the input acoustic features. Max-pooling is performed on both the temporal and feature axes to obtain an output of dimension $128\times T/5\times 8$, where $T/5$ amounts to 100 msec and is equal to the temporal resolution of the DOA labels in the dataset (see Section~\ref{sec:dataset}). Two layers of bidirectional GRUs, each with 128 units, are employed to model the temporal structure of the convolutional features. Thereafter, two separate branches are employed to learn a) the DOA trajectories and b) their temporal track activity. The DOA trajectory output branch is of dimensions $T/5 \times(3 N_{max})$, where for each time frame the locations of $N_{max}$ DOAs in Cartesian form are estimated using regression. Since DOAs constitute unit vectors and their components are bounded in $[-1,1]$, tanh activations are used. The second output is of dimension $T/5\times N_{max}$, indicating track activity for the $N_{max}$ DOA outputs at each time instance. Since any of the $N_{max}$ tracks can be active in a given frame, sigmoid activations are used. During training of the DOAnet, pairwise Euclidean distances are computed between the $M_t$ predicted and $N_t$ reference DOAs, forming the distance matrix $\mathbf{D}$. Euclidean distances are used instead of angular (cosine) distances, since they were found in \cite{adavanne2018sound, perotin2019regression} to perform better during training.
Note that we embed the pairwise distances in a $\mathbf{D}$ matrix of the maximum dimensions $N_\mathrm{max} \times N_\mathrm{max}$, padding rows and columns beyond $M_t, N_t$ with out-of-range values (i.e. $\gg 2$). The input sequence to Hnet finally has the dimension $T/5\times N_{max}\times N_{max}$. A pre-trained Hnet with frozen weights is then employed to obtain the soft associations $\tilde{\mathbf{A}}$ from the input $\mathbf{D}$. The combined DOAnet, Hnet, and final differentiable operations forming dMOTa and dMOTp are jointly trained with a weighted combination of three losses: the dMOTa, dMOTp, and track-activity losses. Since the Hnet weights are frozen, weight updates are only performed on the DOAnet. The differentiable tracking losses dMOTa and dMOTp are computed in an identical fashion as proposed in~\cite{xu2020train}, using the inputs $\mathbf{D}$ and $\tilde{\mathbf{A}}$. For the loss of the track-activity branch, we perform a row-wise max operation on the $\tilde{\mathbf{A}}$ matrix to obtain an $N_\mathrm{max} \times 1$ vector of soft activity values for all regressors, with higher values indicating a higher probability of activity. The values are further thresholded and binarized. The collection of such vectors across frames results in the binary matrix $\mathbf{D}_\mathrm{ref}$ of size $T/5\times N_\mathrm{max}$, which is treated as the reference temporal activity of the DOA regressors. Then, the temporal activity branch is optimized with a binary cross-entropy loss between its predicted $\mathbf{D}_\mathrm{pred}$ and reference $\mathbf{D}_\mathrm{ref}$ track activities. In order to support open research and reproducibility, we are publicly releasing the code of Hnet\footnote{https://github.com/sharathadavanne/hungarian-net} and DOAnet\footnote{https://github.com/sharathadavanne/doa-net}.
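The padding step described above can be sketched as follows. A constant pad value is used here for simplicity (the actual training data assigns random high values, as discussed in the evaluation section); the helper name is ours:

```python
# Embedding an M_t x N_t matrix of pairwise distances into a fixed
# N_max x N_max matrix, padding unused rows/columns with out-of-range
# values (> 2, i.e. beyond the maximum Euclidean distance between
# unit DOA vectors).
N_MAX = 2
PAD = 10.0  # any value >> 2 works; a constant is used for illustration

def pad_distance_matrix(D, n_max=N_MAX, pad=PAD):
    """D: M x N list-of-lists of Euclidean distances between unit DOAs."""
    out = [[pad] * n_max for _ in range(n_max)]
    for i, row in enumerate(D):
        for j, d in enumerate(row):
            out[i][j] = d
    return out

# One prediction, two references -> the missing prediction row is padded
D = [[0.3, 1.7]]
print(pad_distance_matrix(D))
```

The out-of-range entries let Hnet recognize inactive rows and columns directly from the magnitude of the distances.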
\begin{table}[t] \large \centering \caption{Results of differentiable tracking based training on DCASE2020 SELD task dataset.} \label{tab:results} \resizebox{0.49\textwidth}{!}{% \begin{tabular}{l|cccc|cccc} & \multicolumn{4}{c|}{\textbf{FOA}} & \multicolumn{4}{c}{\textbf{MIC}} \\ \cline{2-9} \textbf{Loss function} & \textbf{\begin{tabular}[c]{@{}c@{}}LE $\downarrow$/\\ MOTp\end{tabular}} & \textbf{MOTa} $\uparrow$ & \textbf{IDS} $\downarrow$ & \textbf{LR} $\uparrow$ & \textbf{\begin{tabular}[c]{@{}c@{}}LE $\downarrow$/\\ MOTp\end{tabular}} & \textbf{MOTa} $\uparrow$& \textbf{IDS} $\downarrow$ & \textbf{LR} $\uparrow$ \\ MSE & 25.4 & $\sim$ & $\sim$ & $\sim$ & 25.3 & $\sim$ & $\sim$ & $\sim$ \\ dMOTp & 13.7 & $\sim$ & $\sim$ & $\sim$ & 13.6 & $\sim$ & $\sim$ & $\sim$ \\ \multicolumn{9}{l}{\textbf{+Augmentation}} \\ \hline dMOTp & 12.1 & $\sim$ & $\sim$ & $\sim$ & 11.8 & $\sim$ & $\sim$ & $\sim$ \\ dMOTp+Act & 9.7 & 69.0 & 2374 & 86.9 & 8.7 & 71.3 & 1982 & 87.3 \\ dMOTp+dMOTa+Act & 9.5 & \textbf{70.5} & \textbf{2188} & \textbf{88.1} & 8.5 & \textbf{72.1} & \textbf{1812} & \textbf{87.6} \\ \multicolumn{9}{l}{} \\ \multicolumn{9}{l}{\textbf{DCASE2020 top submissions}} \\ \hline Du\_USTC (1) & \textbf{7.4} & $\sim$ & $\sim$ & 84.7 & \textbf{7.4} & $\sim$ & $\sim$ & 84.7 \\ Nguyen\_NTU (2) & 12.1 & $\sim$ & $\sim$ & 82.0 & $\sim$ & $\sim$ & $\sim$ & $\sim$ \\ Shimada\_SONY (3) & 7.5 & $\sim$ & $\sim$ & 83.5 & $\sim$ & $\sim$ & $\sim$ & $\sim$ \end{tabular} } \vspace{-10pt} \end{table} \section{Evaluation} \subsection{Hungarian network training} \label{sec:training} In order to train the Hnet, we generate a dataset with a training split of 405k distance matrices $\mathbf{D}$ and their corresponding association matrices $\mathbf{A}$. The validation split is 10\% the size of the training split. The dimensions of $\mathbf{D}$ and $\mathbf{A}$ are the same and fixed to ($N_{max} \times N_{max}$), where $N_{max} = 2$ is the maximum polyphony in the dataset. 
We sample an equal number of $\mathbf{D}$ matrices by randomly choosing reference and predicted DOAs from spherical equiangular grids with resolutions of 1, 2, 3, 4, 5, 10, 15, 20, and 30 degrees. All combinations of (number of predictions, number of references), i.e. \{(0,0), (0,1), (1,0), (1,1), (1,2), (2,1), (2,2)\}, are represented equally in the dataset. As mentioned in Sec.~\ref{sec:doanet}, Euclidean distances are used to form the distance pairs in $\mathbf{D}$. Due to padding $\mathbf{D}$ to $N_\mathrm{max}\times N_\mathrm{max}$ dimensions even when $M_t,N_t<N_\mathrm{max}$, random high distance values are assigned to the respective inactive entries, helping Hnet to easily identify the correct number of active DOAs and their associations. An example is depicted in the first input $\mathbf{D}$ distance matrix of Fig.~\ref{fig:hnet}, with the corresponding association $\mathbf{A}$ under it. After training, Hnet achieves an F-score of $>$99\% on any $\mathbf{D}$ data generated with the aforementioned specifications. \subsection{Evaluation setup} \label{sec:dataset} For the evaluation of the whole differentiable training strategy, we use the development set of the \emph{TAU-NIGENS Spatial Sound Events 2020} dataset \cite{politis2020dataset}, provided in the DCASE2020 Task 3 (SELD) challenge. It consists of diverse spatialized sound events, including moving sources, emulated in challenging real reverberant conditions using measured room impulse responses from 13 different rooms, with real spatial ambient noise added. The recordings are offered in two 4-channel formats: a tetrahedral microphone array (MIC) and first-order Ambisonics (FOA). The same development set split is used for training, validation, and testing as indicated in the challenge \cite{politis2020dataset}.
The spatiotemporal annotations are used to extract the reference DOAs, event identities, and temporal activations at each frame, required for the evaluation of the system, ignoring the class/sound-type labels of the original annotations. An additional evaluation is conducted on an augmented version of the dataset. Following a simple spatial augmentation strategy popular in DCASE 2020 \cite{Du2020_task3_report}, additional recordings of overlapping sources were generated by mixing each non-overlapping recording with four other non-overlapping ones, resulting in four times the original amount of two-source overlapping recordings. \section{Results} The results for both formats, MIC and FOA, are presented in Table~\ref{tab:results}. Results for $LE/MOTp$ are shown for all tested configurations, while results for $MOTa, IDS, LR$ are shown only for configurations including the track-activity detection branch. Without activity detection, all regressors constantly output DOAs, hence $LR=100\%$ and the rest of the detection scores are not meaningful. As the first result, and as a baseline, we train the DOAnet using an MSE loss between predicted and reference DOAs without any association strategy. This configuration results in large errors due to permutations between the estimates, which prohibit effective training and lead to suboptimal performance during inference. Just replacing it with the dMOTp loss, which finds the optimal assignment with the minimum frame-wise LE, nearly halves the localization error. Moving to the augmented dataset with the same dMOTp loss brings a further small decrease in LE. By introducing the activity detection branch and the respective loss, the LE/MOTp is further reduced below 10$^\circ$. With track-activity information introduced, we can also get a realistic picture of the localization detection and MOTa scores.
The combination of the track activity loss and dMOTp alone achieves a high $LR$ in the challenging and dynamic reverberant conditions of the dataset, with sources appearing, overlapping, and disappearing often in the testing set. Adding the dMOTa loss increases the $MOTa$ and $LR$ metrics further. Apart from improvements in $LE$ and $LR$, dMOTa improves trajectory consistency at the regressor outputs, something that is not captured by the $LE,LR$ metrics. Instead, this improvement is exemplified by the $IDS$ scores, which drop significantly when dMOTa is included. For a comparative look with other systems on the same dataset, we include the top three systems of the DCASE2020 challenge, along with their reported challenge $LE,LR$ results on the development dataset. The proposed training strategy of multi-source regression SSL is competitive against those methods, with both $LE$ and $LR$ being in a similar range. Furthermore, the proposed DOAnet with differentiable tracking-based training is much simpler than these proposals in terms of complexity, and it achieves such results without relying on additional sound class information. However, it has to be noted that the comparison is qualitative, since the $LR$ and $LE$ scores in the challenge submissions are first computed between the target sound classes, and then averaged. \section{Conclusions} A method has been presented for end-to-end training of regression-based multi-source localizers that can handle realistic training data with time-varying source numbers, overlapping scenarios, and moving sources. Similarly, during inference and for the same dynamic acoustic conditions, the method achieves low localization errors, high localization detection scores, and improved tracking performance between the multiple DOA regressors. The approach is competitive against state-of-the-art SELD systems, at a reduced complexity and without dependency on sound-type detection information. \bibliographystyle{IEEEtran}
\section{Introduction} \qquad Socio-economic problems have been the target of several studies in recent years \cite{econ_book,pmco_book}. Those interdisciplinary topics are usually treated by means of computer simulations of agent-based models, which allow us to understand the emergence of collective phenomena in those systems. Among the studied problems, one of great interest is tax evasion dynamics, which is interesting from the practical point of view because tax evasion remains a major predicament facing governments \cite{bloom,prinz,andreoni}. Economists have studied models of tax evasion for several years \cite{gachter,frey,follmer,slemrod,davis,salmina,wenzel,hood}, and more recently physicists also became interested in the subject \cite{zaklan,lima1,lima2,lima3,llacer,seibold,bertotti,meu_econo} (for recent reviews, see \cite{bloom,prinz}). Experimental evidence provided by Gachter suggests that tax payers tend to condition their decision regarding whether to pay taxes or not on the tax evasion decision of the members of their group \cite{gachter}. In addition, Frey and Torgler also provide empirical evidence on the relevance of conditional cooperation for tax morale \cite{frey}. Taking those discussions into account, Zaklan \textit{et al.} recently proposed a model that has attracted attention \cite{zaklan}. In the so-called Zaklan model, the dynamics of tax payers and tax evaders is analyzed by means of the two-dimensional Ising model at a given temperature $T$. Each agent $i$ in the artificial population may be in one of two possible states, namely $s_{i}=+1$ (honest) or $s_{i}=-1$ (cheater or tax evader). A transition $s_{i} \to -s_{i}$ (or a spin flip) is controlled by the ``social temperature'' $T$ and also depends on the states of the nearest neighbors of the agent (or spin) at site $i$. Thus, for low temperatures few spin flips occur and for high temperatures many spin flips occur.
In other words, tax evaders have the greatest influence to turn honest citizens into tax evaders if they constitute a majority in the respective neighborhood. In addition, some punishment rules are applied: there is a probability $p_{a}$ of an audit each agent is subject to in every period and a length of time $k$ during which detected tax evaders remain honest \cite{zaklan}. In another work, the dynamics of the model was also controlled by another two-state model, namely the majority-vote model with noise \cite{maj_vot}, where the noise $q$ plays the role of the temperature. In this case, similar results were found \cite{lima3}, suggesting that the results of the Zaklan model are robust. An interesting extension of such models is to consider that the transition from honest to evader is not abrupt. In this case, one can consider a third state that can be called susceptible or undecided \cite{davis,meu_econo}. The presence of such a class was analyzed taking into account the dynamics of kinetic exchange opinion models \cite{meu_econo,biswas,meu,allan_pla}, and considering the same punishment rules of the Zaklan model. In this case, it was discussed \cite{meu_econo} that the presence of such a third class affects the dynamics substantially, and that compliance is high below the critical point (of the order-disorder transition) of the opinion dynamics governed by the kinetic exchanges. On the other hand, above the critical point tax evasion can be considerably reduced by the enforcement mechanism. In this work we propose another three-state agent-based model to analyze tax evasion dynamics. The transitions among the classes are ruled by probabilities, similarly to what happens in models of epidemic spreading \cite{bailey,anderson,satorras2015}. The enforcement mechanism is incorporated in the mentioned probabilities, as well as the social pressure of the contacts of a given individual.
We will also see that the emergence of tax evaders in the population can be associated with a nonequilibrium phase transition, which was not observed in previous physics models of tax evasion, to the best of our knowledge \cite{zaklan,lima1,lima2,lima3,llacer,seibold,bertotti,meu_econo}. This work is organized as follows. In section 2 we define the model's rules and the types of individuals present in the population. Then, we discuss results in three distinct topologies. Finally, in section 3 we present our conclusions and final remarks. \section{Model and Results} \qquad We considered a population of $N$ agents defined on a given network of contacts, to be specified in the following. Each individual $i$ ($i=1,2,...,N$) can be in one of three possible states or attitudes at a given time step $t$, represented by $X_{1}(t)$, $X_{2}(t)$ and $X_{3}(t)$. In other words, $X_{j}$ represents the number of individuals in a given state, with $j=1,2,3$. The state $X_{1}$ represents an \textit{honest tax payer}, i.e., an individual 100$\%$ convinced of his/her honesty, who does not consider evasion. He/she is either habitually compliant or a recent evader who has become honest as a result of enforcement efforts or social norms. On the other hand, the state $X_{3}$ represents a cheater, i.e., an individual who is an \textit{evading tax payer}. Whether a tax payer continues to evade depends on both enforcement and the effect of social interactions. Finally, the third state $X_{2}$ consists of taxpayers who are dissatisfied with the tax system (perhaps as a result of seeing others evade without being punished). These taxpayers are not actively evading, but they might if the perceived benefits of doing so exceed the perceived costs. For this group, evasion is an option, and so we classify them as \textit{susceptibles} \cite{davis}, i.e., they are susceptible to becoming evaders.
We consider here two distinct mechanisms to govern the transitions among the above-mentioned classes $X_{1}, X_{2}$ and $X_{3}$: social interactions and the enforcement regime. The possible transitions are as follows: \begin{eqnarray} \label{eq1} X_{1} + X_{3} & \stackrel{\lambda}{\rightarrow} & X_{2} + X_{3} ~, \\ \label{eq2} X_{2} & \stackrel{\alpha}{\rightarrow} & X_{3} ~, \\ \label{eq3} X_{3} + X_{1} & \stackrel{\delta}{\rightarrow} & X_{1} + X_{1} ~, \\ \label{eq4} X_{3} & \stackrel{\beta}{\rightarrow} & X_{1} ~. \end{eqnarray} The interpretation of these transitions is as follows. Eq. (\ref{eq1}) represents an encounter of an honest agent $X_{1}$ with an evader $X_{3}$. In this case, with probability $\lambda$ the honest individual becomes susceptible $X_{2}$. The parameter $\lambda$ can be viewed as the persuasion power of the evaders. The following transition, Eq. (\ref{eq2}), represents a spontaneous transition from the susceptible state $X_{2}$ to the evader state $X_{3}$. The enforcement regime affects the behavior of a susceptible individual through its effect on the perceived costs of evasion (cost-benefit analysis). Thus, we assume that some susceptible tax payers will perceive that the benefits of evasion exceed its costs in each period, leading these individuals to evade. This is represented by the probability $\alpha$. In this way, the transition from honest to evader is not abrupt: the individual first becomes susceptible and afterwards may become an evader. Eq. (\ref{eq3}) represents the opposite transition in comparison with Eq. (\ref{eq1}): an encounter of an evader agent $X_{3}$ with an honest tax payer $X_{1}$. In this case, with probability $\delta$ the evader becomes honest. The parameter $\delta$ can be viewed as the persuasion power of the honest agents.
One can also consider that this last transition occurs to the state $X_{2}$, but for simplicity we consider that the evaders go directly to the honest compartment. Finally, Eq. (\ref{eq4}) represents another enforcement effect. We consider that evaders become compliant after they are audited or when their perceptions regarding the costs and benefits of evasion change, either through experience or changing economic conditions \cite{davis}. This transition occurs with probability $\beta$, which can be viewed as a measure of the efficiency of the government's fiscalization. As in the previous case, one could also consider that some evaders might not be rehabilitated when they are audited, remaining susceptible rather than becoming honest, but for simplicity we will not consider those additional transitions. In the following subsections we consider the model defined in Eqs. (\ref{eq1}) - (\ref{eq4}) on top of three distinct topologies: the fully-connected network, the Erd\H{o}s-R\'enyi random graph and the scale-free Barab\'asi-Albert network. \subsection{Fully-connected network} \qquad In this section we consider the model on a fully-connected graph. Considering the densities of each state, namely $x_{j}=X_{j}/N$ ($j=1,2,3$), the above Eqs. (\ref{eq1}) - (\ref{eq4}) can be translated into the mean-field equations \begin{eqnarray} \label{eq5} \frac{d}{dt}\,\,x_{1} & = & \beta\,x_{3} - \lambda\,x_{1}\,x_{3} + \delta\,x_{1}\,x_{3} ~, \\ \label{eq6} \frac{d}{dt}\,x_{2} & = & -\alpha\,x_{2} + \lambda\,x_{1}\,x_{3} ~, \\ \label{eq7} \frac{d}{dt}\,x_{3} & = & \alpha\,x_{2} - \beta\,x_{3} - \delta\,x_{1}\,x_{3} ~, \end{eqnarray} \noindent where now $x_{1}$, $x_{2}$ and $x_{3}$ denote the fractions of honest, susceptible and tax evader individuals, respectively. One can start by analyzing the time evolution of the three classes of individuals. We numerically integrated Eqs. (\ref{eq5}), (\ref{eq6}) and (\ref{eq7}) in order to analyze the effects of varying the model's parameters.
As initial conditions, we considered $x_{1}(0)=0.98$, $x_{2}(0)=0.02$ and $x_{3}(0)=0$, and for simplicity we fixed $\alpha=0.2$ and $\delta=0.3$, varying the parameters $\lambda$ and $\beta$. In Fig. \ref{fig1} we exhibit results for fixed $\beta=0.2$ and typical values of $\lambda$ (left panels) and for fixed $\lambda=0.8$ and typical values of $\beta$ (right panels). For the cases with fixed $\beta$, one can see that the increase of $\lambda$ causes the decrease of $x_{1}$ and the increase of $x_{2}$ and $x_{3}$. Remembering that $\lambda$ models the persuasion of evaders $x_{3}$ in the social interactions with honest agents $x_{1}$, i.e., the transition given by Eq. (\ref{eq1}), it is easy to understand those results: for increasing values of $\lambda$ more agents go from the class $x_{1}$ to the class $x_{2}$, and these susceptible individuals can later move to the evader class $x_{3}$, which causes an increase of the susceptible and evader classes, and a decrease of honest agents. On the other hand, in the plots with fixed $\lambda$, one can see that the increase of $\beta$ leads to an increase of honest agents and a decrease of susceptibles and evaders. This is compatible with the interpretation of $\beta$ as the government's fiscalization: the increase of the efficiency of the enforcement regime leads to the rise of honesty in the population, as well as the decrease of tax evasion. Thus, the variation of the two parameters $\beta$ and $\lambda$ models the competition between the pressure of the social contacts and the State's fiscalization.
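The numerical integration just described can be sketched as follows (a minimal illustration with SciPy, not the authors' code; the integration horizon and tolerances are assumptions). It uses one of the parameter sets above and compares the long-time solution against the analytic fixed points of Eqs. (8)-(10):

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, delta, lam, beta = 0.2, 0.3, 0.8, 0.2  # one of the parameter sets used

def rhs(t, x):
    """Right-hand side of the mean-field Eqs. (5)-(7)."""
    x1, x2, x3 = x
    dx1 = beta * x3 - lam * x1 * x3 + delta * x1 * x3
    dx2 = -alpha * x2 + lam * x1 * x3
    dx3 = alpha * x2 - beta * x3 - delta * x1 * x3
    return [dx1, dx2, dx3]

sol = solve_ivp(rhs, (0.0, 500.0), [0.98, 0.02, 0.0], rtol=1e-10, atol=1e-12)
x1, x2, x3 = sol.y[:, -1]

# Analytic fixed points, Eqs. (8)-(10), for comparison:
x1_star = beta / (lam - delta)
x3_star = (lam - delta - beta) / (lam - delta + (lam / alpha) * beta)
x2_star = 1.0 - x1_star - x3_star  # densities are conserved: x1 + x2 + x3 = 1
```

Since $\lambda=0.8$ exceeds $\lambda_{c}=\beta+\delta=0.5$ here, the trajectory relaxes to the active fixed point, where all three classes coexist.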
\begin{figure}[t] \begin{center} \vspace{6mm} \includegraphics[width=0.33\textwidth,angle=270]{fig1a.eps} \hspace{0.3cm} \includegraphics[width=0.33\textwidth,angle=270]{fig1d.eps} \\ \vspace{0.5cm} \includegraphics[width=0.33\textwidth,angle=270]{fig1b.eps} \hspace{0.3cm} \includegraphics[width=0.33\textwidth,angle=270]{fig1e.eps} \\ \vspace{0.5cm} \includegraphics[width=0.33\textwidth,angle=270]{fig1c.eps} \hspace{0.3cm} \includegraphics[width=0.33\textwidth,angle=270]{fig1f.eps} \end{center} \caption{(Color online) Time evolution of the three densities of agents $x_{1}$, $x_{2}$ and $x_{3}$ for the mean-field formulation of the model, based on Eqs. (\ref{eq5}) - (\ref{eq7}). The fixed parameters are $\alpha=0.2$ and $\delta=0.3$. The left panels show the evolution for $\beta=0.2$ and typical values of $\lambda$, whereas the right panels show the evolution for $\lambda=0.8$ and typical values of $\beta$.} \label{fig1} \end{figure} One can observe in Fig. \ref{fig1} that the fractions $x_{1}$, $x_{2}$ and $x_{3}$ evolve with time and after some steps they stabilize. One can derive the stationary fractions of the three classes analytically by setting the time derivatives to zero in Eqs. (\ref{eq5}), (\ref{eq6}) and (\ref{eq7}). In this case, one obtains the fixed points as functions of the model's parameters, \begin{eqnarray} \label{eq8} x_{1}^{*} & = & \frac{\beta}{\lambda-\delta} ~, \\ \label{eq9} x_{2}^{*} & = & \frac{\lambda\,\beta\,(\lambda-\delta-\beta)}{(\lambda-\delta)\,[\lambda\,\beta+\alpha\,(\lambda-\delta)]} ~, \\ \label{eq10} x_{3}^{*} & = & \frac{\lambda-\delta-\beta}{\lambda-\delta+(\lambda/\alpha)\,\beta} ~.
\end{eqnarray} \begin{figure}[t] \begin{center} \vspace{6mm} \includegraphics[width=0.33\textwidth,angle=270]{fig2a.eps} \hspace{0.3cm} \includegraphics[width=0.33\textwidth,angle=270]{fig2b.eps} \\ \vspace{0.5cm} \includegraphics[width=0.33\textwidth,angle=270]{fig2c.eps} \end{center} \caption{(Color online) Stationary fractions $x_{1}^{*}$ (upper left), $x_{2}^{*}$ (upper right) and $x_{3}^{*}$ (lower) of the three classes of agents for the mean-field formulation of the model, given by Eqs. (\ref{eq8}) - (\ref{eq10}). The fractions are plotted as a function of $\lambda$ for typical values of $\beta$. The fixed parameters are $\alpha=0.2$ and $\delta=0.3$.} \label{fig2} \end{figure} \noindent One can see from the above equations that the model undergoes a nonequilibrium phase transition if we consider the stationary fraction of evaders $x_{3}^{*}$ as an order parameter: for $\lambda\leq\lambda_{c}$ the stationary solutions are given by $x_{1}^{*}=1$ and $x_{2}^{*}=x_{3}^{*}=0$, whereas for $\lambda>\lambda_{c}$ the solutions give us $x_{1}^{*}>0$, $x_{2}^{*}>0$ and $x_{3}^{*}>0$, where the threshold is given by $\lambda_{c}=\beta+\delta$. This is an active-absorbing transition \cite{dickman,hinrichsen}, and it separates a phase where the tax evaders disappear from the population in the long-time limit and the population consists only of honest agents, from a phase where a finite fraction of evaders survives in the long-time limit. The susceptible agents also survive in the active phase, and they disappear in the absorbing phase. For a better analysis of the stationary behavior, we show in Fig. \ref{fig2} the stationary fractions $x_{1}^{*}$, $x_{2}^{*}$ and $x_{3}^{*}$ as functions of $\lambda$ for typical values of $\beta$. The results are based on Eqs. (\ref{eq8}) - (\ref{eq10}). One can see the mentioned phase transition in the lower panel: for values $\lambda\leq \lambda_{c}=\beta+\delta$ we have $x_{3}^{*}=0$, and for $\lambda>\lambda_{c}$ we have $x_{3}^{*}>0$.
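The threshold can be read off directly from the fixed-point expressions: the active solution is physical only when the numerator of Eq. (\ref{eq10}) is positive, i.e.,
\[
x_{3}^{*} > 0 \iff \lambda - \delta - \beta > 0 \iff \lambda > \lambda_{c} = \beta + \delta ,
\]
and exactly at $\lambda = \lambda_{c}$ one has $x_{1}^{*} = \beta/(\lambda_{c}-\delta) = 1$, so the active branch connects continuously to the absorbing solution $(x_{1}^{*},x_{2}^{*},x_{3}^{*}) = (1,0,0)$.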
In addition, one can see again the behaviors discussed above, i.e., the decrease of evasion for increasing fiscalization ($\beta$) and the decrease of honest agents due to the social pressure of tax evaders ($\lambda$), which are realistic features of the model. Comparing the three values of $\beta$ in the lower panel of Fig. \ref{fig2}, we see that the enforcement regime can be extremely effective for controlling evasion. Indeed, this effect can be seen in Eqs. (\ref{eq8}) - (\ref{eq10}): for increasing $\beta$ we see that $x_{3}^{*}$ decreases, and $x_{1}^{*}$ increases. \subsection{Erd\H{o}s-R\'enyi random graph} \qquad In this section we consider the model on Erd\H{o}s-R\'enyi (ER) random graphs. The network is formed by $N$ initially isolated nodes, and we connect each pair with probability $p$. In this case, we performed simulations considering the rules given by Eqs. (\ref{eq1})-(\ref{eq4}) for network size $N=10^{4}$ and connection probability $p=5\times 10^{-4}$, which gives us an average connectivity $\langle k\rangle=5$. The numerical procedure is as follows. We visit every node in the ER graph and apply the rules (\ref{eq1})-(\ref{eq4}). In the case where the chosen node is in the $X_{1}$ state, for example, we apply the rule (\ref{eq1}) if he/she has at least one neighbor in the $X_{3}$ state. The same occurs for the social interaction given by Eq. (\ref{eq3}). The remaining rules (\ref{eq2}) and (\ref{eq4}) are spontaneous transitions. As in the previous subsection, one can start by analyzing the time evolution of the three classes of individuals. We considered the same initial conditions as before, namely $x_{1}(0)=0.98$, $x_{2}(0)=0.02$ and $x_{3}(0)=0$, and for simplicity we fixed $\alpha=0.2$ and $\delta=0.3$, varying the parameters $\lambda$ and $\beta$. In Fig. \ref{fig3} we exhibit results for fixed $\beta=0.2$ and typical values of $\lambda$ (left panels) and for fixed $\lambda=0.8$ and typical values of $\beta$ (right panels).
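The per-node update described above can be sketched as follows (a minimal illustration, not the authors' code; the synchronous sweep and the order in which rules (\ref{eq3}) and (\ref{eq4}) are tried for an evader are assumptions, and the network size is reduced for the example):

```python
import random

HONEST, SUSC, EVADER = 0, 1, 2

def er_graph(n, p, rng):
    """Adjacency lists of an Erdos-Renyi graph: each pair linked with prob. p."""
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    return nbrs

def step(state, nbrs, lam, alpha, delta, beta, rng):
    """One synchronous sweep applying rules (1)-(4) to every node."""
    new = state[:]
    for i, s in enumerate(state):
        if s == HONEST:
            # rule (1): contact with at least one evading neighbor
            if any(state[j] == EVADER for j in nbrs[i]) and rng.random() < lam:
                new[i] = SUSC
        elif s == SUSC:
            # rule (2): spontaneous transition to evasion
            if rng.random() < alpha:
                new[i] = EVADER
        else:  # EVADER
            # rule (3): persuasion by at least one honest neighbor
            if any(state[j] == HONEST for j in nbrs[i]) and rng.random() < delta:
                new[i] = HONEST
            # rule (4): audit / enforcement
            elif rng.random() < beta:
                new[i] = HONEST
    return new

rng = random.Random(1)
nbrs = er_graph(500, 0.01, rng)
state = [SUSC if rng.random() < 0.02 else HONEST for _ in range(500)]
for _ in range(200):
    state = step(state, nbrs, lam=0.8, alpha=0.2, delta=0.3, beta=0.2, rng=rng)
fractions = [state.count(k) / len(state) for k in (HONEST, SUSC, EVADER)]
```

Averaging the resulting fractions over many independent runs and long times yields the stationary densities discussed in the text.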
One can see a behavior qualitatively similar to that observed in the fully-connected graph, i.e., for the cases with fixed $\beta$, the increase of $\lambda$ leads to the decrease of $x_{1}$ and the increase of $x_{2}$ and $x_{3}$, since $\lambda$ is related to the social pressure of tax evaders over honest individuals. In addition, for the plots with fixed $\lambda$, the increase of the fiscalization $\beta$ leads to an increase of honest agents and a decrease of susceptibles and evaders. However, the stationary values are different from the previous case, as is the impact of fiscalization and social pressure. In order to see these differences more clearly, we exhibit in Fig. \ref{fig4} the stationary values $x_{1}^{*}$, $x_{2}^{*}$ and $x_{3}^{*}$ as functions of $\lambda$ for typical values of the fiscalization $\beta$. Comparing the three plots, one can see that fiscalization can reduce the fraction of tax evaders in the population, even if the social pressure $\lambda$ of dishonest individuals over honest ones is high: if $\beta$ is increased from $0.1$ to $0.5$, the stationary fraction of evaders in the population is reduced from $\approx 0.4$ to $\approx 0.2$ for $\lambda=1.0$. However, in comparison with the mean-field case, the reduction of evasion is smaller. Thus, considering a more realistic topology, the pressure of the social contacts in the network leads to a slower decrease of the honest tax payers in comparison with the fully-connected graph. This occurs because a given agent in the ER random graph is always connected to the same neighbors ($\langle k\rangle$ of them on average), whereas in the fully-connected case each agent can interact with all others. Furthermore, one can see a larger density of susceptibles in comparison with the fully-connected graph. One can also see the above-mentioned active-absorbing phase transition, but the threshold values are very small in comparison with the mean-field case.
All these differences arise as consequences of the more realistic topology used to model the society. \begin{figure}[t] \begin{center} \vspace{6mm} \includegraphics[width=0.33\textwidth,angle=270]{fig3a.eps} \hspace{0.3cm} \includegraphics[width=0.33\textwidth,angle=270]{fig3d.eps} \\ \vspace{0.5cm} \includegraphics[width=0.33\textwidth,angle=270]{fig3b.eps} \hspace{0.3cm} \includegraphics[width=0.33\textwidth,angle=270]{fig3e.eps} \\ \vspace{0.5cm} \includegraphics[width=0.33\textwidth,angle=270]{fig3c.eps} \hspace{0.3cm} \includegraphics[width=0.33\textwidth,angle=270]{fig3f.eps} \end{center} \caption{(Color online) Time evolution of the three densities of agents $x_{1}$, $x_{2}$ and $x_{3}$ obtained from simulations of the model defined on the ER random graph. The fixed parameters are $\alpha=0.2$ and $\delta=0.3$. The left panels show the evolution for $\beta=0.2$ and typical values of $\lambda$, whereas the right panels show the evolution for $\lambda=0.8$ and typical values of $\beta$.} \label{fig3} \end{figure} \begin{figure}[t] \begin{center} \vspace{6mm} \includegraphics[width=0.33\textwidth,angle=270]{fig4a.eps} \hspace{0.3cm} \includegraphics[width=0.33\textwidth,angle=270]{fig4b.eps} \\ \vspace{0.5cm} \includegraphics[width=0.33\textwidth,angle=270]{fig4c.eps} \end{center} \caption{(Color online) Stationary fractions $x_{1}^{*}$, $x_{2}^{*}$ and $x_{3}^{*}$ as functions of $\lambda$ for $\beta=0.1$ (upper left), $\beta=0.3$ (upper right) and $\beta=0.5$ (lower) for the model simulated on the ER graph. The fixed parameters are $\alpha=0.2$ and $\delta=0.3$.} \label{fig4} \end{figure} \subsection{Barab\'asi-Albert network} \qquad Finally, in this section we consider the model on Barab\'asi-Albert (BA) scale-free networks. In this case, we performed simulations considering the rules given by Eqs. (\ref{eq1})-(\ref{eq4}) for network size $N=10^{4}$.
Each generated network starts with $2$ nodes connected to each other, and at each time step we add one node with one link to a pre-existing node, chosen according to the usual preferential attachment procedure (probability proportional to the connectivity). The numerical procedure is the same as described for the ER graph: we visit every node in the BA network and apply the rules (\ref{eq1})-(\ref{eq4}). In the case where the chosen node is in the $X_{1}$ state, for example, we apply the rule (\ref{eq1}) if he/she has at least one neighbor in the $X_{3}$ state. The same occurs for the social interaction given by Eq. (\ref{eq3}). The remaining rules (\ref{eq2}) and (\ref{eq4}) are spontaneous transitions. \begin{figure}[t] \begin{center} \vspace{6mm} \includegraphics[width=0.33\textwidth,angle=270]{fig5a.eps} \hspace{0.3cm} \includegraphics[width=0.33\textwidth,angle=270]{fig5d.eps} \\ \vspace{0.5cm} \includegraphics[width=0.33\textwidth,angle=270]{fig5b.eps} \hspace{0.3cm} \includegraphics[width=0.33\textwidth,angle=270]{fig5e.eps} \\ \vspace{0.5cm} \includegraphics[width=0.33\textwidth,angle=270]{fig5c.eps} \hspace{0.3cm} \includegraphics[width=0.33\textwidth,angle=270]{fig5f.eps} \end{center} \caption{(Color online) Time evolution of the three densities of agents $x_{1}$, $x_{2}$ and $x_{3}$ obtained from simulations of the model defined on the BA scale-free networks. The fixed parameters are $\alpha=0.2$ and $\delta=0.3$. The left panels show the evolution for $\beta=0.2$ and typical values of $\lambda$, whereas the right panels show the evolution for $\lambda=0.8$ and typical values of $\beta$.} \label{fig5} \end{figure} The time evolution of the densities is very similar to the previous cases, as shown in Fig. \ref{fig5}. In addition, the stationary values $x_{1}^{*}$, $x_{2}^{*}$ and $x_{3}^{*}$ are exhibited in Fig. \ref{fig6}.
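The growth rule for the BA network described above can be sketched as follows (a minimal illustration, not the authors' code, and with a smaller network for the example; sampling a uniform endpoint from the running edge list realizes the degree-proportional choice, since each node appears in that list once per incident edge):

```python
import random

def ba_network(n, rng):
    """Grow a Barabasi-Albert-style network: start from two linked nodes,
    then attach each new node with one link, chosen with probability
    proportional to degree."""
    targets = [0, 1]        # each node appears once per incident edge
    edges = [(0, 1)]
    for new in range(2, n):
        old = rng.choice(targets)   # degree-proportional choice
        edges.append((new, old))
        targets.extend([new, old])
    return edges

rng = random.Random(42)
edges = ba_network(1000, rng)
# The resulting graph is a tree with n - 1 edges and a few highly
# connected hubs, the hallmark of preferential attachment.
```

With one link per new node the graph is a tree; the paper's simulations use the same attachment rule at $N=10^{4}$.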
In comparison with the previous cases, the decrease of the fraction of honest agents is slower when we raise the probability $\lambda$. We also see a similar behavior related to the increase of the enforcement $\beta$, with a considerable reduction of tax evasion, and a rapid increase of the susceptible agents for increasing values of $\lambda$. As in the other graphs, one can see that we have $x_{3}^{*}=0$ for sufficiently small values of $\lambda$. However, as is typical in epidemic-like models on scale-free BA networks, this is only a finite-size effect, and we do not expect an ``epidemic threshold'' in this case \cite{satorras2015}. As evidence of this fact, we plot in Fig. \ref{fig7} the thresholds $\lambda_{c}(N)$ obtained from simulations as functions of the inverse network size $N^{-1}$. As one can see in the left panel of Fig. \ref{fig7}, for the ER random graph $\lambda_{c}(N)$ tends to stabilize at a finite value for increasing values of $N$, suggesting a nonzero threshold. On the other hand, for the BA network the threshold $\lambda_{c}(N)$ decays as a power law for increasing sizes, signaling the absence of the ``epidemic threshold'' in the thermodynamic limit $N^{-1}\to 0$ (see the right panel of Fig. \ref{fig7}) \cite{satorras2015}. In this case, for a sufficiently large network, one expects the presence of noncompliant individuals (tax evaders) in the population in the long-time limit, for any value of $\lambda>0$. Comparing the results for the two complex topologies (ER and BA), one can see that a given variation of $\lambda$ (social parameter) leads to distinct variations of the fraction of tax evaders (see Figs. \ref{fig3} - \ref{fig6}). This effect can be related to the presence of hubs (highly connected nodes) in the BA network, as well as the large fluctuations of the connectivity. These characteristics are absent in the ER random graph, and are crucial for the spreading of influence in the population.
\begin{figure}[t] \begin{center} \vspace{6mm} \includegraphics[width=0.33\textwidth,angle=270]{fig6a.eps} \hspace{0.3cm} \includegraphics[width=0.33\textwidth,angle=270]{fig6b.eps} \\ \vspace{0.5cm} \includegraphics[width=0.33\textwidth,angle=270]{fig6c.eps} \end{center} \caption{(Color online) Stationary fractions $x_{1}^{*}$, $x_{2}^{*}$ and $x_{3}^{*}$ as functions of $\lambda$ for $\beta=0.1$ (upper left), $\beta=0.3$ (upper right) and $\beta=0.5$ (lower) for the model simulated on the BA network. The fixed parameters are $\alpha=0.2$ and $\delta=0.3$.} \label{fig6} \end{figure} \begin{figure}[t] \begin{center} \vspace{6mm} \includegraphics[width=0.33\textwidth,angle=270]{fig7a.eps} \hspace{0.3cm} \includegraphics[width=0.33\textwidth,angle=270]{fig7b.eps} \end{center} \caption{(Color online) Thresholds $\lambda_{c}(N)$ as functions of the inverse network size $N^{-1}$ for the ER random graph (left) and the BA network (right). The fixed parameters are $\alpha=0.2$ and $\delta=0.3$, and the results are for $\beta=0.1$, $0.3$ and $0.5$. The straight line in panel (b) indicates a power-law decay of $\lambda_{c}(N)$, signaling the absence of the ``epidemic threshold'' in the thermodynamic limit.} \label{fig7} \end{figure} \section{Concluding remarks} \qquad In this work, we have studied the dynamics of tax evasion through an epidemic-like model. We considered three compartments, namely honest tax payers, tax evaders and susceptibles, the latter being individuals in an intermediate class between honest and dishonest agents. The transitions among these classes are ruled by four distinct probabilities, which represent social interactions and the enforcement regime. We studied the dynamics of the system on top of three distinct topologies: the fully-connected graph, the Erd\H{o}s-R\'enyi random graph and the Barab\'asi-Albert scale-free network.
For the fully-connected graph, one can derive mean-field equations that allow us to analyze in detail the dynamics and the steady-state properties of the model. Some realistic behaviors were observed, such as the reduction of evasion due to the enforcement regime, as well as the increase of honesty. We also observed that the emergence of evaders in the population is associated with a nonequilibrium phase transition: for small values of the control parameter there are only honest tax payers in the population, and above the critical point the three classes (honest agents, susceptibles and evaders) coexist in the system. The results are qualitatively similar for the other topologies, but regarding the stationary values of the densities of individuals, we verified that the control of tax evasion is harder if the model is simulated on top of complex networks. In this case, we also verified that the effect of social pressure is more pronounced in comparison with the mean-field case. We observed that in the mean-field case tax evasion (the fraction of noncompliant agents) can be absent ($x_{3}^{*}=0$) even for relatively high social pressure (high $\lambda$), which can be seen as a limitation of the model in the simple case of a fully-connected topology. However, this unrealistic feature disappears when we consider the model on top of complex networks. Indeed, for the ER random graph the region where the tax evaders disappear from the population in the steady state is given by a narrow range of values of $\lambda$, even if the government's fiscalization is high. On the other hand, in the BA scale-free network this effect is more pronounced, and for sufficiently large networks evasion is always present in the population, even for strong fiscalization. Some qualitative comparison with real data can be made. Some authors estimated tax evasion in Brazil in the range $15 - 22\%$, or even higher values (see \cite{siqueira} and references therein).
This range of evasion can be found in our results for small values of $\beta$, i.e., for weak fiscalization and/or light punishment, as occurs in Brazil \cite{page1,book}. For example, in Fig. \ref{fig2} the fraction of evasion in the range $15 - 22\%$ can be observed for $\beta=0.1$, considering the range $\approx 0.45 < \lambda < 0.5$ (mean field). For the networks, one can see the mentioned range of evasion for $\approx 0.15 < \lambda < 0.4$ ($\beta=0.1$) and $\approx 0.2 < \lambda < 0.7$ ($\beta=0.3$) for the ER network (see Fig. \ref{fig4}). For the BA network, the corresponding ranges are $\approx 0.25 < \lambda < 0.4$ ($\beta=0.1$) and $\approx 0.5 < \lambda < 1.0$ ($\beta=0.3$) (see Fig. \ref{fig6}). The phase transition observed in the mean-field formulation of the model is an active-absorbing phase transition, and the predicted exponent for the order parameter is $1$ ($x_{3}^{*} \sim (\lambda-\lambda_{c})^{1}$), as in mean-field directed percolation, which is the prototype of a phase transition to an absorbing state \cite{dickman,hinrichsen}. It would be interesting to estimate other critical exponents of the model numerically, as well as to simulate it on regular $d$-dimensional lattices (square, cubic, for example) in order to obtain all the critical exponents. This is important to define precisely the universality class of the model, as well as its upper critical dimension. This extension is left for future work. Furthermore, one can also consider the inclusion of heterogeneities in the population, such as agents' convictions, mass-media effects, etc. \section*{Acknowledgments} The authors acknowledge financial support from the Brazilian funding agencies CNPq and CAPES.
\section{Squarefree polynomials and the geometry of numbers} \label{sec:GON} The main intermediate result on our way to Theorem \ref{thm:main} is the following theorem on squarefree integer polynomials. \begin{thm} \label{thm:squarefree} Choose a compact subset $\Sigma$ of $\R$, and choose a H\"{o}lder probability measure $\mu$ on $\Sigma$. Suppose \[\int \log|Q| d\mu \ge 0\] for every nonzero integer polynomial $Q$. Then there is a $C > 0$ determined from $\mu$ and $\Sigma$ so, for all degrees $n \ge 2$, there is a squarefree integer polynomial $Q_n$ of degree $n$ whose $n$-norm with respect to $(\mu, \Sigma)$ satisfies \[\norm{Q_n}_n \le n^{C \sqrt{n}}.\] \end{thm} The proof of this requires two different applications of the geometry of numbers. First, we have the following application of the flatness theorem to integer polynomials. This theorem will also be needed in the path from Theorem \ref{thm:squarefree} to Theorem \ref{thm:main}. \begin{thm} \label{thm:adjust} There is a positive real number $C$ so we have the following: Take $c$ and $n$ to be positive integers, and suppose we have a squarefree integer polynomial $P$ with factorization \[P(z) = c (z - \alpha_1) (z - \alpha_2) \dots (z - \alpha_n)\] over $\C$, so the $\alpha_i$ are all distinct. For $i \le n$, define a degree $n-1$ polynomial \[P_i(z) = P(z)/(z - \alpha_i).\] Given any real polynomial $Q(z)$ of degree at most $n - 1$, there then are complex numbers $\beta_1, \dots, \beta_n$ satisfying \[\sum_{i \le n} |\beta_i| \le C n \log 2n\] so \[Q(z) - \sum_{i \le n} \beta_i P_i(z)\] is an integer polynomial. \end{thm} \begin{proof} Take $K = \QQ(\alpha_1, \dots, \alpha_n)$. For $i$ in $\{1, 2, \dots, n\}$ and $j$ in $\{0, 1, \dots, n-1\}$, take $b_{ij}$ to be the degree $j$ coefficient of $P_i$. These coefficients are all algebraic integers. 
Otherwise, some $b_{ij}$ would have negative valuation at some prime of $K$, and this cannot happen by Gauss's lemma since discrete valuation rings are unique factorization domains. Note that a field automorphism of $K$ that takes $\alpha_i$ to $\alpha_k$ will also take $P_{i}$ to $P_{k}$. For any rational integers $d_0, \dots, d_{n-1}$, we may conclude that \[\left\{ \sum_{0 \le j \le n-1} d_j b_{ij} \,:\,\, 1 \le i \le n\right\}\] is the set of roots to a monic integer polynomial. In particular, if any element in this set is nonzero, there is some element in this set of absolute value at least one. But the $P_i$ are linearly independent, so these sums are all zero only if $d_j = 0$ for each $j$. Take $P^{\circ}_1, \dots, P^{\circ}_n$ to be a list of polynomials containing $P_i$ for every $i \le n$ for which $\alpha_i$ is real and containing \[\tfrac{1}{\sqrt{2}}\left(P_i + \overline{P_i}\right)\quad\text{and}\quad \tfrac{i}{\sqrt{2}} (P_i - \overline{P_i})\] for every $i \le n$ for which $\alpha_i$ is not real. For every sequence of rational integers $d_0, \dots, d_{n-1}$ that are not all zero, the above argument shows there is some $i \le n$ so that, if we write $P^{\circ}_i(z)$ in the form $b^{\circ}_{n-1}z^{n-1} + \dots + b^{\circ}_0$, we have \[\left| d_0b^{\circ}_0 + \dots + d_{n-1}b^{\circ}_{n-1}\right| \ge 1.\] We can conclude from the flatness theorem for simplices \cite[Corollary 2.5]{BLPS_flat99} that, for some absolute $C_0 > 0$, any translate of the convex body \[\mathcal{K} = \left\{ \sum_{i \le n} \beta_i P^{\circ}_i \,:\,\, \sum_{i \le n} |\beta_i| \le C_0 n \log 2n \right\}\] contains an integer polynomial. This suffices to prove the theorem. \end{proof} Combining this theorem with the Remez inequality gives the following useful corollary. \begin{cor} \label{cor:adjust} Fix a compact finite union of intervals $\Sigma$ and a H\"{o}lder probability measure $\mu$ with support contained in $\Sigma$. 
Choose an integer $n > 1$, a real polynomial $Q$ of degree at most $n-1$, and a squarefree integer polynomial $P$ of degree $n$. Then there is an integer polynomial $R$ of degree at most $n-1$ so that \[\norm{Q - R}_n \le n^C \cdot \norm{P}_n,\] where $C > 0$ just depends on $\Sigma$ and $\mu$. \end{cor} \begin{proof} Take $R$ to be the polynomial $Q - \sum_{i \le n}\beta_i P_i$ constructed in Theorem \ref{thm:adjust}. The upper bound on the norm of $Q - R$ follows from Lemma \ref{lem:Remez} and the bounds on the $\beta_i$. \end{proof} Our second use of the geometry of numbers is an application of Minkowski's results on successive minima to integer polynomials. The precedent for this result comes from Hilbert \cite{Hilbert94}, who proved a similar result in the case where $\Sigma$ is an interval and $\mu$ is the unweighted equilibrium distribution. \begin{prop} \label{prop:Mink} Choose an integer $n \ge 0$, distinct real numbers $\alpha_0, \dots, \alpha_n$, and positive real numbers $w_0, \dots, w_n$. Define $K$ to be the set of real polynomials $P$ of degree at most $n$ that satisfy \[w_i \cdot |P(\alpha_i)| \le 1 \quad\text{for all }\, 0 \le i \le n.\] For a nonnegative integer $i$ satisfying $i \le n$, take $\lambda_i$ to be the least real number so $\lambda_i K$ contains at least $i+1$ linearly independent integer polynomials. Then \[\frac{1}{(n+1)!}D \,\le\, \lambda_0 \cdot\lambda_1\cdot \dots \cdot\lambda_n \,\le\, D,\] where we have taken the notation \[D = \left(\prod_{0 \le i \le n} w_i \right)\cdot \left(\prod_{0 \le i < j \le n} |\alpha_i - \alpha_j|\right).\] \end{prop} \begin{proof} By identifying the real polynomial $a_nz^n + \dots + a_1z + a_0$ with the tuple $(a_0, a_1, \dots, a_n)$, we may think of $K$ as a convex, centrally-symmetric subset of $\R^{n+1}$. Minkowski's second theorem \cite[p. 
376]{Gruber07} then shows \[ \frac{2^{n+1}}{(n+1)!} \Vol(K)^{-1} \,\le\,\lambda_0 \cdot\lambda_1\cdot \dots \cdot\lambda_n\,\le\, 2^{n+1} \Vol(K)^{-1}.\] To prove the proposition, we now just need to estimate the volume of $K$. So define a polynomial $P(z) = (z - \alpha_0) \dots (z - \alpha_n)$, and consider the polynomials $P_i(z) = P(z)/(z - \alpha_i)$ for every $i$ satisfying $0 \le i \le n$. Then $K$ can alternatively be defined by \[K = \left\{ \sum_i \beta_i P_i \,:\,\, \left|\beta_i w_i P_i(\alpha_i) \right| \le 1\right\}.\] If we instead consider \[K_1 = \left\{ \sum_i \beta_i P_i \,:\,\, 0 \le \beta_i \le 1\right\},\] we see that \[\Vol K = 2^{n+1} \cdot \left(\prod_{0 \le i \le n} w_i^{-1} |P_i(\alpha_i)|^{-1}\right)\cdot \Vol K_1.\] The volume of $K_1$ is the absolute value of the determinant of the $(n+1) \times (n+1)$ matrix $M$ whose $ij^{th}$ entry is given by the degree $j -1$ coefficient of $P_{i-1}$. This is a matrix of elementary symmetric polynomials in $\alpha_0, \dots, \alpha_n$. Following the argument of \cite[p. 41-42]{Macd15} then gives \[\Vol K_1 = \prod_{0 \le i < j \le n} |\alpha_i - \alpha_j|,\] and we can conclude that $\Vol K = 2^{n+1} D^{-1}$. The result follows. \end{proof} \begin{prop} \label{prop:squeeze} Take $\mu$ to be a H\"{o}lder probability measure with support contained in a compact subset $\Sigma$ of $\R$. Choose nonnegative integers $n$ and $m$ and a nonzero integer polynomial $R$, and take $P_0, P_1, \dots, P_m$ to be linearly independent integer polynomials of degree at most $m$. Then, if $\int_{\Sigma} \log|R| d\mu$ is nonnegative, we have \[\prod_{i = 0}^m \norm{P_i R}_n \ge\frac{1}{(m+1)!} \cdot \exp\left(\left(mn - \tfrac{1}{2}m^2 + n - \tfrac{1}{2}m\right) \cdot I(\mu)\right).\] \end{prop} \begin{proof} Take $X_0, \dots, X_m$ to be random elements selected from $\Sigma$ according to the distribution $\mu$. 
From linearity of expectation, we see that the average value of \[\sum_{i \le m} \log |R(X_i)| + \sum_{i \le m} nU^{\mu}(X_i) + \sum_{ i < j \le m} \log|X_i - X_j|\] is \[(m+1) \cdot \left(\int_{\Sigma} (\log|R| + nU^{\mu}) d\mu \right)\,-\, \tfrac{1}{2}(m^2 + m)\cdot I(\mu) \,\ge\, \left(mn - \tfrac{1}{2}m^2 + n - \tfrac{1}{2}m\right)\cdot I(\mu) .\] In particular, there is some choice of $\alpha_0, \dots, \alpha_m$ in $\Sigma$ for which we have \[\prod_{i \le m} \left|R(\alpha_i)\right| \cdot e^{nU^{\mu}(\alpha_i)} \cdot \prod_{i < j \le m} |\alpha_i - \alpha_j| \ge \exp\left(\left(mn - \tfrac{1}{2}m^2 + n - \tfrac{1}{2}m\right)\cdot I(\mu)\right).\] The result then follows from Proposition \ref{prop:Mink} once we notice that \[\norm{RP}_n = \max_{x \in \Sigma} \left|R(x)P(x)e^{n U^{\mu}(x)}\right| \ge \max_{i \le m}\left| R(\alpha_i)P(\alpha_i)e^{n U^{\mu}(\alpha_i)}\right|.\] \end{proof} In the case where $R = 1$, $n = m$, and $\Sigma$ is a compact finite union of intervals, this bound is nearly sharp. Specifically, we have the following result. \begin{prop} \label{prop:Hilbert} Choose a compact finite union of intervals $\Sigma$, choose a H\"{o}lder probability measure $\mu$ with support contained in $\Sigma$, and choose an integer $n \ge 2$. Then there is a sequence $P_0, \dots, P_n$ of linearly independent integer polynomials of degree at most $n$ so \begin{equation} \label{eq:Hilbert} \prod_{i = 0}^n \norm{P_i}_n \le n^{Cn} \exp\left(\tfrac{1}{2} n^2 I(\mu)\right), \end{equation} where $C$ depends just on $\mu$ and $\Sigma$. \end{prop} \begin{proof} Define $\alpha_1, \dots, \alpha_{n+1}$ and $P_{n + 1, \mu}$ from $\mu$ as in Definition \ref{defn:approx}. Applying \eqref{eq:potential_approx} and Lemma \ref{lem:Remez} gives that there is $C_0 > 0$ determined from $\mu$ and $\Sigma$ so \[\norm{P_{n + 1, \mu}(z)/(z - \alpha_i)}_n \,\le\, \max_{ j \le n + 1\,}n^{C_0} e^{n U^{\mu}(\alpha_j)} \left|P_{n + 1, \mu}'(\alpha_j)\right|\] for all integers $i$ in $[1, n+1]$. 
The polynomials $P_{n + 1, \mu}(z)/(z - \alpha_i)$ give a basis for the set of real polynomials of degree at most $n$, so the triangle inequality gives \[\norm{P}_n \le (n+1) n^{C_0} \max_{i \le n+1}\left| P(\alpha_i)e^{n U^{\mu}(\alpha_i)}\right|,\] for every real polynomial $P$ of degree at most $n$. The result follows for a good choice of $C$ from Proposition \ref{prop:Mink}. \end{proof} In the context of Theorem \ref{thm:main}, the following consequence of Proposition \ref{prop:Hilbert} accounts for the fact that the discriminant of a squarefree integer polynomial is a nonzero integer. The assumption that $\mu$ is H\"{o}lder can be removed, as we will show in Proposition \ref{prop:general_energy}. \begin{cor} \label{cor:neg_energy} Suppose $\mu$ is a H\"{o}lder probability measure on a compact subset $\Sigma$ of $\R$. Then, if $\int_{\Sigma} \log|Q| d\mu$ is nonnegative for every nonzero integer polynomial $Q$, the energy of $\mu$ satisfies \[I(\mu) \le 0.\] \end{cor} \begin{proof} For $n \ge 2$, Proposition \ref{prop:Hilbert} shows there is a nonzero integer polynomial $P_n$ with $\norm{P_n}_n \le n^C \exp\left(\tfrac{1}{2} n I(\mu)\right)$. It follows that \[0 \,\le\, \int_{\Sigma} \log|P_n| d\mu \,\le\, C \log n + \tfrac{1}{2} n I(\mu) - n\int_{\Sigma} U^{\mu}d\mu \,=\, C\log n - \tfrac{1}{2} n I(\mu).\] So $C \log n \ge \tfrac{1}{2} n I(\mu)$ for all $n \ge 2$, and this implies that $I(\mu)$ is nonpositive. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:squarefree}] If $\Sigma'$ contains $\Sigma$, we see that the $n$-norm of a polynomial with respect to $(\mu, \Sigma)$ is at most the $n$-norm with respect to $(\mu, \Sigma')$. So it suffices to prove the theorem in the case where $\Sigma$ is a closed interval. Take $P_0, \dots, P_n$ to be the linearly independent polynomials appearing in Proposition \ref{prop:Hilbert}. 
Permuting if necessary, we assume that \[\norm{P_0}_n \le \norm{P_1}_n \le \dots \le \norm{P_n}_n.\] For $i \le n$, we define the usable degree up to $P_i$ to be the difference between the maximal value of $\deg P_j$ attained for $j \le i$ and the degree of the greatest common divisor of $P_0, \dots, P_i$. The usable degree up to $P_i$ is always at least $i$; for $P_0$, it is always $0$. Take $k$ maximal so the usable degree up to $P_k$ is less than $n - n^{1/2}$, and take $m$ to be this usable degree. From Lemma \ref{lem:squarefree_nocommon}, there are nonnegative integers $b_0, \dots, b_{k-1}$ no larger than $n(k+3)$ so \[ P_k + b_{k-1}P_{k-1} + \dots + b_0 P_0\] takes the form $PR$, where $P$ divides $P_i$ for $i \le k$ and $R$ is a squarefree polynomial of degree $m$. Applying Theorem \ref{thm:adjust} with the squarefree polynomial $R$ to the polynomials \[\tfrac{1}{2}, \,\tfrac{1}{2}z, \dots, \,\tfrac{1}{2}z^m\] gives a sequence of linearly independent half-integer polynomials $\tfrac{1}{2}R_1, \dots, \tfrac{1}{2}R_m$ of degree at most $m-1$. Applying Lemma \ref{lem:Remez} to $RP$ then shows that there is some $C_0 > 0$ depending just on $\Sigma$ and $\mu$ so \[\norm{PR_i}_n \le n^{C_0} \norm{PR}_n\quad\text{for }1 \le i \le m.\] There is a subset $S$ of $\{1, \dots, m\}$ of cardinality $m-k$ so \[\{P_0, \dots, P_k\} \cup \{PR_i\,:\,\, i \in S\}\] is a basis for the real vector space of polynomials of degree at most $m + \deg P$ that are divisible by $P$. So Proposition \ref{prop:squeeze} gives \begin{align*} \prod_{i=0}^{m} \norm{P_i}_n \,&\ge \, n^{-C_1n}\prod_{i = 0}^k \norm{P_i}_n \cdot \prod_{i \in S} \norm{PR_i}_n \\ \,&\ge\, n^{-2C_1 n} \cdot \exp\left(\left(\tfrac{1}{2}n^2 - \tfrac{1}{2}(n-m)^2\right) \cdot I(\mu)\right) \end{align*} for some $C_1 > 0$ depending just on $\Sigma$ and $\mu$. 
Combining \eqref{eq:Hilbert} with this inequality and applying Corollary \ref{cor:neg_energy} gives \[\prod_{i = m+1}^n \norm{P_i}_n \,\le\, n^{C_2n} \exp\left( \tfrac{1}{2}(n-m)^2 \cdot I(\mu)\right) \,\le\, n^{C_2n},\] where $C_2 > 0$ depends just on $\Sigma$ and $\mu$. In particular, since $n-m > n^{1/2}$, \[\norm{P_{m+1}}_n \le n^{C_2 \sqrt{n}}.\] By our assumption, the usable degree up to $P_{m+1}$ is at least $n - n^{1/2}$. Apply Lemma \ref{lem:squarefree_nocommon} to find an integer polynomial \[P^{\circ}R^{\circ} = P_{m+1} + b^{\circ}_{m} P_{m} + \dots + b^{\circ}_0 P_0\] with $R^{\circ}$ squarefree and of degree at least $n - n^{1/2}$. Iterating Lemma \ref{lem:Remez} on $P^{\circ}R^{\circ}$ at most $n^{1/2}$ times yields \[\norm{R^{\circ}}_n \le n^{C_3 \sqrt{n}},\] where $C_3 > 0$ depends just on $\Sigma$ and $\mu$. To arrive at our final squarefree polynomial $Q_n$, we choose a subset $S$ of $\{1, \dots, n\}$ of cardinality $n - \deg R^{\circ}$ so $R^{\circ}(i) \ne 0$ for each $i \in S$, and we define \[Q_n(z) = R^{\circ}(z) \cdot \prod_{i \in S} (z - i).\] This polynomial has norm bounded by $n^{C \sqrt{n}}$ for a good choice of $C$, giving the theorem. \end{proof} \section{Finding real polynomials to adjust} \label{sec:adjust} Considered as measures on the Riemann sphere $\C^{\infty}$, the associated counting measures $\mu_{Q_n}$ of the polynomials constructed in Theorem \ref{thm:squarefree} will \weaks converge to $\mu$ \cite[Theorem III.4.2]{SaTo97}. This is one condition required of the polynomials in Theorem \ref{thm:main}. If $\Sigma$ is a compact finite union of intervals, we can use Corollary \ref{cor:adjust} to produce monic polynomials whose counting measures converge to $\mu$. Specifically, take $P_{n, \mu}$ to be the degree-$n$ approximating polynomial to $\mu$. 
Corollary \ref{cor:adjust} tells us there is a monic integer polynomial $P_n$ so \[\norm{P_{n, \mu} - P_n}_n \le n^{C \sqrt{n}}.\] Using the Eisenstein condition, we can force the polynomials $P_n$ to be irreducible. The measures $\mu_{P_n}$ still converge to $\mu$, so the polynomials $P_1, P_2, \dots$ obey all but one condition of Theorem \ref{thm:main}. But this last condition, that each $P_n$ have all its roots contained in $\Sigma$, is not so easily disposed of. Typically, we would show a real polynomial $P$ has a root in the range $(x, \,y)$ by showing that $P(x)$ and $P(y)$ have different signs. This argument does not go through for the $P_n$ constructed above. After all, while we can say there is some $x$ in every interval $(\alpha_{i-1}, \,\alpha_i)$ so \[|P_{n, \mu}(x)| \ge n^{-C} e^{-nU^{\mu}(x)},\] the best bound we have for $P_{n, \mu} - P_n$ at this $x$ is \[\left|P_{n, \mu}(x) - P_n(x)\right| \le n^{C\sqrt{n}} e^{-nU^{\mu}(x)},\] so $P_n(x)$ has no reason to have the same sign as $P_{n, \mu}(x)$. We get around this problem with the following trick. \begin{prop} \label{prop:early_ints} Choose a compact finite union of intervals $\Sigma$. Taking $\kappa$ to be the capacity of $\Sigma$, we assume $\kappa > 1$. Then there is a positive integer $D_0$ and positive real $C$ so we have the following: Choose positive integers $m$ and $n$ satisfying $m < n < \kappa^{m/2}$, with $m$ divisible by $D_0$ and greater than $C$. Choose any monic real polynomial $P$ of degree $n - m$. Then there is a monic real polynomial $Q$ of degree $m$ satisfying the following three conditions: \begin{enumerate} \item The degree $i$ coefficient of the product $PQ$ is an even integer for every integer $i$ in $[n-m, n-1]$. \item Take $X$ to be the set of roots of $Q$. Then $X$ is a subset of $\Sigma$, and we have \[ \kappa^m n^{-C} \cdot \min_{\,\alpha \in X\,} |x - \alpha|\, \le\, |Q(x)| \,\le\, \kappa^m n^C\] for all $x \in \Sigma$. 
\item Given any root $\alpha$ of $Q$, and given any $\alpha'$ that is either a root of $P$ or a boundary point of $\Sigma$, we have \[n^{-C} \le |\alpha - \alpha'|.\] \end{enumerate} \end{prop} We start by showing that this proposition suffices to prove our main theorem. We cannot apply this proposition directly to the polynomials $P_{n, \mu}$, so we ``prune'' them first, per the following definition. \begin{defn} \label{defn:prune} Take $\Sigma$ to be a compact finite union of intervals, take $n$ to be a positive integer, and take $P$ to be a monic squarefree complex polynomial. Take $S$ to be the set of roots of $P$. We assume there is a cardinality $n$ subset $\widetilde{S}$ of $S \cap \Sigma$ so that, for every connected component $I$ of $\Sigma$ that meets $S$, the intersection $I \cap \widetilde{S}$ contains neither the least nor greatest element of $I \cap S$. Choosing any $\widetilde{S}$ satisfying this condition, we then call the polynomial \[\widetilde{P}(z) = \prod_{\alpha \in \widetilde{S}} (z - \alpha)\] a degree $n$ pruned polynomial of $P$. \end{defn} \subsection{Proof of Theorem \ref{thm:main}} By Proposition \ref{prop:limit_Holder}, we may assume without loss of generality that $\mu$ is H\"{o}lder and that $\Sigma$ is a compact finite union of intervals. Take $k_0$ to be the number of components of $\Sigma$. For every $k > 1$, take $\widetilde{P}_k$ to be a degree $k$ pruned polynomial of the approximating polynomial $P_{k + 2k_0, \,\mu}$. The $k$-norm of this polynomial can be bounded using Lemma \ref{lem:Remez}. Fix some sufficiently large $C_0 > 0$, and take $D_0$ as in Proposition \ref{prop:early_ints}. Suppose we wish to construct an irreducible polynomial of sufficiently large degree $n$. Take $m$ to be the greatest multiple of $D_0$ less than $C_0 \sqrt{n} \log n$, and take $Q$ to be the polynomial constructed in Proposition \ref{prop:early_ints} from $\widetilde{P}_{n - m}$. 
Every root of $Q \widetilde{P}_{n-m}$ is contained in an open subinterval $(x_0, x_1)$ of $\Sigma$ so that $(x_0, x_1)$ contains no other root of $Q \widetilde{P}_{n-m}$ and so \begin{equation} \label{eq:QP_roots} \left|Q(x_i)\widetilde{P}_{n-m}(x_i)\right|e^{n U^{\mu}(x_i)} \,\ge \,\kappa^{C_0 \sqrt{n} \log n} n^{-C_2}\quad\text{for } \, i = 0, 1, \end{equation} where $C_2$ depends on $\mu$, $\Sigma$, and $D_0$. Writing $Q \widetilde{P}_{n-m}$ in the form $z^n + a_{n-1} z^{n-1} + \dots + a_0$, we now apply Corollary \ref{cor:adjust} to the real polynomial \[\tfrac{1}{4}\left(a_{n-m -1}z^{n- m -1} + \dots + a_0\right) + \tfrac{1}{2}\] with the squarefree integer polynomial of degree $n-m$ constructed in Theorem \ref{thm:squarefree}. Calling the resulting integer polynomial $R$, take \[P_n = z^n + a_{n-1}z^{n-1} + \dots + a_{n-m}z^{n-m} + 4R - 2.\] Applying Eisenstein's criterion at the prime $2$, we see this polynomial is irreducible. It also satisfies \[\norm{P_n - Q\widetilde{P}_{n-m}}_n \le n^{C_1\sqrt{n}},\] where $C_1$ just depends on $\Sigma$ and $\mu$. Assuming $C_0 \log \kappa$ is larger than $C_1$, \eqref{eq:QP_roots} implies that $P_n$ has all roots contained in $\Sigma$ for sufficiently large $n$. We still find that there is $C > 0$ so \[\norm{P_n}_n \le n^{C \sqrt{n}}\] for all $n$, so the $\mu_{P_n}$ \weaks converge to $\mu$. This was the last condition to check. \qed \subsection{Proof of Proposition \ref{prop:early_ints}} For the rest of this section, we fix $\Sigma$ of capacity $\kappa > 1$ as in the proposition statement, and we take $k_0$ to be the number of components of $\Sigma$. The proof of Proposition \ref{prop:early_ints} takes a similar approach to Robinson's proof \cite{Robi64, Serre18} that infinitely many algebraic integers have all conjugates inside $\Sigma$. The main strategy of both our proof and Robinson's proof is to take a Chebyshev polynomial for $\Sigma$ and nudge its coefficients to satisfy the conditions we want. 
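As a concrete illustration of the Chebyshev polynomials entering this strategy (a numerical sketch only, not part of the proof): for the single interval $[-2, 2]$, the monic Chebyshev polynomial is $T_r(x) = 2\cos(r \arccos(x/2))$, which equioscillates between $\pm 2$ at $r + 1$ points of the interval, matching the alternation property recorded in Lemma \ref{lem:Chebyshev} below. Note that $[-2, 2]$ has capacity $1$, whereas the proposition assumes $\kappa > 1$; the example is purely illustrative.

```python
import numpy as np

# Monic Chebyshev polynomial of the interval [-2, 2]:
# T_r(x) = 2*cos(r*arccos(x/2)), with sup norm 2 on the interval.
r = 6

def T(x):
    return 2 * np.cos(r * np.arccos(np.clip(x / 2, -1.0, 1.0)))

# The r + 1 alternation points x_0 < ... < x_r of the interval.
x = 2 * np.cos(np.pi * (r - np.arange(r + 1)) / r)
vals = T(x)
print(np.round(vals, 6))  # alternates between -2 and 2
```

On a dense grid one can also check that $|T_r|$ never exceeds $2$ on the interval, in line with the bound $\norm{T_r}_0 \le C_0 \kappa^r$ when $\kappa = 1$.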
In a convention that agrees with our definition of $n$-norm, we will use the notation $\norm{P}_0$ as shorthand for $\max_{x \in \Sigma\,} |P(x)|$. \begin{lem} \label{lem:Chebyshev} For $r \ge 0$, there is a unique real monic degree $r$ polynomial $T_r$ for which there are $r+1$ points $x_0 < x_1 < \dots < x_r$ in $\Sigma$ so that \[T_r(x_i) = (-1)^{r - i} \norm{T_r}_0 \quad\text{for all } \, i \le r.\] This polynomial satisfies \[ C_0^{-1} \cdot \kappa^r \le \norm{T_r}_0 \le C_0 \cdot \kappa^r,\] where $C_0 > 1$ depends just on $\Sigma$. \end{lem} \begin{proof} This characterization of the Chebyshev polynomials for $\Sigma$ is known as the alternation theorem and can be found in \cite[Theorem 1.1]{CSZ_Cheb17}. The bounds on $\norm{T_r}_0$ follow from asymptotic work of Widom \cite[Theorem 11.5]{Widom69}; see also \cite{Totik09}. \end{proof} With $T_r$ and $x_0 < \dots < x_r$ as in this lemma, we see that $T_r$ must have a unique zero in each interval $(x_{i-1}, x_i)$. In particular, at most $k_0$ of its zeros can lie outside $\Sigma$. We can thus prune the polynomials as follows. \begin{notat} \label{notat:prune_T} For $r \ge 1$, take $\widetilde{T}_r$ to be a pruned polynomial for $T_{r + 3k_0}$ on $\Sigma$. Given this polynomial, there is a sequence $x_1 < x_2 < \dots < x_{r + k_0}$ of elements in $\Sigma$ so \begin{equation} \label{eq:pruned_T} \left|\widetilde{T}_r(x_k)\right| \ge C_1^{-1} \cdot \kappa^{r} \end{equation} and so that $(x_i, x_{i+1})$ either does not lie entirely in $\Sigma$ and contains no root of $\widetilde{T}_r$, or lies entirely in $\Sigma$ and contains a unique root of $\widetilde{T}_r$. Here, the constant $C_1 > 0$ depends just on $\Sigma$. \end{notat} We now take \[D_0 = \left\lceil \frac{4C_1C_0}{\kappa - 1}\right\rceil.\] With this setting, we can prove the following lemma, which follows the basic idea of \cite[Construction 7.3]{BCLPS21}. 
\begin{lem} \label{lem:Bclps} Take $m$ and $n$ as in Proposition \ref{prop:early_ints}, and take $r$ to be the integer $m/D_0$. Take $x_1 < \dots < x_{r + k_0}$ to be the sequence of points constructed from $\widetilde{T}_r$ in Notation \ref{notat:prune_T}. Given $P$ of degree $n - m$ as in Proposition \ref{prop:early_ints}, there is a real monic polynomial $T$ of degree $r$ so that the degree $i$ coefficient of \[T^{D_0} P\] is an even integer for $n - r \le i < n$, and so that \[T(x_i)\big/\widetilde{T}_r(x_i) \ge 1/2\] for $1 \le i \le r+ k_0$. \end{lem} \begin{proof} Define a sequence of coefficients $\lambda_{r-1}, \lambda_{r-2}, \dots, \lambda_0$ as follows: \begin{itemize} \item Take $\lambda_{r-1} \ge 0$ minimal so \[\left(\widetilde{T}_r + \lambda_{r-1} T_{r-1}\right)^{D_0} P\] has an even coefficient at degree $n-1$. \item Take $\lambda_{r-2} \ge 0$ minimal so \[\left(\widetilde{T}_r + \lambda_{r-1} T_{r-1} + \lambda_{r-2} T_{r-2}\right)^{D_0} P\] has an even coefficient at degree $n-2$. Note that this does not affect the coefficient at degree $n-1$ of this product. \item Repeat this process for $\lambda_{r-3}, \lambda_{r-4}, \dots, \lambda_0$. \end{itemize} We then take \[T = \widetilde{T}_r + \lambda_{r-1} T_{r-1} + \dots + \lambda_1 T_1 + \lambda_0,\] and we see that $T^{D_0}P$ has even coefficients in degrees between $n-r$ and $n-1$. The $\lambda_i$ are bounded by $2/D_0$. Applying the triangle inequality to \eqref{eq:pruned_T} and Lemma \ref{lem:Chebyshev} gives \[T(x_j)\big/\widetilde{T}_r(x_j) \ge 1 - \sum_{i = 1}^r \frac{2C_0C_1}{D_0\kappa^i},\] for $1 \le j \le r+k_0$. This difference is at least $1/2$ by our choice of $D_0$. \end{proof} Assuming $D_0 > 1$, the natural next step is to choose minimal $\epsilon_{m-r-1} \ge 0$ so \[\left(T^{D_0} - \epsilon_{m-r-1} T_{m-r-1}\right) P\] has even coefficient at degree $n - r -1$. However, this adjustment will likely introduce complex roots to the polynomial being constructed. 
Instead of $T^{D_0}$, it is safer to start with \begin{equation} \label{eq:stepped} Q_0 = \prod_{k = 0}^{D_0 - 1} \left(T - k \cdot C_2^{-1} \kappa^r\right), \end{equation} where $C_2 = 4C_1D_0$. If $(x_i, x_{i+1})$ contains a root of $T$ for a given $i < r + k_0$, we see from the intermediate value theorem that $Q_0$ has at least $D_0$ roots in $(x_i, x_{i+1})$. This accounts for all the roots of $Q_0$, so there are exactly $D_0$ roots in such an interval. Again invoking the intermediate value theorem, we find there is a sequence $y_1 < y_2 < \dots < y_{m + k_0}$ of points in $\Sigma$ so each $(y_i, y_{i+1})$ either contains a unique root of $Q_0$ and is contained in $\Sigma$, or contains no root of $Q_0$ and is not contained in $\Sigma$, and so \[|Q_0(y_i)| \ge C_3^{-1} \cdot \kappa^m\quad\text{for } i \le m+k_0,\] where $C_3$ depends just on $\Sigma$, $C_0$, and $C_1$. This polynomial also satisfies $\norm{Q_0}_0 \le n^{C_4} \kappa^m$, where $C_4$ depends on $\Sigma$, $C_0$, and $C_1$. With this setup, we can finish the proof of Proposition \ref{prop:early_ints}. Without loss of generality, we assume that $r = m/D_0$ is large enough so \[C_3^{-1} \ge \frac{4C_0\kappa^{-r}}{1 - \kappa^{-1}}.\] We define a sequence $\epsilon_{m-r}, \epsilon_{m-r-1}, \dots, \epsilon_0$ as follows: \begin{itemize} \item Take $\epsilon_{m-r} \ge 0$ minimal so \[Q_0 + \epsilon_{m-r} T_{m-r}\] has an even degree $m-r$ coefficient. \item Take $\epsilon_{m-r - 1} \ge 0$ minimal so \[Q_0 + \epsilon_{m-r} T_{m-r} + \epsilon_{m-r-1}T_{m-r-1}\] has an even degree $m-r-1$ coefficient. \item Repeat this process for $\epsilon_{m-r-2}, \dots, \epsilon_0$. \end{itemize} Then we take \[Q_1 = Q_0 + \epsilon_{m-r} T_{m-r} + \dots + \epsilon_0 T_0.\] The $\epsilon_i$ are bounded by $2$, so our assumption on $r$ forces \[Q_1(y_i)\big/Q_0(y_i) \ge 1/2\] for $1 \le i \le m + k_0$. We also see that $Q_1P$ has the even integer coefficients required for Proposition \ref{prop:early_ints}. 
Suppose the interval $(y_{i-1}, \,y_i)$ contains the root $\alpha$ of $Q_1$. Since its roots are all simple and real, the polynomial $Q_1(x)/(x - \alpha)$ has zero derivative exactly once between any two of its adjacent roots, and it does not attain a zero derivative before its first root or after its final root. As a consequence, \[\min\big(|Q_1(y_{i-1})|, \,|Q_1(y_i)|\big) \cdot |x - \alpha| \, \le\, |Q_1(x)| \cdot\left(y_i - y_{i-1}\right)\] for all $x$ in $(y_{i-1}, \,y_i)$. If $(y_{i-1}, \,y_i)$ does not contain a root of $Q_1$, we instead have \[\min\big(|Q_1(y_{i-1})|, \,|Q_1(y_i)|\big) \, \le\, |Q_1(x)|\] for $x$ in $(y_{i-1}, \,y_i)$. Similar inequalities hold for $x > y_{m + k_0}$ and $x < y_1$, and we find that $Q_1$ obeys the second requirement of Proposition \ref{prop:early_ints}. So we just need to adjust $Q_1$ to a polynomial $Q$ whose roots are far from the roots of $P$ and the boundary points of $\Sigma$. We take \[ a = 2\left\lfloor \frac{\kappa^m}{8C_3m(k_0 + n)}\right\rfloor.\] From the assumption $\kappa^{m/2} >n$, we find that this is positive for sufficiently large $n$, and we may restrict to such $n$ without loss of generality. Given $k$ satisfying $0 \le k \le m(k_0 + n)$, we see that \[Q = Q_1 + ka\] satisfies \[Q(y_i)\big/Q_0(y_i) \ge 1/4\] for $1 \le i \le m +k_0$. At the same time, the magnitude of the derivative of $Q_1$ can be bounded on $\Sigma$ by $n^{C_5} \kappa^m$, where $C_5$ depends on $\Sigma$, $C_0$, $C_1$, and $C_3$. As a result, if $k_1 \ne k_2$ are integers in the range $[0, m(k_0 + n)]$, the $i^{th}$ largest roots of \[Q_1 + k_1a \quad\text{and}\quad Q_1 + k_2a\] are separated by at least $n^{-C_6}$, where $C_6 > 0$ depends just on $\Sigma$ and the prior $C_i$. By the pigeonhole principle and triangle inequality, we can choose an integer $k$ in the range $[0, m(k_0 + n)]$ so the distance between any root of $Q$ and any root of $P$ or boundary point of $\Sigma$ is at least $\tfrac{1}{2}n^{-C_6}$. 
The polynomial $Q$ still obeys the first and second conditions of the proposition, so we are done. \qed \section{Introduction} \label{sec:intro} Given an algebraic integer $\alpha \in \C$, take $\alpha_1 = \alpha,\, \alpha_2, \dots, \,\alpha_n$ to be the complex roots of the minimal polynomial of $\alpha$ over $\QQ$. The degree and trace of $\alpha$ are then defined by \[\deg(\alpha) = n \,\,\,\text{ and }\,\, \tr(\alpha) = \alpha_1 + \alpha_2 + \dots + \alpha_n.\] We call $\alpha$ totally positive if the conjugates $\alpha_1, \dots, \alpha_n$ are all positive real numbers. It has long been known that there are infinitely many totally positive algebraic integers $\alpha$ with $\tr(\alpha)/\deg(\alpha)$ at most $2$, with Siegel citing the family \begin{equation} \label{eq:tracetwoeasy} \left\{ 4 \cos^2(\pi/p) \,:\,\, p \text{ an odd prime}\right\} \end{equation} as an example \cite{Sieg45}. At the other end, the totally positive algebraic integers satisfying $\tr(\alpha) < 1.793145\cdot \deg(\alpha)$ have been determined explicitly; there are a total of $14$ such algebraic integers \cite{WaWuWu21}. This last result fits into the framework of the following problem, which was codified by Borwein in \cite{Borw02}. \begin{ssstproblem} Fix $\epsilon > 0$. Show that there are finitely many totally positive algebraic integers satisfying \begin{equation} \label{eq:trace_problem} \tr(\alpha) \le (2 - \epsilon)\deg(\alpha), \end{equation} and explicitly compute this list of exceptions if possible. \end{ssstproblem} This problem statement reflects the general consensus that there are only finitely many totally positive algebraic integers satisfying \eqref{eq:trace_problem} for any fixed positive $\epsilon$. Our first result is that this turns out not to be the case. \begin{cor} \label{cor:Serre} The inequality \[\tr(\alpha) < 1.8984\cdot\deg(\alpha)\] holds for infinitely many totally positive algebraic integers $\alpha$. 
\end{cor} The constant $1.8984$ hints at the method underlying our work. To explain this, we need to recall the approach to the trace problem pioneered by Smyth in \cite{Smyth84a}. For a given $\lambda > 0$, suppose that one has found a finite sequence $a_1, \dots, a_N$ of positive numbers and a finite sequence $Q_1, \dots, Q_N$ of nonzero integer polynomials so \begin{equation} \label{eq:Smyth} x \ge \lambda + \sum_{k \le N} a_k \log|Q_k(x)| \end{equation} holds for all positive real numbers $x$. Given a totally positive algebraic integer $\alpha$ with conjugates $\alpha_1 = \alpha,\, \alpha_2, \dots, \,\alpha_n$, we may sum this inequality over the $\alpha_i$. This yields \[\sum_{i \le n} \alpha_i \ge \lambda n + \sum_{k \le N}a_k \log\left|\prod_{i \le n} Q_k(\alpha_i)\right|.\] The product $\prod_{i \le n} Q_k(\alpha_i)$ is recognizable as the resultant $\res(P, Q_k)$, where $P$ is the minimal polynomial of $\alpha$. In particular, if $\alpha$ is not a root of some $Q_k$, $\res(P, Q_k)$ is a nonzero integer for each $k$, and we are left with \[\tr(\alpha) \ge \lambda \deg(\alpha).\] Smyth's original article gives an instance of \eqref{eq:Smyth} with $\lambda = 1.7719$ and about $15$ auxiliary polynomials. The current state of the art \cite{WaWuWu21} gives an instance with $\lambda = 1.793145$ and $130$ auxiliary polynomials. It was observed by Smyth \cite{Smyth99} and Serre \cite[Appendix B]{AgPe08} that there were values of $\lambda$ less than $2$ for which \eqref{eq:Smyth} would not hold for any choice of $Q_1, \dots, Q_N$ and $a_1, \dots, a_N$. Smyth wrote that this suggested ``that perhaps $2$ is not in fact the smallest limit point [of the ratios $\tr(\alpha)/\deg(\alpha)$]'' \cite[p. 316]{Smyth99}, but these results are more often viewed as evidence of the limitations of Smyth's method. Our main result for the trace problem is that Smyth's optimistic interpretation is correct. \begin{cor} \label{cor:Smyth_limit} Choose $\lambda > 0$. 
Suppose that, for every finite sequence $Q_1, \dots, Q_N$ of nonzero integer polynomials and every finite sequence $a_1, \dots, a_N$ of positive numbers, the inequality \[x \le \lambda + \sum_{k \le N} a_k \log|Q_k(x)|\] holds for some $x > 0$. Then, for any $\epsilon > 0$, there are infinitely many totally positive algebraic integers $\alpha$ satisfying \[\tr(\alpha) < (\lambda + \epsilon) \cdot \deg(\alpha).\] \end{cor} Serre showed that the condition of this result holds for $\lambda = 1.898302\dots$, so Corollary \ref{cor:Serre} follows from Corollary \ref{cor:Smyth_limit}. \begin{rmk} In Section \ref{ssec:bal}, we will show that Serre's result does not give the optimal upper bound for the trace problem, so the constant in Corollary \ref{cor:Serre} can be decreased somewhat. Indeed, ongoing computational work of the author suggests that the condition of Corollary \ref{cor:Smyth_limit} holds for $\lambda = 1.81$. \end{rmk} Corollary \ref{cor:Smyth_limit} is a special case of Corollary \ref{cor:general_Smyth}, which shows that Smyth's method suffices for a wide variety of optimization problems for monic integer polynomials. Since Honda--Tate theory reduces the study of abelian varieties over finite fields to the study of certain algebraic integers \cite{Honda68}, we can use Corollary \ref{cor:general_Smyth} to construct abelian varieties over finite fields with extreme point counts; see Proposition \ref{prop:Honda}. One consequence of this is the following result, which is closely related to Corollary \ref{cor:Serre}. \begin{cor} \label{cor:Serre2} There is some $C > 0$ so we have the following: Choose a square prime power $q$ no smaller than $C$.
Then there are infinitely many $\FFF_q$-simple abelian varieties $A$ over $\FFF_q$ satisfying \[\#A(\FFF_q) \,\ge\, \left(q + 2\sqrt{q} - 0.8984\right)^{\dim A}\] and infinitely many more satisfying \[\#A(\FFF_q)\,\le\, \left(q - 2\sqrt{q} + 2.8984\right)^{\dim A}.\] \end{cor} This improves on the prior records for this problem found in \cite{Kade21} and \cite{BCLPS21}. Further optimization work could improve this result and extend it to non-square $q$. The constants replacing $0.8984$ and $2.8984$ in such a sharpened generalization would depend on the fractional part of $2\sqrt{q}$. Corollary \ref{cor:general_Smyth} is a consequence of the main theorem of this paper, Theorem \ref{thm:main}. Subject to some restrictions, this theorem characterizes the distributions on $\R$ that are attainable as the limit of the distribution of conjugates of some sequence of totally real algebraic integers $\alpha_1, \alpha_2, \dots$, in the sense considered by Serre in \cite{Serre18}. In line with the above vindication of Smyth's method, we find that the only obstruction to obtaining a given distribution is the integrality of the resultant of integer polynomials. We introduce some notation to make this precise. \begin{defn} Throughout this paper, the term \emph{Borel measure}, or just \emph{measure}, will denote a finite positive measure on the $\sigma$-algebra of Borel sets of $\C$. For any complex number $\alpha$, take $\delta_{\alpha}$ to be the Borel measure defined by \[\delta_{\alpha}(Y) = \begin{cases} 1 &\text{ if } \alpha \in Y \\ 0 & \text{ otherwise.}\end{cases}\] Given a complex polynomial $P(z) = a_n \cdot \prod_{i \le n} (z - \alpha_i)$ of degree $n \ge 1$, we follow \cite{Smyth84a} and \cite[(1.2.1)]{Serre18} to define the associated counting measure $\mu_P$ by \[\mu_P = \tfrac{1}{n}( \delta_{\alpha_1} + \dots + \delta_{\alpha_n}).\] This is a probability measure supported on $\{\alpha_1, \dots, \alpha_n\}$. 
Given a compact subset $\Sigma$ of $\C$, we will endow the set of Borel measures whose support is contained in $\Sigma$ with the \weaks topology, where an infinite sequence $\mu_1, \mu_2, \dots$ converges to $\mu$ if and only if \[\lim_{k \to \infty} \int_{\Sigma} f d\mu_k = \int_{\Sigma} f d\mu\] for every continuous function $f: \Sigma \to \R$. \end{defn} \begin{thm} \label{thm:main} Take $\Sigma$ to be a compact subset of $\R$ with at most countably many components. We assume that $\Sigma$ has capacity strictly larger than $1$ (see Definition \ref{defn:capacity}). Then, for any Borel probability measure $\mu$ with support contained in $\Sigma$, the following two conditions are equivalent: \begin{enumerate} \item For every nonzero integer polynomial $Q$, \[\int_{\Sigma} \log|Q(x)| d\mu(x) \ge 0.\] \item There is an infinite sequence of distinct irreducible monic integer polynomials $P_1, P_2, \dots$ so the support of $\mu_{P_k}$ is contained in $\Sigma$ for every $k$ and so $\mu_{P_1}, \mu_{P_2}, \dots$ has \weaks limit $\mu$. \end{enumerate} \end{thm} The proof that condition (2) implies condition (1) in this theorem was given by Serre as \cite[Lemma 1.3.4]{Serre18}. Given a nonzero integer polynomial $Q$, Serre notes that $\log|Q|$ is upper semicontinuous on $\Sigma$, and concludes that \[\int_{\Sigma} \log|Q| d\mu \ge \limsup_{k \to \infty} \int_{\Sigma} \log|Q| d\mu_{P_k} = \limsup_{k \to \infty} \frac{\log |\res(P_k, Q)|}{\deg P_k} \ge 0,\] with the first inequality following from the monotone convergence theorem. The proof of the converse, that (1) implies (2), is new and accounts for most of the length of this paper. In the final section, after finishing the proof of this theorem, we will show that it implies the other corollaries mentioned so far. This last section will also give a general construction for measures satisfying the equivalent conditions of Theorem \ref{thm:main}. 
This construction uses the potential-theoretic technique of balayage and usually requires good estimates for certain nonelementary integrals. Such estimates can be made rigorous using a computer, allowing us to optimize the measure for whichever problem we are studying. However, we have decided to limit the scope of this paper to results that can be proved without such computations. It would be interesting to remove some of the restrictions placed on $\Sigma$ in Theorem \ref{thm:main}. It is straightforward to find totally disconnected compact subsets of $\R$ of capacity greater than $1$ that contain no algebraic number, so the restriction to $\Sigma$ with countably many components is probably inevitable. The restriction to subsets of $\R$ avoids annoying complex sets like $\{z \in \C\,:\,\,|z| = 3/2\}$, but it seems reasonable to expect the theorem to hold for e.g$.$ closures of open sets in $\C$, with \cite{FeSz55} making some progress in this direction. We note that the study of Weil numbers reduces to the study of totally real algebraic integers whose conjugates lie in certain intervals, so a version of Theorem \ref{thm:main} for complex $\Sigma$ is not necessary to study such numbers. Removing the restriction to $\Sigma$ of capacity strictly greater than $1$ would be of greatest interest. From Proposition \ref{prop:general_energy}, we know this would only be interesting for $\Sigma$ of capacity exactly $1$, and that the only measure to consider would be the unweighted equilibrium measure. The second condition of Theorem \ref{thm:main} is known for some special $(\Sigma, \mu)$ of this form \cite{Robi64}, but it remains unknown whether this condition holds more generally. \subsection*{Acknowledgments} We would like to thank Jean--Pierre Serre and Chris Smyth for their comments on this paper and for providing their relevant correspondence.
We would also like to thank Frank Calegari, Brian Conrad, Pol van Hoften, Borys Kadets, Wanlin Li, and Bjorn Poonen for useful feedback on this project. This research was partially conducted during the period the author served as a Clay Research Fellow. \section{Limits of measures and balayage} \label{sec:limits} Our first goal is to prove Proposition \ref{prop:limit_Holder}, which reduces the proof of Theorem \ref{thm:main} to its proof for H\"{o}lder measures on a compact finite union of intervals. As part of this, the following measure will be important. \begin{notat} \label{notat:balbump} Given real numbers $b> a> 0$, we define $\nu_{[a, b]}$ to be the measure supported on $[a, b]$ given by \[d\nu_{[a, b]}(t) = \frac{dt\sqrt{ab}}{\pi t \sqrt{(b - t)(t - a)}}.\] This is a H\"{o}lder probability measure. For all $z \in \C$, we have \begin{equation} \label{eq:first_balay} U^{\nu_{[a, b]}}(z) \le - \log|z| + \log\left(\frac{a + 2\sqrt{ab} + b}{b - a}\right), \end{equation} with equality for $z$ in $[a, b]$ (see \cite[II.4]{SaTo97}). In particular, for $\epsilon$ in $(0, 1/2)$, take $\nu_{\epsilon}$ to be $\nu_{[\epsilon^2, \epsilon]}$. Given any other probability measure $\mu$ with support contained in a compact subset $\Sigma$ of $\R$, we can consider the convolution measure $\mu * \nu_{\epsilon}$. This is a H\"{o}lder probability measure with support contained in \[\big\{x \in \R\,:\,\, [x - \epsilon, x - \epsilon^2] \cap \Sigma \ne \emptyset\big\}.\] As $\epsilon$ tends to $0$, the measure $\mu * \nu_{\epsilon}$ tends to $\mu$ in the \weaks topology. We also see that there is a $C > 0$ not depending on $\mu$ or $\epsilon$ so \begin{equation} \label{eq:good_imitation} \max_{z \in \C} \big(U^{\mu * \nu_{\epsilon}}(z) - U^{\mu}(z)\big) \le C \epsilon^{1/2}. \end{equation} \end{notat} The following construction will also be quite useful.
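As an aside, the stated properties of $\nu_{[a, b]}$ are easy to check numerically. The following sketch (our own check, not part of the paper's arguments) uses the substitution $t = a + (b - a)\sin^2\theta$, which turns integration against $\nu_{[a, b]}$ into a smooth integral over $[0, \pi/2]$, to confirm the total mass, the mean $\sqrt{ab}$, and the equality case of \eqref{eq:first_balay} at an interior point, for the sample choice $a = 1$, $b = 4$.

```python
import math

def nu_integral(a, b, f, n=200000):
    # Integrate f against nu_[a,b] via t = a + (b - a) sin^2(theta),
    # under which d(nu) = (2 sqrt(ab) / pi) dtheta / t on [0, pi/2].
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h              # composite midpoint rule
        t = a + (b - a) * math.sin(theta) ** 2
        total += f(t) / t
    return (2.0 * math.sqrt(a * b) / math.pi) * h * total

a, b = 1.0, 4.0
mass = nu_integral(a, b, lambda t: 1.0)    # total mass; should be 1
mean = nu_integral(a, b, lambda t: t)      # should be sqrt(ab) = 2
# potential U^nu(z) at z = 2, a point of [a, b]; by the equality case of
# (eq:first_balay) it should equal -log 2 + log((a + 2 sqrt(ab) + b)/(b - a))
pot = nu_integral(a, b, lambda t: -math.log(abs(2.0 - t)))
target = -math.log(2.0) + math.log((a + 2.0 * math.sqrt(a * b) + b) / (b - a))
```

The midpoint rule tolerates the integrable logarithmic singularity of the potential integrand, so no special quadrature is needed for a check at this accuracy.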
\begin{defn} Choose a compact subset $\Sigma$ of $\R$ of capacity greater than $1$, and choose a compact subset $\Sigma'$ of $\R$ containing $\Sigma$. Choose measures $\mu$ and $\nu$ with support contained in $\Sigma'$. We assume $\mu$ is a probability measure. Taking \[ \gamma =1 - \nu(\Sigma'),\] we assume $\gamma$ lies in $[0, 1)$. Take $\mu_{\Sigma}$ to be the unweighted equilibrium measure for $\Sigma$, and consider \[B = \sup_{\substack{U^{\mu}(z) < \infty}} \left(U^{\nu + \gamma\cdot \mu_{\Sigma}}(z) - U^{\mu }(z)\right).\] We assume $B$ exists and is finite. Assuming this, it must be nonnegative. We then define the \emph{sweetened measure} of $\nu$ with respect to $\Sigma$ and $\mu$ to be the probability measure \[\text{sw}(\nu) = (\beta + \gamma - \beta\gamma)\cdot \mu_{\Sigma} + (1 - \beta)\cdot \nu,\] where $\beta$ in $(0, 1)$ is selected so \[\frac{\beta}{1 -\beta} = \frac{B}{-I(\mu_{\Sigma})}.\] The \emph{amount of sweetener} in this measure is defined to be $\beta + \gamma - \beta \gamma$. This measure is defined so \[U^{\text{sw}(\nu)}(z) \le (1 - \beta) U^{\mu}(z) \quad\text{for all } z \in \C.\] In particular, if $\int_{\Sigma'} \log|Q| d\mu \ge 0$ for a given nonzero integer polynomial $Q$, we see that \begin{equation} \label{eq:sweetened} \int_{\Sigma'} \log|Q| d\text{sw}(\nu)\ge 0 \end{equation} as well. \end{defn} With these two constructions, we first generalize Corollary \ref{cor:neg_energy} to non-H\"{o}lder measures. \begin{prop} \label{prop:general_energy} Choose a Borel probability measure $\mu$ whose support is contained in the compact subset $\Sigma$ of $\R$. Suppose $\int_{\Sigma} \log|Q| d\mu$ is nonnegative for all nonzero integer polynomials $Q$. 
Then \[I(\mu)\le 0.\] \end{prop} \begin{proof} Choose a closed bounded interval $\Sigma_0$ of capacity greater than $1$ containing \[\big\{x \in \R\,:\,\, [x-1 , x ] \cap \Sigma \ne \emptyset\big\}.\] For a given $\epsilon$ in $(0, 1/2)$, consider the sweetened measure $\text{sw}(\mu * \nu_{\epsilon})$ defined with respect to $\mu$ and $\Sigma_0$. These measures are H\"{o}lder and satisfy \eqref{eq:sweetened}, so Corollary \ref{cor:neg_energy} gives $I(\text{sw}(\mu * \nu_{\epsilon})) \le 0$ for all $\epsilon$ in $(0, 1/2)$. At the same time, the amount of sweetener in $\text{sw}(\mu * \nu_{\epsilon})$ is bounded by $C \epsilon^{1/2}$ for some $C$ depending on $\Sigma_0$ by \eqref{eq:good_imitation}. So these measures \weaks converge to $\mu$ as $\epsilon$ tends to $0$, and the monotone convergence theorem shows $I(\mu) \le 0$ (see \cite[Theorem I.6.8]{SaTo97}). \end{proof} \begin{proof}[Proof of Proposition \ref{prop:limit_Holder}] From Proposition \ref{prop:general_energy}, we know that $I(\mu)$ is nonpositive, so no point of $\Sigma$ has positive measure. In particular, the measure of the set of isolated points of $\Sigma$ is zero. We may write $\Sigma$ as the union of its set of isolated points with a set of the form $\cup_{k \ge 1} \Sigma_k$, where each $\Sigma_k$ is a compact finite union of intervals and $\Sigma_k \subseteq \Sigma_{k+1}$ for all $k \ge 1$. Since $\Sigma$ has capacity greater than $1$, we may assume that each $\Sigma_k$ has capacity greater than $1$. We may also assume that $\mu(\Sigma_k)$ is always positive. For $k \ge 1$, take $\mu_k$ to be the restriction of $\mu$ to $\Sigma_k$, and take \[\gamma_k = 1 - \mu(\Sigma_k).\] These tend to $0$ as $k$ tends to $\infty$. As a result, the \weaks limit of the sweetened measures $\text{sw}(\mu_k)$ defined with respect to $\mu$ and $\Sigma_1$ is $\mu$. These measures also satisfy \[\int_{\Sigma_k} \log|Q| d\text{sw}(\mu_k) \ge 0\] for every nonzero integer polynomial $Q$, and $\text{sw}(\mu_k)$ has support contained in the compact finite union of intervals $\Sigma_k$.
So it suffices to prove the proposition in the case that $\Sigma$ is a compact finite union of intervals \[\Sigma = \bigcup_{i \le n} [x_i, y_i].\] Take $\mu_-$ to be the restriction of $\mu$ to $\bigcup_{i \le n} [x_i, \tfrac{1}{2}(x_i + y_i)]$, and take $\mu_+ = \mu - \mu_-$. Take $\nu_{\epsilon}$ as in Notation \ref{notat:balbump}, and define a measure $\nu^*_{\epsilon}$ on $[-\epsilon, -\epsilon^2]$ by $\nu^*_{\epsilon}(Y) = \nu_{\epsilon}(-Y)$. If $2 \epsilon$ is smaller than $y_i - x_i$ for $i \le n$, the measure \[\mu_{\epsilon} = \mu_- * \nu_{\epsilon} + \mu_+ *\nu^*_{\epsilon}\] has support contained in $\Sigma$. Further, \eqref{eq:good_imitation} implies that the sweetened measures $\text{sw}(\mu_{\epsilon})$ defined with respect to $\mu$ and $\Sigma$ have amount of sweetener bounded by $C\epsilon^{1/2}$ for some $C$ depending on $\Sigma$. In particular, the measures $\text{sw}(\mu_{\epsilon})$ \weaks converge to $\mu$ as $\epsilon$ tends to $0$. For sufficiently small $\epsilon$, these measures are H\"{o}lder, have support contained in $\Sigma$, and satisfy \[\int_{\Sigma} \log|Q| d\text{sw}(\mu_{\epsilon}) \ge 0\] for every nonzero integer polynomial $Q$. The proposition follows. \end{proof} \subsection{The limit of Smyth's method} \begin{notat} \label{notat:general_optimization} Take $\Sigma$ to be a closed, possibly unbounded subset of $\R$ containing at most countably many connected components, and take $F: \Sigma \to \R$ to be some continuous function. In the case that $\Sigma$ is unbounded, we assume that $F(x)/\log |x|$ tends to $+ \infty$ as the magnitude of $x \in \Sigma$ tends to infinity. We also assume that $\Sigma$ has capacity greater than $1$. Take $\alpha_1, \alpha_2, \dots$ to be an enumeration of the algebraic integers whose minimal polynomials have all complex zeros contained in $\Sigma$. We take $n_i$ to be the degree of $\alpha_i$, $P_i$ to be its minimal polynomial, and $\alpha_{i1}, \dots, \alpha_{in_i}$ to be its conjugates.
The mean value of $F$ on the conjugates of $\alpha_i$ is $ \sum_{j \le n_i}F(\alpha_{ij})/n_i$, and we are interested in the limit point \[\lambda(\Sigma, F) \,:=\, \liminf_{i \to \infty} \sum_{j \le n_i} F(\alpha_{ij})/n_i \,=\,\liminf_{i \to \infty} \int_{\Sigma} F d\mu_{P_i}. \] \end{notat} This defines a class of optimization problems where Smyth's approach can be applied \cite{Smyth84a}. As a consequence of Theorem \ref{thm:main}, we can show that the limit of Smyth's method suffices to calculate $\lambda(\Sigma, F)$. \begin{cor} \label{cor:general_Smyth} Take $\Sigma$ and $F$ as in Notation \ref{notat:general_optimization}. Then $\lambda(\Sigma, F)$ is the least upper bound of the real $\lambda$ for which there is a finite sequence $Q_1, \dots, Q_N$ of nonzero integer polynomials and a finite sequence $a_1, \dots, a_N$ of positive numbers so \[F(x) \ge \lambda + \sum_{i =1}^N a_i \log|Q_i(x)|\] for all $x \in \Sigma$. \end{cor} \begin{proof} Take $Q_1, Q_2, \dots$ to be an enumeration of the irreducible integer polynomials. For $N \ge 1$, take $\mathscr{M}_N$ to be the set of probability measures supported on $\Sigma$ that satisfy $\int_{\Sigma} \log|Q_i| d\mu \ge 0$ for all $i \le N$. Take \[\lambda_N = \min_{\mu \in \mathscr{M}_N} \int_{\Sigma} F d\mu,\] and take $\mu_N$ to be some measure in $\mathscr{M}_N$ attaining this minimum. Considering the associated dual linear problem, we see there is some sequence $a_1, \dots, a_N$ of positive numbers so \[F(x) \ge \lambda_N + \sum_{i = 1}^N a_i \log|Q_i(x)| \quad\text{for all } x \in \Sigma.\] Our first observation is that there is some compact $\Sigma_0$ contained in $\Sigma$ so each $\mu_N$ has support contained in $\Sigma_0$. This is trivial if $\Sigma$ is compact, so suppose otherwise. Choose a compact subset $\Sigma_1$ of $\Sigma$ of capacity greater than $1$. 
For $R \ge 2$, take $\mu_{N, R}$ to be the restriction of $\mu_N$ to $[-R, R]$, and take $\text{sw}(\mu_{N, R})$ to be the sweetened measure with respect to $\mu_N$ and $\Sigma_1$. Using the fact that $F(x)/\log |x|$ tends to infinity as $|x|$ tends to infinity, we find there is an $R > 0$ depending on $\Sigma$ and $\Sigma_1$ so, if $\mu_{N, R}$ does not equal $\mu_N$, then \[\int_{\Sigma} Fd\text{sw}(\mu_{N, R}) < \int_{\Sigma} F d\mu_{N},\] which contradicts the choice of $\mu_N$ since $\text{sw}(\mu_{N, R})$ lies in $\mathscr{M}_N$. Given this $R$, the $\mu_{N}$ are all supported in $\Sigma_0 = \Sigma \cap [-R, R]$. Now, the set of probability measures with support contained in $\Sigma_0$ is sequentially compact under the \weaks topology \cite[Theorem 0.1.3]{SaTo97}, so some subsequence of the $\mu_N$ converges to a $\mu$ supported on $\Sigma_0$. This measure satisfies $\int \log|Q| d\mu \ge 0$ for all nonzero integer polynomials $Q$ by the monotone convergence theorem. Since $\int F d\mu $ equals the limit of the $\lambda_N$, the corollary follows by Theorem \ref{thm:main}. \end{proof} Corollary \ref{cor:Smyth_limit} follows immediately. For results on abelian varieties over finite fields, we use the following basic proposition. \begin{prop} \label{prop:Honda} Choose a prime power $q$. Take $\Sigma$ to be the interval $\left[-2\sqrt{q},\, 2\sqrt{q}\right]$ and define $F: \Sigma \to \R$ by $F(x) = \log|q + 1- x|$.
Then, for any $\epsilon > 0$, there are infinitely many $\FFF_q$-simple abelian varieties $A$ over $\FFF_q$ satisfying \[\big(\# A(\FFF_q)\big)^{1/\dim A} \,\le\, \exp\big(\lambda( \Sigma, F) + \epsilon\big),\] infinitely many more satisfying \[\big(\# A(\FFF_q)\big)^{1/\dim A} \,\ge\, \exp\big(-\lambda( \Sigma, -F) - \epsilon\big),\] but only finitely many satisfying \[\big(\# A(\FFF_q)\big)^{1/\dim A} \,\not\in \, \big[\exp\big(\lambda( \Sigma, F) - \epsilon\big),\,\, \exp\big(-\lambda( \Sigma, -F) + \epsilon\big)\big].\] \end{prop} \begin{proof} This follows from Honda--Tate theory \cite{Honda68}; see \cite[Proposition 2.1]{Kade21}. \end{proof} Given this proposition and Corollary \ref{cor:general_Smyth}, we see that Corollary \ref{cor:Serre2} follows from Serre's work in \cite[Appendix B]{AgPe08}. In future work, we hope to find bounds on the constants appearing in this proposition for non-square $q$. \begin{rmk} Recall that, in the course of the proof of Theorem \ref{thm:main}, we have imposed restrictions on the coefficients of the polynomials $P_k$ modulo $4$. The same argument allows us to restrict these coefficients to be in certain congruence classes modulo $4q$. Following the argument of \cite{BCLPS21}, this freedom can be used to force the infinite families of abelian varieties constructed in Proposition \ref{prop:Honda} to all be geometrically simple and ordinary. \end{rmk} \begin{comment} Following the Honda--Tate theoretic argument of \cite[Proposition 2.1]{Kade21}, we can also prove the following. \begin{cor} \label{cor:Abelian} Choose a prime power $q$. 
Take $\Lambda$ to be the least upper bound of the $\lambda \ge 0$ for which there is a finite sequence of nonzero integer polynomials $Q_1, \dots, Q_N$ and a finite sequence of positive numbers $a_1, \dots, a_N$ so \[\log(q + 1 - x) \,\le\, \lambda + \sum_{k \le N} a_k \log|Q_k(x)|\quad\text{for all }\, x \in \left[-2 \sqrt{q}, \,2\sqrt{q}\right].\] Then, for any $\epsilon > 0$, there are infinitely many $\FFF_q$-simple abelian varieties $A$ over $\FFF_q$ satisfying \[\# A(\FFF_q) \,\ge\, e^{\left(\Lambda - \epsilon\right) \dim A},\] but only finitely many satisfying \[\# A(\FFF_q) \,\ge\, e^{\left(\Lambda + \epsilon\right) \dim A}.\] Similarly, take $\lambda_-$ to be the greatest lower bound of the $\lambda \ge 0$ for which there is a finite sequence of nonzero integer polynomials $Q_1, \dots, Q_N$ and another sequence of positive numbers $a_1, \dots, a_N$ so \[\log(q + 1 - x) \le \lambda - \sum_{k \le N} a_k \log|Q_k(x)|\quad\text{for all }\, x \in \big[-2 \sqrt{q}, \,2\sqrt{q}\big].\] Then, for any $\epsilon > 0$, there are infinitely many ordinary simple abelian varieties $A/\FFF_q$ satisfying \[\# A(\FFF_q) \,\ge\, e^{\left(\lambda_+ - \epsilon\right) \dim A}.\] and infinitely many more satisfying \[\# A(\FFF_q)\, \le\, e^{(\lambda_- + \epsilon) \dim A}.\] At the same time, there are only finitely many simple abelian varieties $A/\FFF_q$ satisfying either \[\# A(\FFF_q) \,\ge\, e^{(\lambda_+ + \epsilon) \dim A} \quad\text{or}\quad \# A(\FFF_q) \,\le\, e^{(\lambda_- - \epsilon) \dim A}.\] \end{cor} A similar result for abelian varieties with low $\FFF_q$ point counts can also be proved. \end{comment} It would be nice to better understand the optimal measures $\mu$ produced in the proof of Corollary \ref{cor:general_Smyth}. For the Schur--Siegel--Smyth trace problem, we do not know if this optimal measure is unique, if it is H\"{o}lder, or if its energy is exactly $0$, for example. 
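To make the dual optimization in the proof of Corollary \ref{cor:general_Smyth} concrete, consider the trace problem with just the two auxiliary polynomials $Q_1(x) = x$ and $Q_2(x) = x - 1$. The following pure-Python sketch (our illustration; the grids and the resulting constant are not from the paper, and a grid search only estimates the true minimum) searches for weights $a_1, a_2$ making $x \ge \lambda + a_1 \log|x| + a_2\log|x-1|$ hold on $(0, \infty)$ with $\lambda$ as large as possible, already landing noticeably above the trivial bound $\lambda = 1$ obtained from $Q_1$ alone.

```python
import math

def worst_x(a1, a2, xs):
    # smallest sampled value of x - a1*log|x| - a2*log|x - 1|; this
    # approximates the best lambda certified by the weight pair (a1, a2)
    return min(x - a1 * math.log(x) - a2 * math.log(abs(x - 1.0)) for x in xs)

# sample points chosen to avoid the singularities at x = 0 and x = 1
xs = [0.005 + 0.01 * k for k in range(600)]

base = worst_x(1.0, 0.0, xs)          # Q_1 = x alone certifies lambda ~ 1
best, best_wts = 0.0, (0.0, 0.0)
for i in range(17):                   # a1 in 0.40, 0.45, ..., 1.20
    for j in range(25):               # a2 in 0.00, 0.05, ..., 1.20
        a1, a2 = 0.40 + 0.05 * i, 0.05 * j
        lam = worst_x(a1, a2, xs)
        if lam > best:
            best, best_wts = lam, (a1, a2)
```

Two auxiliary polynomials already push the estimate well past $1.4$; closing the remaining gap to the record $1.793145$ of \cite{WaWuWu21} took $130$ auxiliary polynomials, which is exactly the kind of large-scale optimization discussed above.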
The following proposition at least gives some hint that these limit measures can be pretty bumpy, with craters corresponding to exceptional algebraic integers. \begin{prop} \label{prop:crater} Take $\Sigma$ and $F$ as in Notation \ref{notat:general_optimization}. Choose a probability measure $\mu$ with compact support contained in $\Sigma$ so that $\int \log|Q| d\mu \ge 0$ for every nonzero integer polynomial $Q$ and $\int F d\mu = \lambda(\Sigma, F)$, and take $P$ to be an irreducible monic integer polynomial whose roots are all non-isolated points in $\Sigma$ and which satisfies \[\int_{\Sigma} F d\mu_P < \lambda(\Sigma, F).\] Then $\int_{\Sigma} \log|P| d\mu = 0$, and the support of $\mu_P$ is disjoint from the support of $\mu$. \end{prop} \begin{proof} Take $\mathscr{M}$ to be the set of probability measures $\nu$ with compact support contained in $\Sigma$ that satisfy $\int \log|Q| d\nu \ge 0$ for all integer polynomials $Q$ that are indivisible by $P$. The counting measure $\mu_P$ is in $\mathscr{M}$. Take $\Sigma_0$ to be a compact subset of $\Sigma$ of capacity greater than $1$ which does not contain any root of $P$. Define $\nu_{\epsilon}$ as in Notation \ref{notat:balbump}. Assuming that $\Sigma$ contains a neighborhood of every root of $P$, we find that the sweetened measure $\text{sw}(\mu_P *\nu_{\epsilon})$ defined with respect to $\mu_P$ and $\Sigma_0$ is supported in $\Sigma$ for sufficiently small $\epsilon$, so $\text{sw}(\mu_P *\nu_{\epsilon})$ lies in $\mathscr{M}$. We also see that, for all sufficiently small $\epsilon$, \[\int F d\text{sw}(\mu_P *\nu_{\epsilon}) < \lambda(\Sigma, F).\] Note that $\int \log|P| d\text{sw}(\mu_P *\nu_{\epsilon})$ is finite. 
Using the convexity of $\mathscr{M}$, we can conclude that $\int \log|P| d\mu = 0$ and that there is some $a \in (0, 1]$ so the minimum of \begin{equation} \label{eq:mu_winner} \min_{\nu \in \mathscr{M}}\left( (1 -a) \cdot \int F d\nu \, -\, a \cdot\int \log|P| d\nu\right) \end{equation} is attained at $\nu = \mu$. In the case that $\Sigma$ does not contain a neighborhood of some root of $P$, we may adjust this argument as necessary using the measures $\nu_{\epsilon}^*$ appearing in the proof of Proposition \ref{prop:limit_Holder}, and we arrive at the same conclusion. For $\epsilon > 0$, take $\mu_{\epsilon}$ to be the restriction of $\mu$ to \[\big\{x \in \Sigma\,:\,\, |x - \alpha| \ge \epsilon \text{ for each root } \alpha \text{ of } P\big\},\] and take $\gamma_{\epsilon} = 1 - \mu_{\epsilon}(\Sigma)$. We consider the sweetened measures $\text{sw}(\mu_{\epsilon})$ defined with respect to $\mu$ and $\Sigma_0$. We find there are $C_0 , C_1 > 0$ so, for all sufficiently small $\epsilon$, \begin{align*} \int \log|P| d\text{sw}(\mu_{\epsilon}) &\,\ge\, C_0^{-1} \gamma_{\epsilon}\log \epsilon^{-1} + \int \log|P| d\mu\quad\text{and}\\ \int F d\text{sw}(\mu_{\epsilon}) &\,\le\, C_1 \gamma_{\epsilon} + \int F d\mu. \end{align*} In particular, since \eqref{eq:mu_winner} attains its minimum at $\mu$, we see that $\gamma_{\epsilon}$ is $0$ if $\epsilon$ is smaller than $e^{-a^{-1} C_0 C_1}$. This implies that the roots of $P$ are not in the support of $\mu$. \end{proof} \subsection{The balayage construction} \label{ssec:bal} With Theorem \ref{thm:main} proved, it is natural to want to find measures to which it can be applied. A natural way of constructing such measures again relies on the integrality of the resultant of integer polynomials. \begin{prop} \label{prop:need_bal} Take $\Sigma$ to be a compact subset of $\R$ of capacity greater than $1$ with at most countably many connected components, and take $\mu$ to be a probability measure on $\Sigma$.
Suppose there is a finite sequence $Q_1, \dots, Q_N$ of distinct irreducible primitive integer polynomials and a finite sequence $a_1, \dots, a_N$ of positive numbers so $\sum_{i \le N} a_i \deg Q_i \le 1$ and \begin{equation} \label{eq:need_bal} U^{\mu}(z) \le \sum_{i \le N} -a_i \log|Q_i(z)| \quad\text{for all } z \in \C. \end{equation} Suppose further that $\int_{\Sigma} \log|Q_i| d\mu$ is nonnegative for $i \le N$. Then $\mu$ obeys the equivalent conditions of Theorem \ref{thm:main}. \end{prop} \begin{proof} Given a nonzero integer polynomial $Q$, we need to show \[\int_{\Sigma} \log|Q| d\mu \ge 0.\] We may assume that $Q$ is irreducible without loss of generality. If it is a multiple of some $Q_i$, the inequality follows by assumption. Otherwise, take $c$ to be the leading term of $Q$, and take $n$ to be its degree. We have \begin{align*} \int_{\Sigma} \log|Q| d\mu \,&=\, \log|c| - n\int_{\Sigma} U^{\mu} d\mu_Q\\ &\ge\, \sum_{i \le N} a_i\left(\deg Q_i \cdot\log|c| +n \int \log|Q_i| d\mu_Q\right) \ge 0, \end{align*} with the final inequality following since the resultants $\res(Q, Q_i)$ are nonzero rational integers. \end{proof} Constructing measures satisfying \eqref{eq:need_bal} can be accomplished via the process of balayage as defined in \cite[Theorem II.4.4]{SaTo97}. Given a compact finite union of intervals $\Sigma_0$ of positive capacity and a probability measure $\mu$ with compact support that satisfies $\mu(\Sigma_0) = 0$, balayage yields a measure $\widehat{\mu}$ supported on $\Sigma_0$ and a nonnegative number $c$ so \[U^{\widehat{\mu}}(z) \le U^{\mu}(z) + c \quad\text{for all } z\in \C\] and so that $U^{\widehat{\mu}}(z) = U^{\mu}(z) + c$ for all $z$ in $\Sigma_0$, with this last equality relying on \cite[Theorem I.4.6]{SaTo97}.
To apply this result in the context of Theorem \ref{thm:main}, choose distinct irreducible primitive integer polynomials $Q_1, \dots, Q_N$ and a compact finite union of intervals $\Sigma_0$ in $\Sigma$ of capacity at least $1$ containing no root of any $Q_i$. Choose nonnegative numbers $a_0, \dots, a_N$ summing to $1$. Taking $\widehat{\mu_{Q_i}}$ to be the balayage of $\mu_{Q_i}$ to $\Sigma_0$, and taking $\mu_0$ to be the unweighted equilibrium measure on $\Sigma_0$, we may consider the measure \[\mu = a_0 \mu_0 + a_1 \widehat{\mu_{Q_1}} + \dots + a_N \widehat{\mu_{Q_N}}.\] To show that this measure satisfies the condition of Proposition \ref{prop:need_bal}, we only need to check $N+ 1$ inequalities. First, for every $x$ in $\Sigma_0$, we need to show \[U^{\mu}(x) \le -\sum_{i =1}^N \frac{a_i}{n_i} \log|Q_i(x)|,\] where $n_i$ is the degree of $Q_i$. Then, for $i \le N$, we need to show that \[\int \log|Q_i| d\mu = \log |c_i| - \sum_{Q_i(\alpha) = 0} U^{\mu}(\alpha) \ge 0,\] where $c_i$ is the leading term of $Q_i$ and the sum is over the roots $\alpha$ of $Q_i$. If these conditions are satisfied, then $\mu$ satisfies the condition of Proposition \ref{prop:need_bal}, and Theorem \ref{thm:main} can be applied. Unfortunately, calculating the potential $U^{\mu}$ becomes more difficult as the number of components of $\Sigma_0$ increases. Specifically, as shown in \cite[II.4]{SaTo97}, calculating this potential is roughly equivalent to finding the Green's function for $\C^{\infty}\backslash \Sigma_0$. This can be done using Schwarz--Christoffel conformal maps \cite{EmTr99}, but the required maps become increasingly complicated as the number of components of $\Sigma_0$ increases. In the case where $\Sigma_0$ is some closed interval $[a, b]$, the corresponding Green's function is elementary and the situation is much simpler.
We have already encountered this case; the balayage of the Dirac delta supported at $0$ to $[a, b]$ is given by the measure $\nu_{[a, b]}$ that we defined in Notation \ref{notat:balbump}, and its potential is easy to compute. In his letter to Smyth dated 24 February 1998, Serre showed that, for any $t > 0$ and $\gamma \in [0, 1]$, the inequality \[x \ge c + t \gamma \log(x) + (1 - \gamma) t \tfrac{1}{\deg\, R} \log|R(x)|\] can only hold for all $x > 0$ and real polynomials $R$ with leading term and constant term of modulus at least $1$ if $c \le 1.8983\dots$. He did this by integrating both sides of this inequality first with respect to the equilibrium measure $\mu_{[a, b]}$ on a certain interval $[a, b]$, and then with respect to the pushforward of the equilibrium measure of $[b^{-1}, a^{-1}]$ under the map $z \mapsto 1/z$. This latter measure turns out to simply be $\nu_{[a, b]}$. Adapting Serre's argument, we find there is a convex combination \[\mu = c \mu_{[a, b]} + (1 - c)\nu_{[a, b]}\] that satisfies $U^{\mu}(0) = 0$ and $U^{\mu}(a) = -(1 - c) \log a$, where the optimal choices of $a$ and $b$ have first digits given by \[a = 0.0873528949\dots \quad\text{and}\quad b = 4.4110763504\dots.\] From Proposition \ref{prop:need_bal}, we find that $\mu$ satisfies the equivalent conditions of Theorem \ref{thm:main} for $\Sigma = [a, b]$. This measure has $\int x d\mu = 1.8983020089\dots$. However, we know that this measure is not perfectly optimized for the trace problem. Indeed, the $14$ totally positive algebraic integers $\alpha$ with $\tr(\alpha)/\deg(\alpha)$ at most $1.793$ have all conjugates lying in the support of $\mu$ \cite{WaWuWu21}, while the support of an optimal measure would contain none of these points by Proposition \ref{prop:crater}. The next goal would be to find explicit measures using more complicated balayages that come closer to an optimal measure for the trace problem. We would also hope to start optimizing measures for other problems.
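Since every potential involved here is elementary, the stated numbers can be reproduced in a few lines. The following sketch (our own verification; the closed-form potentials for an interval are standard, and the value of $U^{\nu_{[a,b]}}(0)$ uses the pushforward description of $\nu_{[a, b]}$ given above) solves $U^{\mu}(0) = 0$ for the convex weight $c$ and then recovers both the second normalization and the value of $\int x \, d\mu$.

```python
import math

a = 0.0873528949
b = 4.4110763504
sab = math.sqrt(a * b)

# Standard closed forms on [a, b] with 0 < a < b:
#  * equilibrium measure mu_[a,b]: potential log 4 - 2 log(sqrt(a-z) + sqrt(b-z))
#    for z <= a, constant log(4/(b - a)) on [a, b], mean (a + b)/2;
#  * nu_[a,b]: potential -log z + log((a + 2 sqrt(ab) + b)/(b - a)) on [a, b],
#    mean sqrt(ab); its potential at 0 comes from viewing nu_[a,b] as the
#    pushforward of the equilibrium measure of [1/b, 1/a] under z -> 1/z.
U_eq_0 = math.log(4.0) - 2.0 * math.log(math.sqrt(a) + math.sqrt(b))
U_eq_a = math.log(4.0 / (b - a))
U_nu_0 = -(math.log(4.0) - 2.0 * math.log(1.0 / math.sqrt(b) + 1.0 / math.sqrt(a)))
U_nu_a = -math.log(a) + math.log((a + 2.0 * sab + b) / (b - a))

c = U_nu_0 / (U_nu_0 - U_eq_0)               # solves c*U_eq(0) + (1-c)*U_nu(0) = 0
mean = c * (a + b) / 2.0 + (1.0 - c) * sab   # integral of x d(mu)
U_mu_a = c * U_eq_a + (1.0 - c) * U_nu_a     # should equal -(1 - c) log a
```

The solved weight comes out to roughly $c \approx 0.78$, so a bit over a fifth of the mass of $\mu$ sits in the balayage component $\nu_{[a, b]}$.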
These computational goals bring us beyond the scope of this paper, so we will stop. \section{Preliminaries on measures and polynomials} \label{sec:prelim} \begin{defn} \label{defn:capacity} Choose a compact subset $\Sigma$ of $\C$. Given a Borel measure $\mu$ supported on $\Sigma$, we define the potential function $U^{\mu}: \C \to \R \cup\{\infty\}$ of $\mu$ by \[U^{\mu}(z) = \int_{\Sigma} -\log|z - w| d\mu(w),\] and we define the energy of $\mu$ by \[I(\mu) = \int_{\Sigma} U^{\mu}(z)d\mu(z) = \int_{\Sigma} \int_{\Sigma} -\log|z - w| d\mu(w) d\mu(z).\] The energy of $\mu$ lies in $\R\cup \{\infty\}$. Following e.g. \cite{Ransford95}, we define the capacity of $\Sigma$ by \[c_{\Sigma} \,=\, {\sup}_{\mu}\, e^{ -I(\mu)},\] where the supremum is taken over all probability measures with support contained in $\Sigma$. If $\Sigma$ has positive capacity, the supremum is attained for a unique nonnegative unit measure of minimal energy known as the unweighted equilibrium measure of $\Sigma$. \end{defn} For most of our proof of Theorem \ref{thm:main}, we will restrict our attention to measures which have relatively nice potential functions. \begin{defn} \label{defn:Holder} Given a Borel measure $\mu$ on $\R$, we call the measure \emph{H\"{o}lder} if there are positive numbers $A$ and $\eta$ so \[\mu([x, y]) \le A \cdot |y - x|^{\eta}\] for every bounded closed real interval $[x, y]$. \end{defn} Given a H\"{o}lder measure $\mu$, we see that the potential $U^{\mu}$ is a finite, H\"{o}lder continuous function. That is to say, there are positive numbers $A_1$ and $\eta_1$ so \begin{equation} \label{eq:pot_Hold} \left|U^{\mu}(y) - U^{\mu}(x) \right|\le A_1 \cdot |y - x|^{\eta_1} \end{equation} for all real numbers $x$ and $y$. Much of our work also requires a slightly nicer class of compact subsets $\Sigma$. 
\begin{defn} \label{defn:CFUOI} Given a compact subset $\Sigma$ of $\R$, we call $\Sigma$ a \emph{compact finite union of intervals} if $\Sigma$ is nonempty, contains finitely many connected components, and has no isolated points. \end{defn} The following proposition shows there is no issue in restricting our proof of Theorem \ref{thm:main} to H\"{o}lder measures on compact finite unions of intervals. \begin{prop} \label{prop:limit_Holder} Suppose $\Sigma$ is a compact subset of $\R$ with at most countably many components and capacity strictly greater than $1$, and choose a probability measure $\mu$ with support contained in $\Sigma$ so \[\int \log|Q| d\mu \ge 0 \quad\text{for every nonzero integer polynomial }\, Q.\] Then there is a sequence $\mu_1, \mu_2, \dots$ of H\"{o}lder probability measures and a sequence $\Sigma_1, \Sigma_2, \dots$ of closed subsets of $\Sigma$ so \begin{enumerate} \item For every $k \ge 1$, $\Sigma_k$ is a compact finite union of intervals, and $\mu_k$ has support contained in $\Sigma_k$; \item For every $k \ge 1$ and nonzero integer polynomial $Q$, $\int_{\Sigma_k} \log|Q| d\mu_k \ge 0$; and \item The sequence $\mu_1, \mu_2, \dots$ \weaks converges to $\mu$. \end{enumerate} \end{prop} We will prove this proposition in Section \ref{sec:limits}. Our eventual goal is to find monic integer polynomials whose associated counting measures converge to a given $\mu$. As a first step, it is convenient to have a sequence of monic real polynomials with this behavior. \begin{defn} \label{defn:approx} Choose a compact subset $\Sigma$ of $\R$ and a H\"{o}lder probability measure $\mu$ with support contained in $\Sigma$, and choose a positive integer $n$. 
For every nonnegative integer $ i \le n$, we take $\alpha_i$ to be the minimal $\alpha$ in $\Sigma$ for which \[\mu\big((-\infty, \alpha]\big) = i/n.\] We then define the degree $n$ approximating polynomial to $\mu$ by \[P_{n, \mu}(z) = \prod_{i = 1}^n (z - \alpha_i).\] \end{defn} \begin{prop} Given a compact subset $\Sigma$ of $\R$ and a H\"{o}lder probability measure $\mu$ with support contained in $\Sigma$, there is a $C > 0$ determined from $\Sigma$ and $\mu$ so the following holds: Choose any integer $n \ge 2$, and define $\alpha_0, \dots, \alpha_n$ and $P_{n, \mu}$ from $\mu$ as in Definition \ref{defn:approx}. Then the energy $I(\mu)$ of $\mu$ satisfies \begin{equation} \label{eq:energy_approx} n^{-C n}\,\le\, e^{n^2I(\mu)/2}\cdot\prod_{1 \le i < j \le n} |\alpha_i - \alpha_j| \,\le\, n^{C n } \end{equation} and the potential $U^{\mu}$ satisfies \begin{equation} \label{eq:potential_approx} n^{-C} \cdot \min_{\,1 \le i \le n} |x - \alpha_i| \,\le\, e^{nU^{\mu}(x)} \cdot\left|P_{n, \mu}(x)\right| \,\le\, n^C \end{equation} for all real $x$. \end{prop} \begin{proof} From the H\"{o}lder condition, we note that there is a $C_0 > 0$ determined just from $\mu$ so \begin{equation} \label{eq:Holder_spaced} \alpha_{i+1} - \alpha_i \ge n^{-C_0} \end{equation} for $i \ge 0$. Define measures $\mu_1, \dots, \mu_n$ on $\R$ by \[\mu_i(Y) = \mu\left(Y \cap [\alpha_{i-1}, \alpha_i]\right)\quad\text{for }\, i \le n.\] Then $\mu = \mu_1 + \dots + \mu_n$. Given real $x$ and an integer $i$ satisfying $1 \le i \le n$, we have \begin{alignat*}{4} &n^{-1} \log| x - \alpha_{i-1}| &&\,\le \, \int_{\Sigma} \log|x - t| d\mu_i(t) &&\,\le\, n^{-1} \log| x - \alpha_{i}| \quad&&\text{ if } x \le \alpha_{i-1}\quad\text{and}\\ &n^{-1} \log| x - \alpha_{i}| &&\,\le\, \int_{\Sigma} \log|x - t| d\mu_i(t) &&\,\le\, n^{-1} \log| x - \alpha_{i-1}| \quad&&\text{ if } \alpha_i \le x.
\end{alignat*} In the case that $x$ lies in $[\alpha_{i-1}, \alpha_i]$, we find that there is a $C_1 > 0$ determined just from $\Sigma$ and $\mu$ so that \[ -C_1n^{-1} \log n \,\le\, \int_{\Sigma} \log|x - t| d\mu_i(t) \,\le\, C_1n^{-1}.\] Combining these inequalities with \eqref{eq:Holder_spaced} gives \eqref{eq:potential_approx} for a good choice of $C$. Similarly, given integers $i$ and $j$ satisfying $1 \le i \le j \le n$, we find there is a constant $C_2$ depending just on $\mu$ so the integral $I_{ij} = -\int\int \log|z - w| d\mu_i(z) d\mu_j(w)$ satisfies \[-n^{-2} \log|\alpha_j - \alpha_{i-1}| \,\le\, I_{ij} \,\le\, \begin{cases} -n^{-2} \log|\alpha_{j-1} - \alpha_{i}| &\text{ if } i + 1 < j\\ C_2 n^{-2}\log n &\text{ otherwise.} \end{cases}\] Summing these inequalities and applying \eqref{eq:Holder_spaced} then gives \eqref{eq:energy_approx} for a good choice of $C$. \end{proof} \begin{defn} \label{defn:norm} Fix a H\"{o}lder measure $\mu$ with support contained in the compact subset $\Sigma$ of $\R$. Given a complex polynomial $P$ and a nonnegative integer $n$, define the $n$-norm of $P$ with respect to $(\mu, \Sigma)$ by \[\norm{P}_n = \,{\max}_{\,x \in \Sigma\,}\left(e^{n U^{\mu}(x)} \cdot |P(x)|\right) .\] \end{defn} With this definition, we see that \eqref{eq:potential_approx} implies \[n^{-C} \le \norm{P_{n, \mu}}_n \le n^C\] for $n \ge 2$. The following consequence of the Remez inequality will be used several times. \begin{lem} \label{lem:Remez} Choose a compact finite union of intervals $\Sigma$, and take $\mu$ to be a H\"{o}lder probability measure with support contained in $\Sigma$. Then there is a $C > 0$ depending on $\Sigma$ and $\mu$ so the following holds: Choose integers $n \ge 0$ and $m \ge 1$, and choose a complex polynomial $P$ of degree $m$. Take $\alpha$ to be some root of $P$. Then \begin{equation} \label{eq:Remez} \norm{P(z)/(z -\alpha)}_n \le (n + m + 1)^C \cdot \norm{P}_n.
\end{equation} \end{lem} \begin{proof} Take $\delta$ to be the minimal length of a component in $\Sigma$. Take $A_1, \eta_1$ to be the constants appearing in \eqref{eq:pot_Hold}, and take \[\delta_0 = \min\left(\delta, (n+1)^{-1/\eta_1}\right).\] Then, given any point $x_0$ in $\Sigma$, there is a subinterval $I$ of $\Sigma$ of length at least $\delta_0$ containing $x_0$, and there is $C_0 > 0$ depending only on $A_1$ and $\eta_1$ so \begin{equation} \label{eq:Remez_help} \left|nU^{\mu}(x) - nU^{\mu}(y)\right| \le C_0 \end{equation} for all $x$ and $y$ in $I$. Take $\Sigma_0$ to be the intersection of $\Sigma$ with the disk of radius $\tfrac{1}{4}m^{-2} \delta_0$ centered at $\alpha$. By the Remez inequality \cite[Theorem 5.1.1]{BoEr95}, there is an absolute constant $C_1 > 0$ so \[{\max}_{\,x \in I} \,|P(x)| \le C_1 \cdot {\max}_{\,x \in I \backslash (I \cap \Sigma_0)} \,|P(x)|.\] The result follows for a good choice of $C$ by combining this inequality with \eqref{eq:Remez_help}. \end{proof} We will also need the following lemma. \begin{lem} \label{lem:squarefree_nocommon} Choose positive integers $k$ and $n$, and take $Q_1, \dots, Q_k$ to be complex nonzero polynomials of degree at most $n$, with at least one polynomial of degree exactly $n$. There are then nonnegative integers $b_2, \dots, b_k$ no larger than $n(k+ 2)$ so \[Q_1 + b_2 Q_2 + \dots + b_k Q_k\] is of degree $n$ and has a decomposition $Q R$, where $Q$ divides $Q_i$ for each $i \le k$, and where $R$ is a squarefree polynomial that is coprime to each $Q_i$. \end{lem} \begin{proof} Take $Q$ to be the greatest common divisor of $\{Q_1, \dots, Q_k\}$. For any complex numbers $b_2, \dots, b_k$, take $R(b_2, \dots, b_k)$ to be the polynomial $Q_1/Q + b_2 Q_2/Q + \dots + b_k Q_k/Q$. For some $b_2, \dots, b_k$, we find that $R(b_2, \dots, b_k)$ has degree $n - \deg Q$, is coprime to all $Q_i$, and is squarefree.
Take $X$ to be the set of roots of the polynomial $\prod_{i \le k} Q_i$, and take $c_i$ to be the degree $n - \deg Q$ coefficient of $Q_i/Q$ for $i \le k$. The polynomial \[ (c_1 + b_2c_2 + \dots + b_kc_k) \cdot \prod_{\alpha \in X} R(b_2, \dots, b_k)(\alpha) \cdot \disc R(b_2, \dots, b_k)\] is then a nonzero polynomial in $b_2, \dots, b_k$ of degree bounded by $n(k + 2) $. From a finite differencing method or B\'ezout's theorem, we find that this polynomial must be nonzero at some integer point $(b_2, \dots, b_k)$ in the cube $[0, n(k+2)]^{k - 1}$, and the result follows. \end{proof}
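To make Definition \ref{defn:approx} and \eqref{eq:potential_approx} concrete, consider the arcsine (equilibrium) measure on $[-1, 1]$, for which $U^{\mu} \equiv \log 2$ on the interval and the quantile points of Definition \ref{defn:approx} are $\alpha_i = -\cos(\pi i/n)$. The sketch below checks numerically that $e^{n U^{\mu}(x)}\,|P_{n, \mu}(x)| = 2^n |P_{n, \mu}(x)|$ stays within a loose illustrative window (the bounds $0.5$ and $100$ are ad hoc stand-ins for the $n^{\pm C}$ of the proposition, not optimized constants).

```python
import numpy as np

# Arcsine measure on [-1, 1]: U^mu = log 2 on the interval, and
# the quantiles mu((-inf, alpha]) = i/n are alpha_i = -cos(pi i / n).
n = 16
alpha = -np.cos(np.pi * np.arange(1, n + 1) / n)

# e^{n U^mu(x)} |P_{n,mu}(x)| = 2^n |P(x)| on a dense grid of the interval.
x = np.linspace(-1.0, 1.0, 4001)
P = np.prod(x[:, None] - alpha[None, :], axis=1)
M = np.max(2.0 ** n * np.abs(P))
```

Here $P_{n,\mu}$ is, up to normalization, a product of $(x - 1)$ and a Chebyshev polynomial of the second kind, which is what keeps $M$ polynomially bounded in $n$.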
\section{Introduction}\label{S:recIntroduction} In 2006, we started studying planar semimodular lattices in my papers with E.~Knapp \cite{GKn07}--\cite{GKn10}. More than four dozen publications have been devoted to this topic since; see G. Cz\'edli's list\\ \verb+http://www.math.u-szeged.hu/~czedli/m/listak/publ-psml.pdf+ \smallskip An \emph{SPS lattice} $L$ is a planar semimodular lattice that is also \emph{slim} (it does not contain an $\SM 3$-sublattice). Following my paper with E.~Knapp~\cite{GKn09}, a planar semimodular lattice $L$ is \emph{rectangular}, if its left boundary chain has exactly one doubly-irreducible element other than the bounds (the \emph{left corner}), its right boundary chain has exactly one doubly-irreducible element other than the bounds (the \emph{right corner}), and the two corners are complementary. An \emph{SR lattice} $L$ is a rectangular lattice that is also \emph{slim}. Rectangular lattices are easier to work with than planar semimodular lattices, because they have much more structure. Moreover, a planar semimodular lattice has~a (congruence-preserving) extension to a rectangular lattice, so we can prove many results for SPS lattices by verifying them for SR lattices (G.~Gr\"atzer and E.~Knapp~\cite{GKn09}). It turns out that there is another way to obtain SR lattices from SPS lattices. Before we state it, we need a definition. Let $L$ be a planar lattice. We call the interval $I = [o, i]$ of $L$ \emph{rectangular}, if there are complementary $a, b \in I$ such that the element $a$ is to the left of the element $b$. Now we state a new property of SR lattices. \begin{theorem}\label{T:Main} Let $L$ be a slim, planar, semimodular lattice and let $I$ be a rectangular interval of $L$. Then the lattice $I$ is slim and rectangular. \end{theorem} In a paper with E. Knapp about a dozen years ago, we introduced \emph{natural diagrams} for SR lattices. Five years later, G. Cz\'edli introduced \emph{${\E C}_1$-diagrams}.
We prove that they are the same. We will present some applications, including a recent result of G. Cz\'edli \cite{gCcc}. For the background of this topic and its applications outside lattice theory, see Section 1.2 of G. Cz\'edli and G. Gr\"atzer~\cite{CG21}. \subsection*{Statements and declarations}\hfill \emph{Data availability statement.} Data sharing is not applicable to this article as no datasets were generated or analysed during the current study. \emph{Competing interests.} Not applicable as there are no interests to report. \subsection*{Basic concepts and notation.} The basic concepts and notation not defined in this note are freely available in Part~I of the book \cite{CFL2}, see \\ {\tt arXiv:2104.06539}\\ We will reference it as CFL2. \section{Fork extensions} We discuss in Section 4.3 of CFL2 a result of G.~Cz\'edli and E.\,T. Schmidt~\cite{CS13}: for an SPS lattice $L$ and a covering square $C$ in $L$, we can \emph{insert} a fork in $L$ at $C$ to obtain the lattice extension $L[C]$, which is also an SPS lattice, see Figure~\ref{F:fork}. In this figure, the elements of the covering square $C$ are grey filled, and the elements of the fork are black filled. The third and fourth diagrams represent the same lattice, \emph{De~gustibus non est disputandum}. \begin{figure}[htb] \centerline{\includegraphics[scale=0.60]{Fork}} \caption{Inserting a fork into $L$ at $C$}\label{F:fork} \end{figure} As illustrated by Figure~\ref{F:fork-delete}, we can sometimes \emph{delete} a fork. Let $L$ be an SPS lattice and let $S$ be a covering $\mathsf S_7$ in $L$, with middle element $m$, left corner $a$, and right corner $b$. Let us assume that the top element~$t$ of $S$ is \emph{minimal}, that is, there is no covering $\mathsf S_7$, say $S'$, with a smaller top element~$t'$, that is, with $t' < t$. \begin{figure}[t!] \centerline{\includegraphics[scale=.8]{fork-delete}} \caption{Deleting a fork}\label{F:fork-delete} \end{figure} \begin{lemma}[G. Cz\'edli and E.\,T.
Schmidt \cite{CS13}]\label{L:delete} Let $L$ be an SR lattice and let \[S = \set{o, m \mm a, m \mm b, a,b,m,t}\] be a minimal cover\-ing~$\mathsf S_7$ in $L$. Then~$L$ has a sublattice~$L^-$ with a covering square \[ C = S - \set{m, m \mm a, m \mm b} = \set{o, a,b,t} \] such that $L = L^-[C]$. In other words, we can delete the fork in $S$, and the lattice~$L^-$ is the lattice $L$ with the fork deleted. \end{lemma} The structure of SR lattices is described as follows, see G.~Cz\'edli and E.\,T. Schmidt~\cite{CS13}. \begin{theorem}[Structure Theorem]\label{T:structureshort} A slim rectangular lattice $K$ can be obtained from a grid $G$ by inserting forks \lp$n$-times\rp. \end{theorem} We thus associate a natural number $n$ with an SR lattice~$K$; we call it the \emph{rank} of $K$, and denote it by $\Rank K$. It is easy to see that $\Rank K$ is well defined. There is a stronger version of Theorem~\ref{T:structureshort}, implicit in G.~Cz\'edli and E.\,T.~Schmidt~\cite{CS13}. We~present it with a short proof. \begin{theorem}[Structure Theorem, Strong Version]\label{T:Structure} For every slim rectangular lattice~$K$, there is a grid~$G$, a natural number $n = \Rank K$, and sequences \begin{equation}\label{E:latticesequence} G = K_1, K_2, \dots, K_{n-1}, K_n = K \end{equation} of slim rectangular lattices and \begin{equation}\label{E:Csequence} C_1 = \set{o_1, c_1,d_1, i_1}, C_2 = \set{o_2, c_2,d_2, i_2} , \dots, C_{n-1} = \set{o_{n-1}, c_{n-1},d_{n-1}, i_{n-1}} \end{equation} of $4$-cells in the appropriate lattices such that \begin{equation}\label{E:Clatticesequence} G = K_1, K_1[C_1] = K_2, \dots, K_{n-1}[C_{n-1}] = K_n= K \end{equation} and the principal ideals $\id{c_{n-1}}$ and $\id{d_{n-1}}$ are distributive. \end{theorem} \begin{proof} We proceed by induction on $n$. If $n = 0$, then $K$ is distributive by G.~Gr\"atzer and E. Knapp \cite{GKn09}, so the statement is trivial. Now let us assume that the statement holds for $n-1$.
Let $K$ be an~SR lattice with $n$ covering $\mathsf S_7$-s. As in Lemma~\ref{L:delete}, we take $S$, a~\emph{minimal} covering $\mathsf S_7$ in $K$. Then we form the sublattice~$K^-$ by deleting the fork at $S$. So~we get a covering square $C = C_{n-1} = \set{o_{n-1}, c_{n-1},d_{n-1}, i_{n-1}}$ of~$K^-$ such that $K = K^-[C]$. Since $K^-$ has $n-1$ covering $\mathsf S_7$-s, we get the sequence \[ G = K_1, K_1[C_1] = K_2, \dots, K_{n-2}[C_{n-2}] = K_{n-1} = K^-, \] which, along with $K = K^-[C]$, proves the statement for $K$. The minimality of $S$ implies that the principal ideals $\id{c_{n-1}}$ and~$\id{d_{n-1}}$ are distributive. \end{proof} \section{Proving Theorem~\ref{T:Main}}\label{S:Proving} Theorem~\ref{T:Main} obviously holds for grids. Otherwise, we can assume that the SR lattice $K$ is not a grid, so $n = \Rank K \ge 1$. Let~$K^-$ be the lattice we obtain by deleting a minimal fork in~$K$ at the covering square \[ C_{n-1} = \set{o_{n-1}, c_{n-1},d_{n-1}, i_{n-1}}. \] We obtain $K$ from $K^-$ by inserting a fork at $C_{n-1}$. We add the element $m$ in the middle of $C_{n-1}$, and add the sequences of elements $x_1, \dots$ on the left going down and $y_1, \dots$ on the right going down, as in Figure~\ref{F:fork}. Let $I$ be a rectangular interval in $K$ with corners $a, b$, where $a$ is to the left of $b$. We want to prove that $I$ is an SR lattice. Of~course, the lattice~$I$ is slim. We induct on $n = \Rank K$. There are three subcases. Case 1. $I$ is disjoint from $\id m$, as illustrated in Figure~\ref{F:forkcase1}. Then the interval $I$ is not changed as we add the fork to $K^-$. By induction, $I$ is rectangular in $K^-$; therefore,~$I$ is also rectangular in $K$. Case 2. In Figure~\ref{F:forkcase2} (and Figure~\ref{F:forkcase3}), the bold lines form the boundary of the rectangular sublattice $I$ in $K^-$, the elements of $C_{n-1}$ are grey filled, and the elements~$m$, $x_1$, \dots, $y_1$, \dots\ are black filled.
The element $m$ is internal in $I$, so the element $a$ is $c_{n-1}$ or it is to the left of~$c_{n-1}$, and symmetrically, see Figure~\ref{F:forkcase2}. Therefore, $C_{n-1} = [o_{n-1},i_{n-1}]_{K^-}$ is a covering square in ${K^-}$ and we obtain the interval $[o_{n-1},i_{n-1}]_{K}$ of $K$ by adding a fork to $C_{n-1}$ at $[o_{n-1},i_{n-1}]_{K^-}$. A fork extension of an SR lattice is also an SR lattice, so we conclude that $I$ is an SR lattice. Case 3. $m$ is not an internal element of $I$ but some $x_i$ or $y_i$ is, see Figure~\ref{F:forkcase3}, where~$y_2$ is an internal element of $I$. By utilizing that $\id{d_{n-1}}$ is distributive, we conclude that we obtain $I$ from $[o,i]_{K^-}$ by replacing a cover-preserving $\SC m \times \SC 2$ by $\SC m \times \SC 3$, and so $I$ remains rectangular. \section{Applications of Theorem~\ref{T:Main}}\label{S:Applications} The next statement follows directly from Theorem~\ref{T:Main}. \begin{corollary}\label{C:main} Let $L$ be an SPS lattice and let $I$ be a rectangular interval of~$L$. Let \lp P\rp be any property that holds for all SR lattices. Then the property \lp P\rp holds for the lattice $I$. \end{corollary} \newpage \begin{figure}[htb] \centerline{\includegraphics[scale=.9]{Case1}} \caption{Proving Theorem~\ref{T:Main}: Case 1} \label{F:forkcase1} \end{figure} \bigskip \bigskip \bigskip \bigskip \begin{figure}[htb] \centerline{\includegraphics[scale=.9]{Case2}} \caption{Proving Theorem~\ref{T:Main}: Case 2} \label{F:forkcase2} \end{figure} \newpage \begin{figure}[htb] \centerline{\includegraphics[scale=1]{Case3}} \caption{Proving Theorem~\ref{T:Main}: Case 3} \label{F:forkcase3} \end{figure} \begin{figure}[h!] \centerline{\includegraphics[scale = 1]{peak}} \caption{The lattice $\mathsf S_7$, two diagrams} \label{F:S7+} \end{figure} For instance, let (P) be the property: the intervals $[o, a]$ and $[o, b]$ are chains and all elements of the lower boundary of $I$ are meet-reducible, except for $a, b$. Then we get the main result of G.
Cz\'edli \cite{gCcc}. \begin{corollary}\label{C:rectint} Let $L$ be an SPS lattice and let $I$ be a rectangular interval of $L$ with corners $a, b$. Then $[o, a]$ and $[o, b]$ are chains and all the elements of the lower boundary of $I$ except for~$a, b$ are meet-reducible. \end{corollary} Another nice application is the following. \begin{corollary}\label{C:meet} Let $L$ be an SPS lattice and let $I$ be a rectangular interval of $L$ with corners $a, b$. Then for any $x \in I$, the following equation holds: \[ x = (x \mm a) \jj (x \mm b). \] \end{corollary} Here is a more elegant way to formulate the last result. \begin{corollary}\label{C:abc} Let $L$ be an SPS lattice and let $a,b,c$ be pairwise incomparable elements of $L$. If $a$ is to the left of $b$, and $b$ is to the left of $c$, then \[ b = (b \mm a) \jj (b \mm c). \] \end{corollary} \section{Special diagrams} \subsection{Natural diagrams} SR lattices have some particularly nice diagrams, such as the \emph{natural diagrams} of my paper with E. Knapp~\cite{GKn10}, which laid the foundation of rectangular lattices. Natural diagrams were discovered more than a~dozen years ago, many years before the appearance of their competitor, the $\E C_1$-diagrams of G.~Cz\'edli---see the next section. For an SR lattice $L$, let $\Chl L$ be the lower left and $\Chr L$ the lower right boundary chain of $L$, respectively, and let $\cornl L$ be the left and $\cornr L$ the right corner of $L$, respectively. We regard $G = \Chl L \times \Chr L$ as a planar lattice, with $\Chl L = \Chl G$ and $\Chr L = \Chr G$. Then the map \begin{equation}\label{E:gy} \gy \colon x \mapsto (x \mm \cornl L, x \mm \cornr L) \end{equation} is a meet-embedding of $L$ into $G$; the map $\gy$ also preserves the bounds. Therefore, the image of $L$ under $\gy$ in $G$ is a diagram of $L$; we call it the \emph{natural diagram} representing $L$. For~instance, the second diagram of Figure~\ref{F:S7+} shows the natural diagram representing~ $\mathsf S_7$.
Let $L$ be an SR lattice. By the Structure Theorem, Strong Version, we can represent~$L$ in the form $L = K[C]$, where $K$ is an SR lattice and $C = \set{o, c, d, i}$ is a $4$-cell of~$K$ such that $\id c$ and $\id d$ are distributive. Let $\E D$ be a diagram of $K$. We~form the diagram $\E D[C]$ by adding the elements $m, x_1, \dots$ and $m, y_1, \dots$, as in the last diagram of Figure~\ref{F:fork}, so that the lines spanned by the elements $m, x_1, \dots$ and $m, y_1, \dots$ are both normal. \begin{lemma}\label{L:C1} Let $L$, $C$, $K$, $\E D$, and $\E D[C]$ be as in the previous paragraph. Then $\E D[C]$ is a diagram of~$L$. \end{lemma} \begin{proof} This is obvious. \end{proof} \begin{lemma}\label{L:C2} Let us make the assumptions of Lemma~\ref{L:C1}. If $\E D$ is a natural diagram of $K$, then $\E D[C]$ is a natural diagram of~$L$. \end{lemma} \begin{proof} So let $\E D$ be a natural diagram of $K$. Let the line $m, x_1, \dots$ terminate with~$x_{k_l}$ and the line $m, y_1, \dots$ with $y_{k_r}$. We have to show that all the new elements in $L$ can be represented as a join $u_l \jj u_r$, where $u_l \in \Chl L$ and $u_r \in \Chr L$. Indeed, $m = x_{k_l} \jj y_{k_r}$. The others follow from the distributivity assumptions. \end{proof} \subsection*{$\E C_1$-diagrams} This research tool, introduced by G. Cz\'edli, has been playing an important role in some recent papers, see G. Cz\'edli \cite{gC17}--\cite{gCcc}, G. Cz\'edli and G.~Gr\"atzer~\cite{CG21}, and G.~Gr\"atzer~\cite{gG21}; for the definition, see G.~Cz\'edli \cite{gC17} and G.~Gr\"atzer~\cite{gG21}. In the diagram of an SR lattice $K$, a \emph{normal edge} (\emph{line}) has a slope of $45\degree$ or~$135\degree$. Any edge (line) of slope strictly between $45\degree$ and $135\degree$ is \emph{steep}. Figure~\ref{F:S7+} depicts the lattice $\mathsf S_7$.
A \emph{peak sublattice}~$\mathsf S_7$ (\emph{peak sublattice}, for short) of a lattice $L$ is a sublattice isomorphic to $\mathsf S_7$ such that the three edges at the top are covers in the lattice $L$. \begin{definition} A diagram of a slim rectangular lattice $L$ is a \emph{${\E C}_1$-diagram}, if the middle edge of a peak sublattice is steep and all other edges are normal. \end{definition} In other words, an edge is steep if it is the middle edge of a peak sublattice; if an edge is not the middle edge of a peak sublattice, then it is normal. \begin{theorem}\label{T:well} Every slim rectangular lattice $L$ has a ${\E C}_1$-diagram. \end{theorem} This was proved in G. Cz\'edli \cite{gC17}. My note \cite{gG21a} presents a short and direct proof. \section{Natural diagrams and ${\E C}_1$-diagrams are the same} \label{S:natural = $C_1$} We start with a trivial statement. \begin{lemma}\label{L:C3} Let us make the assumptions of Lemma~\ref{L:C1}. If $\E D$ is a ${\E C}_1$-diagram of $K$, then $\E D[C]$ is a ${\E C}_1$-diagram of~$L$. \end{lemma} Now we state our second result on SR lattices. \begin{theorem}\label{T:C1=natural} Let $L$ be an SR lattice. Then a natural diagram of $L$ is a ${\E C}_1$-diagram. Conversely, every ${\E C}_1$-diagram is natural. \end{theorem} \begin{proof} Let us assume that the SR lattice~$L$ can be obtained from a~grid~$G$ by adding forks $n$-times, where $n = \Rank L$. We induct on $n$. The case $n = 0$ is trivial because then $L$ is a grid. So let us assume that the theorem holds for $n - 1$. By the Structure Theorem, Strong Version, there is an SR lattice~$K$ and a $4$-cell $C = \set{o,c,d,i}$ of~$K$ such that $\id c$ and $\id d$ are distributive, $K$ can be obtained from the grid~$G$ by adding forks $(n-1)$-times, and $L = K[C]$ holds. Now form the natural diagram $\E D$ of $K$. By induction, it is a ${\E C}_1$-diagram. By~Lemmas~\ref{L:C2} and \ref{L:C3}, the diagram $\E D[C]$ is both natural and a ${\E C}_1$-diagram. We prove the converse the same way.
\end{proof} Natural diagrams exist by definition. So Theorem~\ref{T:well} also follows from Theorem~\ref{T:C1=natural}. G. Cz\'edli \cite{gC17} also defined \emph{${\E C}_2$-diagrams}. A ${\E C}_1$-diagram is ${\E C}_2$, if any two edges on the lower boundary are of the same length. We use Theorem~\ref{T:C1=natural} to prove two results of G. Cz\'edli \cite{gC17}. \begin{theorem}\label{T:C-2} Let $L$ be an SR lattice. Then $L$ has a ${\E C}_2$-diagram. \end{theorem} \begin{proof} Let $C_l$ and $C_r$ be chains of the same length as $\Chl L$ and $\Chr L$, respectively. Then $\Chl L \times \Chr L$ and $C_l \times C_r$ are isomorphic, so we can regard the map $\gy$, see~\eqref{E:gy}, as a bound-preserving meet-embedding of $L$ into $C_l \times C_r$. So the natural diagram it defines is a diagram of the lattice $L$. If we choose $C_l$ and $C_r$ so that the edges are of the same length, we obtain a ${\E C}_2$-diagram of the SR lattice $L$. \end{proof} Natural diagrams have a left-right symmetry. The symmetric diagram is obtained with the map \begin{equation}\label{E:gy2} \widetilde{\gy} \colon x \mapsto (x \mm \cornr L, x \mm \cornl L) \end{equation} replacing \eqref{E:gy}. \begin{theorem}[Uniqueness Theorem]\label{T:Uniqueness} Let $L$ be a slim rectangular lattice. Then the ${\E C}_2$-diagram of $L$ is unique up to left-right symmetry. \end{theorem}
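As an elementary sanity check of Corollary~\ref{C:meet}, take the simplest SR lattices, the grids. Realizing a grid as a product of two chains with componentwise meets and joins, and taking $I$ to be the whole grid with corners $a$ and $b$, the identity $x = (x \mm a) \jj (x \mm b)$ can be verified directly. The sketch below uses a $3 \times 3$ grid; the coordinatization of the corners is an illustrative assumption.

```python
from itertools import product

# The grid C_3 x C_3 as a product of two chains:
# meet and join are componentwise min and max.
meet = lambda x, y: (min(x[0], y[0]), min(x[1], y[1]))
join = lambda x, y: (max(x[0], y[0]), max(x[1], y[1]))

# Corners of the grid (coordinatization chosen for illustration).
a, b = (2, 0), (0, 2)
grid = list(product(range(3), range(3)))

# Check x = (x meet a) join (x meet b) for every element x of the grid.
checks = [join(meet(x, a), meet(x, b)) == x for x in grid]
```

Indeed, $x \mm a = (x_1, 0)$ and $x \mm b = (0, x_2)$, so the join recovers $x = (x_1, x_2)$.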
\section{Introduction} Electron transfer reactions are crucial in many different fields, ranging from biophysical processes to artificial compounds with potential applications. Ever since the synthesis of the emblematic Creutz--Taube compound~\cite{creutz1969direct}, molecular chemistry has produced a wealth of compounds combining different redox centers and linkers. In this context, mixed-valence inorganic compounds are excellent model systems to investigate such phenomena. The valences can be either trapped, interconvertible with a low activation barrier, or completely delocalized. An intense intervalence charge transfer band characterizes the latter two categories, and magnetic exchange couplings can be observed. Accurate methods of investigation are desirable to determine band shapes and transition energies. Regardless of their computational cost, the objectives are twofold. First, spectroscopic accuracy is a prerequisite to validate the robustness of these methods and to deliver means of interpretation of experimental observations. Second, the microscopic information available in the ground and excited states is a valuable contribution to rationalizing the leading phenomena and constructing model Hamiltonians. Using variational methods, the ground state $\ket{\Psi_0}$ can be determined with different strategies (wavefunction theory (WFT) or density functional theory (DFT)). The construction of excited states $\ket{\Psi_I}$ is much more problematic, in particular when $\ket{\Psi_0}$ and $\ket{\Psi_I}$ share the same spin and space symmetries. Even though stationary points of the expectation value of the Hamiltonian give access to excited states (Ritz's theorem), the strategy and its implementation are not straightforward. As a major breakthrough, the maximum overlap method was designed to converge on higher solutions of the self-consistent field (SCF) equation \cite{gilbert2008self,barca2018simple}.
Despite the loss of orthogonality between the SCF solutions, the method has produced a wealth of excitation energies in different compounds~\cite{gilbert2008self}. The key role of the molecular orbitals (MOs) in describing electron transfer processes was also stressed for the intervalence charge transfer of a synthetic nonheme binuclear mixed-valence compound~\cite{domingo2015electronic}. More recently, selected configuration interaction (CI) calculations (\textit{e.g.} Configuration Interaction using a Perturbative Selection made Iteratively, CIPSI) and quantum Monte Carlo simulations~\cite{dash2021tailoring,cuzzocrea2022reference} were performed based on the ``largest technically-affordable number of determinants'' constructed on a common (\textit{i.e.} state-averaged) set of natural orbitals. Excellent agreement with high-level coupled cluster references was reached in the determination of excitation energies. Tremendous efforts are still put into benchmarking multi-reference excitation energies using state-average methods and into the active space selection issue~\cite{king2022large}. An alternative is the application of the variational method restricted to a subspace orthogonal to the ground state. Following this strategy, one would like to treat all states on the same footing, moving away from the state-average strategy. Therefore, we designed a method, referred to here as orthogonally constrained orbital optimization (OCOO), to generate the first excited state constructed on optimized MOs while maintaining orthogonality with the ground state. The method does not rely on any pre-conditioned structure of the excited states and somewhat differs from previous strategies~\cite{gilbert2008self,barca2018simple,hait2020excited,levi2020variational,carter2020state,gavnholt2008delta}. Our main intention is to identify the regimes where the MOs of the excited state strongly differ from the ground state ones.
The method is applied to a three-site Hubbard Hamiltonian controlled by the on-site ($\mu$), hopping ($t$), and repulsion ($U$) energies (as illustrated in Figure~\ref{fig:FH_3site_STRONG}). \begin{figure}[!h] \centering \includegraphics[width=\columnwidth]{figure1.pdf} \caption{\textbf{Illustration of the model systems ruled by a Hubbard Hamiltonian.}} \label{fig:FH_3site_STRONG} \end{figure} Such a model is a playground to identify the limits of traditional state-average strategies and to foresee electronic correlation regimes calling for different MO basis sets. \begin{figure*} \centering \includegraphics[width=\textwidth]{figure2.pdf} \caption{\textbf{Flowchart of the Orthogonally Constrained Orbital Optimization (OCOO) algorithm.}} \label{fig:flow} \end{figure*} \section{Orthogonally Constrained Orbital Optimization (OCOO) Method} In this section, we provide the details of the OCOO method used to describe excited states. The method starts with the construction of a multi-reference ground state in a given model active space. The latter is constructed from a restricted number of Slater determinants built on orthogonal MOs. The expansion amplitudes and the MO coefficients are optimized following a complete active space self-consistent field (CASSCF) framework~\cite{siegbahn1981complete}. At convergence, an approximate ground state $|{\Psi}_0^\text{\tiny CASSCF}\rangle$ with energy $E_0^\text{\tiny CASSCF}$ is generated with an optimized MO basis set $B_0$ characterized by $\boldsymbol{\kappa}_0$.
Here, ``$\boldsymbol{\kappa}_0$'' relates to the way the MOs $\ket{{\phi}_p(\boldsymbol{\kappa})}$ are parametrized during the orbital optimization process: \begin{equation} \ket{{\phi}_p(\boldsymbol{\kappa})} = \sum_l \ket{\phi_l^\text{HF}} [\mathbf{\exp}(-\boldsymbol{\kappa})]_{lp}, \end{equation} $\boldsymbol{\kappa}$ being the anti-Hermitian matrix generator encoding the orbital rotation parameters, and $\lbrace \phi_l^\text{HF} \rbrace$ a set of initial MOs to be optimized (in our case, Hartree--Fock MOs). Then, the first excited state $|{{\Psi}_1^\text{\tiny OCOO}}\rangle$ is constructed with the twofold objective of ({\it i}) preserving orthogonality with the ground state (\textit{i.e.} $\langle {{\Psi}_0^\text{\tiny CASSCF}}|{{\Psi}_1^\text{\tiny OCOO}}\rangle = 0$), and ({\it ii}) optimizing the MOs to generate a state-specific $B_1$ basis set ($\boldsymbol{\kappa} = \boldsymbol{\kappa}_1$). In practice, it is assumed that the multi-reference structures of both $| {{\Psi}_1^\text{\tiny OCOO}} \rangle $ and $|{\Psi}_0^\text{\tiny CASSCF} \rangle$ follow the same level of description. These states are built with the same number of ``active-space like'' Slater determinants in their respective optimized basis sets $B_0$ and $B_1$. Therefore, any basis set modification is likely to change the physical content of the four configurations. Let us stress that no assumption on the excited state structure is made here, in contrast with strategies used in some reported WFT- and DFT-based approaches (see Refs.~\cite{gavnholt2008delta,doi:10.1021/acs.jctc.8b00406,gilbert2008self}). How the bases $B_0$ and $B_1$ differ is at the heart of the present study.
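The parametrization above can be sketched numerically: a real antisymmetric (anti-Hermitian) generator $\boldsymbol{\kappa}$ yields an orthogonal rotation $\exp(-\boldsymbol{\kappa})$, so rotated MO coefficients remain orthonormal. The snippet below is a minimal illustration with a random generator (the dimension and the identity starting coefficients are arbitrary choices, not taken from the text); the matrix exponential is evaluated through the Hermitian matrix $i\boldsymbol{\kappa}$ to stay within plain NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n))
kappa = X - X.T                      # real antisymmetric generator

# exp(-kappa) = exp(i * (i*kappa)), with i*kappa Hermitian.
w, V = np.linalg.eigh(1j * kappa)
R = (V @ np.diag(np.exp(1j * w)) @ V.conj().T).real   # orthogonal rotation

C_init = np.eye(n)                   # initial orthonormal MO coefficients
C = C_init @ R                       # |phi_p(kappa)> = sum_l |phi_l> [exp(-kappa)]_{lp}
```

The column-wise convention `C_init @ R` matches the summation over $l$ in the equation above.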
In practice, the first excited state energy is estimated by solving the effective eigenvalue problem with fixed parameter $\boldsymbol{\kappa^*}$: \begin{equation}\label{eq:eig_val_prob} \hat{P}( \boldsymbol{\kappa^*} ) \hat{H}^\text{eff}\hat{P}( \boldsymbol{\kappa^*} ) \ket{\Psi_1^\text{\tiny OCOO} } = E_1 \ket{\Psi_1^\text{\tiny OCOO} }, \end{equation} with \begin{equation}\label{eq:GS_eff} \hat{H}^\text{eff} = \hat{H} + \Delta^\text{shift} |{{\Psi}_0^\text{\tiny CASSCF} }\rangle\langle{{\Psi}_0^\text{\tiny CASSCF} }|, \end{equation} where $\hat{H}$ is the full Hamiltonian of the system and $\hat{P}( \boldsymbol{\kappa^*} ) $ a projector over a restricted set of determinants following an active space structure in the fixed $\boldsymbol{\kappa^*}$ MO basis set. The effective active-space Hamiltonian $\hat{P}( \boldsymbol{\kappa^*} ) \hat{H}\hat{P}( \boldsymbol{\kappa^*} ) $ is complemented with a parametrized $\Delta^\text{shift}$ projection (in practice $\Delta^\text{shift} / t = 10^8$) so that any eigenfunction with a non-negligible decomposition on $|{{\Psi}_0^\text{\tiny CASSCF}}\rangle$ gets penalized. As a result, solving Eq.~(\ref{eq:eig_val_prob}) produces $ \ket{\Psi_1^\text{\tiny OCOO} } $, which is an ``effective'' ground state of the associated energy-shifted active-space Hamiltonian $\hat{P}( \boldsymbol{\kappa^*} ) \hat{H}^\text{eff} \hat{P}( \boldsymbol{\kappa^*} )$. In the second step of the OCOO method, we act on the orbital rotation parameters $\boldsymbol{\kappa}$ to optimize the MOs. For this, we define a cost function to be minimized: \begin{equation}\label{eq:CF} \text{CF}(\boldsymbol{\kappa}) = E_1(\boldsymbol{\kappa}) + \lambda \left|\langle {{\Psi}_0^\text{\tiny CASSCF}} | {{\Psi}_1^\text{\tiny OCOO}(\boldsymbol{\kappa})}\rangle \right|^2.
\end{equation} The latter includes ({\it i}) the regular orbital-rotation dependent energy \begin{equation} E_1(\boldsymbol{\kappa}) = \bra{\Psi_1^\text{\tiny OCOO} (\boldsymbol{\kappa}) } \hat{H}\ket{\Psi_1^\text{\tiny OCOO} (\boldsymbol{\kappa}) } \end{equation} (as in the state-specific CASSCF method), and ({\it ii}) an overlap penalty term with amplitude $\lambda$ (in practice $\lambda /t = 10^{8}$). The role of this second contribution is to counterbalance the energy minimization with a measure of orthogonality between $| {{\Psi}_0^\text{\tiny CASSCF}} \rangle$ and $| {{\Psi}_1^\text{\tiny OCOO} (\boldsymbol{\kappa})} \rangle$. A summarized flowchart of the OCOO algorithm is given in Figure~\ref{fig:flow}. The iterative procedure based on Eq.~(\ref{eq:eig_val_prob}) and Eq.~(\ref{eq:CF}) starts with $\boldsymbol{\kappa} = \boldsymbol{\kappa}_0$. At convergence, a multi-reference state is generated in an optimized basis set $B_1$ characterized by $\boldsymbol{\kappa} = \boldsymbol{\kappa}_1$. Let us stress that both steps of the OCOO process call for the calculation of scalar products between multi-reference wavefunctions expressed in two different basis sets, namely $B_0$ and $B_1$ (as shown in Eq.~(\ref{eq:GS_eff}) and Eq.~(\ref{eq:CF})). Convergence is reached as soon as the variation in the cost function of Eq.~(\ref{eq:CF}) is less than $10^{-7} t$ between two successive iterations. \section{Numerical results} Keeping in mind the difficult selection of the active space, the present study is focused on the construction of state-dependent MO basis sets that preserve the orthogonality between the multi-reference wavefunctions. Evidently, a two-electron-in-two-orbital model system is not flexible enough to investigate orbital relaxation. Practically, the OCOO method was implemented on a model system (four electrons on three sites) ruled by a Hubbard Hamiltonian (see Figure~\ref{fig:FH_3site_STRONG}) inspired by a CAS[2,2] on top of a single inactive orbital.
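The effect of the level shift in Eq.~(\ref{eq:GS_eff}) can be illustrated on a toy Hamiltonian (a hypothetical example, independent of the Hubbard model studied below): for a sufficiently large $\Delta^\text{shift}$, the ground state of the shifted operator is the lowest eigenstate of $\hat{H}$ orthogonal to the ground state.

```python
# Toy illustration of the level-shift projection: the ground state of
# H_eff = H + Delta |Psi0><Psi0| is the first excited state of H.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2                       # random real-symmetric "Hamiltonian"

evals, evecs = np.linalg.eigh(H)
psi0 = evecs[:, 0]                      # ground state of H

delta_shift = 1e8                       # large penalty (Delta_shift/t = 1e8 in the text)
H_eff = H + delta_shift * np.outer(psi0, psi0)

evals_eff, evecs_eff = np.linalg.eigh(H_eff)
psi1 = evecs_eff[:, 0]                  # "effective" ground state of H_eff

assert np.isclose(evals_eff[0], evals[1])  # recovers E_1 of H
assert abs(psi0 @ psi1) < 1e-6             # orthogonal to |Psi0>
```

In the OCOO algorithm this trick is combined with the overlap penalty of Eq.~(\ref{eq:CF}), so that orthogonality is maintained while the orbitals are rotated.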
Given a set of hopping $t$ and on-site repulsion $U$ energies, the on-site potential $\mu$ was varied and the ground and excited state energies were calculated based on either state-averaged CAS[2,2]SCF calculations (\textit{i.e.} a single set of MOs to describe both states), or state-specific schemes (\textit{i.e.} two sets of optimized MOs, CASSCF and OCOO for the ground and excited states, respectively). Whatever the strategy, both wavefunctions were expanded on four Slater determinants written as $\ket{1\overline{1}2\overline{2}},\ket{1\overline{1}2\overline{3}},\ket{1\overline{1}\overline{2}3}$ and $\ket{1\overline{1}3\overline{3}}$, where the index ``$i$'' (or ``$\overline{i}$'') stands for the $i$th $\alpha$-MO (or $\beta$-MO). Charge-transfer phenomena being a motivation of the present study, symmetric and anti-symmetric model systems were considered by varying the on-site potential values (see Figure~\ref{fig:FH_3site_STRONG}). All numerical implementations and calculations presented in this paper were carried out within the python package \textit{QuantNBody} (see Ref.~\cite{codeQuantNBody}) recently developed by one of us (SY). This package was designed to facilitate the numerical implementation of second quantization algebra and the manipulation of many-body wavefunctions. We used this numerical toolkit to build/diagonalize the Hubbard Hamiltonians, implement orbital optimizations and evaluate the non-trivial overlap between multi-reference wavefunctions expressed in different MO bases. Electronic structures result from the competition between one-electron and two-electron contributions. Therefore, we first derived the eigenvalues of the one-body part of the Hubbard Hamiltonians $\hat{h}$ for both systems to evaluate the so-called spectral band. The eigenvalues are readily derived, and the spectral band $\Delta \epsilon$ of the symmetric trimer reads \begin{equation} \label{eq:spectral_band_sym} \Delta \epsilon = \sqrt{\mu^2+8t^2}.
\end{equation} For the anti-symmetric trimer, this value becomes \begin{equation} \label{eq:spectral_band_antisym} \Delta \epsilon = 2\sqrt{\mu^2+2t^2} \end{equation} \begin{figure*} \centering \includegraphics[width=0.95\columnwidth]{figure3_left.pdf} \includegraphics[width=0.95\columnwidth]{figure3_right.pdf} \caption{\textbf{Symmetric three-site Hubbard model with tunable on-site potential on the central site (strong correlation regime: $U/t=10$).}\textbf{ Left panel:} Energies obtained from FCI (in black), SA-CASSCF (in green) and with the combination CASSCF+OCOO (in orange), respectively for the ground and first excited states. The middle panel gives the vertical excitation energies obtained with the three methods. The upper panel shows the ratio $\Delta \epsilon/U$. \textbf{Right panels:} Decompositions of the multi-reference $\ket{\Psi_0^\text{CASSCF}}$ and $\ket{\Psi_1^\text{OCOO}}$ in the $B_0$ basis set. For the latter, strong variations are observed, stressing the deep basis set modifications.} \label{fig:trimer_sym} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\columnwidth]{figure4_left.pdf} \includegraphics[width=0.95\columnwidth]{figure4_right.pdf} \caption{\textbf{Anti-symmetric three-site Hubbard model with tunable on-site potential on left and right sites (strong correlation regime: $U/t=10$).}\textbf{ Left panel:} Energies obtained with FCI (in black), SA-CASSCF (in green) and with the combination CASSCF+OCOO (in orange) respectively for ground and first excited state. The middle panel gives the vertical excitation energies obtained with the three methods. The upper panel shows the evolution of the ratio $\Delta \epsilon/U$. \textbf{Right panels:} Decompositions of the multi-reference $\ket{\Psi_0^\text{CASSCF}}$ and $\ket{\Psi_1^\text{OCOO}}$ in the $B_0$ basis set. 
For the latter, strong variations are observed, stressing the deep basis set modifications.} \label{fig:trimer_asym} \end{figure*} Based on these model systems, the relevance of the OCOO approach was examined and our results were compared to full-CI calculations in a strong correlation regime $U/t = 10$. All results are shown in Figures \ref{fig:trimer_sym} and \ref{fig:trimer_asym}. First, as shown in the bottom left panels of the Figures, the ground state energy is faithfully reproduced by the CASSCF and SA-CASSCF calculations over the whole range of $\mu$ values. Given the small system size, orbital relaxations are sufficient to retrieve most of the full-CI wavefunction, with $| \langle{{\Psi}_0^\text{\tiny CASSCF}|\Psi_0^\text{\tiny FCI}} \rangle | \sim 0.99$ for $\mu \neq 0$. In the limit $\mu = 0$, the spectral bands are minimized (see Eqs.~\ref{eq:spectral_band_sym} and \ref{eq:spectral_band_antisym}) and the CAS[2,2] picture might be questionable (projections $\sim 0.91$). Since the ground state energy does not suffer from the use of SA-CASSCF MOs, one would like to evaluate the robustness of a state-average method in the evaluation of vertical excitation energies, denoted $\Delta E$. Strong deviations with respect to full-CI are observed, and $\Delta E$ can be overestimated by a factor of up to 2.5 (in the worst-case scenarios seen in Figures~\ref{fig:trimer_sym} and~\ref{fig:trimer_asym}, middle panels). Such shortcomings of the SA-CASSCF approach are anticipated when $\Delta \epsilon / U < 1 $, as confirmed in the top left panels of Figures~\ref{fig:trimer_sym} and \ref{fig:trimer_asym}. Therefore, the SA-CASSCF excitation energy deteriorates for $\mu/t \sim 5$ (symmetric trimer) and $ -10 < \mu/t < 10$ (anti-symmetric trimer), revealing the inaccuracy of the state-average strategy.
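The one-body spectral bands of Eqs.~(\ref{eq:spectral_band_sym}) and (\ref{eq:spectral_band_antisym}) can be verified directly. The sketch below assumes a linear three-site chain with hopping $-t$ and the on-site patterns of Figure~\ref{fig:FH_3site_STRONG} (potential $\mu$ on the central site for the symmetric trimer, $\mp\mu$ on the outer sites for the anti-symmetric one); this geometry is an assumption consistent with the quoted formulas:

```python
# Numerical check of the one-body spectral bands of the two trimers.
import numpy as np

t, mu = 1.0, 3.0

# Symmetric trimer: tunable on-site potential mu on the central site.
h_sym = np.array([[0.0, -t, 0.0],
                  [-t,   mu, -t ],
                  [0.0, -t, 0.0]])
eps_sym = np.linalg.eigvalsh(h_sym)
assert np.isclose(eps_sym[-1] - eps_sym[0], np.sqrt(mu**2 + 8 * t**2))

# Anti-symmetric trimer: potentials -mu and +mu on the outer sites.
h_asym = np.array([[-mu, -t, 0.0],
                   [-t,  0.0, -t ],
                   [0.0, -t,  mu]])
eps_asym = np.linalg.eigvalsh(h_asym)
assert np.isclose(eps_asym[-1] - eps_asym[0], 2 * np.sqrt(mu**2 + 2 * t**2))
```

Both spreads are minimized at $\mu = 0$, which is exactly where the CAS[2,2] picture becomes questionable in the text.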
In contrast, excellent agreement between $E_1^{\mbox{\tiny OCOO}}$ and $E_1^{\mbox{\tiny FCI}}$ is reached for both model systems as soon as the excited state MOs are optimized following the OCOO method. The excitation energy is perfectly recovered as soon as each multi-reference state is individually generated in its optimized basis set. A similar conclusion was reached from a non-systematic procedure in donor-acceptor compounds, where it was suggested that orbital relaxation should be explicitly taken into account~\cite{meyer2014charge}. To get a better view of the MO modifications, the excited state wavefunction $\ket{\Psi_1^\text{OCOO}}$ was expanded in the ground state basis set $B_0$. As shown in the right panels of Figures~\ref{fig:trimer_sym} and~\ref{fig:trimer_asym}, the compact form constructed on the four configurations is lost, a signature of deep changes between the $B_0$ and $B_1$ basis sets. Quantitatively, the projection of $\ket{\Psi_1^\text{OCOO}}$ on the CAS[2,2] subspace defined in $B_0$ exhibits strong variations and can even vanish. Besides, the zero-projection domain is reduced from $\mu/t \in \left[0, 12 \right]$ (Figure~\ref{fig:trimer_sym}) to $\mu/t \in \left[2, 3 \right]$ (Figure~\ref{fig:trimer_asym}) upon symmetry breaking. These observations support the idea of the ill-definition of the $B_0$ basis set for the compact active-space representation of the first excited state in the strong correlation regime ($U/t = 10$) of symmetrical systems (e.g. mixed-valence compounds). Finally, note that all our conclusions and analysis remain unchanged in weaker correlation regimes (\textit{i.e.} $U/t = 5$, not shown here), for which the domain calling for a simultaneous optimization of the ground and excited state orbitals is reduced. \section{Conclusion} An orthogonally constrained orbital optimization (OCOO) method is suggested and implemented to foresee the correlation regimes where orbital relaxations cannot be ignored.
The structure of the ground state is of CASSCF type, whereas the excited state is defined as the lowest-lying orthogonal multi-reference one. Based on a trimer model system ruled by a Hubbard Hamiltonian, the excitation energy is evaluated from a multi-reference state-specific description of orthogonal states. As soon as the on-site repulsion $U$ competes with the spectral band $\Delta \epsilon$, SA-CASSCF calculations fail to reproduce the vertical excitation energy. Despite its simplicity, the model not only offers a practical method to democratically treat the ground and excited states, but also identifies correlation regimes where the robustness of SA-CASSCF might be questionable to reach spectroscopic accuracy. Finally, this work aims at re-emphasizing the importance of orbital optimization in zeroth-order multi-reference wavefunction expansions. Explorations of the benefit of a second-order perturbation treatment are planned to stress the decisive choice of MO basis sets. \section{Acknowledgments} This work was supported by the Interdisciplinary Thematic Institute SysChem via the IdEx Unistra (ANR-10-IDEX-0002) within the program Investissement d’Avenir. \phantomsection
\section{INTRODUCTION} Young stars form in dense clouds that collapse under their own gravity. The angular momentum that is present at the beginning of this process forces the material to clump together into a disk around the young star. Planets form inside these disks either through instabilities in the disk that cause the formation of dense clumps or through pebble accretion. Multiple pathways to the formation of planets have been proposed, and there is no clear answer yet on what each process contributes. A thorough understanding of the formation and evolution of exoplanets will allow us to gain an understanding of not only the origin of Earth, but possibly life. To this end we will need to characterize exoplanets in all stages of their evolution, from young to old. However, the most common characterization technique utilizes the transit method, which suffers from astrophysical limitations such as constraints on the orbital geometry or astrophysical noise due to e.g. star spots or circumstellar material. Direct imaging plays an important role in overcoming these observational limitations. For both young and old systems, the influence of the star, and of possible circumstellar material, can be significantly reduced by spatially resolving the planet from its environment, allowing detailed characterization of the planet. The past few years have seen a large step in the capabilities of direct imaging instruments. Instruments such as SPHERE \cite{beuzit2019sphere} or MagAO \cite{close2014into} have observed the environment around many young stars in search of giant proto-planets: planets that are still accreting material from their birth environment. Young proto-planets sweep up the gas and dust within the proto-planetary disk. The accretion of gas releases a large amount of energy when the gas falls onto the planet, or its circum-planetary disks.
Most of this energy is released by specific line emission such as the hydrogen emission lines \cite{aoyama2020spectral}. These signatures are therefore among the strongest signposts of accreting gas giants. Several instruments contain sets of narrowband imaging filters that image the emission lines and the nearby continuum \cite{close2014discovery}. Recent observations have shown that medium- to high-resolution spectroscopy is ideal to observe accreting proto-planets \cite{haffert2019pds70, xie2020searching}. However, no direct imaging instrument has the capability of visible-light high-resolution integral-field spectroscopy. The Magellan Adaptive Optics eXtreme (MagAO-X) system is a new adaptive optics system for the Magellan Clay 6.5m telescope at Las Campanas Observatory (LCO). MagAO-X has been designed to provide extreme adaptive optics (ExAO) performance in the visible. It will ultimately deliver Strehl ratios of 90\% at 0.9 $\mu$m and nearly 80\% at H$\alpha$ (Males et al., 2018). The performance of MagAO-X in the visible is comparable to what other direct imaging instruments achieve in H or K-band, making MagAO-X the ideal instrument to push exoplanet characterization to the visible range. However, MagAO-X does not have any spectroscopic capability. The Visible Integral-field Spectrograph eXtreme (VIS-X) is a spectrograph for MagAO-X that will cover the optical spectral range at high spectral and high spatial resolution. Section 2 will expand on the primary science case and the instrumental requirements. Section 3 will then discuss the instrument design and the first lab results. \section{Primary science case for VIS-X: Accreting proto-planets} The accretion process of sub-stellar companions is a key part of the information that can be used to discriminate between the different formation processes. The ability to accrete gas at all and the actual mass accretion rate will allow us to discriminate between formation pathways \cite{stamatellos2015properties}.
Gas accretion on massive planets is thought to be a very energetic process, and the emitted accretion luminosity can become comparable to the total internal luminosity of the planet \cite{mordasini2017characterization}. Therefore, visible-light High-Contrast Imaging (HCI) is a promising approach to detect these young protoplanets, because it provides access to strong accretion tracers such as H$\alpha$. The huge potential of H$\alpha$ imaging has been demonstrated by recent detections of actively accreting companions with HST/WFC (Zhou et al. 2014), Magellan/MagAO \cite{close2014discovery, wu2017alma, wagner2018pds70}, and more recently VLT/MUSE \cite{haffert2019pds70, xie2020searching, eriksson2020strong }. Haffert et al. 2019 show that integral-field spectroscopy is a very powerful technique to unambiguously detect proto-planets. Conventional high-contrast imaging techniques, such as Angular Differential Imaging (ADI) \cite{marois2006adi}, often lead to point-like features that are caused by either residual instrumental artifacts or due to the presence of non-symmetric circumstellar disks \cite{ligi2018investigation}. With high-resolution integral-field spectroscopy we can eliminate the star and the circum-stellar disk by removing a scaled stellar spectrum from each spatial pixel (spaxel). The signal from the circum-stellar disk is also removed during this procedure because the light from the disk in the visible range consists mainly of reflected star light and is therefore identical to the stellar spectrum. The emission lines from accreting planets are very narrow \cite{aoyama2020spectral}. Marleau et al. in prep show that the H$\alpha$ emission line starts to be resolved around a resolving power of 15,000 (20 km/s). This implies that optimal SNR can be achieved when the resolving power of the instrument is 15,000. 
If the resolving power is increased even further the light from the proto-planet will be smeared out over more detector pixels, which will increase the detector noise contribution. The signal to noise (SNR) of the H$\alpha$ measurement is, \begin{equation} \mathrm{SNR} = \frac{T F_{\mathrm{H\alpha},P}}{\sqrt{T (F_{\mathrm{Phot},S} + F_{\mathrm{H\alpha},S}) C(\theta_P) + N\sigma_D^2 }}. \end{equation} Here $T$ is the throughput from the top of the atmosphere to the camera, $F_{\mathrm{H\alpha},P}$ and $F_{\mathrm{H\alpha},S}$ are the H$\alpha$ flux of the planet and star, $F_{\mathrm{Phot},S}$ is the stellar photospheric emission, $C(\theta_P)$ the contrast of the observations at angular separation $\theta_P$, $N$ is the number of pixels that are used to sample an unresolved emission line and $\sigma_D$ is the detector noise (read noise + dark current). Under favourable conditions the H$\alpha$ line of the star is separated from the H$\alpha$ line of the planet due to an intrinsic radial velocity difference. At high enough spectral resolution the stellar H$\alpha$ flux will not contribute at all at the velocity position of the planet. The flux of the stellar continuum is proportional to the bandwidth of a single spectral slice. Taking this into account we arrive at the following SNR for the H$\alpha$ detection, \begin{equation} \mathrm{SNR} = \frac{T F_{\mathrm{H\alpha},P}}{\sqrt{T \langle F_{\mathrm{Phot}, S} \rangle \delta \lambda C(\theta_P) + N\sigma_D^2 }}. \end{equation} Here the photospheric flux of the star has been replaced by the average flux density times the bandwidth of a single spectral channel. The bandwidth $\delta \lambda = \lambda / R$, where $R$ is the resolving power of the spectrograph. In the regions where photon noise dominates, the SNR is \begin{equation} \mathrm{SNR} = \frac{F_{\mathrm{H\alpha},P} \sqrt{TR}}{\sqrt{ \langle F_{\mathrm{Phot}, S} \rangle \lambda C(\theta_P)}}. 
\end{equation} This shows that the expected SNR scales with $\sqrt{R}$, under the assumption that the spectral line is unresolved. A high-resolution spectrograph at $R=15,000$ can have a large gain in sensitivity compared to narrowband imaging ($R=100$ for MagAO, and $R=120$ or $R\sim650$ for the broadband and narrowband H$\alpha$ filters in SPHERE, respectively). A rough order of magnitude in SNR improvement is $\sqrt{15000 / 100}\approx12.2$. This shows that significant gains can be made by using high-resolution integral-field spectroscopy for accreting proto-planets. \subsection{Post-processing gain} There are several advantages to high-resolution integral-field spectroscopy for emission-line imaging. The first and foremost advantage of HRS is that it is easier to disentangle the light from the emission lines from the neighboring continuum. Only two images are taken with the classic approach of dual band imaging. In post-processing, the continuum image is magnified by $\lambda_1 / \lambda_2$ to take care of the chromatic scaling due to diffraction. For accurate subtraction of one channel from the other, the total flux has to be scaled. This is usually achieved by measuring the total flux within the Airy core. After the flux correction, the images can be subtracted from each other, \begin{equation} \delta I = I(\lambda_1) - a I(\lambda_2) - b. \end{equation} Here $I(\lambda_i)$ is the observed image at spectral channel $i$, $a$ is the linear scale parameter, and $b$ is the background. This approach can be used to gain in contrast; however, the gain is limited because this model assumes that the diffraction pattern and speckles can be modeled by a global linear scaling. This approach would work for a system without any wavefront aberration. However, any amount of wavefront aberration will introduce non-linear chromatic behavior in the focal plane speckles. This is especially true when amplitude errors are also present.
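Before turning to the chromatic behavior of the speckles, the $\sqrt{R}$ scaling above gives a quick estimate of the sensitivity gain over the narrowband imagers mentioned in the text (a back-of-the-envelope sketch; the resolving powers are the values quoted above):

```python
# Photon-noise-limited SNR gain of an R = 15,000 spectrograph over
# narrowband imaging, using SNR proportional to sqrt(R) for an
# unresolved emission line.
import math

R_visx = 15_000
imagers = {"MagAO SDI": 100, "SPHERE broadband": 120, "SPHERE narrowband": 650}

for name, R in imagers.items():
    gain = math.sqrt(R_visx / R)
    print(f"{name:18s} R = {R:4d}  SNR gain ~ {gain:.1f}")
```

This reproduces the rough order-of-magnitude factor of $\sim$12 quoted above for the $R=100$ case.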
The pupil plane electric field can be represented by, \begin{equation} E_p = A(1+g) e^{i \frac{2\pi}{\lambda} \delta}. \end{equation} Here $E_p$ is the pupil plane electric field, $A$ the pupil function, $g$ the amplitude aberrations and $\delta$ the wavefront error. For high-contrast imaging instruments $\delta$ is usually small and the exponential can be expanded into its Taylor series, \begin{equation} E_p = A(1+g) e^{i \frac{2\pi}{\lambda} \delta} \approx A(1+g)\left(1 + \frac{2\pi i}{\lambda} \delta \right). \end{equation} The propagation from the input pupil plane through the optical system to the final science focal plane is represented by the linear operator $C$. With this operator in hand, the focal plane electric field is $E_f = CE_p$. This representation holds for any type of IFS implementation, such as micro-lens array based IFSs or dual band imagers. Detectors cannot measure the electric field directly; they measure the intensity. Writing $P = A(1+g)$, the final intensity that we measure is, \begin{equation} I_p = \left|CP + CP\frac{2\pi i}{\lambda}\delta\right|^2 = |CP|^2 + \left|CP\frac{2\pi i}{\lambda}\delta\right|^2 + 2\Re\left\{CP \left(CP \frac{2\pi i}{\lambda}\delta\right)^{\dagger}\right\}. \end{equation} From this it is clear that even in the small-aberration regime, there are terms with a different chromatic behavior. The aberration-free term has no chromatic scaling except for the chromatic magnification due to diffraction, while the other two terms each scale differently with wavelength. In this example, the wavefront error was achromatic and the higher-order terms have been neglected. Such a simple example already shows why there is no global linear relation between two different spectral channels, and also explains why DBI will not remove all aberrations. \begin{figure} [ht] \begin{center} \includegraphics[]{./Figures/dbi_ifs_contrast_curve} \end{center} \caption[example] { \label{fig:example} The radial 1-$\sigma$ contrast curve after post-processing.
For the DBI method we applied the optimal scaling of the two spectral channels. Each color represents a different amount of wavefront aberration. The solid curves are the contrast curves after DBI post-processing and the dashed curves are the contrast curves for the IFS post-processing. For the DBI method the contrast decreases when the wavefront error is increased, while for the IFS observations the residuals increase slightly and always stay well below the DBI residuals. When no wavefront errors are present the 1-$\sigma$ contrast for the DBI method is between $10^{-7}$ and $10^{-8}$.} \end{figure} More spectral channels have to be measured to accurately model the chromatic behavior of the stellar speckles. An IFS provides such an opportunity. However, having many spectral channels is not the only requirement. Chromatic scaling due to diffraction happens at spectral resolving powers of $R=N$, with $R$ the resolving power and $N$ the field of view in units of $\lambda/D$. Typical instruments have a field of view of $\approx100 \lambda/D$. The instrument should have $R\gg100$ to accurately measure the speckles. However, most if not all direct imaging IFSs have low spectral resolving power, e.g. SPHERE-IFS has 50, CHARIS/SCEXAO has 20-80 and ALES at the LBT has 40. Speckle removal with higher-resolution spectroscopy has already proven itself to work better than DBI~\cite{hoeijmakers2018atomic}. The post-processing of HR-IFS data uses the assumption that diffraction and instrumental effects happen at low spectral resolution and can therefore be modeled by low-order polynomials. This means that the spectrum at every pixel can be described by a reference stellar spectrum multiplied by a low-order polynomial. There are many spectral channels available for each spatial pixel (spaxel) in the IFU, therefore a different polynomial model can be estimated for each spaxel. This is described by, \begin{equation} I_j(\lambda) = \sum_i a_{ij} \phi_i(\lambda) S(\lambda).
\end{equation} A Linear Least-Squares (LLS) solution can be found for the polynomial coefficients because the model is linear in the coefficients. Good choices of low-order polynomials are Chebyshev or Legendre polynomials. These are orthogonal and create robust matrices for the matrix inversion step. The results of the post-processing gain are shown in Figure \ref{fig:example}. The high-resolution observations are well below $10^{-8}$, which is more than sufficient to detect accreting proto-planets. The DBI mode can have strong residuals, depending on the exact chromatic behavior of the speckles. The DBI method is limited to $>10^{-6}$ at the inner regions for the smallest wavefront errors. This simulation only considered a single phase screen in the pupil. Real systems are expected to contain stronger and more complex chromatic behavior, which will push the post-processed contrast to the $0.5\lambda$ curve. \subsubsection{First end-to-end simulations} End-to-end simulations of an $R=15000$ H$\alpha$ spectrograph coupled to MagAO-X have been performed to investigate the gain in performance. We have simulated a system with 50 actuators across the pupil driven by an unmodulated pyramid wavefront sensor. The target was an 8th magnitude star. Median atmospheric conditions of the Las Campanas site have been assumed ($v=15$ m/s, $r_0=0.16$ at $\lambda=550$ nm). A separate static phase screen with 50 nm rms has been used to simulate non-common path errors. The sensitivity curves are shown in Figure \ref{fig:sensitivity}. The derived contrast curves show that VIS-X will indeed add roughly a factor of 10 to the sensitivity of MagAO-X. MagAO-X itself will already be more sensitive than any other instrument because it has been optimized to operate as an xAO system in the visible. VIS-X shows the largest gain in the inner few $\lambda/D$, exactly where we expect to find the most planets \cite{close2020separation}.
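The spaxel-wise polynomial model described above can be sketched on synthetic data (all spectra and amplitudes below are invented for illustration): the stellar reference spectrum times a low-order Legendre modulation is fitted by linear least squares, and the narrow planetary emission line survives in the residual.

```python
# Sketch of spaxel-wise speckle removal: fit (Legendre polynomial) x
# (reference stellar spectrum) to one spaxel spectrum by linear
# least squares. All inputs are synthetic/illustrative.
import numpy as np
from numpy.polynomial import legendre

n_chan, order = 401, 3
x = np.linspace(-1.0, 1.0, n_chan)           # scaled wavelength axis

# Stellar reference spectrum with a few absorption lines.
centers = np.array([-0.8, -0.5, -0.1, 0.15, 0.45, 0.7])
S = 1.0 - 0.3 * np.exp(-0.5 * ((x[:, None] - centers) / 0.01) ** 2).sum(axis=1)

modulation = 1.0 + 0.4 * x - 0.2 * x**2      # smooth chromatic speckle term
planet = 5e-3 * np.exp(-0.5 * ((x - 0.3) / 0.008) ** 2)  # narrow emission line
spaxel = modulation * S + planet

# Design matrix: Legendre polynomials multiplied by the reference spectrum.
V = legendre.legvander(x, order) * S[:, None]
coeffs, *_ = np.linalg.lstsq(V, spaxel, rcond=None)
residual = spaxel - V @ coeffs

# The smooth stellar/speckle part is removed; the narrow line survives.
assert residual[np.argmin(np.abs(x - 0.3))] > 2e-3
```

In a real pipeline this fit is repeated independently for every spaxel of the IFU cube, with the measured stellar spectrum as $S(\lambda)$.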
This demonstrates the benefit of high-resolution IFU observations of the H$\alpha$ emission line. Additionally, due to the higher resolution it may become possible to study the line shapes and derive the physical state of the accretion process. \begin{figure} [ht] \begin{center} \includegraphics[width=\textwidth]{./Figures/ha_line_sensitivity} \end{center} \caption[example] { \label{fig:sensitivity} The radial 1-$\sigma$ contrast curve after post-processing. The red dashed curve represents the sensitivity of MagAO-X with dual band imaging, while the blue dashed curve shows the sensitivity with VIS-X. Both curves are compared to actual observations from MUSE and SPHERE. This figure has been adapted from \cite{xie2020searching}.} \end{figure} \section{VIS-X DESIGN AND FIRST MEASUREMENTS} Due to constraints on detector real estate there is a trade-off between spatial sampling, field of view, spectral bandwidth and spectral resolution. Maximizing the field of view and spectral resolving power requires a large-format detector. In the past few years there has been a significant amount of progress in the quality of back-illuminated CMOS detector technology. SONY released the imx455 sensor in 2019 with 9600$\times$6422 pixels, a quantum efficiency close to 80 percent at H$\alpha$, a peak efficiency close to 90 percent (at 550 nm), and a $\sim$1 electron read noise at the highest gain setting. With water-based cooling it is possible to reduce the dark current to below 0.001 electron/s/pixel. These properties make this an ideal sensor for visible integral-field spectroscopy. \begin{figure} [ht] \begin{center} \includegraphics[width=\textwidth]{./Figures/visx_layout} \end{center} \caption[example] { \label{fig:layout} A schematic drawing of VIS-X in the available space envelope of MagAO-X. The beam from MagAO-X enters from the top right and follows the path in the direction of the arrow. A spherical mirror-based relay will magnify the PSF onto an MLA.
The MLA output is then dispersed by a first-order grating spectrograph.} \end{figure} With an internal UA seed grant, we developed a prototype micro-lens array (MLA) based integral-field spectrograph that can operate in a narrowband (6 nm) around H$\alpha$ using the new imx455 sensor. MLA-based IFUs are in use in all direct imaging instruments and are considered a mature technology, and therefore a low-risk design. The prototype has been designed to deliver a resolving power of R$\sim15000$ at H$\alpha$, with a fixed spectral bandwidth of 5 nm. The prototype has a limited field of view of 0.5” in diameter. Figure \ref{fig:layout} shows a schematic of the spectrograph within the available space envelope of MagAO-X. We use two spherical mirrors as an achromatic relay that magnifies the F/69 beam of MagAO-X to sample the PSF with 3 spaxels per $\lambda/D$ at H$\alpha$ with the MLA. This will keep us Nyquist sampled down to H$\gamma$ (434 nm). On-sky experience with the MUSE IFU at the VLT showed us that well-sampled LSFs are critical for accurate post-processing\cite{xie2020searching}. The relay itself has a theoretical wavefront rms of $\lambda/100$ because we are working with slow beams (F/69 to F/870). The performance is therefore mainly determined by the manufacturing quality of the relay mirrors. We found that $\lambda$/4 mirrors are sufficient for our purpose, and lab measurements confirmed that there is no indication of degradation of the PSF after the relay; see Figure \ref{fig:extracted_psf} for an extracted PSF. The extracted PSF contains roughly $\lambda$/10 rms defocus, which is well within the correction range of the NCPA DM of MagAO-X. \begin{figure} [ht] \begin{center} \includegraphics[width=\textwidth]{./Figures/extracted_psf} \end{center} \caption[example] { \label{fig:extracted_psf} The reconstructed PSF from VIS-X.
There is a slight defocus visible ($\sim\lambda$/10 rms), and some extraction artifacts on the lower left due to the low SNR of the calibration files. The middle figure shows the fitted PSF model, and the right figure shows the residuals. The residuals are on the order of $\sim10^{-2}$.} \end{figure} The PSF is sampled by a micro-lens array with a 192 $\mu$m pitch and a 3.17 mm focal length (F/16.5). The spectrograph’s backend is kept as simple as possible by using a first-order layout with identical lenses (Thorlabs TTL200MP) for the camera and collimator, both having a focal length of 200 mm. The current design has diffraction-limited performance over $\pm$0.25 arcseconds on-sky, but the performance rapidly degrades outside this small field of view. The monochromatic PSFs of the spaxels can be seen in Figure \ref{fig:psflets}. \begin{figure} [ht] \begin{center} \includegraphics[width=\textwidth]{./Figures/zoomed_in_psflets} \end{center} \caption[example] { \label{fig:psflets} The PSFs of several spaxels. The area that is shown here is part of the center of the IFU. All spaxels are well separated from each other and show diffraction-limited image quality.} \end{figure} \section{Conclusion} This manuscript has described the rationale and design of a new IFU, VIS-X, for MagAO-X. Its main focus is on accreting proto-planets, observed through H$\alpha$ at high spectral resolution. We expect that VIS-X will provide a gain in sensitivity of a factor of 100 compared to other direct imaging instruments. This will enable us to search for fainter proto-planets at smaller angular separations. VIS-X has its first light scheduled for Fall 2021. \acknowledgments The authors acknowledge funding from the Lucas/San Diego Astronomy Association Junior Faculty Award to build the VIS-X spectrograph.
Support for this work was provided by NASA through the NASA Hubble Fellowship grant \#HST-HF2-51436.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. This research made use of HCIPy, an open-source object-oriented framework written in Python for performing end-to-end simulations of high-contrast imaging instruments \cite{por2018high}.
\section{Introduction} \label{sec:introduction} The discovery of a Higgs boson with a mass of around 125\GeV, \ensuremath{\mathrm{\PH(125)}}\xspace, at the LHC in 2012~\cite{Aad:2012tfa,Chatrchyan:2012xdj,Chatrchyan:2013lba} has turned the standard model (SM) of particle physics into a theory that could be valid up to the Planck scale. In the SM, \ensuremath{\mathrm{\PH(125)}}\xspace emerges from the spontaneous breaking of the electroweak \ensuremath{\mathrm{SU}(2)_{\mathrm{L}}}\xspace symmetry. While the nature of the underlying mechanism leading to this symmetry breaking and the exact form of the required symmetry-breaking potential are still to be explored, the measured couplings of \ensuremath{\mathrm{\PH(125)}}\xspace to fermions and gauge bosons, with 5--20\% experimental precision~\cite{Khachatryan:2016vau,Sirunyan:2018koj,Aad:2019mbh,Sirunyan:2019twz}, are in good agreement with the expectation for an SM Higgs boson with a mass of $125.38\pm0.14\GeV$~\cite{Sirunyan:2020xwk}. The SM still leaves several fundamental questions related to particle physics unaddressed, including the presence of dark matter and the observed baryon asymmetry in nature. Many extensions of the SM that address these questions require a more complex structure of the part of the theory that is related to \ensuremath{\mathrm{SU}(2)_{\mathrm{L}}}\xspace breaking, often referred to as the Higgs sector. Such models usually predict additional spin-0 states and modified properties of \ensuremath{\mathrm{\PH(125)}}\xspace with respect to the SM expectation. Models incorporating supersymmetry (SUSY)~\cite{Golfand:1971iw,Wess:1974tw} are prominent examples. In the minimal extension of the SM, the minimal supersymmetric SM (MSSM)~\cite{Fayet:1974pd, Fayet:1977yc}, the model predicts three neutral and two charged Higgs bosons. 
Searches for additional heavy neutral Higgs bosons in the context of the MSSM were carried out in electron-positron collisions at the LEP collider at CERN~\cite{Schael:2006cr} and in proton-antiproton collisions at the Fermilab Tevatron~\cite{Aaltonen:2009vf, Abazov:2010ci,Abazov:2011jh,Aaltonen:2011nh}. At the LHC such searches have been carried out by the ATLAS and CMS Collaborations in the \PQb quark~\cite{Chatrchyan:2013qga,Khachatryan:2015tra,Sirunyan:2018taj,ATLAS:2019tpq}, dimuon~\cite{Aad:2012cfr,CMS:2015ooa,CMS:2019mij,ATLAS:2019odt}, and \ensuremath{\PGt\PGt}\xspace~\cite{Aad:2012cfr,Aad:2014vgg,Aaboud:2016cre,Aaboud:2017sjh,Chatrchyan:2011nx, Chatrchyan:2012vp,Khachatryan:2014wca,Sirunyan:2018zut,ATLAS:2020zms} final states. The \ensuremath{\PGt\PGt}\xspace final state has a leading role in these searches, since \PGt leptons can be identified with higher purity than \PQb quarks and backgrounds from genuine \ensuremath{\PGt\PGt}\xspace events can be estimated with higher accuracy, while the branching fractions for the decay into \PGt leptons are typically larger than those for the decay into muons because of the larger \PGt lepton mass. There are several other examples of extended Higgs sectors, which are summarized in Ref.~\cite{Steggemann:2020egv}, that could give appreciable resonant \ensuremath{\PGt\PGt}\xspace production rates in addition to the known SM processes at the LHC. Furthermore, models that include additional coloured states carrying both baryon and lepton quantum numbers, known as leptoquarks~\cite{Diaz:2017lit, Schmaltz:2018nls}, can lead to an enhancement in the nonresonant production rates of \ensuremath{\PGt\PGt}\xspace pairs with large invariant masses via the leptoquark $t$-channel exchange. Searches for resonant and nonresonant \ensuremath{\PGt\PGt}\xspace signatures are thus complementary in the exploration of physics beyond the SM (BSM) at the LHC. 
Recent searches for single- and pair-production of third-generation leptoquarks at the LHC are reported in Refs.~\cite{CMS:2017xcw,CMS:2018txo,CMS:2018iye, ATLAS:2019qpq,ATLAS:2020dsf,CMS:2020wzx,ATLAS:2021jyv,ATLAS:2021yij,ATLAS:2021oiz}. In this paper the results of three searches for both resonant and nonresonant \ensuremath{\PGt\PGt}\xspace signatures are presented: \begin{enumerate} \item The first search, which is meant to be as model independent as possible, targets the production of a single narrow spin-0 resonance \ensuremath{\phi}\xspace, in addition to \ensuremath{\mathrm{\PH(125)}}\xspace, via gluon fusion (\ensuremath{\Pg\Pg\phi}\xspace) or in association with \PQb quarks (\ensuremath{\PQb\PQb\phi}\xspace). Assumptions that have been made for this search are that the width of \ensuremath{\phi}\xspace is small compared with the experimental resolution, and that the \ensuremath{\phi}\xspace transverse momentum (\pt) spectrum for \ensuremath{\Pg\Pg\phi}\xspace production as well as the relative contributions of \PQt- and \PQb-quarks to \ensuremath{\Pg\Pg\phi}\xspace production are as expected for an SM Higgs boson at the tested mass value. \item The second search targets the $t$-channel exchange of a vector leptoquark \ensuremath{\mathrm{U}_1}\xspace. \item The third search exploits selected benchmark scenarios of the MSSM that rely on the signal from three neutral Higgs bosons, one of which is associated with \ensuremath{\mathrm{\PH(125)}}\xspace. \end{enumerate} The results are based on the proton-proton (\ensuremath{\Pp\Pp}\xspace) collision data collected at the LHC during the years 2016--2018, at $\sqrt{s} = 13\TeV$, by the CMS experiment. The data correspond to an integrated luminosity of 138\fbinv. 
The analysis is performed in four \ensuremath{\PGt\PGt}\xspace final states: \ensuremath{\Pe\PGm}\xspace, \ensuremath{\Pe\tauh}\xspace, \ensuremath{\PGm\tauh}\xspace, and \ensuremath{\tauh\tauh}\xspace, where \Pe, \PGm, and \tauh indicate \PGt decays into electrons, muons, and hadrons, respectively. For this analysis, the most significant backgrounds are estimated from data; these include all SM processes with two genuine \PGt leptons in the final state, and processes where quark- or gluon-induced jets are misidentified as \tauh, denoted as \ensuremath{\text{jet}\to\tauh}\xspace. The paper is organized as follows. Section~\ref{sec:phenomenology} gives an overview of the phenomenology of the BSM physics scenarios under consideration. Section~\ref{sec:detector} describes the CMS detector, and Section~\ref{sec:reconstruction} describes the event reconstruction. Section~\ref{sec:selection} summarizes the event selection and categorization used for the extraction of the signal. The data model and systematic uncertainties are described in Sections~\ref{sec:data-model} and~\ref{sec:systematic-uncertainties}. Section~\ref{sec:results} contains the results of the analysis. Section~\ref{sec:summary} briefly summarizes the paper. A complete set of tabulated results of this search for all tested mass hypotheses is available in the HEPData database~\cite{hepdata}. \section{Signal models} \label{sec:phenomenology} Neutral (pseudo)scalar bosons \ensuremath{\phi}\xspace appear in many extensions of the SM. They may have different couplings to the upper and lower components of the \ensuremath{\mathrm{SU}(2)_{\mathrm{L}}}\xspace fermion fields (associated with up- and down-type fermions) and gauge bosons.
In several models, like the MSSM models discussed in Section~\ref{sec:MSSM}, the \ensuremath{\phi}\xspace couplings to down-type fermions are enhanced with respect to the expectation for an SM Higgs boson of the same mass, while the couplings to up-type fermions and vector bosons are suppressed. This makes down-type fermion final states, such as \ensuremath{\PGt\PGt}\xspace, particularly interesting for searches for neutral Higgs bosons in addition to \ensuremath{\mathrm{\PH(125)}}\xspace. An enhancement in the couplings to down-type fermions also increases the \ensuremath{\PQb\PQb\phi}\xspace production cross section relative to \ensuremath{\Pg\Pg\phi}\xspace, which is another characteristic signature of these models and motivates the search for enhanced production cross sections in this production mode with respect to the SM expectation. In a first interpretation of the data, which is meant to be as model independent as possible, we search for \ensuremath{\phi}\xspace production via the \ensuremath{\Pg\Pg\phi}\xspace and \ensuremath{\PQb\PQb\phi}\xspace processes in a range of $60\leq\ensuremath{m_{\phi}}\xspace\leq3500\GeV$, where \ensuremath{m_{\phi}}\xspace denotes the hypothesized \ensuremath{\phi}\xspace mass. Diagrams for these processes are shown in Fig.~\ref{fig:production-diagrams}. In a second, more specific interpretation of the data, we search for nonresonant \ensuremath{\PGt\PGt}\xspace production in a model with vector leptoquarks. Finally, in a third interpretation of the data, we survey the parameter space of two indicative benchmark scenarios of the MSSM, which predict multiresonance signatures, one of which is associated with \ensuremath{\mathrm{\PH(125)}}\xspace. The most important characteristics of the vector leptoquark model and the MSSM are described in the following. 
\begin{figure}[th] \centering \includegraphics[width=0.32\textwidth]{Figure_001-a.pdf} \includegraphics[width=0.32\textwidth]{Figure_001-b.pdf} \includegraphics[width=0.32\textwidth]{Figure_001-c.pdf} \caption { Diagrams for the production of neutral Higgs bosons \ensuremath{\phi}\xspace (left) via gluon fusion, labelled as \ensuremath{\Pg\Pg\phi}\xspace, and (middle and right) in association with \PQb quarks, labelled as \ensuremath{\PQb\PQb\phi}\xspace in the text. In the middle diagram, a pair of \PQb quarks is produced from the fusion of two gluons, one from each proton. In the right diagram, a \PQb quark from one proton scatters from a gluon from the other proton. In both cases \ensuremath{\phi}\xspace is radiated off one of the \PQb quarks. } \label{fig:production-diagrams} \end{figure} \subsection{Vector leptoquarks} Leptoquarks are hypothetical particles that carry both baryon and lepton numbers~\cite{Buchmuller:1986zs}, and are predicted by various BSM theories, such as grand unified theories~\cite{Pati:1973uk, Pati:1974yy, GeorgiGlashow, Fritzsch:1974nn}, technicolour models~\cite{Dimopoulos:1979es,Dimopoulos:1979sp,Technicolor, Lane:1991qh}, compositeness scenarios~\cite{LightLeptoquarks,Gripaios:2009dq}, and $R$-parity violating SUSY~\cite{Farrar:1978xj,Ramond:1971gb,Golfand:1971iw,Neveu:1971rx, Volkov:1972jx,Wess:1973kz,Wess:1974tw,Fayet:1974pd,Nilles:1983ge,Barbier:2004ez}. In recent years there has been a renewed interest in leptoquark models as a means of explaining various anomalies observed by a number of \PQb physics measurements performed in different experiments~\cite{Tanaka:2012nw,Barbieri:2015yvd,Faroughy:2016osc, Bordone:2017bld,DiLuzio:2017vat,Greljo:2018tuh,Angelescu:2021lln,Cornella:2021sby}, most notably the apparent violation of lepton flavour universality in neutral-current~\cite{LHCb:2021trn} and charged-current~\cite{BaBar:2012obs,BaBar:2013mob, Belle:2015qfa,LHCb:2015gmp,Belle:2016dyj,LHCb:2017rln,LHCb:2017smo} B meson decays. 
Models that contain a \TeVns-scale vector leptoquark (\ensuremath{\mathrm{U}_1}\xspace), characterized by its quantum numbers $(\ensuremath{\mathrm{SU}(3)_{\mathrm{C}}}\xspace, \ensuremath{\mathrm{SU}(2)_{\mathrm{L}}}\xspace, \ensuremath{\text{U}(1)_{\mathrm{Y}}}\xspace) = (\textbf{3}, \textbf{1}, 2/3)$, are particularly appealing because they can explain both neutral- and charged-current anomalies at the same time~\cite{Barbieri:2015yvd,Faroughy:2016osc,Bordone:2017bld, DiLuzio:2017vat,Greljo:2018tuh,Angelescu:2021lln,Cornella:2021sby}. The Lagrangian for the \ensuremath{\mathrm{U}_1}\xspace coupling to SM fermions is given by~\cite{Cornella:2021sby} \begin{linenomath} \begin{equation} \mathcal{L_{\mathrm{U}}} = \frac{\ensuremath{g_{\text{U}}}\xspace}{\sqrt{2}}\mathrm{U}^\mu \left[ \ensuremath{\beta_{\mathrm{L}}}\xspace^{i\alpha}(\overline{q}_{\mathrm{L}}^{i}\gamma_\mu l_{\mathrm{L}}^\alpha) +\ensuremath{\beta_{\mathrm{R}}}\xspace^{i\alpha}(\overline{d}_{\mathrm{R}}^{i}\gamma_\mu e_{\mathrm{R}}^{\alpha}) \right]+\mathrm{h.c.}, \end{equation} \end{linenomath} with the coupling constant \ensuremath{g_{\text{U}}}\xspace, where $q_{\mathrm{L}}$ and $d_{\mathrm{R}}$ ($l_{\mathrm{L}}$ and $e_{\mathrm{R}}$) denote the left- and right-handed quark (lepton) doublets, and \ensuremath{\beta_{\mathrm{L}}}\xspace and \ensuremath{\beta_{\mathrm{R}}}\xspace are left- and right-handed coupling matrices, which are assumed to have the structures: \begin{linenomath} \begin{equation} \ensuremath{\beta_{\mathrm{L}}}\xspace = \begin{pmatrix} 0 & 0 & \ensuremath{\betaL^{\mathrm{d}\tau}}\xspace \\ 0 & \ensuremath{\betaL^{\mathrm{s}\mu}}\xspace & \ensuremath{\betaL^{\mathrm{s}\tau}}\xspace \\ 0 & \ensuremath{\betaL^{\mathrm{b}\mu}}\xspace & \ensuremath{\betaL^{\PQb\PGt}}\xspace \end{pmatrix}, \quad \ensuremath{\beta_{\mathrm{R}}}\xspace = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \ensuremath{\betaR^{\mathrm{b}\tau}}\xspace \end{pmatrix}. 
\label{eqn:beta_couplings} \end{equation} \end{linenomath} The motivations for the assumed structures of these matrices are given in Ref.~\cite{Cornella:2021sby}. The normalization of \ensuremath{g_{\text{U}}}\xspace is chosen to give $\ensuremath{\betaL^{\PQb\PGt}}\xspace=1$. Two benchmark scenarios are considered, with different assumptions made about the value of \ensuremath{\betaR^{\mathrm{b}\tau}}\xspace. In the first benchmark scenario (``VLQ BM 1''), \ensuremath{\betaR^{\mathrm{b}\tau}}\xspace is assumed to be zero. In the second benchmark scenario (``VLQ BM 2''), \ensuremath{\betaR^{\mathrm{b}\tau}}\xspace is assumed to be $-1$, which corresponds to a Pati--Salam-like~\cite{Pati:1974yy,Bordone:2017bld} \ensuremath{\mathrm{U}_1}\xspace leptoquark. The \ensuremath{\betaL^{\mathrm{s}\tau}}\xspace couplings are set to their preferred values from global fits to the low-energy observables presented in Ref.~\cite{Cornella:2021sby}, as summarized in Table~\ref{tab:betal_values}. The \ensuremath{\betaL^{\mathrm{d}\tau}}\xspace, \ensuremath{\betaL^{\mathrm{s}\mu}}\xspace, and \ensuremath{\betaL^{\mathrm{b}\mu}}\xspace couplings are small and have negligible influence on the \ensuremath{\PGt\PGt}\xspace signature, and therefore have been set to zero. If the \ensuremath{\mathrm{U}_1}\xspace leptoquark mass (\ensuremath{m_{\text{U}}}\xspace) is sufficiently small, the \ensuremath{\mathrm{U}_1}\xspace particle will contribute to the \ensuremath{\PGt\PGt}\xspace spectrum via pair production with each \ensuremath{\mathrm{U}_1}\xspace subsequently decaying to a \ensuremath{\PQq\PGt}\xspace pair. For larger \ensuremath{m_{\text{U}}}\xspace, the pair production cross section is suppressed because of the decreasing probability that the initial-state partons possess sufficiently large momentum fractions of the corresponding protons to produce on-shell \ensuremath{\mathrm{U}_1}\xspace pairs. 
In this case the dominant contribution to the \ensuremath{\PGt\PGt}\xspace spectrum is via \ensuremath{\mathrm{U}_1}\xspace $t$-channel exchange in the $\PQb\PAQb$ initial state, as illustrated in Fig.~\ref{fig:production-diagrams-vlq}, with subdominant contributions from the equivalent $\PQb\PAQs$, $\PQs\PAQb$, and $\PQs\PAQs$ initiated processes. In our analysis we target the kinematic region of $\ensuremath{m_{\text{U}}}\xspace\gtrsim 1\TeV$, motivated by the experimental exclusion limits on \ensuremath{m_{\text{U}}}\xspace by direct searches, \eg in Ref.~\cite{CMS:2020wzx}. The contribution to the \ensuremath{\PGt\PGt}\xspace spectrum from \ensuremath{\mathrm{U}_1}\xspace pair production is negligible in this case, and we therefore consider only production through the $t$-channel exchange. \begin{table}[b] \topcaption{ Summary of the preferred values and uncertainties of \ensuremath{\betaL^{\mathrm{s}\tau}}\xspace in the two considered \ensuremath{\mathrm{U}_1}\xspace benchmark scenarios from Ref.~\cite{Cornella:2021sby}. } \renewcommand{\arraystretch}{1.3} \label{tab:betal_values} \centering \begin{tabular}{cc} Benchmark & \ensuremath{\betaL^{\mathrm{s}\tau}}\xspace \\ \hline VLQ BM 1 & $0.19 {}^{+0.06}_{-0.09}$ \\ VLQ BM 2 & $0.21 {}^{+0.05}_{-0.09}$ \\ \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{Figure_002.pdf} \caption { Diagram for the production of a pair of \PGt leptons via the $t$-channel exchange of a vector leptoquark \ensuremath{\mathrm{U}_1}\xspace. } \label{fig:production-diagrams-vlq} \end{figure} \subsection{The MSSM} \label{sec:MSSM} In the MSSM, which is a concrete example of the more general class of two Higgs doublet models (2HDMs)~\cite{Lee:1973iz,Branco:2011iw}, the Higgs sector requires two \ensuremath{\mathrm{SU}(2)}\xspace doublets, \ensuremath{\Phi_{\mathrm{u}}}\xspace and \ensuremath{\Phi_{\mathrm{d}}}\xspace, to provide masses for up- and down-type fermions.
In $CP$-conserving 2HDMs, this leads to the prediction of two charged ($\PH^{\pm}$) and three neutral \ensuremath{\phi}\xspace bosons (\Ph, \PH, and \ensuremath{\mathrm{A}}\xspace), where \Ph and \PH (with masses $\ensuremath{m_{\Ph}}\xspace<\ensuremath{m_{\PH}}\xspace$) are scalars, and \ensuremath{\mathrm{A}}\xspace (with mass \ensuremath{m_{\PA}}\xspace) is a pseudoscalar. The physical states \Ph and \PH arise as mixtures of the pure gauge fields with a mixing angle $\alpha$. At tree level in the MSSM, the masses of these five Higgs bosons and $\alpha$ can be expressed in terms of the known gauge boson masses and two additional parameters, which can be chosen as \ensuremath{m_{\PA}}\xspace and the ratio of the vacuum expectation values of the neutral components of \ensuremath{\Phi_{\mathrm{u}}}\xspace and \ensuremath{\Phi_{\mathrm{d}}}\xspace, \begin{linenomath} \begin{equation} \tanb = \frac{\langle \ensuremath{\Phi_{\mathrm{u}}}\xspace^{0}\rangle}{\langle \ensuremath{\Phi_{\mathrm{d}}}\xspace^{0}\rangle}. \end{equation} \end{linenomath} Dependencies on additional parameters of the soft SUSY breaking mechanism enter via higher-order corrections in perturbation theory. In the exploration of the MSSM Higgs sector these additional parameters are usually set to fixed values in the form of indicative benchmark scenarios to illustrate certain properties of the theory. The most recent set of MSSM benchmark scenarios provided by the LHC Higgs Working Group has been introduced in Refs.~\cite{Bahl:2018zmf,Bahl:2020kwe, Bahl:2019ago} and summarized in Ref.~\cite{Bagnaschi:2791954}. The corresponding predictions of masses, cross sections, and branching fractions can be obtained from Ref.~\cite{MSSM_benchmark}. 
With one exception (the $M_{H}^{125}$ scenario), in these scenarios \Ph takes the role of \ensuremath{\mathrm{\PH(125)}}\xspace, and \PH and \ensuremath{\mathrm{A}}\xspace are nearly degenerate in mass ($\ensuremath{m_{\PH}}\xspace\approx\ensuremath{m_{\PA}}\xspace$) in a large fraction of the provided parameter space. For values of \ensuremath{m_{\PA}}\xspace much larger than the mass of the \PZ boson, the coupling of \PH and \ensuremath{\mathrm{A}}\xspace to down-type fermions is enhanced by \tanb with respect to the expectation for an SM Higgs boson of the same mass, while the coupling to vector bosons and up-type fermions is suppressed. For increasing values of \tanb, \ensuremath{\PQb\PQb\phi}\xspace (with $\ensuremath{\phi}\xspace=\PH,\,\ensuremath{\mathrm{A}}\xspace$) is enhanced relative to \ensuremath{\Pg\Pg\phi}\xspace production. The larger contribution of \PQb quarks to the loop in Fig.~\ref{fig:production-diagrams} (left) in addition leads to softer spectra of the \PH and \ensuremath{\mathrm{A}}\xspace transverse momentum. Extra SUSY particles influence the production and decay via higher-order contributions to the interaction vertices that belong to \PQb quark lines. They also contribute directly to the loop in Fig.~\ref{fig:production-diagrams} (left). \section{The CMS detector} \label{sec:detector} The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter, providing a magnetic field of 3.8\unit{T}. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter, each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity ($\eta$) coverage provided by the barrel and endcap detectors. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. Events of interest are selected using a two-tiered trigger system. 
The first level (L1), composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100\unit{kHz} within a fixed latency of about 4\mus~\cite{Sirunyan:2020zal}. The second level, known as the high-level trigger (HLT), consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1\unit{kHz} before data storage~\cite{Khachatryan:2016bia}. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008zzk}. \section{Event reconstruction} \label{sec:reconstruction} The reconstruction of the \ensuremath{\Pp\Pp}\xspace collision products is based on the particle-flow (PF) algorithm~\cite{Sirunyan:2017ulk}, which combines the information from all CMS subdetectors to reconstruct a set of particle candidates (PF candidates), identified as charged and neutral hadrons, electrons, photons, and muons. In the 2016 (2017--2018) data sets the average number of interactions per bunch crossing was 23 (32). The primary vertex (PV) is taken to be the vertex corresponding to the hardest scattering in the event, evaluated using tracking information alone, as described in Ref.~\cite{CMS-TDR-15-02}. Secondary vertices, which are displaced from the PV, might be associated with decays of long-lived particles emerging from the PV. Any other collision vertices in the event are associated with additional, mostly soft, inelastic \ensuremath{\Pp\Pp}\xspace collisions, referred to as pileup (PU). Electrons are reconstructed using tracks from hits in the tracking system and calorimeter deposits in the ECAL~\cite{Khachatryan:2015hwa,CMS:2020uim}. 
To increase their purity, reconstructed electrons are required to pass a multivariate electron identification discriminant, which combines information on track quality, shower shape, and kinematic quantities. For this analysis, a working point with an identification efficiency of 90\% is used, for a rate of jets misidentified as electrons of ${\approx}1\%$. Muons in the event are reconstructed by combining the information from the tracker and the muon detectors~\cite{Sirunyan:2018fpa}. The presence of hits in the muon detectors already leads to a strong suppression of particles misidentified as muons. Additional identification requirements on the track fit quality and the compatibility of individual track segments with the fitted track can reduce the misidentification rate further. For this analysis, muon identification requirements with an efficiency of ${\approx}99\%$ are chosen, with a misidentification rate below 0.2\% for pions. The contributions from backgrounds to the electron and muon selections are further reduced by requiring the corresponding lepton to be isolated from any hadronic activity in the detector. 
This property is quantified by an isolation variable \begin{linenomath} \begin{equation} \ensuremath{I_{\text{rel}}^{\Pe(\mu)}}\xspace=\frac{1}{\ensuremath{\pt^{\Pe(\PGm)}}\xspace}\left(\sum\pt^{\text{charged}} + \max\left(0, \sum\et^{\text{neutral}}+\sum\et^{\PGg}-\pt^{\text{PU}}\right)\right), \end{equation} \end{linenomath} where \ensuremath{\pt^{\Pe(\PGm)}}\xspace corresponds to the electron (muon) \pt and $\sum\pt^{\text{charged}}$, $\sum\et^{\text{neutral}}$, and $\sum\et^{\PGg}$ to the \pt (or transverse energy \et) sum of all charged particles, neutral hadrons, and photons, in a predefined cone of radius $\ensuremath{{\Delta}R}\xspace = \sqrt{\smash[b]{\left(\Delta\eta\right)^{2}+\left(\Delta \varphi\right)^{2}}}$ around the lepton direction at the PV, where $\Delta\eta$ and $\Delta\varphi$ (measured in radians) correspond to the angular distances of the particle to the lepton in the $\eta$ and azimuthal $\varphi$ directions. The chosen cone size is $\ensuremath{{\Delta}R}\xspace=0.3\,(0.4)$ for electrons (muons). The lepton itself is excluded from the calculation. To mitigate any distortions from PU, only those charged particles whose tracks are associated with the PV are included. Since an unambiguous association with the PV is not possible for neutral hadrons and photons, an estimate of the contribution from PU ($\pt^{\text{PU}}$) is subtracted from the sum of $\sum\et^{\text{neutral}}$ and $\sum\et^{\PGg}$. This estimate is obtained from tracks not associated with the PV in the case of \ensuremath{I_{\text{rel}}^{\PGm}}\xspace and the mean energy flow per area unit in the case of \ensuremath{I_{\text{rel}}^{\Pe}}\xspace. For negative values, the \ensuremath{I_{\text{rel}}^{\Pe(\mu)}}\xspace is set to zero. For further characterization of the event, all reconstructed PF candidates are clustered into jets using the anti-\kt algorithm with a distance parameter of 0.4, as implemented in the \FASTJET software package~\cite{Cacciari:2008gp, Cacciari:2011ma}. 
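For illustration, the relative isolation defined above can be sketched in a few lines of Python (a minimal sketch with hypothetical inputs and placeholder names, not the CMS reconstruction software):

```python
# Sketch of the relative isolation I_rel for an electron or muon, following
# the formula in the text. Inputs are hypothetical per-cone candidate lists.

def relative_isolation(lepton_pt, charged_pt, neutral_et, photon_et, pu_estimate):
    """I_rel = (sum charged pT + max(0, sum neutral ET + sum photon ET - pT_PU)) / lepton pT."""
    neutral_component = max(0.0, sum(neutral_et) + sum(photon_et) - pu_estimate)
    return (sum(charged_pt) + neutral_component) / lepton_pt

# Example: a 40 GeV muon with little surrounding activity passes I_rel < 0.15.
i_rel = relative_isolation(40.0, [1.2, 0.8], [1.5], [0.5], 1.0)
print(i_rel)  # (2.0 + max(0, 2.0 - 1.0)) / 40 = 0.075
```

The `max` clamp implements the pileup subtraction of the neutral component, so an overestimated pileup contribution cannot drive the isolation negative.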
To identify jets resulting from the hadronization of \PQb quarks (\PQb jets) the \textsc{DeepJet} algorithm is used, as described in Refs.~\cite{Sirunyan:2017ezt,Bols:2020bkb}. In this analysis a working point of this algorithm is chosen that corresponds to a \PQb jet identification efficiency of ${\approx}80\%$ for a misidentification rate for jets originating from light-flavour quarks or gluons of \ensuremath{\mathcal{O}}(1\%)~\cite{CMS-DP-2018-058}. Jets with $\pt>30\GeV$ and $\abs{\eta}<4.7$ and \PQb jets with $\pt>20\GeV$ and $\abs{\eta}<2.4$ are used in 2016. From 2017 onwards, after the upgrade of the silicon pixel detector, the \PQb jet $\eta$ range is extended to $\abs{\eta}<2.5$. Jets are also used as seeds for the reconstruction of \tauh candidates. This is done by exploiting the substructure of the jets using the ``hadrons-plus-strips'' algorithm, as described in Refs.~\cite{Sirunyan:2018pgf,CMS:2022prd}. Decays into one or three charged hadrons with up to two neutral pions with $\pt>2.5\GeV$ are used. Neutral pions are reconstructed as strips with dynamic size in $\eta$-$\varphi$ from reconstructed photons and electrons contained in the seeding jet, where the latter originate from photon conversions. The strip size varies as a function of the \pt of the electron or photon candidates. The \tauh decay mode is then obtained by combining the charged hadrons with the strips. To distinguish \tauh candidates from jets originating from the hadronization of quarks or gluons, and from electrons or muons, the \textsc{DeepTau} (DT) algorithm is used, as described in Ref.~\cite{CMS:2022prd}. This algorithm exploits the information of the reconstructed event record (comprising tracking, impact parameter, and calorimeter cluster information), the kinematic and object identification properties of the PF candidates in the vicinity of the \tauh candidate and those of the \tauh candidate itself, and quantities that estimate the PU density of the event.
It results in a multiclassification output $\ensuremath{y^{\text{DT}}}\xspace_{\alpha}\,(\alpha=\PGt,\,\Pe,\,\PGm,\,\text{jet})$ equivalent to a Bayesian probability of the \tauh candidate to originate from a genuine \PGt lepton, the hadronization of a quark or gluon, an electron, or a muon. From this output three discriminants are built according to \begin{linenomath} \begin{equation} D_{\alpha} = \frac{\ensuremath{y^{\text{DT}}}\xspace_{\PGt}}{\ensuremath{y^{\text{DT}}}\xspace_{\PGt}+\ensuremath{y^{\text{DT}}}\xspace_{\alpha}}, \quad \alpha=\,\Pe,\,\PGm,\text{ jet}. \end{equation} \end{linenomath} For the analysis presented here, predefined working points of \ensuremath{D_{\Pe}}\xspace, \ensuremath{D_{\PGm}}\xspace, and \ensuremath{D_{\text{jet}}}\xspace~\cite{CMS:2022prd} are chosen depending on the \ensuremath{\PGt\PGt}\xspace final state, for which the \tauh selection efficiencies and misidentification rates are given in Table~\ref{tab:dt-working-points}. Since the \ensuremath{\text{jet}\to\tauh}\xspace misidentification rate strongly depends on the \pt and initiating parton type of the misidentified jet, it should be viewed as approximate. The missing transverse momentum vector \ptvecmiss is also used for further categorization of the events. It is calculated as the negative vector \pt sum of all PF candidates, weighted by their probability to originate from the PV~\cite{Sirunyan:2019kia}, and exploits the pileup-per-particle identification algorithm~\cite{Bertolini:2014bba} to reduce the PU dependence. With \ptmiss we refer to the magnitude of this quantity. \begin{table}[t] \centering \topcaption{ Efficiencies for the identification of \tauh decays and corresponding misidentification rates (given in parentheses) for the working points of \ensuremath{D_{\Pe}}\xspace, \ensuremath{D_{\PGm}}\xspace, and \ensuremath{D_{\text{jet}}}\xspace, chosen for the \ensuremath{\PGt\PGt}\xspace selection, depending on the \ensuremath{\PGt\PGt}\xspace final state. 
The numbers are given as percentages. } \begin{tabular}{cccc} & \ensuremath{D_{\Pe}}\xspace (\%) & \ensuremath{D_{\PGm}}\xspace (\%) & \ensuremath{D_{\text{jet}}}\xspace (\%) \\ \hline \ensuremath{\Pe\tauh}\xspace & 54 (0.05) & 71.1 (0.13) & \\ \ensuremath{\PGm\tauh}\xspace & \multirow{2}{*}{70 (2.60)} & 70.3 (0.03) & 49 (0.43) \\ \ensuremath{\tauh\tauh}\xspace & & 71.1 (0.13) & \\ \end{tabular} \label{tab:dt-working-points} \end{table} \section{Event selection and categorization} \label{sec:selection} \subsection{Selection of \texorpdfstring{\ensuremath{\PGt\PGt}\xspace}{tau tau} candidates} Depending on the final state, the online selection in the HLT step is based either on the presence of a single electron, muon, or \tauh candidate, or an \ensuremath{\Pe\PGm}\xspace, \ensuremath{\Pe\tauh}\xspace, \ensuremath{\PGm\tauh}\xspace, or \ensuremath{\tauh\tauh}\xspace pair in the event. In the offline selection further requirements on \pt, $\eta$, and \ensuremath{I_{\text{rel}}^{\Pe(\mu)}}\xspace are applied in addition to the object identification requirements described in Section~\ref{sec:reconstruction}, as summarized in Table~\ref{tab:selection_kin}. In the \ensuremath{\Pe\PGm}\xspace final state an electron and a muon with $\pt>15\GeV$ and $\abs{\eta}<2.4$ are required. Depending on the trigger path that has led to the online selection of an event, a stricter requirement of $\pt>24\GeV$ is imposed on one of the two leptons to ensure a sufficiently high trigger efficiency of the HLT selection. Both leptons are required to be isolated from any hadronic activity in the detector according to $\ensuremath{I_{\text{rel}}^{\Pe(\mu)}}\xspace<0.15\,(0.2)$. In the \ensuremath{\Pe\tauh}\xspace (\ensuremath{\PGm\tauh}\xspace) final state, an electron (muon) with $\pt>25\,(20)\GeV$ is required if the event was selected by a trigger based on the presence of the \ensuremath{\Pe\tauh}\xspace (\ensuremath{\PGm\tauh}\xspace) pair in the event.
From 2017 onwards, the threshold on the muon is raised to 21\GeV. If the event was selected by a single-electron trigger, the \pt requirement on the electron is increased to 26, 28, or 33\GeV for the years 2016, 2017, or 2018, respectively. For muons, the \pt requirement is increased to 23 (25)\GeV for 2016 (2017--2018), if selected by a single-muon trigger. The electron (muon) is required to be contained in the central part of the detector with $\abs{ \eta}<2.1$, and to be isolated according to $\ensuremath{I_{\text{rel}}^{\Pe(\mu)}}\xspace<0.15$. The \tauh candidate is required to have $\abs{\eta}<2.3$ and $\pt>35\,(32)\GeV$ if selected by an \ensuremath{\Pe\tauh}\xspace (\ensuremath{\PGm\tauh}\xspace) pair trigger, or $\pt>30\GeV$ if selected by a single-electron (single-muon) trigger. In the \ensuremath{\tauh\tauh}\xspace final state, both \tauh candidates are required to have $\abs{\eta}<2.1$ and $\pt>40\GeV$. For events only selected by a single \tauh trigger, the \tauh candidate that has been identified with the triggering object is required to have $\pt>120\,(180)\GeV$ for events recorded in 2016 (2017--2018). The selected \PGt lepton decay candidates are required to be of opposite charge and to be separated by more than $\ensuremath{{\Delta}R}\xspace=0.3$ in the $\eta$-$\varphi$ plane in the \ensuremath{\Pe\PGm}\xspace final state and by more than 0.5 otherwise. The closest distance of their tracks to the PV is required to be $d_{z}<0.2\cm$ along the beam axis. For electrons and muons, an additional requirement of $d_{xy}<0.045\cm$ in the transverse plane is applied. In rare cases, where more than the required number of \tauh candidates fulfilling all selection requirements is found, the candidate with the highest \ensuremath{D_{\text{jet}}}\xspace score is chosen. For electrons and muons, the most isolated candidate is chosen. 
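As an illustration of the discriminants $D_{\alpha}$ and the candidate arbitration just described, the following Python sketch (hypothetical scores and names, not the CMS software) builds the three discriminants from the multiclass outputs and keeps the \tauh candidate with the highest $D_{\text{jet}}$ score:

```python
# Sketch of the DeepTau discriminants D_alpha = y_tau / (y_tau + y_alpha)
# and the tie-breaking described in the text (highest D_jet score wins).
# Illustrative only: the score dictionaries are hypothetical placeholders.

def deeptau_discriminants(y):
    """y: dict with multiclass outputs y['tau'], y['e'], y['mu'], y['jet']."""
    return {alpha: y["tau"] / (y["tau"] + y[alpha]) for alpha in ("e", "mu", "jet")}

def select_tau_candidate(candidates):
    """Among several tau_h candidates, keep the one with the highest D_jet score."""
    return max(candidates, key=lambda y: deeptau_discriminants(y)["jet"])

scores = {"tau": 0.90, "e": 0.02, "mu": 0.01, "jet": 0.07}
d = deeptau_discriminants(scores)
print(round(d["jet"], 3))  # 0.90 / (0.90 + 0.07) = 0.928
```

A genuine \PGt lepton pushes all three discriminants towards one, while a quark or gluon jet pulls $D_{\text{jet}}$ towards zero.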
To avoid the assignment of single events to more than one final state, events with additional electrons or muons, fulfilling looser selection requirements than those given for each corresponding \ensuremath{\PGt\PGt}\xspace final state above, are rejected from the selection. These requirements also help with the suppression of background processes, such as \mbox{\ensuremath{\PZ/\PGg^{*}\to\Pe\Pe}}\xspace or \mbox{\ensuremath{\PZ/\PGg^{*}\to\PGm\PGm}}\xspace. \begin{table}[t] \centering \topcaption { Offline selection requirements applied to the electron, muon, and \tauh candidates used for the selection of the \PGt pair. The expressions ``first'' and ``second'' lepton refer to the labels of the final states in the first column. The \pt requirements are given in {\GeVns}. For the \ensuremath{\Pe\PGm}\xspace final state, two lepton pair trigger paths, each with a stronger requirement on the \pt of the electron (muon), are used for the online selection of the event. For the \ensuremath{\Pe\tauh}\xspace, \ensuremath{\PGm\tauh}\xspace, and \ensuremath{\tauh\tauh}\xspace final states, the values (in parentheses) correspond to the lepton pair (single lepton) trigger paths that have been used in the online selection. A detailed discussion is given in the text. } \begin{tabular}{lccccccc} Final state & Obs.
& \multicolumn{3}{c}{First lepton} & \multicolumn{3}{c}{Second lepton} \\ & & 2016 & 2017 & 2018 & 2016 & 2017 & 2018 \\ \hline \ensuremath{\Pe\PGm}\xspace & \pt & \multicolumn{3}{c}{$>15\,(24)$} & \multicolumn{3}{c}{$>24\,(15)$} \\ & $\abs{\eta}$ & \multicolumn{3}{c}{$<2.4$} & \multicolumn{3}{c}{$<2.4$} \\ & \ensuremath{I_{\text{rel}}^{\Pe}}\xspace & \multicolumn{3}{c}{$<0.15$} & \multicolumn{3}{c}{$<0.20$} \\ [\cmsTabSkip] \ensuremath{\Pe\tauh}\xspace & \pt & $>25\,(26)$ & $>25\,(28)$ & $>25\,(33)$ & $\hphantom{>20\,(21)}$ & $>35\,(30)$ & $\hphantom{>20\,(21)}$ \\ & $\abs{\eta}$ & \multicolumn{3}{c}{$<2.1$} & \multicolumn{3}{c}{$<2.3$} \\ & \ensuremath{I_{\text{rel}}^{\Pe}}\xspace & \multicolumn{3}{c}{$<0.15$} & \multicolumn{3}{c}{--} \\ [\cmsTabSkip] \ensuremath{\PGm\tauh}\xspace & \pt & $>20\,(23)$ & $>21\,(25)$ & $>21\,(25)$ & \multicolumn{3}{c}{$>32\,(30)$} \\ & $\abs{\eta}$ & \multicolumn{3}{c}{$<2.1$} & \multicolumn{3}{c}{$<2.3$} \\ & \ensuremath{I_{\text{rel}}^{\PGm}}\xspace & \multicolumn{3}{c}{$<0.15$} & \multicolumn{3}{c}{--} \\ [\cmsTabSkip] \ensuremath{\tauh\tauh}\xspace & \pt & $>40\,(120)$ & \multicolumn{2}{c}{$>40\,(180)$} & \multicolumn{3}{c}{$>40$} \\ & $\abs{\eta}$ & \multicolumn{3}{c}{$<2.1$} & \multicolumn{3}{c}{$<2.1$} \\ \end{tabular} \label{tab:selection_kin} \end{table} \subsection{Event categorization} \label{sec:event-categories} \subsubsection{Standard categories and signal extraction} \label{sec:standard-categorisation} To increase the sensitivity of the searches, all selected events are further split into categories. Events with at least one \PQb jet, according to the selection requirements given in Section~\ref{sec:reconstruction}, are combined into a global ``\PQb tag'' category, used to target \ensuremath{\PQb\PQb\phi}\xspace production and to control the background from top quark pair (\ttbar) production. All other events are subsumed into a global ``no \PQb tag'' category. 
The events in the \ensuremath{\tauh\tauh}\xspace final state are not further categorized beyond that point. In the \ensuremath{\Pe\tauh}\xspace and \ensuremath{\PGm\tauh}\xspace final states, more categories are introduced in the global ``\PQb tag'' and ``no \PQb tag'' categories, based on the transverse mass of the \Pe (\PGm)-\ptvecmiss system defined as \begin{linenomath} \begin{equation} \label{eqn:mt_definition} \ensuremath{m_{\mathrm{T}}^{\Pe(\mu)}}\xspace =\mT(\ensuremath{\ptvec^{\kern1pt\Pe(\PGm)}}\xspace,\ptvecmiss),\quad\text{with} \quad \mT(\ptvec^{\kern1pt{i}}, \ptvec^{\kern1pt{j}}) = \sqrt{2\,\pt^{\kern1pt{i}} \,\pt^{\kern1pt{j}}\left(1-\cos\Delta\varphi \right)}, \end{equation} \end{linenomath} where $\Delta\varphi$ refers to the azimuthal angular difference between $\ptvec^{ \kern1pt{i}}$ and $\ptvec^{\kern1pt{j}}$. Events are divided into a tight-\mT ($\ensuremath{m_{\mathrm{T}}^{\Pe(\mu)}}\xspace<40\GeV$) and a loose-\mT ($40<\ensuremath{m_{\mathrm{T}}^{\Pe(\mu)}}\xspace<70\GeV$) category. The \ensuremath{\phi}\xspace signal is expected to be concentrated in the tight-\mT category. However, the loose-\mT category increases the acceptance for $\ensuremath{m_{\phi}}\xspace\gtrsim700\GeV$. 
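The transverse-mass definition of Eq.~(\ref{eqn:mt_definition}) and the tight/loose-\mT splitting can be sketched in a few lines of Python. This is purely illustrative; the function names are ours and are not part of the analysis software.

```python
import math

def m_t(pt_i, pt_j, delta_phi):
    """Transverse mass of two objects, as in Eq. (mt_definition):
    m_T = sqrt(2 * pt_i * pt_j * (1 - cos(delta_phi)))."""
    return math.sqrt(2.0 * pt_i * pt_j * (1.0 - math.cos(delta_phi)))

def mt_category(mt_lep):
    """Tight/loose m_T splitting used in the e-tauh and mu-tauh final states."""
    if mt_lep < 40.0:
        return "tight-mT"
    if mt_lep < 70.0:
        return "loose-mT"
    return None  # not assigned to an mT signal category

# A lepton and ptmiss of 50 GeV each, back to back in phi:
print(m_t(50.0, 50.0, math.pi))  # 100.0 (GeV)
```

Events from \PW boson decays populate the region around the Jacobian peak at high \mT, which is why the tight-\mT requirement suppresses them.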
In the \ensuremath{\Pe\PGm}\xspace final state, events are categorized based on the observable \ensuremath{D_{\zeta}}\xspace~\cite{Abulencia:2005kq} defined as \begin{linenomath} \begin{equation} \label{eqn:Dzeta} \ensuremath{D_{\zeta}}\xspace = \ensuremath{p_{\zeta}^{\text{miss}}}\xspace + 0.85\,\ensuremath{p_{\zeta}^{\text{vis}}}\xspace ; \qquad \ensuremath{p_{\zeta}^{\text{miss}}}\xspace = \ptvecmiss\cdot\ensuremath{\hat{\zeta}}\xspace ; \qquad \ensuremath{p_{\zeta}^{\text{vis}}}\xspace = \left(\ensuremath{\ptvec^{\kern1pt\Pe}}\xspace+\ensuremath{\ptvec^{\kern1pt\PGm}}\xspace\right)\cdot\ensuremath{\hat{\zeta}}\xspace, \end{equation} \end{linenomath} where \ensuremath{\hat{\zeta}}\xspace corresponds to the bisector of the directions of \ensuremath{\ptvec^{\kern1pt\Pe}}\xspace and \ensuremath{\ptvec^{\kern1pt\PGm}}\xspace. The scalar products \ensuremath{p_{\zeta}^{\text{miss}}}\xspace and \ensuremath{p_{\zeta}^{\text{vis}}}\xspace can take positive or negative values. Their linear combination has been optimized to maximize the sensitivity of the search. For events originating from \PW boson production in association with jets (\ensuremath{\PW}{+}\,\text{jets}\xspace) or \ttbar production, the \ensuremath{\ptvec^{\kern1pt\Pe}}\xspace, \ensuremath{\ptvec^{\kern1pt\PGm}}\xspace, and \ptvecmiss directions are more isotropically distributed, leading to nonpeaking distributions in \ensuremath{D_{\zeta}}\xspace. For \ensuremath{\PGt\PGt}\xspace events from resonant decays, \ptvecmiss is expected to roughly coincide with \ensuremath{\hat{\zeta}}\xspace, and a stronger correlation between \ensuremath{p_{\zeta}^{\text{miss}}}\xspace and \ensuremath{p_{\zeta}^{\text{vis}}}\xspace is expected to lead to a peaking distribution about $\ensuremath{D_{\zeta}}\xspace\approx0\GeV$. The inputs to the reconstruction of \ensuremath{D_{\zeta}}\xspace are illustrated in Fig.~\ref{fig:dzeta}.
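The construction of \ensuremath{D_{\zeta}}\xspace from Eq.~(\ref{eqn:Dzeta}), and the splitting into the \ensuremath{D_{\zeta}}\xspace categories quoted in the text, can be sketched as follows. This is an illustrative Python sketch (transverse momenta as $(p_x, p_y)$ pairs); the names are ours, not part of the analysis software.

```python
import math

def _unit(vx, vy):
    """Unit vector in the transverse plane."""
    n = math.hypot(vx, vy)
    return (vx / n, vy / n)

def d_zeta(pt_e, pt_mu, pt_miss):
    """D_zeta of Eq. (Dzeta); zeta_hat is the bisector of the
    electron and muon transverse directions."""
    ue = _unit(*pt_e)
    um = _unit(*pt_mu)
    zx, zy = _unit(ue[0] + um[0], ue[1] + um[1])
    p_vis = (pt_e[0] + pt_mu[0]) * zx + (pt_e[1] + pt_mu[1]) * zy
    p_miss = pt_miss[0] * zx + pt_miss[1] * zy
    return p_miss + 0.85 * p_vis

def d_zeta_category(dz):
    """high/medium/low-D_zeta splitting of the e-mu final state (in GeV)."""
    if dz > 30.0:
        return "high"
    if dz > -10.0:
        return "medium"
    if dz > -35.0:
        return "low"
    return None  # D_zeta < -35 GeV: used only as a ttbar control region
```

For a resonant \ensuremath{\PGt\PGt}\xspace decay, \ptvecmiss points roughly along the bisector, so \ensuremath{p_{\zeta}^{\text{miss}}}\xspace is positive and partially cancels against the weighted visible term, concentrating the signal near zero.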
Three further categories are introduced as high-\ensuremath{D_{\zeta}}\xspace ($\ensuremath{D_{\zeta}}\xspace>30\GeV$), medium-\ensuremath{D_{\zeta}}\xspace ($-10<\ensuremath{D_{\zeta}}\xspace<30\GeV$), and low-\ensuremath{D_{\zeta}}\xspace ($-35< \ensuremath{D_{\zeta}}\xspace<-10\GeV$). A \ensuremath{\phi}\xspace signal is expected to be concentrated in the medium-\ensuremath{D_{\zeta}}\xspace category. However, the low- and high-\ensuremath{D_{\zeta}}\xspace categories still contribute to an increase of the sensitivity of the model-independent \ensuremath{\phi}\xspace search in the \ensuremath{\Pe\PGm}\xspace final state by ${\approx}10\%$. A control category in the \ensuremath{\Pe\PGm}\xspace final state with at least one \PQb jet and $\ensuremath{D_{\zeta}}\xspace<-35\GeV$ is used to constrain the normalization of \ttbar events in the fit used for signal extraction. \begin{figure}[t] \centering \includegraphics[width=0.90\textwidth]{Figure_003.pdf} \caption{ Inputs to the reconstruction of the event observable \ensuremath{D_{\zeta}}\xspace, as described in the text. } \label{fig:dzeta} \end{figure} In summary, this leads to 17 event categories per data-taking year. Figure~\ref{fig:sub-categories} shows the \ensuremath{D_{\zeta}}\xspace and \ensuremath{m_{\mathrm{T}}^{\mu}}\xspace distributions in the \ensuremath{\Pe\PGm}\xspace and \ensuremath{\PGm\tauh}\xspace final states, before splitting the events into the categories described above. The category definitions are indicated by the vertical dashed lines in the figures. An overview of the categories described above is given in Fig.~\ref{fig:categories}. 
\begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{Figure_004-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_004-b.pdf} \caption { Observed and expected distributions of (left) \ensuremath{D_{\zeta}}\xspace in the \ensuremath{\Pe\PGm}\xspace final state and (right) \ensuremath{m_{\mathrm{T}}^{\mu}}\xspace in the \ensuremath{\PGm\tauh}\xspace final state. The distributions are shown in the global ``no \PQb tag'' category before any further event categorization and after an individual background-only fit to the data in each corresponding variable. The grey shaded band represents the complete set of uncertainties used for signal extraction, after the fit. A detailed discussion of the data modelling is given in Section~\ref{sec:data-model}. The vertical dashed lines indicate the category definitions in each of the final states, as described in the text. In the lower panels of each figure the ratio of the observed numbers of events per bin to the background expectation is shown. } \label{fig:sub-categories} \end{figure} \begin{figure}[b] \centering \includegraphics[width=1.\textwidth]{Figure_005.pdf} \caption { Overview of the categories used for the extraction of the signal for the model-independent \ensuremath{\phi}\xspace search for hypothesized values of $\ensuremath{m_{\phi}}\xspace\geq250\GeV$, the vector leptoquark search, and the interpretation of the data in MSSM benchmark scenarios. 
} \label{fig:categories} \end{figure} In all cases the signal is extracted from distributions of the total transverse mass~\cite{Aad:2014vgg} defined as \begin{linenomath} \begin{equation} \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace = \sqrt{\mT^{2}(\ensuremath{\ptvec^{\kern1pt\PGt_{1}}}\xspace,\ensuremath{\ptvec^{\kern1pt\PGt_{2}}}\xspace)+\mT^{2}(\ensuremath{\ptvec^{\kern1pt\PGt_{1}}}\xspace, \ptvecmiss)+\mT^{2}(\ensuremath{\ptvec^{\kern1pt\PGt_{2}}}\xspace,\ptvecmiss)}, \label{eq:mttot} \end{equation} \end{linenomath} where $\tau_{1(2)}$ refers to the first (second) \PGt final state indicated in the \ensuremath{\Pe\PGm}\xspace, \ensuremath{\Pe\tauh}\xspace, \ensuremath{\PGm\tauh}\xspace, and \ensuremath{\tauh\tauh}\xspace final state labels, and \mT between two objects with transverse momenta \ensuremath{\ptvec^{\kern1pt\PGt_{1}}}\xspace and \ensuremath{\ptvec^{\kern1pt\PGt_{2}}}\xspace is defined in Eq.~(\ref{eqn:mt_definition}). This quantity is expected to provide superior discriminating power between resonant signals with $\ensuremath{m_{\phi}}\xspace\gtrsim250\GeV$ and nonpeaking backgrounds, such as \ensuremath{\PW}{+}\,\text{jets}\xspace or \ttbar production in the high-mass tails of the distribution. This strategy is used for the model-independent \ensuremath{\phi}\xspace search, to extract the expected signal for hypothesized values of $\ensuremath{m_{\phi}}\xspace\geq250\GeV$. It is also used for the extraction of the \ensuremath{\mathrm{A}}\xspace and \PH signal (for $\ensuremath{m_{\PA}}\xspace,\,\ensuremath{m_{\PH}}\xspace\gtrsim250\GeV$), when interpreting the data in MSSM benchmark scenarios, and for the vector leptoquark search, which is most sensitive to an excess over the background expectation for $\ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace\gtrsim250\GeV$ as will be discussed in Section~\ref{sec:simulation-signal}. 
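The total transverse mass of Eq.~(\ref{eq:mttot}) combines the three pairwise transverse masses of the two visible \PGt candidates and \ptvecmiss. A self-contained illustrative Python sketch (2-vectors as $(p_x, p_y)$ pairs; names ours):

```python
import math

def m_t_pair(a, b):
    """m_T of two transverse-momentum 2-vectors, as in Eq. (mt_definition)."""
    pt_a = math.hypot(*a)
    pt_b = math.hypot(*b)
    dphi = math.atan2(a[1], a[0]) - math.atan2(b[1], b[0])
    return math.sqrt(2.0 * pt_a * pt_b * (1.0 - math.cos(dphi)))

def m_t_tot(p_tau1, p_tau2, pt_miss):
    """Total transverse mass of Eq. (mttot): quadratic sum of the three
    pairwise transverse masses of tau_1, tau_2, and ptmiss."""
    return math.sqrt(m_t_pair(p_tau1, p_tau2) ** 2
                     + m_t_pair(p_tau1, pt_miss) ** 2
                     + m_t_pair(p_tau2, pt_miss) ** 2)
```

Because every term grows with the angular separation and \pt of the decay products, heavy resonances populate the high-\ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace tail, while nonpeaking backgrounds fall steeply there.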
To increase the sensitivity of the analyses for the model-independent \ensuremath{\phi}\xspace search for hypothesized values of $\ensuremath{m_{\phi}}\xspace<250\GeV$ and the low-mass resonance \Ph for the interpretation of the data in MSSM benchmark scenarios, this signal extraction strategy is modified as discussed in the following sections. \subsubsection{Modifications for the low-mass model-independent \texorpdfstring{\ensuremath{\phi}\xspace}{phi} search} \label{sec:low-mass-categorisation} For hypothesized values of $\ensuremath{m_{\phi}}\xspace<250\GeV$, the background from \mbox{\ensuremath{\PZ/\PGg^{*}\to\PGt\PGt}}\xspace production, which features a peaking mass distribution in a region close to the signal mass, starts to exceed the nonpeaking backgrounds. The \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace distribution loses discrimination power, and some of the categories that were introduced to increase the acceptance for high-mass signals are no longer useful. Therefore, the signal extraction strategy is modified in the following way. The low-\ensuremath{D_{\zeta}}\xspace and loose-\mT categories are removed. The remaining ``no \PQb tag'' categories are further split by \ensuremath{\pt^{\ditau}}\xspace, obtained as the \pt of the vectorial sum of the visible \PGt decay products and \ptvecmiss, according to $\ensuremath{\pt^{\ditau}}\xspace<50\GeV$, $50<\ensuremath{\pt^{\ditau}}\xspace<100\GeV$, $100<\ensuremath{\pt^{\ditau}}\xspace<200\GeV$, and $\ensuremath{\pt^{\ditau}}\xspace>200\GeV$, where \ensuremath{\pt^{\ditau}}\xspace is used as an estimate for the \ensuremath{\phi}\xspace \pt (\ensuremath{\pt^{\phi}}\xspace) in data. No further splitting based on \ensuremath{\pt^{\ditau}}\xspace is applied to the ``\PQb tag'' categories because of the lower event populations in these categories. In summary, this leads to 26 event categories per data-taking year. An overview of this modified set of categories is given in Fig.~\ref{fig:categories_lowmass}.
\begin{figure}[t] \centering \includegraphics[width=1.\textwidth]{Figure_006.pdf} \caption{ Overview of the categories used for the extraction of the signal for the model-independent \ensuremath{\phi}\xspace search for $60\leq\ensuremath{m_{\phi}}\xspace<250\GeV$. } \label{fig:categories_lowmass} \end{figure} In these categories, the signal is extracted from distributions of a likelihood-based estimate of the invariant mass of the \ensuremath{\PGt\PGt}\xspace system, \ensuremath{m_{\PGt\PGt}}\xspace, prior to the decay of the \PGt leptons~\cite{Bianchini:2014vza}. This estimate combines the measurement of \ptvecmiss and its covariance matrix with the measurements of the visible \ensuremath{\PGt\PGt}\xspace decay products, utilizing the matrix elements for unpolarized \PGt decays~\cite{Bullock:1992yt} for the decay into leptons and the two-body phase space~\cite{ParticleDataGroup:2020ssz} for the decay into hadrons. The relative resolution of \ensuremath{m_{\PGt\PGt}}\xspace amounts to about 10--25\%, depending on the kinematic properties of the \ensuremath{\PGt\PGt}\xspace system and the \ensuremath{\PGt\PGt}\xspace final state, where the latter is related to the number of neutrinos that escape detection. This approach exploits the better \ensuremath{m_{\phi}}\xspace resolution of \ensuremath{m_{\PGt\PGt}}\xspace compared with \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace, together with the usually harder \ensuremath{\pt^{\phi}}\xspace spectrum compared with the \pt spectrum of \mbox{\ensuremath{\PZ/\PGg^{*}\to\PGt\PGt}}\xspace production. \subsubsection{Modifications for the MSSM interpretation} The MSSM predicts three neutral Higgs bosons \ensuremath{\phi}\xspace, one of which is identified with \ensuremath{\mathrm{\PH(125)}}\xspace. Each benchmark scenario has to match the observed \ensuremath{\mathrm{\PH(125)}}\xspace properties.
To exploit the best possible experimental knowledge about \ensuremath{\mathrm{\PH(125)}}\xspace, all events in the global ``no \PQb tag'' category are split by \ensuremath{m_{\PGt\PGt}}\xspace. For events with $\ensuremath{m_{\PGt\PGt}}\xspace>250\GeV$, the categories described in Section~\ref{sec:standard-categorisation} are used. For events with $\ensuremath{m_{\PGt\PGt}}\xspace<250\GeV$, the neural-network-based (NN) analysis, which was used for the stage-0 simplified template cross section measurements of Ref.~\cite{CMS:2022kdi}, is used to obtain the most precise estimates from data for \ensuremath{\mathrm{\PH(125)}}\xspace production via gluon fusion (\ensuremath{\Pg\Pg\Ph}\xspace), vector boson fusion (VBF), and vector boson associated production (\ensuremath{\PV\Ph}\xspace). Although the NN is trained specifically to target events with an SM-like \ensuremath{\phi}\xspace with $\ensuremath{m_{\phi}}\xspace=125\GeV$, signal events for the additional Higgs bosons can also enter the NN categories if $\ensuremath{m_{\phi}}\xspace\lesssim 250\GeV$, and the $y_{l}$ discriminators contribute to the separation of such events from the background. This modification adds 18 background and 8 signal categories from the NN-analysis per data-taking year. We will refer to these as the ``NN categories'' throughout this paper. In these categories, the \ensuremath{\mathrm{\PH(125)}}\xspace signal is extracted from distributions of the NN output functions $y_{l}$ in each signal and background category $l$. For the NN-analysis in Ref.~\cite{CMS:2022kdi}, $\mT^{\ensuremath{\Pe\PGm}\xspace}$, calculated from $\ensuremath{\ptvec^{\kern1pt\Pe}}\xspace+\ensuremath{\ptvec^{\kern1pt\PGm}}\xspace$ and \ptvecmiss, is required to be less than 60\GeV in the \ensuremath{\Pe\PGm}\xspace final state, to prevent event overlap with analyses of other \ensuremath{\mathrm{\PH(125)}}\xspace decay modes in the SM interpretation.
For the analysis presented here, this requirement is replaced by $\ensuremath{D_{\zeta}}\xspace>-35\GeV$. A summary of the categories and discriminating variables used for signal extraction for each of the analyses presented in this paper is given in Table~\ref{tab:categories-vs-analyses}. \begin{table}[t] \centering \topcaption{ Event categories and discriminants used for the extraction of the signals, for the searches described in this paper. We note that \ensuremath{m_{\phi}}\xspace refers to the hypothesized mass of the model-independent \ensuremath{\phi}\xspace search, while \ensuremath{m_{\PGt\PGt}}\xspace refers to the reconstructed mass of the \ensuremath{\PGt\PGt}\xspace system before the decays of the \PGt leptons, and thus to an estimate of \ensuremath{m_{\phi}}\xspace in data. The variable $y_{l}$ refers to the output functions of the NNs used for signal extraction in Ref.~\cite{CMS:2022kdi}. } \begin{tabular}{llllc} \multicolumn{2}{l}{Search} & Categories & Additional & Discr. 
\\ & & & selections & variable \\ \hline \multirow{2}{*}{Model-independent (\ensuremath{\phi}\xspace)} & $\ensuremath{m_{\phi}}\xspace<250\GeV$ & Fig.~\ref{fig:categories_lowmass} & \NA & \ensuremath{m_{\PGt\PGt}}\xspace \\ & $\ensuremath{m_{\phi}}\xspace\geq 250\GeV$ & Fig.~\ref{fig:categories} & \NA & \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace \\[\cmsTabSkip] \multicolumn{2}{l}{Vector leptoquark (\ensuremath{\mathrm{U}_1}\xspace)} & Fig.~\ref{fig:categories} & \NA & \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace \\[\cmsTabSkip] \multicolumn{2}{l}{\multirow{3}{*}{MSSM benchmark scenarios (\ensuremath{\mathrm{A}}\xspace, \PH, \Ph)}} & \multirow{2}{*}{NN-analysis} & $\ensuremath{m_{\PGt\PGt}}\xspace<250\GeV$, & \multirow{2}{*}{$y_{l}$} \\ &&& $\ensuremath{D_{\zeta}}\xspace>-35\GeV$ (in \ensuremath{\Pe\PGm}\xspace) &\\ [1ex] & & Fig.~\ref{fig:categories} & $\ensuremath{m_{\PGt\PGt}}\xspace>250\GeV$ & \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace \\ \end{tabular} \label{tab:categories-vs-analyses} \end{table} \section{Background and signal modelling} \label{sec:data-model} All SM background sources that are relevant after the event selection described in Section~\ref{sec:selection} are listed in Table~\ref{tab:bg-processes}. The expected background composition depends on the \ensuremath{\PGt\PGt}\xspace final state, event category, and the tested signal mass hypothesis. The most abundant source of background in the ``\PQb tag'' categories is \ttbar production. In the ``no \PQb tag'' categories \mbox{\ensuremath{\PZ/\PGg^{*}\to\PGt\PGt}}\xspace forms the largest fraction of background processes, followed by \ensuremath{\PW}{+}\,\text{jets}\xspace production and events containing purely quantum chromodynamics (QCD) induced gluon and light-flavour quark jets, referred to as QCD multijet production. 
These backgrounds are grouped according to their experimental signature into: \begin{enumerate} \item[(i)] events containing genuine \PGt lepton pairs (\ensuremath{\PGt\PGt}\xspace); \item[(ii)] events with quark- or gluon-induced jets misidentified as \tauh candidates (\ensuremath{\text{jet}\to\tauh}\xspace) or light leptons (\ensuremath{\text{jet}\to\Pell}\xspace); \item[(iii)] \ttbar events where an intermediate \PW boson decays into an electron, muon, or \PGt lepton, which do not fall into the previous groups (labelled as ``\ttbar''); \item[(iv)] remaining background processes that are of minor importance for the analysis and not yet included in any of the previous event classes (labelled as ``others'' in later figures). \end{enumerate} Event group (ii) mostly contains events from QCD multijet, \ensuremath{\PW}{+}\,\text{jets}\xspace, and \ttbar production. Event group (iv) comprises diboson production and single~\PQt quark production (labelled as ``electroweak'' in Fig.~\ref{fig:sub-categories} left), \mbox{\ensuremath{\PZ/\PGg^{*}\to\PGm\PGm}}\xspace, and \mbox{\ensuremath{\PZ/\PGg^{*}\to\Pe\Pe}}\xspace events. For the background modelling, four different methods are used depending on the event group: \ensuremath{\PGt\PGt}\xspace events are obtained from the \PGt-embedding method~\cite{Sirunyan:2019drn}, discussed in Section~\ref{sec:tau-embedding}; \ensuremath{\text{jet}\to\tauh}\xspace events are obtained from the ``fake factor'' (\ensuremath{F_{\mathrm{F}}}\xspace) method, discussed in Section~\ref{sec:FF-method}; \ensuremath{\text{jet}\to\Pell}\xspace events mostly from QCD multijet production with \ensuremath{\Pe\PGm}\xspace pairs in the final state are estimated using the ``same-sign'' (SS) method, discussed in Section~\ref{sec:em-background}; all other background and signal events are obtained from full event simulation, discussed in Section~\ref{sec:simulation}.
\begin{table}[t] \topcaption{ Background processes contributing to the event selection, as discussed in Section~\ref{sec:selection}. The symbol $\Pell$ corresponds to an electron or muon. The second column refers to the experimental signature in the analysis, the last four columns indicate the estimation methods used to model each corresponding signature, as described in Sections~\ref{sec:tau-embedding}--\ref{sec:simulation}. Diboson and single \PQt production are part of the process group iv) discussed in Section~\ref{sec:data-model}. QCD(\ensuremath{\Pe\PGm}\xspace) refers to QCD multijet production with an \ensuremath{\Pe\PGm}\xspace pair in the final state. } \label{tab:bg-processes} \centering \begin{tabular}{lr@{$\to$}lcccc} & \multicolumn{2}{c}{} &\multicolumn{4}{c}{Estimation method} \\ Background process & \multicolumn{2}{c}{Final-state signature} & \PGt-emb. & \ensuremath{F_{\mathrm{F}}}\xspace & Sim. & SS \\ \hline \multirow{3}{*}{\ensuremath{\PZ/\PGg^{*}}\xspace} & \multicolumn{2}{c}{\ensuremath{\PGt\PGt}\xspace} & $\checkmark$ & \NA & \NA & \NA \\ & $\hspace{1.1cm}$Jet & \tauh & \NA & $\checkmark$ & \NA & \NA \\ & \multicolumn{2}{c}{$\Pell\Pell$} & \NA & \NA & $\checkmark$ & \NA \\ [\cmsTabSkip] \multirow{3}{*}{\ttbar} & \multicolumn{2}{c}{\ensuremath{\PGt\PGt}\xspace} & $\checkmark$ & \NA & \NA & \NA \\ & Jet & \tauh & \NA & $\checkmark$ & \NA & \NA \\ & \multicolumn{2}{c}{\ensuremath{\hphantom{\PGt}\Pell+\mathrm{X}}\xspace} & \NA & \NA & $\checkmark$ & \NA \\ [\cmsTabSkip] \multirow{3}{*}{Diboson+single \PQt} & \multicolumn{2}{c}{\ensuremath{\PGt\PGt}\xspace} & $\checkmark$ & \NA & \NA & \NA \\ & Jet & \tauh & \NA & $\checkmark$ & \NA & \NA \\ & \multicolumn{2}{c}{\ensuremath{\hphantom{\PGt}\Pell+\mathrm{X}}\xspace} & \NA & \NA & $\checkmark$ & \NA \\ [\cmsTabSkip] \ensuremath{\PW}{+}\,\text{jets}\xspace & Jet & \tauh & \NA & $\checkmark$ & \NA & \NA \\[\cmsTabSkip] \multirow{3}{*}{QCD multijet} & Jet & \tauh & \NA & $\checkmark$ & \NA & \NA \\ & Jet 
& $\Pell$ & \NA & \NA & $\checkmark$ & \NA \\ & \multicolumn{2}{c}{QCD(\ensuremath{\Pe\PGm}\xspace)} & \NA & \NA & \NA & $\checkmark$ \\[\cmsTabSkip] \ensuremath{\mathrm{\PH(125)}}\xspace & \multicolumn{2}{c}{\ensuremath{\PGt\PGt}\xspace} & \NA & \NA & $\checkmark$ & \NA \\ \end{tabular} \end{table} \subsection{\texorpdfstring{Backgrounds with genuine \PGt lepton pairs (\ensuremath{\PGt\PGt}\xspace)} {Backgrounds with genuine tau lepton pairs (tau tau)}} \label{sec:tau-embedding} For all events where the decay of a \PZ boson results in two genuine \PGt leptons, the \PGt-embedding method, as described in Ref.~\cite{Sirunyan:2019drn}, is used. For this purpose, \ensuremath{\PGm\PGm}\xspace events are selected in data. All energy deposits of the muons are removed from the event record and replaced by simulated \PGt lepton decays with the same kinematic properties as the selected muons. In this way the method relies only on the simulation of the well-understood \PGt lepton decay and its energy deposits in the detector, while all other parts of the event, such as the identification and reconstruction of jets, including \PQb jets, or the non-\PGt related parts of \ptvecmiss are obtained from data. This results in an improved modelling of the data compared with the simulation of the full process. In turn, several simulation-to-data corrections, as detailed in Section~\ref{sec:corrections}, are not needed. The selected muons predominantly originate from \mbox{\ensuremath{\PZ/\PGg^{*}\to\PGm\PGm}}\xspace events. However, contributions from other processes resulting in two genuine \PGt leptons, like \ttbar or diboson production, are also covered by this model. A detailed discussion of the selection of the original \ensuremath{\PGm\PGm}\xspace events, the exact procedure itself, its range of validity, and related uncertainties is reported in Ref.~\cite{Sirunyan:2019drn}. 
For a selection with no (at least one) \PQb jet in the event, as described in Section~\ref{sec:selection}, 97\% (84\%) of the \ensuremath{\PGm\PGm}\xspace events selected for the \PGt-embedding method are expected to originate from \mbox{\ensuremath{\PZ/\PGg^{*}\to\PGm\PGm}}\xspace and ${<}1\%$ (14\%) from \ttbar production. \subsection{\texorpdfstring{Backgrounds with jets misidentified as hadronically decaying \PGt leptons (\ensuremath{\text{jet}\to\tauh}\xspace)}{Backgrounds with jets misidentified as hadronically decaying tau leptons (jet to tau)}} \label{sec:FF-method} The main processes contributing to \ensuremath{\text{jet}\to\tauh}\xspace events in the \ensuremath{\Pe\tauh}\xspace, \ensuremath{\PGm\tauh}\xspace, and \ensuremath{\tauh\tauh}\xspace final states are QCD multijet, \ensuremath{\PW}{+}\,\text{jets}\xspace, and \ttbar production. These events are estimated using the \ensuremath{F_{\mathrm{F}}}\xspace method described in Refs.~\cite{Sirunyan:2018qio, Sirunyan:2018zut}, and adapted to the analyses described in this paper. For this purpose, the signal region (SR), defined by the event selection given in Section~\ref{sec:selection}, is complemented by three additional regions: the application region (AR) and two determination regions \ensuremath{\mathrm{DR}^{i}}\xspace, where $i$ stands for QCD or \ensuremath{\PW}{+}\,\text{jets}\xspace. For the AR a looser working point for the identification of the \tauh candidate is chosen and the events from the SR are excluded, which is the only selection difference with respect to the SR. In this way the AR forms an orthogonal, though still adjacent, sideband to the SR that is enriched in \ensuremath{\text{jet}\to\tauh}\xspace events. The events in the AR are then multiplied with a transfer function, which is obtained from each corresponding \ensuremath{\mathrm{DR}^{i}}\xspace or simulation, to estimate the contribution of \ensuremath{\text{jet}\to\tauh}\xspace events in the SR. 
The background processes in the AR and each corresponding \ensuremath{\mathrm{DR}^{i}}\xspace that are not targeted by this method are estimated either from simulation or the \PGt-embedding method and subtracted from the data. In the \ensuremath{\tauh\tauh}\xspace final state, where QCD multijet production contributes ${\gtrsim}95\%$ of the events in the AR, the transfer function is determined from DR$^{\text{QCD}}$ only, for which the charges of the two selected \tauh candidates are required to be of the same sign. This function is assumed to be applicable also to the small fraction of \ensuremath{\PW}{+}\,\text{jets}\xspace and \ttbar events in the AR. In this final state, both \tauh candidates usually originate from \ensuremath{\text{jet}\to\tauh}\xspace misidentification. We require only the \tauh candidate with the larger \pt to fulfil the AR requirements, which provides an estimate for events where only this \tauh candidate is misidentified. Events in which the \tauh candidate with the larger \pt is a genuine \PGt lepton and the one with the lower \pt is misidentified, which constitute ${\approx}2\%$ of the total \ensuremath{\text{jet}\to\tauh}\xspace background, are modelled from simulation. In the \ensuremath{\Pe\tauh}\xspace (\ensuremath{\PGm\tauh}\xspace) final state, where the processes contributing to the AR are shared more evenly, separate contributions to the transfer function \ensuremath{\FF^{i}}\xspace are used, where the index $i$ runs over the processes of QCD multijet, \ensuremath{\PW}{+}\,\text{jets}\xspace, and \ttbar production. For QCD multijet and \ensuremath{\PW}{+}\,\text{jets}\xspace production, each \ensuremath{\FF^{i}}\xspace is derived in its corresponding \ensuremath{\mathrm{DR}^{i}}\xspace. For DR$^{\text{QCD}}$ we require $0.05<\ensuremath{I_{\text{rel}}^{\Pe(\mu)}}\xspace<0.15$ and the charges of the selected $\Pe (\PGm)$ and the \tauh candidate to be of the same sign.
For DR$^{\ensuremath{\PW}{+}\,\text{jets}\xspace}$ we require $\ensuremath{m_{\mathrm{T}}^{\Pe(\mu)}}\xspace>70\GeV$ and the absence of \PQb jets. The estimate of \ensuremath{\FF^{\ttbar}}\xspace is obtained from simulation. Each \ensuremath{\FF^{i}}\xspace is then used to estimate the yield \ensuremath{N_{\text{SR}}}\xspace and kinematic properties of the combination of the main contributing backgrounds $i$ in the SR from the number of events \ensuremath{N_{\text{AR}}}\xspace in the AR according to \begin{linenomath} \begin{equation} \label{eq:FF} \ensuremath{N_{\text{SR}}}\xspace = \left(\sum\limits_{i}w_{i}\ensuremath{\FF^{i}}\xspace\right)\ensuremath{N_{\text{AR}}}\xspace, \qquad i=\text{QCD, }\PW\text{+jets, }\ttbar. \end{equation} \end{linenomath} The \ensuremath{\FF^{i}}\xspace are combined into a weighted sum, using the simulation-based estimate of the fractions $w_{i}$ of each process in the AR. A template fit to the data in the AR yields a similar result for the $w_{i}$. Each \ensuremath{\FF^{i}}\xspace is computed on an event-by-event basis. It mainly depends on the \pt of the \tauh candidate with the larger \pt, \ensuremath{\pt^{\tauh}}\xspace, the ratio $\ensuremath{\pt^{\text{jet}}}\xspace/\ensuremath{\pt^{\tauh}}\xspace$ where \ensuremath{\pt^{\text{jet}}}\xspace corresponds to the \pt of the jet seeding the \tauh reconstruction, and the jet multiplicity \ensuremath{N_{\text{Jets}}}\xspace. Each \ensuremath{\FF^{i}}\xspace is further subject to a number of residual corrections derived from both control regions in data and simulation to take subleading dependencies of the \ensuremath{\FF^{i}}\xspace into account.
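The weighted extrapolation of Eq.~(\ref{eq:FF}) can be sketched as follows. This is an illustrative Python sketch with made-up fractions and fake factors (the numerical values are not measured values from this analysis):

```python
def jet_to_tauh_yield(n_ar, fractions, fake_factors):
    """Jet->tauh background yield in the SR from the AR, Eq. (FF):
    N_SR = (sum_i w_i FF^i) * N_AR, with i in {QCD, W+jets, ttbar}.
    The fractions w_i are taken from simulation and must sum to one."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-6
    return n_ar * sum(w * fake_factors[proc] for proc, w in fractions.items())

# Illustrative numbers only:
w = {"QCD": 0.6, "W+jets": 0.3, "ttbar": 0.1}
ff = {"QCD": 0.15, "W+jets": 0.20, "ttbar": 0.18}
n_sr = jet_to_tauh_yield(1000.0, w, ff)  # (0.09 + 0.06 + 0.018) * 1000 = 168 events
```

In the analysis the \ensuremath{\FF^{i}}\xspace are not constants but per-event functions of \ensuremath{\pt^{\tauh}}\xspace, $\ensuremath{\pt^{\text{jet}}}\xspace/\ensuremath{\pt^{\tauh}}\xspace$, and \ensuremath{N_{\text{Jets}}}\xspace, so the sum runs over events rather than over a single yield as in this simplified sketch.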
Depending on the transfer function \ensuremath{\FF^{i}}\xspace and the \ensuremath{\PGt\PGt}\xspace final state, these are dependencies on \ensuremath{\pt^{\Pell}}\xspace, the invariant mass of the visible decay products of the \ensuremath{\PGt\PGt}\xspace system, \ensuremath{I_{\text{rel}}^{\Pell}}\xspace, or \ensuremath{\pt^{\tauh}}\xspace of the second-leading \tauh candidate. \subsection{\texorpdfstring{Backgrounds with jets misidentified as electron-muon pairs (QCD(\ensuremath{\Pe\PGm}\xspace))}{Backgrounds with jets misidentified as electron-muon pairs (QCD(emu))}} \label{sec:em-background} The background from QCD multijet production where two quark- or gluon-induced jets are misidentified as an \ensuremath{\Pe\PGm}\xspace pair is estimated using the SS method. In this case, an AR is distinguished from the SR by requiring the charges of the electron and muon to have the same sign. A sideband region DR is defined by requiring the muon to be nonisolated ($0.2<\ensuremath{I_{\text{rel}}^{\PGm}}\xspace<0.5$). From this DR a same-sign (SS) to opposite-sign (OS) transfer function \ensuremath{F_{\mathrm{T}}}\xspace is obtained to extrapolate the number \ensuremath{N_{\text{AR}}}\xspace of events in the AR to the number \ensuremath{N_{\text{SR}}}\xspace of events in the SR according to \begin{linenomath} \begin{equation} \label{eq:TF} \ensuremath{N_{\text{SR}}}\xspace = \ensuremath{F_{\mathrm{T}}}\xspace\,\ensuremath{N_{\text{AR}}}\xspace. \end{equation} \end{linenomath} The function \ensuremath{F_{\mathrm{T}}}\xspace primarily depends on the distance $\ensuremath{{\Delta}R}\xspace(\Pe, \PGm)$ between the \Pe and \PGm trajectories in $\eta$-$\varphi$ and \ensuremath{N_{\text{Jets}}}\xspace. Additional dependencies on the electron and muon \pt enter via a bias correction, ranging from 0.85 to 0.90.
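The extrapolation of Eq.~(\ref{eq:TF}) amounts to a single multiplicative transfer, with a further global factor for events with \PQb jets as described in the text. An illustrative Python sketch (the default values of the arguments are placeholders, not measured values):

```python
def qcd_em_yield(n_ar, f_t, n_bjets=0, r_b=0.73):
    """QCD(e mu) yield in the SR from the same-sign AR, Eq. (TF):
    N_SR = F_T * N_AR. For events with at least one b jet, the global
    correction factor r_b (0.71-0.75 depending on the data-taking year)
    is applied in addition. All numerical defaults here are illustrative."""
    scale = f_t * (r_b if n_bjets >= 1 else 1.0)
    return n_ar * scale
```

In the analysis, \ensuremath{F_{\mathrm{T}}}\xspace is itself a function of $\ensuremath{{\Delta}R}\xspace(\Pe, \PGm)$ and \ensuremath{N_{\text{Jets}}}\xspace, with the \pt-dependent bias correction folded in, rather than the single constant used in this sketch.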
To validate the method, a second transfer function $\ensuremath{F_{\mathrm{T}}}\xspace^{\prime}$ is calculated from a modified DR$^{\prime}$ with an isolated muon ($\ensuremath{I_{\text{rel}}^{\PGm}}\xspace<0.2$) and nonisolated electron ($0.15<\ensuremath{I_{\text{rel}}^{\Pe}}\xspace<0.5$), which is applied to the SS selection of the DR. The resulting event yield and shapes of the \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace and \ensuremath{m_{\PGt\PGt}}\xspace distributions are compared to the OS selection of the DR. This test reveals a consistent result within the statistical uncertainties of the estimate, for events with $\ensuremath{N_{\PQb\text{-jets}}}\xspace=0$. For events with $\ensuremath{N_{\PQb\text{-jets}}}\xspace\geq1$, a global correction factor $r_{\PQb}$ is required, with a value of 0.71--0.75 depending on the year of data-taking. A potential bias from requiring the muon to be nonisolated in the definition of DR is checked with a third definition of the transfer function $\ensuremath{F_{\mathrm{T}}}\xspace^{\prime \prime}$, in a DR$^{\prime\prime}$ with a nonisolated muon ($0.2<\ensuremath{I_{\text{rel}}^{\PGm}}\xspace<0.5$) and electron ($0.15<\ensuremath{I_{\text{rel}}^{\Pe}}\xspace<0.5$). This test yields a further correction of 0.94--0.95, depending on the year of data-taking, accounting for the fact that $r_{\PQb}$, with an isolated muon, is systematically smaller by ${\approx}5\%$ than in the case of a nonisolated muon. \subsection{Simulated backgrounds and signal} \label{sec:simulation} In the \ensuremath{\tauh\tauh}\xspace final state, the \PGt-embedding and \ensuremath{F_{\mathrm{F}}}\xspace methods cover ${\approx}98\%$ of all expected background events. In the \ensuremath{\Pe\tauh}\xspace and \ensuremath{\PGm\tauh}\xspace final states, the fractions of expected background events described by these two methods are ${\approx}50\%$ and 40\%, respectively.
In the \ensuremath{\Pe\PGm}\xspace final state, ${\approx}53\%$ of all events are covered by either the \PGt-embedding or SS method. All remaining events originate from processes such as \PZ boson, \ttbar, or diboson production, where at least one decay of a vector boson into an electron or muon is not covered by any of the previously discussed methods. These backgrounds and the signal processes are modelled using the simulation of the full processes. \subsubsection{Background processes} The \ensuremath{\PW}{+}\,\text{jets}\xspace and \mbox{\ensuremath{\PZ/\PGg^{*}\to\Pell\Pell}}\xspace processes are simulated at LO accuracy in the strong coupling \alpS, using the \MGvATNLO 2.2.2 (2.4.2) event generator~\cite{Alwall:2011uj, Alwall:2014hca} for the simulation of the data taken in 2016 (2017--2018). To increase the number of simulated events in regions of high signal purity, supplementary samples are generated with up to four outgoing partons in the hard interaction. For diboson production, \MGvATNLO is used at next-to-LO (NLO) precision in \alpS. In each case, the FxFx~\cite{Frederix:2012ps} (MLM~\cite{Alwall:2007fs}) prescription is used to match the NLO (LO) matrix element calculation with the parton shower model. For \ttbar~\cite{Alioli:2011as} and ($t$-channel) single~\PQt quark production~\cite{Frederix:2012dh}, samples are generated at NLO precision in \alpS using \POWHEG 2.0~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd,Jezo:2015aia}. The \POWHEG version 1.0 at NLO precision is used for single~\PQt quark production in association with a \PW boson ($\PQt\PW$ channel)~\cite{Re:2010bp}. When compared with data, \ensuremath{\PW}{+}\,\text{jets}\xspace, \mbox{\ensuremath{\PZ/\PGg^{*}\to\Pell\Pell}}\xspace, \ttbar, and single~\PQt quark events in the $\PQt\PW$ channel are normalized to their cross sections at next-to-NLO (NNLO) precision in \alpS~\cite{Melnikov:2006kv,Czakon:2011xx,Kidonakis:2013zqa}. 
Single~\PQt quark ($t$-channel) and diboson events are normalized to their cross sections at NLO precision in \alpS or higher~\cite{Kidonakis:2013zqa, Campbell:2011bn,Gehrmann:2014fva}. \subsubsection{Signal processes} \label{sec:simulation-signal} The kinematic properties of single \Ph production are simulated at NLO precision in \alpS using \POWHEG 2.0 separately for the production via \ensuremath{\Pg\Pg\Ph}\xspace~\cite{Bagnaschi:2011tu}, VBF~\cite{Nason:2009ai}, or in association with a \PZ (\ensuremath{\PZ\Ph}\xspace) or \PW (\ensuremath{\PW\Ph}\xspace) boson~\cite{Luisoni:2013cuh,Granata:2017iod}. For \ensuremath{\Pg\Pg\Ph}\xspace production, the distributions of the \Ph boson \pt and the jet multiplicity in the simulation are tuned to match the NNLO accuracy obtained from full phase space calculations with the NNLOPS event generator~\cite{Hamilton:2013fea,Hamilton:2015nsa}. For this purpose, \Ph is assumed to behave as expected from the SM. This applies to the modelling of \ensuremath{\mathrm{\PH(125)}}\xspace as part of the background for the model-independent \ensuremath{\phi}\xspace search, as well as for the SM and the MSSM hypotheses for the interpretation of the data in MSSM benchmark scenarios, where \Ph is associated with \ensuremath{\mathrm{\PH(125)}}\xspace with properties as expected from the SM. The production of \ensuremath{\phi}\xspace, \PH, and \ensuremath{\mathrm{A}}\xspace bosons via gluon fusion is simulated at NLO precision in \alpS using the 2HDM implementation of \POWHEG 2.0~\cite{Bagnaschi:2011tu}. To account for the multiscale nature of the process in the NLO plus parton shower prediction, the \pt spectra corresponding to the contributions from the \PQt quark only, \PQb quark only, and $\PQt\PQb$-interference are each calculated separately. 
The \POWHEG damping factor \ensuremath{h_{\text{damp}}}\xspace, which controls the matching between the matrix element calculation and the parton shower, is set specifically for each contribution as proposed in Refs.~\cite{Harlander:2014uea,Bagnaschi:2015bop,Bagnaschi:2015qta}. For the model-independent \ensuremath{\phi}\xspace search, the individual distributions are combined according to their contribution to the total cross section as expected for an SM-like Higgs boson with given mass. For the tests of MSSM benchmark scenarios, where the contributions of the individual distributions also depend on the model parameters, these distributions are scaled using the effective Yukawa couplings as predicted by the corresponding benchmark model~\cite{MSSM_benchmark}, before combining them into one single prediction. In this context, the \tanb-enhanced SUSY corrections to the $\ensuremath{\phi}\xspace\PQb\PQb$ couplings are also included via the corresponding effective Yukawa couplings, where appropriate. Other SUSY contributions have been checked to amount to less than a few percent and are neglected. An example of the \ensuremath{\mathrm{A}}\xspace boson \pt spectrum for $\ensuremath{m_{\PA}}\xspace=1.6\TeV$ and $\tanb=30$ is shown in Fig.~\ref{fig:signal-templates} (left). The \ensuremath{\PQb\PQb\phi}\xspace production is simulated at NLO precision in \alpS using the corresponding \POWHEG 2.0 implementation~\cite{Jager:2015hka} in the four-flavour scheme (4FS). \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{Figure_007-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_007-b.pdf} \caption{ Composition of the signal for the MSSM interpretation of the data and the vector leptoquark search. 
The left figure shows the generator level \ensuremath{\mathrm{A}}\xspace boson \pt density for the MSSM \ensuremath{M_{\mathrm{h}}^{125}}\xspace scenario for $\ensuremath{m_{\PA}}\xspace=1.6\TeV$ and $\tanb =30$, split by the contributions from the \PQt quark only, the $\PQb$ quark only, and the $\PQt\PQb$-interference term. The right figure shows the distribution of \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace at reconstruction level in the \ensuremath{\tauh\tauh}\xspace final state for \ensuremath{\mathrm{U}_1}\xspace $t$-channel exchange with $\ensuremath{m_{\text{U}}}\xspace=1\TeV$ and $\ensuremath{g_{\text{U}}}\xspace=1.5$, for the signal with and without the interference term for the VLQ BM 1 scenario. The \ensuremath{\tauh\tauh}\xspace final state is shown, since it is the most sensitive one for this search. The bins of the distributions are divided by their width and the distribution is normalized to the expected signal yield for 138\fbinv. } \label{fig:signal-templates} \end{figure} The signal process of the \ensuremath{\mathrm{U}_1}\xspace $t$-channel exchange is simulated in the five-flavour scheme (5FS) at LO precision in \alpS using the \MGvATNLO event generator, v2.6.5~\cite{Baker:2019sli}. Events are generated with up to one additional outgoing parton from the matrix element calculation and matched following the MLM prescription, with the matching scale \ensuremath{Q_{\text{match}}}\xspace set to 40\GeV. The contribution from on-shell $\ensuremath{\mathrm{U}_1}\xspace\to\PQq\PGt$ production and decay is excluded during the event generation. Samples are produced with $\ensuremath{g_{\text{U}}}\xspace=1$, for several values of \ensuremath{m_{\text{U}}}\xspace between 1 and 5\TeV. 
We observe no large dependence of either the templates used for signal extraction or the overall cross section on the assumed \ensuremath{\mathrm{U}_1}\xspace decay width $\Gamma$, even for variations by factors of 0.5 and 2. Therefore, for each considered value of \ensuremath{m_{\text{U}}}\xspace, we choose $\Gamma$ to approximately match the value predicted for \ensuremath{\mathrm{U}_1}\xspace production with couplings as obtained from the global fit presented in Ref.~\cite{Cornella:2021sby}. We expect a sizeable effect of destructive interference between the \ensuremath{\mathrm{U}_1}\xspace signal and \mbox{\ensuremath{\PZ/\PGg^{*}\to\PGt\PGt}}\xspace production, where the relative sizes of the interference and noninterference contributions depend on \ensuremath{g_{\text{U}}}\xspace. To include this dependence, we generate separate samples for each contribution to form signal templates, which are negative in the case of the interference contribution. These are scaled by $\ensuremath{g_{\text{U}}}\xspace^{4}$ (for the noninterference contribution) and $\ensuremath{g_{\text{U}}}\xspace^{2}$ (for the interference contribution), respectively, and combined to form the overall signal distributions for any value of \ensuremath{g_{\text{U}}}\xspace. Finally, the resulting signal event yields are normalized to the cross sections for the inclusive \ensuremath{\mathrm{U}_1}\xspace-mediated $\ensuremath{\Pp\Pp}\xspace\to\PGt\PGt$ process, computed at LO precision in \alpS. The contribution of the \ensuremath{\mathrm{U}_1}\xspace $t$-channel exchange to the \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace distribution in the \ensuremath{\tauh\tauh}\xspace final state for $\ensuremath{m_{\text{U}}}\xspace=1\TeV$ and $\ensuremath{g_{\text{U}}}\xspace=1.5$ for the VLQ BM 1 scenario is shown in Fig.~\ref{fig:signal-templates} (right).
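The coupling-dependent combination of the two signal templates can be sketched as follows; the bin contents are placeholders, and only the $\ensuremath{g_{\text{U}}}\xspace^{4}$ and $\ensuremath{g_{\text{U}}}\xspace^{2}$ scaling structure is taken from the text.

```python
# Sketch of the U1 signal construction: the pure-signal
# (noninterference) template scales as g_U^4 and the negative
# signal-Z/gamma* interference template scales as g_U^2. Both are
# generated at g_U = 1; the bin contents below are placeholders.

def combine_templates(pure, interference, g_u):
    """Bin-by-bin combination of the two contributions for a given g_U."""
    return [g_u**4 * p + g_u**2 * i for p, i in zip(pure, interference)]

pure_template = [0.5, 1.0, 2.0]             # scales as g_U^4
interference_template = [-1.0, -0.5, -0.2]  # destructive, scales as g_U^2

signal = combine_templates(pure_template, interference_template, g_u=1.5)

# At smaller couplings the interference term can dominate, so a bin
# content may be negative, i.e. a net reduction of the tau-tau yield.
low_coupling = combine_templates(pure_template, interference_template, g_u=1.0)
```

Because the two templates scale with different powers of \ensuremath{g_{\text{U}}}\xspace, a single pair of generated samples suffices to build the prediction for any coupling value.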
As can be seen from the figure, a nontrivial contribution of the signal to the overall \ensuremath{\PGt\PGt}\xspace event yield in \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace is expected, with a reduction for $\ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace \lesssim250\GeV$ and an enhancement otherwise. Both features may contribute to the signal inference, while the sensitivity of the analysis relies on the yield enhancement for $\ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace\gtrsim250\GeV$, as will be discussed in more detail in Section~\ref{sec:results_VLQ}. We note that for the \ensuremath{\phi}\xspace searches presented in this paper, interference effects with \ensuremath{\PGt\PGt}\xspace backgrounds, \eg, from \mbox{\ensuremath{\PZ/\PGg^{*}\to\PGt\PGt}}\xspace production, are negligible due to the different spin configurations of the \ensuremath{\PGt\PGt}\xspace final states. \subsubsection{Common processing} The PDF4LHC15~\cite{Butterworth:2015oua} (NNPDF3.1~\cite{Ball:2017nwa}) parton distribution functions (PDFs) are used for the simulation of the \ensuremath{\Pg\Pg\phi}\xspace and \ensuremath{\PQb\PQb\phi}\xspace (\ensuremath{\mathrm{U}_1}\xspace) signal processes. For all other processes, the NNPDF3.0~\cite{Ball:2014uwa} (NNPDF3.1) PDFs are used for the simulation of the data taken in 2016 (2017--2018). The description of the underlying event is parameterized according to the CUETP8M1~\cite{Khachatryan:2015pea} and CP5~\cite{Sirunyan:2019dfx} tunes for the simulation of the data taken in 2016 and 2017--2018, respectively. Parton showering and hadronization, as well as the \PGt lepton decays, are modelled using the \PYTHIA event generator~\cite{Sjostrand:2014zea}, where versions 8.212 and 8.226 are used for the simulation of the data taken in 2016, and version 8.230 is used for the data taken in 2017--2018.
For all simulated events, additional inclusive inelastic \ensuremath{\Pp\Pp}\xspace collisions generated with \PYTHIA are added according to the expected PU profile in data. All events generated are passed through a \GEANTfour-based~\cite{Agostinelli:2002hh} simulation of the CMS detector and reconstructed using the same version of the CMS event reconstruction software used for the data. \subsection{Corrections to the model} \label{sec:corrections} The capability of the model to describe the data is monitored in various control regions orthogonal to the signal and background classes, and corrections and corresponding uncertainties are derived where necessary. All corrections that have been applied to the model are described in the following. Their uncertainties are discussed in Section~\ref{sec:systematic-uncertainties}. The following corrections apply equally to simulated and \PGt-embedded events, where the \PGt decay is also simulated. Since the simulation part of \PGt-embedded events happens under detector conditions that are different from the case of fully simulated events, corrections and related uncertainties may differ, as detailed in Ref.~\cite{Sirunyan:2019drn}. Corrections are derived for residual differences in the efficiencies of the selected triggers, differences in the electron and muon tracking efficiencies, and in the efficiencies of the identification and isolation requirements for electrons and muons. These corrections are obtained in bins of \pt and $\eta$ of the corresponding lepton, using the ``tag-and-probe'' method, as described in Ref.~\cite{Khachatryan:2010xn}, with \mbox{\ensuremath{\PZ/\PGg^{*}\to\Pe\Pe}}\xspace and \mbox{\ensuremath{\PZ/\PGg^{*}\to\PGm\PGm}}\xspace events. They usually do not amount to more than a few percent. The electron energy scale is adjusted to the scale measured in data using the \PZ boson mass peak in \mbox{\ensuremath{\PZ/\PGg^{*}\to\Pe\Pe}}\xspace events. 
In a similar way, corrections are obtained for the efficiency of triggering on the \tauh decay signature and for the \tauh identification efficiency. The trigger efficiency corrections are obtained from parametric fits to the trigger efficiency, as a function of \pt, derived for simulated events and data. The identification efficiency corrections are also derived as a function of the \pt of the \tauh candidate. For $\ensuremath{\pt^{\tauh}}\xspace>40\GeV$, a correction is moreover derived for each \tauh decay mode individually, which is used only in the \ensuremath{\tauh\tauh}\xspace final state. For each data-taking year and each \tauh decay mode, corrections to the energy scale of the \tauh candidates and of electrons misidentified as \tauh candidates are derived from likelihood scans of discriminating observables, such as the reconstructed \tauh candidate mass. For muons misidentified as \tauh candidates, the energy scale correction has been checked to be negligible. Corrections are applied to the magnitude and resolution of \ptvecmiss in \PGt-embedded events to account for rare cases of an incomplete removal of the energy deposits from the muons that are replaced by simulated \PGt decays during the embedding procedure. These corrections are derived by comparing \ptmiss in \PGt-embedded events with fully simulated events. The following corrections only apply to fully simulated events. During the 2016 and 2017 data taking, a gradual shift in the timing of the inputs of the ECAL L1 trigger in the region at $\abs{\eta}>2.0$ caused a specific trigger inefficiency~\cite{Sirunyan:2020zal}. For events containing an electron (a jet) with \pt larger than ${\approx}50\,({\approx}100)\GeV$, in the region of $2.5< \abs{\eta}<3.0$ the efficiency loss is 10--20\%, depending on \pt, $\eta$, and time. Corresponding corrections have been derived from data and applied to the simulation, where this effect is not present. 
The energies of jets are corrected to the expected response of the jet at the stable hadron level, using corrections measured in bins of the jet \pt and $\eta$. These corrections are usually less than 10--15\%. Residual data-to-simulation corrections are applied to the simulated event samples. They usually range from subpercent level at high jet \pt in the central part of the detector to a few percent in the forward region. The energy resolution of simulated jets is also adjusted to match the resolution in data. A correction is applied to the direction and magnitude of \ptvecmiss based on differences between estimates of the hadronic recoil in \mbox{\ensuremath{\PZ/\PGg^{*}\to\PGm\PGm}}\xspace events in data and simulation. This correction is applied to the simulated \mbox{\ensuremath{\PZ/\PGg^{*}\to\Pell\Pell}}\xspace, \ensuremath{\PW}{+}\,\text{jets}\xspace, \Ph, and \ensuremath{\phi}\xspace signal events, where a hadronic recoil against a single particle is well defined. The efficiencies for genuine and misidentified \PQb jets to pass the working points of the \PQb jet identification discriminant, as given in Section~\ref{sec:selection}, are determined from data, using \ttbar events for genuine \PQb jets and jet-associated \PZ boson production for jets originating from light-flavour quarks. Data-to-simulation corrections are obtained for these efficiencies and used to correct the number of \PQb jets in the simulation. Data-to-simulation corrections are further applied to simulated events where an electron (muon) is reconstructed as a \tauh candidate, to account for residual differences in the \ensuremath{\Pe(\PGm)\to\tauh}\xspace misidentification rate between data and simulation. In a similar way, a correction is applied to account for residual differences in the \ensuremath{\PGm\to\Pe}\xspace misidentification rate between data and simulation. 
The dilepton mass and \pt spectra in simulated \mbox{\ensuremath{\PZ/\PGg^{*}\to\Pell\Pell}}\xspace events are corrected to better match the data. To do this, the dilepton mass and \pt are measured in data and simulation in \ensuremath{\PGm\PGm}\xspace events, and the simulated events are corrected to match the spectra in data. In addition, all simulated \ttbar events are weighted to better match the top quark \pt distribution observed in data~\cite{Khachatryan:2015oqa}. The overall normalization of \ttbar events is constrained using the \ttbar control region described in Section~\ref{sec:selection}. \section{Systematic uncertainties} \label{sec:systematic-uncertainties} The uncertainty model used for the analysis comprises theoretical and experimental uncertainties, and uncertainties due to the limited population of template distributions available for the background model. The last group of uncertainties is incorporated for each bin of each corresponding template individually following the approach proposed in Refs.~\cite{Barlow:1993dm,Conway:2011in}. All other uncertainties lead to correlated changes across bins either in the form of normalization changes or as general nontrivial shape-altering variations. Depending on the way they are derived, correlations may also arise across data-taking years, samples, or individual uncertainties. \subsection{Uncertainties related to the \texorpdfstring{\PGt}{tau}-embedding method or the simulation} \label{sec:uncertainties} The following uncertainties, related to the reconstruction of electrons, muons, and \tauh candidates after selection, apply to simulated and \PGt-embedded events. Unless stated otherwise, they are partially correlated across \PGt-embedded and simulated events. \subsubsection{Uncertainties common to signal and background events} Uncertainties in the identification efficiency of electrons and muons amount to 2\%, correlated across all years. 
Since no significant dependence on the \pt or $\eta$ of each corresponding lepton is observed, these uncertainties are introduced as normalization uncertainties. A similar reasoning applies to uncertainties in the electron and muon trigger efficiencies, which also amount to 2\% each. Because of differences in the online selections, they are treated as uncorrelated for single-lepton and dilepton triggers. This may result in shape-altering effects in the overall model, since the two trigger types act on different ranges of lepton \pt. For fully simulated events, an uncertainty in the electron energy scale is derived from the calibration of ECAL crystals, and applied on an event-by-event basis. For \PGt-embedded events, uncertainties of 0.5--1.25\%, determined separately for the ECAL barrel and endcap regions, are derived for the corrections described in Section~\ref{sec:corrections}. Because of the varying detector conditions, and the different ways the uncertainties are determined, they are treated as uncorrelated across simulated and \PGt-embedded events. They lead to shape-altering variations and are treated as correlated across data-taking years. The muon momentum is precisely known, and a variation within the expected uncertainties was verified to have no effect on the analysis. Uncertainties in the \tauh identification efficiency are between 3 and 9\% in bins of \tauh lepton \pt. These are dominated by statistical uncertainties and are, therefore, treated as uncorrelated across decay modes, \pt bins, and data-taking years. The same is true for the uncertainties in the \tauh energy scale, which amount to 0.2--1.1\%, depending on the \tauh lepton \pt and decay mode. For the energy scale of electrons misidentified as \tauh candidates, the uncertainties are 1--6.5\%. All \tauh energy scale uncertainties are also treated as uncorrelated across data-taking years as they are predominantly statistical in nature.
The uncertainty in the energy scale of muons misidentified as \tauh is 1\%. Uncertainties in the \tauh trigger efficiencies are typically \ensuremath{\mathcal{O}}(10\%), depending on the \tauh lepton \pt. They are obtained from parametric fits to data and simulation, and are treated as uncorrelated across triggers and data-taking years. All uncertainties discussed in this paragraph lead to shape-altering variations. Four further sources of uncertainty are considered for \PGt-embedded events. A 4\% normalization uncertainty arises from the efficiency of the \ensuremath{\PGm\PGm}\xspace selection in data, which is unfolded during the \PGt-embedding procedure. Most of this uncertainty originates from the triggers used for selection. Since the trigger configurations changed over time, this uncertainty is treated as uncorrelated across data-taking years. An additional shape uncertainty is introduced to quantify the consistency of the embedding method in a sample of \ensuremath{\PGm\PGm}\xspace events. For this purpose, dedicated event samples are produced where the muons selected in data are replaced by simulated muons instead of \PGt lepton decays. These events are compared with the originally selected \ensuremath{\PGm\PGm}\xspace events in data and residual differences in the \ensuremath{\PGm\PGm}\xspace mass and \pt spectra are used as uncertainties. Another shape- and normalization-altering uncertainty in the yield of $\ttbar\to\ensuremath{\PGm\PGm}\xspace+ \mathrm{X}$ decays, which are part of the \PGt-embedded event samples, ranges from subpercent level to 8\%, depending on the event composition of the model. For this uncertainty, the number and shape of \ttbar events contained in the \PGt-embedded event samples are estimated from simulation, where the corresponding decay has been selected at the parton level. This estimate is then varied by ${\pm}10\%$ to account for the \ttbar cross section and acceptance uncertainties. 
Finally, an uncertainty in the \ptmiss correction for the \PGt-embedded events described in Section~\ref{sec:corrections} is applied. Since this correction is derived from a comparison with fully simulated events, this uncertainty is related to the imperfect \ptvecmiss reconstruction in the simulation. For fully simulated events, the following additional uncertainties arise. Uncertainties in the \ensuremath{\Pe(\PGm)\to\tauh}\xspace misidentification rate are 18--40\% for electrons and 7--65\% for muons, depending on the $\eta$ of the \tauh candidate. These uncertainties apply only to simulated \mbox{\ensuremath{\PZ/\PGg^{*}\to\Pe\Pe}}\xspace and \mbox{\ensuremath{\PZ/\PGg^{*}\to\PGm\PGm}}\xspace events, which are of marginal importance for the analysis. The same is true for the uncertainty in the reweighting in the \mbox{\ensuremath{\PZ/\PGg^{*}\to\Pell\Pell}}\xspace dilepton mass and \pt, discussed in Section~\ref{sec:corrections}, which is typically smaller than 1\%. A normalization uncertainty due to the timing shift of the inputs of the ECAL L1 trigger described in Section~\ref{sec:corrections} amounts to 2--3\%. Uncertainties in the energy calibration and resolution of jets are applied with different correlations depending on their sources, which arise from the statistical limitations of the measurements used for calibration, the time-dependence of the energy measurements in data due to detector ageing, and bias corrections introduced to cover residual differences between simulation and data. They range from subpercent level to \ensuremath{\mathcal{O}}(10\%), depending on the kinematic properties of the jets in the event. Similar uncertainties, with similar ranges, are applied for the identification rates for \PQb jets and for the misidentification rates for light-flavour quark or gluon jets. Depending on the process under consideration, two independent uncertainties in \ptmiss are applied. 
For processes that are subject to recoil corrections, \ie \mbox{\ensuremath{\PZ/\PGg^{*}\to\Pell\Pell}}\xspace, \ensuremath{\PW}{+}\,\text{jets}\xspace, \Ph, or \ensuremath{\phi}\xspace production, uncertainties in the calibration and resolution of the hadronic recoil are applied; they typically result in changes to the event yields of up to 5\%. For all other processes, an uncertainty in \ptvecmiss is derived from the amount of energy carried by unclustered particle candidates, which are not contained in jets, in the event~\cite{Sirunyan:2019kia}. This uncertainty typically results in changes to the event yields of up to 10\%. The integrated luminosities of the 2016, 2017, and 2018 data-taking periods are individually known with uncertainties in the 1.2--2.5\% range~\cite{CMS-LUM-17-003, CMS-PAS-LUM-17-004,CMS-PAS-LUM-18-002}, while the total integrated luminosity for the years 2016--2018 has an uncertainty of 1.6\%; the improvement in precision reflects the (uncorrelated) time evolution of some systematic effects. Uncertainties in the predictions of the normalizations of all simulated processes amount to 4\% for \mbox{\ensuremath{\PZ/\PGg^{*}\to\Pell\Pell}}\xspace and \ensuremath{\PW}{+}\,\text{jets}\xspace production~\cite{Melnikov:2006kv}, 6\% for \ttbar production~\cite{Czakon:2011xx,Kidonakis:2013zqa}, and 5\% for diboson and single~\PQt quark production~\cite{Kidonakis:2013zqa,Campbell:2011bn,Gehrmann:2014fva}, where used in the analyses. These uncertainties are correlated across data-taking years. A shape-altering uncertainty in the reweighting of the top quark \pt described in Section~\ref{sec:corrections} is derived by applying the correction twice or not applying it at all. This uncertainty has only a very small effect on the final discriminant.
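The top quark \pt reweighting and its up/down variations (correction applied twice, or not at all) can be sketched as follows. The exponential weight form and its coefficients are illustrative assumptions, not the parameterization used in the analysis:

```python
# Sketch of the top quark pt reweighting and its uncertainty variation:
# each ttbar event gets a weight built from the top and antitop pt; the
# uncertainty is evaluated by applying the correction twice (weight
# squared) or not at all (weight of 1). The exponential form and the
# coefficients a, b below are illustrative assumptions.
import math

def top_pt_weight(pt_top, pt_antitop, a=0.06, b=-0.0005):
    """Geometric mean of an assumed per-quark data/simulation weight."""
    sf = lambda pt: math.exp(a + b * pt)
    return math.sqrt(sf(pt_top) * sf(pt_antitop))

nominal = top_pt_weight(100.0, 150.0)
up, down = nominal**2, 1.0   # correction applied twice / not at all
```

Because the nominal weight is close to unity for typical top quark \pt, squaring it or dropping it produces the small, shape-only variation described above.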
\subsubsection{Uncertainties in the signal modelling} Theoretical uncertainties in the acceptance of \ensuremath{\PQb\PQb\phi}\xspace signal events are obtained from variations of the renormalization (\ensuremath{\mu_{\mathrm{R}}}\xspace) and factorization (\ensuremath{\mu_{\mathrm{F}}}\xspace) scales, the \ensuremath{h_{\text{damp}}}\xspace factor, and the PDFs. The scale uncertainty is obtained from the envelope of the six variations of \ensuremath{\mu_{\mathrm{R}}}\xspace and \ensuremath{\mu_{\mathrm{F}}}\xspace by factors of 0.5 and 2, omitting the variations where one scale is multiplied by 2 and the corresponding other scale by 0.5, as recommended in Ref.~\cite{deFlorian:2016spz}. The scale \ensuremath{h_{\text{damp}}}\xspace is varied by factors of $1/\sqrt{2}$ and $\sqrt{2}$. The uncertainty from the variation of \ensuremath{\mu_{\mathrm{R}}}\xspace and \ensuremath{\mu_{\mathrm{F}}}\xspace, and the uncertainty from the variation of \ensuremath{h_{\text{damp}}}\xspace are added linearly, following the recommendation in Ref.~\cite{deFlorian:2016spz}, resulting in an overall uncertainty of 1--8\% (1--5\%) for the ``\PQb tag'' (``no \PQb tag'') categories, depending on the tested mass. The uncertainties due to PDF variations and the uncertainty in \alpS are obtained following the PDF4LHC recommendations, taking the root mean square of the variation of the results when using different replicas of the default PDF4LHC sets as described, \eg, in Ref.~\cite{Butterworth:2015oua}. They range between 1 and 2\%. Uncertainties in the acceptance of the \ensuremath{\Pg\Pg\phi}\xspace process are also obtained from variations of \ensuremath{\mu_{\mathrm{R}}}\xspace, \ensuremath{\mu_{\mathrm{F}}}\xspace, and \ensuremath{h_{\text{damp}}}\xspace.
The \ensuremath{\mu_{\mathrm{R}}}\xspace and \ensuremath{\mu_{\mathrm{F}}}\xspace scales are varied as described above for the \ensuremath{\PQb\PQb\phi}\xspace process, whereas the \ensuremath{h_{\text{damp}}}\xspace scale is varied by factors of 0.5 and 2 as suggested in Ref.~\cite{Bagnaschi:2015bop}. The influence of the former (latter) variation on the signal acceptance amounts to 20\% (35\%) for the smallest \ensuremath{m_{\phi}}\xspace values. For larger \ensuremath{m_{\phi}}\xspace values, the variation is at the subpercent level. In both cases the uncertainties also result in shape-altering effects in the overall model. For the parameter scan in the MSSM interpretations, theoretical uncertainties in the \ensuremath{\Pg\Pg\phi}\xspace and \ensuremath{\PQb\PQb\phi}\xspace cross sections are included, as described in Ref.~\cite{Bagnaschi:2791954}. This includes uncertainties in the \ensuremath{\mu_{\mathrm{R}}}\xspace and \ensuremath{\mu_{\mathrm{F}}}\xspace scales, the PDFs, and \alpS. The uncertainties are evaluated separately for each \ensuremath{m_{\PA}}\xspace-\tanb point under consideration. They are typically 5--20\% (10--25\%) for \ensuremath{\Pg\Pg\phi}\xspace (\ensuremath{\PQb\PQb\phi}\xspace) production. Several sources of theoretical uncertainty in the \ensuremath{\mathrm{U}_1}\xspace signal prediction are included. The uncertainty due to the \ensuremath{\mu_{\mathrm{R}}}\xspace and \ensuremath{\mu_{\mathrm{F}}}\xspace scale variations is about 15\%. The uncertainties due to the PDFs and \alpS variations are about 15 and 4\%, respectively. The \ensuremath{Q_{\text{match}}}\xspace and parton shower uncertainties affect the signal acceptances in the ``\PQb tag'' categories, with magnitudes of about 11 and 1\%, respectively, and in the ``no \PQb tag'' categories, with magnitudes of 5 and 6\%, respectively.
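The six-point \ensuremath{\mu_{\mathrm{R}}}\xspace/\ensuremath{\mu_{\mathrm{F}}}\xspace envelope described above, which drops the two anticorrelated scale combinations, can be sketched as follows; the acceptance values per scale choice are placeholders:

```python
# Sketch of the renormalization/factorization scale envelope: vary
# mu_R and mu_F by factors of 0.5 and 2, drop the two anticorrelated
# combinations (0.5, 2) and (2, 0.5), and take the envelope of the
# relative deviations from the nominal. All values are placeholders.
from itertools import product

def scale_envelope(xsec):
    """xsec maps (kR, kF) scale factors to an acceptance or cross section."""
    nominal = xsec[(1.0, 1.0)]
    deviations = [
        xsec[(kr, kf)] / nominal - 1.0
        for kr, kf in product((0.5, 1.0, 2.0), repeat=2)
        if (kr, kf) != (1.0, 1.0) and {kr, kf} != {0.5, 2.0}
    ]
    return min(deviations), max(deviations)

# Hypothetical acceptances for each (mu_R, mu_F) scale-factor pair:
acc = {
    (1.0, 1.0): 0.100,
    (0.5, 0.5): 0.104, (0.5, 1.0): 0.103, (1.0, 0.5): 0.101,
    (2.0, 1.0): 0.097, (1.0, 2.0): 0.098, (2.0, 2.0): 0.095,
    (0.5, 2.0): 0.110, (2.0, 0.5): 0.090,  # omitted from the envelope
}
lo, hi = scale_envelope(acc)
```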
The uncertainty in the \ensuremath{\betaL^{\mathrm{s}\tau}}\xspace parameter is estimated by varying the coupling strength by the uncertainties obtained in the fit presented in Ref.~\cite{Cornella:2021sby} and summarized in Table~\ref{tab:betal_values}. The resulting uncertainty varies the signal yields by 4--12\%. The uncertainty in the signal acceptance due to the choice of flavour scheme is estimated by comparing the predictions in the 4FS and 5FS calculations, which mainly affect the \ensuremath{N_{\PQb\text{-jets}}}\xspace distribution. The resulting uncertainty has a magnitude of 25\% (18\%) for the ``\PQb tag'' (``no \PQb tag'') categories. For all results shown in the following, the expectation for SM Higgs boson production is included in the model used for the statistical inference of the signal. Uncertainties due to different choices of \ensuremath{\mu_{\mathrm{R}}}\xspace and \ensuremath{\mu_{\mathrm{F}}}\xspace for the calculation of the production cross section of the SM Higgs boson amount to 3.9\% for \ensuremath{\Pg\Pg\Ph}\xspace, 0.4\% for VBF, 2.8\% for \ensuremath{\PZ\Ph}\xspace, and 0.5\% for \ensuremath{\PW\Ph}\xspace production~\cite{Alioli:2008tz,Bagnaschi:2011tu,Nason:2009ai,Luisoni:2013cuh, Hartanto:2015uka}; uncertainties due to different choices for the PDFs and \alpS amount to 3.2\%, 2.1\%, 1.6\%, and 1.9\% for these four production modes, respectively. \subsection{\texorpdfstring{Uncertainties related to jets misidentified as an electron, muon, or \tauh candidate}{Systematic uncertainties related to jets misidentified as an electron, muon, or tauh candidate}} \label{sec:uncertainties-FF} For the \ensuremath{F_{\mathrm{F}}}\xspace method, the following uncertainties apply. The \ensuremath{\FF^{i}}\xspace and their corrections are subject to statistical fluctuations in each corresponding \ensuremath{\mathrm{DR}^{i}}\xspace and simulation. 
The corresponding uncertainties are split into a normalization and a shape-altering part and propagated into the final discriminant. They are typically 1--10\% and are treated as uncorrelated across the kinematic and topological bins where they are derived. An additional uncertainty is defined by varying the choice of the functional form for the parametric fits. Uncertainties are also applied to cover residual corrections and extrapolation factors, varying from a few percent to \ensuremath{\mathcal{O}}(10\%), depending on the kinematic properties of the \tauh candidate and the topology of the event. These are both normalization and shape-altering uncertainties. An additional source of uncertainty concerns the subtraction of processes other than the enriched process in each corresponding \ensuremath{\mathrm{DR}^{i}}\xspace. These are subtracted from the data using simulated or \PGt-embedded events. The combined shape of the events to be removed is varied by 10\%, and the measurements are repeated. The impacts of these variations are then propagated to the final discriminant as shape-altering uncertainties. The uncertainty in the estimation of the three main background fractions in the AR is derived from a variation of each individual contribution by 10\%, increasing or decreasing the remaining fractions such that the sum of all contributions remains unchanged. The amount of variation is motivated by the uncertainty in the production cross sections and acceptances of the involved processes, and by the constraint on the process composition that can be obtained from the AR. The effect of this variation is found to be very small, since usually one of the contributions dominates the event composition in the AR. \begin{table}[!htbp] \centering \topcaption{ Summary of systematic uncertainties discussed in the text. The columns indicate the source of uncertainty, the process class that it applies to, the variation, and how it is correlated with other uncertainties.
A checkmark is given also for partial correlations. More details are given in the text. } \cmsTable{ \begin{tabular}{llccccr@{--}lcc} & & \multicolumn{4}{c}{Process} & \multicolumn{2}{c}{ } & \multicolumn{2}{c}{Correlated across} \\ \multicolumn{2}{l}{Uncertainty} & Sim. & \PGt-emb. & \ensuremath{F_{\mathrm{F}}}\xspace & SS & \multicolumn{2}{c}{Variation} & Years & Processes \\ \hline \multirow{4}{*}{\PGt-emb.} & Acceptance & \NA & $\checkmark$ & \NA & \NA & \multicolumn{2}{c}{4\%} & \NA & \NA \\ & \ensuremath{\PGm\PGm}\xspace closure & \NA & $\checkmark$ & \NA & \NA & \multicolumn{2}{c}{See text} & $\checkmark$ & \NA \\ & \ttbar fraction & \NA & $\checkmark$ & \NA & \NA & 0.1 & 8\% & \NA & \NA \\ & \ptmiss & \NA & $\checkmark$ & \NA & \NA & \multicolumn{2}{c}{See text} & \NA & \NA \\[\cmsTabSkip] \multirow{2}{*}{\PGm} & Identification & $\checkmark$ & $\checkmark$ & \NA & \NA & \multicolumn{2}{c}{2\%} & $\checkmark$ & $\checkmark$ \\ & Trigger & $\checkmark$ & $\checkmark$ & \NA & \NA & \multicolumn{2}{c}{2\%} & \NA & $\checkmark$ \\[\cmsTabSkip] \multirow{3}{*}{\Pe} & Identification & $\checkmark$ & $\checkmark$ & \NA & \NA & \multicolumn{2}{c}{2\%} & $\checkmark$ & $\checkmark$ \\ & Trigger & $\checkmark$ & $\checkmark$ & \NA & \NA & \multicolumn{2}{c}{2\%} & \NA & $\checkmark$ \\ & Energy scale & $\checkmark$ & $\checkmark$ & \NA & \NA & \multicolumn{2}{c}{See text} & $\checkmark$ & $\checkmark$ \\[\cmsTabSkip] \multirow{3}{*}{\tauh} & Identification & $\checkmark$ & $\checkmark$ & \NA & \NA & 3 & 8\% & \NA & $\checkmark$ \\ & Trigger & $\checkmark$ & $\checkmark$ & \NA & \NA & 5 & 10\% & \NA & $\checkmark$ \\ & Energy scale & $\checkmark$ & $\checkmark$ & \NA & \NA & 0.2 & 1.1\% & \NA & $\checkmark$ \\[\cmsTabSkip] \multirow{2}{*}{$\PGm\to\tauh$} & Misidentification & $\checkmark$ & \NA & \NA & \NA & 7 & 67\% & \NA & \NA \\ & Energy scale & $\checkmark$ & \NA & \NA & \NA & \multicolumn{2}{c}{1\%} & \NA & \NA \\[\cmsTabSkip] \multirow{2}{*}{$\Pe\to\tauh$} & 
Misidentification & $\checkmark$ & \NA & \NA & \NA & 18 & 41\% & \NA & \NA \\ & Energy scale & $\checkmark$ & \NA & \NA & \NA & 1 & 6.5\% & \NA & \NA \\[\cmsTabSkip] \multicolumn{2}{l}{$\text{Jet}\to\Pe$ misidentification} & $\checkmark$ & \NA & \NA & \NA & \multicolumn{2}{c}{10\%} & $\checkmark$ & $\checkmark$ \\ \multicolumn{2}{l}{$\text{Jet}\to\PGm$ misidentification} & $\checkmark$ & \NA & \NA & \NA & \multicolumn{2}{c}{10\%} & $\checkmark$ & $\checkmark$ \\ \multicolumn{2}{l}{\ensuremath{\PGm\to\Pe}\xspace misidentification} & $\checkmark$ & \NA & \NA & \NA & 15 & 45\% & \NA & $\checkmark$ \\ \multicolumn{2}{l}{\ensuremath{\PZ/\PGg^{*}}\xspace mass and \pt reweighting} & $\checkmark$ & \NA & \NA & \NA & \multicolumn{2}{c}{${<}1\%$} & $\checkmark$ & \NA \\ \multicolumn{2}{l}{Jet energy scale \& resolution} & $\checkmark$ & \NA & \NA & \NA & 0.1 & 10\% & $\checkmark$ & $\checkmark$ \\ \multicolumn{2}{l}{\PQb-jet (mis)identification} & $\checkmark$ & \NA & \NA & \NA & 1 & 10\% & \NA & $\checkmark$ \\ \multicolumn{2}{l}{\ptmiss calibration} & $\checkmark$ & \NA & \NA & \NA & \multicolumn{2}{c}{See text} & $\checkmark$ & $\checkmark$ \\ \multicolumn{2}{l}{ECAL timing shift} & $\checkmark$ & \NA & \NA & \NA & 2 & 3\% & $\checkmark$ & $\checkmark$ \\ \multicolumn{2}{l}{\PQt quark \pt reweighting} & $\checkmark$ & \NA & \NA & \NA & \multicolumn{2}{c}{See text} & $\checkmark$ & \NA \\ \multicolumn{2}{l}{Integrated luminosity} & $\checkmark$ & \NA & \NA & \NA & 1.2 & 2.5\% & $\checkmark$ & $\checkmark$ \\ \multicolumn{2}{l}{Background cross sections} & $\checkmark$ & \NA & \NA & \NA & 2 & 5\% & $\checkmark$ & \NA \\ \multicolumn{2}{l}{Signal theoretical uncertainties} & $\checkmark$ & \NA & \NA & \NA & \multicolumn{2}{c}{See text} & $\checkmark$ & \NA \\ [\cmsTabSkip] \multirow{4}{*}{\ensuremath{F_{\mathrm{F}}}\xspace} & Event count & \NA & \NA & $\checkmark$ & \NA & \ensuremath{\mathcal{O}}(1 & 10\%) & \NA & \NA \\ & Corrections & \NA & \NA & $\checkmark$ & \NA & 
\multicolumn{2}{c}{\ensuremath{\mathcal{O}}(10\%)} & \NA & \NA \\ & Non-\ensuremath{F_{\mathrm{F}}}\xspace processes & \NA & \NA & $\checkmark$ & \NA & \multicolumn{2}{c}{10\%} & \NA & \NA \\ & \ensuremath{F_{\mathrm{F}}}\xspace proc. composition & \NA & \NA & $\checkmark$ & \NA & \multicolumn{2}{c}{10\%} & \NA & \NA \\[\cmsTabSkip] \multirow{2}{*}{QCD (\ensuremath{\Pe\PGm}\xspace)} & Event count & \NA & \NA & \NA & $\checkmark$ & 2 & 4\% & \NA & \NA \\ & AR to SR extrapolations & \NA & \NA & \NA & $\checkmark$ & \multicolumn{2}{c}{\ensuremath{\mathcal{O}}(10\%)} & \NA & \NA \\ \end{tabular} \label{tab:uncertainties} } \end{table} Since the background from QCD multijet events in the \ensuremath{\Pe\PGm}\xspace final state is also determined from a DR, uncertainties that account for the statistical uncertainty in the data and the subtracted backgrounds in this DR are applied in a similar way. These uncertainties amount to 2--4\%. In addition, this background is subject to uncertainties related to the extrapolations from the DR to the corresponding SRs. These uncertainties are \ensuremath{\mathcal{O}}(10\%) depending on \ensuremath{\pt^{\Pe}}\xspace, \ensuremath{\pt^{\PGm}}\xspace, and \ensuremath{N_{\PQb\text{-jets}}}\xspace. Because of their mostly statistical nature, all uncertainties related to the \ensuremath{F_{\mathrm{F}}}\xspace and SS methods are treated as uncorrelated across data-taking years. In the \ensuremath{\Pe\PGm}\xspace final state, the subdominant contribution to the \ensuremath{\text{jet}\to\Pell}\xspace and \ensuremath{\PGm\to\Pe}\xspace backgrounds is estimated from simulation. Uncertainties in the simulated $\text{jet}\to\Pe$ and $\text{jet}\to\PGm$ misidentification rates are 10\% and 12\%, respectively. They are treated as correlated across data-taking years. 
The uncertainty in the \ensuremath{\PGm\to\Pe}\xspace misidentification rate is 15--45\%, and is treated as uncorrelated across data-taking years since it is mostly statistical in nature. A summary of all systematic uncertainties that have been discussed in this section is given in Table~\ref{tab:uncertainties}. \section{Results} \label{sec:results} The statistical model used to infer the signal from the data is defined by an extended binned likelihood of the form \begin{linenomath} \begin{equation} \mathcal{L}\left(\{k_{i}\},\{\mu_{s}\},\{\theta_{j}\}\right) =\prod \limits_{i}\mathcal{P}\Bigl(k_{i}|\sum\limits_{s}\mu_{s}\,S_{si}(\{\theta_{j} \})+\sum\limits_{b}B_{bi}(\{\theta_{j}\})\Bigr)\, \prod\limits_{j}\mathcal{C}(\widetilde{\theta}_{j}|\theta_{j}), \label{eq:likelihood} \end{equation} \end{linenomath} where $i$ labels the bins of the discriminating distributions of all categories, split by \ensuremath{\PGt\PGt}\xspace final state and data-taking year. The function $\mathcal{P}(k_{i} |\sum\mu_{s}\,S_{si}(\{\theta_{j}\})+\sum B_{bi}(\{\theta_{j}\}))$ corresponds to the Poisson probability to observe $k_{i}$ events in bin $i$ for a prediction of $\sum \mu_{s}\,S_{si}$ signal and $\sum B_{bi}$ background events. The predictions for $S_{si}$ and $B_{bi}$ are obtained from the signal and background models discussed in Section~\ref{sec:data-model}. The parameters $\mu_{s}$ act as linear scaling parameters of the corresponding signal yields $S_{s}$. Systematic uncertainties are incorporated in the form of penalty terms for additional nuisance parameters $\{\theta_{j}\}$ in the likelihood, appearing as a product with predefined probability density functions $\mathcal{C}(\widetilde{\theta}_{j}|\theta_{j})$, where $\widetilde{\theta}_{j}$ corresponds to the nominal value for $\theta_{j}$. The predefined uncertainties in the $\widetilde{\theta}_{j}$, as discussed in Section~\ref{sec:systematic-uncertainties}, may be constrained by the fit to the data. 
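A minimal numerical sketch of this likelihood construction is given below (our own illustration, not code from the analysis: a single signal process, one nuisance parameter with an assumed linear 10\% background response, and a unit-Gaussian constraint):

```python
import math

def nll(counts, signal, background, mu, theta, theta_nominal=0.0):
    """Negative log of the extended binned likelihood of Eq. (eq:likelihood),
    up to constant terms: a Poisson term per bin plus a Gaussian constraint
    penalty for the nuisance parameter (illustrative sketch; the 10% linear
    response of the background to theta is an assumption)."""
    total = 0.0
    for k, s, b in zip(counts, signal, background):
        lam = mu * s + b * (1.0 + 0.10 * theta)   # expected yield in this bin
        total += lam - k * math.log(lam)          # -log Poisson, dropping log(k!)
    total += 0.5 * (theta - theta_nominal) ** 2   # -log Gaussian constraint
    return total

# pulling the nuisance parameter away from its nominal value costs likelihood
base = nll([100, 30], [5, 2], [100, 30], mu=0.0, theta=0.0)
```

In the fit, both $\mu$ and $\theta$ are varied; the constraint term is what allows the data to constrain the predefined uncertainties, as noted above.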
The test statistic used for the inference of the signal is the profile likelihood ratio, as discussed in Refs.~\cite{CMS-NOTE-2011-005,Chatrchyan:2012tx}: \begin{linenomath} \begin{equation} q_{\mu_{s}}=-2\ln \left( \frac{\mathcal{L}(\left.\{k_{i}\}\right|\sum\limits_{s}\mu_{s}\,S_{si} (\{\hat{\theta}_{j,\mu_{s}}\})+\sum\limits_{b}B_{bi}(\{\hat{\theta}_{j,\mu_{s}}\}))} {\mathcal{L}(\left.\{k_{i}\}\right|\sum\limits_{s}\hat{\mu}_{s}\,S_{si}(\{\hat{ \theta}_{j,\hat{\mu}_{s}}\})+\sum\limits_{b}B_{bi}(\{\hat{\theta}_{j,\hat{\mu}_{ s}}\}))}\right) , \;\; 0\leq \hat{\mu}_{s} \leq \mu_{s}, \label{eq:profile-likelihood-ratio} \end{equation} \end{linenomath} where one or more parameters $\mu_{s}$ are the parameters of interest (POIs) and $\hat{\mu}_{s}$, $\hat{\theta}_{j,\mu_{s}}$, and $\hat{\theta}_{j,\hat{\mu}_{s}}$ are the values of the given parameters that maximize the corresponding likelihood. The index of $q_{\mu_{s}}$ indicates that the test statistic is evaluated for a fixed value of $\mu_{s}$. In the large number limit, the sampling distribution of $q_{\mu_{s}}$ can be approximated by analytic functions, from which the expected median and central intervals can be obtained as described in Ref.~\cite{Cowan:2010js}. The signal is inferred from the data in three different ways: \begin{enumerate} \item the model-independent \ensuremath{\phi}\xspace search features a signal model for a single narrow resonance \ensuremath{\phi}\xspace; \item for the search for vector leptoquarks, the data are interpreted in terms of the nonresonant \ensuremath{\mathrm{U}_1}\xspace $t$-channel exchange; \item the interpretation of the data in terms of MSSM benchmark scenarios relies on three resonances in the \ensuremath{\PGt\PGt}\xspace mass spectrum with mass values and rates determined by the parameters of the corresponding scenario. 
\end{enumerate} In all cases the \ttbar control region, as defined in Section~\ref{sec:selection} and shown in Figs.~\ref{fig:categories}--\ref{fig:categories_lowmass}, is used to constrain the normalization of \ttbar events and all \ttbar-related uncertainties. Detailed descriptions of the specific statistical procedures and the results obtained in each case are given in the following sections. \subsection{\texorpdfstring{Model-independent \ensuremath{\phi}\xspace search} {Model-independent phi search}} For the model-independent \ensuremath{\phi}\xspace search, we investigate \ensuremath{\Pg\Pg\phi}\xspace and \ensuremath{\PQb\PQb\phi}\xspace production, corresponding to two independent POIs $\mu_{\ensuremath{\Pg\Pg\phi}\xspace}$ and $\mu_{\ensuremath{\PQb\PQb\phi}\xspace}$ in the likelihood of Eq.~(\ref{eq:likelihood}). In the model, \ensuremath{\mathrm{\PH(125)}}\xspace is treated as background assuming the production cross sections and branching fraction for the decay into \PGt leptons as expected from the SM. For $\ensuremath{m_{\phi}}\xspace\geq250\GeV$, the signal extraction is based on binned template distributions of \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace in the 17 categories per data-taking year shown in Fig.~\ref{fig:categories}, resulting in a total of 51 input distributions for signal extraction. For $60\leq\ensuremath{m_{\phi}}\xspace<250\GeV$, binned template distributions of \ensuremath{m_{\PGt\PGt}}\xspace are used in the 26 categories shown in Fig.~\ref{fig:categories_lowmass}, resulting in 78 input distributions for signal extraction. A few examples of these input distributions in a subset of the most sensitive categories per final state are shown in Figs.~\ref{fig:mTtot-distributions} and~\ref{fig:mtt-distributions}.
In each figure the expected background distributions are represented by the stack of filled histograms in the upper panel of each subfigure, where each filled histogram corresponds to a process as discussed in Section~\ref{sec:data-model}. The grey shaded band associated with the sum of filled histograms corresponds to the combination of all uncertainties discussed in Section~\ref{sec:systematic-uncertainties}, including all correlations as obtained from the fit of the background model to the data. The lower panel of each subfigure shows the ratio of the data points to the expectation from the background model in each bin. The statistical uncertainty in the data is represented by the error bars and the uncertainty in the sum of all background processes, after the fit to the data, by the shaded band. The expected \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace (\ensuremath{m_{\PGt\PGt}}\xspace) distributions for a \ensuremath{\Pg\Pg\phi}\xspace or \ensuremath{\PQb\PQb\phi}\xspace signal with $\ensuremath{m_{\phi}}\xspace=1200\,(100)\GeV$ are also shown. \begin{figure}[htbp] \centering \includegraphics[width=.42\textwidth]{Figure_008-a.pdf} \includegraphics[width=.42\textwidth]{Figure_008-b.pdf} \includegraphics[width=.42\textwidth]{Figure_008-c.pdf} \includegraphics[width=.42\textwidth]{Figure_008-d.pdf} \includegraphics[width=.42\textwidth]{Figure_008-e.pdf} \includegraphics[width=.42\textwidth]{Figure_008-f.pdf} \caption { Distributions of \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace in the global (left) ``no \PQb tag'' and (right) ``\PQb tag'' categories in the (upper) \ensuremath{\Pe\PGm}\xspace, (middle) \ensuremath{\Pe\tauh}\xspace and \ensuremath{\PGm\tauh}\xspace, and (lower) \ensuremath{\tauh\tauh}\xspace final states. For the \ensuremath{\Pe\PGm}\xspace final state, the medium-\ensuremath{D_{\zeta}}\xspace category is displayed; for the \ensuremath{\Pe\tauh}\xspace and \ensuremath{\PGm\tauh}\xspace final states the tight-\mT categories are shown. 
The solid histograms show the stacked background predictions after a signal-plus-background fit to the data for $\ensuremath{m_{\phi}}\xspace=1.2\TeV$. The best fit \ensuremath{\Pg\Pg\phi}\xspace signal is shown by the red line. The \ensuremath{\PQb\PQb\phi}\xspace and \ensuremath{\mathrm{U}_1}\xspace signals are also shown for illustrative purposes. For all histograms, the bin contents show the event yields divided by the bin widths. The lower panel shows the ratio of the data to the background expectation after the signal-plus-background fit to the data. } \label{fig:mTtot-distributions} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=.42\textwidth]{Figure_009-a.pdf} \includegraphics[width=.42\textwidth]{Figure_009-b.pdf} \ \includegraphics[width=.42\textwidth]{Figure_009-c.pdf} \includegraphics[width=.42\textwidth]{Figure_009-d.pdf} \ \includegraphics[width=.42\textwidth]{Figure_009-e.pdf} \includegraphics[width=.42\textwidth]{Figure_009-f.pdf} \caption { Distributions of \ensuremath{m_{\PGt\PGt}}\xspace in the (left) $100<\ensuremath{\pt^{\ditau}}\xspace<200\GeV$ and (right) $\ensuremath{\pt^{\ditau}}\xspace >200\GeV$ ``no \PQb tag'' categories for the (upper) \ensuremath{\Pe\PGm}\xspace, (middle) \ensuremath{\Pe\tauh}\xspace and \ensuremath{\PGm\tauh}\xspace, and (lower) \ensuremath{\tauh\tauh}\xspace final states. The solid histograms show the stacked background predictions after a signal-plus-background fit to the data for $\ensuremath{m_{\phi}}\xspace=100\GeV$. The best fit \ensuremath{\Pg\Pg\phi}\xspace signal is shown by the red line. The total background prediction as estimated from a background-only fit to the data is shown by the dashed blue line for comparison. For all histograms, the bin contents show the event yields divided by the bin widths. The lower panel shows the ratio of the data to the background expectation after the signal-plus-background fit to the data. 
The signal-plus-background and background-only fit predictions are shown by the solid red and dashed blue lines, respectively, which are also shown relative to the background expectation obtained from the signal-plus-background fit to the data. } \label{fig:mtt-distributions} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{Figure_010-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_010-b.pdf} \caption{ Expected and observed 95\% \CL upper limits on the product of the cross sections and branching fraction for the decay into \PGt leptons for (left) \ensuremath{\Pg\Pg\phi}\xspace and (right) \ensuremath{\PQb\PQb\phi}\xspace production in a mass range of $60\leq\ensuremath{m_{\phi}}\xspace\leq 3500\GeV$, in addition to \ensuremath{\mathrm{\PH(125)}}\xspace. The expected median of the exclusion limit in the absence of signal is shown by the dashed line. The dark green and bright yellow bands indicate the central 68\% and 95\% intervals for the expected exclusion limit. The black dots correspond to the observed limits. The peak in the expected \ensuremath{\Pg\Pg\phi}\xspace limit emerges from the loss of sensitivity around 90\GeV due to the background from \mbox{\ensuremath{\PZ/\PGg^{*}\to\PGt\PGt}}\xspace events. } \label{fig:results_modelindep_cmb} \end{figure} Figure~\ref{fig:results_modelindep_cmb} shows the expected and observed 95\% confidence level (\CL) upper limits on the product of the cross sections and branching fraction for the decay into \PGt leptons for \ensuremath{\Pg\Pg\phi}\xspace and \ensuremath{\PQb\PQb\phi}\xspace production in a mass range of $60\leq\ensuremath{m_{\phi}}\xspace\leq3500\GeV$. These limits have been obtained following the modified frequentist approach described in Refs.~\cite{Junk:1999kv,Read:2002hq}. When setting the limit in one production mode the POI of the other production mode is profiled. 
The limits are shown with a separation into the low-mass ($\ensuremath{m_{\phi}}\xspace<250\GeV$) and high-mass ($\ensuremath{m_{\phi}}\xspace\geq 250\GeV$) regions of the search. The expected limits in the absence of a signal span four orders of magnitude between ${\approx}10\unit{pb}$ (at $\ensuremath{m_{\phi}}\xspace=60\GeV$) and ${\approx}0.3\unit{fb}$ (at $\ensuremath{m_{\phi}}\xspace=3.5\TeV$) for both production modes, with a falling slope for increasing values of \ensuremath{m_{\phi}}\xspace. In general, the observation falls within the central 95\% interval of the expectation. For the low-mass search, the largest deviation from the expectation is observed for \ensuremath{\Pg\Pg\phi}\xspace production at $\ensuremath{m_{\phi}}\xspace=100\GeV$ with a local (global) $p$-value equivalent to 3.1 (2.7) standard deviations (s.d.). To convert the local into a global $p$-value, a number $N_{\text{trial}}$ of pseudo-data sets is generated from the input distributions of the background model to the maximum likelihood fit. For each mass hypothesis under consideration, a fit of the signal model to these pseudo-data sets is performed, and the fraction of pseudo-data sets for which the maximal fitted significance across all mass hypotheses exceeds the observed significance is determined. This fraction defines the global $p$-value. The best fit value of the product of the cross section with the branching fraction for the decay into \PGt leptons is $\sigma_{\ensuremath{\Pg\Pg\phi}\xspace}\,\mathcal{B}(\ensuremath{\phi}\xspace\to\ensuremath{\PGt\PGt}\xspace)=(5.8^{+2.5}_{-2.0})\unit{pb}$. The excess at $\ensuremath{m_{\phi}}\xspace=100\GeV$ exhibits a $p$-value of 50\% (58\%) for the compatibility across \ensuremath{\PGt\PGt}\xspace final states (data-taking years).
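The trial-factor procedure for the global $p$-value can be sketched with a toy model (an assumption made purely for illustration: independent, standard-normal local significances at each tested mass, whereas the actual pseudo-data are drawn from the fitted background model):

```python
import random

def global_p_value(observed_max_sig, n_mass_points, n_trials=20000, seed=7):
    """Fraction of background-only pseudo-experiments in which the maximal
    local significance over all mass hypotheses exceeds the observed one
    (toy sketch of the N_trial procedure described in the text)."""
    rng = random.Random(seed)
    n_exceed = 0
    for _ in range(n_trials):
        max_sig = max(rng.gauss(0.0, 1.0) for _ in range(n_mass_points))
        if max_sig > observed_max_sig:
            n_exceed += 1
    return n_exceed / n_trials

# the look-elsewhere effect: more tested masses -> larger global p-value
p_single = global_p_value(2.0, n_mass_points=1)
p_many = global_p_value(2.0, n_mass_points=20)
```

The growth of the $p$-value with the number of tested hypotheses is the reason the global significances quoted above are smaller than the local ones.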
Within the resolution of \ensuremath{m_{\PGt\PGt}}\xspace this coincides with a similar excess observed in a previous search for low-mass resonances by the CMS Collaboration in the $\PGg\PGg$ final state, where the smallest local $p$-value corresponds to a significance of 2.8 s.d.\ for a mass of 95.3\GeV~\cite{CMS:2018cyk}. The local (global) significance for the \ensuremath{\PGt\PGt}\xspace search evaluated at $\ensuremath{m_{\phi}}\xspace=95\GeV$ is 2.6 (2.3) s.d.\ and the best fit value of the product of the cross section with the branching fraction for the decay into \PGt leptons is $\sigma_{\ensuremath{\Pg\Pg\phi}\xspace}\,\mathcal{B}(\ensuremath{\phi}\xspace\to\ensuremath{\PGt\PGt}\xspace)=(7.8^{+3.9}_{-3.1})\unit{pb}$. For the high-mass search, the largest deviation from the expectation is observed for \ensuremath{\Pg\Pg\phi}\xspace production at $\ensuremath{m_{\phi}}\xspace=1.2\TeV$ with a local (global) $p$-value equivalent to 2.8 (2.4) s.d., where the best fit value of the product of the cross section with the branching fraction for the decay into \PGt leptons is $\sigma_{\ensuremath{\Pg\Pg\phi}\xspace}\,\mathcal{B}(\ensuremath{\phi}\xspace\to\ensuremath{\PGt\PGt}\xspace)=(3.1^{+1.0}_{-1.1})\unit{fb}$. The excess at $\ensuremath{m_{\phi}}\xspace=1.2\TeV$ exhibits a $p$-value of 11\% (63\%) for the compatibility across \ensuremath{\PGt\PGt}\xspace final states (data-taking years). For \ensuremath{\PQb\PQb\phi}\xspace production, no deviation from the expectation beyond the level of 2 s.d.\ is observed. Figure~\ref{fig:likelihood-scans} shows the same results in the form of maximum likelihood estimates with 68\% and 95\% \CL contours obtained from scans of the signal likelihood along the \ensuremath{\Pg\Pg\phi}\xspace and \ensuremath{\PQb\PQb\phi}\xspace cross sections, for selected values of \ensuremath{m_{\phi}}\xspace between 60\GeV and 3.5\TeV.
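In the asymptotic limit, the 68\% and 95\% \CL contours of such two-dimensional likelihood scans correspond to thresholds on $-2\Delta\ln\mathcal{L}$ given by $\chi^2$ quantiles with two degrees of freedom, which have a closed form:

```python
import math

def contour_threshold(cl, n_poi=2):
    """Threshold on -2*Delta(ln L) defining a CL contour. For two POIs the
    chi-square distribution with 2 degrees of freedom applies, whose
    quantile has the closed form -2*ln(1 - CL)."""
    if n_poi != 2:
        raise NotImplementedError("closed form shown for 2 POIs only")
    return -2.0 * math.log(1.0 - cl)

t68 = contour_threshold(0.68)   # approximately 2.28
t95 = contour_threshold(0.95)   # approximately 5.99
```

This is a sketch of the standard asymptotic construction; whether the contours in Fig.~\ref{fig:likelihood-scans} use exactly these thresholds or the full likelihood is not specified here.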
\begin{figure}[t] \centering \includegraphics[width=0.3\textwidth]{Figure_011-a.pdf} \includegraphics[width=0.3\textwidth]{Figure_011-b.pdf} \includegraphics[width=0.3\textwidth]{Figure_011-c.pdf} \ \includegraphics[width=0.3\textwidth]{Figure_011-d.pdf} \includegraphics[width=0.3\textwidth]{Figure_011-e.pdf} \includegraphics[width=0.3\textwidth]{Figure_011-f.pdf} \ \includegraphics[width=0.3\textwidth]{Figure_011-g.pdf} \includegraphics[width=0.3\textwidth]{Figure_011-h.pdf} \includegraphics[width=0.3\textwidth]{Figure_011-i.pdf} \caption{ Maximum likelihood estimates, and 68\% and 95\% \CL contours obtained from scans of the signal likelihood for the model-independent \ensuremath{\phi}\xspace search. The scans are shown for selected values of \ensuremath{m_{\phi}}\xspace between 60\GeV and 3.5\TeV. In each figure the SM expectation is $(0, 0)$. } \label{fig:likelihood-scans} \end{figure} \subsection{Search for vector leptoquarks} \label{sec:results_VLQ} The inputs for the search for vector leptoquarks are the binned template distributions of \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace in the categories shown in Fig.~\ref{fig:categories} resulting in 51 input distributions for signal extraction, for the years 2016--2018. Based on these inputs a signal is searched for in the range $1<\ensuremath{m_{\text{U}}}\xspace<5\TeV$. As discussed in Section~\ref{sec:simulation}, the \ensuremath{\mathrm{U}_1}\xspace $t$-channel exchange may reduce or enhance the yields in the \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace template distributions used for signal extraction with respect to the expectation from the background model, due to the destructive interference with the \mbox{\ensuremath{\PZ/\PGg^{*}\to\PGt\PGt}}\xspace process. An example of this effect for a signal with $\ensuremath{m_{\text{U}}}\xspace=1\TeV$, $\ensuremath{g_{\text{U}}}\xspace=1.5$, for the VLQ BM 1 scenario is shown in Fig.~\ref{fig:signal-templates} (right). 
From this figure a sizeable reduction in the yield of \ensuremath{\PGt\PGt}\xspace events is observed for $\ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace\lesssim250\GeV$ and a smaller excess for $250\lesssim\ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace\lesssim1000\GeV$. In principle, both the bins with expected deficits with respect to the SM background and the bins with expected excesses contribute to the sensitivity of the analysis. However, the bins with expected deficits occur at smaller values of \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace where the background is much larger, and thus they do not contribute significantly to the overall sensitivity. Most of the sensitivity to the \ensuremath{\mathrm{U}_1}\xspace signal instead comes from the high \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace bins due to the smaller background yields. While reduced by the destructive interference, the signal yields tend to remain positive in these bins. The overall effect of the interference term is thus to reduce the analysis sensitivity compared to the expectation without interference effects included. No statistically significant signal is observed, and 95\% \CL upper limits on \ensuremath{g_{\text{U}}}\xspace are derived for the VLQ BM 1 and 2 scenarios, as shown in Fig.~\ref{fig:vlq_exclusion-contours}, again following the modified frequentist approach as for the previously discussed search. The expected sensitivity of the analysis decreases for increasing values of \ensuremath{m_{\text{U}}}\xspace, with the limit on \ensuremath{g_{\text{U}}}\xspace following an approximately linear progression from 1.3 (0.8) to 5.6 (3.2) for the VLQ BM 1 (2) scenario. The observed limits fall within the central 95\% intervals for the expected limits in the absence of signal.
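The interplay between the pure \ensuremath{\mathrm{U}_1}\xspace contribution and the destructive interference can be illustrated with a toy per-bin parametrization (our own assumption for illustration: the pure $t$-channel term scales as $\ensuremath{g_{\text{U}}}\xspace^4$ and the interference term as $\ensuremath{g_{\text{U}}}\xspace^2$, with both templates normalized at $\ensuremath{g_{\text{U}}}\xspace=1$; the bin yields used are hypothetical):

```python
def u1_signal_yield(g_u, pure_at_g1, interference_at_g1):
    """Toy per-bin U1 signal yield: g^4 scaling for the pure t-channel
    exchange and g^2 for its interference with Z/gamma* -> tautau
    (illustrative parametrization, not the analysis implementation)."""
    return g_u**4 * pure_at_g1 + g_u**2 * interference_at_g1

# low m_T^tot bin: interference dominates -> net deficit w.r.t. the SM
low_bin = u1_signal_yield(1.5, pure_at_g1=2.0, interference_at_g1=-8.0)
# high m_T^tot bin: pure term dominates -> reduced but positive excess
high_bin = u1_signal_yield(1.5, pure_at_g1=5.0, interference_at_g1=-1.0)
```

This reproduces the qualitative pattern described above: a net deficit in background-dominated low-\ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace bins and a positive, interference-reduced yield in the sensitive high-\ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace bins.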
The expected limits are also within the 95\% confidence interval of the best fit results reported by Ref.~\cite{Cornella:2021sby}, indicating that the search is sensitive to a portion of the parameter space that can explain the \PQb physics anomalies. \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{Figure_012-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_012-b.pdf} \caption{ Expected and observed 95\% \CL upper limits on \ensuremath{g_{\text{U}}}\xspace in the VLQ BM (left) 1 and (right) 2 scenarios, in a mass range of $1<\ensuremath{m_{\text{U}}}\xspace<5\TeV$. The expected median of the exclusion limit in the absence of signal is shown by the dashed line. The dark and bright grey bands indicate the central 68\% and 95\% intervals of the expected exclusion limit. The observed excluded parameter space is indicated by the coloured blue area. For both scenarios, the 95\% confidence interval for the preferred region from the global fit presented in Ref.~\cite{Cornella:2021sby} is also shown by the green shaded area. } \label{fig:vlq_exclusion-contours} \end{figure} \subsection{MSSM interpretation of the data} For the interpretation of the data in MSSM benchmark scenarios, the signal is based on the binned distributions of \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace in the categories shown in Fig.~\ref{fig:categories}, complemented by distributions of the NN output function used for the stage-0 simplified template cross section measurement of Ref.~\cite{CMS:2022kdi}, as discussed in Section~\ref{sec:event-categories}, resulting in 129 input distributions for signal extraction. In the MSSM, the signal constitutes a multiresonance structure with contributions from \Ph, \PH, and \ensuremath{\mathrm{A}}\xspace bosons. For the scenarios chosen for this paper \Ph is associated with \ensuremath{\mathrm{\PH(125)}}\xspace. 
Any MSSM prediction has to match the observed properties of \ensuremath{\mathrm{\PH(125)}}\xspace, in particular its mass, cross sections for various production modes, and branching fraction for the decay into \PGt leptons. For the benchmark scenarios summarized in Ref.~\cite{Bagnaschi:2791954}, all model parameters have been chosen such that \ensuremath{m_{\Ph}}\xspace is compatible with the observed \ensuremath{\mathrm{\PH(125)}}\xspace mass of 125.38\GeV~\cite{Sirunyan:2020xwk}, within an uncertainty of ${\pm}3\GeV$ in most of the provided parameter space. The uncertainty of $\pm3\GeV$ in the prediction of \ensuremath{m_{\Ph}}\xspace is intended to reflect the unknown effect of higher-order corrections, as discussed in Ref.~\cite{Slavich:2020zjv}. The value of \ensuremath{m_{\Ph}}\xspace is allowed to vary within these boundaries, according to a flat distribution. For the interpretation this is taken into account by simulating the \Ph signal at the observed \ensuremath{\mathrm{\PH(125)}}\xspace mass. For \Ph production, the modes via \ensuremath{\Pg\Pg\Ph}\xspace, \PQb associated production (\ensuremath{\PQb\PQb\Ph}\xspace), VBF, and \ensuremath{\PV\Ph}\xspace production are included, and all cross sections and the branching fraction for the decay into \PGt leptons are scaled according to the MSSM predictions. To remove any dependence of these predictions on the exact value of \ensuremath{m_{\Ph}}\xspace, they are scaled to the expectation for $\ensuremath{m_{\Ph}}\xspace=125.38\GeV$, following the prescription of Ref.~\cite{Bagnaschi:2791954}. For \ensuremath{\mathrm{A}}\xspace and \PH production, gluon fusion (\ensuremath{\Pg\Pg\ensuremath{\mathrm{A}}\xspace}\xspace, \ensuremath{\Pg\Pg\PH}\xspace) and \PQb associated production (\ensuremath{\PQb\PQb\ensuremath{\mathrm{A}}\xspace}\xspace, \ensuremath{\PQb\PQb\PH}\xspace) are included. All kinematic distributions are modelled within the accuracies discussed in Section~\ref{sec:simulation}.
In particular, the \PH (\ensuremath{\mathrm{A}}\xspace) boson \pt spectra in \ensuremath{\Pg\Pg\PH}\xspace (\ensuremath{\Pg\Pg\ensuremath{\mathrm{A}}\xspace}\xspace) production are modelled as a function of \tanb for each tested value of \ensuremath{m_{\PA}}\xspace, resulting in a softer spectrum for increasing values of \tanb. In the ``no \PQb tag'' categories for $\ensuremath{m_{\PGt\PGt}}\xspace>250\GeV$ the \Ph signal is expected to be negligible, so it is dropped from the signal templates. A summary of the association of signals to the templates used for signal extraction is given in Table~\ref{tab:mssm-signal-association}. To interpolate the simulated mass points to the exact predicted values of \ensuremath{m_{\PH}}\xspace, a linear template morphing algorithm, as described in Ref.~\cite{Read:1999kh}, is used. \begin{table}[t] \centering \topcaption{ Contribution of MSSM signals to the \ensuremath{m_{\mathrm{T}}^{\text{tot}}}\xspace and NN output function template distributions used for signal extraction for the interpretation of the data in MSSM benchmark scenarios.
} \begin{tabular}{llcc} & & \multicolumn{2}{c}{Signal processes} \\ \multicolumn{2}{l}{Categories} & \ensuremath{\Pg\Pg\Ph}\xspace, \ensuremath{\PQb\PQb\Ph}\xspace, VBF, \ensuremath{\PV\Ph}\xspace & \ensuremath{\Pg\Pg\PH}\xspace/\ensuremath{\Pg\Pg\ensuremath{\mathrm{A}}\xspace}\xspace, \ensuremath{\PQb\PQb\PH}\xspace/\ensuremath{\PQb\PQb\ensuremath{\mathrm{A}}\xspace}\xspace \\ \hline No \PQb tag & $\ensuremath{m_{\PGt\PGt}}\xspace<250\GeV$ & $\checkmark$ & \checkmark \\ No \PQb tag & $\ensuremath{m_{\PGt\PGt}}\xspace>250\GeV$ & \NA & $\checkmark$ \\ \multicolumn{2}{l}{\PQb tag} & $\checkmark$ & $\checkmark$ \\ \multicolumn{2}{l}{Control regions} & $\checkmark$ & $\NA$ \\ \end{tabular} \label{tab:mssm-signal-association} \end{table} The \ensuremath{m_{\PA}}\xspace-\tanb plane is scanned and for each tested point in (\ensuremath{m_{\PA}}\xspace, \tanb), the \CLs ~\cite{Read:2002hq} value is calculated. Those points where \CLs falls below 5\% define the 95\% \CL exclusion contour for the benchmark scenario under consideration. The underlying test compares the MSSM hypothesis, with signal contributions for \Ph ($S_{\Ph}$), \PH ($S_{\PH}$), and \ensuremath{\mathrm{A}}\xspace ($S_{\ensuremath{\mathrm{A}}\xspace}$), with the SM hypothesis ($S_{\text{SM}}$), with only one signal contribution related to \ensuremath{\mathrm{\PH(125)}}\xspace. The test versus the SM hypothesis is justified by the properties of \ensuremath{\mathrm{\PH(125)}}\xspace being in agreement with the SM expectation within the experimental accuracy of current measurements. 
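The per-point exclusion criterion used in this scan can be sketched directly from the definition of the modified frequentist \CLs value (illustrative helper; the $p$-values themselves come from the profile likelihood ratio test described above):

```python
def cls_value(cl_sb, cl_b):
    """Modified frequentist CLs: the p-value of the signal-plus-background
    hypothesis divided by that of the background-only hypothesis,
    where cl_b = 1 - p_b (sketch of the criterion applied at each
    (m_A, tan beta) point of the scan)."""
    return cl_sb / cl_b

def is_excluded(cl_sb, cl_b, alpha=0.05):
    """A point is excluded at 95% CL when CLs falls below 0.05."""
    return cls_value(cl_sb, cl_b) < alpha

# hypothetical example: CL_{s+b} = 0.02 and CL_b = 0.5 give CLs = 0.04
excluded = is_excluded(0.02, 0.5)
```

Dividing by $\mathrm{CL_b}$ protects against excluding signal hypotheses to which the analysis has little sensitivity, which is the motivation for \CLs over a plain $\mathrm{CL_{s+b}}$ criterion.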
For the hypothesis test the likelihood of Eq.~(\ref{eq:likelihood}) is expressed in the form \begin{linenomath} \begin{equation} \mathcal{L}\left(\{k_{i}\},\mu\right) =\prod \limits_{i}\mathcal{P}\Bigl(k_{i}|\mu\Bigl((S_{\Ph}-S_{\text{SM}})+S_{\PH} +S_{\ensuremath{\mathrm{A}}\xspace}\Bigr)+S_{\text{SM}}+ \sum\limits_{b}B_{b}\Bigr), \label{eq:likelihood-mssm} \end{equation} \end{linenomath} where for brevity the dependence on the nuisance parameters $\{\theta_{j}\}$ has been omitted. Equation~(\ref{eq:likelihood-mssm}) represents a nested likelihood model from which the MSSM hypothesis (with $\mu=1$) evolves through continuous transformation from the SM hypothesis (with $\mu=0$). We note that the only physically meaningful hypotheses in Eq.~(\ref{eq:likelihood-mssm}) correspond to $\mu=0$ and 1. On the other hand, in the large number limit this construction allows the application of the asymptotic formulas given in Ref.~\cite{Cowan:2010js}, as analytic estimates of the sampling distributions for the MSSM and SM hypotheses, when using the profile likelihood ratio given in Eq.~(\ref{eq:profile-likelihood-ratio}) as the test statistic. We have verified the validity of the large number limit for masses of $\ensuremath{m_{\PA}}\xspace>1\TeV$ with the help of ensemble tests. Since we are using the same template distributions for $S_{\text{SM}}$ and $S_{\Ph}$, the transition from $\mu=0$ to 1 corresponds only to a change in the normalization of the signal contribution related to \ensuremath{\mathrm{\PH(125)}}\xspace. Figure~\ref{fig:exclusion-contours} shows the exclusion contours in the \ensuremath{m_{\PA}}\xspace-\tanb plane for two representative benchmark scenarios of the MSSM, \ensuremath{M_{\mathrm{h}}^{125}}\xspace~\cite{Bahl:2018zmf} and \ensuremath{M_{\mathrm{h},\,\text{EFT}}^{125}}\xspace~\cite{Bahl:2019ago}.
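As a toy numerical illustration of the nested likelihood of Eq.~(\ref{eq:likelihood-mssm}): the sketch below evaluates the per-bin expected yields for $\mu=0$ (SM) and $\mu=1$ (MSSM) and the resulting likelihood-ratio statistic. All yields are invented for illustration, and nuisance parameters are omitted, whereas the actual fit profiles them:

```python
import numpy as np

def expected_yields(mu, s_h, s_H, s_A, s_sm, b):
    """Expected yield per bin of the nested model:
    mu = 0 gives the SM hypothesis, mu = 1 the MSSM hypothesis."""
    return mu * ((s_h - s_sm) + s_H + s_A) + s_sm + b

def nll(k, lam):
    """Poisson negative log-likelihood, constant log(k!) term dropped."""
    return float(np.sum(lam - k * np.log(lam)))

# illustrative observed counts and per-bin expectations
k    = np.array([26.0, 19.0, 9.0])
b    = np.array([20.0, 15.0, 8.0])   # total background
s_sm = np.array([4.0, 2.0, 0.5])     # SM H(125) signal
s_h  = s_sm.copy()                   # same templates used for S_h and S_SM
s_H  = np.array([0.5, 1.5, 2.0])     # heavy scalar H
s_A  = np.array([0.4, 1.4, 1.8])     # pseudoscalar A

# likelihood-ratio statistic comparing the MSSM (mu=1) and SM (mu=0) hypotheses
q_mssm = 2.0 * (nll(k, expected_yields(1.0, s_h, s_H, s_A, s_sm, b))
                - nll(k, expected_yields(0.0, s_h, s_H, s_A, s_sm, b)))
```

Because the same templates are used for $S_{\Ph}$ and $S_{\text{SM}}$, the $\mu$-dependent term reduces to the additional heavy-boson yields $S_{\PH}+S_{\ensuremath{\mathrm{A}}\xspace}$, mirroring the statement in the text.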
The red hatched areas indicate the regions where the compatibility of \ensuremath{m_{\Ph}}\xspace with the observed \ensuremath{\mathrm{\PH(125)}}\xspace mass could not be achieved within the previously discussed ${\pm} 3\GeV$ boundary. For low values of \tanb, higher scales for the additional SUSY particle masses (referred to collectively as ``\ensuremath{m_{\text{SUSY}}}\xspace'') are required to accommodate a mass of $\ensuremath{m_{\Ph}}\xspace\approx125\GeV$. In the \ensuremath{M_{\mathrm{h}}^{125}}\xspace scenario, where \ensuremath{m_{\text{SUSY}}}\xspace is fixed, the prediction of \ensuremath{m_{\Ph}}\xspace falls below 122\GeV. In the \ensuremath{M_{\mathrm{h},\,\text{EFT}}^{125}}\xspace scenario, \ensuremath{m_{\text{SUSY}}}\xspace is adjusted to values that meet the required prediction for \ensuremath{m_{\Ph}}\xspace in each point in (\ensuremath{m_{\PA}}\xspace, \tanb) individually. The growing logarithmic corrections associated with the large values of \ensuremath{m_{\text{SUSY}}}\xspace are resummed using an effective field theory approach. The \ensuremath{M_{\mathrm{h},\,\text{EFT}}^{125}}\xspace scenario can thus be viewed as a continuation of the \ensuremath{M_{\mathrm{h}}^{125}}\xspace scenario for $\tanb\lesssim10$. In this case the red hatched area at very low values of \ensuremath{m_{\PA}}\xspace in Fig.~\ref{fig:exclusion-contours} (right) indicates the parameter space where the values required for \ensuremath{m_{\text{SUSY}}}\xspace exceed the GUT scale. For both scenarios the Higgs boson masses, mixing angle $\alpha$, and effective Yukawa couplings have been calculated with the code \textsc{FeynHiggs}~\cite{Heinemeyer:1998yj,Heinemeyer:1998np, Degrassi:2002fi,Frank:2006yh,Hahn:2013ria,Bahl:2016brp,Bahl:2017aev,Bahl:2018qog}. 
Branching fractions for the decay into \PGt leptons and other final states have been obtained from a combination of the codes \textsc{FeynHiggs} (and \textsc{HDECAY}~\cite{Djouadi:1997yw,Djouadi:2018xqq}) for the \ensuremath{M_{\mathrm{h},\,\text{EFT}}^{125}}\xspace (\ensuremath{M_{\mathrm{h}}^{125}}\xspace) scenario, as described in Ref.~\cite{Bagnaschi:2791954} following the prescriptions given in Refs.~\cite{LHCHiggsCrossSectionWorkingGroup:2013rie, deFlorian:2016spz,Denner:2011mq}. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{Figure_013-a.pdf} \includegraphics[width=0.48\textwidth]{Figure_013-b.pdf} \caption{ Expected and observed 95\% \CL exclusion contours in the MSSM (left) \ensuremath{M_{\mathrm{h}}^{125}}\xspace and (right) \ensuremath{M_{\mathrm{h},\,\text{EFT}}^{125}}\xspace scenarios. The expected median in the absence of a signal is shown as a dashed black line. The dark and bright grey bands indicate the central 68\% and 95\% intervals of the expected exclusion. The observed exclusion contour is indicated by the coloured blue area. For both scenarios, those parts of the parameter space where \ensuremath{m_{\Ph}}\xspace deviates by more than ${\pm}3\GeV$ from the mass of \ensuremath{\mathrm{\PH(125)}}\xspace are indicated by a red hatched area. For the \ensuremath{M_{\mathrm{h},\,\text{EFT}}^{125}}\xspace scenario, the dashed blue line indicates the threshold at $\ensuremath{m_{\PA}}\xspace=2\ensuremath{m_{\PQt}}\xspace$ at which the $\ensuremath{\mathrm{A}}\xspace\to \ttbar$ decay starts to influence the \ensuremath{\PA\to\PGt\PGt}\xspace branching fraction. The \ensuremath{\PH\to\PGt\PGt}\xspace branching fraction is influenced more gradually close to this threshold since \ensuremath{\mathrm{A}}\xspace and \PH are not completely degenerate in mass.
} \label{fig:exclusion-contours} \end{figure} Inclusive cross sections for the production via \ensuremath{\Pg\Pg\phi}\xspace have been calculated using the program \textsc{SusHi} 1.7.0~\cite{Harlander:2012pb,Harlander:2016hcx}, including NLO corrections in \alpS for the \PQt- and \PQb-quark contributions to the cross section~\cite{Spira:1995rr,Harlander:2005rq}, NNLO corrections in \alpS in the heavy \PQt quark limit for the \PQt quark contribution~\cite{Harlander:2002wh, Anastasiou:2002yz,Ravindran:2003um,Harlander:2002vv,Anastasiou:2002wq}, and next-to-NNLO contributions in \alpS for \Ph production~\cite{Anastasiou:2014lda, Anastasiou:2015yha,Anastasiou:2016cez}. Electroweak corrections mediated by light-flavour quarks are included at two-loop accuracy by reweighting the SM results of Refs.~\cite{Aglietti:2004nj,Bonciani:2010ms}. Contributions from squarks and gluinos are included at NLO precision in \alpS following Refs.~\cite{Degrassi:2010eu,Degrassi:2011vq,Degrassi:2012vt}. The \tanb-enhanced SUSY contributions to the Higgs-\PQb couplings have been resummed using the one-loop $\Delta_{\mathrm{b}}$ terms from Ref.~\cite{Hofer:2009xb} as provided by \textsc{FeynHiggs}. Uncertainties in these $\Delta_{\mathrm{b}}$ terms, which amount to ${\approx}10\%$, are not included in the overall uncertainties in the predictions as they are subdominant with respect to the other theoretical uncertainties. For \ensuremath{\PQb\PQb\PH}\xspace, cross sections have been calculated for the SM Higgs boson as a function of its mass, based on soft-collinear effective theory~\cite{Bonvini:2015pxa, Bonvini:2016fgf} which combines the merits of both the 4FS~\cite{Dittmaier:2003ej, Dawson:2003kb} and 5FS~\cite{Harlander:2003ai,Duhr:2019kwi} calculations. These cross sections coincide with the results of the so-called ``fixed order plus next-to-leading log'' approach of Refs.~\cite{Forte:2015hba,Forte:2016sja}.
The pure \PQt- and loop-induced $\PQt\PQb$-interference contributions are separately reweighted with effective Higgs couplings, using an effective mixing angle $\alpha$, and including the resummation of \tanb-enhanced SUSY contributions as in the \ensuremath{\Pg\Pg\phi}\xspace case. The same SM cross sections are also used to obtain the reweighted cross section for \ensuremath{\PQb\PQb\ensuremath{\mathrm{A}}\xspace}\xspace production. A more detailed discussion is given in Ref.~\cite{Bagnaschi:2791954}. All Higgs boson masses, effective mixing angles $\alpha$, Yukawa couplings, branching fractions, cross sections, and their uncertainties, which are included for the exclusion contours, are obtained from Ref.~\cite{MSSM_benchmark}. In the figure, the exclusion sensitivity, estimated from the expected median in the absence of a signal, is indicated by the dashed black line. We note that the central 68\% and 95\% intervals, also given for the exclusion sensitivity, should not be misinterpreted as an uncertainty in the analysis, but rather reflect the variation of the expected signal yield in the probed parameter space of the chosen benchmark scenarios. For the \ensuremath{M_{\mathrm{h},\,\text{EFT}}^{125}}\xspace scenario the sensitivity drops sharply at $\ensuremath{m_{\PA}}\xspace=2\,\ensuremath{m_{\PQt}}\xspace$, caused by a drop of the branching fractions for the decay of \ensuremath{\mathrm{A}}\xspace and \PH into $\PGt$ leptons once the \ensuremath{\mathrm{A}}\xspace and \PH decays into two on-shell \PQt quarks become kinematically accessible. The distinct boundary is related to the fact that in \textsc{FeynHiggs}, which is used for the calculation of all branching fractions for this benchmark scenario, only the decay into on-shell \ttbar pairs is implemented. The parameter space of each benchmark scenario that is excluded at 95\% \CL by the data is indicated by the coloured blue area.
Both scenarios are excluded at 95\% \CL for $\ensuremath{m_{\PA}}\xspace\lesssim350\GeV$. The local excess observed at 1.2\TeV causes the deviation of the observed exclusion from the expectation. For $\ensuremath{m_{\PA}}\xspace\lesssim250\GeV$, most of the \ensuremath{\Pg\Pg\PH}\xspace/\ensuremath{\Pg\Pg\ensuremath{\mathrm{A}}\xspace}\xspace events do not enter the ``no \PQb tag'' categories due to the $\ensuremath{m_{\PGt\PGt}}\xspace>250\GeV$ requirement, although these events still contribute to the signal yields in the NN categories. In this parameter space the sensitivity to the MSSM is driven by the measurements of the \ensuremath{\mathrm{\PH(125)}}\xspace production rates, while the sensitivity to the \PH and \ensuremath{\mathrm{A}}\xspace enters mainly via the \ensuremath{\PQb\PQb\phi}\xspace signal in the ``\PQb tag'' categories, especially for increasing values of \tanb. \section{Summary} \label{sec:summary} Three searches have been presented for signatures of physics beyond the standard model (SM) in \ensuremath{\PGt\PGt}\xspace final states in proton-proton collisions at the LHC, using a data sample collected with the CMS detector at $\sqrt{s} = 13\TeV$, corresponding to an integrated luminosity of 138\fbinv. Upper limits at 95\% confidence level (CL) have been set on the products of the branching fraction for the decay into \PGt leptons and the cross sections for the production of a resonance \ensuremath{\phi}\xspace in addition to the observed Higgs boson via gluon fusion (\ensuremath{\Pg\Pg\phi}\xspace) or in association with \PQb quarks, ranging in each case from \ensuremath{\mathcal{O}}(10\unit{pb}) for a mass of $60\GeV$ to 0.3\unit{fb} for a mass of $3.5\TeV$. The data reveal two excesses for \ensuremath{\Pg\Pg\phi}\xspace production with local $p$-values equivalent to about three standard deviations at $\ensuremath{m_{\phi}}\xspace=0.1$ and 1.2\TeV.
Within the resolution of the reconstructed invariant mass of the \ensuremath{\PGt\PGt}\xspace system, the excess at 100\GeV coincides with a similar excess observed in a previous search for low-mass resonances by the CMS Collaboration in the $\PGg\PGg$ final state at a mass of ${\approx}95\GeV$. In a search for $t$-channel exchange of a vector leptoquark \ensuremath{\mathrm{U}_1}\xspace, 95\% CL upper limits are set on the \ensuremath{\mathrm{U}_1}\xspace coupling to quarks and \PGt leptons ranging from 1 for a mass of 1\TeV to 6 for a mass of 5\TeV, depending on the scenario. The search is sensitive to and excludes a portion of the parameter space that can explain the \PQb physics anomalies. In the interpretation of the \ensuremath{M_{\mathrm{h}}^{125}}\xspace and \ensuremath{M_{\mathrm{h},\,\text{EFT}}^{125}}\xspace minimal supersymmetric SM benchmark scenarios, additional Higgs bosons with masses below 350\GeV are excluded at 95\% CL.
\section{Introduction} \label{sec:intro} Post-starburst galaxies (PSBs), also referred to as E+A or k+a galaxies based on spectral properties \citep{1999ApJS..122...51D, 1999ApJ...527...54B}, are characterised by a recent starburst and subsequent fast quenching \citep{1983ApJ...270....7D, 1999AJ....117..140C}. As such, they offer a unique opportunity to clarify some of the debated details of both the morphological (late- to early-type) and colour (blue cloud to red sequence) transition, which are fundamental to understanding galaxy evolution \citep{2016MNRAS.456.3032P, 2017MNRAS.472.1401A}. Furthermore, PSBs are found in all environments and at all redshifts, suggesting physical processes of universal importance \citep{1995A&A...297...61B, 2003ApJ...599..865T, 2012ApJ...745..179W, 2013ApJ...770...62D, 2019NatAs...3..440P}. Their formation mechanism remains a matter of debate: Typically, PSBs are assumed to have undergone a recent quenching event rather than gradual depletion, and thus they belong to a transition population between blue disc-like and red early-type galaxies \citep{2008AJ....135.1636D, 2017MNRAS.472.1447W, 2018MNRAS.477.1921A}. Different pathways have been proposed to explain the observed strong increase of PSBs with redshift \citep{2018MNRAS.480..381M, 2019ApJ...874...17B} and the varying environmental abundance \citep{1999ApJ...518..576P, 2017MNRAS.472..419L, 2019MNRAS.482..881P} of PSBs. \cite{2016MNRAS.463..832W} propose two PSB pathways: First, at $z \gtrsim 2$ PSBs are exclusively massive galaxies which formed the majority of their stars within a rapid assembly period, followed by a complete shutdown in star formation. Second, at $z \lesssim 1$ PSBs are the result of rapid quenching of gas-rich star-forming galaxies, independent of stellar mass. Possible candidates for this rapid quenching at $z \lesssim 1$ include the environment and/or gas-rich major mergers \citep{2016MNRAS.463..832W}. 
More recent work by \cite{2018MNRAS.480..381M} suggests that the $z > 1$ PSB population is the result of a violent event, leading to a compact object, whereas the $z < 1$ population is able to preserve its typically disc-dominated structure, suggesting an environmental mechanism. At redshift $z \sim 0.8$, \cite{2020MNRAS.497..389D} find evidence for a fast pathway associated with a centrally concentrated starburst. Galaxies at redshifts $z < 0.05$ appear to show evidence for three different pathways through the post-starburst phase, mostly occurring in intermediate density environments \citep{2018MNRAS.477.1708P}: First, a strong disruptive event (e.g. major merger) triggering a starburst and subsequently quenching the galaxy. Second, random star formation in the mass range $9.5 < \log_{10}(M_*/\mathrm{M_{\odot}}) < 10.5$ causing weak starbursts and, third, weak starbursts in quiescent galaxies, resulting in a gradual climb towards the high mass end of the red sequence \citep{2018MNRAS.477.1708P}. In general, it is clear that different PSB evolutionary pathways exist; however, to date neither the number nor the relevant characteristics of these pathways are well determined. Despite ongoing debate, simulations \citep{2018MNRAS.479..758W, 2019MNRAS.484.2447D} and observations \citep{1996AJ....111..109S, 2005MNRAS.359..949B} of PSBs in the local low density Universe generally show signs of, or are consistent with, recent galaxy-galaxy interactions and galaxy mergers. This is not surprising, as galaxy mergers can impact the star formation rate (SFR) of galaxies in opposing ways: Mergers have been found to increase \citep{2019MNRAS.490.2139R, 2020MNRAS.494.5396B}, not impact \citep{2019A&A...631A..51P}, and decrease \citep{2020ApJ...888...77W} the SFR on varying timescales, depending on the details of the specific merger.
Mechanisms that directly impact the SFR and have been associated with mergers include: introducing turbulence \citep{2018MNRAS.478.3447E}, increasing disc instabilities \citep{2019MNRAS.489.4196L}, triggering nuclear inflows via tidal interactions \citep{2005MNRAS.361..776S, 2008MNRAS.386.1355G}, and gravitational heating, i.e. the process whereby gas is heated and kept hot via the release of potential energy from infalling substructure \citep{2009ApJ...697L..38J}. Mergers may also influence the SFR indirectly: Potential mechanisms include facilitating the central galactic black hole (BH) growth \citep{2014MNRAS.437.1456B}, thus potentially leading to strong AGN feedback \citep{2013MNRAS.430.1901H} which may lead to galactic gas removal \citep{2014MNRAS.437.1456B}, ultimately causing long term star formation suppression \citep{2014ApJ...792...84Y}. Several works highlight the relevance of active galactic nucleus (AGN) feedback in explaining the sharp decline in the SFR found in (PSB) galaxies \citep{2010ApJ...709..884Y, 2018MNRAS.480.3993B, 2019A&A...623A..64C, 2020AAS...23520719L}. Nonetheless, the details of the mechanism(s) driving the nuclear activity in the centres of galaxies remain a major unsettled question \citep{2018MNRAS.481..341S}, especially in PSBs. Some studies find no dominant AGN role in shutting down star formation on short timescales: zCOSMOS survey observations conclude that several mechanisms, both related and unrelated to the environment, are more relevant to the quenching of star formation on short timescales ($<1\,$Gyr) \citep{2010A&A...509A..42V}. Due to the time delay between the starburst phase and AGN activity, \cite{2014ApJ...792...84Y} go a step further and suggest that the AGN does not play a primary role in the initial quenching of starbursts, but rather is responsible for maintaining the post-starburst phase.
On the other hand, \cite{2005MNRAS.361..776S} find a complex interplay between starbursts and AGN activity when tidal interactions trigger intense nuclear gas inflows. SPH simulations find that the BH accretion rate is especially relevant to the inner SFR, but the correlation is less pronounced when using the galactic SFR \citep{2010MNRAS.407.1529H}. \cite{2013MNRAS.430.1901H} suggest that strong AGN feedback is required to explain the observed star formation shutdown in post-merger galaxies. Generally, galactic quenching mechanisms, i.e. processes whereby star formation is reduced, can be divided into two categories: \textit{mass quenching}, i.e. stellar mass dependent mechanisms which are mostly independent of the environment (e.g. SNe and AGN feedback), and \textit{environmental quenching}, i.e. mechanisms governed by the surroundings (e.g. ram-pressure stripping and galaxy-galaxy interactions) rather than the galaxy itself \citep{2006PASP..118..517B, 2010ApJ...721..193P, 2017MNRAS.469.3670S}. The specific environment has a strong influence on the abundances of different types of galaxies, as evidenced by the morphology-density relation \citep{1980ApJ...236..351D, 2003MNRAS.346..601G}: Cluster galaxies are more likely to be characterised by reduced star formation than galaxies in less dense environments \citep{1974ApJ...194....1O, 1978ApJ...219...18B, 1980ApJ...236..351D, 1997ApJ...488L..75B}. As a result, the evolution of PSBs in cluster environments differs significantly from the evolution in lower density environments: A study of PSBs in galaxy clusters at $0.04 < z < 0.07$ indicates that the short-timescale star formation quenching channel (e.g. ram-pressure stripping within and galaxy-galaxy interactions outside of the virial radius) contributes two times more than the long timescale one (e.g. strangulation) to the growth of the quiescent cluster population \citep{2017ApJ...838..148P}.
PSBs and star-forming cluster galaxies share many properties and are characterised by similar distributions of environment \citep{2006ApJ...650..763H}. Long-slit spectra observations of Coma cluster galaxies suggest close kinematic similarities between star-forming and PSB galaxies, i.e. both appear to be rotating systems and have exponential light profiles \citep{1996AJ....111...78C}. In a follow-up study of five low redshift galaxy clusters, \cite{1997AJ....113..492C} also find that most of the recent PSBs are in fact disc galaxies. Similarly, kinematic classifications at $z<0.04$ show that PSBs are typically fast rotators \citep{2013MNRAS.432.3131P}. Meanwhile, cluster PSBs at $z \sim 0.55$ \citep{2010PASA...27..360P} and $z \sim 1$ \citep{2020MNRAS.493.6011M} appear to have more similarities with early-type morphologies. Depending on how recently and by what mechanism(s) PSBs have undergone their starburst and subsequent shutdown, the morphological classification likely varies from late- to early-type. Observations of cluster galaxies suggest that interactions with the intra-cluster medium (ICM) rather than mergers (as is the case in the field) are highly relevant for the evolution of cluster PSBs \citep{2009ApJ...693..112P, 2017ApJ...838..148P}. Coma cluster observations conclude that the starbursts found in Coma PSBs were not the result of major mergers \citep{1996AJ....111...78C}. Similarly, observations of a rich $z \sim 0.55$ cluster argue against a merger origin, favouring a PSB shutdown scenario involving the ICM \citep{2010PASA...27..360P}. This is further supported by the correlation between quenching efficiency and cluster velocity dispersion, which implies that the star formation shutdown is related to the ICM, specifically that more massive clusters quench more efficiently \citep{2009ApJ...693..112P}. 
SAMI and GASP observations find additional evidence for ram-pressure stripping being the dominant quenching mechanism in galaxy clusters: Most cluster PSBs have been quenched outside-in, i.e. the outskirts reach undetectable levels of star formation prior to the inner regions \citep{2017ApJ...846...27G, 2019ApJ...873...52O, 2020ApJ...892..146V}. In summary, the outside-in quenching, the lack of signs of interaction, the fast rotation, and the dense environment, all favour a scenario in which ram-pressure stripping shuts down star formation in cluster PSBs \citep{2020ApJ...892..146V}. Ram-pressure stripping also appears to influence galaxy evolution beyond direct quenching: Studies of cluster galaxies undergoing ram-pressure stripping find that ram-pressure stripping may enhance star formation prior to gas removal \citep{2018ApJ...866L..25V, 2020ApJ...899...98V, 2020MNRAS.495..554R}. Furthermore, there is evidence that an AGN is hosted by six out of seven GASP jellyfish galaxies \citep{2021IAUS..359..108P}; jellyfish galaxies are potential cluster PSB progenitors \citep{2016AJ....151...78P} associated with long tentacles of (star-forming) material extending far beyond the galactic disc \citep{2017Natur.548..304P}. This surprisingly high incidence, compared to the general cluster and field population, suggests that ram-pressure stripping may trigger AGN activity via nuclear gas inflow \citep{2017Natur.548..304P, 2021IAUS..359..108P}. Similarly, the Romulus C simulation finds that ram-pressure stripping triggered enhanced black hole accretion prior to quenching \citep{2020ApJ...895L...8R}, implying that the AGN feedback may aid in the quenching of star formation during ram-pressure stripping. Meanwhile, \cite{2019MNRAS.486..486R} find that photo-ionisation by the AGN in GASP jellyfish galaxies is the dominant ionisation mechanism.
A detailed analysis of an individual GASP jellyfish galaxy supports the scenario in which the suppression of star formation in the central region of the disc is most likely due to the feedback from the AGN \citep{2019MNRAS.487.3102G}. We aim to clarify the importance of different mechanisms to the onset of the starburst phase, as well as the reasons for the subsequent rapid shutdown in star formation observed in PSBs. We discuss the Magneticum Pathfinder simulations and our PSB selection process in Section \ref{sec:data}. In Section \ref{sec:environment}, we study the environment, relevant distributions and the overall evolution of PSBs. The role of mergers is investigated in Section \ref{sec:mergers}. In Section \ref{sec:shutdown} we analyse the importance of the AGN and SNe feedback. Finally, we study PSB evolution within galaxy clusters in Section \ref{sec:clusters}. We discuss our results in Section \ref{sec:discussion} and present our conclusions in Section \ref{sec:conc}. \section{Data sample} \label{sec:data} \subsection{Magneticum Pathfinder simulations} \label{sub:Mag} \textit{Magneticum Pathfinder}\footnote{\url{www.magneticum.org}} is a set of large scale smoothed-particle hydrodynamic (SPH) simulations that employ a mesh-free Lagrangian method aimed at following structure formation on cosmological scales, with open access to many features \citep{2017A&C....20...52R}. The simulations are executed with the Tree/SPH code GADGET-3, a development based on GADGET-2 \citep{2001NewA....6...79S, 2005MNRAS.364.1105S}. In this work we primarily use Box2 ($352 \, \mathrm{(Mpc/h)^3}$), and to a lesser extent Box2b ($640 \, \mathrm{(Mpc/h)^3}$) and Box4 ($48 \, \mathrm{(Mpc/h)^3}$). Box2 and Box4 have a higher temporal resolution compared to Box2b, i.e. a larger number of individual \texttt{SUBFIND} halo finder outputs \citep{2001NewA....6...79S, 2009MNRAS.399..497D}. This facilitates and enhances the temporal tracking of galaxies. 
Box2b, on the other hand, is larger and provides a greater statistical sample; it is used solely to increase the sample size in Section \ref{sub:vlosObsComp}. Our standard resolution (for Box2, Box2b, and one of the Box4 runs) is set to `high resolution': dark matter (dm) and gas particles have masses of $m_{\mathrm{dm}} = 6.9 \cdot 10^8 \, h^{-1}\mathrm{M_{\odot}}$ and $m_{\mathrm{gas}} = 1.4 \cdot 10^8 \, h^{-1}\mathrm{M_{\odot}}$, respectively. Stellar particles are formed from gas particles and have $\sim 1/4$ of the mass of their parent gas particle. At this resolution level the softening of the dark matter, gas and stars is $\epsilon_{\mathrm{dm}} = 3.75 \, \mathrm{h^{-1} kpc}$, $\epsilon_{\mathrm{gas}} = 3.75 \, \mathrm{h^{-1} kpc}$ and $\epsilon_{\mathrm{stars}} = 2 \, \mathrm{h^{-1} kpc}$, respectively. Box2 comprises $2 \cdot 1584^3$ particles, while Box2b comprises $2 \cdot 2880^3$ particles. The `ultra-high resolution' Box4 run has a $\sim 20$ times higher mass resolution (compared to our standard `high resolution') \citep{2018MNRAS.480.4636S} and is only used to test the numerical convergence of our results. Throughout this paper the following cosmology is adopted \citep{2011ApJS..192...18K}: $h = 0.704$, $\Omega_M = 0.272$, $\Omega_{\Lambda} = 0.728$ and $\Omega_b = 0.0451$. The astrophysical processes modelled within the Magneticum simulations include, but are not limited to: gas cooling and star formation \citep{2003MNRAS.339..289S}, metal and chemical enrichment \citep{2003MNRAS.342.1025T, 2007MNRAS.382.1050T, 2017Galax...5...35D}, black holes and AGN feedback \citep{2005MNRAS.361..776S, 2014MNRAS.442.2304H, 2015MNRAS.448.1504S}, thermal conduction \citep{2004ApJ...606L..97D}, a low viscosity scheme to track turbulence \citep{2005MNRAS.364..753D,2016MNRAS.455.2110B}, higher order SPH kernels \citep{2012MNRAS.425.1068D} and magnetic fields (passive) \citep{2009MNRAS.398.1678D}.
For more details on the precise physical processes, refer to \cite{2015ApJ...812...29T, 2014MNRAS.442.2304H, 2017Galax...5...35D}. The Magneticum simulations have been used in the past to compare and interpret observations, in addition to independently studying various properties. Galaxy kinematics are in good agreement with observations and may be used to predict the formation pathway \citep{2018MNRAS.480.4636S, 2020MNRAS.493.3778S}. The specific angular momentum of disc stars and its relation to the specific angular momentum of the cold gas matches observations, and may be used to (morphologically) classify galaxies \citep{2015ApJ...812...29T}. When comparing with the integral field spectroscopic data from SAMI, Magneticum matches observations well: In particular, it is the only simulation able to reproduce ellipticities typical for disc galaxies \citep{2019MNRAS.484..869V}. The mass ratios and orbital parameters of galaxy mergers strongly impact the resulting radial mass distribution: Mini mergers can significantly increase the host disc size, while not changing the global shape \citep{2019MNRAS.487..318K}. AGN properties in Magneticum, such as the evolution of the bolometric AGN luminosity function, agree with observations \citep{2014MNRAS.442.2304H}. In fact, merger events, especially minor mergers, do not necessarily drive strong nuclear activity \citep{2016MNRAS.458.1013S}. Moreover, merger events are not the statistically dominant driver of nuclear activity \citep{2018MNRAS.481..341S}. Satellite galaxies in galaxy clusters are predominantly quenched by ram-pressure stripping \citep{2019MNRAS.488.5370L}. In general, Magneticum galaxy cluster properties, such as the pressure \citep{2013A&A...550A.131P}, temperature, and entropy profiles \citep{2014ApJ...794...67M}, as well as the distribution of metals \citep{2017Galax...5...35D}, agree with observations.
\subsection{Post-starburst selection} \label{sub:selection} Galaxies are selected to have a minimum stellar mass of $M_* \geq 3.5 \cdot 10^{10} \, h^{-1} \mathrm{M_{\odot}}$, corresponding to a minimum of $\sim 1000$ stellar particles for a given galaxy. The only exception to this stellar mass threshold is found in Section \ref{sec:clusters}, where the threshold is reduced to $M_* \geq 3.5 \cdot 10^{9} \, h^{-1} \mathrm{M_{\odot}}$ to increase the available sample size in cluster environments. The additional use of Box2b and the lowering of the stellar mass threshold are done to increase the abundance of PSBs within galaxy cluster environments. In order to differentiate between star-forming and quiescent galaxies, the criterion introduced by \cite{2008ApJ...688..770F} is used throughout this paper at all redshifts. To this end, we use the specific star formation rate (SSFR), i.e. the star formation rate (SFR) divided by the galactic stellar mass $\mathrm{SSFR} = \mathrm{SFR}/M_*$, and the redshift evolving Hubble time $t_{\mathrm{H}} = 1/H(t)$, where $H(t)$ is the Hubble parameter calculated at a given redshift. Galaxies with $\mathrm{SSFR} \cdot t_{\mathrm{H}} > 0.3$ are classified as star-forming, while galaxies with $\mathrm{SSFR} \cdot t_{\mathrm{H}} < 0.3$ are classified as quiescent. Importantly, this so-called `blueness criterion' ($\mathrm{SSFR} \cdot t_{\mathrm{H}} > 0.3$) is time dependent rather than merely being applicable to low redshifts. Hence, this definition encompasses the changing star formation history on a cosmological scale and is well suited to study and compare galaxies at different redshifts. With this criterion, the Milky Way, for example, would have $\mathrm{SSFR} \cdot t_{\mathrm{H}} \sim 0.4$ at $z = 0$ and, hence, be considered star-forming \citep{2015ApJ...806...96L}.
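The blueness criterion can be evaluated numerically with the adopted cosmology. The following is a minimal sketch assuming flat $\Lambda$CDM (radiation neglected); the function names are ours, not part of the simulation code:

```python
import math

# adopted cosmology (h = 0.704)
H0 = 70.4                  # km/s/Mpc
OMEGA_M, OMEGA_L = 0.272, 0.728
KM_PER_MPC = 3.0857e19     # km in a megaparsec
S_PER_GYR = 3.1557e16      # seconds in a gigayear

def hubble_time_gyr(z):
    """Hubble time t_H = 1/H(z) in Gyr for a flat LCDM cosmology."""
    hz = H0 * math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)  # km/s/Mpc
    return (KM_PER_MPC / hz) / S_PER_GYR

def is_star_forming(ssfr_per_yr, z):
    """Blueness criterion: SSFR * t_H > 0.3 classifies as star-forming."""
    return ssfr_per_yr * hubble_time_gyr(z) * 1e9 > 0.3

# Milky Way-like example at z = 0: SSFR * t_H ~ 0.4, i.e. star-forming
mw_ssfr = 0.4 / (hubble_time_gyr(0.0) * 1e9)
```

At $z=0$ the sketch gives $t_{\mathrm{H}} \approx 13.9\,$Gyr, so the threshold corresponds to an SSFR of roughly $2 \cdot 10^{-11}\,\mathrm{yr}^{-1}$, and the threshold SSFR grows towards higher redshift as $t_{\mathrm{H}}$ shrinks.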
We identify post-starburst galaxies (PSBs) in the Magneticum simulations based on the stellar particle age and the blueness: Of all stellar particles of a galaxy, at least $2\%$ need to be younger than $0.5 \, \mathrm{Gyr}$. In addition, the galaxy's blueness at identification needs to satisfy $\mathrm{SSFR} \cdot t_\mathrm{H} < 0.3$. These two parameters describe galaxies that have a sufficiently large young stellar population, while also no longer being star-forming, i.e. galaxies that have experienced a recent starburst. In particular, we choose this criterion as it implies a minimum average SSFR within the past $0.5\,$Gyr of $\mathrm{SSFR} \geq 4 \cdot 10^{-11} \, \mathrm{yr}^{-1}$, similar to the criterion used by \cite{2019MNRAS.484.2447D}. To verify that our results are robust, we initially varied both the young stellar mass percentage (1, 2, 5 or 10 per cent) and the associated evaluation timescale (0.5, 1 or 2 Gyr). Although the resulting sample size varied, the conclusions and the agreement with observations remained robust. When considering all Box2 galaxies fulfilling these criteria we obtain a sample of $647$ PSBs at $z \sim 0$. This global sample provides the basis of the majority of our analysis and is complemented by additional specific environmental and redshift selections where necessary. To understand how PSBs differ from other galaxies, we introduce two stellar mass matched control (SMMC) samples: quenched (QSMMC) and star-forming (SFSMMC) galaxies, using the above blueness criterion for differentiation. The control samples are constructed by selecting the closest quenched and star-forming stellar mass match for each PSB galaxy at identification redshift. In terms of the star formation at identification redshift the QSMMC sample is indistinguishable from PSBs. In order to disentangle the details causing the starburst and the following shutdown in star formation, we consider the temporal evolution of PSBs.
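The selection just described, together with the implied minimum past-average SSFR, can be summarised in a short sketch (thresholds as quoted above; the helper function is illustrative, not the analysis code):

```python
def is_psb(young_mass_frac, ssfr_per_yr, t_hubble_yr):
    """PSB selection: at least 2% of the stellar mass in particles
    younger than 0.5 Gyr, while currently quiescent (blueness < 0.3)."""
    recent_burst = young_mass_frac >= 0.02
    quiescent = ssfr_per_yr * t_hubble_yr < 0.3
    return recent_burst and quiescent

# implied minimum SSFR averaged over the past 0.5 Gyr:
# at least 2% of M* formed within 0.5e9 yr -> 0.02 / 0.5e9 = 4e-11 per yr
min_avg_ssfr = 0.02 / 0.5e9

t_h = 13.9e9  # Hubble time at z ~ 0 in yr (illustrative value)
```

The combination is what makes the sample post-starburst: a recently quiescent galaxy whose young stellar mass fraction nevertheless certifies a burst within the last $0.5\,$Gyr.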
To this end, we employ two complementary methods to track and trace both PSBs and control galaxies in Box2 of the Magneticum simulations. First, we identify the main galactic black hole particle associated with a galaxy and track this particle and subsequently its host backwards in time. This method provides a temporal resolution of $0.43\,$Gyr, as only every fourth time step has stored particle data. Second, we analyse the merger trees of the galaxies in question, yielding a complete merger history with a temporal resolution of $0.11\,$Gyr. \section{Environment, distribution, and evolution of post-starburst galaxies} \label{sec:environment} \subsection{Quenched, PSB-to-quenched, and PSB fractions} \label{sub:qfrac} Understanding the abundance of specific galaxy types at different halo masses, i.e. in different environments, is crucial for determining the relevant formation and evolutionary mechanisms of PSBs. Specifically, the environment is key to understanding potential triggers of the starburst phase and, subsequently, the causes of the star formation shutdown. \cite{2019MNRAS.488.5370L} already demonstrated good agreement between Box2 and observations of quenched fractions at intermediate stellar masses $\log_{10}(M_*/\mathrm{M_{\odot}}) = [9.7,10.5]$ \citep{2012MNRAS.424..232W}. We now extend our investigation to higher stellar mass galaxies $\log_{10}(M_*/\mathrm{M_{\odot}}) = [10.70,12.00]$, as well as presenting predictions of the PSB-to-quenched fraction. \begin{figure*} \includegraphics[width=1.5\columnwidth]{201212plot_PSB_Q_fractions_a_Nsnaps6_Sep20} \includegraphics[width=1.5\columnwidth]{201212plot_PSB_Q_fractions_b_Nsnaps6_Sep20} \caption{Fraction of quenched galaxies (top) and fraction of PSB-to-quenched galaxies (bottom) as a function of $M_{\mathrm{200,crit}}$ halo mass, at different redshifts $0.07<z<1.71$ (increasing from left to right) for all Box2 galaxies. 
Each panel is subdivided into four unique stellar mass bins (colour coded) and one bin showing the behaviour across the entire evaluated stellar mass range $\log_{10}(M_*/\mathrm{M_{\odot}}) = [10.70,12.00]$ (black). The quenched and PSB-to-quenched fractions are only shown if the denominator in each case is larger than $100$ galaxies. Quenched galaxies are defined as $\mathrm{SSFR} \cdot t_\mathrm{H} < 0.3$ (see Section \ref{sub:selection}). Box2 error bars are calculated via bootstrapping. If no observational error bars are shown, then the error is of order the symbol size. Note the difference in y-axis range for the PSB-to-quenched fractions between rows. At $z=0.07$, we compare the quenched fraction with low redshift observations in the stellar mass range $\log_{10}(M_*/\mathrm{M_{\odot}}) = [[10.8,11.0],[11.0,11.4]]$ \protect\citep{2018ApJ...852...31W}. At intermediate redshifts, we compare the quenched fractions to central galaxies in COSMOS groups \protect\citep{2011ApJ...742..125G, 2013ApJ...778...93T}. At $z=1.32$, we compare our results to $1.38<z<1.45$ cluster galaxies above the common mass completeness limit $\log_{10}(M_*/\mathrm{M_{\odot}}) > 10.85$ within $r < 0.45r_{500}$ and $r < 0.7r_{500}$ of SPT-SZ galaxy clusters \protect\citep{2019A&A...622A.117S}. } \label{fig:Q_PSBfrac_grid} \end{figure*} Figure \ref{fig:Q_PSBfrac_grid} (top panels) shows a number of trends and behaviours relating to the quenched fraction: First, at redshifts $z \lesssim 1$ the vast majority ($\geq 80\%$) of galaxies in the stellar mass range $\log_{10}(M_*/\mathrm{M_{\odot}}) = [10.70,12.00]$ (black) are quenched, independent of host halo mass.
The only exception to this is found in the highest stellar mass bin ($\log_{10}(M_*/\mathrm{M_{\odot}}) = [11.67,12.00]$), which shows lower quenched fractions with increasing halo mass, because above halo masses of $M_{\mathrm{200,crit}} \geq 10^{14}\,\mathrm{M_{\odot}}$ these high mass galaxies are dominated by brightest cluster galaxies (BCGs), which experience episodes of star formation as a result of gas accretion. Second, Figure \ref{fig:Q_PSBfrac_grid} shows varying agreement with observations: We find broad agreement between our $z=0.07$ quenched fractions and $0.01<z<0.12$ observations by \cite{2018ApJ...852...31W}, which are based on NYU-VAGC \citep{2005AJ....129.2562B} and SDSS DR7 \citep{2009ApJS..182..543A}. Although our Box2 galaxies are characterised by higher quenched fractions and a less distinct split between stellar masses at $z \sim 0$ compared to observations, observations are similarly characterised by high quenched fractions at low redshift, especially towards higher halo mass. When comparing our results at $0.25<z<1.04$ to observations of central galaxies in COSMOS groups at median redshift bins $z=[0.36, 0.66, 0.88]$ \citep{2011ApJ...742..125G, 2013ApJ...778...93T}, we find the strongest agreement towards higher redshifts, while the agreement is poorer at lower redshifts. At high redshift, we compare our $z=1.32$ results to SPT-SZ cluster galaxies at redshifts $z=[1.38, 1.401, 1.478]$ \citep{2019A&A...622A.117S}. The cluster galaxies have stellar masses above the common mass completeness limit $\log_{10}(M_*/\mathrm{M_{\odot}}) > 10.85$, and the quenched fractions are calculated for cluster radii $r < 0.45 \, r_{500}$ and $r < 0.7 \, r_{500}$ \citep{2019A&A...622A.117S}. To better compare with our results, we convert the halo mass from $M_{\mathrm{500}}$ \citep{2019A&A...622A.117S} to $M_{\mathrm{200}}$, assuming an NFW profile with constant concentration ($c=5$) \citep{2003MNRAS.342..163P}. The resulting comparison agrees well with our $z=1.32$ results.
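The $M_{500} \rightarrow M_{200}$ rescaling used for this comparison can be sketched as follows. This is a minimal pure-Python illustration assuming an NFW profile with $c_{200} = 5$; the bisection bracket and tolerance are arbitrary choices, and the exact numerical procedure used for the figure may differ:

```python
import math

def mu(y):
    """Dimensionless NFW enclosed-mass profile: M(<r) is proportional to mu(r/r_s)."""
    return math.log(1.0 + y) - y / (1.0 + y)

def m200_from_m500(m500, c200=5.0):
    """Convert M500c to M200c for an NFW halo of concentration c200.

    Solves for x = R500/R200 such that the mean density inside R500
    equals 500 rho_crit, i.e. mu(c*x)/mu(c) = (500/200) * x^3, then
    rescales the mass via M500/M200 = mu(c*x)/mu(c).
    """
    def f(x):
        return mu(c200 * x) / mu(c200) - 2.5 * x ** 3

    lo, hi = 0.1, 1.0  # bracket: f(lo) > 0 and f(hi) < 0
    for _ in range(60):  # simple bisection
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    x500 = 0.5 * (lo + hi)
    return m500 * mu(c200) / mu(c200 * x500)
```

For $c_{200}=5$ this yields $M_{200} \approx 1.4\,M_{500}$, i.e. the conversion shifts the observed halo masses upward by roughly $0.14$ dex.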
Furthermore, the trend whereby the quenched fraction at constant stellar mass increases towards lower redshift agrees with established models \citep{2008ApJS..175..390H} and simulations \citep{2019MNRAS.488.3143B}. Third, towards higher redshifts ($z > 1$) our quenched fraction begins to drop and the differences between the stellar mass bins become larger than the bootstrapped errors associated with the individual bins. At $z=1.71$, we find the highest quenched fraction in the lowest stellar mass bin $\log_{10}(M_*/M_{\odot}) = [10.70,11.02]$. This is likely due to higher stellar mass galaxies at this redshift having undergone more recent mass growth, which is typically associated with star formation, thus leading to lower quenched fractions in high stellar mass compared to low stellar mass galaxies. In brief, environmental quenching is more effective than mass quenching at high redshift. Figure \ref{fig:Q_PSBfrac_grid} (bottom panels) shows the PSB-to-quenched fraction. The PSB-to-quenched fraction maps the abundance of PSBs relative to the evolving quenched fraction, rather than the total population, thereby avoiding additional systematics associated with the quenched fraction and its evolution. We find that the qualitative behaviour remains broadly similar at redshifts $z \lesssim 1$: The highest abundance of PSBs is consistently found at low stellar and halo masses. Specifically, the PSB-to-quenched fraction is consistently below $7\%$ at redshifts $z \leq 0.73$. Furthermore, the lower the redshift, the lower the PSB-to-quenched fraction. At higher redshifts ($z \geq 1.3$) PSBs are no longer most often found at low stellar masses. In particular, the low redshift preference for low stellar masses appears to be inverted at high redshift. High stellar mass galaxies at high redshift belong to the subset of galaxies characterised by the quickest mass assembly. 
When high stellar mass galaxies become quenched at high redshift, they likely host a significant population of young stars, thus fulfilling our PSB selection criteria. As a result, the PSB-to-quenched fraction at high redshift is highest among high stellar mass galaxies. The PSB-to-quenched fraction as a function of halo mass evolves with redshift: At low redshifts ($z \leq 0.73$), the PSB-to-quenched fraction exhibits the highest abundances at low halo masses. With increasing redshift ($z \geq 1$), the PSB-to-quenched fraction shows less preference for low halo mass. Similarly, DEEP2 and SDSS results find that $z \sim 0$ PSBs are found in relatively under-dense environments, while at $z \sim 1$ they are increasingly found in over-dense environments \citep{2009MNRAS.398..735Y}. The positive correlation between redshift and the PSB-to-quenched fraction is also found for the PSB-to-total fraction: In the stellar mass range $10.7 < \log_{10}(M_*/\mathrm{M_{\odot}}) < 12.0$ used in Figure \ref{fig:Q_PSBfrac_grid}, we find the following PSB-to-total fractions: $0.45\%$ at $z=0.07$, $0.95\%$ at $z=0.25$, $3.81\%$ at $z=0.73$, $13.4\%$ at $z=1.04$, $19.0\%$ at $z=1.32$, and $20.8\%$ at $z=1.71$. This behaviour agrees with observations of PSBs with stellar masses $10.0 < \log_{10}(M_*/\mathrm{M_{\odot}}) < 12.5$, which find that the fraction of PSBs declines from $\sim 5\%$ of the total population at $z \sim 2$, to $\sim 1\%$ by $z \sim 0.5$ \citep{2016MNRAS.463..832W}. At low redshift, the two differing stellar mass ranges yield similar abundances. However, at high redshift the agreement worsens. This is likely driven by the $\sim 5$ times lower stellar mass threshold used by \cite{2016MNRAS.463..832W}, compared to our threshold.
To demonstrate, in Box2 at $z=1.71$, this lower stellar mass threshold results in a $\sim 40$ times higher number of total galaxies in the stellar mass range $\log_{10}(M_*/M_{\odot}) = [10.0,12.5]$ compared to $\log_{10}(M_*/M_{\odot}) = [10.7,12.0]$. Connecting this with the fact that higher stellar mass galaxies at high redshift are statistically more likely to be classified as PSBs, as illustrated by Figure \ref{fig:Q_PSBfrac_grid}, our higher PSB-to-total fraction at high redshift is expected. Given these considerations, our results and those of \cite{2016MNRAS.463..832W} agree well. We conclude that Figure \ref{fig:Q_PSBfrac_grid} (bottom) suggests that both the redshift and environment play an important role in the specific evolution of PSBs. \subsection{Stellar mass functions of satellite galaxies} \label{sub:SMF} \begin{figure*} \includegraphics[width=1.99\columnwidth]{210506PSB_SMF_Nsnaps6_May21} \caption{Stellar mass functions of all Magneticum Box2 galaxies, split into total (1st row), star-forming (SF: 2nd row), quenched (Q: 3rd row) and PSB galaxies (4th row) at different redshifts in the range $0.07<z<1.71$ (increasing from left to right). The vertical dashed dotted black line at $\log_{10}(M_*/\mathrm{M_{\odot}}) \sim 10.7$ indicates our standard stellar mass threshold. The total stellar mass function (1st row) is compared to $z < 4$ observations based on COSMOS / UltraVISTA \protect\citep{2013ApJ...777...18M}. The SF, Q, and PSB selection is compared to two observational surveys: \protect\cite{2018MNRAS.473.1168R}, based on GAMA ($z < 1$), and \protect\cite{2016MNRAS.463..832W}, based on UKIDSS UDS ($0.5<z<2$). } \label{fig:SMF_grid} \end{figure*} Evaluating the galaxy stellar mass distribution is critical for understanding the relative importance of different evolutionary mechanisms: Figure \ref{fig:SMF_grid} shows the redshift evolution of the stellar mass function and its various components, as well as comparisons to observations.
As such, Figure \ref{fig:SMF_grid} provides a useful extension of Figure \ref{fig:Q_PSBfrac_grid} by displaying the stellar mass distribution and an additional component-wise split into various samples. Although we only consider high mass PSBs ($M_* \geq 4.97 \cdot 10^{10}\,\mathrm{M_{\odot}}$) for our analysis, we have extended the stellar mass function below our mass threshold, which is indicated by a vertical dashed dotted black line at $\log_{10}(M_*/\mathrm{M_{\odot}}) \sim 10.7$. Throughout the studied redshift range ($0.07<z<1.71$) displayed in Figure \ref{fig:SMF_grid}, the total stellar mass function (1st row) shows little evolution. When comparing the total stellar mass function with observations based on COSMOS / UltraVISTA \citep{2013ApJ...777...18M}, we find agreement at all redshifts, especially towards lower redshifts. In contrast, the star-forming population (2nd row) shows a significant redshift evolution and only matches observations well at high redshift. The kink in the star-forming stellar mass function at $\log_{10}(M_*/\mathrm{M_{\odot}}) \sim 10.3$, which becomes more evident with decreasing redshift, is the result of our active galactic nucleus (AGN) feedback. Specifically, above these stellar masses the AGN begins to continuously quench galaxies, leading to a relative under-abundance of star-forming galaxies in the stellar mass range $\log_{10}(M_*/\mathrm{M_{\odot}}) \sim [10.3,11.5]$ \citep{2015MNRAS.448.1504S}. This difference becomes most evident when comparing our results to observational surveys based on GAMA ($z < 1$) \citep{2018MNRAS.473.1168R} and on UKIDSS UDS ($0.5<z<2$) \citep{2016MNRAS.463..832W}. This relative lack of star-forming galaxies, compared to observations, becomes stronger towards lower redshifts, as more galaxies host AGNs. This effect also influences the total and quenched stellar mass functions, as evidenced by the perturbation found at $\log_{10}(M_*/\mathrm{M_{\odot}}) \sim 10.4$ in an otherwise fairly smooth distribution.
Following the PSB stellar mass function with redshift in Figure \ref{fig:SMF_grid}, we find a significant evolution: At low redshifts PSBs are primarily found below our stellar mass cut (vertical dashed dotted black line), while they are typically found above our stellar mass cut at high redshifts. In other words, the abundance of PSBs above our stellar mass threshold increases significantly with increasing redshift. This strong redshift evolution agrees with VVDS observations, which find that the mass density of strong PSB galaxies is $230$ times lower at $z \sim 0.07$ than at $z \sim 0.7$ \citep{2009MNRAS.395..144W}. When comparing the shape of the PSB galaxy stellar mass function to observations \citep{2016MNRAS.463..832W, 2018MNRAS.473.1168R}, we do not find close agreement. However, we note that observations at similar redshifts, as indicated by the legend in the bottom row of Figure \ref{fig:SMF_grid}, do not appear to show agreement either. This may be due to different selection mechanisms: While \cite{2016MNRAS.463..832W} derive three eigenvectors, termed super-colours, via a principal component analysis (PCA) of the spectral energy distribution (SED) \citep{2014MNRAS.440.1880W}, \cite{2018MNRAS.473.1168R} use two spectral indices based on a PCA to distinguish different galaxy types. In contrast, we determine the percentage of young stars formed within the last $0.5\,$Gyr and the current star formation rate (SFR) (see Section \ref{sub:selection}). Selecting PSBs from the SED, rather than directly from the simulated star formation history, may thus lead to discrepancies. In short, the PSB stellar mass function appears quite sensitive to the exact selection criteria, both in our simulation and in observations.
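At its core, a stellar mass function of the kind shown in Figure \ref{fig:SMF_grid} is a number density per dex of stellar mass, $\Phi_i = N_i / (V \, \Delta\log_{10}M_*)$. A minimal sketch of the binning (the toy masses, bin edges, and volume are placeholders, not the values used for the figure):

```python
def stellar_mass_function(log_masses, volume, lo=9.0, hi=12.5, dlogm=0.25):
    """Number density Phi [volume^-1 dex^-1] in bins of log10(M*/Msun).

    Phi_i = N_i / (V * dlogm), with N_i the galaxy count in bin i.
    """
    nbins = int(round((hi - lo) / dlogm))
    counts = [0] * nbins
    for lm in log_masses:
        if lm < lo:          # skip galaxies below the lowest bin edge
            continue
        i = int((lm - lo) / dlogm)
        if i < nbins:
            counts[i] += 1
    centres = [lo + (i + 0.5) * dlogm for i in range(nbins)]
    phi = [n / (volume * dlogm) for n in counts]
    return centres, phi

# Toy catalogue: 50 galaxies at log10(M*) = 10.6 in a (100 Mpc)^3 volume
centres, phi = stellar_mass_function([10.6] * 50, volume=100.0 ** 3)
```

Summing $\Phi_i \, \Delta\log_{10}M_* \, V$ over all bins recovers the total galaxy count, which makes the normalisation easy to verify.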
\subsection{Galaxy distribution within halos} \label{sub:FoFNsub} \begin{figure} \includegraphics[width=0.95\columnwidth]{201108PSB_FoFNsubs_Nbeg136to100_Jul20} \caption{Distribution of $N_{\mathrm{gal/FoF}}$, the number of galaxies per Friends-of-Friends (FoF) halo, of PSBs (green), quenched (QSMMC, red), and star-forming stellar mass matched control sample (SFSMMC, blue) galaxies as a function of look-back-time $t_{\mathrm{lbt}}$. Galaxies are identified at $t_{\mathrm{lbt}}=0\,$Gyr (top panel), thereafter their progenitors are tracked back to $t_{\mathrm{lbt}} \sim 2.5\,$Gyr (bottom panel). For comparison, the PSB $t_{\mathrm{lbt}}=0\,$Gyr distribution is included in each panel as a solid black line. All FoFs with more than $20$ galaxies are grouped together in the last bin. } \label{fig:FoFNsubs} \end{figure} Figure \ref{fig:FoFNsubs} shows the distribution of the number of galaxies per Friends-of-Friends (FoF\footnote{A FoF linking length of $0.16$ times the mean DM particle separation is used \citep{2009MNRAS.399..497D}. Thereafter, each stellar and gas particle is associated with the nearest DM particle and ascribed to the corresponding FoF group, provided one exists, i.e. has at least $32$ DM particles \citep{2009MNRAS.399..497D}.}) halo $N_{\mathrm{gal/FoF}}$ of all PSBs identified at $z \sim 0$ in Box2. All PSBs (green solid lines) were tracked from the present day back over the last $2.5\,$Gyr. To better understand how PSBs differ from other galaxies, quenched (red dashed lines) and star-forming (blue dotted lines) stellar mass matched control samples (QSMMC and SFSMMC, respectively) of galaxies and their evolution are additionally shown. We find a significantly stronger evolution of $N_{\mathrm{gal/FoF}}$ in the PSB (green) and SFSMMC (blue) samples compared to the QSMMC (red) sample. At $t_{\mathrm{lbt}}=0\,$Gyr (top panel), the PSB, QSMMC, and SFSMMC samples initially share a similar distribution. The only meaningful exception is the largest bin, i.e.
$N_{\mathrm{gal/FoF}} > 20$, which is a factor of $\sim 3$ larger for the QSMMC compared to the PSB sample, indicating a preference of quenched galaxies for richer membership FoF halos. In contrast, PSBs are rarely found in rich membership FoF halos. In high membership FoF halos, star-forming galaxies lie between the other two samples. The varying galaxy abundances in different halo mass ranges are listed in the bottom row of Table \ref{tab:BHgrowthTable}. As the look-back-time increases, we find that PSBs, and to a lesser degree the SFSMMC galaxies, develop a clear peak around $N_{\mathrm{gal/FoF}} \sim 3$, while values of $N_{\mathrm{gal/FoF}} = 1$ experience a strong decrease. In contrast, the QSMMC distribution remains fairly similar over time. This fundamental difference in evolution of PSB and star-forming galaxies compared to quiescent galaxies suggests that the initial environment at $t_{\mathrm{lbt}} = 2.5\,$Gyr plays an important role in influencing star formation, and subsequently PSB galaxy evolution. At $t_{\mathrm{lbt}}=2.5\,$Gyr the overwhelming majority of halos in which PSBs (and SFSMMC galaxies) are found host $N_{\mathrm{gal/FoF}} \sim 2-4$ galaxies. This differs significantly from QSMMC galaxies, which are most often found in halos hosting one galaxy. In contrast, PSBs are rarely found with $N_{\mathrm{gal/FoF}}= 1$, indicating that they are usually not found in isolation\footnote{We note that the number of galaxies found in a given halo is a function of resolution and thus the differences in relative abundance between galaxy types are a more robust quantity.}. The similarity between the PSB and SFSMMC distributions at $t_{\mathrm{lbt}}=2.5\,$Gyr, shown in the bottom panel of Figure \ref{fig:FoFNsubs}, suggests that star formation is associated with the relative abundance of galaxies in the direct environment.
When connecting the initial abundance of galaxies within the FoF halo with the decrease in the number of galaxies found at lower look-back-times, a mechanism linked to the interaction with other galaxies appears likely. Specifically, Figure \ref{fig:FoFNsubs} suggests that galaxy-galaxy processes, such as mergers with nearby galaxies, are important in supporting star formation, as well as possibly being linked to the starburst phase and the following star formation shutdown which characterise PSBs. \subsection{A closer look: Evolution of massive post-starburst galaxies} In Table \ref{tab:6massivePSBs} we introduce six massive PSBs, which we study in greater detail alongside the total population of $647$ PSBs. These six massive PSBs are chosen based on their high stellar mass, i.e. higher number of stellar particles, which allows a more detailed (spatial) examination of the involved physical processes. Table \ref{tab:6massivePSBs} lists relevant galactic and halo properties of the six PSBs at $t_{\mathrm{lbt}}=0\,$Gyr. Similar to the vast majority ($89\%$) of the global $647$ PSB sample (see bottom row in Table \ref{tab:BHgrowthTable}), five of our six massive PSBs are found in halos with halo mass $M_{\mathrm{200,crit}} < 10^{13}\, \mathrm{M_{\odot}}$. Figure \ref{fig:massiveHistos} shows the diverse distributions of stellar histories. Specifically, the number of stellar particles added to a given galaxy within a given look-back-time interval for the six massive PSBs is shown. We highlight that this representation includes both internally formed (in-situ) and accreted (ex-situ) stars (whereas the in-situ star formation is shown in Figure \ref{fig:MSpanel}). The first (last) three galaxies of Table \ref{tab:6massivePSBs} are displayed in the top (bottom) row, as indicated by the IDs. 
The stellar histories shown in Figure \ref{fig:massiveHistos} vary: While some massive PSBs are characterised by continuous star formation (and/or accretion) in recent look-back-times (pink, blue), others show strong recent star formation (black). Both Table \ref{tab:6massivePSBs} and Figure \ref{fig:massiveHistos} show that massive PSBs with very different properties and stellar histories are captured by our selection criteria. \begin{table*} \begin{center} \begin{tabular}{| l | c | c | c | c | c | c | c | c |} \hline ID & $M_*$ [$\mathrm{M_{\odot}}$] & $\mathrm{SSFR} \cdot t_{\mathrm{H}}$ & $M_{\mathrm{gas}}$ [$\mathrm{M_{\odot}}$] & $M_{\mathrm{cgas}}$ [$\mathrm{M_{\odot}}$] & $M_{\mathrm{BH}}$ [$\mathrm{M_{\odot}}$] & $M_{\mathrm{200,crit}}$ [$\mathrm{M_{\odot}}$] & $R_{\mathrm{200}}$ [$\mathrm{kpc}$] & $\#$Galaxies in halo \\ \hline 430674 & $1.55 \cdot 10^{11}$ & 0.18 & $6.57 \cdot 10^{11}$ & $1.42 \cdot 10^{11}$ & $5.32 \cdot 10^{7}$ & $6.70 \cdot 10^{12}$ & 405 & 13 \\ 472029 & $1.52 \cdot 10^{11}$ & 0.00 & $6.42 \cdot 10^{11}$ & $9.58 \cdot 10^{10}$ & $2.25 \cdot 10^{8}$ & $7.67 \cdot 10^{12}$ & 424 & 6 \\ 625491 & $1.24 \cdot 10^{11}$ & 0.21 & $2.09 \cdot 10^{11}$ & $9.21 \cdot 10^{10}$ & $8.02 \cdot 10^{7}$ & $2.21 \cdot 10^{12}$ & 280 & 3 \\ 711135 & $1.20 \cdot 10^{11}$ & 0.05 & $1.30 \cdot 10^{11}$ & $3.40 \cdot 10^{10}$ & $8.10 \cdot 10^{7}$ & $1.45 \cdot 10^{12}$ & 243 & 1 \\ 417642 & $1.11 \cdot 10^{11}$ & 0.00 & $9.04 \cdot 10^{10}$ & $7.43 \cdot 10^{10}$ & $1.76 \cdot 10^{8}$ & $1.09 \cdot 10^{13}$ & 477 & 8 \\ 659121 & $1.08 \cdot 10^{11}$ & 0.00 & $1.90 \cdot 10^{11}$ & $6.36 \cdot 10^{10}$ & $1.20 \cdot 10^{8}$ & $1.96 \cdot 10^{12}$ & 269 & 1 \\ \end{tabular} \end{center} \caption{Overview of properties of six massive PSBs at $z \sim 0$ which are studied in greater detail. 
From left to right: (1) \texttt{SUBFIND} identification, (2) Stellar mass, (3) Blueness, (4) Gas mass, (5) Cold gas mass, (6) BH mass, (7) Halo mass, (8) Halo radius, (9) Number of galaxies in halo. All values are given at $t_{\mathrm{lbt}} = 0\,$Gyr.} \label{tab:6massivePSBs} \end{table*} \begin{figure} \includegraphics[width=0.95\columnwidth]{210505PSB_massive_sel_histos_6panels.eps} \caption{Stellar history, i.e. distribution of newly added stars (in- and ex-situ) as a function of look-back-time for the six massive PSBs introduced in Table \ref{tab:6massivePSBs}. First row IDs from left to right: 430674, 472029, 625491. Second row IDs from left to right: 711135, 417642, 659121. Coloured vertical lines indicate merger events (more easily visible in Figure \ref{fig:AGN_SNe_Energy}): major (solid), minor (dashed), and mini (dotted). Note that only recent mergers with $t_{\mathrm{lbt}} \lesssim 3.6\,$Gyr are shown. } \label{fig:massiveHistos} \end{figure} \subsection{Main sequence tracks} \label{sub:mainseq} Figure \ref{fig:MSpanel} shows the positions of post-starburst (PSB: green) and star-forming (SF: blue) galaxies and their progenitors in the stellar mass - star formation rate (SFR) plane from left to right at: $z = 0.07$ (1st panel), $z = 0.42$ (2nd panel), peak PSB star formation (3rd panel), and the evolution of the six massive PSBs (4th panel) introduced in Table \ref{tab:6massivePSBs}. To compare the behaviour with observations, we have added main sequence fits (shaded regions) for redshifts $z=0.4$ and $z=0.1$ \citep{2014ApJS..214...15S, 2018A&A...615A.146P}. The six massive PSBs in the right panel are identified at $z = 0.07$ (crosses), i.e. at $t_{\mathrm{lbt}} = 0\,$Gyr. The PSB progenitors are then tracked backwards in intervals of $t_{\mathrm{lbt}} \sim 0.11\,$Gyr, yielding additional data points (small diamonds). PSB progenitors are tracked to a maximum redshift of $z=0.42$ (triangle), i.e.
to $t_{\mathrm{lbt}} \sim 3.6\,$Gyr, depending on how recently their black holes have been seeded (see Section \ref{sub:BHgrowthStat} for more details). Additionally, the logarithmic relative deviation of galaxies from the observationally based redshift evolving main sequence (MS) fit \citep{2018A&A...615A.146P} is plotted in the top panels, i.e. $\Delta \log_{10}(MS[z]) = \log_{10}(SFR[z]/MS[z])$. In other words, galaxies with $\Delta \log_{10}(MS[z]) = 0$ lie on the main sequence, and positive (negative) values correspond to the logarithm of the factor by which they lie above (below) the main sequence. \begin{figure*} \includegraphics[width=1.95\columnwidth]{210323PSB_SSFRvsMstar_Nbeg136to100_hhm1_multi_Mar21} \caption{Post-starburst (PSB) progenitor evolution in the stellar mass - star formation rate plane. 1st panel (from left to right): PSB (green) and star-forming stellar mass matched control (SFSMMC: blue) galaxies at $z = 0.07$, i.e. at look-back-time (lbt) $t_{\mathrm{lbt}}=0\,$Gyr. 2nd panel: PSB and SFSMMC progenitors at $z = 0.42$, i.e. $t_{\mathrm{lbt}}=3.6\,$Gyr. 3rd panel: Peak star formation rate (SFR) for each PSB and corresponding SFSMMC progenitor with median SFR bins. 4th panel: Evolution of a subset of six massive PSBs (see Table \ref{tab:6massivePSBs}), which are tracked through time, ending at $t_{\mathrm{lbt}}=0\,$Gyr (crosses). Each step (small diamonds) represents an incremental increase of $t_{\mathrm{lbt}} \sim 0.11\,$Gyr, ultimately arriving at the furthest tracked progenitors at a maximum $t_{\mathrm{lbt}}=3.6\,$Gyr (triangles). The shaded regions (1st and 2nd panel) provide redshift dependent observational fits to the main sequence \protect\citep{2014ApJS..214...15S,2018A&A...615A.146P}. The grey density distribution shows the abundance and location of all Magneticum Box2 galaxies at $z = 0.07$ (1st and 4th panel) and $z = 0.42$ (2nd panel).
For plotting purposes, in the 3rd and 4th panel, galaxies with $SFR < 0.02$ are artificially set to $SFR = 0.02$ (and values of $\Delta \log_{10}(MS[z]) < -0.8$ are set to $-0.8$ in the top panels). For reference, a recent Milky Way stellar mass estimate $M_{*,\mathrm{MW}} \sim 6 \cdot 10^{10}\, \mathrm{M_{\odot}}$ has been included (black X) in the 1st panel \protect\citep{2015ApJ...806...96L}. The top panels show the logarithmic relative deviation from the \protect\cite{2018A&A...615A.146P} redshift evolving main sequence, i.e. $\Delta \log_{10}(MS[z]) = \log_{10}(SFR[z]/MS[z])$, for each population in the connected main panel. } \label{fig:MSpanel} \end{figure*} The distribution of PSB (green) and SF (blue) galaxies at $z = 0.07$ is shown in the first panel of Figure \ref{fig:MSpanel}. The dichotomy found at $t_{\mathrm{lbt}}=0\,$Gyr, i.e. at our identification time, is the result of our selection criteria (see Section \ref{sub:selection}): By design, PSBs are quenched, while the SFSMMC sample is characterised by star formation. This dichotomy is consistent with observations \citep{2014ApJS..214...15S, 2018A&A...615A.146P}. Furthermore, we see that the SF galaxies appear as an extension of the grey density distribution describing the abundance of all Box2 Magneticum galaxies, while the PSBs scatter below the main sequence. The grey density distribution experiences a strong cut-off at $\log_{10}(\mathrm{M_*} / \mathrm{M_{\odot}}) \gtrsim 10.4$ because the SF population is characterised by a strong decline at this stellar mass (see Figure \ref{fig:SMF_grid}). We note that the relative deviation from the evolving main sequence shown above the first panel uses a redshift of $z=0.09$, as the observational fit is no longer defined at $z=0.07$ \citep{2018A&A...615A.146P}, the redshift at which the Magneticum results are shown.
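The deviation statistic shown in the top panels can be computed with any main sequence parametrisation. A brief sketch using, purely for illustration, the \cite{2014ApJS..214...15S} best fit (the figure itself uses the redshift evolving fit of \cite{2018A&A...615A.146P}, whose coefficients are not reproduced here):

```python
import math

def ms_sfr(log_mstar, t_gyr):
    """Main sequence SFR [Msun/yr] from the Speagle et al. (2014) best fit:
    log10(SFR) = (0.84 - 0.026 t) * log10(M*) - (6.51 - 0.11 t),
    with t the age of the Universe in Gyr (an illustrative stand-in
    for the fit actually used in the figure)."""
    return 10.0 ** ((0.84 - 0.026 * t_gyr) * log_mstar - (6.51 - 0.11 * t_gyr))

def delta_log_ms(sfr, log_mstar, t_gyr):
    """Logarithmic main sequence offset, Delta log10(MS) = log10(SFR / MS)."""
    return math.log10(sfr / ms_sfr(log_mstar, t_gyr))

# A galaxy forming stars at five times the main sequence rate lies
# log10(5) ~ 0.7 dex above it, regardless of the adopted fit.
boost = delta_log_ms(5.0 * ms_sfr(10.5, 13.0), 10.5, 13.0)
```

By construction, a galaxy exactly on the adopted main sequence has $\Delta \log_{10}(MS[z]) = 0$, so the statistic isolates the offset from any systematic normalisation of the fit.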
The $z=0.42$ display of PSB and SF progenitors in the second panel of Figure \ref{fig:MSpanel} shows no meaningful difference between the two populations. Both populations match the behaviour of the general distribution of the main sequence of Magneticum galaxies at the same redshift, which is shown in the underlying grey density distribution. Furthermore, the general Box2 Magneticum galaxy distribution (grey), as well as the PSB and SF galaxies, are well described by observational fits at $z=0.4$ \citep{2014ApJS..214...15S, 2018A&A...615A.146P}. We note that the fewest galaxies are found in the second panel, compared to the first and third panels, because not all galaxies can be traced back to higher redshifts. Furthermore, at $z=0.4$ galaxies with $SFR = 0$ are not shown due to the logarithmic scaling. The third panel in Figure \ref{fig:MSpanel} displays the distribution of PSB and SF progenitors at the height of PSB progenitor star formation within $t_{\mathrm{lbt}} < 1.48\,$Gyr, i.e. since $z = 0.19$. If available, the SFSMMC galaxy corresponding to a given PSB galaxy is displayed, otherwise a random unique SFSMMC at the same redshift is shown for comparison. The median PSB peak star formation occurs at $z=0.13$, i.e. at $t_{\mathrm{lbt}}=0.75\,$Gyr. When comparing the two populations we find that PSBs are characterised by higher SFRs than SFSMMC galaxies, as illustrated by the dashed horizontal lines indicating the median of each population at different stellar mass intervals. Interestingly, $\sim 17\%$ of SF progenitors (blue, 3rd panel) at peak PSB progenitor star formation are found on the black horizontal line, i.e. have $SFR < 0.02$. These SFSMMC galaxies were previously quiescent and have become star-forming at $z \sim 0$ via recent mergers, i.e. have been rejuvenated. In this context, we note that the recent SFR of the most massive progenitors need not always be correlated with a young stellar population at $t_{\mathrm{lbt}}=0\,$Gyr.
This is evidenced by the fact that within our PSB sample we have a galaxy which has no in-situ star formation over the evaluated time-span, as illustrated by the green diamond (3rd panel) found on the black horizontal line showing galaxies with $SFR < 0.02$. In other words, galaxies need not have formed in-situ stars to host a young stellar population; rather, as is the case for the mentioned PSB galaxy, young ex-situ stars may also be accreted during mergers, leading to a young stellar population in the merger remnant at $t_{\mathrm{lbt}}=0\,$Gyr (see also Section \ref{sub:gasEvol}). The fourth panel of Figure \ref{fig:MSpanel} shows that massive PSB progenitors are found significantly above the main sequence prior to their quiescent phase at $t_{\mathrm{lbt}}=0\,$Gyr (crosses). PSB progenitors display prolonged strong star formation episodes, with SFRs consistently being significantly larger than the redshift evolving main sequence \citep{2018A&A...615A.146P}. Generally, independent of their duration, the starbursts of massive PSB progenitors lie a factor of $\sim 5-20$ above the redshift evolving main sequence, i.e. $0.7 \lesssim \Delta \log_{10}(MS[z]) \lesssim 1.3$. In Figure \ref{fig:MSpanel}, we find both galaxies that continuously remain above the main sequence as well as galaxies that experience rejuvenation, i.e. galaxies which were initially below the main sequence but rise above it during their starburst phase. The starburst timescales ($t_{\mathrm{sb}}$) differ widely and are within the range $t_{\mathrm{sb}} \sim (0.4-3)\,$Gyr. This spread in timescales is a reflection of the different star formation histories prior to the starburst. As the global $647$ PSB sample is tracked backwards, the sample size is reduced, especially if BHs were recently seeded. This results in a sample size of $455$ tracked PSB progenitors, which reach $t_{\mathrm{lbt}} \geq 2.5\,$Gyr. Of these $455$ successfully tracked PSBs, $105$ ($23\%$) are considered to be rejuvenated galaxies.
Independent of whether galaxies are rejuvenated or show sustained star formation, they show a sharp decline in star formation at the end of the starburst phase. Typically, this decline to passive levels of star formation happens within $\lesssim 0.4\,$Gyr. This conflicts with our understanding of the typical behaviour of field galaxies, which make up the vast majority of our sample (see last row of Table \ref{tab:BHgrowthTable}), as field galaxies generally experience a gradual decline in average SFR \citep{2007ApJ...660L..43N}. In other words, the (massive) PSBs in Figure \ref{fig:MSpanel} not only show enhanced, often sustained, starbursts, but also experience an abrupt cessation of star formation, the details of which are discussed in Section \ref{sec:shutdown}. \section{The role of mergers} \label{sec:mergers} It is well established that galaxy mergers impact the galactic star formation rate (SFR), both directly \citep{2005MNRAS.361..776S, 2009ApJ...697L..38J, 2018MNRAS.478.3447E, 2019MNRAS.489.4196L} and indirectly \citep{2013MNRAS.430.1901H, 2014MNRAS.437.1456B, 2014ApJ...792...84Y}. However, the nature and relevant parameters of the mergers and how they influence the SFR is still debated. For example, while the SIMBA cosmological simulations find an increasing impact \citep{2019MNRAS.490.2139R}, observations based on SDSS, KiDS, and CANDELS find that mergers do not significantly impact the SFR, compared to non-merging systems \citep{2019A&A...631A..51P}, and observations based on $32$ PSBs from LEGA-C suggest that mergers likely trigger the rapid shutdown of star formation found in PSBs \citep{2020ApJ...888...77W}. To disentangle this complex relationship between mergers and the SFR, we investigate mergers in Box2, both on an individual basis as well as statistically. 
\subsection{Case study: Gas evolution} \label{sub:gasEvol} The case study of one typical PSB (progenitor), selected from Table \ref{tab:6massivePSBs} (ID=417642), is shown in Figure \ref{fig:traceGasMassive}. The goal is to map the (cold) gas evolution as a means of investigating the initial triggering of the starburst and the following starburst phase. To uncover the mechanisms involved, Figure \ref{fig:traceGasMassive} shows the evolution of the stellar history (1st row), the gas phase (2nd row), and a projection of the spatial gas distribution (3rd row) as a function of look-back-time for the selected PSB (progenitor). To better visualise the evolution of the gas involved in the recent starburst phase, all star-forming gas at $t_{\mathrm{lbt}}=0.43\,$Gyr, i.e. one time-step before the shutdown, is identified and subsequently coloured green. These identified gas particles maintain their green colouring both prior to and after this look-back-time. When considering higher look-back-times in Figure \ref{fig:traceGasMassive}, we find that the recent increase in star formation at $t_{\mathrm{lbt}} \sim 2 \,$Gyr coincides with a close galaxy-galaxy interaction, followed by a major merger event at $t_{\mathrm{lbt}} = 0.75 \,$Gyr (see solid red vertical line in Figure \ref{fig:massiveHistos}). The period between $t_{\mathrm{lbt}} \sim (0-2)\,$Gyr is characterised by prolonged star formation. This is not an exception; rather, most PSB progenitors experience recent merger events (see Table \ref{tab:MergerTable}). It appears that the initial close galaxy-galaxy interactions and the subsequent mergers provide a mechanism by which gas is transported inwards, increasing the number of gas particles above the density threshold (2nd row) required for star formation. The increase in the supply of cold, dense gas within the PSB progenitor then enables the starburst.
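The green colouring described above amounts to selecting particle IDs at one snapshot and masking the same IDs at every other snapshot. A minimal sketch, assuming each snapshot provides `ids` and `sfr` arrays (a hypothetical in-memory layout, not the actual Magneticum reading routines):

```python
import numpy as np

def tag_starforming_gas(snapshots, tag_lbt):
    """Mark all gas particles that are star-forming at look-back-time
    `tag_lbt` (here 0.43 Gyr, one time-step before the shutdown) and
    follow the same particle IDs through every other snapshot."""
    snap = snapshots[tag_lbt]
    tagged_ids = snap['ids'][snap['sfr'] > 0.0]
    # One boolean mask per snapshot: True where a tagged particle sits.
    return {lbt: np.isin(s['ids'], tagged_ids) for lbt, s in snapshots.items()}
```

Because particle IDs are conserved across snapshots, the resulting masks trace both the origin of the starburst gas at high look-back-times and its redistribution after the shutdown.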
As in the vast majority of PSBs surveyed in this manner, a strong diffusion of gas is registered in Figure \ref{fig:traceGasMassive} at $t_{\mathrm{lbt}}=0\,$Gyr, following the starburst phase. In the second row, we find a strong decrease in gas density, accompanied by an overall increase in temperature within a timescale of $t \sim 0.4\,$Gyr, as evidenced by the distribution of previously star-forming gas (green) over the entire density and temperature regime. This behaviour at low look-back-times is mirrored in the spatial domain (3rd row), which also provides evidence for a strong redistribution of previously star-forming gas (green). Although the spatial distribution widens, large cold gas reservoirs remain within the PSB galaxy at $t_{\mathrm{lbt}}=0\,$Gyr, agreeing with recent observations \citep{2020ApJ...900..107Y}. We reviewed the gas evolution of multiple different PSBs and verified that the behaviour shown in Figure \ref{fig:traceGasMassive} is not an exception, but rather typical for our (massive) PSB sample. \begin{figure*} \includegraphics[width=1.95\columnwidth]{210326PSB_gas_phase_diagram_trace_snr136to100_PSB4_NbegSubID417642_selcGas_isnr1_clustersample0_Mar21} \caption{Case study of the gas evolution of one of the PSBs ($417642$), as introduced in Table \ref{tab:6massivePSBs}. The first row shows the stellar history as a function of look-back-time, while the second row shows the gas phase diagram. The third row displays the evolution of the spatial distribution (co-moving side length $100\,h^{-1}\,\mathrm{ckpc}$) of gas (black), cold gas (blue), i.e. $T < 10^5\,K$, and star-forming gas (orange). To better understand the evolution of the gas involved in star formation, all star-forming gas at $t_{\mathrm{lbt}}=0.43\,$Gyr (green) is identified and tracked through all look-back-times to reveal its origin, as well as its distribution at $t_{\mathrm{lbt}}=0\,$Gyr.
The tracked gas (green) is plotted with a smaller symbol size, so that the underlying star-forming gas (orange) remains visible in the background. The 'x' at the centre of the spatial distribution marks the black hole of the PSB (progenitor).} \label{fig:traceGasMassive} \end{figure*} When viewing the stellar history in the first row of Figure \ref{fig:traceGasMassive}, the relative weighting of different components appears to change as time progresses. For example, at $t_{\mathrm{lbt}}=1.68\,$Gyr the onset of the starburst appears to be significantly stronger (compared to the older stars) than at lower look-back-times. Investigating this behaviour, we found that a significant population of older stars is accreted onto the PSB progenitor, thus impacting the relative abundance of different components of the stellar history. In other words, during the presented merging process, more ex-situ old stars are accreted than young in-situ stars are formed (see also Section \ref{sub:mainseq}). \subsection{Merger statistics} \label{sub:mergerStat} To extend the case study conducted in Section \ref{sub:gasEvol} by a statistical analysis, we begin by evaluating the merger history of the $z \sim 0$ global $647$ PSB sample. In addition, we also analyse the two (quiescent and star-forming, respectively) stellar mass matched control samples QSMMC and SFSMMC. The results of the merger tree evaluation for these samples are listed in Table \ref{tab:MergerTable}. Mergers are defined by their progenitor peak stellar mass ratio within the past four snapshots prior to merger identification, i.e. $t_{\mathrm{lbt}} \leq 0.43\,$Gyr: mini mergers 1:10 - 1:100, minor mergers 1:3 - 1:10, and major mergers 1:1 - 1:3. The first data row lists the sample size of successfully constructed merger trees. This value is less than the total sample size ($647$), as merger trees only exist over the entire evaluated time-span of $t_{\mathrm{lbt}} = 2.5\,$Gyr if the main progenitor was formed prior to this time-span.
The next three rows list the total number of identified mergers (galaxies can have multiple mergers of the same type) for each type. The next three rows in Table \ref{tab:MergerTable} display the percentage of the analysed merger trees which identify at least one merger event of the respective type. The last row lists the percentage of galaxies with at least one merger event, independent of the type. Table \ref{tab:MergerTable} shows that the PSB sample is characterised by an abundance of merger events. This agrees with low redshift observations, which find that PSBs are associated with interactions and/or mergers \citep{2004ApJ...607..258Y, 2008ApJ...688..945Y, 2009MNRAS.396.1349P, 2017A&A...597A.134M}. Specifically, $64.7\%$ of PSBs experience a major merger within the last $2.5\,$Gyr. In contrast, only $9.4\%$ of QSMMC galaxies experience a major merger within the same time-span, while this percentage rises to $58.1\%$ for SFSMMC galaxies. Compared to the QSMMC, the PSB sample experiences a factor of $\sim 7$ more major merger events. When comparing the samples, we find close similarities between the PSB and SFSMMC sample, i.e. both show an abundance of mergers. In contrast, the QSMMC sample is characterised by a low abundance of mergers and differs significantly from the other two samples. However, it is not clear that this is typical for PSBs identified at higher redshifts. 
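The merger classification by progenitor peak stellar mass ratio can be sketched as follows; the assignment of the exact boundary values to a class is an assumption of this illustration.

```python
def classify_merger(m_main, m_sat):
    """Classify a merger by the peak stellar mass ratio of its two
    progenitors, using the bins defined in the text: major 1:1 - 1:3,
    minor 1:3 - 1:10, mini 1:10 - 1:100 (masses in arbitrary but
    common units, e.g. solar masses)."""
    ratio = min(m_main, m_sat) / max(m_main, m_sat)
    if ratio >= 1.0 / 3.0:
        return 'major'
    if ratio >= 1.0 / 10.0:
        return 'minor'
    if ratio >= 1.0 / 100.0:
        return 'mini'
    return None  # below 1:100 -> not counted as a merger
```

Applied to every node of a merger tree over the last $2.5\,$Gyr, counting the non-`None` results per class yields the totals $\Sigma(N)$ listed in Table \ref{tab:MergerTable}.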
\begin{table} \begin{center} \begin{tabular}{| l | c | c | c |} \hline Criterion & PSBs & QSMMC & SFSMMC \\ \hline Analysed trees & 632 & 646 & 630 \\ \hline $\Sigma(N_{\mathrm{mini}})$ & 343 & 114 & 285 \\ \hline $\Sigma(N_{\mathrm{minor}})$ & 295 & 53 & 260 \\ \hline $\Sigma(N_{\mathrm{major}})$ & 465 & 65 & 415 \\ \hline $N_{\geq 1 \mathrm{ mini}}$ & $40.7\%$ & $14.1\%$ & $33.7\%$ \\ \hline $N_{\geq 1 \mathrm{ minor}}$ & $37.3\%$ & $ 7.4\%$ & $33.8\%$ \\ \hline $N_{\geq 1 \mathrm{ major}}$ & $64.7\%$ & $ 9.4\%$ & $58.1\%$ \\ \hline $N_{\geq 1 \mathrm{ merger}}$ & $88.9\%$ & $23.4\%$ & $79.7\%$ \\ \end{tabular} \end{center} \caption{Overview of different merger abundances of our global $z \sim 0$ identified PSB sample and its stellar mass matched control (SMMC) samples, subdivided into quiescent (QSMMC) and star-forming (SFSMMC) samples. The first data row displays the number of successfully analysed merger trees out of the $647$ galaxies traced for each sample over the time-span $t_{\mathrm{lbt}} = (0.0-2.5)\,$Gyr. The next three rows list the total number of mergers $\Sigma(N)$ encountered over the evaluated time-span, subdivided into the following classes and stellar mass ratios: Mini 1:10 - 1:100, Minor 1:10 - 1:3, Major 1:3 - 1:1. The subsequent three rows list the percentage of galaxies with respect to the analysed merger trees, which encountered at least one merger event of the respective type ($N_{\geq 1}$). 
The last row shows the percentage of galaxies which encountered at least one merger, independent of type.} \label{tab:MergerTable} \end{table} \begin{table} \begin{center} \begin{tabular}{| l | c | c | c |} \hline Criterion & PSBs & QSMMC & SFSMMC \\ \hline Analysed trees & 10520 & 10596 & 10479 \\ \hline $\Sigma(N_{\mathrm{mini}})$ & 8559 & 4899 & 8692 \\ \hline $\Sigma(N_{\mathrm{minor}})$ & 6747 & 2439 & 6832 \\ \hline $\Sigma(N_{\mathrm{major}})$ & 6014 & 1638 & 6822 \\ \hline $N_{\geq 1 \mathrm{ mini}}$ & $50.7\%$ & $33.6\%$ & $51.3\%$ \\ \hline $N_{\geq 1 \mathrm{ minor}}$ & $50.5\%$ & $20.6\%$ & $50.1\%$ \\ \hline $N_{\geq 1 \mathrm{ major}}$ & $47.3\%$ & $14.3\%$ & $52.7\%$ \\ \hline $N_{\geq 1 \mathrm{ merger}}$ & $92.6\%$ & $51.9\%$ & $92.3\%$ \\ \end{tabular} \end{center} \caption{Same as Table \ref{tab:MergerTable} but showing an overview of different merger abundances of $z \sim 0.9$ identified PSBs (initial PSB sample size of $10624$) and their control galaxies (QSMMC and SFSMMC). The galaxies were traced for each sample over the time-span $t_{\mathrm{lbt}} = (6.5-9.0)\,$Gyr. } \label{tab:Nbeg064MergerTable} \end{table} In Section \ref{sec:environment}, we showed the redshift evolution of both the PSB-to-quenched fraction and the PSB stellar mass function. In this context, we investigate the abundance of mergers at redshift $z=0.9$, in the same manner as outlined for our global $z \sim 0$ PSB sample. This is motivated by the desire to separate the redshift evolution of identically selected samples from differences resulting from different (later) environmental selections. We choose redshift $z=0.9$ because we also study the merger abundance in the cluster environment (see Section \ref{sub:clusterMergers}) and compare it to observations (see Section \ref{sub:vlosObsComp}) at this redshift. As established by Figures \ref{fig:Q_PSBfrac_grid} and \ref{fig:SMF_grid}, the abundance of PSBs increases with increasing redshift. 
Table \ref{tab:Nbeg064MergerTable} reflects this too, as significantly more PSBs are identified at $z=0.9$ (10624 galaxies), compared to $z\sim 0$ (647 galaxies). Beyond this, we find that: First, the percentage of galaxies which experience at least one merger (last row) increases, especially for the QSMMC sample (factor $\sim 2$), less so for the SFSMMC sample (increase by $\sim 12\%$), and least for the PSB sample (increase by $\sim 3\%$). Second, the similarity between the PSB and the SFSMMC sample remains, as both continue to show similar (high) merger abundances compared to the QSMMC sample. Third, the overall increase in the abundance of mergers is especially driven by more mini and minor mergers at $z=0.9$. This behaviour at $z \sim 0.9$ agrees with LEGA-C observations at $z \sim 0.8$, which find that central starbursts are often the result of gas-rich mergers, as evidenced by the high fraction of PSB galaxies with disturbed morphologies and tidal features ($40\%$) \citep{2020MNRAS.497..389D}. Although differences exist between Tables \ref{tab:MergerTable} ($z \sim 0$) and \ref{tab:Nbeg064MergerTable} ($z = 0.9$), the link between recent (in relation to the identification redshift) star formation and the abundance of mergers appears strong. To summarise, although PSBs are quiescent at identification redshift, they are characterised by recent (strong) star formation. The similarity with respect to merger abundances between star-forming and PSB galaxies is likely driven by the ability of mergers to trigger starbursts on short timescales and to provide cold gas on longer timescales to otherwise exhausted galaxies \citep{2010MNRAS.407.2091G, 2012MNRAS.419.3200H}. In short, we find strong evidence that mergers are linked to increased star formation, while their absence is linked to quiescent levels of star formation.
Consequently, the high abundance of mergers appears to be central to the evolution of PSB galaxies, while likely also playing an important role in the subsequent shutdown. \subsection{Cold gas fractions} \label{sub:cgas_frac} \begin{figure*} \includegraphics[width=0.95\columnwidth]{210514plot_merger_wetness_Mar21_mmp_Nbeg136.eps} \includegraphics[width=0.95\columnwidth]{210514plot_merger_wetness_Mar21_nonmmp_Nbeg136.eps} \caption{Distribution of cold gas fraction $f_{cgas} = M_{cold,gas}/M_*$ within three half-mass radii $r < 3 \, R_{1/2}$ for main (left) and satellite progenitors (right). The distributions are further split into major (top), minor (middle), and mini (bottom) mergers and show the behaviour of the PSB (green), QSMMC (red), and SFSMMC (blue) samples. The short solid vertical lines indicate the median values for each population, while the horizontal lines indicate the $1\,\sigma$ region, i.e. the range between the $15.9\%$ and $84.1\%$ percentile. In contrast to all other panels, the panel on the bottom right shows a four times larger $f_{cgas}$ domain. } \label{fig:merger_wetness} \end{figure*} As the timescales of the galaxy-galaxy interactions prior to the detection of a merger event vary widely, depending on the specific geometry of the encounter, we do not individually correlate merger events with the onset of the starburst phase. Rather, to more closely evaluate the properties of the detected mergers and to investigate their differences, we determine the cold gas fractions $f_{\mathrm{cgas}} = M_{\mathrm{cold,gas}}/M_*$ prior to mergers for the $z \sim 0$ PSB, QSMMC, and SFSMMC samples. The cold gas fraction is calculated within three half-mass radii $r < 3 \, R_{1/2}$, where the half-mass radius is defined as the radius of a three dimensional sphere containing half of the total galactic stellar mass. 
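The two quantities just defined translate directly into particle-level code. The following is a simplified sketch, not the actual analysis code: it assumes precomputed galactocentric radii, normalises $f_{\mathrm{cgas}}$ by the total stellar mass, and adopts the cold-gas cut $T < 10^5\,$K used in the spatial maps of Figure \ref{fig:traceGasMassive}.

```python
import numpy as np

def half_mass_radius(r_star, m_star):
    """Radius of the 3D sphere containing half of the total stellar mass,
    from galactocentric radii r_star and particle masses m_star."""
    order = np.argsort(r_star)
    cum = np.cumsum(m_star[order])
    return r_star[order][np.searchsorted(cum, 0.5 * cum[-1])]

def cold_gas_fraction(r_gas, m_gas, temp_gas, r_star, m_star, aperture=3.0):
    """f_cgas = M_cold,gas / M_* with the cold gas summed inside
    aperture * R_1/2 (three half-mass radii by default)."""
    r_half = half_mass_radius(r_star, m_star)
    cold_inside = (r_gas < aperture * r_half) & (temp_gas < 1.0e5)
    return m_gas[cold_inside].sum() / m_star.sum()
```

Note that $f_{\mathrm{cgas}}$ is not bounded by unity: gas-rich, stellar-mass-poor systems can reach values well above one, as seen for the mini-merger satellite progenitors below.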
We choose $R_{1/2}$, as its use is well established within our simulations and it is often considered equal to the observationally attained effective radius $R_e$ \citep{2015ApJ...812...29T, 2017MNRAS.464.3742R, 2018ApJ...854L..28T, 2020MNRAS.493.3778S}. We tested the impact of choosing different apertures ($r/R_{1/2} = [0.5,5]$) on $f_{\mathrm{cgas}}$ and found consistent behaviour across this range. Figure \ref{fig:merger_wetness} shows the distribution of cold gas fractions $f_{\mathrm{cgas}}$ within three half-mass radii $r < 3 \, R_{1/2}$, split up into main (left) and satellite progenitors (right). We further split the sample into major (top), minor (middle), and mini (bottom) mergers. When a merger event is registered, we determine the cold gas fraction prior to the merger event, i.e. we identify the progenitor's peak stellar mass in the $\leq 0.43\,$Gyr before the event is registered and determine the cold gas fraction at this time-step. Each progenitor is then assigned to the respective merger type distribution. This is done separately for the PSB (green), QSMMC (red), and SFSMMC (blue) sample. The solid vertical lines indicate the median values of each population, while the horizontal lines are bounded by the $15.9\%$ and $84.1\%$ percentiles, i.e. the equivalent of the $1\,\sigma$ region of a Gaussian distribution. All panels showing the individual main progenitor distribution (left) in Figure \ref{fig:merger_wetness} display similar distributions for different merger ratios. The reason for this is that, independent of the given merger ratio, the merging main progenitor by definition has the same cold gas fraction. Every time a merger occurs, the population of main progenitors is sampled, resulting in a similar cold gas fraction distribution for all main progenitors, independent of the merger ratio. In contrast, each sample of satellite progenitors (right) shows an evolving behaviour with merger type.
Figure \ref{fig:merger_wetness} (right) shows that the $f_{\mathrm{cgas}}$ distribution for each sample migrates towards higher $f_{\mathrm{cgas}}$ values as the stellar mass ratio between main and satellite progenitor decreases, i.e. when moving towards smaller mergers. In a nutshell, less massive merging satellite progenitors have higher relative abundances of cold gas. We find that the (median) cold gas fraction distributions of the PSB and SFSMMC main progenitors are similar, with $f_{\mathrm{cgas}}(r < 3 \, R_{1/2}) \sim (0.8-1.0)$. The PSB and SFSMMC satellite progenitors show an expected (see above) stronger variance in cold gas fractions between merger types. In contrast to the PSB and SFSMMC galaxies, Figure \ref{fig:merger_wetness} shows that the QSMMC sample consistently has lower $f_{\mathrm{cgas}}$ values: The quiescent main progenitors (left) have $f_{\mathrm{cgas}}(r < 3 \, R_{1/2}) \sim 0$, i.e. compared to their stellar mass almost no cold gas is present in the galaxies. The satellite progenitors (right) also show that satellites which merge via major or minor mergers into the QSMMC sample typically have lower cold gas fractions, compared to the PSB and SFSMMC sample. Taking all this into account, it appears that Figure \ref{fig:merger_wetness} provides some evidence for \textit{galactic conformity}, i.e. the effect whereby properties, e.g. the star formation rate, of satellite galaxies appear correlated to the properties of the central galaxy \citep{2016ApJ...817....9K, 2017MNRAS.472.4769T, 2017MNRAS.472.2504T, 2018MNRAS.477..935T}. In other words, star-forming and PSB main progenitors appear more likely to merge with satellite progenitors which have similarly high cold gas fractions, while quiescent main progenitors appear more likely to merge with satellite progenitors which exhibit more cold gas depletion, i.e. lower $f_{\mathrm{cgas}}$.
Interestingly, the strongest difference between the PSB and SFSMMC sample in Figure \ref{fig:merger_wetness} is found for major mergers of satellite progenitors (top right): Statistically, the median cold gas fraction of SFSMMC major merger satellite progenitors ($f_{\mathrm{cgas}}(r < 3 \, R_{1/2}) = 0.73$) is almost twice as large compared to PSBs ($f_{\mathrm{cgas}}(r < 3 \, R_{1/2}) = 0.40$). This is further evidenced by the different abundances at small cold gas fractions: $69\%$ of QSMMC, $42\%$ of PSB, and $29\%$ of SFSMMC satellite major merger progenitors are found within $f_{\mathrm{cgas}}(r < 3 \, R_{1/2}) \lesssim 0.1$. As $65\%$ of $z \sim 0$ PSB and $58\%$ of SFSMMC galaxies experienced at least one merger within the last $2.5\,$Gyr (Table \ref{tab:MergerTable}), this difference in cold gas supply marks an important distinction between the, otherwise often similar, populations. The implications associated with the difference in cold gas supply during major mergers, especially for the shutdown of star formation, are discussed in Section \ref{sub:disc:mergers}. The bottom right panel of Figure \ref{fig:merger_wetness} displays a four times larger domain. The mini mergers of satellite progenitors show a significantly flatter distribution of $f_{\mathrm{cgas}}$, while simultaneously having significantly higher $f_{\mathrm{cgas}}$ values. This is likely the result of infalling cold gas over-densities being classified as mini mergers or gas-rich satellites merging with their host. Subsequently, the low number of stellar particles compared to the abundant (cold) gas particles, drives high values of $f_{\mathrm{cgas}}$. Due to the low resolution of mini merger satellite progenitors, this panel is less relevant to understanding mergers, while still showing that (cold) gas inflow is relatively similar ($f_{\mathrm{cgas}}(r < 3 \, R_{1/2}) \sim 4.0-5.5$) for all analysed samples, with the highest values found in the QSMMC sample. 
\section{Shutdown of star formation} \label{sec:shutdown} \subsection{Active galactic nucleus and supernova feedback} \label{sub:AGN+SNe} We investigate both the active galactic nucleus (AGN) and the supernova (SNe) feedback energy output as a means to better understand the processes involved in shutting down star formation. Specifically, we want to shed light on processes which are linked to the short timescale ($t \sim 0.4\,$Gyr) redistribution and heating of previously star-forming gas, as discussed in Section \ref{sub:gasEvol}. We choose these mechanisms in particular because they are able to deposit large amounts of energy on short timescales \citep{2005MNRAS.361..776S, 2015ApJ...803L..21V, 2020MNRAS.494..529W}, thereby potentially strongly impacting star formation. As a precaution, we also investigated the typical depletion timescales of cold gas in PSB progenitors during peak star formation. We find these timescales to be significantly longer ($t_{\mathrm{depl}} \sim (2-5)\,$Gyr) than the short shutdown timescale ($t_{\mathrm{shutdown}} \lesssim 0.4\,$Gyr) found throughout our PSB sample. In other words, PSB progenitors do not appear to run out of gas; rather, the reservoir of cold, dense gas is abruptly heated and/or redistributed, leading to a shutdown in star formation, as demonstrated in Figure \ref{fig:traceGasMassive}. \begin{figure*} \includegraphics[width=0.95\columnwidth]{210401PSB_AGN_Energy_Nbeg136to100_all_Mar21.eps} \includegraphics[width=0.95\columnwidth]{210401PSB_SNe_Energy_Nbeg136to100_all_Mar21.eps} \caption{Active galactic nuclei (AGN: left figure) and supernovae (SNe: right figure) power output of the global PSB (green), QSMMC (red), and SFSMMC (blue) samples identified at $z \sim 0$ and evaluated over the past $\sim 3.2\,$Gyr and $\sim 3.5\,$Gyr in units of $10^{51}\,$erg/Myr and $1+10^{51}\,$erg/Myr, respectively.
The different panels (from bottom to top) show increasing, equal bin size stellar mass intervals at $t_{\mathrm{lbt}}=0\,$Gyr: ${M_* \in [[5.00,5.40), [5.40,5.81), [5.81,6.38), [6.38,7.24), [7.24,8.28), \geq 8.28] \cdot 10^{10}\, \mathrm{M_{\odot}}}$. Both figures show the median, as well as the $0.5\,\sigma$ region as error bars for each population. } \label{fig:AGN_SNe_Energy_mass_evol} \end{figure*} \begin{figure} \includegraphics[width=0.95\columnwidth]{210512PSB_AGN_and_SNe_Energy_Nbeg136to100_May21.eps} \caption{Energy deposited by AGN (top) and SNe (bottom) in units of $10^{51}\,$erg/Myr and $1+10^{51}\,$erg/Myr for the six massive PSBs, introduced in Table \ref{tab:6massivePSBs}, over the past $\sim 3.5\,$Gyr. Vertical lines (following the colour scheme) indicate different merger events: mini (1:10 - 1:100) mergers (dash dotted line), minor (1:3 - 1:10) mergers (dashed line), and major (1:1 - 1:3) mergers (solid lines). We note that the temporal resolution differs by a factor of four between the AGN (top) and SNe (bottom) energy output. The horizontal lines in the top panel (right) show an estimation of the spherical binding energy of the massive PSBs in units of $10^{55}\,$erg. } \label{fig:AGN_SNe_Energy} \end{figure} We calculate the AGN power output $P_{\mathrm{AGN}}$ based on the change in BH mass $\Delta M_{\mathrm{BH}}$ between time steps ($\Delta t = 0.43\,$Gyr) \citep{2014MNRAS.442.2304H}: \begin{equation} P_{\mathrm{AGN}} = (e_r e_f \Delta M_{\mathrm{BH}} \cdot c^2) / \Delta t \label{eq:E_AGN} \end{equation} where $e_r=0.2$ is the radiative efficiency and $e_f$ is a free parameter describing the fraction of the radiated energy which is thermally coupled to the surrounding gas, usually set to $e_f=0.15$ (typical for simulations following metal dependent cooling functions \citep{2009MNRAS.398...53B, 2011MNRAS.413.1158B}). As we are especially interested in SNe which release their energy on short timescales, our focus is on short lived, i.e. massive, stars.
Therefore, Type II supernovae (SNeII), which arise at the end of the lifetime of massive stars, provide the dominant source of supernova feedback in our analysis \citep{1976ApJ...207..872C}. Following the star formation model by \cite{2003MNRAS.339..289S}, we expect an average SN energy release per stellar mass of $\epsilon_{\mathrm{SN}} = 4 \cdot 10^{48}\,\mathrm{erg}\,\mathrm{M_{\odot}}^{-1}$. Combining this with the star formation rate $SFR$ at each time step (temporal resolution $\Delta t = 0.11\,$Gyr), we obtain the following estimate of the SNe power output $P_{\mathrm{SNe}}$: \begin{equation} P_{\mathrm{SNe}} = \epsilon_{\mathrm{SN}} \cdot SFR \label{eq:E_SNe} \end{equation} The results of these calculations are shown in Figure \ref{fig:AGN_SNe_Energy_mass_evol} for equal bin size stellar mass intervals: $M_* \in [[5.00,5.40), [5.40,5.81), [5.81,6.38), [6.38,7.24), [7.24,8.28), \\ \geq 8.28] \cdot 10^{10}\, \mathrm{M_{\odot}}$. On the left-hand side each data point displays the median AGN power output calculated from the difference in BH mass between time-steps, as indicated in Equation \ref{eq:E_AGN}. Following Equation \ref{eq:E_SNe}, data points in the right figure display the SNe power output estimate based on the current star formation rate ($SFR$). When the median SFR is zero, which is the case for the entire QSMMC sample, the SNe power output is zero too. Both sides of Figure \ref{fig:AGN_SNe_Energy_mass_evol} show the respective median values, as well as the $0.5\,\sigma$ region as error bars (additionally shaded on the right). The different temporal resolution between the two sides is the result of using the BH particle data on the left, which due to storage constraints is only saved every $0.43\,$Gyr, and using \texttt{SUBFIND} data on the right, which is available every $0.11\,$Gyr (see Section \ref{sub:selection}).
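Equations \ref{eq:E_AGN} and \ref{eq:E_SNe} translate directly into code. The following sketch uses rounded cgs constants and assumes $\Delta M_{\mathrm{BH}}$ in solar masses and the SFR in $\mathrm{M_{\odot}}$/yr, returning both power outputs in erg/Myr.

```python
C_LIGHT = 2.998e10   # speed of light [cm/s]
M_SUN = 1.989e33     # solar mass [g]

def p_agn(delta_m_bh, dt_gyr=0.43, e_r=0.2, e_f=0.15):
    """AGN power output, Eq. (E_AGN): P_AGN = e_r * e_f * dM_BH * c^2 / dt,
    for a BH mass growth delta_m_bh [Msun] over one snapshot interval."""
    energy_erg = e_r * e_f * delta_m_bh * M_SUN * C_LIGHT**2
    return energy_erg / (dt_gyr * 1.0e3)  # Gyr -> Myr

def p_sne(sfr, eps_sn=4.0e48):
    """SNe power output, Eq. (E_SNe): P_SNe = eps_SN * SFR, with
    eps_SN = 4e48 erg per solar mass of formed stars and SFR in Msun/yr."""
    return eps_sn * sfr * 1.0e6  # erg/yr -> erg/Myr
```

For instance, a BH gaining $10^6\,\mathrm{M_{\odot}}$ in one $0.43\,$Gyr step already delivers $\sim 10^{56}\,$erg/Myr, several times the $2 \cdot 10^{55}\,$erg/Myr produced by a $5\,\mathrm{M_{\odot}}$/yr starburst via SNe, in line with the comparison below.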
In the stellar mass interval $M_* \in [5.00,5.40) \cdot 10^{10}\, \mathrm{M_{\odot}}$ (bottom panel), which is characterised by the weakest AGN power output (left), the AGN still strongly outweighs the SNe power output (right): We find the maximum median SNe power output for PSB galaxies to be $P_{\mathrm{SNe,PSB}} \leq 2 \cdot 10^{55}\,$erg/Myr. In contrast, the maximum median AGN power output is $P_{\mathrm{AGN,PSB}} \geq 10^{56}\,$erg/Myr for the same stellar mass selection. In other words, Figure \ref{fig:AGN_SNe_Energy_mass_evol} shows that the AGN outweighs the SNe power output by half an order of magnitude, especially at recent look-back-times. Figure \ref{fig:AGN_SNe_Energy_mass_evol} (left) shows negligible differences between PSB and SF galaxies at lower stellar masses: Both samples show a recent increase in AGN feedback, which is significantly larger than that of the quenched sample, especially towards more recent look-back-times. However, with increasing stellar mass the difference between PSB and SF galaxies increases. Specifically, in the highest stellar mass interval, i.e. $M_* \geq 8.28 \cdot 10^{10}\, \mathrm{M_{\odot}}$ (top panel), the difference at $t_{\mathrm{lbt}} = 0\,$Gyr is about half an order of magnitude between PSBs ($P_{\mathrm{AGN,PSB}} \sim 10^{57}\,$erg/Myr) and SF ($P_{\mathrm{AGN,SF}} \sim 2 \cdot 10^{56}\,$erg/Myr) galaxies. In contrast to the recent elevation in AGN feedback found in PSB and to a lesser degree in SF galaxies (depending on the stellar mass interval), AGN feedback of quiescent galaxies shows no meaningful temporal evolution and only a weak stellar mass evolution ($P_{\mathrm{AGN,Q}} \sim 10^{55}\,$erg/Myr in the highest stellar mass interval). In Figure \ref{fig:AGN_SNe_Energy_mass_evol} (right) the PSB and SF galaxies show similar median SNe feedback.
However, even at $M_* \in [5.00,5.40) \cdot 10^{10}\, \mathrm{M_{\odot}}$ (bottom panel), where PSB and SF galaxies show the most similarities, we see a large spread in SNe feedback in the SF sample, while PSBs show a smaller spread in the distribution of SNe feedback. Independent of stellar mass, this is especially the case at recent look-back-times, $t_{\mathrm{lbt}} \sim [0.1,1]\,$Gyr: During this period PSBs are typically experiencing their starburst phase. As a result, the SFR is elevated throughout the entire sample, which due to its linear relation to the SNe feedback (see Equation \ref{eq:E_SNe}) results in a tighter and slightly elevated distribution compared to SF galaxies, as evidenced by smaller error bars. Meanwhile, the quiescent galaxy sample is continuously characterised by a lack of SNe feedback, as no meaningful star formation occurs in the sample during the evaluated time span. As dictated by our selection criteria, PSBs show a strong decrease in SNe feedback energy at $t_{\mathrm{lbt}} \sim 0\,$Gyr. In addition to our statistical analysis (Figure \ref{fig:AGN_SNe_Energy_mass_evol}), in Figure \ref{fig:AGN_SNe_Energy}, we consider the individual AGN (top) and SNe (bottom) feedback of the six massive PSBs, described in Table \ref{tab:6massivePSBs}. We have added vertical lines indicating specific merger events colour coded to match the associated galaxy: When evaluating the last $3.5\,$Gyr, we find that the six massive PSBs experienced 16 merger events, compared to 10 in the associated SFSMMC, and 2 in the QSMMC sample. As previously established in Sections \ref{sub:gasEvol} and \ref{sub:mergerStat}, this further highlights the significance of mergers for the evolution of (massive) PSBs. Similarly to the comparison between AGN and SNe feedback shown in Figure \ref{fig:AGN_SNe_Energy_mass_evol}, Figure \ref{fig:AGN_SNe_Energy} also shows that the AGN feedback significantly outweighs the SNe feedback, especially at recent look-back-times. 
Specifically, within the last $t_{\mathrm{lbt}} \leq 0.5\,$Gyr all six PSBs have an AGN feedback ($P_{\mathrm{AGN,PSB}} \gtrsim 10^{57}\,$erg/Myr) which outweighs the SNe feedback ($P_{\mathrm{SNe,PSB}} \lesssim 10^{56}\,$erg/Myr) by more than an order of magnitude. Furthermore, Figure \ref{fig:AGN_SNe_Energy} shows that most of the mergers (vertical lines) in the PSB sample occur within the last $\sim 1.5\,$Gyr, i.e. during the same time in which the AGN power output increases by up to $\sim 2$ orders of magnitude. As a rough comparison, we calculate an estimation of the spherical binding energy ($E_{\mathrm{bind}} = 3GM^2/5R$) of the massive PSBs using the $M_{\mathrm{200,crit}}$ halo mass and $R_{\mathrm{200}}$ radius as displayed in Table \ref{tab:6massivePSBs} for $M$ and $R$, respectively. The resulting estimation is shown as horizontal lines in the top panel (right) of Figure \ref{fig:AGN_SNe_Energy}. To compare with the power output, the horizontal binding energy lines use a different scale [$10^{55}\,$erg], as indicated by the legend. Five out of the six massive PSBs have binding energies with $E_{\mathrm{bind}} \leq 10^{61}\,$erg and the PSB with the most massive halo (shown in Figure \ref{fig:traceGasMassive}) has a binding energy of $E_{\mathrm{bind}} = 1.278 \cdot 10^{61}\,$erg. Most binding energies are found within an order of magnitude of the AGN energy released within the last time step $t_{\mathrm{lbt}} \lesssim 0.43\,$Gyr, which further highlights the strong impact the AGN has on (massive) PSBs. Furthermore, we note that the extensive amount of power deposited by the AGN ($P_{\mathrm{AGN,PSB}} \gtrsim 10^{57}\,$erg/Myr) during $t_{\mathrm{lbt}} \lesssim 0.43\,$Gyr is correlated with the gas temperature increase, gas density decrease, and general redistribution of gas seen in Figure \ref{fig:traceGasMassive} at $t_{\mathrm{lbt}}=0\,$Gyr. 
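As a cross-check of the energy scales just discussed, the spherical binding energy estimate $E_{\mathrm{bind}} = 3GM^2/5R$ is easily evaluated; the constants are rounded cgs values, and the example halo ($M_{200} = 10^{13}\,\mathrm{M_{\odot}}$, $R_{200} = 300\,$kpc) is illustrative rather than taken from Table \ref{tab:6massivePSBs}.

```python
G_CGS = 6.674e-8   # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33   # solar mass [g]
KPC = 3.0857e21    # kiloparsec [cm]

def binding_energy(m200_msun, r200_kpc):
    """Spherical binding energy estimate E_bind = 3 G M^2 / (5 R) in erg,
    using M_200,crit [Msun] and R_200 [kpc] for M and R."""
    m = m200_msun * M_SUN
    r = r200_kpc * KPC
    return 3.0 * G_CGS * m**2 / (5.0 * r)
```

A halo of this illustrative size yields $E_{\mathrm{bind}} \sim 1.7 \cdot 10^{61}\,$erg, the same order as the binding energies quoted above, and hence within roughly an order of magnitude of the AGN energy released in a single $0.43\,$Gyr time step.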
Thus, we find strong evidence that the AGN is connected to, and probably responsible for, the shutdown of star formation in (massive) PSBs. \subsection{Black hole growth statistics} \label{sub:BHgrowthStat} To complement the analysis in Section \ref{sub:AGN+SNe}, we additionally quantify the black hole (BH) growth for our different samples. We calculate both the relative and absolute BH growth: Indeed, only $7.8\%$ of QSMMC galaxies, compared to $60.2\%$ and $62.7\%$ of the SFSMMC and PSB galaxies, at least double their BH mass over the last $2.5\,$Gyr. In absolute terms, $80.1\%$ of PSB and $73.7\%$ of the SFSMMC galaxies experience a significant mass growth of $\Delta M_{\mathrm{BH}} \geq 10^7\,\mathrm{M_{\odot}}$, while this is only the case for $18.7\%$ of the QSMMC galaxies. \begin{figure*} \includegraphics[width=0.95\columnwidth]{210512PSB_BH_growth_Mstar_Mgas_mergers_Nbeg136to100_qmatch1_Apr21.eps} \includegraphics[width=0.95\columnwidth]{210512PSB_Mstar_growth_Mstar_Mgas_mergers_Nbeg136to100_qmatch1_Apr21.eps} \caption{Based on Equation \ref{eq:gammaBH}, the left figure shows the black hole mass growth $\gamma_{\mathrm{BH}}$ over a period of $2.5\,$Gyr, as a function of stellar mass for the PSB (coloured points) and the QSMMC sample (grey density). The right figure shows the PSB stellar mass growth $\gamma_{M_*}$, following the same prescription as Equation \ref{eq:gammaBH}, but using stellar mass rather than BH mass. On each side, the colour bar displays the accreted gas mass onto the tracked galaxy due to mergers within the evaluated time-span. The symbols (cross, square, triangle) encode the initial BH mass of each PSB galaxy at $t_{\mathrm{lbt}} = 2.5\,$Gyr: $M_{\mathrm{BH,PSB}}[t_{\mathrm{lbt}} = 2.5\,\mathrm{Gyr}]/\mathrm{M_{\odot}} \in [[4.6 \cdot 10^5, 1.9 \cdot 10^6), [1.9 \cdot 10^6, 5.4 \cdot 10^7), [5.4 \cdot 10^7, 9.9 \cdot 10^8]]$.
The histograms on the right-hand side show the distribution of the PSB (green), QSMMC (black), and SFSMMC (pink) samples along $\gamma_{\mathrm{BH}}$ (left) and $\gamma_{M_*}$ (right), respectively. To avoid clutter due to an increased stellar mass range, one lone high-mass PSB galaxy ($\log_{10}(M_*/\mathrm{M_{\odot}}) = 11.98$) is excluded from the figures. We note that (especially quenched) galaxies exist below the chosen y-range. } \label{fig:gammaMgas} \end{figure*} To better visualise the scales involved in the BH mass growth over a period of $2.5\,$Gyr, we introduce $\gamma_{\mathrm{BH}}$: \begin{equation} \gamma_{\mathrm{BH}} = \log_{10} \left[ \frac{M_{\mathrm{BH}}[t_{\mathrm{lbt}}=0\,\mathrm{Gyr}]-M_{\mathrm{BH}}[t_{\mathrm{lbt}}=2.5\,\mathrm{Gyr}]}{M_{\mathrm{BH}}[t_{\mathrm{lbt}}=2.5\,\mathrm{Gyr}]} \right] \label{eq:gammaBH} \end{equation} Figure \ref{fig:gammaMgas} (left) shows $\gamma_{\mathrm{BH}}$ as a function of stellar mass for QSMMC (grey density) and PSB galaxies, using a colour bar (right-hand side) to encode the gas mass $M_{\mathrm{gas}}$ accreted by the galaxy. As indicated by the legend, different symbols are used to indicate the initial PSB galaxy BH mass at $t_{\mathrm{lbt}} = 2.5\,$Gyr: the least massive BHs are indicated by crosses, i.e. ${M_{\mathrm{BH,PSB}}[t_{\mathrm{lbt}} = 2.5\,\mathrm{Gyr}]/\mathrm{M_{\odot}} \in [4.6 \cdot 10^5, 1.9 \cdot 10^6)}$, intermediate BHs are indicated by squares, i.e. ${M_{\mathrm{BH,PSB}}[t_{\mathrm{lbt}} = 2.5\,\mathrm{Gyr}]/\mathrm{M_{\odot}} \in [1.9 \cdot 10^6, 5.4 \cdot 10^7)}$, and the most massive BHs are indicated by triangles, i.e. ${M_{\mathrm{BH,PSB}}[t_{\mathrm{lbt}} = 2.5\,\mathrm{Gyr}]/\mathrm{M_{\odot}} \in [5.4 \cdot 10^7, 9.9 \cdot 10^8]}$. Figure \ref{fig:gammaMgas} (left) clearly shows that, in contrast to QSMMC galaxies, PSBs are consistently found at higher values of $\gamma_{\mathrm{BH}}$.
This is in line with previously established behaviour (see Section \ref{sub:AGN+SNe}), where PSBs exhibit a significantly stronger AGN feedback, i.e. BH mass growth, than the QSMMC sample. Interestingly, it appears that the PSB population inhabits distinct regions in the stellar mass - $\gamma_{\mathrm{BH}}$ plane. Most noticeably, there appears to be a bimodality, centred around two PSB populations found at $\gamma_{\mathrm{BH}} \sim 2$, i.e. a BH growth by a factor of $\sim 100$, and $\gamma_{\mathrm{BH}} \sim 0$, i.e. a doubling of the BH mass over the last $2.5\,$Gyr. A strong correlation between decreasing ${M_{\mathrm{BH,PSB}}[t_{\mathrm{lbt}} = 2.5\,\mathrm{Gyr}]/\mathrm{M_{\odot}}}$ and increasing $\gamma_{\mathrm{BH}}$ is evident: In Figure \ref{fig:gammaMgas} (left), we see that larger BH growth strongly correlates with smaller $t_{\mathrm{lbt}} = 2.5\,$Gyr BH mass (crosses). Likewise, smaller BH growth correlates with more massive $t_{\mathrm{lbt}} = 2.5\,$Gyr BHs (triangles). In short, the less massive a PSB's BH was at $t_{\mathrm{lbt}} = 2.5\,$Gyr, the more it grows over the following $2.5\,$Gyr. It follows that the $\gamma_{\mathrm{BH}} \sim 2$ population is characterised by recently seeded BHs at $t_{\mathrm{lbt}}=2.5\,$Gyr. Our BHs are represented by collisionless sink particles, which are seeded with an initial mass of $4.6 \cdot 10^5\, \mathrm{M_{\odot}}$ in galaxies with stellar mass $M_* > 2.3 \cdot 10^{10}\, \mathrm{M_{\odot}}$ \citep{2015MNRAS.448.1504S}. The BHs are seeded below the Magorrian relation, i.e. the relation between BH and bulge mass \citep{1998AJ....115.2285M}. In practice, this means that recently seeded BHs experience an initial rapid BH mass growth at fairly constant stellar mass \citep{2015MNRAS.448.1504S}. In contrast, the $\gamma_{\mathrm{BH}} \sim 0$ population is characterised by BHs that have already reached the Magorrian relation at $t_{\mathrm{lbt}}=2.5\,$Gyr.
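A minimal numerical sketch of Equation \ref{eq:gammaBH}, using hypothetical BH masses, reproduces the two regimes of this bimodality:

```python
import numpy as np

def gamma_bh(m_now, m_then):
    """Equation (gammaBH): log10 fractional BH growth between
    t_lbt = 2.5 Gyr (m_then) and t_lbt = 0 Gyr (m_now), masses in Msun."""
    return np.log10((m_now - m_then) / m_then)

# Hypothetical recently seeded BH (seed mass 4.6e5 Msun) growing to ~5e7 Msun:
print(gamma_bh(5.06e7, 4.6e5))  # ~2, i.e. growth by a factor of ~100
# Hypothetical BH on the Magorrian relation that merely doubles its mass:
print(gamma_bh(2.0e8, 1.0e8))   # 0.0, i.e. a doubling
```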
Despite the numerical effects associated with seeding BHs, from a physical point of view, the important distinction between PSB and QSMMC galaxies remains: PSBs are characterised by a significantly stronger recent BH mass growth. Conversely, the histogram in Figure \ref{fig:gammaMgas} (left) shows a strong overlap between PSB (green) and SFSMMC (pink) galaxies. This further highlights the importance of the specific details of the BH growth, i.e. when, on which timescale, and under which circumstances the growth occurs. Figure \ref{fig:gammaMgas} (right) illustrates a number of distinct points: First, the PSB $\gamma_{\mathrm{BH}}$ bimodality found in the left panel is not reproduced when evaluating the stellar mass growth $\gamma_{\mathrm{M_*}}$ (using Equation \ref{eq:gammaBH}, but substituting $M_{\mathrm{BH}}$ with $M_*$). Second, as previously established in Section \ref{sec:environment}, PSB galaxies in Figure \ref{fig:gammaMgas} are overwhelmingly found at lower stellar masses, i.e. close to our mass cut. Third, Figure \ref{fig:gammaMgas} shows a weak correlation between stellar mass and gas accretion, as low stellar mass PSBs are more likely to have low gas accretion (red and orange), while gas accretion appears to increase (blue) towards higher stellar masses. Additionally, we also investigate the correlation between stellar mass and stellar mass (rather than gas) accretion via mergers, finding a stronger correlation than for the gas accretion in Figure \ref{fig:gammaMgas}. This is not surprising, as mergers provide a significant pathway for stellar mass growth for massive galaxies \citep{2016MNRAS.458.2371R, 2017MNRAS.464.1659Q, 2021MNRAS.501.3215O}, while in-situ star formation becomes less important \citep{2010ApJ...709.1018V}. Further investigation shows that the majority ($69\%$) of BHs do not accrete any other BHs, while $21\%$ accrete one other BH within the evaluated time-span. This reveals that BH growth in our simulation typically happens via smooth accretion, i.e.
the process whereby (diffuse) gas is continuously accreted \citep{2009ApJ...694L.158B, 2012A&A...544A..68L}, rather than through the accretion of other BHs. We find a weak correlation between increasing stellar mass and increasing number of accreted BHs. Again, this is not surprising as larger stellar mass galaxies typically grow their stellar mass via mergers \citep{2010ApJ...709.1018V, 2017MNRAS.464.1659Q}. As a result, more massive galaxies are more likely to merge with satellites which already host a seeded BH, increasing the likelihood of the main BH accreting a satellite BH. This weak correlation between stellar mass and accreted BHs agrees with our expectations of a hierarchical growth model in a $\Lambda$CDM universe in which large halos are formed late via the coalescence of smaller ones \citep{1997ApJ...490..493N, 2000MNRAS.319..168C, 2006MNRAS.370..645B}. A closer look at the histogram (right) reveals both a continued similarity between PSB (green) and SFSMMC (pink) galaxies, and a strong difference to QSMMC (black) galaxies: Compared to PSB and SFSMMC galaxies, far fewer QSMMC galaxies experience a non-negligible stellar mass growth over the considered time-span, $\Delta t \sim 2.5\,$Gyr. As a result, Figure \ref{fig:gammaMgas} (right) is underpopulated with QSMMC galaxies. This contrast between PSB and SFSMMC on one side, and QSMMC galaxies on the other, further highlights the statistically rich merger history of PSB and SFSMMC galaxies, which are overwhelmingly located around $\gamma_{\mathrm{M_*}} \sim 0$, i.e. experience a doubling in stellar mass within the past $2.5\,$Gyr. In short, both the merger history (Table \ref{tab:MergerTable}) and BH growth (Figure \ref{fig:gammaMgas} left) of PSB and SFSMMC galaxies show strong similarities, even though PSBs are classified as quiescent at $t_{\mathrm{lbt}}=0\,$Gyr.
This shows that, when no further stellar mass selection is chosen (as is done in the left panel of Figure \ref{fig:AGN_SNe_Energy_mass_evol}) and the major merger progenitor cold gas content is not taken into consideration (see Figure \ref{fig:merger_wetness} top right), PSBs essentially behave like star-forming galaxies until their recent shutdown in star formation. \begin{table*} \begin{center} \begin{tabular}{| c | c | c | c | c | c | c | c | c | c |} \hline & \multicolumn{3}{|c|}{$M_{\mathrm{200,crit}}/\mathrm{M_{\odot}} < 10^{13} [\%]$} & \multicolumn{3}{|c|}{$10^{13} \leq M_{\mathrm{200,crit}}/\mathrm{M_{\odot}} < 10^{14} [\%]$} & \multicolumn{3}{|c|}{$M_{\mathrm{200,crit}}/\mathrm{M_{\odot}} \geq 10^{14} [\%]$} \\ \hline Selection & PSBs & QSMMC & SFSMMC & PSBs & QSMMC & SFSMMC & PSBs & QSMMC & SFSMMC \\ \hline $\gamma_{\mathrm{BH}} \geq 1.0$ & 95.6 & 91.7 & 87.5 & 3.8 & 8.3 & 10.8 & 0.6 & 0 & 1.7 \\ \hline $1.0 > \gamma_{\mathrm{BH}} \geq -2.0$ & 85.2 & 82.8 & 86.2 & 12.1 & 12.7 & 10.8 & 2.7 & 4.5 & 3.1 \\ \hline $-2.0 > \gamma_{\mathrm{BH}} > -3.8$ & 100 & 84.7 & 65.4 & 0 & 12.4 & 26.9 & 0 & 2.9 & 7.7 \\ \hline $\gamma_{\mathrm{BH}} \leq -3.8$ & - & 8.9 & 0 & - & 28.9 & 100 & - & 62.2 & 0 \\ \hline \hline $3 > \gamma_{\mathrm{BH}} > -6$ & 89.4 & 79.1 & 85.0 & 8.7 & 13.6 & 12.0 & 1.9 & 7.3 & 2.9 \\ \hline \end{tabular} \end{center} \caption{Different subdivisions of $\gamma_{\mathrm{BH}}$ (see Equation \ref{eq:gammaBH}), partitioned based on the horizontal lines in Figure \ref{fig:gammaMgas} as a function of different halo masses for the PSB (424), QSMMC (641), and SFSMMC (411) samples.} \label{tab:BHgrowthTable} \end{table*} Table \ref{tab:BHgrowthTable} displays the BH growth $\gamma_{\mathrm{BH}}$ for the PSB, QSMMC, and SFSMMC samples as a function of halo mass, i.e. local environment. The horizontal lines in Figure \ref{fig:gammaMgas} indicate the different $\gamma_{\mathrm{BH}}$ subdivisions of Table \ref{tab:BHgrowthTable}. As PSB and SFSMMC galaxies typically experience a more rapid evolution, i.e.
their BHs have been more recently seeded, the sample size of galaxies evaluated over $2.5\,$Gyr is smaller for PSBs ($424$) and SFSMMC ($411$), compared to the QSMMC sample ($641$). The last row of Table \ref{tab:BHgrowthTable} shows that $89.4\%$ of all PSB, $79.1\%$ of QSMMC, and $85.0\%$ of SFSMMC galaxies are found within a field environment ($M_{\mathrm{200,crit}}/\mathrm{M_{\odot}} < 10^{13}$) at $t_{\mathrm{lbt}}=0\,$Gyr. In contrast to $7.3\%$ of QSMMC galaxies, only $1.9\%$ of PSB and $2.9\%$ of SFSMMC galaxies are found in clusters ($M_{\mathrm{200,crit}}/\mathrm{M_{\odot}} > 10^{14}$). This trend reflects the results obtained in Section \ref{sub:FoFNsub}, i.e. that PSBs at $t_{\mathrm{lbt}}=0\,$Gyr are overwhelmingly found in halos with few satellites. The QSMMC sample is the only one with a non-negligible population in the $\gamma_{\mathrm{BH}} \leq -3.8$ regime (see Figure \ref{fig:gammaMgas} left). Moreover, galaxies found at these low $\gamma_{\mathrm{BH}}$ values, i.e. BHs with stagnated growth, are more likely to be found in clusters ($62.2\%$) and groups ($28.9\%$). As galaxy clusters are characterised by an abundance of hot gas, and the high relative velocities of cluster galaxies inhibit galaxy mergers, satellite galaxies have very limited opportunities to replenish their (cold) gas reservoirs. Consequently, cluster galaxies have a lower likelihood of gas inflow reaching the galactic centre, resulting in low BH growth and fewer PSBs. \section{Post-starburst galaxies in galaxy clusters} \label{sec:clusters} As observations suggest that the evolution of PSBs differs considerably with environment, we now focus on galaxy clusters \citep{1999ApJ...518..576P, 2005MNRAS.357..937G, 2009MNRAS.395..144W, 2017MNRAS.472..419L, 2019MNRAS.482..881P}. In particular, we want to understand how the environment, specifically galaxy clusters, influences PSB galaxy evolution.
To increase our sample size, we lower the mass threshold in this section to include galaxies with at least 100 stellar particles, i.e. $M_* \geq 4.97 \cdot 10^{9} \, \mathrm{M_{\odot}}$. This does not include Table \ref{tab:MuzzinTable} and Figure \ref{fig:AGN_SNe_Energy_mass_evol_cluster}, where the stellar mass threshold ($M_* \geq 4.97 \cdot 10^{10} \, \mathrm{M_{\odot}}$) is kept the same to allow direct comparisons with Table \ref{tab:Nbeg064MergerTable} and Figure \ref{fig:AGN_SNe_Energy_mass_evol}. \subsection{Galaxy cluster stellar mass function comparison} \label{sub:soco} \begin{figure} \includegraphics[width=\columnwidth]{210514soco_SMF_box2_cyl_len_5_May21} \caption{Stellar mass functions comparing Magneticum Box2 satellite galaxies to \protect\cite{2018MNRAS.476.1242S} at $z = 0.7$ in the group and cluster environments. Following \protect\cite{2018MNRAS.476.1242S}, we select groups and clusters with a member range of $20 \leq N \leq 135$ and stellar mass within radius $R \leq 1 \, \mathrm{Mpc}$ of $10^{11.29} \leq M_{*}/\mathrm{M_{\odot}} \leq 10^{12.45}$. Centred on these groups and clusters, we construct cylinders with height $H_{\mathrm{cyl}} = 5 \, \mathrm{Mpc}$ and evaluate all galaxies contained within the cylindrical volume. The black solid line indicates the fit to the stellar mass functions by \protect\cite{2018MNRAS.476.1242S}, while the coloured triangles represent the Magneticum results. The black solid fit line extends to the $90\%$ mass completion limit. The different panels show the stellar mass functions of the star-forming (SF: top), the quiescent (Q: middle) and the post-starburst (PSB: bottom) populations. 
The Magneticum results are normalised by a factor of $\xi = 1/300$ to fit the arbitrarily normalised cluster observations (see \protect\cite{2018MNRAS.476.1242S}).} \label{fig:SMF_soco} \end{figure} We extend our study of the global stellar mass functions shown in Figure \ref{fig:SMF_grid} by considering the high density environment and comparing to a catalogue of galaxy cluster candidates detected in the Ultra-Deep-Survey (UDS) \citep{2018MNRAS.476.1242S}. \cite{2018MNRAS.476.1242S} study the environment dependent galaxy evolution in the redshift range $0.5 < z < 1.0$ using the UDS. They identify $37$ clusters, $11$ of which contain more than $45$ members. This results in a sample of $2210$ galaxies, which provide the basis for the stellar mass function calculation \citep{2018MNRAS.476.1242S}. To compare with the observations, we follow a similar, yet not identical, prescription: Due to redshift uncertainties, \cite{2018MNRAS.476.1242S} sample the volume of cylinders centred on clusters with height $H_{\mathrm{cyl}} = 250 \, \mathrm{Mpc}$. Thereafter, they remove the contaminants by statistically subtracting the field galaxies in each cylinder \citep{2018MNRAS.476.1242S}. To avoid introducing unnecessary statistical contamination, we instead consider smaller cylinders with height $H_{\mathrm{cyl}} = 5 \, \mathrm{Mpc}$. In both cases, the cylinder has radius $R_{\mathrm{sph}} = 1 \, \mathrm{Mpc}$, and the stellar mass and the number of satellites $N$ are calculated inside the cylinder. Following \cite{2018MNRAS.476.1242S}, we select only those clusters with a member range of $20 \leq N \leq 135$ and stellar mass within $1 \, \mathrm{Mpc}$ of $10^{11.29} \leq M_{*}/\mathrm{M_{\odot}} \leq 10^{12.45}$. Subsequently, each cluster is considered along three random yet linearly independent spatial axes, increasing our sampling.
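The cylindrical member selection described above can be sketched as follows; positions are assumed to be in Mpc, and the function name and array layout are ours, not part of the Magneticum pipeline:

```python
import numpy as np

def cylinder_members(pos, centre, axis, radius=1.0, height=5.0):
    """Boolean mask of galaxies inside a cylinder of the given radius [Mpc]
    and height [Mpc], centred on `centre` and oriented along unit vector `axis`.
    `pos` is an (N, 3) array of galaxy positions."""
    d = pos - centre
    along = d @ axis                                  # signed distance along the axis
    perp = np.linalg.norm(d - np.outer(along, axis), axis=1)
    return (np.abs(along) <= height / 2.0) & (perp <= radius)

# toy example: cylinder along the z-axis centred on the origin
pos = np.array([[0.0, 0.0, 0.0],   # inside
                [0.0, 0.0, 3.0],   # outside: beyond half-height 2.5 Mpc
                [0.5, 0.0, 1.0],   # inside
                [2.0, 0.0, 0.0]])  # outside: beyond radius 1 Mpc
mask = cylinder_members(pos, np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(mask)  # [ True False  True False]
```

Repeating this along three linearly independent axes per cluster then yields the threefold sampling used above.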
A total of $8406$ Magneticum clusters fulfil the above criteria, with a total of $182213$ member galaxies with stellar mass $M_* \geq 4.97 \cdot 10^{9} \, \mathrm{M_{\odot}}$, of which $43084$ are star-forming, $139129$ are quiescent, and $7704$ are identified as PSBs. The cluster and galaxy counts are the total values across all three spatial axes, i.e. are up to a factor of $\sim 3$ larger than the uniquely identified objects within Box2. Figure \ref{fig:SMF_soco} shows the $z = 0.7$ galaxy cluster stellar mass function of star-forming (blue), quenched (red) and PSB (green) galaxies. Similarly to the total sample shown in Figure \ref{fig:SMF_grid} at $z=0.7$, the cluster PSB stellar mass function has two bumps at $\mathrm{log}(M_{*}/\mathrm{M_{\odot}}) \sim 9.7$ and $\mathrm{log}(M_{*}/\mathrm{M_{\odot}}) \sim 10.4$ and is dominated by the low stellar mass end. However, the amplitude of the PSB bumps differs between the total and cluster sample. Furthermore, we find fewer star-forming and thus more quiescent galaxies in the cluster environment at low stellar mass compared to the total sample (Figure \ref{fig:SMF_grid}). The observations in Figure \ref{fig:SMF_soco} are fitted by Schechter functions (star-forming and quiescent satellite galaxies) and a double Schechter function (PSBs) \citep{2008MNRAS.388..945B, 2010A&A...523A..13P}. As the stellar mass functions discussed in \cite{2018MNRAS.476.1242S} are arbitrarily normalised, the Magneticum results were also normalised to fit the observational data. Specifically, the Magneticum results (triangles) were multiplied by a factor of $\xi = 1/300$ to vertically adjust them to the observations. As shown in Figure \ref{fig:SMF_soco}, the shapes of the cluster galaxy stellar mass functions from Magneticum are in very good agreement with observations (see also \cite{2015MNRAS.448.1504S}).
There are only two discrepancies: First, the star-forming distribution, similar to Figure \ref{fig:SMF_grid}, shows poor agreement for masses between $10.5 < \mathrm{log}(M_{*}/\mathrm{M_{\odot}}) < 11.2$. As discussed in Section \ref{sub:SMF}, this is due to the onset of the AGN feedback. Second, we find evidence for rare massive cluster PSBs which are not found in the significantly smaller observational sample. Further evidence for good agreement is provided by the replication of the PSB plateau in the mass range $10.0 < \mathrm{log}(M_{*}/\mathrm{M_{\odot}}) < 10.5$, indicating a preferential intermediate mass range. \subsection{Line-of-sight velocity: Observation and resolution comparison} \label{sub:vlosObsComp} \begin{figure} \includegraphics[width=\columnwidth]{210609vlos_PhaseSpace_Muzzin_masscut5e9comb_boxes_Nbeg064_paperversion} \caption{Post-starburst (PSB) galaxy normalised line-of-sight (LOS), $v_{\mathrm{los}}/v_{\mathrm{200,crit}}$, phase space comparison between Box4 at high resolution (hr, blue crosses), Box4 at ultra-high resolution (uhr, black squares), \protect\cite{2014ApJ...796...65M} (red triangles), and Magneticum Box2 and Box2b (green) at $z = 0.9$ as a function of the cluster-centric 2D projected radius, $R/r_{\mathrm{200,crit}}$. Following the criteria outlined in \protect\cite{2014ApJ...796...65M}, satellite galaxies, hosted by clusters in the mass range $1 \cdot 10^{14} < M_{\mathrm{200,crit}}/\mathrm{M_{\odot}} < 20 \cdot 10^{14}$, are shown. Satellite galaxies are selected above stellar mass $M_* \geq 4.97 \cdot 10^{9} \, \mathrm{M_{\odot}}$. The contour lines highlight the regions where the density is $50\%$ and $75\%$ of the maximum density. The histograms depict the relative abundance of each population projected onto the respective axes. The enveloping dashed black lines correspond to $|v_{\mathrm{los}}/v_{\mathrm{200,crit}}| \sim 1.6|({R/r_{\mathrm{200,crit}}})^{-1/2}|$ and are used to exclude interlopers.
} \label{fig:Muzzin} \end{figure} In Figure \ref{fig:Muzzin}, we show the normalised line-of-sight phase space velocity $v_{\mathrm{los}}/v_{\mathrm{200,crit}}$ of PSBs at $z = 0.9$ as a function of the cluster-centric 2D projected radius, $R/r_{\mathrm{200,crit}}$, for both Box2 and Box2b (green density). Box2 and Box2b results are compared to our high (blue crosses) and ultra-high resolution Box4 (black squares), as well as to observations by \cite{2014ApJ...796...65M} (red triangles). Galaxies with the lower stellar mass threshold of $M_* \geq 4.97 \cdot 10^{9} \, \mathrm{M_{\odot}}$ are shown, previously used in \cite{2019MNRAS.488.5370L}. We compared this lower stellar mass threshold with our standard stellar mass threshold ($M_* \geq 4.97 \cdot 10^{10} \, \mathrm{M_{\odot}}$) and established the convergence of our results. We choose the lower stellar mass cut, so as to increase our phase space sampling (relevant especially to Figure \ref{fig:PSvradGrid}). Magneticum PSBs are shown as density maps and are scaled to the maximum density of the PSB galaxy population. The dashed black lines enveloping the density map in Figure \ref{fig:Muzzin} are based on the virial theorem and are introduced to provide a relationship between the velocity and the radius via ${|v_{\mathrm{los}}/v_{\mathrm{200,crit}}| \sim |({R/r_{\mathrm{200,crit}}})^{-1/2}|}$. A proportionality factor of $1.6$ is introduced to scale the enveloping dashed black lines. The factor is motivated by the strongest outlier of the observational data \citep{2014ApJ...796...65M} and is used to filter out interlopers, i.e. galaxies that are only attributed to a cluster due to the line-of-sight projection. \cite{2014ApJ...796...65M} consider data based on the Gemini Cluster Astrophysics Spectroscopic Survey (GCLASS). 
They investigate the line-of-sight phase space of $424$ cluster galaxies at $z \sim 1$, of which $24$ are identified as PSBs according to an absence of [O\,II] emission while also hosting a young stellar population ($D_n(4000) < 1.45$) \citep{2017ApJ...841...32Z}. To sample a similar volume as the observations, a cylinder of height $179\,\mathrm{Mpc}$ was used. The cylinder height was calculated by evaluating the scatter around the mean observed redshift, $\sigma_{\mathrm{z}}$, resulting in $\sigma_{\mathrm{z}} = 0.036$ \citep{2013A&A...557A..15V}. The projections were considered along three linearly independent spatial axes. We identified $20371$ PSBs in $1239$ clusters. Figure \ref{fig:Muzzin} shows that both the PSBs identified by \cite{2014ApJ...796...65M} and by Magneticum exhibit a strong preference for the inner region of the clusters. The inner over-density of PSBs found between $R \sim (0.15-0.5) \, r_{\mathrm{200,crit}}$ matches observations well. Of the $20371$ identified PSBs, $14790$, i.e. $73\%$, are found inside $r_{\mathrm{200,crit}}$. A subset of $9263$ satellite galaxies, i.e. $45\%$, are even found inside $R < 0.5 \, r_{\mathrm{200,crit}}$. The normalised distributions projected onto each axis further demonstrate the close agreement between observations and our simulation. The PSB galaxy preference for a distinct region of phase space, namely $R \sim (0.15-0.5) \, r_{\mathrm{200,crit}}$, suggests a common cause: Most likely, an environmental quenching mechanism shuts down previously (strongly) star-forming galaxies on a timescale that brings them about halfway into the cluster before star formation ceases. This agrees with previous phase space results concerning the fast quenching of star-forming satellite galaxies in clusters \citep{2019MNRAS.488.5370L}.
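The interloper cut based on the enveloping lines of Figure \ref{fig:Muzzin} can be written compactly; the function name is illustrative, and the factor $A = 1.6$ follows the scaling described in the text:

```python
import numpy as np

def is_interloper(v_los_norm, r_proj_norm, amplitude=1.6):
    """Flag galaxies lying outside the virial-theorem envelope
    |v_los/v200| > A * (R/r200)^(-1/2), i.e. likely line-of-sight interlopers.
    Inputs are the normalised LOS velocity and projected radius."""
    return np.abs(v_los_norm) > amplitude * r_proj_norm ** -0.5

# at R/r200 = 0.25 the envelope allows |v_los/v200| up to 1.6 / sqrt(0.25) = 3.2
print(is_interloper(3.0, 0.25))  # False: inside the envelope, kept
print(is_interloper(3.5, 0.25))  # True: outside the envelope, excluded
```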
To test numerical convergence, we compare our results with Box4 at high and ultra-high resolution, the latter having a $\sim 20$ times higher mass resolution than the former. As Box4 is significantly smaller than Box2 or Box2b (see Section \ref{sub:Mag}), both resolution levels only yield $1$ cluster within the mass range $1 \cdot 10^{14} < M_{\mathrm{200,crit}}/\mathrm{M_{\odot}} < 20 \cdot 10^{14}$. The clusters host $11$ unique ultra-high and $6$ unique high resolution PSBs, respectively. When considering three linearly independent projections, this yields a sample of $26$ ultra-high and $18$ high resolution PSBs. Figure \ref{fig:Muzzin} shows agreement between high and ultra-high resolution PSBs, as well as between different boxes. Similarly to Box2 and Box2b, Box4 PSBs (both resolution levels) are also predominantly found within the inner cluster region. Although the 2D projected radial scatter is stronger for the larger ultra-high resolution PSB sample (black squares) compared to the smaller high resolution PSB sample (blue crosses), the difference does not exceed the expected statistical variation. Meanwhile, the normalised line-of-sight phase space velocity distributions of the two resolution levels, different boxes, and observations show consistent agreement. In general, the numerical convergence of our results and the associated implications are discussed in Section \ref{sub:disc:numerics}. \begin{figure} \includegraphics[width=\columnwidth]{200904vrad_PhaseSpace_Muzzin_masscut5e9comb_boxes_Nbeg064_paperversion} \caption{Same as Figure \ref{fig:Muzzin} but showing the normalised radial velocity $v_{\mathrm{rad}}/v_{\mathrm{200,crit}}$ as a function of the 3D radius $r/r_{\mathrm{200,crit}}$.
The enveloping black line is no longer used to filter out galaxies; it remains only to guide the eye.} \label{fig:Muzzin_vrad} \end{figure} \subsection{Radial velocity as a function of cluster mass and redshift} \label{sub:PSvradGrid} To better disentangle the underlying mechanisms potentially involved in triggering the starburst and subsequent shutdown in star formation of PSBs in galaxy cluster environments, we extend our investigation beyond the line-of-sight phase space observational comparison. Using much of the same nomenclature as Figure \ref{fig:Muzzin}, Figure \ref{fig:Muzzin_vrad} shows the PSBs within a 3D sphere instead of projections. Thus, Figure \ref{fig:Muzzin_vrad} no longer shows the PSBs inside the cylindrical volume underlying the projected line-of-sight population, but within a 3D sphere of radius $2\, r_{\mathrm{200,crit}}$; the sample of PSBs is therefore smaller, with only $5185$ PSBs plotted. Of this population, $3401$ PSBs (or $66\%$) are infalling, i.e. $v_{\mathrm{rad}}/v_{\mathrm{200,crit}} < 0$. This is similar to the $69\%$ infalling PSBs found in the line-of-sight population. In addition to PSBs typically being characterised by infall, Figure \ref{fig:Muzzin_vrad} shows an abundance of PSBs in the inner cluster region. Furthermore, it appears that the PSB population, when compared to e.g. the older quiescent cluster population in \cite{2019MNRAS.488.5370L}, is not well mixed within the cluster, clearly indicating a recent infall. \begin{figure*} \includegraphics[width=2\columnwidth]{200827PSB_PS_vrad_grid_sb0_box2} \caption{Box2 PSB galaxy ($M_* \geq 4.97 \cdot 10^{9} \, \mathrm{M_{\odot}}$) normalised radial velocity $v_{\mathrm{rad}}/v_{\mathrm{200,crit}}$ as a function of 3D radius $r/r_{\mathrm{200,crit}}$ for different cluster masses and redshifts. Columns from left to right have the following redshifts: $z=1.18$, $z=0.47$, $z=0.25$, and $z=0.06$.
Rows from top to bottom have the following cluster masses: $M_{\mathrm{200,crit}} > 9 \cdot 10^{14}\, \mathrm{M_{\odot}}$, $M_{\mathrm{200,crit}} = (6-9) \cdot 10^{14}\, \mathrm{M_{\odot}}$, $M_{\mathrm{200,crit}} = (3-6) \cdot 10^{14}\, \mathrm{M_{\odot}}$, and $M_{\mathrm{200,crit}} = (1-3) \cdot 10^{14}\, \mathrm{M_{\odot}}$. The colour bar displays the relative phase space number density normalised to the maximum value of each individual panel. The contour lines correspond to regions showing $20\%$ and $60\%$ of the maximum density in each panel. The horizontal dashed line marks $v_{\mathrm{rad}}/v_{\mathrm{200,crit}} = 0$: PSBs above this line are moving outwards with respect to the cluster centre, while PSBs below this line are moving into the cluster. The empty panels are the result of a lack of clusters in the given redshift and cluster mass range.} \label{fig:PSvradGrid} \end{figure*} We analyse the normalised radial velocity of cluster PSBs as a function of the 3D cluster-centric radius at different cluster masses and redshifts. Figure \ref{fig:PSvradGrid} shows an overview of four different cluster mass ranges at redshifts $0.06 < z < 1.18$. We find that cluster PSBs in all halo mass ranges at redshifts $z \lesssim 0.5$ have predominantly negative radial velocities, i.e. they are in the process of infall. This means that they are either on their first infall into the cluster or are returning to the cluster after they have left it, typically referred to as backsplash galaxies \citep{2011MNRAS.411.2637P}. However, when evaluating cluster PSBs, we find negligible evidence for backsplash orbits; rather, the vast majority of cluster PSBs are experiencing their first infall.
In addition to showing that cluster PSBs are overwhelmingly characterised by infall, Figure \ref{fig:PSvradGrid} also reveals two important trends: First, cluster PSBs become increasingly infall dominated towards higher cluster masses, suggesting a density-dependent environmental quenching mechanism, such as ram-pressure stripping. This agrees with observations, which find that processes linked to the termination of star formation in galaxy clusters are more effective in denser environments \citep{2009ApJ...693..112P, 2012A&A...543A..19R}. For example, ram-pressure stripping is linearly dependent on the intra-cluster-medium (ICM) density, thus higher mass clusters are more efficient in quenching, i.e. the quenching timescale is shorter \citep{1972ApJ...176....1G, 2019MNRAS.488.5370L}. In this case, higher mass clusters show an increased likelihood of cluster PSBs being fully quenched before they pass their pericentre. Second, we find that cluster PSBs become slightly more infall dominated towards lower redshifts, especially visible in the transition from $z \sim 1.2$ to $z \sim 0.5$ in the lowest cluster mass regime in Figure \ref{fig:PSvradGrid}. This is likely driven by the fact that clusters at $z \sim 1.2$, compared to clusters in the same mass range at $z \sim 0.5$, have an increased likelihood of currently undergoing cluster mergers. In other words, clusters at $z \sim 1.2$ are typically not relaxed, while clusters of similar mass at $z \sim 0.5$ have had enough time to at least centrally relax. Therefore, the same mass clusters at $z \sim 1.2$ are on average more disturbed, which in turn implies a less relaxed and less hot ICM. Under these circumstances, the quenching efficiency is inhibited and thus cluster PSBs, on average, are able to penetrate deeper into the galaxy cluster before ram-pressure stripping quenching is efficient.
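The density dependence invoked here can be illustrated with the classic criterion of \cite{1972ApJ...176....1G}; this is a hedged sketch in cgs units, with hypothetical disc surface densities rather than simulation outputs:

```python
import numpy as np

G = 6.674e-8  # gravitational constant [cm^3 g^-1 s^-2]

def ram_pressure(rho_icm, v_rel):
    """Ram pressure [erg cm^-3] felt by a galaxy moving at v_rel [cm/s]
    through an ICM of density rho_icm [g/cm^3]; linear in the ICM density."""
    return rho_icm * v_rel**2

def is_stripped(rho_icm, v_rel, sigma_star, sigma_gas):
    """Gunn & Gott-style criterion: gas is stripped where the ram pressure
    exceeds the gravitational restoring pressure ~ 2 pi G Sigma_star Sigma_gas.
    Surface densities are in g/cm^2."""
    return ram_pressure(rho_icm, v_rel) > 2.0 * np.pi * G * sigma_star * sigma_gas

# hypothetical disc: ~100 Msun/pc^2 in stars, ~10 Msun/pc^2 in gas
sigma_star, sigma_gas = 2.1e-2, 2.1e-3
v = 1.0e8  # 1000 km/s
print(is_stripped(1e-26, v, sigma_star, sigma_gas))  # True: dense ICM strips the gas
print(is_stripped(1e-28, v, sigma_star, sigma_gas))  # False: tenuous ICM does not
```

The linear dependence on `rho_icm` is why denser, more massive clusters quench infalling galaxies on shorter timescales.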
Considering these findings, a picture of cluster PSB galaxy evolution in our simulation emerges, which strongly favours environmental quenching, e.g. ram-pressure stripping, as the shutdown mechanism responsible for cluster PSBs, in contrast to our results for the field PSBs. Specifically, independent of whether an additional starburst is triggered during cluster infall or the PSB progenitors were previously experiencing significant star formation, the cluster environment appears to shut down the star formation during infall. This rapid shutdown increases the likelihood of previously star-forming/star-bursting galaxies being classified as PSBs. Consequently, it appears that PSBs in galaxy clusters share a similar shutdown mechanism, rather than necessarily sharing the same SFR increasing mechanism. \subsection{Cluster merger statistics} \label{sub:clusterMergers} Following the same approach as in Section \ref{sub:mergerStat}, we analyse merger abundances in Table \ref{tab:MuzzinTable} to understand their statistical relevance to the evolution of cluster PSBs. Of the 411 projection-independent PSBs with stellar mass $M_* \geq 4.97 \cdot 10^{10}\, \mathrm{M_{\odot}}$ identified within galaxy clusters in Box2 at $z=0.9$, we successfully trace 410 PSBs to $z=1.7$, i.e. over a time-span of $\sim 2.5\,$Gyr.
\begin{table} \begin{center} \begin{tabular}{| l | c | c | c |} \hline Criterion & PSBs & QSMMC & SFSMMC \\ \hline Analysed trees & 410 & 409 & 405 \\ \hline $\Sigma(N_{\mathrm{mini}})$ & 370 & 197 & 400 \\ \hline $\Sigma(N_{\mathrm{minor}})$ & 290 & 98 & 293 \\ \hline $\Sigma(N_{\mathrm{major}})$ & 216 & 94 & 238 \\ \hline $N_{\geq 1 \mathrm{ mini}}$ & $57.1\%$ & $32.8\%$ & $53.6\%$ \\ \hline $N_{\geq 1 \mathrm{ minor}}$ & $55.6\%$ & $20.3\%$ & $51.4\%$ \\ \hline $N_{\geq 1 \mathrm{ major}}$ & $43.2\%$ & $20.8\%$ & $49.9\%$ \\ \hline $N_{\geq 1 \mathrm{ merger}}$ & $92.9\%$ & $53.1\%$ & $91.9\%$ \\ \end{tabular} \end{center} \caption{Following the nomenclature introduced in Table \ref{tab:MergerTable}, but showing results for cluster galaxies in the redshift range $0.9 < z < 1.7$ (compare to Table \ref{tab:Nbeg064MergerTable} in the same redshift range without an environmental selection). Of the 411 uniquely identified cluster PSBs with stellar mass $M_* \geq 4.97 \cdot 10^{10}\, \mathrm{M_{\odot}}$ in Box2 at $z = 0.9$, 410 progenitors were successfully traced over a time-span of $2.5\,$Gyr. PSB, QSMMC, and SFSMMC galaxies are selected so as to reproduce the cluster criteria outlined in \protect\cite{2014ApJ...796...65M} (see Section \ref{sub:vlosObsComp}).} \label{tab:MuzzinTable} \end{table} When comparing cluster PSBs (Table \ref{tab:MuzzinTable}) with non-environmentally selected PSBs (Table \ref{tab:Nbeg064MergerTable}) in the redshift range $0.9 < z < 1.7$, we find very similar merger abundances for all samples. For example, $92.9\%$ of cluster PSBs and $92.6\%$ of non-environmentally selected PSBs, both at $z=0.9$, have experienced at least one merger event within the last $\sim 2.5\,$Gyr. Additionally, the similarity between the PSB and SFSMMC samples appears independent of the environment surveyed, providing further evidence for the importance of mergers for (recently) star-forming galaxies in our simulation.
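The abundance statistics in Table \ref{tab:MuzzinTable} are simple occupancy counts over the traced merger trees: totals of each merger class and the fraction of galaxies experiencing at least one merger of a given class. A minimal sketch of how such fractions could be computed, using a hypothetical toy sample rather than the simulation data:

```python
def merger_fractions(counts):
    """Given per-galaxy merger counts {class: [n_1, ..., n_N]},
    return the total number of mergers per class, the fraction of
    galaxies with >= 1 merger of each class, and the fraction with
    >= 1 merger of any class."""
    n_gal = len(next(iter(counts.values())))
    totals = {c: sum(v) for c, v in counts.items()}
    frac_ge1 = {c: sum(n >= 1 for n in v) / n_gal for c, v in counts.items()}
    any_merger = sum(
        any(counts[c][i] >= 1 for c in counts) for i in range(n_gal)
    ) / n_gal
    return totals, frac_ge1, any_merger

# Hypothetical toy sample of five traced galaxies (not the paper's data):
toy = {"mini": [1, 0, 2, 0, 1], "minor": [0, 1, 0, 0, 1], "major": [1, 0, 0, 0, 0]}
totals, frac, any_frac = merger_fractions(toy)
```

Applied to the full trees, this yields the rows $\Sigma(N)$ and $N_{\geq 1}$ of Table \ref{tab:MuzzinTable}.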
The only difference we find between Tables \ref{tab:MuzzinTable} and \ref{tab:Nbeg064MergerTable} is a slightly higher abundance of mini ($57.1\%$) and minor ($55.6\%$) mergers and a slightly lower abundance of major mergers ($43.2\%$) in cluster PSBs, compared to mini ($50.7\%$), minor ($50.5\%$), and major ($47.3\%$) mergers of non-environmentally selected PSBs. Beyond these small differences, the abundances found in Table \ref{tab:MuzzinTable} agree with the analysis presented in Section \ref{sub:mergerStat}. \subsection{Active galactic nuclei and supernovae} \label{sub:AGN_SNe_clusters} \begin{figure*} \includegraphics[width=0.95\columnwidth]{210404PSB_AGN_Energy_Nbeg064to032_all_Mar21.eps} \includegraphics[width=0.95\columnwidth]{210404PSB_SNe_Energy_Nbeg064to032_all_Mar21.eps} \caption{Active galactic nuclei (AGN: left figure) and supernovae (SNe: right figure) power output of cluster PSB (green), QSMMC (red), and SFSMMC (blue) samples identified at $z=0.9$ and evaluated over the past $\sim 3.2\,$Gyr and $\sim 3.5\,$Gyr in units of $10^{51}\,$erg/Myr and $1+10^{51}\,$erg/Myr, respectively. The cluster samples shown are based on the results displayed in Table \ref{tab:Nbeg064MergerTable}. The different panels show increasing $z = 0.9$ stellar mass cuts, $M_* > [4.97,6,7,8,9,10] \cdot 10^{10}\, \mathrm{M_{\odot}}$, from the bottom to the top panel. Both figures show the median, as well as the $0.5\,\sigma$ region as error bars, for each population. } \label{fig:AGN_SNe_Energy_mass_evol_cluster} \end{figure*} As observations suggest that ram-pressure stripping may trigger AGN activity \citep{2017Natur.548..304P, 2019MNRAS.487.3102G, 2021IAUS..359..108P}, we investigate the AGN and SNe feedback of cluster PSBs at $z=0.9$. We follow the method presented in Section \ref{sub:AGN+SNe} closely, except that, instead of using the global $z \sim 0$ PSB sample, we use the cluster PSB sample presented in Table \ref{tab:MuzzinTable}.
Following the same nomenclature as Figure \ref{fig:AGN_SNe_Energy_mass_evol}, Figure \ref{fig:AGN_SNe_Energy_mass_evol_cluster} shows the AGN (left) and SNe (right) power output for the PSB (green), SFSMMC (blue), and QSMMC (red) samples. We note that $t_{\mathrm{lbt}}=0\,$Gyr corresponds to the identification redshift $z=0.9$. Furthermore, we note that at $z \sim 1$ the time-steps in our simulation become larger, which leads to the change in the number of data points in Figure \ref{fig:AGN_SNe_Energy_mass_evol_cluster} at $t_{\mathrm{lbt}} \sim 0.5\,$Gyr. As the feedback energy is normalised per unit time, this does not impact our results. Figure \ref{fig:AGN_SNe_Energy_mass_evol_cluster} (left) shows no clear signs of an increase in recent AGN activity. As established in Section \ref{sub:PSvradGrid}, cluster PSBs belong to a population of recently in-fallen galaxies. Consequently, if ram-pressure stripping triggered AGN feedback, we would expect a signal within the last $\sim 1\,$Gyr \citep{2019MNRAS.488.5370L}. However, we find no evidence for enhanced AGN activity. In fact, it appears that the AGN activity of all samples has been (gradually) declining since $t_{\mathrm{lbt}} \sim 2\,$Gyr. Furthermore, there appears to be no strong stellar mass evolution for the PSB and SFSMMC samples. Only the QSMMC sample shows signs of a stellar mass evolution: As the stellar mass increases, the onset of the decline in AGN activity around $t_{\mathrm{lbt}} \sim 2\,$Gyr begins earlier, while the QSMMC AGN power output at $t_{\mathrm{lbt}} \sim 0\,$Gyr increases by a factor $\sim 2$ between the lowest and highest stellar mass selection. In contrast, the SNe power output, i.e. the SFR, shows signs of a small increase in activity starting at $t_{\mathrm{lbt}} \sim 0.5\,$Gyr for both the PSB and SFSMMC samples.
However, as Figure \ref{fig:AGN_SNe_Energy_mass_evol_cluster} (right) shows, at $t_{\mathrm{lbt}} \sim 0.2\,$Gyr the PSB and SFSMMC samples diverge: The PSBs experience a decrease in star formation on a short timescale, reaching quiescent levels of star formation according to our blueness criterion, while the SFSMMC galaxies continue to experience elevated star formation with signs of a small increase. The fact that SFSMMC galaxies are able to sustain star formation despite being located in a high-density environment is likely due to their tangential infall orbits, ideally at large cluster-centric radii \citep{2019MNRAS.488.5370L}. Figure \ref{fig:AGN_SNe_Energy_mass_evol_cluster} (right) also shows no indication of a stellar mass evolution, the only exception being the QSMMC sample, which appears to be characterised by more recent quenching with increasing stellar mass. Generally, integrated over the evaluated time-span, both the AGN and SNe feedback shown in Figure \ref{fig:AGN_SNe_Energy_mass_evol_cluster} are significantly stronger than at $z \sim 0$ in Figure \ref{fig:AGN_SNe_Energy_mass_evol}. As discussed in Section \ref{sub:BHgrowthStat}, recent BH seeding leads to an over-estimation of BH growth. Given the same stellar mass threshold, this becomes more relevant towards higher redshift, as galaxies need to assemble their mass in a shorter time period, i.e. more rapidly. As such, it appears likely that recent BH seeding impacts our $z=0.9$ cluster sample, which likely explains the strong median AGN feedback at high look-back-times. Nonetheless, the fact that the AGN feedback does not increase towards recent look-back-times holds, suggesting that AGN feedback is not relevant to shutting down star formation in cluster PSBs. This is further supported by the similarity in AGN feedback between the PSB and SFSMMC samples: While the former population is quenched at $t_{\mathrm{lbt}}=0\,$Gyr, the latter is not.
Compared to low redshift, we also find a higher median SNe feedback at $z=0.9$. This, however, appears purely physical, as star formation, and thus SNe feedback, was significantly stronger at higher redshift \citep{2004Natur.428..625H}. To conclude, we find that cluster PSBs are shut down via environmental quenching, with no evidence that additional galactic feedback is triggered. \section{Discussion} \label{sec:discussion} \subsection{Environment and redshift evolution} \label{sub:disc:envir} We find that PSBs at low redshift are more frequently found in low halo mass environments and that the PSB-to-quenched fraction increases with redshift (Figure \ref{fig:Q_PSBfrac_grid}). The preference for low halo masses is further strengthened by the fact that $89.4\%$ of $z \sim 0$ PSBs are found in halos with $M_{\mathrm{200,crit}}/\mathrm{M_{\odot}} < 10^{13}$ (Table \ref{tab:BHgrowthTable}). This agrees with DEEP2 and SDSS results, which find low redshift PSBs in relatively under-dense environments \citep{2005MNRAS.357..937G} compared to high redshift PSBs \citep{2009MNRAS.398..735Y}. Similarly to the decline of the PSB-to-quenched fraction with decreasing redshift found in our simulation (Figure \ref{fig:Q_PSBfrac_grid}), observations show that the fraction of PSBs declines from $\sim 5\%$ of the total population at $z \sim 2$ to $\sim 1\%$ by $z \sim 0.5$ \citep{2016MNRAS.463..832W}. We note that MOSFIRE observations at $z \sim 1$ find a higher number of PSBs, relative to star-forming galaxies, in clusters than in groups or the field \citep{2017MNRAS.472..419L}.
A direct comparison is difficult, however, as we determine the abundance of PSBs with respect to quiescent galaxies. Nevertheless, since fewer star-forming than quiescent galaxies are found in high-density environments \citep{1980ApJ...236..351D, 2003MNRAS.346..601G}, and our PSB-to-quenched fraction does not show a strong preference for low halo masses at $z \sim 1$ (Figure \ref{fig:Q_PSBfrac_grid}), agreement seems plausible. Given these points, it appears that PSB galaxy evolution is strongly redshift dependent, favouring decreasing environmental densities towards lower redshifts, supporting the idea that the formation mechanism of PSBs is affected by both redshift and environment. Similarly, we also find a strong evolution of the stellar mass function (Figure \ref{fig:SMF_grid}): The abundance of PSBs above our stellar mass threshold increases significantly with increasing redshift, matching VVDS observations, which find that the stellar mass density ($\log_{10}(M_*/\mathrm{M_{\odot}}) > 9.75$) of strong PSB galaxies is $230$ times higher at $z \sim 0.7$ than at $z \sim 0.07$ \citep{2009MNRAS.395..144W}. In contrast, when comparing the shape of the PSB galaxy stellar mass function to observations at redshifts $0.07<z<1.71$ \citep{2016MNRAS.463..832W, 2018MNRAS.473.1168R}, and the observations with each other, we do not find close agreement. These discrepancies are likely due to the sensitivity of the stellar mass function to the exact selection criteria of PSBs. Interestingly, when comparing the PSB stellar mass function at $z=0.7$ to observations in the group and cluster environment (Figure \ref{fig:SMF_soco}), we find close agreement, including the double Schechter behaviour.
\subsection{The impact of mergers} \label{sub:disc:mergers} Evaluating PSBs in relation to the star-forming main sequence (Figure \ref{fig:MSpanel}) shows that during their starburst phase, which is often correlated with recent mergers (Figure \ref{fig:AGN_SNe_Energy} and Table \ref{tab:MergerTable}), massive PSBs lie significantly above the normalised, redshift evolving main sequence \citep{2014ApJS..214...15S}. Considering our global low redshift PSB sample, of which $89\%$ have experienced a merger in the last $2.5\,$Gyr, we find that during peak star formation PSBs have SFRs a few times higher than on the main sequence, with a wide spread in their distribution. This behaviour matches observations by \cite{2019A&A...631A..51P}, which, on the one hand, find that mergers have little effect on the SFR for the majority of merging galaxies, but, on the other hand, also find that an increasing merger fraction correlates with the distance above the main sequence, i.e. mergers may at times induce starbursts. Furthermore, simulations by \cite{2008A&A...492...31D} suggest that strong starbursts, in which the SFR is increased by a factor $\geq 5$, are rare and only found in $15\%$ of major galaxy interactions and mergers. \cite{2020MNRAS.493.3716H} highlight the impact of mergers on the SFR: Star-forming post-merger galaxies, which make up $67\%$ of the post-merger galaxies they identify in the IllustrisTNG simulation, experience on average an SFR increase by a factor of $\sim 2$. This behaviour is in qualitative agreement with the correlation between mergers and the SFR increase found in our star-forming and, to a stronger extent, PSB galaxies (Figure \ref{fig:MSpanel}). Additionally, when studying adjacent galaxies in IllustrisTNG, \cite{2020MNRAS.494.4969P} find that the presence of closest companions boosts the average specific SFR of massive galaxies by $14.5\%$.
This agrees with our study of an individual PSB in Figure \ref{fig:traceGasMassive}, where we find an increase in star formation prior to the identified merger event, while another galaxy is in close proximity. Figure \ref{fig:MSpanel} also shows that $23\%$ of the tracked PSBs were previously quiescent, i.e. have undergone rejuvenation. When comparing quiescently star-forming, quenching, and rejuvenating galaxies in the EAGLE simulation, \cite{2016MNRAS.460.3925T} find that $\sim 1.6\%$ and $\sim 10\%$ of all galaxies can be characterised as fast and slow rejuvenating galaxies, respectively. In other words, although (fast) rejuvenation is generally rare, rejuvenation may well be a relevant pathway for the evolution of PSBs. Consistent with the high merger abundances throughout our PSB sample, observations find that quiescent galaxies may undergo rejuvenation events, e.g. via (gas-rich) minor mergers, triggering the required starburst phase found in PSBs \citep{2012ApJ...761...23F, 2014MNRAS.444.3408Y, 2017ApJ...841L...6B, 2020ApJ...900..107Y}. Even when solely considering isolated merger simulations, much of the ambiguity concerning the quenching impact of mergers remains \citep{2019MNRAS.487..318K}: Different types of mergers have been associated with varying quenching impacts, both directly, e.g. by introducing turbulence \citep{2018MNRAS.478.3447E}, and indirectly, e.g. by facilitating BH growth \citep{2013MNRAS.430.1901H, 2014MNRAS.437.1456B}. For example, binary galaxy merger simulations find that the termination of star formation by BH feedback in disc galaxies is significantly less important for higher progenitor mass ratios \citep{2008AN....329..956J, 2009ApJ...690..802J}.
Similar studies find that galaxies which are dominated by minor merging and smooth accretion in their late formation history ($z \lesssim 2$) experience an energy release via gravitational heating which is sufficient to form red and dead elliptical galaxies by $z \sim 1$, even in the absence of SNe and AGN feedback \citep{2009ApJ...697L..38J}. Meanwhile, SPH simulations of major mergers demonstrate that consistency with observations does not require BH feedback to terminate star formation in massive galaxies or to unbind large quantities of cold gas \citep{2011MNRAS.412.1341D}. When linking the BH accretion rate with the galaxy-wide SFR in disc galaxy mergers, the hydrodynamical simulations by \cite{2015MNRAS.449.1470V} typically find no temporal correlation and different variability timescales. However, when averaging over time during $\sim (0.2-0.3)\,$Gyr long merger events, they find a typical increase of a factor of a few in the ratio of BH accretion rate to SFR \citep{2015MNRAS.449.1470V}. This qualitatively agrees with our results shown in Figures \ref{fig:AGN_SNe_Energy_mass_evol} and \ref{fig:AGN_SNe_Energy}, namely that the recent increase in AGN feedback in PSB and star-forming galaxies correlates with high merger abundances. Note, however, that not all simulations agree that mergers and AGN feedback are correlated (e.g. \cite{2014MNRAS.442.1992H}). The ambiguous nature of merger impacts is also reflected in our results: Using a statistical approach, i.e. comparing merger abundances at $z \sim 0$ (Table \ref{tab:MergerTable}), we find that $88.9\%$ of PSBs, $23.4\%$ of quenched (QSMMC), and $79.7\%$ of star-forming stellar mass matched control (SFSMMC) galaxies experience at least one merger within the last $2.5\,$Gyr.
The high merger abundance found in both our PSB and our SFSMMC sample highlights the varying impact of mergers: While our PSB sample is quiescent at $z \sim 0$, the reverse is true for SFSMMC galaxies with similarly rich merger histories, especially compared to the QSMMC sample. A similar behaviour is found when considering merger abundances at $z=0.9$ (Table \ref{tab:Nbeg064MergerTable}). Our high merger abundance broadly agrees with observations of local PSBs in SDSS, which, in their youngest age bin ($< 0.5\,$Gyr), classify at least $73\%$ of PSBs, far more than in their control sample, as distorted or merging galaxies \citep{2017A&A...597A.134M}. Generally, observations of PSBs in the local low density Universe are associated with galaxy-galaxy interactions and galaxy mergers \citep{1996ApJ...466..104Z, 2001ApJ...547L..17B, 2008ApJ...688..945Y, 2009MNRAS.396.1349P, 2018MNRAS.477.1708P}, in excellent agreement with our results. When evaluating the cold gas fractions of PSB, QSMMC, and SFSMMC progenitors within three half-mass radii (Figure \ref{fig:merger_wetness}), we find general agreement between PSB and SFSMMC progenitors: Both show a preference for higher cold gas fractions compared to QSMMC progenitors. This is in line with observations, which find evidence for gas-rich mergers triggering central starbursts \citep{2018MNRAS.477.1708P, 2020MNRAS.497..389D} and fast quenching \citep{2019ApJ...874...17B}, and which find that recently merged galaxies are typically a factor of $\sim 3$ richer in atomic hydrogen than control galaxies at the same stellar mass \citep{2018MNRAS.478.3447E}. With regard to cold gas fractions, the only difference between PSB and SFSMMC galaxies is found for the satellite progenitor populations in major mergers (Figure \ref{fig:merger_wetness}): The major merger satellite progenitors of PSBs are characterised by a lower cold gas fraction (almost half that of the SFSMMC sample).
The resulting lower cold gas content of post-merger PSBs, compared to SFSMMC galaxies, may be linked to a higher likelihood of a subsequent shutdown in star formation for two reasons: First, less cold gas is available to maintain star formation. Second, given a similar onset of merger-triggered AGN feedback, the same amount of energy is distributed across a smaller supply of cold gas, quickening its heating and/or redistribution. \subsection{Shutting down star formation} \label{sub:disc:shutdown} As shown for our global $z \sim 0$ PSB sample (Figure \ref{fig:AGN_SNe_Energy_mass_evol}) and the subset of six massive PSBs (Figure \ref{fig:AGN_SNe_Energy}), we find a significant increase in AGN feedback at recent look-back-times and towards higher stellar masses. The importance of AGN feedback in shutting down PSBs is also evidenced by the decreasing agreement, with increasing stellar mass, between the otherwise often similarly behaving PSB and SFSMMC samples. As the fraction of galaxies hosting an AGN is a strong function of stellar mass \citep{2005MNRAS.362...25B}, the apparent lack of strong AGN activity in the SFSMMC sample, compared to the PSB sample at high stellar masses, is a strong indicator of the effectiveness of AGN quenching. In addition, the short shutdown timescale (Figure \ref{fig:MSpanel}), the redistribution and heating of gas (Figure \ref{fig:traceGasMassive}), the correlated BH growth (Figure \ref{fig:gammaMgas}), and the comparatively weak SNe energy (Figure \ref{fig:AGN_SNe_Energy_mass_evol}) all suggest that merger-triggered AGN feedback is generally the dominant shutdown mechanism of PSBs at low redshift. However, we can neither fully exclude other causes, nor do all PSBs necessarily experience the same shutdown sequence.
Nonetheless, it appears likely that merger-facilitated BH growth, which triggers AGN feedback, plays an important, albeit not necessarily exclusive, role in mediating between the starburst and post-starburst phase within our simulation at low redshifts. This agrees with previous Magneticum results, which found that merger events are not the statistically dominant fuelling mechanism for nuclear activity, while still finding elevated merger fractions in AGN hosting galaxies compared to inactive galaxies, pointing towards an intrinsic connection between AGN and mergers \citep{2018MNRAS.481..341S}. Generally, the importance of AGN feedback in explaining the sharp decline in the SFR found in (PSB) galaxies is also supported by several other works \citep{2005MNRAS.361..776S, 2013MNRAS.430.1901H, 2019A&A...623A..64C, 2020AAS...23520719L}. We find evidence for the simultaneous mechanical expulsion and heating of previously star-forming cold gas (Figure \ref{fig:traceGasMassive}). The rapid shutdown in this and similar examples (Figure \ref{fig:MSpanel}) happens on timescales of $t_{\mathrm{shutdown}} \lesssim 0.4\,$Gyr, which, due to the short timescale, generally favours AGN feedback as the expected quenching mechanism \citep{2020MNRAS.494..529W}. Although much of the dense cold gas is heated, some cold gas remains in the recently quenched galaxy (Figure \ref{fig:traceGasMassive}). The fact that significant amounts of cold gas are redistributed on short timescales, rather than only being directly heated, may provide an explanation for observations which find significant non-star-forming (molecular) gas reservoirs in PSBs \citep{2013MNRAS.432..492Z, 2015ApJ...801....1F}. This also agrees with other simulations, which suggest that feedback quenches the SFR via gas removal, with gas heating having little effect \citep{2014MNRAS.437.1456B}.
Furthermore, the large amounts of molecular gas found in PSBs rule out processes such as gas depletion, expulsion, and/or starvation as the dominant shutdown mechanisms \citep{2015ApJ...801....1F}, which is supported by the results presented in this work. \subsection{Post-starburst galaxies in galaxy clusters} \label{sub:disc:clusters} Cluster PSBs are typically infalling, especially towards lower redshifts and higher cluster masses (Figure \ref{fig:PSvradGrid}). This matches previous results concerning the quenching of satellite galaxies in clusters \citep{2019MNRAS.488.5370L}, which show that star-forming galaxies are more likely to be on their first infall, especially in higher-mass clusters, indicating that ram-pressure stripping typically shuts down star formation quickly, already during the first infall of satellite galaxies. This higher quenching effectiveness matches other simulations, which find a similarly significant enhancement of ram-pressure stripping in massive halos compared to less massive halos \citep{2019MNRAS.484.3968A}. In fact, several observations suggest that environmental quenching mechanisms, such as interactions with the ICM \citep{2009ApJ...693..112P, 2010PASA...27..360P} or specifically ram-pressure stripping \citep{2013A&A...553A..90G, 2017ApJ...846...27G, 2019MNRAS.482..881P}, are responsible for the abundance of PSBs in galaxy clusters. Generally, different populations of satellite galaxies, e.g. infalling, backsplash, and virialised, occupy distinct regions of phase space \citep{2013MNRAS.431.2307O}. Hence, the clear preference of cluster PSBs for infall (Figure \ref{fig:PSvradGrid}) provides a strong indication of environmental quenching such as ram-pressure stripping. This is also reflected in the preference of PSB galaxies for distinct cluster-centric radii ($R \sim (0.15-0.5)\,r_{\mathrm{200,crit}}$) found in projections (Figure \ref{fig:Muzzin}), showing excellent agreement with observations \citep{2014ApJ...796...65M}.
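The separation of infalling, backsplash, and virialised satellite populations in phase space can be illustrated with a crude classifier based on normalised cluster-centric radius and radial velocity. The boundaries below are illustrative placeholders only, not the regions derived by \cite{2013MNRAS.431.2307O} or used in this work:

```python
def classify_phase_space(r_norm, v_rad_norm):
    """Crude phase-space classification of a satellite galaxy.

    r_norm:     cluster-centric radius in units of r_200,crit
    v_rad_norm: radial velocity in units of the cluster velocity
                dispersion (negative = moving inwards)

    The thresholds below are illustrative placeholders.
    """
    if r_norm > 1.0 and v_rad_norm < 0:
        return "infalling"            # outside r_200, moving inwards
    if r_norm < 0.5 and abs(v_rad_norm) < 0.5:
        return "virialised"           # central, kinematically settled
    if v_rad_norm > 0 and r_norm > 0.5:
        return "backsplash/outgoing"  # past pericentre, moving outwards
    return "transitional"
```

With real thresholds calibrated on the simulation, binning satellites this way yields the infall-dominated distributions seen in Figure \ref{fig:PSvradGrid}.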
It also agrees with SAMI observations of recently quenched cluster galaxies, which are exclusively found within $R \leq 0.6\, R_{\mathrm{200}}$ and show a significantly higher velocity dispersion relative to the cluster population \citep{2019ApJ...873...52O}. Similarly, GASP observations find that PSB galaxies avoid cluster cores and are characterised by a large range in relative velocities \citep{2020ApJ...892..146V}. Furthermore, both the SAMI and GASP phase space behaviour is consistent with recent infall, suggesting that PSBs could be descendants of galaxies which were quenched during first infall via ram-pressure stripping \citep{2019ApJ...873...52O, 2020ApJ...892..146V}. Providing multiple lines of evidence, \cite{2020ApJ...892..146V} conclude that the outside-in quenching \citep{2019ApJ...873...52O, 2020MNRAS.493.6011M}, the morphology and kinematics of the stellar component, and the position of GASP PSBs within massive clusters all point to a scenario in which ram-pressure stripping has shut down star formation via gas removal. This is in excellent agreement with our findings. When comparing merger abundances at $z=0.9$ between non-environmentally selected PSBs (Table \ref{tab:Nbeg064MergerTable}) and cluster PSBs (Table \ref{tab:MuzzinTable}), we find broad agreement: Both samples are characterised by high merger abundances, i.e. in both samples $\sim 93\%$ of PSBs experience at least one merger event in the past $2.5\,$Gyr. In contrast to the non-environmentally selected sample ($47.3\%$), cluster PSBs have slightly fewer major mergers ($43.2\%$). It appears that mergers are important in enabling the conditions necessary for (strong) star formation in cluster PSB progenitors, while it remains unclear what impact they have on shutting down star formation in cluster PSBs.
In contrast to the majority of cluster PSB galaxy observations, observations of the Cl J1604 supercluster at $z \sim 0.9$ indicate that galaxy mergers are the principal mechanism for producing PSBs in clusters, while both interactions between galaxies and interactions with the ICM also appear effective \citep{2014ApJ...792...16W}. As found in observations \citep{2019ApJ...873...52O, 2020ApJ...892..146V} and in our results (Figure \ref{fig:PSvradGrid}), cluster PSBs belong to a population of recently in-fallen galaxies. Hence, it appears likely that PSB progenitors have had ample opportunity to experience mergers in the outskirts of clusters prior to infall, likely boosting recent star formation and thereby building the young stellar population necessary for their later identification as PSBs. As discussed in Section \ref{sub:disc:mergers}, the impact of mergers is varied and need not lead to a subsequent shutdown in star formation, e.g. via triggering AGN feedback. It seems plausible that ram-pressure stripping is more efficient in shutting down star formation than merger-triggered mechanisms, and hence is the dominant shutdown mechanism in cluster PSBs. In a previous paper, we found evidence for a starburst $0.2 \, \mathrm{Gyr}$ after satellite galaxies first fall into their respective clusters, i.e. after crossing the cluster virial radius for the first time \citep{2019MNRAS.488.5370L}. Specifically, we found that the average normalised blueness, i.e. $\mathrm{SSFR} \cdot t_{\mathrm{H}}$, of satellite galaxies with stellar masses $M_* > 1.5 \cdot 10^{10}\, \mathrm{M_{\odot}}$ shows a significant starburst lasting $\sim 0.2\,$Gyr. As discussed by \cite{2019MNRAS.488.5370L}, this is likely driven by the onset of ram-pressure stripping, which triggers a short starburst, followed by a complete shutdown in star formation within $< 1\,$Gyr, often on shorter timescales.
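For reference, the normalised blueness used above can be written compactly; the quiescence threshold $b_0$ is a placeholder for the value fixed by our selection criteria, not a number taken from this section:
\begin{equation}
  b \;\equiv\; \mathrm{SSFR}\cdot t_{\mathrm{H}}
    \;=\; \frac{\dot{M}_*}{M_*}\, t_{\mathrm{H}},
  \qquad \text{quiescent if } b < b_0,
\end{equation}
where $t_{\mathrm{H}}$ is the Hubble time at the galaxy's redshift, so that $b$ measures the current SSFR relative to the past average.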
Observations of local cluster galaxies undergoing ram-pressure stripping come to similar conclusions: Ram-pressure likely drives an enhancement in star formation prior to quenching \citep{2018ApJ...866L..25V, 2020MNRAS.495..554R}. Similarly, cluster galaxies undergoing ram-pressure stripping in the GASP sample show a systematic enhancement of the star formation rate: As the excess is found at all galacto-centric distances within the discs and is independent of both the degree of stripping and the star formation in the tails, \cite{2020ApJ...899...98V} suggest that the star formation is most likely induced by compression waves triggered by ram-pressure stripping. Furthermore, HST observations have found strong evidence of ram-pressure stripping first shock compressing and subsequently expelling large quantities of gas from infalling cluster galaxies, which experience violent starbursts during this intense period \citep{2014ApJ...781L..40E}. When evaluating the median SNe feedback, i.e. the SFR, at $z=0.9$ for cluster PSBs (Figure \ref{fig:AGN_SNe_Energy_mass_evol_cluster}), we find on average no evidence for a strong starburst at recent look-back-times. However, this signal likely correlates more strongly with cluster-centric radius (e.g. \cite{2019MNRAS.488.5370L}), so it does not seem surprising that we find no signal. To better understand cluster PSB galaxy evolution, we also investigated whether cluster PSBs are found in the vicinity of, or crossing, cluster shock fronts; we found no evidence for an increased abundance of cluster PSBs near shocks. Observations of GASP jellyfish galaxies undergoing strong ram-pressure stripping find that the majority host an AGN \citep{2017Natur.548..304P, 2019MNRAS.487.3102G, 2021IAUS..359..108P} and that the suppression of star formation in the central region is driven by AGN feedback \citep{2019MNRAS.486..486R}.
Similarly, the Romulus C simulation finds evidence for ram-pressure stripping triggering AGN feedback, which may aid in the quenching process \citep{2020ApJ...895L...8R}. When comparing these results to our study of the median AGN feedback in cluster PSBs at $z=0.9$ (Figure \ref{fig:AGN_SNe_Energy_mass_evol_cluster}), we find no evidence for a recent increase in AGN feedback. However, we note that such an increase would likely be more visible when evaluating the AGN feedback as a function of cluster-centric radius, which goes beyond the scope of this paper. While observations based on the UKIDSS UDS conclude that a combination of environmental and secular processes is most likely to explain the appearance of PSBs in galaxy clusters \citep{2019MNRAS.482.1640S}, all our evidence (Figures \ref{fig:PSvradGrid} and \ref{fig:AGN_SNe_Energy_mass_evol_cluster}) suggests that environmental quenching, in the form of ram-pressure stripping, leads to the shutdown in star formation found in our cluster PSBs. \subsection{Numerical considerations} \label{sub:disc:numerics} In addition to the ambiguous involvement of various physical mechanisms, the implementation and approximation of known physical mechanisms in simulations comes with its own set of challenges. For example, many simulations underestimate the effectiveness of feedback due to excessive radiative losses \citep{2012MNRAS.426..140D}, which, in turn, are the result of a lack of resolution and insufficiently realistic modelling of the interstellar medium \citep{2015MNRAS.446..521S}. \cite{2020MNRAS.498.1259Z} highlight the difficulty of reproducing very young PSBs in simulations, potentially indicating that new sub-resolution star formation recipes are required to properly model the process of star formation quenching. To test the numerical convergence of our results, we searched for PSBs at ultra-high resolution in Box4.
However, due to the low number of PSBs per volume element (Figure \ref{fig:SMF_grid}) and the small volume of Box4 (see Section \ref{sub:Mag}), no $z \sim 0$ PSBs were found with stellar mass $M_* \geq 5 \cdot 10^{10} \, \mathrm{M_{\odot}}$. Similarly, when evaluating Box4 at ultra-high resolution using the same redshift and cluster mass domain as shown in Figure \ref{fig:PSvradGrid}, only one PSB galaxy was identified above our stellar mass threshold (at $z=0.47$ and in a cluster with $M_{\mathrm{200,crit}} = (1-3) \cdot 10^{14}\, \mathrm{M_{\odot}}$). If we lower the stellar mass threshold to $M_* \geq 5 \cdot 10^{9} \, \mathrm{M_{\odot}}$, then $40$ PSBs at $z \sim 0$ are identified, $55\%$ of which have experienced at least one merger in the last $2.5\,$Gyr. We note that fewer recent mergers are expected as stellar mass decreases \citep{2014MNRAS.444.3986R}. This lower stellar mass threshold was also used in Figure \ref{fig:Muzzin}: Despite the small sample sizes ($\leq 26$ PSBs identified when considering three linearly independent projections, sampled from $\leq 11$ unique PSBs), the comparison of individual PSBs at different resolution levels and in different boxes shows agreement. Additionally, we note that the high resolution run of Box4 employs an updated AGN model \citep{2015MNRAS.448.1504S}. In short, the comparison between the different boxes and resolutions has shown that our results do not appear to be driven by the resolution level or by details of the applied AGN model. Generally, we note that the identification and comparison of Magneticum PSBs to observations may be influenced by a number of effects: As discussed in \cite{2019MNRAS.488.5370L}, we measure the star formation rate rather than the colour of galaxies. In other words, we determine the star formation directly and instantaneously, rather than via the indirect, at times delayed, observation of local and/or global galactic properties.
However, the galaxies selected in Box2 do not reproduce the detailed morphologies, especially concerning the cold thin discs, found in observations. Hence, we cannot capture the details of mechanisms which act on scales similar to our gas softening ($\epsilon_{\mathrm{gas}} = 3.75 \, \mathrm{h^{-1} kpc}$). This, for example, becomes relevant during ram-pressure stripping, where cold thin discs, dependent on infall geometry, provide additional shielding compared to more diffuse galactic configurations, thereby impacting quenching efficiencies and timescales. These limitations need to be addressed with the next generation of cosmological simulations. \section{Conclusions} \label{sec:conc} In order to understand the physical mechanisms leading to the formation of post-starburst galaxies (PSBs), i.e. the reasons for both the onset of the initial starburst and the abrupt shutdown in star formation, we studied the environment and temporal evolution of PSBs with stellar mass $M_* \geq 5 \cdot 10^{10} \, \mathrm{M_{\odot}}$. To this end, we used Box2 of the hydrodynamical cosmological \textit{Magneticum Pathfinder} simulations to resolve the behaviour of PSBs at varying redshifts $0.07<z<1.71$, both throughout the whole box volume and in specific environments such as galaxy clusters. The principal sample studied consists of $647$ PSBs, identified at $z \sim 0$, i.e. a global sample spanning the whole box volume. Throughout our analysis the behaviour and evolution of PSBs is compared to star-forming (SF) and quiescent (Q) stellar mass matched control (SMMC) galaxy samples at different look-back-times (lbt).
Furthermore, Magneticum PSBs are compared with observed quenched fractions \citep{2011ApJ...742..125G, 2013ApJ...778...93T, 2018ApJ...852...31W, 2019A&A...622A.117S}, stellar mass functions \citep{2013ApJ...777...18M, 2016MNRAS.463..832W, 2018MNRAS.473.1168R}, and the star formation main sequence \citep{2014ApJS..214...15S, 2018A&A...615A.146P} at different redshifts. In particular, we compare Magneticum galaxy cluster PSBs to observed high environmental density stellar mass functions \citep{2018MNRAS.476.1242S} and the cluster phase space behaviour at high redshift \citep{2014ApJ...796...65M}. Our results are summarised as follows: \begin{itemize} \item At $z \sim 0$, PSBs and SF galaxies are both characterised by an abundance of mergers: $89\%$ of PSB and $80\%$ of SF galaxies experience at least one merger event within the last $2.5\,$Gyr, compared to $23\%$ of quiescent galaxies. Over the same time-span, $65\%$ of PSB, $58\%$ of SF, and $9\%$ of quiescent galaxies experience at least one major merger ($M_*$ ratio: > 1:3) event. This established similarity in merger abundances between PSB and SF galaxies is also found at redshift $z \sim 0.9$, both when evaluating the entire box volume and when specifically selecting galaxy cluster environments. \item Inspecting $z \sim 0$ PSB, quiescent, and SF galaxies with $M_* \in [5.00,5.40) \cdot 10^{10}\, \mathrm{M_{\odot}}$, we find that the AGN feedback, which is associated with recent mergers, consistently outweighs the supernova (SNe) feedback. Within the last $0.5\,$Gyr, the difference between AGN and SNe feedback increases significantly: While the maximum median SNe power output for PSBs is $P_{\mathrm{SNe,PSB}} \leq 2 \cdot 10^{55}\,$erg/Myr, the maximum median AGN power output is $P_{\mathrm{AGN,PSB}} \geq 10^{56}\,$erg/Myr.
In contrast to the SF galaxies, PSBs are characterised by increasing AGN feedback with increasing stellar mass: At stellar masses $M_* \geq 8.3 \cdot 10^{10} \, \mathrm{M_{\odot}}$, the AGN feedback at $z = 0$ of PSBs ($P_{\mathrm{AGN,PSB}} \sim 10^{57}\,$erg/Myr) is half an order of magnitude larger than that of SF galaxies ($P_{\mathrm{AGN,SF}} \sim 2 \cdot 10^{56}\,$erg/Myr), which, in turn, is significantly larger than that of quenched galaxies ($P_{\mathrm{AGN,Q}} \sim 10^{55}\,$erg/Myr). This strongly indicates that the star formation in PSBs generally is shut down by AGN feedback. \item In our global $z \sim 0$ PSB sample we find that during the star formation shutdown, typically at $t_{\mathrm{lbt}} \lesssim 0.4\,$Gyr, galactic gas, especially previously star-forming gas, is often abruptly heated, while simultaneously being redistributed. This results in a sharp decrease in the (cold) gas density. This is often correlated with a recent strong increase in black hole mass, triggering significant AGN feedback. \item In contrast to SF galaxies, PSBs in our global sample, especially at $t_{\mathrm{lbt}} = [0.1,1]\,$Gyr, show less spread, i.e. are more continuous in the distribution of SNe feedback energy. As the star formation rate (SFR) linearly impacts the SNe feedback, the smaller spread in the distribution of PSB SNe feedback in combination with slightly elevated median SFRs during recent times, compared to SF galaxies, is a reflection of the recent starburst phase. As the stellar mass increases, the median PSB SNe feedback increases slightly and the difference to SF galaxies, which continue to be associated with a wider distribution in feedback, becomes stronger. \item When evaluating the cold gas content prior to mergers in our global sample, PSB and SF progenitors show similar cold gas fractions within three half-mass radii ($f_{\mathrm{cgas}} \sim 0.9$) for the main progenitors. However, when considering cold gas abundances of satellite progenitors, i.e. 
not the most massive progenitors prior to major merger events, PSBs are characterised by lower median cold gas fractions ($f_{\mathrm{cgas}} = 0.40$) compared to SF satellite progenitors ($f_{\mathrm{cgas}} = 0.73$). This is also reflected in the different abundance of satellite major merger progenitors which have low cold gas fractions: $42\%$ of PSBs compared to $29\%$ of SF galaxies have $f_{\mathrm{cgas}} \lesssim 0.1$. This indicates that, statistically, PSBs have less cold gas available following major mergers than SF galaxies, leading to a higher likelihood of a subsequent shutdown in star formation. \item Prior to the star formation shutdown, PSB progenitors exhibit both sustained long-term star formation ($t \sim 3\,$Gyr) and short starbursts ($t \sim 0.4\,$Gyr). During the starbursts, independent of the duration, massive PSB progenitors are found at least a factor of $\Delta MS[z]/MS[z] \gtrsim 5$ above the redshift evolving main sequence. Of the tracked PSBs in our global sample, $23\%$ are rejuvenated galaxies, i.e. were considered quiescent before their starburst. At $z \sim 0.4$, Magneticum Box2 main sequence galaxies agree well with observations, while at $z \sim 0.1$ our galaxies lie slightly above observations \citep{2014ApJS..214...15S, 2018A&A...615A.146P}. \item At $t_{\mathrm{lbt}} \sim 2.5\,$Gyr, PSB and SF progenitors from our global sample are rarely found in isolated halos, whereas quenched progenitors are most often found in isolated halos. This initial difference between the PSB and SF versus quenched distribution of galaxies within a given halo becomes indistinguishable towards $t_{\mathrm{lbt}} \sim 0\,$Gyr. This indicates that common initial conditions, i.e. an abundance, albeit not saturation, of galaxies in the immediate vicinity, are shared among SF and PSB galaxies, enabling the rich merger history found in these populations.
\item We compared the Box2 total, SF, quenched, and PSB stellar mass functions (SMF) at multiple redshifts $0.07<z<1.71$ to observations, finding broad agreement \citep{2013ApJ...777...18M, 2016MNRAS.463..832W, 2018MNRAS.473.1168R}: While the total and quenched SMF agree well with observations over the evaluated redshift range, the agreement for SF galaxies improves with increasing redshift. Meanwhile, the PSB SMFs show that both the agreement between simulation and observations and the agreement among the observations themselves are subject to variation. When comparing stellar mass functions in group and cluster environments at $z = 0.7$, we are able to closely reproduce the observations \citep{2018MNRAS.476.1242S}. In particular, similarly to the observations, we find evidence for a PSB plateau in the stellar mass range $10.0 < \mathrm{log}(M_{*}/\mathrm{M_{\odot}}) < 10.5$ in group and cluster environments. \item At redshifts $z \lesssim 1$, PSBs are consistently found close to our stellar mass threshold ($M_* \geq 5 \cdot 10^{10}\, \mathrm{M_{\odot}}$) and at low halo masses. Towards higher redshift the abundance of PSBs increases significantly, especially at higher stellar masses. Overall, the PSB-to-quenched fraction increases with redshift, most significantly between $z \sim 1.3$ and $z \sim 1.7$. \item To compare with line-of-sight (LOS) phase space observations of cluster PSBs at $z \sim 1$ \citep{2014ApJ...796...65M}, we environmentally selected PSBs in the same halo mass range ($10^{14} < M_{\mathrm{200,crit}} / \mathrm{M_{\odot}} < 2 \cdot 10^{15}$) and found close agreement with observations. In particular, cluster PSBs are preferentially located in a narrow region of phase space with projected cluster-centric radii $R \sim (0.15-0.5) \, \mathrm{R_{200,crit}}$.
The fact that both simulated and observed cluster PSBs are found in the same preferential region of phase space suggests a shared environmentally driven mechanism relevant to the formation of PSBs, which is specific to galaxy clusters, such as ram-pressure stripping. When evaluating cluster PSBs at different redshifts and cluster masses, we find that cluster PSBs at $z \lesssim 0.5$ are overwhelmingly infall dominated, especially towards higher cluster masses. This further supports the idea that, different to the PSBs in the field, ram-pressure stripping shuts down star formation of previously active galaxies, thus leading to the identification of cluster PSBs within a distinct region of phase space. \item Cluster PSBs further show no signs of significantly increased AGN or SNe feedback at recent look-back-times. In other words, we find no evidence suggesting that AGN feedback is triggered via ram-pressure stripping during cluster infall for PSBs. We also find no evidence that the AGN is responsible for quenching cluster PSBs. This is further supported by the similarity in AGN feedback between the PSB and SF sample: While the former population is quenched at $t_{\mathrm{lbt}}=0\,$Gyr, the latter is not. Hence, we conclude that cluster PSBs are primarily shut down via environmental quenching, likely ram-pressure stripping. \end{itemize} To summarise, PSBs with stellar mass $M_* \geq 5 \cdot 10^{10} \, \mathrm{M_{\odot}}$ at $z \sim 0$ typically evolve as follows: First, PSB progenitors, which at $t_{\mathrm{lbt}}=2.5\,$Gyr are predominantly found in halos with more than one galaxy, experience a merger event. Specifically, $89\%$ of PSBs experience at least one merger within the last $t_{\mathrm{lbt}}=2.5\,$Gyr, with $65\%$ undergoing at least one major merger. Second, the merger provides additional gas and/or facilitates the inflow of gas onto the PSB progenitor, often triggering a starburst phase. 
After the merger, the BH accretion, and thereby the AGN power output ($P_{\mathrm{AGN,PSB}} \geq 10^{56}\,$erg/Myr), typically increase significantly, especially at higher stellar masses. A quick shutdown in star formation follows, which is often accompanied by a dispersal and heating of (previously star-forming) gas. Lastly, a PSB galaxy remains, i.e. a galaxy with a young stellar population and quiescent levels of star formation. Strikingly, this evolution is different for PSBs found in galaxy clusters: While cluster PSBs also experience an abundance of mergers, leading to star formation enhancement, they are found in a distinct region of phase space, implying a shared environmentally driven quenching mechanism. Moreover, cluster PSBs are usually experiencing their first infall, especially in higher mass clusters, favouring a density dependent quenching mechanism such as ram-pressure stripping. In other words, although the merger abundance, associated with an increased SFR in cluster PSB progenitors prior to their infall, is similar to our global sample, the reason for the shutdown in star formation is not. To conclude, we find that PSBs experience starbursts due to merger events, independent of their environment, but the quenching mechanisms strongly depend on environment: While AGN feedback is the dominant quenching mechanism for field PSBs, PSBs in galaxy clusters are quenched by ram-pressure stripping due to the hot cluster environment. Thus, for field galaxies the cold gas fraction prior to AGN quenching is important in determining whether they stay star-forming or become PSBs, while for cluster PSBs the infall orbit is the most important factor for quenching, as already discussed by \citet{2019MNRAS.488.5370L}. This likely leads to very different fundamental properties of PSBs in the field and clusters; studying this in detail is left to future work.
\section*{Acknowledgements} We thank Felix Schulze, Ulrich Steinwandel, Ludwig B\"{o}ss, and Tadziu Hoffmann for helpful discussions. The \textit{Magneticum Pathfinder} simulations were partially performed at the Leibniz-Rechenzentrum with CPU time assigned to the Project ``pr86re''. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC-2094 – 390783311. Information on the \textit{Magneticum Pathfinder} project is available at \url{http://www.magneticum.org}. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras}
\section*{Acknowledgments} This work was supported in part by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy under contract DE-AC02-06CH11357; in part by the U.S. Department of Energy, Office of Science, under contract DE-AC02-06CH11357; and in part by NSF grant CCF-1801856. We acknowledge the Argonne Leadership Computing Facility (ALCF) for use of Theta under the DOE INCITE project CANDLE and ALCF project EE-ECP. We acknowledge Huihuo Zheng from ALCF for the TensorFlow and Horovod environments on Theta. \section{Parallel Cancer Deep Learning CANDLE Benchmarks: P1B2 and NT3} In this section, we discuss two parallel cancer deep learning CANDLE benchmarks \cite{21}: NT3 (weak scaling) and P1B2 (strong scaling). The NT3 benchmark is a 1D convolutional network for classifying RNA-seq gene expression profiles into normal or tumor tissue categories. This network follows the classic architecture of convolutional models with multiple 1D convolutional layers interleaved with pooling layers followed by final dense layers. The model is trained on the balanced 700 matched normal-tumor gene expression profile pairs available from the NCI Genomic Data Commons and acts as a quality control check for synthetically generated gene expression profiles. The full dataset of expression features contains 60,483 float columns transformed from RNA-seq FPKM-UQ values \cite{21} that map to a column that contains the integer 0|1. The training data size for this benchmark is 597 MB, and the test data size is 150 MB. The number of epochs is one per node for the weak-scaling study. The batch size is 20 (default), and the total training samples are 1,120, with 60,483 features per sample. The optimizer is sgd (stochastic gradient descent). The P1B2 benchmark is an MLP (multilayer perceptron) network with regularization and five layers.
Given patient somatic SNP data, it builds a deep learning network that can classify the cancer type based on sparse input data and evaluate the information content and predictive value in a molecular assay with auxiliary learning tasks. The training data size is 162 MB, and the test data size is 55 MB. The number of epochs is 384 for the strong-scaling study; the batch size is 60 (default); and the total training samples are 2,700 with 28,204 features per sample. The optimizer is rmsprop (root mean square propagation). In this work, we use the two benchmarks P1B2 and NT3 to collect the available 26 performance counters using PyPAPI \cite{PYPA} and performance and power data on the Cray XC40 Theta \cite{THET}. Each node of Theta has 64 cores. The dataset p1b2.csv for P1B2 has 144 configurations, which include 37 variables per configuration (such as application name and system name), 12 numbers of nodes (6 to 384), number of cores (384 to 24,576), learning rates, 12 different batch sizes, and the number of epochs (384). The dataset nt3.csv for NT3 has 105 configurations, which include the same 37 variables per configuration, 15 numbers of nodes (1 to 384), number of cores (64 to 24,576), learning rates, 7 different batch sizes, and the number of epochs (one per node). Both datasets include 26 performance counters and four metrics (runtime, node power, CPU power, and memory power). The number of cores used for P1B2 and NT3 is up to 24,576 (384 nodes).
The 26 performance counters are TOT\_CYC (total cycles), TOT\_INS (total instructions completed), BR\_CN (conditional branch instructions), BR\_NTK (conditional branch instructions not taken), L1\_TCM (L1 total cache misses), L1\_LDM (L1 load misses), L1\_DCM (L1 data cache misses), L1\_ICA (L1 instruction cache accesses), L1\_ICH (L1 instruction cache hits), L1\_ICM (L1 instruction cache misses), L2\_TCM (L2 total cache misses), L2\_TCA (L2 total cache accesses), L2\_TCH (L2 total cache hits), L2\_LDM (L2 load misses), TLB\_DM (data translation lookaside buffer misses), BR\_MSP (conditional branch instructions mispredicted), RES\_STL (cycles stalled at any resource), SR\_INS (store instructions), LD\_INS (load instructions), BR\_TKN (conditional branch instructions taken), BR\_INS (branch instructions), L1\_DCA (L1 data cache accesses), LST\_INS (load/store instructions completed), REF\_CYC (reference clock cycles), STL\_ICY (cycles with no instructions issued), and BR\_UCN (unconditional branch instructions). Then TOT\_CYC is used to normalize all the performance counters. We used the datasets to analyze the pairwise correlations among the 25 normalized hardware performance counters and four target objects as follows. For NT3, we used weak scaling to collect the dataset nt3.csv with 105 configurations. For the same batch size, we expect that the runtime for different configurations is similar. Figure \ref{fig:2} shows the counter correlation matrix for the 25 counters. This indicates that most counters are not correlated with one another. Figure \ref{fig:3} presents the object correlation matrix for the four metrics. It indicates that runtime is inversely correlated with power. However, node power is highly correlated with CPU power because node power includes the CPU power, and CPU power is poorly correlated with memory power.
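The normalisation and correlation analysis described above can be sketched in a few lines of Python (a minimal reconstruction on synthetic data; the counter values, sample size, and column choices are illustrative, not the actual contents of nt3.csv or p1b2.csv):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a few of the 26 PAPI counters over n configurations.
n = 105
raw = {
    "TOT_CYC": rng.uniform(1e12, 5e12, n),
    "TOT_INS": rng.uniform(1e12, 8e12, n),
    "L1_TCM":  rng.uniform(1e9, 1e10, n),
    "TLB_DM":  rng.uniform(1e7, 1e8, n),
}

# Normalise every counter by TOT_CYC to obtain per-cycle event rates
# (leaving 25 rates from the 26 raw counters in the full dataset).
names = [k for k in raw if k != "TOT_CYC"]
rates = np.column_stack([raw[k] / raw["TOT_CYC"] for k in names])

# Pairwise Pearson correlation matrix of the normalised counters --
# the quantity visualised in the counter correlation matrices.
corr = np.corrcoef(rates, rowvar=False)
print(names)
print(np.round(corr, 2))
```

Normalising to per-cycle rates makes configurations with different node counts and runtimes directly comparable before correlating.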
\begin{figure} \center \includegraphics[width=.45\textwidth]{counter-cor-nt3.png} \caption{Counter correlation matrix for NT3} \label{fig:2} \end{figure} \begin{figure} \center \includegraphics[width=.45\textwidth]{metrics-cor-nt3.png} \caption{Object correlation matrix for NT3} \label{fig:3} \end{figure} For P1B2, we used strong scaling to collect the dataset p1b2.csv with 144 configurations. For the same batch size, we expect that the runtime for different configurations is distinct and that it decreases with increasing numbers of nodes because the number of epochs per node is decreased. Figure \ref{fig:4} shows the counter correlation matrix for the 25 counters. This indicates that most counters are somewhat correlated with one another, except for REF\_CYC. Figure \ref{fig:5} presents the object correlation matrix for the four metrics. It indicates that runtime is correlated with power. As with NT3, however, node power is highly correlated with CPU power because node power includes the CPU power, and CPU power is poorly correlated with memory power. \begin{figure} \center \includegraphics[width=.45\textwidth]{counter-cor-p1b2.png} \caption{Counter correlation matrix for P1B2} \label{fig:4} \end{figure} \begin{figure} \center \includegraphics[width=.45\textwidth]{metrics-cor-p1b2.png} \caption{Object correlation matrix for P1B2} \label{fig:5} \end{figure} \section{Conclusions} We used the datasets collected for the two benchmarks NT3 (weak scaling) and P1B2 (strong scaling) to build performance and power models based on hardware performance counters using single-object and multiple-object ensemble learning. We utilized ensemble learning to combine linear, nonlinear, and tree-/rule-based ML methods to cope with the bias-variance tradeoff and produce more accurate models, and we ranked the performance counters to identify the most important counters for improvement hints.
Based on the insights from these models, we improved the performance and energy of P1B2 and NT3 by optimizing the deep learning environments TensorFlow, Keras, Horovod, and Python under the huge page size of 8 MB on Theta. Experimental results show that ensemble learning not only leads to more accurate models but also provides more robust performance counter ranking. We achieved up to 61.15\% performance improvement and up to 62.58\% energy saving for P1B2, and up to 55.81\% performance improvement and up to 52.60\% energy saving for NT3 on up to 24,576 cores on Cray XC40 Theta. Overall, ensemble learning provides a broad, robust view of learning from the data. Applying ensemble learning to science should help better identify key factors that impact science or decision-making [PR06]. We learned from this work that HPC systems generally support a range of huge page sizes to accelerate scientific applications, although using them requires recompilation; out-of-the-box huge page support for ML and deep learning applications, however, is lacking. For deep learning applications and others written in scripting languages such as Python, huge page support should accelerate the time-consuming training process as well. Which huge page size will result in the best performance, however, depends on the application characteristics and the underlying systems. \section{Multiple-Objects Ensemble Learning} In the preceding sections, we explored single-object (univariate) ensemble learning \cite{MZ16} to combine different ML methods, resulting in more accurate models and more robust performance counter ranking. These methods target only a single metric, either runtime or node power. In this section, we discuss multiple-objects (multivariate) ensemble learning using a multivariate tree boosting method, mvtboost \cite{MP16}, to model performance and power and to rank the performance counters based on multiple objects.
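The core of this multivariate approach — a separate boosted model per object variable over a common set of predictors, with per-predictor split gains summed across objects into a joint ranking — can be sketched in plain Python. This is an illustrative stand-in for the idea, not the R mvtboost package, and the data and feature roles are synthetic:

```python
import numpy as np

def boost_importance(X, y, n_rounds=60, lr=0.1):
    """Depth-1 gradient boosting; returns per-feature SSE-reduction gains."""
    pred = np.full(len(y), y.mean())
    gain = np.zeros(X.shape[1])
    for _ in range(n_rounds):
        r = y - pred                      # residuals of the current ensemble
        best = None
        for j in range(X.shape[1]):
            for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
                left = X[:, j] <= t
                if left.all() or not left.any():
                    continue
                mL, mR = r[left].mean(), r[~left].mean()
                sse = ((r[left] - mL) ** 2).sum() + ((r[~left] - mR) ** 2).sum()
                if best is None or sse < best[0]:
                    best = (sse, j, t, mL, mR)
        sse, j, t, mL, mR = best
        gain[j] += (r ** 2).sum() - sse   # importance = error reduction
        pred += lr * np.where(X[:, j] <= t, mL, mR)
    return gain

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))             # 5 stand-in counters
# Two object variables (think runtime and node power), both driven by X[:, 0].
targets = [3.0 * X[:, 0] + 0.1 * rng.normal(size=200),
           -2.0 * X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=200)]

# Separate univariate model per object; sum the gains for a joint ranking.
total_gain = sum(boost_importance(X, y) for y in targets)
ranking = np.argsort(total_gain)[::-1]
print(ranking)                            # feature 0 is expected to rank first
```

Summing the split gains across objects is what lets a single counter ranking reflect runtime and all three power metrics at once.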
Boosted decision tree ensembles such as gradient boosting machine (gbm) \cite{FJ01} are powerful ensemble learning algorithms, allowing dependent variables to be nonlinear functions of predictors. mvtboost (multivariate tree boosting) extends gbm to multivariate, continuous object variables by fitting a separate univariate model of a common set of predictors to each object variable. This accounts for covariance in the object variables as in seemingly unrelated regression. This joint analysis of several object variables can be informative when we consider the four metrics runtime, node power, CPU power, and memory power for application improvement. In this section, we use an mvtboost method with 1,000 trees, a learning rate of 0.01, and a tree depth of 3 to model the performance and power of P1B2 and NT3 and analyze the impacts of different performance counters. We then compare the performance counter ranking with what we found in the preceding section. Table \ref{tab:7} shows the performance counter ranking for P1B2 using multiple-objects ensemble learning. When we consider runtime, node power, CPU power, and memory power for application improvement, we need to focus on the top counters BR\_CN, TLB\_DM, and L1\_ICM, similar to the counters BR\_CN, L1\_ICH, and TLB\_DM found in the preceding section. This further confirms that we need to focus on BR\_CN, L1 cache, and TLB\_DM to improve the application performance and power. \begin{table} \center \caption{Performance counter ranking for P1B2 using ensemble learning} \begin{tabular}{c} \includegraphics[width=.45\textwidth]{mvtb-p1b2.png} \end{tabular} \label{tab:7} \end{table} \begin{table} \center \caption{Performance counter ranking for NT3 using ensemble learning} \begin{tabular}{c} \includegraphics[width=.45\textwidth]{mvtb-nt3.png} \end{tabular} \label{tab:8} \end{table} Table \ref{tab:8} shows the performance counter ranking for NT3 using multiple-objects ensemble learning.
When we consider runtime, node power, CPU power, and memory power for application improvement, we need to focus on the top counters L1\_ICM, BR\_MSP, and STL\_ICY. In the preceding section, when considering runtime and node power, the focus was on L1\_ICM, L2\_TCH, and TLB\_DM. Overall, we need to focus on L1 cache, BR\_MSP, STL\_ICY, and TLB\_DM to improve the application performance and power. \section{Modeling and Improvement Framework Using Ensemble Learning} In this section, we propose a modeling and improvement framework using ensemble learning. We discuss how to use ensemble machine learning methods to model performance and power, and we identify the most important performance counters that affect the application performance and power. Because our problem is a regression problem, we focus on three types of machine learning with a total of 15 methods: linear, nonlinear, and tree-/rule-based regressions with built-in measurements of variable importance from the caret package in R \cite{1}. For instance, multivariate adaptive regression spline and many tree-based models monitor the increase in performance that occurs when adding each variable to the model \cite{KJ13}, and linear regression and logistic regression use quantifications based on the model coefficients or statistical measurements (such as t-statistics). For linear regression, we use five methods: Lasso and elastic-net regularized generalized linear models (glmnet) \cite{GLM}, partial least squares (pls) \cite{KJ13}, ridge regression \cite{ZH18}, principal component regression (pcr) \cite{KJ13}, and elastic net regression (enet) \cite{FH10}. For nonlinear regression, we use five methods: k-nearest neighbors (knn) \cite{KJ13}, support vector machine with a linear kernel \cite{KSH}, multivariate adaptive regression spline \cite{MS19}, Gaussian process with radial basis function (gaussprRadial) \cite{KSH}, and bayesglm \cite{KJ13}.
For tree-/rule-based regression, we use five methods: random forests (rf) \cite{LW18}, cubist \cite{KW20}, conditional inference tree (ctree) \cite{HH20}, eXtreme Gradient Boosting (xgbTree) \cite{CH19}, and bagged (bootstrap aggregated) CART (Classification \& Regression Trees) (treebag) \cite{KJ13}. To collect these ML models as an ensemble, we use caretEnsemble \cite{MZ16}, which is a package for making ensembles of the caret models from R caret \cite{1}. The caretEnsemble package has three primary functions: caretList(), caretEnsemble(), and caretStack(). caretList is a function for fitting many different caret models to the same dataset with the same resampling parameters. It returns a list of caret objects that can be passed to caretEnsemble or caretStack. It has almost the same arguments as train() from the R caret. caretEnsemble uses a generalized linear model (glm) to create a simple linear blend of models. It has two arguments that can be used to specify which models to fit: methodList and tuneList. methodList is a simple character vector of methods that will be fit with the default train parameters, while tuneList can be used to customize the call to each component model. varImp() \cite{1} is used to extract the variable importance from each member of the ensemble, as well as the final ensemble model. caretStack uses a caret model to combine the outputs from several component caret models. It allows one to move beyond simple blends of models to use metamodels to create ensemble collections of predictive models. However, it does not support varImp(). In this work, we use caretEnsemble to make an ensemble of the 15 ML methods to build the model and rank the performance counters. Figure \ref{fig:1} shows the MuMMI modeling and improvement framework. This extends and enhances our previous MuMMI framework \cite{WT16} by leveraging ensemble learning and feature selection.
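The caretEnsemble-style linear blend can be mimicked in a few lines of Python. This is a numpy sketch of the principle — a least-squares blend of base-model predictions — not the actual R package; the two toy base models and the synthetic data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 120, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.2 * rng.normal(size=n)

# Two toy base models: ordinary least squares on all predictors,
# and a weaker model fit on the first predictor only.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred_ols = X @ beta
beta1, *_ = np.linalg.lstsq(X[:, :1], y, rcond=None)
pred_m2 = X[:, :1] @ beta1

# Blend: regress y on the stacked base-model predictions (plus intercept),
# analogous to caretEnsemble's glm blend of component models.
Z = np.column_stack([np.ones(n), pred_ols, pred_m2])
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
pred_blend = Z @ w

mse = lambda q: float(np.mean((y - q) ** 2))
print(mse(pred_m2), mse(pred_ols), mse(pred_blend))
```

Because the blend is itself a least-squares fit over a span that contains each base model's predictions, its in-sample error can never exceed that of any single component, which is the appeal of the linear-blend design.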
For an HPC application executed on a power-aware system, we collect runtime, power (node, CPU, and memory), and performance counter data. During the application execution we capture available underived performance counters using any counter measurement tool (PAPI \cite{PAPI}, perf\_events \cite{PERF}, perfmon2 \cite{PMON}, HPM \cite{PO08}, etc.). All performance counters are normalized by using the total cycles of the execution to create performance event rates for each counter. We then use ML methods to build the models for the metrics and rank the counters based on their variable importance. However, each ML method builds the model and provides the variable importance in distinct ways. Hence, we use ensemble learning to create the ensemble of these ML models, build a more accurate complex model, and provide robust variable importance for ranking the performance counters. Then we can use the counter ranking to identify the most important counters for potential application improvements and/or use the ensemble models to predict the execution time and power. In this work, we focus on four metrics: runtime, node power, CPU power, and memory power. \begin{figure} \center \includegraphics[width=.48\textwidth]{framework.png} \caption{MuMMI Modeling and Improvement Framework} \label{fig:1} \end{figure} \section{Performance and Energy Improvement} The CANDLE benchmarks P1B2 and NT3 are implemented in Python by using Keras, TensorFlow, and Horovod. These Python codes, like programs in other scripting languages, do not benefit from compiler optimization and instead rely on the library, resource, and environment settings for better performance. We can utilize what we learned from ensemble learning in the preceding sections to identify better resource and environment settings for performance improvement.
In this section, based on the insights from the important performance counters we identified, we improve the performance and energy of these benchmarks by fine-tuning the applications, underlying library, software, and/or system. For P1B2 and NT3 executed on Theta, the dominant phase is model training \cite{WT19}, which is the function call from TensorFlow. From our analysis in the preceding sections, the most important counters indicate that TensorFlow is the optimization target with huge page sizes. The Cray XC40 Theta \cite{THET} supports the huge page sizes of 2 MB, 4 MB, 8 MB, 16 MB, 32 MB, 64 MB, 128 MB, 256 MB, 512 MB, 1 GB, and 2 GB. The baseline environment used in this work includes Python 3.7.5, TensorFlow 1.15, Keras 2.3, and Horovod 0.18 built from the packages by using pip. To optimize TensorFlow with the huge page sizes, we have to rebuild from the source codes for TensorFlow 1.15, Keras 2.3, and Horovod 0.19 based on Intel Python 3.6.8 and MKL-DNN with page sizes of 2 MB, 8 MB, 32 MB, 128 MB, and 1 GB. Then we use the benchmarks under these different environments to evaluate performance and energy. After we identify which page size results in the best performance, we further improve the application performance and energy. \subsection{Impacts of Different Huge Page Sizes} In this section, we use P1B2 and NT3 to evaluate the impacts of the different huge page sizes of 2 MB, 8 MB, 32 MB, 128 MB, and 1 GB. For P1B2, we use a strong-scaling study (384 epochs in total) with the batch size of 60 to analyze the improvement. For NT3, we use a weak-scaling study (one epoch per node) with a batch size of 20 to analyze the improvement. Table \ref{tab:9} shows the performance and energy improvement for NT3 under different huge page sizes, where ``2MB'' stands for the huge page size of 2 MB. We rebuilt the deep learning environment from the source codes for TensorFlow 1.15, Keras 2.3, and Horovod 0.19 based on Intel Python 3.6.8 and MKL-DNN.
Table \ref{tab:10} shows the performance and energy improvement for P1B2 under different huge page sizes. For both benchmarks, as we showed in \cite{WT19}, the model training and data loading are the two dominant phases. Compared with the baseline, we find that building TensorFlow from the source codes based on the latest MKL-DNN improves the model training performance significantly and that the huge page size of 8 MB outperforms the others in most cases. However, the data-loading time under the baseline with Python 3.7.5 (around 115 s for NT3, around 105 s for P1B2) is much smaller than that (around 160 s for NT3, around 125 s for P1B2) under the different huge page sizes with Python 3.6.8. We used the improved data-loading methods from \cite{WT19} in both cases. Basically, the data loading calls the Pandas function read\_csv() to load the data. For all cases, the installed Pandas versions are the same (version 0.25.2). The difference is the Python version. In the next section, we investigate this issue to further improve the performance and energy. \begin{table} \center \caption{Performance and energy improvement for NT3} \begin{tabular}{c} \includegraphics[width=.45\textwidth]{imp-nt3.png} \end{tabular} \label{tab:9} \end{table} \begin{table} \center \caption{Performance and energy improvement for P1B2} \begin{tabular}{c} \includegraphics[width=.45\textwidth]{imp-nt3.png} \end{tabular} \label{tab:10} \end{table} XLA (Accelerated Linear Algebra) \cite{XLA} is a domain-specific compiler for linear algebra that can accelerate TensorFlow models with potentially no source code changes. It compiles subgraphs to reduce the execution time of short-lived operations, eliminating overhead from the TensorFlow runtime; fuses pipelined operations to reduce memory overhead; and specializes to known tensor shapes to allow more aggressive constant propagation. It also analyzes and schedules memory usage, in principle eliminating many intermediate storage buffers.
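A minimal sketch of how XLA autoclustering is toggled for such runs; the flag string matches the setting used in our experiments, and setting it before TensorFlow is imported by the benchmark process is assumed.

```python
import os

# Enable XLA autoclustering for CPU runs. This environment variable
# must be set before TensorFlow is imported by the benchmark process;
# it is shown here only as a standalone sketch.
os.environ["TF_XLA_FLAGS"] = "--tf_xla_auto_jit=2 --tf_xla_cpu_global_jit"

flags = os.environ["TF_XLA_FLAGS"]
```

In practice the same variable can instead be exported in the job script so that every MPI/Horovod rank inherits it.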
As described in \cite{XLA}, XLA may improve the execution speed and memory usage on GPUs. However, it does not support CPUs well, as we experienced on Theta. Table \ref{tab:11} shows the XLA impacts using 8 nodes on Theta. For NT3 and P1B2, we rebuilt TensorFlow with XLA enabled and ran them with/without autoclustering by setting the TF\_XLA\_FLAGS environment variable to ``--tf\_xla\_auto\_jit=2 --tf\_xla\_cpu\_global\_jit'' under the huge page size of 8 MB. Enabling XLA increased the runtime significantly; however, it also decreased the average node power because of the XLA overhead. This was the case for all other experiments as well. \begin{table} \center \caption{XLA impacts using 8 nodes under the huge page size of 8 MB on Theta} \begin{tabular}{c} \includegraphics[width=.45\textwidth]{xla8.png} \end{tabular} \label{tab:11} \end{table} \subsection{Further Improvement} For further improvement under the huge page size of 8 MB, we also rebuilt Python 3.7.5, TensorFlow 1.15, Keras 2.3, and Horovod 0.19 from the source codes based on the latest Intel MKL-DNN. We then evaluated the performance and energy and compared them with ``Baseline'' and ``2MB'', as described in the preceding section. Table \ref{tab:12} shows the performance and energy improvement for NT3 on up to 384 nodes (24,576 cores) under three different deep learning environments. We note that for the weak-scaling case study of NT3, the workload per node is the same (1 epoch per node), and we focus on improving the TensorFlow training performance. With an increase in the number of nodes, the overall energy saving percentage decreases because of the increase in Horovod communication overhead. This overhead also results in a decrease in the average node power. Overall, we use the further-improvement environment to achieve up to 55.81\% performance improvement and up to 52.60\% energy saving on up to 24,576 cores.
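The energy-saving percentages above combine runtime and average node power; a small worked sketch of the arithmetic (the numbers are illustrative, not the measured Theta results):

```python
def energy_joules(avg_power_watts, runtime_s):
    # Energy = average power x runtime.
    return avg_power_watts * runtime_s

def saving_pct(base, improved):
    # Percentage reduction relative to the baseline.
    return 100.0 * (base - improved) / base

# Illustrative numbers only: a slightly lower average node power
# combined with a much shorter runtime yields a large energy saving.
e_base = energy_joules(200.0, 1000.0)   # 200 kJ
e_new = energy_joules(190.0, 500.0)     #  95 kJ
saving = saving_pct(e_base, e_new)
```

This is why a runtime reduction can dominate the energy saving even when the average node power barely changes, and why a power drop alone (as with XLA above) does not imply an energy win.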
Compared with the ``2MB'' case and the baseline, we further improve not only the model training time but also the data-loading time. Thus, we achieve much better energy saving. \begin{table} \center \caption{Performance and energy improvement for NT3 on up to 24,576 cores} \begin{tabular}{c} \includegraphics[width=.45\textwidth]{improve-nt3.png} \end{tabular} \label{tab:12} \end{table} \begin{table} \center \caption{Performance and energy improvement for P1B2} \begin{tabular}{c} \includegraphics[width=.45\textwidth]{improve-p1b2.png} \end{tabular} \label{tab:13} \end{table} Table \ref{tab:13} shows the performance and energy improvement for P1B2 on up to 192 nodes (12,288 cores) under three different deep learning environments. We note that for the strong-scaling case study of P1B2, the workload per node decreases with increasing numbers of nodes. We focus on improving the TensorFlow training performance. With the increase in the number of nodes, the overall energy saving percentage decreases because of the decrease in the workload per node and the increase in the Horovod communication overhead. This also results in a decrease in the average node power. Overall, we use the further-improvement environment to achieve up to 61.15\% performance improvement and up to 62.58\% energy saving on up to 12,288 cores. \section{Introduction} Energy-efficient scientific applications require insight into how high-performance computing (HPC) system components impact the applications' power and performance. This insight can result from the development of performance and power models. Currently, HPC systems, especially petaflops supercomputers, consume a huge amount of power, and exascale HPC systems will be similarly constrained. Therefore, monitoring the power consumption of an HPC system is important for power management.
Since direct online power measurement at high frequencies is impractical, hardware performance counters have been widely used as effective proxies to estimate power consumption \cite{SB08, BJ12, RA13}. Hardware performance counter values are correlated with properties of applications that impact performance and power on the underlying system. In this paper, we use ensemble machine learning, which combines several machine learning models to create a new ensemble model \cite{MZ16, FL12}, to model performance and power based on performance counters and to rank the performance counters, with the aim of identifying the most important ones for application improvement. Much of the previous work on power modeling and estimation is based on performance counters \cite{IM03, CM05, CD06, SB08, LP10}, \cite{CX10, NM10, BJ12, LW12, RA13, SS13, LT14, TL14, WT16, WT16b, ZJ17, GL18}. These counters are used to monitor system components such as CPU, GPU, memory, disk, and I/O. The values of these performance counters are then correlated with the power consumed by each system component to derive per-component power models. Many of these approaches use a small set of performance counters (13 counters or fewer) for power modeling. In our previous work \cite{LT14, WT16}, we developed models of runtime and power based on 40 performance counters and found that the performance counters used for four different models (runtime, node power, CPU power, and memory power) were not the same. For instance, with six scientific applications we found that a total of 37 different performance counters were used for the models. Hence, in this paper we use ensemble machine learning to enhance our modeling methods. Machine learning (ML) continues to grow in importance across nearly all domains and is a natural tool for learning models from data.
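The counter-to-power correlation described above reduces, in its simplest form, to regressing measured power on counter rates. A minimal single-predictor least-squares sketch on synthetic data (not one of the cited 40-counter models):

```python
def fit_ols(x, y):
    """Closed-form least squares for y = a + b*x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Synthetic example: node power (watts) rising with an instruction rate.
rates = [0.5, 0.8, 1.0, 1.2]
power = [150.0, 165.0, 175.0, 185.0]   # exactly 125 + 50*rate
a, b = fit_ols(rates, power)
```

Real counter-based power models replace this single predictor with many counter rates and a multivariate (often nonlinear) fit, but the correlation-then-predict structure is the same.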
Machine learning is the process of developing a model in a way that we can understand and quantify the model's prediction accuracy on future, yet-to-be-seen data \cite{KJ13}. A variety of ML models have been fit to different datasets in different languages. For instance, the caret package \cite{1} provides 238 machine learning methods in R, and the scikit-learn package \cite{2} provides many machine learning methods in Python. Each ML method has its own way of learning the relationship between the predictors and the target object. Which method, then, should be selected for the final model? Generally, prediction errors can be decomposed into two important subcomponents: error due to bias and error due to variance \cite{1}. Error due to bias is the difference between the expected (or average) prediction of a model and the actual value. It measures how far off the model's predictions are from the actual value, which provides a sense of how well the model can conform to the underlying structure of the data. Error due to variance, on the other hand, is defined as the variability of a model prediction for a given data point. Generally, more complex models such as k-nearest neighbors, decision trees, and gradient boosting machines can have very high variance, which leads to overfitting. Simple models such as linear models, which are classical examples of high-bias models, tend not to overfit but to underfit, because they are less flexible and rarely capture nonlinear relationships. Understanding how different sources of error lead to bias and variance helps us improve the data-fitting process, resulting in more accurate models. Often a tradeoff must be made between a model's ability to minimize bias and variance. Ensemble learning combines several ML models to create a new ensemble model \cite{MZ16, FL12}. It is a powerful technique in machine learning, often outperforming individual methods with respect to accuracy.
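The bias-variance decomposition sketched above can be checked numerically: for a fixed target, the mean squared error of a set of predictions equals the squared bias plus the variance. A self-contained check (the prediction values are made up):

```python
# Predictions of the same quantity from a model refit on several
# hypothetical training sets; the true value is y = 10.
preds = [9.0, 11.0, 10.5, 9.5, 12.0]
y = 10.0

n = len(preds)
mean_p = sum(preds) / n
mse = sum((p - y) ** 2 for p in preds) / n
bias2 = (mean_p - y) ** 2
var = sum((p - mean_p) ** 2 for p in preds) / n

# Identity: MSE = bias^2 + variance (exact for a fixed target).
assert abs(mse - (bias2 + var)) < 1e-12
```

A high-bias model concentrates its error in the first term, a high-variance model in the second; the ensemble methods used below aim to reduce the sum.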
However, this comes at the cost of increased algorithmic and model complexity. Should the ensemble model be selected as the final model? Can ensemble learning provide robust feature importance? When we consider multiple objects such as runtime, node power, CPU power, and memory power for improvement, how do we utilize multiple-object ensemble learning? In this paper, we address these issues and use ensemble learning to combine linear, nonlinear, and tree-/rule-based ML methods to cope with the bias-variance tradeoff and produce more accurate models. The CANDLE project \cite{8, 21, WT19} focuses on building a single scalable deep neural network that can address three cancer challenge problems: (1) the RAS pathway problem of understanding the molecular basis of key protein interactions in the RAS/RAF pathway present in 30\% of cancers, using unsupervised learning; (2) the drug response problem of developing predictive models for drug response to optimize preclinical drug screening and drive precision-medicine-based treatments for cancer patients, using supervised learning; and (3) the treatment strategy problem of automating the analysis and extraction of information from millions of cancer patient records to determine optimal cancer treatment strategies, using semi-supervised learning. The CANDLE benchmarks \cite{21} implement deep learning architectures that are relevant to these three cancer problems. In our previous work \cite{WT19}, we discussed our parallel methodology and implemented and analyzed the Horovod CANDLE benchmarks (NT3, P1B1, P1B2, and P1B3), focusing on their scalability, performance, and power characteristics with different batch sizes, learning rates, and epochs on both Summit \cite{SUMM} with GPUs at Oak Ridge National Laboratory and Theta \cite{THET} with CPUs at Argonne National Laboratory.
We identified the data-loading bottlenecks and then improved the performance and energy for better scalability by loading data in chunks without memory limitation. We achieved up to 78.25\% performance improvement and up to 78\% energy saving under strong scaling on up to 384 GPUs, and up to 79.5\% performance improvement and up to 77.11\% energy saving under weak scaling on up to 3,072 GPUs on Summit. We also achieved up to 45.22\% performance improvement and up to 41.78\% energy saving under strong scaling on up to 384 nodes on Theta. However, these benchmarks still ran much slower than we expected on Theta. This result motivated us to identify opportunities to further improve their performance and energy on Theta. In this paper, we use the datasets collected for the parallel cancer deep learning CANDLE benchmarks NT3 (weak scaling) and P1B2 (strong scaling) to address these issues in performance and power modeling and to achieve improvement using single-object and multiple-object ensemble learning \cite{MZ16, MP16}, which is a combination of several machine learning methods from the R package caret. We focus on three types of machine learning: linear, nonlinear, and tree-/rule-based regressions. We analyze how a single ML method performs, then investigate how ensemble learning for 15 ML methods performs, compare the results, and evaluate models with built-in feature selection. Further, we use unsupervised feature selection methods to confirm that ensemble learning provides a more robust model and feature ranking. Then, based on the insights from these models, we improve the performance and energy of P1B2 and NT3 on Theta. The remainder of this paper is organized as follows. Section 2 presents the modeling and improvement framework using ensemble learning. Section 3 briefly describes the parallel cancer deep learning CANDLE benchmarks NT3 and P1B2.
Section 4 discusses performance and power modeling using single-object ensemble learning and investigates the performance counter ranking under different ML methods. Section 5 discusses performance and power modeling using multiple-object ensemble learning. Section 6 presents performance and energy improvement and the experimental results. Section 7 summarizes our conclusions and briefly discusses possible future work. \section{Performance and Power Modeling Using Ensemble Learning} To make sure we use the same training and test sets, we call set.seed(3456) in our R codes for all experiments so that the creation of the random objects can be reproduced. We define the relationship between the object metric (runtime, node power, CPU power, or memory power) and the variables (25 performance counters) as follows: metric $\sim $ TOT\_INS + BR\_CN + BR\_NTK + L1\_TCM + L1\_LDM + L1\_DCM + L1\_ICA + L1\_ICH + L1\_ICM + L2\_TCM + L2\_TCA + L2\_TCH + L2\_LDM + TLB\_DM + BR\_MSP + RES\_STL + SR\_INS + LD\_INS + BR\_TKN + BR\_INS + L1\_DCA + LST\_INS + REF\_CYC + STL\_ICY + BR\_UCN. When using models to predict a numeric outcome, some measure of accuracy is typically used to evaluate the effectiveness of the model. For instance, the most common method for characterizing a model's predictive capabilities is the root mean square error (RMSE). RMSE is a function of the model residuals, which are computed as the observed values minus the predicted values. It is interpreted either as how far, on average, the residuals are from zero or as the average distance between the observed and predicted values. Another common metric is $R^2$, the coefficient of determination, which can be interpreted as the proportion of the information in the data that is explained by the model. In its simplest version, it is the squared correlation coefficient between the observed and predicted values. It is a measure of correlation, not accuracy.
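The two accuracy measures just defined can be computed directly; a short sketch (the observed/predicted values are illustrative):

```python
import math

def rmse(obs, pred):
    # Root mean square of the residuals (observed minus predicted).
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r_squared(obs, pred):
    # Squared Pearson correlation between observed and predicted values
    # (the simplest version described above).
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    vo = sum((o - mo) ** 2 for o in obs)
    vp = sum((p - mp) ** 2 for p in pred)
    return cov * cov / (vo * vp)

obs = [10.0, 12.0, 14.0, 16.0]
pred = [11.0, 12.0, 13.0, 17.0]
err = rmse(obs, pred)
r2 = r_squared(obs, pred)
```

Note that a model can have a high $R^2$ while being systematically offset from the observations, which is why RMSE is the selection criterion in this paper.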
Therefore, in this paper we use the metric RMSE to evaluate the effectiveness of the models built with the different ML methods and ensemble learning. We choose 15 machine learning methods \cite{1} from the R caret package: tree-/rule-based group -- random forest (rf), cubist (cubist), eXtreme Gradient Boosting (xgbTree), conditional inference tree (ctree), and treebag; nonlinear group -- k-nearest neighbors (knn), support vector machines with linear kernel (svmLinear), Gaussian process with radial basis function (gaussprRadial), multivariate adaptive regression spline (earth), and Bayesian generalized linear model (bayesglm); linear group -- Lasso and elastic-net regularized generalized linear models (glmnet), partial least squares (pls), ridge regression (ridge), elastic net regression (enet), and principal component regression (pcr); within each group, the RMSE values of these methods are very close to each other. Then we use caretEnsemble to create the ensemble of the 15 ML methods, build the model, and rank the performance counters. \subsection{Performance and Power Modeling} In this section, we use the 15 machine learning methods and caretEnsemble \cite{MZ16} to model the performance (runtime) and node power of NT3 and P1B2 using the same training and test sets based on the 80/20\% rule. Table \ref{tab:1} shows the ensemble model for P1B2 that resulted from the ensemble of the following models: cubist, treebag, xgbTree, ctree, rf, glmnet, pls, ridge, pcr, enet, knn, svmLinear, earth, gaussprRadial, and bayesglm. The resulting RMSE for the ensemble model is 200.14 in performance and 3.70 in node power, the most accurate among these models.
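caretEnsemble blends member predictions with learned weights; as a simplified illustration of the idea (a grid search over a convex weight for two members on held-out data, not the package's exact algorithm):

```python
import math

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def best_blend(obs, pred_a, pred_b, steps=100):
    """Search w in [0,1] minimizing RMSE of w*A + (1-w)*B."""
    best = None
    for i in range(steps + 1):
        w = i / steps
        blend = [w * a + (1 - w) * b for a, b in zip(pred_a, pred_b)]
        err = rmse(obs, blend)
        if best is None or err < best[1]:
            best = (w, err)
    return best

obs = [10.0, 20.0, 30.0]
pred_a = [12.0, 22.0, 32.0]   # member model biased high
pred_b = [8.0, 18.0, 28.0]    # member model biased low
w, err = best_blend(obs, pred_a, pred_b)
```

Two oppositely biased members blend to a lower error than either achieves alone, which is why the ensemble RMSE in Tables \ref{tab:1} and \ref{tab:2} can beat every individual model.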
\begin{table} \center \caption{RMSE of different ML models for P1B2} \begin{tabular}{c} \includegraphics[width=.45\textwidth]{ensemble-p1b2.png} \end{tabular} \label{tab:1} \end{table} Similarly, Table \ref{tab:2} shows the ensemble model for NT3 that resulted from the ensemble of the following models: cubist, treebag, xgbTree, ctree, rf, glmnet, pls, ridge, pcr, enet, knn, svmLinear, earth, gaussprRadial, and bayesglm. The resulting RMSE for the ensemble model is 154 in performance and 4.24 in node power. The ensemble model for runtime is the most accurate among these models; however, the power model using rf results in the smallest RMSE, 4.22. This is an exception. Overall, ensemble learning results in the most accurate performance and power models for P1B2 and NT3 in most cases. \begin{table} \center \caption{RMSE of different ML models for NT3} \begin{tabular}{c} \includegraphics[width=.45\textwidth]{ensemble-nt3.png} \end{tabular} \label{tab:2} \end{table} \subsection{Performance Counter Ranking} In this section, we use P1B2 to explore performance counter ranking using model-based feature selection for the 15 ML methods and ensemble learning and using unsupervised feature selection. We compare them to identify the most important performance counters that impact the performance and node power. \subsubsection{Model-Based (Supervised) Feature Selection} In this section, we use the 15 machine learning methods and caretEnsemble to explore performance counter ranking. varImp() is used to extract the variable importance from each member of the ensemble, as well as from the final ensemble model. Then we sum the variable importance values across the models and compute each counter's percentage of the total in order to rank the counters.
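The ranking computation just described (summing varImp() values across members, then normalizing to percentages) can be sketched as follows; the counter names are from our models, while the importance values are made up:

```python
# Variable-importance scores per member model (illustrative values only;
# real scores come from varImp() on each fitted caret model).
var_imp = {
    "rf":     {"BR_CN": 80.0, "RES_STL": 60.0, "LST_INS": 40.0},
    "cubist": {"BR_CN": 70.0, "RES_STL": 50.0, "LST_INS": 60.0},
}

# Sum importance across models, then convert to percentages for ranking.
totals = {}
for scores in var_imp.values():
    for counter, value in scores.items():
        totals[counter] = totals.get(counter, 0.0) + value
grand = sum(totals.values())
pct = {c: 100.0 * v / grand for c, v in totals.items()}
ranking = sorted(pct, key=pct.get, reverse=True)
```

The percentages sum to 100, so the ranking is directly comparable across the runtime and power models even though the raw varImp() scales differ per method.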
\begin{table} \center \caption{Performance counter ranking for performance models} \begin{tabular}{c} \includegraphics[width=.48\textwidth]{time-cr-p1b2.png} \end{tabular} \label{tab:3} \end{table} Table \ref{tab:3} presents the top 6 counters in each performance model using the different ML methods. A total of 15 different counters are listed in the table. We observe that 7 of the 15 ML methods listed (ridge, pcr, enet, knn, svmLinear, gaussprRadial, and bayesglm) have the same performance counter ranking order. However, they are not highly correlated with each other, as shown in Figure \ref{fig:6}. The others have different performance counter rankings. We note that ensemble learning provides the overall counter ranking, which is different from that of all 15 ML methods and includes the top counter from each ML model. The counter percentage is the counter's importance value divided by the sum of all counter importance values. We note that BR\_CN, RES\_STL, and LST\_INS are the top 3 counters in performance modeling provided by ensemble learning. \begin{figure} \center \includegraphics[width=.45\textwidth]{ensemble-time-p1b2.png} \caption{ML performance model correlation matrix for P1B2} \label{fig:6} \end{figure} Table \ref{tab:4} presents the top 6 counters in each node power model using the different ML methods; these differ from those in Table \ref{tab:3}. A total of 21 different counters are listed in the table. We observe that the same 7 ML methods (ridge, pcr, enet, knn, svmLinear, gaussprRadial, and bayesglm) have the same performance counter ranking order and that ridge and gaussprRadial have a high correlation with the others, as shown in Figure \ref{fig:7}. We note that L1\_ICH, LST\_INS, and L2\_LDM are the top 3 counters in power modeling provided by ensemble learning.
\begin{table} \center \caption{Performance counter ranking for power models} \begin{tabular}{c} \includegraphics[width=.48\textwidth]{power-cr-p1b2.png} \end{tabular} \label{tab:4} \end{table} \begin{figure} \center \includegraphics[width=.45\textwidth]{ensemble-power-p1b2.png} \caption{ML power model correlation matrix for P1B2} \label{fig:7} \end{figure} \subsubsection{Unsupervised Feature Selection} Next, we use unsupervised feature selection and compare it with the model-based feature selection. For unsupervised feature selection, we use two types of methods \cite{KJ13}: (1) wrapper methods -- recursive feature elimination (RFE), genetic algorithm (GA), and simulated annealing (SA); and (2) filter methods -- stepwise and selection by filter (SBF). Then we compare them with the results from the ensemble learning. We still use P1B2 as an example. The RFE method supports random forest (rfFuncs), linear regression (lmFuncs), and bagged tree (treebagFuncs). The GA method supports random forest (rfGA) and bagged tree (treebagGA). The SA method supports random forest (rfSA) and bagged tree (treebagSA). The SBF method supports random forest (rfSBF), linear regression (lmSBF), and bagged tree (treebagSBF). Stepwise forward and backward selection uses glm. Table \ref{tab:5} shows the performance counter ranking for performance models using unsupervised feature selection. It confirms that the top two counters are BR\_CN and RES\_STL, as provided by ensemble learning in Table \ref{tab:3}. The other counters also occur in Table \ref{tab:5}.
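Recursive feature elimination, the first wrapper method above, repeatedly refits a model and drops the least important feature; a generic sketch with a pluggable scorer (not caret's rfe() implementation):

```python
def rfe(features, score_fn, keep=2):
    """Drop the lowest-scoring feature until `keep` remain.

    score_fn(feats) -> {feature: importance}; a real scorer would
    refit the model on each reduced feature subset.
    """
    feats = list(features)
    while len(feats) > keep:
        scores = score_fn(feats)
        feats.remove(min(feats, key=scores.get))
    return feats

# Toy scorer with fixed importances, using counter names from our study.
fixed = {"BR_CN": 0.9, "RES_STL": 0.7, "TLB_DM": 0.4, "L1_ICH": 0.2}
selected = rfe(fixed, lambda fs: {f: fixed[f] for f in fs}, keep=2)
```

Because the model is refit at every step, a real RFE run can reorder the survivors as correlated features are removed, which is why it is a useful cross-check on the model-based ranking.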
\begin{table} \center \caption{Performance counter ranking for performance models} \begin{tabular}{c} \includegraphics[width=.45\textwidth]{fs-time-cr-p1b2.png} \end{tabular} \label{tab:5} \end{table} \begin{table} \center \caption{Performance counter ranking for power models} \begin{tabular}{c} \includegraphics[width=.45\textwidth]{fs-power-cr-p1b2.png} \end{tabular} \label{tab:6} \end{table} Table \ref{tab:6} shows the performance counter ranking for node power models using unsupervised feature selection. It shows that TLB\_DM is the top counter among the top 6 counters in Table \ref{tab:4}. Two of the top three counters (L1\_ICH, LST\_INS, and L2\_LDM) provided by ensemble learning in Table \ref{tab:4} are also top counters in Table \ref{tab:6}. Overall, compared with each individual ML method, we find that ensemble learning provides a more robust performance counter ranking. \subsection{Discussion} In summary, for P1B2, we used unsupervised feature selection methods to confirm that ensemble learning provides a more robust performance counter ranking than each individual ML method does. When we consider runtime and node power for application improvement for P1B2, we need to focus on the counters BR\_CN, L1\_ICH, and TLB\_DM. Similarly, when we apply the same methodology to NT3, we find that the counters L1\_ICM, L2\_TCH, and TLB\_DM are the focal counters for application improvement.
\section{Introduction} Radial metallicity gradients are observed in the disks of many galaxies, including the Milky Way, galaxies of the Local Group, and other objects. Current research topics include the determination of (i) the magnitude of the gradients, (ii) any space variations along the disk, and (iii) possible time variations during the evolution of the host galaxy. In this paper, we present some recent observational evidence of radial abundance gradients. Our main focus is the Milky Way, but it will be shown that the analysis of some objects in the Local Group, particularly M33, is useful in order to study the main properties of the gradients in our own Galaxy. A brief discussion of some recent theoretical models is also given, in an effort to highlight their predictions concerning the radial abundance gradients and their variations. Some recent reviews and general papers on abundance gradients include: Freeman (\cite{freeman}), Maciel \& Costa (\cite{mc2009}), Rudolph et al. (\cite{rudolph}), and Stasi\'nska (\cite{stasinska04}). Theoretical models are discussed by a number of authors, including Fu et al. (\cite{fu}), Magrini et al. (\cite{magrini09a}), Cescutti et al. (\cite{cescutti}), Moll\'a and D\'\i az (\cite{molla}), Chiappini et al. (\cite{chiappini03}, \cite{chiappini01}, \cite{chiappini97}), Hou et al. (\cite{hou}), and Gensler (these proceedings). Recent discussions on azimuthal and vertical gradients, not treated here, can be found in Davies et al. (\cite{davies}) and Ivezi\'c et al. (\cite{ivezic}). \section{Abundance gradients in the Milky Way} \subsection{Cepheids} Cepheid variables are possibly the most accurate indicators of abundance gradients in the Milky Way. Since the work of Andrievsky and collaborators (cf. Andrievsky et al. \cite{sergei1}, \cite{sergei2}, \cite{sergei3}, \cite{sergei4}, Luck et al.
\cite{luck}), several papers have dealt with these objects in order to study not only the magnitudes of the present-day gradients but also the detailed behaviour of the abundances along the galactic disk. The main reason is that Cepheids are bright enough to be observed at large distances, so that accurate distances and spectroscopic abundances of several elements can be obtained, which in principle allows the determination of radial variations of the gradients better than with any other indicator. The ages of these objects are generally under 200 Myr (cf. Maciel et al. \cite{mlc2005}), so that the measured gradients can be safely considered as present-day gradients. Recent work on Cepheids includes Lemasle et al. (\cite{lemasle08}), Pedicelli et al. (\cite{pedicelli}), and Romaniello et al. (\cite{romaniello}). Lemasle et al. (\cite{lemasle08}) obtained high-resolution spectroscopic iron abundances for galactic Cepheids for which accurate distances were determined based on near-infrared photometry. The abundances were determined to within 0.12 dex, while the distances, based on a near-infrared period-luminosity relation, are expected to be accurate to within 0.5 kpc on average. In order to improve the sampling, additional objects were included. The average gradient obtained in the range 5--17 kpc is $d[{\rm Fe/H}]/dR = -0.052 \pm 0.003$ dex/kpc. A better solution proposed by the authors includes a change of slope, in the sense that the gradient is steeper in the inner Galaxy, with a flattened gradient of $-0.012 \pm 0.014$ dex/kpc in the outer Galaxy. In this region the abundances show an increased spread, as compared with the inner Galaxy. The change of slope occurs at about $R \simeq 10$ kpc, farther away than the solar radius, located at $R = 8.5$~kpc. These results have been largely confirmed by the recent work of Pedicelli et al. (\cite{pedicelli}), in which again a large sample from different sources was considered with a new photometric metallicity calibration.
These results suggest a gradient of $-0.051 \pm 0.004$ dex/kpc for the whole sample, or $-0.130 \pm 0.015$ dex/kpc for the inner sample ($R < 8$ kpc) and $-0.042 \pm 0.004$ dex/kpc for the outer Galaxy. \subsection{HII regions} Concerning HII regions, several determinations have been presented in the last few years, including Deharveng et al. (\cite{deharveng}), Esteban et al. (\cite{esteban}), Rudolph et al. (\cite{rudolph}), and Quireza et al. (\cite{quireza}). Deharveng et al. (\cite{deharveng}) obtained an oxygen gradient of $-0.04$ dex/kpc in the range 5--15 kpc, with no indication of flattening farther out in the disk. Pilyugin et al. (\cite{pilyugin}) compiled spectra of 13 HII regions in the range 7--14 kpc with available [OIII]$\lambda$4363 measurements and recomputed the oxygen abundances from the data by Shaver et al. (\cite{shaver}), obtaining a gradient of $-0.051$ dex/kpc. Esteban et al. (\cite{esteban}) obtained echelle spectrophotometry of 8 HII regions in the range 6--10 kpc and derived carbon and oxygen abundances from recombination lines. The oxygen gradient obtained is $-0.044 \pm 0.010$ dex/kpc. More recently, Rudolph et al. (\cite{rudolph}) used both infrared and optical data to study abundance gradients in a sample of 117 HII regions. The data include both new results and a reanalysis of previous material. The best fit to the optical data corresponds to a gradient of $-0.060$ dex/kpc. For the infrared data a gradient of $-0.041$ dex/kpc was obtained. Quireza et al. (\cite{quireza}) determined the electron temperature gradient from high-precision radio recombination line and continuum measurements for over a hundred HII regions, calibrated in terms of the oxygen abundance gradient. The derived slope, obtained using a relation between the electron temperature and the oxygen abundances by Shaver et al. (\cite{shaver}), is $-0.043 \pm 0.007$ dex/kpc, in good agreement with the previous work by Deharveng et al.
(\cite{deharveng}) and Esteban et al. (\cite{esteban}). From these results it is difficult to establish whether or not the gradients flatten out at large galactocentric distances. However, studies of HII regions in spiral galaxies are consistent with essentially constant abundances in the outer parts of these objects, as for example in the recent work of Bresolin et al. (\cite{bresolin09a}) on the extended disk of M83, in which a flat oxygen gradient was obtained beyond the R25 isophotal radius, regardless of the abundance indicator used. \subsection{Stars and Open Clusters} Apart from Cepheids, other field stars can be used to analyze the abundance gradients in the Milky Way. For open cluster stars, both the metallicity and the distances are well determined, and they have a reasonably large age span, so that they can be used to investigate both the spatial and temporal changes in the gradients. However, the age determinations may depend on the calibration used. Work up to 2007 is summarized by Cescutti et al. (\cite{cescutti}), where gradients of O, Mg, Si, S, and Ca are discussed. Several objects have been considered, comprising Cepheids, O and B stars, red giants, and two samples of open clusters. The main feature common to all data is a change of slope at large galactocentric radii, roughly $R > 10$ kpc, which characterizes the flattening of the gradients in the outer Galaxy. Theoretical models are also discussed, which assume an inside-out formation of the galactic disk with a timescale of 7 Gyr for the thin disk in the solar vicinity and a shorter timescale of 0.8 Gyr for the galactic halo. The inside-out scenario is apparently a necessary condition to explain the present-day gradients and the variation of the star formation rate with galactocentric radius. In fact, recent models by Colavitti et al.
(\cite{colavitti09}) conclude that all disk constraints cannot be simultaneously satisfied unless an inside-out formation of the galactic disk is assumed. More recent work on open clusters (Sestito et al. \cite{sestito}, Magrini et al. \cite{magrini09a}) generally confirms these findings, using larger samples. A steep negative gradient is found for the inner Galaxy up to about 10--11 kpc, with a flat distribution in the outer Galaxy. These results are compared with the earlier work by Friel et al. (\cite{friel}) based on low-resolution spectroscopy. Although the error bars are larger, there is not much difference relative to the high-resolution data for the inner Galaxy. The slopes are $-$0.17 dex/kpc and $-$0.09 dex/kpc for these samples, considering only the inner region. It is doubtful whether the difference is meaningful at this stage. In the outer Galaxy, an essentially flat gradient is observed for the new sample. In view of the age span of the open clusters, they are ideally suited to study any time variations of the gradients. Magrini et al. (\cite{magrini09a}) considered a sample of open clusters with high-resolution data, in the range 7--22 kpc, with estimated ages in the range 30 Myr to 11 Gyr. The authors present plots of the gradients according to the age interval of the clusters for Fe, Cr, Ni, Si, Ca, and Ti, with a generally similar behaviour. Also, the ratios of these elements to Fe are essentially constant, suggesting that the derived gradients are similar within the uncertainties. Again, a steep gradient was found for the inner Galaxy ($R < 12$~kpc), with a flattening outwards. The flattening can be observed in all age brackets but is especially clear in the 4--11 Gyr sample. Considering the gradients obtained by adopting three different galactocentric ranges leads to the conclusion that the gradients are approximately constant or slightly flattening with time.
However, from the slopes of the inner Galaxy, this conclusion is probably a conservative one, as the indication of a flattening of the gradients seems clear. Recent large surveys of galactic stars are also being used to derive abundance gradients, mainly based on the [Fe/H] ratio. Some examples can be seen in the presentations by B. Nordstr\"om on the Geneva-Copenhagen survey and by C. Boesche on the RAVE project (cf. the conference site of {\it The Milky Way and the Local Group: Now and in the GAIA Era}, Heidelberg, 2009). \subsection{Planetary Nebulae} \begin{figure} \centering \includegraphics[angle=0,width=12cm]{fig1.eps} \caption{The O/H radial gradient from PN. Top: Distances from the IAG/USP group, Bottom: Distances from Stanghellini et al. (\cite{ssv}). Left: CSPN with ages in the range 2--10 Gyr; Right: CSPN with ages in the range 4--6 Gyr.} \label{fig1} \end{figure} The analysis of abundance gradients from planetary nebulae (PN) is hampered by two main difficulties: first, the distances to the galactic nebulae are often uncertain, and statistical scales have to be used in order to have a sizable sample; second, the progenitors of the planetary nebulae have a wide age span, as in the case of the open clusters, ranging from about 1 Gyr to about 8 Gyr (cf. Maciel et al. \cite{mci2009}), so that any time variation of the gradients would have to be taken into account. On the other hand, abundances of elements such as O, Ne, S, and Ar can be obtained to within about 0.2 dex on average, which is probably lower than the abundance spread at a given galactocentric radius. The results presented in the last couple of years have been rather contradictory. Pottasch and collaborators (cf.
Pottasch and Bernard-Salas \cite{pottasch}) have presented accurate abundance data based both on optical and infrared measurements from ISO, which do not require the usually uncertain ionization correction factors (ICFs), since more of the ionized species can be observed directly. As a result, the derived abundances are expected to be more accurate than in the case of the traditional plasma diagnostic method. The results suggest the presence of a strong negative gradient similar to the ones observed in HII regions and early type stars. For the O/H ratio a slope of $-0.085$ dex/kpc was found. For most of the elements considered, namely O, Ne, S, and Ar, the predicted abundances at the solar radius obtained by taking into account the observed gradients exactly match the solar abundances. In a more recent work (Gutenkunst et al. \cite{gutenkunst}), a larger sample was considered, including bulge nebulae. It can be concluded that the gradient flattens out near the galactic bulge, a result also obtained by Cavichia et al. (\cite{cavichia}). In contrast with these results, Stanghellini et al. (\cite{stanghellini06}) studied a sample of galactic PN with abundances derived from the traditional optical plasma diagnostic method, obtaining a flat gradient for oxygen and neon, based on a simple linear fit. However, from what we have seen in the previous sections, it seems clear that the gradients flatten out in the outer Galaxy, so that using a single fit for the whole disk may be misleading, especially if the inner Galaxy, where the gradient is steeper, is undersampled, as is the case here. Moreover, the data in this sample located in the region around the solar circle clearly show some evidence of a steeper gradient, so that a flat gradient for the whole disk is probably incorrect.
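The pitfall of fitting a single slope to a disk whose gradient breaks can be illustrated with a small numerical sketch. The break radius, slopes, and sampling below are purely illustrative values, not data from any of the surveys discussed here:

```python
# Illustrative sketch: why a single linear fit can hide a two-slope gradient.
# Synthetic abundance profile (NOT data from any survey discussed here): a
# steep inner gradient that flattens beyond an assumed break radius.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys versus xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

R_break = 10.0                              # kpc, assumed break radius
inner_true, outer_true = -0.09, -0.01       # dex/kpc, illustrative slopes

def oh(r):
    """Noiseless two-slope 12 + log(O/H) profile."""
    if r <= R_break:
        return 8.9 + inner_true * r
    return 8.9 + inner_true * R_break + outer_true * (r - R_break)

radii = [4.0 + 0.5 * i for i in range(29)]  # 4--18 kpc
abund = [oh(r) for r in radii]

single = ols_slope(radii, abund)
inner = ols_slope([r for r in radii if r <= R_break],
                  [oh(r) for r in radii if r <= R_break])
outer = ols_slope([r for r in radii if r > R_break],
                  [oh(r) for r in radii if r > R_break])

print(f"single fit: {single:+.3f} dex/kpc")   # lies between the two true slopes
print(f"inner fit:  {inner:+.3f} dex/kpc")    # recovers -0.090
print(f"outer fit:  {outer:+.3f} dex/kpc")    # recovers -0.010
```

A single fit over the whole range returns an intermediate slope, and the bias worsens if the steep inner region is undersampled, which is precisely the concern raised above.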
A more detailed analysis was made by Perinotto and Morbidelli (\cite{perinotto}), who considered a larger and more complete sample of PN for which the abundances were recalculated in a homogeneous way. The results also suggest relatively flat gradients ($<$ $-$0.04 dex/kpc) for oxygen, but a careful analysis of their data shows that the uncertainties in the distances, coupled with a possible time variation of the gradients, may wash out the gradients, so that a careful selection of the objects must be made. This can be seen by considering the oxygen gradients for PN of sets A and B, defined as follows: set A includes 131 objects whose abundances are considered by the authors as the most reliable, and set B is a control sample, containing all PN abundances published between 2000 and 2005, with about 200 objects. In order to avoid any bias due to the adopted distances, Perinotto and Morbidelli (\cite{perinotto}) considered four different statistical scales. Considering for example the results corresponding to the distances by Cahn et al. (\cite{cahn}), a gradient is apparent from set A, while set B presents a flat distribution, suggesting that the uncertainties in the abundances contribute to erasing any existing gradients. Some indication of a time variation of the gradients, in the sense that the present-day gradient is flatter than in the past, can also be observed in the results of Perinotto and Morbidelli (\cite{perinotto}). They analyzed the PN separately according to the Peimbert types, in which Type I are expected to be younger objects, while Type III are older nebulae, generally located at larger distances from the galactic plane and with larger peculiar velocities. Type II are intermediate-age objects in this scheme. According to Perinotto and Morbidelli (\cite{perinotto}), Type I objects do not show any gradients in either set, while Type II and III show measurable, albeit low, gradients.
Also, for the distance scales with a meaningful sample, the gradients of Type III objects are larger than for Type II PN. \begin{figure} \centering \includegraphics[angle=0,width=8cm]{fig2.eps} \caption{Time variation of the radial abundance gradient (Maciel \& Costa \cite{mc2009}).} \label{fig2} \end{figure} A different approach has been taken by the IAG/USP group (Maciel et al. \cite{mcu2003}, \cite{mlc2005}, \cite{mlc2006}, \cite{mci2009}, Maciel \& Costa \cite{mc2009}). Here an effort has been made to divide the PN sample into age groups, as was done for the open clusters by Magrini et al. (\cite{magrini09a}), so that any time variation of the gradients could be appropriately taken into account. Several methods have been developed to obtain the age distribution of the PN central stars. Fig.~1 shows the O/H gradient for disk objects with ages of 2--10 Gyr (left) and 4--6 Gyr (right), using the distance scale adopted by our IAG Basic Sample (top), or the distance scale by Stanghellini et al. (\cite{ssv}) (bottom). The effect of restricting the age interval is clear in both figures. Taking into account the age distribution of the PN progenitor stars (Maciel et al. \cite{mci2009}), it is possible to separate the PN sample according to their ages (young, intermediate, old), so that an estimate of the time variation of the gradients can be obtained. This is shown in Fig.~2 (Maciel \& Costa \cite{mc2009}), where other objects are also considered, namely open clusters, Cepheids, OB stars, and HII regions. In conclusion, data from planetary nebulae support the flattening of the gradients near the bulge-disk interface and at large galactocentric distances. However, anticentre nebulae are difficult to observe, and the problem of the distances is still a complicating factor. A considerable improvement is expected with the advent of GAIA.
As for the time variation of the gradients, a conservative conclusion at this stage is that either they have not changed very much in roughly the last 6 Gyr, or they may have flattened out by a small amount. These conclusions are supported by some recent theoretical models by Fu et al. (\cite{fu}), which take into account infall, star formation based on the Kennicutt law, and a delayed disk formation. \section{The Electron Temperature Gradient} An important confirmation of the abundance gradients in photoionized nebulae comes from the expected electron temperature gradient, since the heavy elements for which radial gradients are observed are the main coolants in these objects. Therefore, a similar, albeit inverted, gradient is expected for HII regions and PN. This is in fact observed, as can be seen even in the earlier papers on this subject. Also, well defined electron temperature gradients have been measured in spiral galaxies such as NGC 300 and M101 (Bresolin et al. \cite{bresolin09b}) and M33 (Magrini et al. \cite{magrini07a}). More recently, Quireza et al. (\cite{quireza}) have determined accurate electron temperature gradients from radio recombination lines. The average HII region gradient is $287 \pm 46$ K/kpc. A somewhat higher gradient of $373$ K/kpc was obtained earlier by Deharveng et al. (\cite{deharveng}). Similar conclusions have been obtained by Maciel et al. (\cite{mqc2007}) for planetary nebulae, for which a steeper electron temperature gradient was derived, amounting to about $670$ K/kpc. Fig.~3 shows both the electron temperature gradient of PN and the corresponding correlation with the oxygen abundances. The difference between the electron temperature gradients of HII regions (flatter) and planetary nebulae (steeper) is a strong indication that the gradients have flattened out since the PN progenitor stars formed.
\begin{figure} \centering \includegraphics[angle=-90,width=13cm]{fig3.eps} \caption{Left: The [OIII] electron temperature gradient from planetary nebulae. Right: The correlation between the electron temperatures and oxygen abundances (Maciel et al. \cite{mqc2007}).} \label{fig3} \end{figure} \section{M33: A Very Interesting Case} The galaxy M33 is a very interesting case, where abundance gradients have recently been measured from both HII regions and PN, among other objects. Rubin et al. (2008) have obtained Ne and S abundances in a sample of HII regions in M33 using Spitzer data, covering a wide range of galactocentric distances. Average gradients of $-0.058 \pm 0.014$ dex/kpc and $-0.052 \pm 0.021$ dex/kpc are obtained for Ne/H and S/H, respectively. Magrini et al. (\cite{magrini07a}, \cite{magrini07b}) analyzed abundances of O, N, and S in a sample of HII regions and derived both electron temperature and abundance gradients for this galaxy within a radius of about 7 kpc from the galactic nucleus. The electron temperature gradient amounts to about 570 K/kpc, corresponding to an O/H gradient of $-0.054$ dex/kpc. By considering additional objects from the recent literature, it is concluded that the oxygen data cannot be fitted with a single slope, so that an inner slope of $-0.19$ dex/kpc and an outer slope of $-0.04$ dex/kpc are suggested. Theoretical models have been developed in which a continuous infall of gas onto the disk is assumed. The models are calculated at different epochs, varying from an age of 2 Gyr to the present day, at 13.6 Gyr. According to this model, the gradients show some mild flattening with time. More recently, Magrini et al. (\cite{magrini09b}) considered a larger PN sample and obtained a relatively weak O/H gradient of $-0.03$ dex/kpc to $-0.04$ dex/kpc. Similar values were measured for Ne/H ($-0.03$ to $-0.05$ dex/kpc) and S/H ($-0.03$ to $-0.04$ dex/kpc).
The recalculated O/H gradient for HII regions is similar to these values. Therefore, at face value the PN gradients are marginally steeper than the corresponding HII region gradients. However, a single slope was obtained for the whole sample, suggesting that the derived gradients are probably lower limits for the inner galaxy, since the gradients tend to flatten out at larger galactocentric distances. Moreover, PN may have progenitors with different ages, so that mixing these objects would contribute to flattening the measured slopes. This is reinforced by the fact that the inner sample, which is associated with the highest metallicities, is undersampled. A hint on this point can be obtained by considering the results of Cioni (\cite{cioni}) on M33. She has considered a sample of well-measured AGB stars, showing that the [Fe/H] gradient is clearly steeper in the inner parts of the galaxy, flattening out in the outer parts. Since PN and AGB stars are objects of similar ages, their gradients are expected to be similar, which reinforces the conclusion that the flatter gradients found by Magrini et al. (\cite{magrini09b}) are lower limits. \section{Conclusions} From all objects considered, some tentative conclusions may be drawn: (1) Average abundance gradients are generally between $-0.03$ dex/kpc and $-0.10$ dex/kpc, but a single value for the whole disk may be misleading. (2) Most evidence points to a flattening out of the gradients at large galactocentric distances. (3) There is clear evidence of a flattening of the gradients near the galactic bulge. (4) The change of slope in the outer Galaxy occurs in the region around $R \simeq 10$ kpc. (5) Any further change of the slope needs better data than presently available. Cepheids may be an exception. (6) There is no evidence of a steepening of the gradients at large galactocentric distances, as suggested by some theoretical models.
(7) Either the gradients do not change appreciably during galactic evolution, or they flatten out at a moderate rate. (8) There is no clear evidence of a steepening of the gradients with time, as suggested by some theoretical models. \bigskip\noindent {\it Acknowledgements. This work was partially supported by FAPESP and CNPq.}
\section{Introduction} \label{sec:intro} Abell 2597 is a cool core cluster of galaxies at redshift $z=0.0821$ (\autoref{fig:overview}). The galaxies inhabit a megaparsec-scale bath of X-ray bright, $\sim 10^{7-8}$ K plasma whose central particle density is sharply peaked about a giant elliptical brightest cluster galaxy (BCG) in the cluster core. Under the right conditions (e.g., \citealt{fabian94, peterson06}), the dense halo of plasma that surrounds this galaxy can act like a reservoir from which hot gas rapidly cools, driving a long-lived rain of thermally unstable multiphase gas that collapses toward the galaxy's center (e.g., \citealt{gaspari17b}), powering black hole accretion and $\sim 5$ \Msol\ yr\mone\ of star formation \citep{tremblay12a,tremblay16}. The rate at which these cooling flow mass sinks accumulate would likely be higher were the hot atmosphere not permeated by a $\sim30$ kpc-scale network of buoyantly rising bubbles (\autoref{fig:overview}\textit{a}), inflated by the propagating jet launched by the BCG's central accreting supermassive black hole \citep{taylor99,mcnamara01,clarke05,tremblay12b}. Those clouds that have managed to cool now form a multiphase filamentary nebula, replete with young stars, that spans the inner $\sim30$ kpc of the galaxy. Its fractal tendrils, likely made of many cold molecular clouds with warmer ionized envelopes (e.g., \citealt{jaffe05}), wrap around both the radio jet and the X-ray cavities the jet has inflated (\autoref{fig:overview}\textit{b}/\textit{c}, \citealt{mcnamara93, voit97, koekemoer99, mcnamara99, odea04,oonk10,tremblay12a, tremblay15, mittal15}). \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{Fig_overview.pdf} \end{center} \vspace*{-5mm} \caption{ A multiwavelength view of the Abell 2597 Brightest Cluster Galaxy.
(\textit{Left}) \textit{Chandra} X-ray, \textit{HST} and DSS optical, and Magellan H$\alpha$+[N~\textsc{ii}] emission is shown in blue, yellow, and red, respectively {\small (Credit: X-ray: NASA/CXC/Michigan State Univ/G.Voit et al; Optical: NASA/STScI \& DSS; H$\alpha$: Carnegie Obs./Magellan/W.Baade Telescope/U.Maryland/M.McDonald)}. (\textit{Top right}) \textit{HST}/STIS MAMA image of Ly$\alpha$ emission associated with the ionized gas nebula. Very Large Array (VLA) radio contours of the 8.4 GHz source are overlaid in black. (\textit{Bottom right}) Unsharp mask of the \textit{HST}/ACS SBC far-ultraviolet continuum image of the central regions of the nebula. 8.4 GHz contours are once again overlaid. In projection, sharp-edged rims of FUV continuum to the north and south wrap around the edges of the radio lobes. Dashed lines indicate relative fields of view between each panel. The centroids of all panels are aligned, with East left and North up. This figure has been partially adapted from \citealt{tremblay16}. } \label{fig:overview} \end{figure*} These X-ray cavities act as a calorimeter for the efficient coupling between the kinetic energy of the jet and the hot intracluster medium through which it propagates (e.g., \citealt{churazov01,churazov02}). Given their ubiquity in effectively all cool core clusters, systems like Abell 2597 are canonical examples of mechanical black hole feedback, a model now routinely invoked to reconcile observations with a theory that would otherwise over-predict the size of galaxies and the star formation history of the Universe (see, e.g., reviews by \citealt{veilleux05,mcnamara07,mcnamara12,fabian12,alexander12,kormendy13,gaspari13,bykov15}).
Yet, just as for quasar-driven radiative feedback invoked at earlier epochs (e.g., \citealt{croton06,bower06}), the degree to which the mechanical luminosity of jets might quench (or even trigger) star formation depends on how it might couple to the origin and fate of cold molecular gas, from which all stars are born. Observational evidence for this coupling grows even in the absence of a consensus explanation for it. The density contrast between hot ($\sim10^7$ K) plasma and cold ($\sim10$ K) molecular gas is nearly a million times greater than that between air and granite. So while one might naturally expect that the working surface of a jet can drive sound waves and shocks into the tenuous X-ray atmosphere, it is more difficult to explain the growing literature reporting observations of massive atomic and molecular outflows apparently entrained by jets (e.g., \citealt{morganti05,morganti13,rupke11,alatalo11,alatalo15, dasyra15, cicone14, cicone18}), or uplifted in the wakes of the buoyant hot bubbles they inflate (e.g., \citealt{mcnamara14,mcnamara16,russell14,russell16a,russell16b,russell17}). One might instead expect molecular nebulae to act like seawalls, damping turbulence, breaking waves in the hotter phases of the ISM, and redirecting jets. Recent single-dish and Atacama Large Millimeter/submillimeter Array (ALMA) observations of cool core clusters nevertheless reveal billions of solar masses of cold gas in kpc-scale filaments draped around the rims of radio lobes or X-ray cavities (e.g., Perseus: \citealt{salome08,lim08}, Phoenix: \citealt{russell16b}, Abell 1795: \citealt{russell17}, M87: \citealt{simionescu18}), or trailing behind them as if drawn upward by their buoyant ascent (e.g., Abell 1835: \citealt{mcnamara14}; 2A 0335+096: \citealt{vantyghem16}; PKS 0745-191: \citealt{russell16a}). 
Such a coupling would be easier to understand were it the manifestation of a top-down multiphase condensation cascade, wherein both the warm ionized and cold molecular nebulae are pools of cooling gas clouds that rain from the ambient hot halo. The disruption of this halo into a multiphase medium is regulated by the survivability of thermal instabilities, which lose entropy over a cooling time $t_\mathrm{cool}$, descend on a free-fall time $t_\mathrm{ff}$, and remain long-lived only if their local density contrast increases as they sink (e.g., \citealt{voit17a}). This implies that there is an entropy threshold for the onset of nebular emission in BCGs, long known to exist observationally \citep{rafferty08,cavagnolo08}, set wherever the cooling time becomes short compared to the effective gas dynamical timescale. This underlying principle is not new (e.g., \citealt{hoyle53,rees77,binney77,cowie80,nulsen86,balbus89}), but has found renewed importance in light of recent papers arguing that it may be fundamental to all of galaxy evolution \citep{pizzolato05,pizzolato10,marinacci10,mccourt12, sharma12,gaspari12,gaspari13,gaspari15,gaspari17b,gaspari18,voit15,voit15d,voit15b,voit15c,voit17a,voit18,li15,prasad15, prasad17, prasad18, singh15, mcnamara16, yang16, meece17, hogan17, main17,pulido18}. Amid minor disagreement over the importance of the free-fall time (compare, e.g., \citealt{voit15d}, \citealt{mcnamara16} and \citealt{gaspari17b}), these works suggest that the existence of this threshold establishes a stochastically oscillating but tightly self-regulated feedback loop between ICM cooling and AGN heating. The entire process would be mediated by chaotic cold accretion (CCA) onto the central supermassive black hole \citep{gaspari13}, a prediction that has recently found observational support with the detection of cold clouds falling toward black hole fuel reservoirs (e.g., \citealt{tremblay16}; Edge et al.~in prep.).
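As a rough illustration of the timescale comparison above, one can estimate $t_\mathrm{cool}$ and $t_\mathrm{ff}$ directly. The sketch below uses purely illustrative hot-halo values; the density, temperature, single-value cooling function, and isothermal potential are assumptions for a generic cluster core, not measurements of Abell 2597:

```python
import math

# Back-of-the-envelope t_cool / t_ff estimate for hot-halo gas.
# All input values are illustrative, not measurements of Abell 2597.

k_B = 1.380649e-16   # erg/K
kpc = 3.0857e21      # cm
Gyr = 3.156e16       # s

def t_cool(n_e, T, Lam=1e-23):
    """Isochoric cooling time [s]: (3/2) n k T / (n_e n_H Lambda).
    Assumes a fully ionized plasma with n ~ 2.3 n_H and n_e ~ 1.2 n_H,
    and a rough cooling function Lambda ~ 1e-23 erg cm^3 s^-1."""
    n_H = n_e / 1.2
    n = 2.3 * n_H
    return 1.5 * n * k_B * T / (n_e * n_H * Lam)

def t_ff(r, sigma=250e5):
    """Free-fall time [s] from radius r [cm] in an isothermal potential
    with velocity dispersion sigma [cm/s], where g = 2 sigma^2 / r."""
    g = 2 * sigma**2 / r
    return math.sqrt(2 * r / g)

tc = t_cool(n_e=0.05, T=3e7)   # n_e in cm^-3, T in K
tf = t_ff(10 * kpc)
print(f"t_cool ~ {tc/Gyr:.2f} Gyr, t_ff ~ {1e3*tf/Gyr:.0f} Myr, ratio ~ {tc/tf:.0f}")
```

For these inputs the ratio comes out at a few tens; precipitation-type models generally place the condensation threshold near $t_\mathrm{cool}/t_\mathrm{ff} \approx 10$, so modest uplift or density enhancement can tip such gas into the unstable regime.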
The radio jets that the black hole launches, and the buoyant hot bubbles it inflates, inject sound waves, shocks, and turbulence into the X-ray bright halo, lowering the cooling rate and acting as a thermostat for the heating-cooling feedback loop (e.g., \citealt{birzan04,birzan12,zhuravleva14,hlavacek12,hlavacek15,gaspari17a}). Those same outflows can adiabatically uplift low-entropy gas to an altitude that crosses the thermal instability threshold, explaining their close spatial association with molecular filaments and star formation \citep{tremblay15,russell17}. In this scenario, a supermassive black hole acts much like a mechanical pump in a water fountain\footnote{The supermassive black hole, in this case, is akin to the ``pump-like'' action of supernova feedback driving similar fountains in less massive galaxies \citep{fraternali08,marinacci11,marasco13,marasco15}.} (e.g.~\citealt{lim08,salome06,salome11}), wherein cold gas drains into the black hole accretion reservoir, powering jets, cavity inflation, and therefore a plume of low-entropy gas uplifted as they rise. The velocity of this cold plume is often well below both the escape speed from the galaxy and the Kepler speed at any given radius (e.g., \citealt{mcnamara16}), and so those clouds that do not evaporate or form stars should then rain back toward the galaxy center from which they were lifted. This, along with merger-induced gas motions \citep{lau17} and the feedback-regulated precipitation of thermal instabilities from the hot atmosphere, keeps the fountain long-lived and oscillatory.
The apparently violent and bursty cluster core must nevertheless be the engine of a process that is smooth over long timescales, as the remarkably fine-tuned thermostatic control of the heating-cooling feedback loop now appears to persist across at least ten billion years of cosmic time (e.g., \citealt{birzan04,birzan08,rafferty06,dunn06,best06,best07,mittal09,dong10,hlavacek12,hlavacek15,webb15,simpson13,mcdonald13b,mcdonald16,mcdonald17,mcdonald18, bonaventura17}). These hypotheses are testable. Whether it is called ``chaotic cold accretion'' \citep{gaspari13}, ``precipitation'' \citep{voit15d}, or ``stimulated feedback'' \citep{mcnamara16}, the threshold criterion predicts that the kinematics of the hot, warm, and cold phases of the ISM should retain memory of their shared journey along what is ultimately the same thermodynamic pathway \citep{gaspari18}. Observational tests for the onset of nebular emission, star formation, and AGN activity, and how these may be coupled to this threshold, have been underway for many years (e.g., \citealt{cavagnolo08,rafferty08,sanderson09,tremblay12b,tremblay14,tremblay15,mcnamara16,voit18,hogan17,main17,pulido18}). The multiphase uplift hypothesis, motivated by theory and simulations \citep{pope10,gaspari12,wagner12,li14a,li14b,li15}, is corroborated by observations of kpc-scale metal-enriched outflows along the radio axis (e.g., \citealt{simionescu09,kirkpatrick11}), and an increasing number of ionized and molecular filaments spatially associated with jets or cavities (e.g, \citealt{salome08,mcnamara14,tremblay15,vantyghem16,russell17}). More complete tests of these supposed kpc-scale molecular fountains will require mapping the kinematics of \textit{all} gas phases in galaxies. 
As we await a replacement for the \textit{Hitomi} mission to reveal the velocity structure of the hot phase \citep{hitomi16,hitomi18, fabian17}, combined ALMA and optical integral field unit (IFU) spectrograph observations of cool core BCGs can at least begin to further our joint understanding of the cold molecular and warm ionized gas motions, respectively. To that end, in this paper we present new ALMA observations that map the kinematics of cold gas in the Abell 2597 BCG. We compare these with new Multi-Unit Spectroscopic Explorer (MUSE, \citealt{bacon10}) IFU data that do the same for the warm ionized phase, as well as a new deep \textit{Chandra} X-ray image revealing what is likely filament uplift by A2597's buoyant hot bubbles. These data are described in \autoref{sec:observations}, presented in \autoref{sec:results}, and discussed in \autoref{sec:discussion}. Throughout this paper we assume $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_M = 0.27$, and $\Omega_{\Lambda} = 0.73$. In this cosmology, 1\arcsec\ corresponds to 1.549 kpc at the redshift of the A2597 BCG ($z=0.0821$), where the associated luminosity and angular size distances are 374.0 and 319.4 Mpc, respectively, and the age of the Universe is 12.78 Gyr. Unless otherwise noted, all images are centered on the nucleus of the A2597 BCG at Right Ascension (R.A.) 23$^{\mathrm{h}}$ 25$^{\mathrm{m}}$ 19.7$^{\mathrm{s}}$ and Declination $-12$\arcdeg\ 07\arcmin\ 27\arcsec\ (J2000), with East left and North up. \begin{deluxetable*}{cccccc} \tabletypesize{\footnotesize} \tablecaption{\textsc{Summary of Abell 2597 Observations}} \tablehead{ \colhead{Waveband / Line} & \colhead{Facility} & \colhead{Instrument / Mode} & \colhead{Exp. Time} & \colhead{Prog. / Obs. 
ID (Date)} & \colhead{Reference} } \colnumbers \startdata \label{tab:observation_summary} X-ray (0.2-10 keV) & \textit{Chandra} & ACIS-S & 39.80 ksec & 922 (2000 Jul 28) & \citet{mcnamara01, clarke05} \cr \nodata & \nodata & \nodata & 52.20 ksec & 6934 (2006 May 1) & \citet{tremblay12a,tremblay12b} \cr \nodata & \nodata & \nodata & 60.10 ksec & 7329 (2006 May 4) & \citet{tremblay12a,tremblay12b} \cr \nodata & \nodata & \nodata & 69.39 ksec & 19596 (2017 Oct 8) & Tremblay et al.~(in prep) \cr \nodata & \nodata & \nodata & 44.52 ksec & 19597 (2017 Oct 16) & (Large Program 18800649) \cr \nodata & \nodata & \nodata & 14.34 ksec & 19598 (2017 Aug 15) & \nodata \cr \nodata & \nodata & \nodata & 24.73 ksec & 20626 (2017 Aug 15) & \nodata \cr \nodata & \nodata & \nodata & 20.85 ksec & 20627 (2017 Aug 17) & \nodata \cr \nodata & \nodata & \nodata & 10.92 ksec & 20628 (2017 Aug 19) & \nodata \cr \nodata & \nodata & \nodata & 56.36 ksec & 20629 (2017 Oct 3) & \nodata \cr \nodata & \nodata & \nodata & 53.40 ksec & 20805 (2017 Oct 5) & \nodata \cr \nodata & \nodata & \nodata & 37.62 ksec & 20806 (2017 Oct 7) & \nodata \cr \nodata & \nodata & \nodata & 79.85 ksec & 20811 (2017 Oct 21) & \nodata \cr \nodata & \nodata & \nodata & 62.29 ksec & 20817 (2017 Oct 19) & \nodata \cr \hline Ly$\alpha$ $\lambda$1216 \AA & \textit{HST} & STIS F25SRF2 & 1000 sec & 8107 (2000 Jul 27) & \citet{odea04,tremblay15} \cr FUV Continuum & \nodata & ACS/SBC F150LP & 8141 sec & 11131 (2008 Jul 21) & \citet{oonk10,tremblay15} \cr [\ion{O}{2}]$\lambda$3727 \AA & \nodata & WFPC2 F410M & 2200 sec & 6717 (1996 Jul 27) & \citet{koekemoer99} \cr $B$-band \& [\ion{O}{2}]$\lambda$3727 \AA & \nodata & WFPC2 F450W & 2100 sec & 6228 (1995 May 07) & \citet{koekemoer99} \cr $R$-band \& H$\alpha$+[\ion{N}{2}] & \nodata & WFPC2 F702W & 2100 sec & 6228 (1995 May 07) & \citet{holtzman96} \cr H$_2 1-0$ S(3) $\lambda1.9576 \mu$m & \nodata & NICMOS F212N & 12032 sec & 7457 (1997 Oct 19) & \citet{donahue00} \cr $H$-band & 
\nodata & NICMOS F160W & 384 sec & 7457 (1997 Dec 03) & \citet{donahue00} \cr H$\alpha$ (Narrowband) & Baade 6.5m & IMACS / MMTF & 1200 sec & (2010 Nov 30) & \citet{mcdonald11b, mcdonald11a} \cr $i$-band & VLT / UT1 & FORS & 330 sec & 67.A-0597(A) & \citet{oonk11} \cr Optical Lines \& Continuum & VLT / UT4 & MUSE & 2700 sec & 094.A-0859(A) & Hamer et al.~(in prep) \cr \hline NIR (3.6, 4.5, 5.8, 8 $\mu$m) & \textit{Spitzer} & IRAC & 3600 sec (each) & 3506 (2005 Nov 24) & \citet{donahue07} \cr MIR (24, 70, 160 $\mu$m) & \nodata & MIPS & 2160 sec (each) & 3506 (2005 Jun 18) & \citet{donahue07} \cr MIR (70, 100, 160 $\mu$m) & \textit{Herschel} & PACS & 722 sec (each) & 13421871(18-20) & \citet{edge10phot} \cr FIR (250, 350, 500 $\mu$m) & \nodata & SPIRE & 3336 sec (each) & (2009 Nov 30) & \citet{edge10phot} \cr \hline CO(2-1) & ALMA & Band 6 / 213 GHz & 3 hrs & 2012.1.00988.S & \citet{tremblay16} \& this paper \cr \hline Radio (8.44 GHz) & VLA & A array & 15 min & AR279 (1992 Nov 30) & \citet{sarazin95} \cr 4.99 GHz & \nodata & A array & 95 min & BT024 (1996 Dec 7) & \citet{taylor99, clarke05} \cr 1.3 GHz & \nodata & A array & 323 min & BT024 (1996 Dec 7) & \citet{taylor99,clarke05} \cr 330 MHz & \nodata & A array & 180 min & AC647 (2003 Aug 18) & \citet{clarke05} \cr 330 MHz & \nodata & B array & 138 min & AC647 (2003 Jun 10) & \citet{clarke05} \cr \enddata \tablecomments{A summary of all Abell 2597 observations used (either directly or indirectly) in this analysis, in descending order from short to long wavelength (i.e. from X-ray through radio). (1) Waveband or emission line targeted by the listed observation; (2) telescope used; (3) instrument, receiver setup, or array configuration used; (4) on-source integration time; (5) facility-specific program or proposal ID (or observation ID in the case of \textit{Chandra}) associated with the listed dataset; (6) reference to publication(s) where the listed data first appeared, or were otherwise discussed in detail.
Further details for most of these observations, including Principal Investigators, can be found in Table 1 of \citet{tremblay12b}. } \end{deluxetable*} \section{Observations \& Data Reduction} \label{sec:observations} This paper synthesizes a number of new and archival observations of the A2597 BCG, all of which are summarized in \autoref{tab:observation_summary}. Here we primarily describe the new ALMA and MUSE datasets that comprise the bulk of our analysis. All Python codes / Jupyter Notebooks we have created to enable this analysis are publicly available in an online repository\footnote{This code repository is archived at DOI: \href{http://doi.org/10.5281/zenodo.1233825}{10.5281/zenodo.1233825}, and also available at \url{https://github.com/granttremblay/Tremblay2018_Code}.} \citep{papercode}. \subsection{ALMA CO(2-1) Observations} \label{sec:almareduction} ALMA observed the Abell 2597 BCG for three hours across three scheduling blocks executed between 17-19 November 2013 as part of Cycle 1 program 2012.1.00988.S (P.I.: Tremblay). One baseband was centered on the $J=2-1$ rotational line transition of carbon monoxide ($^{12}$CO) at 213.04685 GHz (rest-frame 230.538001 GHz at $z=0.0821$). CO(2-1) serves as a bright tracer for the otherwise unobservable cold molecular hydrogen gas (H$_2$) fueling star formation throughout the galaxy (H$_2$ at a few tens of Kelvin is invisible because it lacks a permanent electric dipole moment). The other three basebands sampled the local rest-frame $\sim230$ GHz continuum at 215.0, 227.7, and 229.7 GHz, enabling continuum subtraction for the CO(2-1) data and an (ultimately unsuccessful) ancillary search for radio recombination lines. The ALMA correlator was set to Frequency Division Mode (FDM), delivering a native spectral (velocity) resolution of 0.488 MHz ($\sim1.3$ km s\mone) across an 1875 MHz bandwidth per baseband. 
Baselines between the array's 29 operational 12 m antennas spanned $17-1284$ m, delivering a best possible angular resolution at 213 GHz of $0\farcs37$ within a $\sim 28\arcsec$ primary beam, easily encompassing the entire galaxy in a single pointing. In comparing the total recovered ALMA CO(2-1) flux with an older single-dish IRAM 30m observation \citep{tremblay12b}, we find no evidence that any extended emission has been ``resolved out'' by the interferometer. Observations of A2597 were bracketed by slews to Neptune as well as the quasars J2258-2758 and J2331-1556, enabling amplitude, flux, and phase calibration. Raw visibilities were imported, flagged, and reduced into calibrated measurement sets using \texttt{CASA} version 4.2 \citep{mcmullin07}. In addition to applying the standard phase calibrator solution, we iteratively performed phase-only self-calibration using the galaxy's own continuum, yielding a $14\%$ improvement in RMS noise. We used the \texttt{UVCONTSUB} task to fit and subtract the continuum from the CO(2-1) spectral window in the $uv$ plane. We then deconvolved and imaged the continuum-free CO(2-1) measurement set using the \texttt{CLEAN} algorithm with \texttt{natural} weighting, improving sensitivity to the filamentary outskirts of the nebula\footnote{We also experimented with a number of different weighting schemes, including \texttt{Briggs} with a \texttt{robust} parameter that ranged from -2.0 (roughly \texttt{uniform}) to 2.0 (close to \texttt{natural}). We show only \texttt{natural} weighting throughout this paper, partially because our results are not strongly dependent on the minor differences between the various available algorithms.}. The final data cube reaches an RMS sensitivity and angular resolution of $0.16$ mJy beam$^{-1}$ per 40 km~s\mone\ channel with a $0\farcs715\times0\farcs533$ synthesized beam at P.A. = $74^\circ$, enabling us to resolve molecular gas down to physical scales of $\sim 800$ pc.
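The $\sim 800$ pc physical scale follows directly from the cosmology assumed in the Introduction. A minimal plain-Python sketch, written as an independent check (trapezoidal integration of the comoving distance, not the pipeline actually used in this work):

```python
import math

# Reproduce the angular-scale conversion for the assumed flat LCDM cosmology
# (H0 = 70 km/s/Mpc, Om = 0.27, OL = 0.73) at z = 0.0821, then convert the
# ALMA synthesized-beam minor axis to a physical size.

H0, Om, OL = 70.0, 0.27, 0.73
c = 299792.458                       # speed of light, km/s
z = 0.0821

def E(zp):
    """Dimensionless Hubble parameter E(z) for a flat LCDM cosmology."""
    return math.sqrt(Om * (1 + zp) ** 3 + OL)

# Comoving distance D_C = (c/H0) * integral of dz'/E(z'), trapezoidal rule
N = 10000
dz = z / N
integral = sum((1 / E(i * dz) + 1 / E((i + 1) * dz)) / 2 * dz for i in range(N))
D_C = (c / H0) * integral            # Mpc
D_A = D_C / (1 + z)                  # angular diameter distance, Mpc
D_L = D_C * (1 + z)                  # luminosity distance, Mpc

arcsec = math.pi / (180 * 3600)      # one arcsecond in radians
kpc_per_arcsec = D_A * 1000 * arcsec

print(f"D_A = {D_A:.1f} Mpc, D_L = {D_L:.1f} Mpc")     # ~319.4 and ~374.0 Mpc
print(f"scale = {kpc_per_arcsec:.3f} kpc/arcsec")      # ~1.549 kpc/arcsec
print(f"0.533 arcsec -> {0.533 * kpc_per_arcsec * 1000:.0f} pc")  # ~825 pc
```

The recovered distances and angular scale match the values quoted in the Introduction, and the beam minor axis maps to roughly the $\sim 800$ pc resolution stated above.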
As indicated in figure captions, some ALMA images presented in this paper use Gaussian-weighted $uv$ tapering of the outer baselines in order to maximize sensitivity to the most extended structures, expanding the synthesized beam to a size of $0\farcs944 \times 0\farcs764$ at a P.A. of $86^\circ$. The captions also note whether we have binned the data (in the $uv$ plane) to 5, 10, or 40 km~s\mone\ channels, as dictated by sensitivity needs for a given science question. All CO(2-1) fluxes and linewidths reported in this paper are corrected for the response of the primary beam (\texttt{pbcor = True}). We have also created an image of the rest-frame 230 GHz continuum point source associated with the AGN by summing emission in the three line-free basebands. The \texttt{CLEAN} algorithm was set to use \texttt{natural} weighting, and yielded a continuum map with a synthesized beam of $0\farcs935 \times 0\farcs747$ at a P.A. of $87^\circ$. The peak (and therefore total) flux measured from the continuum point source is $13.6 \pm 0.2$ mJy at 221.3 GHz, detected at 425$\sigma$ over the background RMS noise. It was against this continuum ``backlight'' that \citet{tremblay16} discovered infalling cold molecular clouds seen in absorption (see \autoref{sec:natureresult}). We note that the continuum also features $\sim3\sigma$ extended emission. If one includes this in the flux measurement, it rises to $14.6 \pm 0.2$ mJy. This paper also presents CO(2-1) line-of-sight velocity and velocity dispersion maps made from the ALMA data using the ``masked moment'' technique described by \citet{dame11} and implemented by Timothy Davis\footnote{\url{https://github.com/TimothyADavis/makeplots}}. The technique takes into account spatial and spectral coherence in position-velocity space by first smoothing the clean data cube with a Gaussian kernel whose FWHM is equal to that of the synthesized beam.
The velocity axis is then also smoothed with a Gaussian, enabling creation of a three-dimensional mask that selects all pixels above a 1.5$\sigma$ flux threshold. Zeroth, First, and Second moment maps of integrated intensity, flux-weighted mean velocity, and velocity dispersion (respectively) were created using this mask on the original (unsmoothed) cube, recovering as much flux as possible while suppressing noise. As we will discuss in \autoref{sec:velocitystructure}, the inner $\sim 10$ kpc of the galaxy contains molecular gas arranged in two superposed (blue- and redshifted) velocity structures. We have therefore also created CO(2-1) velocity and velocity dispersion maps that fit two Gaussians along the same lines of sight. The codes used to accomplish this are included in the software repository that accompanies this paper \citep{papercode}. \subsection{MUSE Optical Integral Field Spectroscopy} \label{sec:musedata} We also present new spatial and spectral mapping of optical stellar continuum and nebular emission lines in the A2597 BCG using an observation from MUSE \citep{bacon10}. MUSE is a high-throughput, wide-FoV, image-slicing integral field unit (IFU) spectrograph mounted at UT4's Nasmyth B focus on the Very Large Telescope (VLT). Obtained as part of ESO programme 094.A-0859(A) (PI: Hamer), this observation was carried out in MUSE's seeing-limited WFM-NOAO-N configuration on the night of 11 October 2014. While the $\sim1\arcmin\times1\arcmin$ FoV of MUSE easily covered the entire galaxy in a single pointing, a three-point dither was used over a $3\times900$ (2700) sec integration time in order to reduce systematics. Throughout the observation, the source was at a mean airmass of $1.026$ with an average $V$-band (DIMM) seeing of $\sim1\farcs2$.
The raw data were reduced using version 1.6.4 of the standard MUSE pipeline \citep{weilbacher14}, automating bias subtraction, wavelength and flux calibration, as well as illumination-, flat-field-, and differential atmospheric refraction corrections. In addition to the sky subtraction automated by the pipeline, which uses a model created from a ``blank sky'' region of the FoV, we have performed an additional sky subtraction using a Principal Component Analysis (PCA) code by Bernd Husemann and the Close AGN Reference Survey\footnote{\url{http://www.cars-survey.org}} (CARS; \citealt{husemann16, husemann17}). We have also corrected the datacube for Galactic foreground extinction using $A_V=0.082$, estimated from the \citet{schlafly11} recalibration of the \citet{schlegel98} \textit{IRAS}+\textit{COBE} Milky Way dust map assuming $R_V=3.1$. The final MUSE datacube maps the entire galaxy between $4750~$\AA $~< \lambda < $ $~9300$ \AA\ with a spectral resolution of $\sim2.5$ \AA. The FWHM of its seeing-limited point-spread function, sampled with $0\farcs2$ pixels, is $1\farcs0$ and $0\farcs8$ on the bluest and reddest ends of the spectral axis, respectively. This is close to the spatial resolution of our ALMA CO(2-1) map, enabling comparison of the kinematics and morphology of warm ionized and cold molecular gas phases on nearly matching spatial scales. In pursuit of that goal, we have created a number of higher level MUSE data products by decoupling and modeling the stellar and nebular components of the galaxy with \textsc{PyParadise}, also used by the CARS team as part of their custom MUSE analysis tools \citep{walcher15,husemann16,weaver18}.
\textsc{PyParadise} iteratively performs non-negative linear least-squares fitting of stellar population synthesis templates to the stellar spectrum of every relevant spectral pixel (``spaxel'') in the MUSE cube, while independently finding the best-fit line-of-sight velocity distribution with a Markov Chain Monte Carlo (MCMC) method. The best-fit stellar spectrum is then subtracted from each spaxel, yielding residuals that contain nebular emission lines. These are fit with a linked chain of Gaussians that share a common radial velocity, velocity dispersion, and priors on expected emission line ratios (e.g., the line ratios of the [\ion{O}{3}] and [\ion{N}{2}] doublets are fixed to 1:3). Uncertainties on all best-fit stellar and nebular parameters are then estimated using a Monte Carlo bootstrap approach wherein both continuum and emission lines are re-fit 100 times as the spectrum is randomly modulated within the error of each spaxel. While the nebular emission lines in the A2597 MUSE observation were bright enough to be fit at the native (seeing-limited) spatial resolution, the S/N of the stellar continuum was low enough to necessitate spatial binning. We have applied the Voronoi tessellation technique using a Python code kindly provided\footnote{\url{http://www-astro.physics.ox.ac.uk/~mxc/software/\#binning}} by Michele Cappellari \citep{cappellari03}. The MUSE cube was tessellated to achieve a minimum S/N of 20 (per bin) in the line-free stellar continuum. The products from \textsc{PyParadise} then enabled creation of spatially resolved flux, velocity, and velocity dispersion maps of those emission lines most relevant to our study, namely H$\alpha$, [\ion{O}{1}] $\lambda$6300 \AA, [\ion{O}{3}] $\lambda$5007 \AA, and H$\beta$, along with Voronoi-binned velocity and FWHM maps for the galaxy's stellar component.
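A minimal sketch of the linked-Gaussian line fitting described above (a stand-in for, not a reproduction of, the \textsc{PyParadise} implementation) ties the [\ion{N}{2}] doublet fluxes together and shares one velocity and one dispersion across the chain:

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.9979e5  # speed of light, km/s

# Rest wavelengths in Angstrom (approximate values, sufficient for this sketch)
LAM_HA, LAM_N2A, LAM_N2B = 6562.8, 6548.0, 6583.5

def linked_gaussians(wave, v, sigma_v, f_ha, f_n2):
    """Chain of Gaussians sharing one radial velocity and one velocity dispersion.
    The [N II] 6548 flux is tied to 1/3 of the [N II] 6583 flux."""
    model = np.zeros_like(wave)
    for lam0, flux in [(LAM_HA, f_ha), (LAM_N2B, f_n2), (LAM_N2A, f_n2 / 3.0)]:
        lam_c = lam0 * (1.0 + v / C_KMS)   # shifted line center
        sig = lam_c * sigma_v / C_KMS      # dispersion in wavelength units
        model += flux / (np.sqrt(2 * np.pi) * sig) * \
            np.exp(-0.5 * ((wave - lam_c) / sig) ** 2)
    return model

# Synthetic "residual" spectrum: v = 150 km/s, sigma_v = 120 km/s
wave = np.linspace(6480.0, 6680.0, 2000)
rng = np.random.default_rng(42)
spec = linked_gaussians(wave, 150.0, 120.0, 5.0, 2.0) + rng.normal(0.0, 0.002, wave.size)

# A single least-squares fit recovers the shared kinematics of all three lines
popt, pcov = curve_fit(linked_gaussians, wave, spec, p0=(0.0, 100.0, 1.0, 1.0))
```

Sharing kinematic parameters across lines in this way stabilizes the fit for the fainter member of each doublet.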
We have also created Balmer decrement (H$\alpha$ / H$\beta$ ratio), color excess ($E(B-V)$), and optical extinction ($A_V$) maps by dividing the H$\alpha$ map by the H$\beta$ map and scaling the result following equation (1) in \citet{tremblay10}. Finally, we show an electron density map made by scaling the ratio of forbidden sulfur lines (i.e., [\ion{S}{2}]$\lambda\lambda$ 6717 \AA\ / 6732 \AA; \citealt{osterbrock06}) using the calibration of \citet{proxauf14} (see their Eq.~3) and assuming an electron temperature of $T_e=10^{4}$ K. We repeated this process to make Balmer decrement and electron density maps from a cube whose spaxels were binned $4\times4$, increasing signal in the fainter lines. Comparing these maps to their unbinned counterparts revealed no quantitative difference. We therefore only show the unbinned, higher spatial resolution maps in this paper. \subsection{ALMA and MUSE Line Ratio Maps} \label{sec:ratiomaps} We have also created H$\alpha$/CO(2-1) flux, velocity, and velocity dispersion ratio maps by dividing the corresponding MUSE maps by the ALMA ``masked moment'' maps. To accomplish this, we made small WCS shifts in the MUSE maps to match the ALMA CO(2-1) image with the \textsc{PyRAF} \texttt{imshift} and \texttt{wcscopy} tasks, assuming that the CO(2-1) and H$\alpha$ photocentroids in the galaxy center as well as a bright, clearly detected ($\gae10\sigma$) ``blob'' of emission to the northwest in both datasets should be aligned. The needed shifts were minor, and applying them also aligned enough morphologically matching features that we are confident that the alignment is ``correct'', at least to an uncertainty that is smaller than the PSF of either observation. We then confirmed that the ALMA synthesized beam closely matched the MUSE PSF at H$\alpha$ (7101 \AA\ and 6563 \AA\ in the observed and rest-frames, respectively), making smoothing unnecessary.
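A generic form of the Balmer-decrement-to-extinction conversion can be sketched as follows; the intrinsic Case B ratio of 2.86 and the extinction-curve coefficients below are standard assumed values and are not necessarily identical to those in equation (1) of \citet{tremblay10}:

```python
import math

# Intrinsic Case B Halpha/Hbeta ratio for T_e = 1e4 K, and representative
# R_V = 3.1 extinction-curve values at Hbeta and Halpha (assumed, Cardelli-like).
R_INT = 2.86
K_HBETA, K_HALPHA = 3.61, 2.53

def ebv_from_balmer(ratio_obs):
    """Color excess E(B-V) implied by an observed Halpha/Hbeta flux ratio."""
    return 2.5 / (K_HBETA - K_HALPHA) * math.log10(ratio_obs / R_INT)

def av_from_ebv(ebv, r_v=3.1):
    """Optical extinction A_V for a given color excess."""
    return r_v * ebv

ebv = ebv_from_balmer(4.0)   # an observed decrement of 4.0 gives E(B-V) ~ 0.34
av = av_from_ebv(ebv)        # ~1.0 mag
```

Applied pixel-by-pixel to the H$\alpha$ and H$\beta$ maps, this yields the $E(B-V)$ and $A_V$ maps described above.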
We then resampled the ALMA data onto the MUSE maps' pixel grids in Python using \texttt{reproject}, an \texttt{Astropy} affiliated package\footnote{\url{https://reproject.readthedocs.io/en/stable/}}. Depending on the science application, the MUSE map was then either divided by the reprojected ALMA image directly, or after normalization or rescaling by some other factor (for example, to convert pixel units). The Python code used to create these maps, along with all MUSE and ALMA data products, is included in this paper's software repository \citep{papercode}. \begin{figure} \begin{center} \includegraphics[scale=0.48]{Fig_continuum.pdf} \end{center} \vspace*{-5mm} \caption{The ALMA 230 GHz continuum signal, summed over three basebands redward of the CO(2-1) line. The map is dominated by a mm synchrotron continuum point source associated with the AGN at the galaxy center, with a flux density of $13.6 \pm 0.2$ mJy. Contours marking the 8.4 GHz VLA observation of the compact steep spectrum radio source are overlaid in red. The $10\sigma$ contour is consistent with an unresolved point source. A log stretch has been applied to the data so as to best show the $3\sigma$ extended emission against the $\gae 400\sigma$ point source. Much of this extended emission is likely to be noise, though the extension to the south along the 8.4 GHz radio source may be real. We are unlikely to have detected any extended dust continuum emission, given the FIR fluxes shown in \autoref{fig:SED}. } \label{fig:continuum} \end{figure} \subsection{Adoption of a systemic velocity} All ALMA and MUSE velocity maps shown in this paper are projected about a zero-point that is set to the stellar systemic velocity of the A2597 BCG at $z=0.0821 \pm 0.0001$ ($cz = 24,613 \pm 29$ km s\mone).
As discussed in the Methods section of \citet{tremblay16}, this velocity is consistent with \ion{Ca}{2}~\textsc{h+k} and G-band absorption features tracing the galaxy's stellar component, a cross-correlation of galaxy template spectra with all major optical emission and absorption lines in the galaxy \citep{voit97,koekemoer99,taylor99}, an \ion{H}{1} absorption feature \citep{odea94}, and the ALMA CO(2-1) emission line peak itself \citep{tremblay16}. It is, therefore, the best-known systemic velocity for the system, within $\sim 60$ km s\mone. \subsection{Deep Chandra X-ray data} \label{sec:archivaldata} Finally, we have combined all available \textit{Chandra X-ray Observatory} data for A2597, spanning 626.37 ksec in total integration time across fourteen separate ACIS-S observations. The oldest three of these (see \autoref{tab:observation_summary}) were previously published (ObsID 922, PI: McNamara and ObsIDs 6934 and 7329, PI: Clarke; \citealt{mcnamara01, clarke05,tremblay12a,tremblay12b}), while the latest eleven were recently observed as part of Cycle 18 Large Program 18800649 (PI: Tremblay). This new dataset will be analyzed in detail by Tremblay et al.~(in prep). Here, we show only the deep image for the purposes of comparing it with the ALMA and MUSE data. To create this deep image, all fourteen ACIS-S observations were (re)-reduced, merged, and exposure corrected using \textsc{ciao} version 4.9 \citep{fruscione06} with version 4.7.5.1 of the Calibration Database. All exposures centered the cluster core (and therefore the BCG) on the nominal aimpoint of the back-illuminated S3 chip. We have applied a radially varying gradient filter to the final merged \textit{Chandra} image using a Gaussian Gradient Magnitude (GGM) technique recently implemented to highlight surface brightness edges in \textit{Chandra} data by \citet{sanders16b,sanders16a,walker17}. 
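In simplified form, with a plain average over a fixed set of kernel widths in place of the radially varying weighting used by \citet{sanders16b}, the GGM filtering amounts to:

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def ggm_filter(image, sigmas=(1, 2, 4, 8)):
    """Average Gaussian gradient magnitude maps over several kernel widths.
    (The Sanders et al. implementation weights the scales radially; a plain
    average is used here as a simplified stand-in.)"""
    return np.mean([gaussian_gradient_magnitude(image, sigma=s) for s in sigmas],
                   axis=0)

# Toy surface-brightness image: a sharp circular edge in a flat background
yy, xx = np.mgrid[0:128, 0:128]
r = np.hypot(xx - 64.0, yy - 64.0)
image = np.where(r < 30.0, 2.0, 1.0)

edges = ggm_filter(image)  # the filter output peaks along the r = 30 edge
```

The filter suppresses smooth emission and highlights surface brightness discontinuities such as cavity rims and cold fronts.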
The codes we used to accomplish this have been kindly provided by Jeremy Sanders, and are publicly available\footnote{\url{https://github.com/jeremysanders/ggm}}. \begin{figure} \begin{center} \includegraphics[width=0.485\textwidth]{Fig_sed.pdf} \end{center} \vspace*{-5mm} \caption{Radio-through-optical SED for Abell 2597, including the new ALMA mm continuum point. Dashed and solid lines show various fits to components of the spectrum including a one- and two-component fit to the radio and ALMA data \citep{hogan15b,hogan15a}, as well as a modified blackbody fit to the far-infrared \textit{Herschel} data \citep{mittal11,mittal12}. Observation details (including dates) and references for all photometric points are given in \autoref{tab:observation_summary}. Error bars are shown on the plot, though in many cases they are invisible because they are smaller than the data points. The gray shaded region shows the error on the single powerlaw fit to both the radio and ALMA continuum data. These fits are discussed in \autoref{sec:natureresult}. } \label{fig:SED} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{Fig_inflow.pdf} \end{center} \vspace*{-3mm} \caption{A summary of the primary result from \citet{tremblay16}, showing three compact ($\lae 40$ pc) molecular clouds moving deeper into the galaxy and toward its nucleus at $\sim +300$ \kms. The clouds are likely in close proximity (within $\sim 100$ pc) to the central supermassive black hole, and therefore may play a direct role in fueling the black hole's accretion reservoir. (\textit{left}) A slice through the continuum-subtracted ALMA CO(2-1) datacube, 10 \kms\ in width and centered on $+240$ \kms\ relative to the galaxy's systemic velocity.
A region of ``negative emission'', arising from continuum absorption, appears as a dark spot the size of the ALMA beam, whose $0\farcs715 \times 0\farcs533$ ($\sim 1$ kpc $\times \sim0.8$ kpc) size is indicated by the white ellipse in the bottom left corner. 8.4 GHz radio contours are shown in red. The innermost contours of the radio core associated with the AGN have been removed to aid viewing of the ALMA continuum absorption feature. Extracting the CO(2-1) spectrum from a region bounding the galaxy's nucleus (roughly marked by the dashed white box) reveals the spectrum in the rightmost panel (adapted from \citealt{tremblay16}). } \label{fig:nature} \end{figure*} \section{Results} \label{sec:results} \subsection{``Shadows'' cast by inflowing cold clouds} \label{sec:natureresult} The ALMA observation is dominated by a bright continuum point source, shown in \autoref{fig:continuum}. Its flux at 221.3 GHz is $13.6 \pm 0.2$ mJy, which we show as part of a radio-through-optical SED in \autoref{fig:SED}. The green line shows a single powerlaw fit to the radio and ALMA data points with a spectral index of $\alpha=0.95 \pm 0.03$ if $S \propto \nu^{-\alpha}$, where $S$ is flux density and $\nu$ is frequency. The surrounding gray region shows the error on that fit, and entirely encompasses the two-component radio-only fit by \citet{hogan15a}. That model is shown in blue dashed and red dash-dotted lines and includes, respectively, a powerlaw with spectral index $\alpha = 1.18 \pm 0.06$ and a likely highly variable, flatter, GPS-like core (see discussion in \citealt{hogan15b,hogan15a}). Some curvature in the radio spectrum is evident, though it may be partly artificial as these data points were collected over the course of more than twenty years, during which time the source likely varied in brightness. 
Regardless, within errors, the new ALMA data point is consistent with both the single powerlaw and two-component models, and so it is likely that the 230 GHz continuum source detected by ALMA is simply the millimeter tail of the synchrotron continuum entirely associated with the AGN. This continuum source acts as a bright backlight cast by the radio jet's launch site, in close proximity to the $\sim3\times10^8$ \Msol\ black hole in the galaxy center \citep{tremblay12b}. Against this backlight we found three deep, narrow continuum absorption features (\autoref{fig:nature}), which we discuss in \citet{tremblay16}. We suggest that these are ``shadows'' cast by inflowing cold molecular clouds eclipsing our line of sight to the black hole. Assuming they are in virial equilibrium, we calculate that the clouds, whose linewidths are not more than $\sigma_v \lae 6$ km s\mone, must have sizes no greater than $\sim 40$ pc and masses on the order of $\sim10^5-10^6$ \Msol, similar to giant molecular clouds in the Milky Way (e.g., \citealt{larson81,solomon87}). If they are in pressure equilibrium with their ambient multiphase environment, their column densities must be on the order of $N_\mathrm{H_2} \approx 10^{22-24}$ cm\mtwo. A simple argument based on geometry and probability, along with corroborating evidence from the Very Long Baseline Array (VLBA), suggests that these inflowing cold molecular clouds are within $\sim100$ pc of the black hole, and falling ever closer toward it \citep{tremblay16}. These clouds may therefore provide a substantial cold molecular mass flux to the black hole accretion reservoir, contrary to what might be expected in a ``hot mode'' Bondi-like accretion scenario. Regardless, these results establish that some cold molecular gas is clearly moving inward toward the galaxy center. The remainder of this paper connects this inflowing gas to the larger galaxy of which it is a part. 
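The single powerlaw fit quoted above can be reproduced in outline with a least-squares fit in log-log space; the frequencies and normalization below are illustrative stand-ins rather than our actual photometry:

```python
import numpy as np

def fit_spectral_index(freq_ghz, flux_mjy):
    """Least-squares power-law fit S ~ nu^-alpha in log-log space.
    Returns (alpha, normalization in mJy at 1 GHz)."""
    slope, lognorm = np.polyfit(np.log10(freq_ghz), np.log10(flux_mjy), 1)
    return -slope, 10.0 ** lognorm

# Illustrative sampling only (not the actual photometry): a pure nu^-0.95 spectrum
nu = np.array([1.4, 4.9, 8.4, 43.0, 221.3])   # GHz
s = 1000.0 * nu ** -0.95                      # mJy
alpha, norm = fit_spectral_index(nu, s)       # recovers alpha = 0.95
```

Fitting in log space weights each decade of frequency coverage roughly equally, which is appropriate for a synchrotron spectrum sampled from 1.4 to 221 GHz.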
\subsection{Morphology of the cold molecular nebula} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{Fig_almasummary.pdf} \vspace*{-8mm} \end{center} \caption{An overview of the morphological and spectral characteristics of the ALMA CO(2-1) observation we discuss at length in this paper. The central panel (\textit{a}) shows a clipped moment zero (flux) image of all $\ge3\sigma$ CO(2-1) emission in the A2597 BCG. The various clumps seen likely represent $\gae3\sigma$ peaks of a smoother, fainter distribution of gas below the sensitivity threshold (although some clumps may indeed be discrete). For reference, the outer contour of the H$\alpha$ nebula is shown with a solid gray contour. Various apertures are shown in black polygons, indicating the (rough) spectral extraction regions for the CO(2-1) line profiles shown in the surrounding panels. All data are binned to 10 km s\mone\ channels. (\textit{b}) The CO(2-1) line profile from a region cospatial with the $\sim10$ kpc-scale CSS radio source (red contours on panel \textit{a}). (\textit{c}) An extraction from the nucleus of the galaxy, cospatial with the mm and radio core, as well as the stellar isophotal centroid. The deep absorption features are discussed in \autoref{sec:natureresult} and \citet{tremblay16}. (\textit{d}) All detected emission across the entire nebula. It is this spectrum from which we estimate the total gas mass in \autoref{sec:gasmass}. Panels (\textit{e}) and (\textit{f}) show the spectra extracted from what we call the southern and northern filaments, respectively. } \label{fig:region_plots} \end{figure*} The continuum-subtracted ALMA CO(2-1) data reveal a filamentary molecular nebula whose largest angular extent spans the inner 30 kpc (20\arcsec) of the galaxy (\autoref{fig:region_plots}\textit{a}). The brightest CO(2-1) emission is cospatial with the galaxy nucleus, forming a ``V'' shape with an axis of symmetry that is roughly aligned with the galaxy's stellar minor axis. 
In projection, a 12 kpc (8\arcsec) linear filament appears to connect with the southeastern edge of the ``V'' and arcs southward. Fainter clumps and filaments, many of which are part of a smoother distribution of gas just below the $\ge3\sigma$ clipping threshold shown in \autoref{fig:region_plots}\textit{a}, are found just to the north of the ``V''. This cold molecular nebula is forming stars across its entire detected extent, at an integrated rate of $\sim5$ \Msol\ yr\mone\ as measured with a number of observations, including \textit{Herschel} photometry \citep{edge10phot,edge10spec,tremblay12b}. We have smoothed the \textit{HST}/ACS SBC FUV continuum map from \citet{oonk11} with a Gaussian whose FWHM matches that of the synthesized beam in our ALMA map of integrated CO(2-1) intensity, normalized their surface brightness peaks, and then divided one map by the other. The quotient map is close to unity across the nebula, indicating that the star formation rate surface density (even as traced by extinction-sensitive FUV continuum) is proportional to the underlying CO(2-1) surface brightness\footnote{This is unsurprising in the context of a simple \citet{kennicutt98} scenario. It is, however, also important to consider this result alongside the several known CC BCG filament systems that are clearly \textit{not} forming stars. A famous example is found in the Perseus/NGC 1275 optical nebula. Many of its filaments are rich in molecular gas \citep{salome11}, yet largely devoid of any ongoing star formation (e.g., \citealt{conselice01,canning14}).}. Where they overlap, the MUSE/ALMA H$\alpha$-to-CO(2-1) surface brightness ratio map is similarly smooth (see \autoref{sec:musealma}). Matching H$\alpha$ and CO(2-1) morphology is consistent with the hypothesis that the optical and mm emission arises from the same population of clouds, as we will discuss in \autoref{sec:muse} and \autoref{sec:discussion}. 
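The quotient-map construction described above (smooth to a common beam, normalize to the surface brightness peaks, divide, and mask low-significance pixels) can be sketched as:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def quotient_map(fuv, co, beam_fwhm_pix, threshold=0.05):
    """Smooth the FUV map to the CO beam, normalize both maps to their
    surface-brightness peaks, and divide; pixels where the normalized CO map
    falls below `threshold` are masked to avoid dividing by noise.
    The threshold value is an illustrative choice, not the one used in the text."""
    sigma = beam_fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    a = gaussian_filter(fuv, sigma)
    a = a / a.max()
    b = co / co.max()
    return np.where(b > threshold, a / b, np.nan)

# Toy case: two maps tracing the same Gaussian "nebula" give a quotient near unity
yy, xx = np.mgrid[0:64, 0:64]
blob = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / (2.0 * 6.0 ** 2))
q = quotient_map(blob, blob, beam_fwhm_pix=3.0)
```

A quotient map close to unity, as we find for A2597, indicates that the two tracers share a common surface brightness distribution.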
In \autoref{fig:region_plots}\textit{a}, we show the CO(2-1) emission bounded by a gray contour that marks the outer extent of the H$\alpha$ emission. That the molecular nebula appears smaller in angular extent than the warm ionized nebula is more likely due to a sensitivity floor than a true absence of cold gas at larger radii. The ALMA observations do reveal faint, smooth emission in the northern and southern locales of the warm ionized filaments, though much of it is simply below the threshold we apply to all CO(2-1) maps presented in this paper. That we have detected at least \textit{some} faint molecular emission in the outer extents of the warm nebula suggests that, were we to observe to greater depths with ALMA, we might detect CO(2-1) across its entire extent. This is not guaranteed, as warm ionized gas can be present without cold molecular gas (e.g., \citealt{simionescu18}). We do note that most ALMA observations of CC BCGs published thus far generally show molecular filaments cospatial with warm ionized counterparts \citep{mcnamara14,russell14,vantyghem16,russell16a,russell16b,russell17}. This correspondence was known long before the first ALMA observations (see, e.g., the single-dish observations of the Perseus filaments by \citealt{lim08,salome11}). \begin{figure*} \begin{center} \includegraphics[scale=0.39]{Fig_moments.pdf} \end{center} \vspace*{-5mm} \caption{(\textit{a}) Zeroth, First, and Second moment maps of integrated CO(2-1) intensity, mean velocity, and velocity dispersion (respectively) in the cold molecular nebula. The maps have been created from the ALMA cube using the ``masked moment'' technique to preserve spatial and spectral coherence of $\ge3\sigma$ structures in position-velocity space, as described in \autoref{sec:almareduction}. Panels (\textit{b}) and (\textit{c}) show a zoom-in on the nuclear region in the velocity and velocity dispersion maps, respectively.
Take caution when interpreting these, because there are two velocity components (one approaching/blueshifted, the other receding/redshifted) superposed on one another. The velocity structure here is therefore best represented by a double-Gaussian fit, which we show in \autoref{fig:fountain_overview}. } \label{fig:alma_momentmaps} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{Fig_pv.pdf} \end{center} \vspace*{-3mm} \caption{Position-velocity (PV) diagrams extracted from the three regions of the molecular nebula. The lefthand panel shows the Moment One velocity map from \autoref{fig:alma_momentmaps}, with three position-velocity extraction apertures overlaid. The righthand panels show the PV diagrams extracted from these apertures. Arrows are used to show the cardinal orientation of each aperture's long axis (the slit orientation for panel \textit{c} is roughly perpendicular to that for panels \textit{a} and \textit{b}, and so the relative orientations are admittedly confusing at first glance). Note that while the length of the extraction aperture varied, all diagrams are shown on the same spatial scale in the righthand panels, enabling cross-comparison. Panel \textit{a} shows that the southern filament has a narrow velocity width across its entire length, and no coherent velocity gradient. Panel \textit{b} reveals the broadest velocity distribution of molecular gas in the entire nebula, and includes the region in which the 8.4 GHz radio source bends in position angle, likely because of deflection. Panel \textit{c} shows rotation of molecular gas about the nucleus. All emission shown is $\ge3\sigma$.
} \label{fig:pv_figure} \end{figure*} \subsection{Total mass and mass distribution of the molecular gas} \label{sec:gasmass} Assuming a CO(2-1) to CO(1-0) flux density ratio of 3.2 \citep{braine92}, we can estimate the total mass of molecular H$_2$ in the nebula following the relation reviewed by \citet{bolatto13b}: \begin{align} M_\mathrm{mol} & = \left(\frac{1.05 \times 10^4}{3.2}\right) ~ \left( \frac{X_{\mathrm{CO}}}{X_{\mathrm{CO,~MW}}} \right) \label{eqn:mass} \\ & \times \left( \frac{1}{1+z}\right) \left(\frac{S_{\mathrm{CO}}\Delta v}{\mathrm{Jy~km~s}^{-1}}\right) \left(\frac{D_\mathrm{L}}{\mathrm{Mpc}}\right)^2 M_\odot, \nonumber \end{align} where $S_{\mathrm{CO}}\Delta v$ is the integrated CO(2-1) intensity, $z$ is the galaxy redshift ($z=0.0821$), and $D_L$ its luminosity distance (374 Mpc in our adopted cosmology). The dominant source of uncertainty in this estimate is the CO-to-H$_2$ conversion factor $X_{\mathrm{CO}}$ (see, e.g., \citealt{bolatto13b}). Here we adopt the average value for the disk of the Milky Way of $X_{\mathrm{CO}} = X_{\mathrm{CO,~MW}} = 2 \times 10^{20}$ cm\mtwo\ $\left(\mathrm{K~km~s}^{-1}\right)^{-1}$. There is a $\sim30\%$ scatter about this value \citep{solomon87}, minor in comparison to the overriding uncertainty as to the appropriateness of assuming that the A2597 BCG is at all like the Milky Way. The true value of the conversion factor depends on gas metallicity and whether or not the CO emission is optically thick. The metal abundance of the hot X-ray plasma is $\sim0.5-0.8$ Solar in the inner $\sim50$ kpc of the A2597 BCG \citep{tremblay12a}, and the velocity dispersions of individual molecular clouds in the galaxy are similar to those in the Milky Way \citep{tremblay16}.
Echoing arguments made for the A1835, A1664, and A1795 BCGs in \citet{mcnamara14}, \citet{russell14}, and \citet{russell17}, respectively, we have no evidence to suggest that the ``true'' $X_{\mathrm{CO}}$ in A2597 should be wildly different from the Milky Way value, as it can often be in ULIRGs \citep{bolatto13b}. Indeed, \citet{vantyghem17} report one of the first detections of $^{13}$CO(3-2) in a BCG (RX J0821+0752), and in doing so find a CO-to-H$_2$ conversion factor that is only a factor of two lower than that for the Milky Way. Adopting $X_{\mathrm{CO,~MW}}$ is therefore likely to be the most reasonable choice, with the caveat that we may be overestimating the total mass by a factor of a few. This should be taken as the overriding uncertainty on all mass estimates quoted in this paper. We fit a single Gaussian to the CO(2-1) spectrum extracted from a polygonal aperture encompassing all $\ge3\sigma$ emission in the primary beam corrected cube, binned to 10 km s\mone\ channels (this spectrum is shown in \autoref{fig:region_plots}\textit{d}). This gives an emission integral of $S_{\mathrm{CO}}\Delta v = 7.8\pm0.3$ Jy km s\mone\ with a line FWHM of $252 \pm 16$ km s\mone, which, noting the caveats discussed above, converts to an H$_2$ gas mass of $M_{\mathrm{H}_2} = \left(3.2 \pm 0.1\right) \times 10^9$ \Msol. Within errors, we obtain the same integral for cubes binned to 20 or 40 km s\mone, and an identical flux with an analytic integral of the line (e.g., adding all $\ge3\sigma$ flux in the cube, rather than fitting a Gaussian). This mass estimate is a factor of $\sim1.8$ higher than that in \citet{tremblay16} because their Gaussian was fit from $-500$ to $+500$ km s\mone, while ours is fit between $-600$ and $+600$ km s\mone. This apparently minor difference gives rise to a significant offset because the former fit misses real emission blueward and redward of the line, biasing the continuum zero point upward.
\citet{tremblay16} therefore slightly underestimate the total flux, though not to a degree that affects any of the results reported in that work. Indeed, factor of two variations in the total mass estimate do not significantly impact the conclusions drawn in either paper, especially considering the larger uncertainty coupled to our assumption for $X_{\mathrm{CO}}$ and the CO(2-1) to CO(1-0) flux density ratio. It is sufficient for our purposes to say that the total cold molecular gas mass in the A2597 BCG is a few billion solar masses. Given the critical density of CO(2-1), any reasonable assumption for the three-dimensional volume of the nebula, and the total amount of cold gas available to fill it, the volume filling factor of the cold molecular clouds cannot be more than a few percent (\citealt{tremblay16}; see also \citealt{david14, anderson17,temi18}). Far from a monolithic slab, the cold gas is instead more like a ``mist'' of many smaller individual clouds and filaments seen in projection (e.g., \citealt{jaffe01,jaffe05,wilman06,emonts13,mccourt18}). A significant fraction of the total mass in this ``mist'' is found far from the galaxy's nucleus. In \autoref{fig:region_plots} we divide the nebula into three primary components consisting of the bright nuclear region cospatial with the 8.4 GHz radio source (panel \textit{b}), the northern filaments (panel \textit{f}), and the southern filaments (panel \textit{e}). Fitting the CO(2-1) spectra extracted from each of these components shows that their rough fractional contribution to the total gas mass (i.e., panel \textit{d}) is $\sim70\%$, $\sim10\%$, and $\sim20\%$, respectively. This means that although most ($\sim 2.2\times10^9$ \Msol) of the cold gas is found in the innermost $\sim8$ kpc of the galaxy, $\sim1$ billion \Msol\ of it lies at distances greater than 10 kpc from the galactic center. 
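As an arithmetic check, the relation quoted in \autoref{sec:gasmass} can be evaluated directly; the function below simply restates that equation with our measured values as defaults:

```python
def molecular_gas_mass(sco_dv_jykms, z=0.0821, d_l_mpc=374.0,
                       xco_ratio=1.0, r21=3.2):
    """H2 mass (Msun) from the integrated CO(2-1) flux, restating the relation
    in the text: (1.05e4 / r21) * (X_CO / X_CO,MW) * (1 / (1 + z))
    * (S_CO dv / Jy km/s) * (D_L / Mpc)^2."""
    return (1.05e4 / r21) * xco_ratio / (1.0 + z) * sco_dv_jykms * d_l_mpc ** 2

# The measured integral of 7.8 Jy km/s gives ~3.3e9 Msun, consistent with the
# (3.2 +/- 0.1) x 10^9 Msol value quoted above within the stated uncertainties.
m_h2 = molecular_gas_mass(7.8)
```

Scaling \texttt{sco\_dv\_jykms} by the fractional fluxes of the three nebular components reproduces the nuclear, northern, and southern filament masses quoted above.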
\subsection{Velocity structure of the molecular gas} \label{sec:velocitystructure} In \autoref{fig:alma_momentmaps} we show the ``masked moment'' maps of integrated CO(2-1) intensity, flux-weighted velocity, and velocity dispersion. The cold molecular nebula features complex velocity structure across its spatial extent, with gas found at projected line-of-sight velocities that span $\gae300$ km s\mone, arranged roughly symmetrically about the systemic velocity of the galaxy. Aside from a possible $\pm 100$ km s\mone\ rotation (or ``swirl'') of gas near the nucleus (\autoref{fig:alma_momentmaps}\textit{b}, see the blue- and redshifted components to the NW and SE of the radio core, respectively), most of the nebula appears removed from a state of dynamical equilibrium, and poorly mixed (in phase space) with the galaxy's stars. \textit{Almost} everywhere, projected line-of-sight velocities are below the circular speed at any given radius, and well below the galaxy's escape velocity. The kinematics of the molecular nebula can therefore be considered rather slow, unless most gas motions are contained in the plane of the sky. This is unlikely, given several recent papers reporting similarly slow cold gas motions in CC BCGs \citep{mcnamara14,russell14,russell16a,russell16b,russell17,vantyghem16}. The overall picture for A2597, then, is that of a slow, churning ``mist'' of cold gas, drifting in the turbulent velocity field of the hot atmosphere, with complex inward and outward streaming motions. In the sections below we will argue that these motions are largely induced by mechanical feedback from the central supermassive black hole, mediated either by the jets that it launches, or the buoyant X-ray cavities that those jets inflate.
\subsubsection{Uplift of the Southern Filament} \label{sec:filament} The velocity and velocity dispersion maps in \autoref{fig:alma_momentmaps}\textit{a} (center and right) show largely quiescent structure along the southern filament, with no monotonic or coherent gradient in either across its $\sim12$ kpc projected length. In \autoref{fig:pv_figure}\textit{a} we show a position-velocity (hereafter ``PV'') diagram of emission extracted from a rectangular aperture around the filament. The structure is brightest at its northern terminus (i.e., the left-hand side of \autoref{fig:pv_figure}\textit{a}), which serves as the easternmost vertex of the bright central ``V'' feature around the galaxy nucleus. Southward from this bright knot, toward the right-hand side of \autoref{fig:pv_figure}\textit{a}, the filament is roughly constant in velocity centroid and width ($+50$--$100$ km s\mone and $\sim80$--$100$ km s\mone, respectively). Roughly $6\arcsec$ ($\sim9$ kpc) south of the northern terminus, however, the filament broadens in velocity dispersion. Here, near the filament's apex in galactocentric altitude, it features its largest observed line-of-sight velocity width ($\sim 300$ km s\mone), with a centroid that is roughly the same as that along its entire length. The southern filament's velocity structure is inconsistent with gravitational free-fall \citep{lim08}. Its projected length spans $\sim12$ kpc in galactocentric altitude, along which one would expect a radial gradient in Kepler speed. Its major axis is roughly parallel (within $\sim20^\circ$) to the projected stellar isophotal minor axis, but the filament itself is offset at least 5 kpc to the southeast. Gas moving in response to the gravitational potential travels fastest near the galaxy's nucleus and slowest near its orbital apoapsis \citep{lim08}. It therefore spends more time around its high-altitude ``turning point'' than it does in proximity to the nucleus.
This is consistent with the observed velocity width broadening at the filament's southern terminus, where our line of sight will naturally intersect clouds that populate a broader distribution of velocities: some will be on their ascent, while others will be slowing and beginning to fall back inward. That the filament's velocity is \textit{slower} near the nucleus than at its high altitude terminus suggests that gas has not fallen into it, but rather has been lifted out of it. For these two inferences to be consistent with one another, the filament should be dynamically young. We will discuss the cavity uplift hypothesis in \autoref{sec:discussion}. \begin{figure*} \begin{center} \includegraphics[scale=0.29]{Fig_channels.pdf} \end{center} \vspace*{-3mm} \caption{ ALMA CO(2-1) channel maps, showing 40 \kms\ slices of the full data cube, ranging from $-360$ \kms\ through $+400$ \kms\ relative to the systemic velocity of the galaxy at $z=0.0821$. The outermost baselines have been tapered so as to increase signal to noise, resulting in a beam size of $0\farcs94 \times 0\farcs79$, corresponding to a physical resolution of $1.4$ kpc $\times$ $1.2$ kpc (marked by the white ellipse in the top left panel). Red contours show the 8.4 GHz radio source, and dashed black contours are used to mark significance of the emission. The outermost black dashed contours show where the CO(2-1) emission exceeds $3\sigma$, and, when present, increase inward to show $5\sigma$, $10\sigma$, and $20\sigma$ over the background RMS noise of 0.18 mJy beam$^{-1}$ per 40 \kms\ channel. The white dashed contour marks the continuum absorption discussed in \autoref{sec:natureresult}. } \label{fig:channel_maps} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{Fig_jet.pdf} \vspace*{-9mm} \end{center} \caption{A closer look at the molecular gas cospatial with the radio jet.
A single-Gaussian fit to this region (i.e., as shown for the moment maps in \autoref{fig:alma_momentmaps}) does not adequately model the superposition of approaching and receding components along the same line of sight. Here we show moment maps created with a double-Gaussian fit, better representing the velocity distribution. (\textit{a}) The CO(2-1) line profile extracted from a polygonal aperture encompassing the jet region, as shown in \autoref{fig:region_plots}\textit{b}. The line features a peak slightly blueward of center, as well as a strong red wing offset by $\sim+150$ km s\mone\ relative to the systemic velocity. Two Gaussians are fit to these components (shown in blue and red, respectively). Panels \textit{b} and \textit{c} show velocity maps for these approaching and receding molecular components, while panels \textit{d} and \textit{e} show their velocity dispersion maps. Multi-Gaussian fits for various sub-regions are explored in \autoref{fig:jet_detail_expand}. } \label{fig:fountain_overview} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.97\textwidth]{Fig_jetfits.pdf} \vspace*{-6mm} \end{center} \caption{ALMA CO(2-1) spectra extracted from regions cospatial with the radio jet and lobes. One or more Gaussians have been fit to the data so as to minimize residuals, which are marked by the dark yellow line near 0 mJy. The (multi)-Gaussian fit is shown in red, while individual Gaussian components are shown in blue. The leftmost panel shows a three-Gaussian fit to the entire region cospatial with the radio jet, while the center and right-hand panels show fits to smaller regions cospatial with the northern and southern radio lobes, respectively. Those spectral extraction apertures are marked by orange circles on the inlaid velocity dispersion maps.
Gaussian centroids and FWHMs for each component are labeled for all fits.} \label{fig:jet_detail_expand} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{Fig_stars.pdf} \end{center} \vspace*{-3mm} \caption{The host galaxy and the kinematics of its stellar component. (\textit{Left}) VLT/FORS $i$-band image of the BCG and its surrounding 250 kpc $\times$ 250 kpc environment. A logarithmic stretch has been applied to highlight the low surface brightness outskirts of the galaxy. H$\alpha$ contours are shown in black, while the white dashed box indicates the FoV of the rightmost panels. (\textit{Top right}) VLT/MUSE velocity map of the galaxy's stellar component. The data have been Voronoi binned so as to increase S/N in the stellar continuum, as described in \autoref{sec:musedata}. We only show the innermost $60\times60$ kpc$^2$ because the stellar surface brightness (and therefore S/N) drops rapidly beyond this FoV. Velocities have been projected around a zero-point at $z=0.0821$ (i.e., $cz=24,613$ km s\mone), as we have done for the ALMA and MUSE emission line velocity maps. (\textit{Bottom right}) Best-fit stellar velocity dispersion (i.e., FWHM$/2.35$), also from the MUSE data. Dispersions are typical for a large giant elliptical galaxy (e.g., \citealt{faber76}). } \label{fig:MuseStars} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{Fig_musespec.pdf} \end{center} \vspace*{-3mm} \caption{The MUSE optical spectrum extracted from a $10\arcsec$ circular aperture centered on the galaxy nucleus. Both nebular and stellar continuum emission are shown. The red end of the MUSE spectral coverage is around $9300$ \AA, but we have truncated it at $7500$ \AA\ for clarity. The MUSE IFU enables spatially resolved spectroscopy at the seeing limit ($\sim0\farcs9$) across the entire nebula, and so every spectral line here can be shown as a two dimensional image (or velocity/velocity dispersion map, e.g. \autoref{fig:MuseShowcase}).
As examples, we show the continuum-subtracted H$\alpha$ image as an inset, as well as the [\ion{O}{3}]$\lambda5007$ and [\ion{O}{1}]$\lambda6300$ images to the right. } \label{fig:MuseSpectrum} \end{figure*} \subsubsection{Cold gas motions induced by the radio jet} The inner $\sim10$ kpc of the molecular nebula shows evidence for dynamical interaction between the radio jet and the ambient molecular gas through which it propagates. This can be seen in \autoref{fig:channel_maps}, in which we show 40 km s\mone\ ``slices'' through the CO(2-1) datacube (i.e., channel maps), from $-360$ km s\mone\ through $+400$ km s\mone\ relative to the galaxy's systemic velocity. The blueshifted channels reveal a sheet of cold gas which, in projection, bends to hug the edges of the radio lobes (see, e.g., the $-120$ km s\mone\ channel in \autoref{fig:channel_maps}, where the alignment is most apparent). The bulk of this sheet's line-of-sight velocity is slow (only $\sim -100$ km s\mone), though there is a thinner filament of higher velocity gas that bisects the sheet lengthwise, cospatial with a bright, linear knot along a P.A. of $\sim 45^\circ$ (N through E) in the 8.4 GHz radio lobe. The velocity of this filament \textit{increases} (to $\gae 200$ km s\mone) with increasing galactocentric radius, which, like the southern filament (\autoref{sec:filament}), is inconsistent with expectations of infall under gravity. The velocity structure of cold gas along the jet is better seen in \autoref{fig:fountain_overview}. In panel \textit{a}, we show the CO(2-1) spectrum extracted from a $\sim10$ kpc (major axis) elliptical aperture placed on the mm and radio core. The line profile necessitates a fit with at least two Gaussians. The emission associated with these two Gaussians is shown in panel \textit{b}. A two-component velocity map, made by fitting the blue- and redshifted components independently, is shown in panel \textit{c}. 
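The two-component decomposition described above can be sketched with a simple least-squares fit. The synthetic profile below (a blueshifted peak plus a red wing near $+150$ km s\mone) only mimics the qualitative line shape described in the text; the amplitudes, centroids, and widths are illustrative choices, not measured A2597 values.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(v, a1, v1, s1, a2, v2, s2):
    """Superposition of an approaching and a receding velocity component."""
    return (a1 * np.exp(-(v - v1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(v - v2) ** 2 / (2 * s2 ** 2)))

# Synthetic line: a peak slightly blueward of systemic plus a red wing
# near +150 km/s, loosely mimicking the profile shape described above.
v = np.arange(-500.0, 501.0, 10.0)
profile = two_gaussians(v, 10.0, -30.0, 90.0, 4.0, 150.0, 120.0)

# Fit, seeding one initial guess on either side of the systemic velocity
# so the optimizer separates the blended components.
p0 = [8.0, -50.0, 80.0, 3.0, 100.0, 100.0]
popt, _ = curve_fit(two_gaussians, v, profile, p0=p0)
blue_centroid, red_centroid = popt[1], popt[4]
```

Fitting each spaxel this way, and mapping the centroid and width of each component separately, is the general idea behind the two-component velocity and dispersion maps.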
The blueshifted shell of material, whose dispersion map is shown in panel \textit{d}, is bounded on its northwestern edge by a linear ridge of higher velocity dispersion blueshifted gas. In projection, this feature is cospatial with the prominent FUV-bright rim of star formation, detected by \textit{HST} (see \autoref{fig:overview}, bottom right panel), that envelops the northern radio lobe. The molecular gas that is dynamically interacting with the working surface of the radio jet is therefore likely permeated by young stars. As we noted in our discussion of \autoref{fig:pv_figure}\textit{b}, the broadest, fastest velocity structure in the entire molecular nebula is cospatial with the bright radio knot at which the southern radio jet bends sharply in position angle. This is clearly evident in \autoref{fig:jet_detail_expand}, which shows multi-Gaussian fits to various spectral components of CO(2-1) emission cospatial with the radio jet. These fits were performed by iteratively fitting (and, where necessary, adding) Gaussians to the extracted spectra using a simple $\chi^2$ minimization technique. The leftmost panel shows a three-Gaussian fit to the entire region cospatial with the radio jet, while the center and right panels show fits to the regions cospatial with the northern and southern radio lobes, respectively. The spectral extraction apertures used are indicated by orange circles on the images inlaid on these two panels. A broad, single-Gaussian fit is needed for the region cospatial with the southern jet, including the location at which the jet is deflected. This region includes the broadest velocity distribution of molecular gas in the galaxy, with a FWHM of $342\pm8$ km s\mone ($\sigma=145\pm3$ km s\mone). This fit has an integral of $\sim1.6\pm0.9$ Jy km s\mone, corresponding to a molecular gas mass of $\left(6.4\pm 0.4\right) \times 10^8$ \Msol. \citet{pollack05} presented VLA polarimetry of PKS $2322-123$, the radio source associated with the A2597 BCG.
The source has a steep spectral index of $\alpha = 1.8$ between $\sim5$ and $\sim15$ GHz, suggesting either that it is old or, given its compactness, that it has remained dynamically confined as it struggles to expand against a dense, frustrating medium. The VLA polarimetry reveals a compact region of polarized flux, associated with the southern lobe, with a Faraday rotation measure of 3620 rad m\mtwo, suggesting that the southern lobe is deflected from its original southwestern trajectory toward the south and into our line of sight. This bright radio knot, cospatial with the broadest velocity distribution of molecular gas (see \autoref{fig:fountain_overview} and \autoref{fig:jet_detail_expand}), is likely an impact site, showing strong evidence for a dynamical interaction between the radio source and molecular gas. Whether it is the molecular gas that has redirected the jet's trajectory will be discussed in \autoref{sec:discussion}. \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{Fig_muselinemaps.pdf} \end{center} \vspace*{-3mm} \caption{MUSE maps of H$\alpha$ flux, line of sight velocity, and velocity dispersion in the warm ionized nebula, created after modeling and subtracting the stellar continuum as described in \autoref{sec:musedata}. The H$\alpha$ flux map (left panel) is shown with a logarithmic color scale to better show the faint filaments relative to the bright nucleus. Note the blueshifted ``S''-shaped feature near the nucleus in the velocity map (center), strongly reminiscent of the shape of the 8.4 GHz radio source (shown in the flux map, for comparison). Note that these maps properly account for blending of the H$\alpha$ and [\ion{N}{2}] lines. Note also, particularly for the northern filaments, that velocities are higher at higher altitudes from the galaxy center, consistent more with uplift than gravitational freefall. Compare these maps to those for the cold molecular nebula in \autoref{fig:alma_momentmaps}.
We compare the MUSE and ALMA data directly in \autoref{sec:musealma}.} \label{fig:MuseShowcase} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{Fig_extinction.pdf} \end{center} \vspace*{-5mm} \caption{ (\textit{left}) An extinction ($A_V$) map made by scaling the MUSE Balmer decrement map (H$\alpha/$H$\beta$ ratio) following the procedure described at the end of \autoref{sec:musedata}. The 8.4 GHz radio source is overlaid in black contours. The inset panel shows a zoom-in on this $12\times12$ kpc$^2$ region. The color bar is in units of $V$-band magnitudes. (\textit{right}) Electron density map, made by scaling the ratio of the forbidden Sulfur lines ([\ion{S}{2}]$\lambda\lambda$ 6717 \AA\ / 6732 \AA) using the calibration of \citet{proxauf14} and assuming an electron temperature of $T_e=10^4$ K. The region of highest extinction is found just to the south of the nucleus, where the radio jet bends in position angle at the site of a bright radio knot with a large Faraday rotation measure, indicative of abrupt deflection. It is here that CO(2-1) is brightest (\autoref{fig:alma_momentmaps}). The electron density map is highest at the boundaries of the southern radio lobe, and along the long axis of the northern radio lobe. } \label{fig:extinction} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{Fig_bpt.pdf} \end{center} \vspace*{-5mm} \caption{MUSE emission line diagnostic diagrams for spaxels with S/N$>3$ in each line. The left panel shows a standard Baldwin, Phillips, \& Terlevich (BPT; \citealt{baldwin81}) diagnostic plot using the [\ion{O}{3}]$\lambda5007$/H$\beta$ and [\ion{N}{2}]$\lambda6585$/H$\alpha$ line ratios (e.g., \citealt{veilleux87}). Spaxels are color-coded based upon their location relative to boundaries between well-known empirical and theoretical classification schemes \citep{kewley01,kauffmann03,schawinski07} shown in gray dashed and solid lines. 
We also show ``pure shock'' \citep{allen08} as well as ``slow shock + star formation'' \citep{mcdonald11b} composite models in solid colored lines. We discuss these lines in \autoref{sec:muse}. Spaxel color coding corresponds to the panel at right, which also shows the spaxels' distribution on the sky. } \label{fig:bpt} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.98\textwidth]{Fig_musealmaspec.pdf} \end{center} \vspace*{-6mm} \caption{The MUSE H$\alpha$ and ALMA CO(2-1) datacubes reveal similar morphologies at matching velocities, consistent with the hypothesis that the warm ionized and cold molecular gas are co-moving with one another, as would be predicted if the H$\alpha$ emission arose from the warm ionized skins of mm-bright molecular cores. Here we show the MUSE H$\alpha$+[N~\textsc{ii}] profile extracted from a circular aperture with a diameter of 30 spaxels ($\sim6\arcsec$), centered on the galaxy core. We have deblended the H$\alpha$ line from the [N~\textsc{ii}] doublet, and plot the resulting single Gaussian fit to H$\alpha$ with the blue dashed line. The ALMA CO(2-1) spectrum, extracted from a (roughly) matching aperture, is plotted in purple.} \label{fig:MuseALMA} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.98\textwidth]{Fig_musealmadirectcompare.pdf} \end{center} \vspace*{-3mm} \caption{ (\textit{Left panel}) ALMA CO(2-1) vs.~MUSE H$\alpha$ velocity and velocity dispersion. Points are taken from every cospatial spaxel in the $>3\sigma$ overlap region between the warm ionized and cold molecular nebulae. The points have been smoothed with a Gaussian kernel. Contour colors encode the Gaussian kernel density estimate (i.e., a darker color indicates a higher density of data points). (\textit{Right panels}) Maps of the difference and ratio between H$\alpha$ and CO(2-1) velocity centroid and dispersion, made by subtracting and dividing the corresponding MUSE and ALMA moment maps, respectively.
We have applied various corrections to account for, e.g., differing spatial resolutions, as described in \autoref{sec:ratiomaps}. The edges of these maps should be ignored. In the dispersion ratio map, for example, the outermost dark blue rim is smaller than the ALMA beam size, and an artifact of the division. } \label{fig:MuseALMARatios} \end{figure*} \subsection{MUSE maps of the host galaxy and warm nebula} \label{sec:muse} In \autoref{fig:MuseStars} we show the Voronoi-binned MUSE map of stellar line of sight velocity and velocity dispersion within the inner 50 kpc of the galaxy. Only Voronoi-binned spaxels with S/N$>200$ are shown. A deep VLT/FORS $i$-band image of the BCG with MUSE H$\alpha$ contours overlaid is shown for reference. While some background/foreground sources are seen, there are a number of spectroscopically confirmed companions embedded within the stellar envelope of the BCG (Tremblay et al.~in prep.). The galaxy has clearly enjoyed a rich merger history, as is generally the case for all those that sit long enough at the bottom of a cluster potential well. At best, there is only a weak signature of coherent stellar rotation in the inner 50 kpc (NW approaching, SE receding), consistent with expectations for the boxy interior of a ``slow / non-regular rotator'' early type galaxy \citep{cappellari16}. We note that there is some evidence for a minor-axis kinematically decoupled core (KDC, e.g., \citealt{kranovic11}) in the nucleus. This will be discussed in a forthcoming paper (Tremblay et al.~in prep.). The total (stellar and nebular) spectrum, extracted from a spatial aperture that encompasses the galaxy center in the MUSE cube, is shown in \autoref{fig:MuseSpectrum}. All major nebular lines are detected at high signal-to-noise, enabling spatially resolved line maps from H$\beta$ through [\ion{S}{2}]. A selection of these are shown in the side panels of \autoref{fig:MuseSpectrum}. 
We note that [\ion{O}{3}]$\lambda\lambda4959,5007$ \AA\ is spatially extended, but only on the scale of the 10 kpc 8.4 GHz radio source. The remaining lines, particularly those tracing star formation, match the morphology (and linewidth, roughly) of the H$\alpha$ nebula, albeit at lower surface brightness. As is apparent from \autoref{fig:MuseStars}, the major axis of the warm emission line nebula is roughly aligned with the stellar minor axis of the host galaxy. In \autoref{fig:MuseShowcase} we show the MUSE flux, velocity, and velocity dispersion maps for the H$\alpha$ nebula. Just as for the cold molecular gas, the warm ionized nebula has not dynamically equilibrated, as there are no obvious signs of rotation save for the innermost $\sim10$ kpc of the galaxy. There, a blueshifted shell of material is found, cospatial with a similar feature in the molecular gas, clearly matching the shape of the 8.4 GHz radio source. The H$\alpha$ velocity dispersion map reveals thin, bubble-like rims of higher velocity dispersion gas, reaching widths upward of $\sim350$ km s\mone. Given their location and morphology, these broad streams are likely churned by dynamical interaction with the radio source, or the buoyant X-ray cavities it has inflated. Cospatial with these features, \citet{oonk10} discovered coherent velocity streams of warm molecular hydrogen (traced by the H$_2$ 1-0 S(3) and Pa$\alpha$ lines) similarly hugging the edges of the radio source, at roughly the same line of sight velocity and velocity width as those seen in the MUSE H$\alpha$ maps. Again, like the molecular nebula, the northern and southern warm ionized filaments are more difficult to interpret. All show narrow velocity structure, and no evidence for freefall. 
This is the case for a large number of warm nebulae in CC BCGs \citep{hatch07,edwards09,hamer16}, even those for which there is extremely compelling morphological coincidence between filaments and X-ray cavities, suggestive of uplift (see, e.g., IFU observations of the ionized filaments in Perseus, \citealt{hatch06, gendron-marsolais18}). As we will discuss in \autoref{sec:discussion}, A2597 is in many ways like Perseus in that its H$\alpha$ filaments are spatially coincident with X-ray cavities. Dividing the MUSE H$\alpha$ and H$\beta$ flux density maps produces a Balmer decrement map which, following the assumptions discussed in \autoref{sec:musedata}, we scale to create the extinction ($A_V$) map shown in \autoref{fig:extinction}\textit{a}. The highest extinction, and therefore perhaps the densest, dustiest gas, is found to the south of the nucleus, where the radio source is deflected. CO(2-1) is brightest at this same knot (compare the ALMA moment zero map in \autoref{fig:alma_momentmaps} with \autoref{fig:extinction}). The northeastern dust lane seen in optical imaging is also clear. It is along this rim that we find extended 230 GHz continuum emission (see \autoref{fig:continuum}). This could indeed be dust continuum emission, detected at $\sim3\sigma$ alongside the $\sim425\sigma$ non-thermal mm-synchrotron point source associated with the AGN. \autoref{fig:extinction}\textit{b} shows the electron density map (linearly proportional to the total gas density at $\sim10^4$ K) made from the [\ion{S}{2}]$\lambda\lambda$ 6717 \AA\ / 6732 \AA\ line ratio. The densest gas is found along the jet axis, perhaps due to dredge-up of cooler, denser ionized gas from the nucleus, and also along a southerly ``shell'' that appears to hug the boundary of the southern radio jet as it bends in position angle.
If real (and it likely is, given that it is also seen in the $A_V$ map made from the Balmer lines), this may be tracing the dense population of clouds that form the impact site at which the jet is deflected. We will discuss this possibility in \autoref{sec:discussion}. In \autoref{fig:bpt} we show a spatially resolved Baldwin, Phillips, \& Terlevich (BPT; \citealt{baldwin81}) diagnostic plot using the [\ion{O}{3}]$\lambda5007$/H$\beta$ and [\ion{N}{2}]$\lambda6585$/H$\alpha$ line ratios (e.g., \citealt{veilleux87}) extracted from each spaxel in the MUSE cube. Galaxies (or individual regions within a single galaxy, as shown here) stratify in BPT space based upon the relative contributions of stellar and non-stellar ionization sources. The solid gray curve shows the empirical star formation line from \citet{kauffmann03}, the dashed gray curve shows the theoretical maximum starburst model of \citet{kewley01}, and the dash-dotted gray line is the empirical division between LINER- and Seyfert-like sources as defined by \citet{schawinski07}. We have color-coded the data points based upon the regions in which they sit. The vast majority of points lie in the ``composite'' or ``AGN-H\textsc{ii}'' region as defined by \citet{kewley06} (also called the ``transition region'' in \citealt{schawinski07}). This ``classification'' should not be over-interpreted, as the situation for CC BCGs is highly complex and likely represents a superposition of several different ionization sources (see, e.g., the discussions of \citealt{ferland09,mcdonald11b}). To illustrate this, we plot lines of constant shock velocity (in orange) from the \citet{allen08} library of fast radiative shocks, assuming a gas density of $n=1000$ cm$^{-3}$ (i.e., roughly the value in the central regions of \autoref{fig:extinction}\textit{b}), as well as the ``slow shock + star formation'' composite models adapted from \citet{farage10,mcdonald11b}.
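The demarcation curves referenced above have simple closed forms, so the per-spaxel classification can be sketched as below. The functional forms are the standard published ones (Kewley et al. 2001; Kauffmann et al. 2003); the three-way classifier is our own schematic reduction, which omits the Seyfert/LINER division shown in the figure.

```python
def kewley01(x):
    """Theoretical 'maximum starburst' line (Kewley et al. 2001)."""
    return 0.61 / (x - 0.47) + 1.19

def kauffmann03(x):
    """Empirical star-formation boundary (Kauffmann et al. 2003)."""
    return 0.61 / (x - 0.05) + 1.30

def classify_spaxel(log_nii_ha, log_oiii_hb):
    """Place one spaxel on the [N II]-BPT plane.

    Schematic three-way split: below the Kauffmann curve is star-forming,
    between Kauffmann and Kewley is the 'composite'/'AGN-HII' transition
    region, and above Kewley is AGN-like. The x < 0.05 and x < 0.47 guards
    keep each curve on the correct side of its vertical asymptote.
    """
    x, y = log_nii_ha, log_oiii_hb
    if x < 0.05 and y < kauffmann03(x):
        return "star-forming"
    if x < 0.47 and y < kewley01(x):
        return "composite"
    return "AGN"
```

Applying this to every spaxel with S/N $>3$ in all four lines, and coloring by the returned class, reproduces the kind of spatial stratification shown in the right panel of the figure.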
Debate continues as to the relative role played by stellar photoionization, (slow) shocks \citep{mcdonald11b}, conduction \citep{sparks12}, and cosmic ray heating \citep{ferland09,donahue11,fabian11a,mittal11,johnstone12}. Galaxies are enormous, complex structures, and so any line of sight that passes through them will inevitably reveal a superposition of many physical processes. It is likely that all of these ionization mechanisms play some role in heating the envelopes of cold clouds. We note, finally, that new \textit{HST}/COS far-ultraviolet spectroscopy of the filaments in A2597 will be discussed in a forthcoming paper (Vaddi et al.~in prep.). \subsection{MUSE and ALMA Comparison} \label{sec:musealma} Comparing the MUSE and ALMA data directly reveals strong evidence that the warm ionized and cold molecular nebulae are not only cospatial, they are comoving. In \autoref{fig:MuseALMA} we overplot the H$\alpha$+[\ion{N}{2}] and CO(2-1) profiles extracted from matching 6\arcsec\ diameter apertures centered on the galaxy core in the MUSE and ALMA cubes, respectively. The panels at the sides of \autoref{fig:MuseALMA} show matching H$\alpha$ and CO(2-1) morphology at the broadest wings of each line, consistent (though not \textit{uniquely} so) with the hypothesis that the two lines stem from largely the same population of clouds. This cannot be true entirely, as the deblended H$\alpha$ FWHM is $565 \pm 25$ km s\mone, a factor of $\sim2$ broader than the $252\pm14$ km s\mone\ FWHM of the CO(2-1) line. This velocity width mismatch is more readily apparent in \autoref{fig:MuseALMARatios}, where we plot CO(2-1) line-of-sight velocity and dispersion against the same quantities for H$\alpha$. We have smoothed the data points (i.e., one point for each cospatial spaxel in the registered MUSE and ALMA cubes) with a Gaussian, and show shaded regions indicating ratios of 1:1--2:1 and 2:1--4:1.
While the line velocity centroids lie largely along the 1:1 line, the line widths preferentially span the 2:1--4:1 range. The rightmost panels of \autoref{fig:MuseALMARatios} show the difference and ratio, respectively, between the MUSE H$\alpha$ and ALMA CO(2-1) velocity and dispersion maps. In the velocity difference map, bluer colors mean that the CO(2-1) velocity centroid is slightly blueshifted relative to the H$\alpha$ velocity centroid. The velocity difference map is largely smooth and below $\pm45$ km s\mone, which shows that the H$\alpha$ and CO(2-1) line velocity centroids track one another closely across the entire overlap region between the molecular and ionized nebulae. The velocity dispersion ratio map (\autoref{fig:MuseALMARatios}, right panel) shows that, on average, the H$\alpha$ velocity dispersion is a factor of $2-3$ times broader than that for CO(2-1). The broader observed velocity widths for H$\alpha$ are important but not \textit{necessarily} surprising, given that our line of sight is likely to intersect more warm gas (and therefore a broader velocity distribution) than cold molecular gas, owing to the large contrast between their volume filling factors. We discuss this further in \autoref{sec:discussion}. \section{Discussion} \label{sec:discussion} This paper presents three results: \begin{enumerate} \item \textbf{Cold gas is cospatial and comoving with warm gas}. A three billion solar mass filamentary molecular nebula is found to span the inner 30 kpc of the galaxy. Limited by the critical density of CO(2-1), its volume filling factor must be low, and so the nebula must be more like a ``mist'' than a monolithic slab of cold gas (e.g., \citealt{mccourt18}). These cold clouds are likely wrapped in warm envelopes that shine with Balmer and forbidden line emission at the cloud's interface with the hot X-ray atmosphere, explaining why the H$\alpha$ and CO(2-1) nebulae are largely cospatial and comoving.
This hypothesis is now supported by a large and growing number of ALMA observations of CC BCGs (e.g., papers by Russell, McNamara and collaborators). \item \textbf{Cold gas is moving inward, and perhaps feeding the black hole}. Clouds are directly observed to fall inward toward the galaxy nucleus, probably within close proximity ($\lae 100$ pc) to the central supermassive black hole. These clouds may therefore provide a substantial (even dominant) component of the mass flux toward the black hole accretion reservoir. This result, discussed in \citet{tremblay16} and considered in a broader context here, is consistent with a major prediction of the chaotic cold accretion (CCA) model \citep{gaspari13}. \item \textbf{Cold gas is dynamically coupled to mechanical black hole feedback}. In projection, a bright rim of blueshifted molecular gas appears to encase the radio lobes (see, e.g., \autoref{fig:channel_maps}), perhaps suggestive of dynamical coupling between the cold molecular gas and the powerful radio jet plowing through it. The broadest distribution of cold gas velocities is found cospatial with the southern jet (\autoref{fig:jet_detail_expand}, right panel). Just south of the radio core, this jet deflects in position angle, perhaps because it has exchanged momentum with a dense ensemble of cold clouds. Nearly all cloud velocities, save for the most extreme wings of the distribution, are nevertheless below the circular speed at any given radius, and so the clouds should be falling inward unless tethered to the hot medium. Roughly one billion \Msol\ of cold gas is found in dynamically short-lived filaments spanning altitudes greater than 10 kpc from the galaxy center, and may be draped around the rims of buoyant X-ray cavities.
We argue that effectively all of these non-equilibrium cold gas structures are directly or indirectly due to mechanical black hole feedback, as mediated either by jets, buoyant hot cavities, or turbulence in the velocity field of the hot atmosphere. \end{enumerate} It is possible that the molecular and ionized nebulae at the heart of Abell 2597 effectively form a galaxy-scale ``fountain'', wherein cold gas drains into the black hole accretion reservoir, powering a jet- or cavity-driven plume of uplifted low-entropy gas that ultimately rains back toward the galaxy center from which it came. This scenario might establish a long-lived heating-cooling feedback loop, mediated by the supermassive black hole, which would act much like a mechanical ``pump'' for this fountain. \subsection{The Fountain's ``Drain''} We directly observe at least three cold molecular clouds moving toward what would be the fountain's drain (see \autoref{sec:natureresult} and \citealt{tremblay16}). If this line-of-sight observation is at all representative of a (much) larger three-dimensional distribution of inward-moving clouds, and if indeed they are as close to the black hole as corroborating evidence suggests they are, they could supply on the order of $\sim0.1$ to a few \Msol\ yr\mone\ of cold gas to the black hole's fuel reservoir. The observation would then be consistent with a major prediction of \citet{gaspari13,gaspari15,gaspari17b}, who argue that nonlinear condensation from a turbulent, stratified hot halo induces a cascade of multiphase gas that condenses from the $\sim10^7$ K to the $\sim20$ K regime. This cooling ``rain'' manifests as chaotic motions that dominate over coherent rotation (with turbulent Taylor number $<1$; e.g., \citealt{gaspari15}).
Warm filaments condense along large-scale turbulent eddies (generated, for example, by AGN feedback), naturally creating extended and elongated structures like the H$\alpha$ filaments ubiquitously observed in CC BCGs, and possibly explaining their apparent close spatial association with radio jets and X-ray cavities (e.g., \citealt{tremblay15}). Warm overdensity peaks further condense into many cold molecular clouds\footnote{Though the need for dust grains to act as a catalyst for the formation of molecular gas remains a persistent issue, e.g., \citealt{fabian94,voit11}.}, hosting most of the total mass, that form giant associations. The thermodynamics and kinematics of the cooler gas phases should then retain ``memory'' of the hot plasma from which they have condensed \citep{gaspari17b,gaspari18,voit18b}. Despite important differences (reviewed in part by \citealt{hogan17,gaspari18,voit18b,pulido18}), the chaotic cold accretion model of \citealt{gaspari13} succeeds alongside the ``circumgalactic precipitation'' and ``stimulated feedback'' models of \citet{voit15b} and \citet{mcnamara16} (respectively) in predicting many of the major observational results we find in Abell 2597. Were we to (roughly) attempt to unify these models within the same ``fountain'' analogy, all would effectively include a ``drain'' into which cold clouds fall, providing a substantial (even dominant) mass flux toward the black hole fuel reservoir. That we have strong observational evidence for exactly such a drain in Abell 2597 enables us to place at least broad constraints on how the drain might operate. For example, whether they condense in the turbulent eddies of cavity wakes or not, a cascade of gas cooling from hot plasma will still require roughly a cooling time $t_\mathrm{cool}$ to reach the molecular phase. 
Using the buoyant rise time as a rough age estimate, the oldest X-ray cavities in A2597 are $\sim2\times10^8$ yr \citep{tremblay12b}, which is roughly comparable to the cooling time at the same $20-30$ kpc radius \citep{tremblay12a}. The time it takes for clouds to descend from any given altitude to the center of the galaxy is a more complicated issue. Following \citealt{lim08}, a thermal instability, precipitating at rest with respect to the local ICM velocity, will freefall in response to the gravitational potential and accelerate to a velocity $v$ given roughly by \begin{equation} v = \sqrt{v(r_0)^2 + 2GM \left( \frac{1}{r + a} - \frac{1}{r_0 + a} \right)}, \end{equation} where $v(r_0)$ is its initial velocity (assumed to be zero if the ICM and BCG velocities are roughly matched), $r_0$ is its starting radius relative to the BCG core, $G$ is the gravitational constant, $M$ is the total gravitating mass of the BCG, and $a$ is its scale radius (which is roughly half the effective radius $R_e$, as $a\approx R_e / 1.815$). For a scale radius of $a\sim20$ kpc and a gravitating mass of $M\approx10^{12}$ \Msol\ \citep{tremblay12a}, the cooling cloud would attain a rough velocity of $\sim470$ km s\mone, $\sim380$ km s\mone, or $\sim300$ km s\mone\ if it fell from a height of 20, 10 or 5 kpc, respectively. Observed line-of-sight cloud velocities in Abell 2597 are significantly lower than these freefall values, just as they are for effectively all other CC BCGs thus far observed with ALMA (see e.g. \citealt{vantyghem18}, for the latest example). The clouds might still be ballistic if most of their motion is contained in the plane of the sky, but this argument weakens with every new observation showing the same result. It is therefore now clear that the velocity of cold clouds in the hot atmospheres of CC BCGs cannot be governed by gravity alone. Simulations and arguments by (e.g.) 
\citet{gaspari18} and \citet{li18} indeed suggest that the clouds must have sub-virial velocities, consistent with those observed in CC BCGs including Abell 2597 (\autoref{sec:velocitystructure}). If a cooling cloud's terminal speed is smaller than typical infall speeds \citep{mcnamara16}, it can drift in the macro-scale turbulent velocity field of the hot X-ray atmosphere \citep{gaspari18}, whose dynamical structure is sculpted by jets, sound waves, and bubbles. The terminal velocity of cold clouds is set by the balance of their weight against the ram pressure of the medium through which they move (e.g., \citealt{li18}). That the clouds in Abell 2597 are apparently not in freefall may simply mean that their terminal velocity is the lower of the two speeds. While the extreme density contrast between molecular gas and hot plasma remains an issue, one simple explanation is that the clouds' velocity in the hot atmosphere has been arrested by more efficient coupling mediated by their warm ionized skins, which would effectively lower their average density (and therefore their terminal speed) and increase the strength of any magnetic interaction (e.g., \citealt{fabian08}). Given the apparent lack of coherent velocity gradients along the molecular and ionized filaments, it is also likely that the multiphase nebula is dynamically young. Such a result is unsurprising in the context of chaotic cold accretion, precipitation, and stimulated feedback models. In essence, all suggest that the cold clouds are just one manifestation of what is ultimately the same hydrodynamical flow, drifting in the velocity field of the hot plasma. That velocity field, in turn, is continually stirred by subsonic turbulence induced by buoyant bubbles, jets, and merger-driven sloshing \citep{gaspari18}. This omni-present dynamical mixing may inhibit virialization, preventing the formation of smooth gradients over kpc scales. 
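The freefall speeds quoted in the preceding paragraph can be recovered with a short numerical sketch (our illustration, not the authors' code; the physical constants and the evaluation at $r \rightarrow 0$ are our assumptions):

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]
KPC = 3.0857e19    # kiloparsec [m]

def freefall_speed_kms(r0_kpc, m_sol=1e12, a_kpc=20.0, v0=0.0):
    """Speed (km/s) of a cloud released at rest at radius r0 and falling
    to the galaxy center (r -> 0) in a potential with scale radius a."""
    r0, a, M = r0_kpc * KPC, a_kpc * KPC, m_sol * M_SUN
    v2 = v0**2 + 2.0 * G * M * (1.0 / a - 1.0 / (r0 + a))
    return math.sqrt(v2) / 1e3

for h in (20, 10, 5):
    print(f"fall from {h:2d} kpc: ~{freefall_speed_kms(h):.0f} km/s")
```

This recovers the $\sim470$, $\sim380$, and $\sim300$ km s\mone\ values quoted above to within a few percent.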
At the very least, the recent \textit{Hitomi} observation of Perseus confirms that bulk shear in the hot plasma is similar to molecular gas speeds observed with ALMA, supporting the idea that they move together \citep{hitomi16}. At sub-kpc scales, inelastic collisions and tidal stress between clouds can funnel cold gas toward the nucleus, which we observe directly in Abell 2597 (\autoref{sec:natureresult} and \citealt{tremblay16}). Chaotic cold accretion can then boost black hole feeding far in excess of the Bondi rate, powering the ``pump'' at the fountain's center. \begin{figure*} \begin{center} \includegraphics[scale=0.17]{Fig_chandra.pdf} \end{center} \vspace*{-4mm} \caption{ A new, deeper look at the X-ray cool core cospatial with the A2597 BCG. (\textit{a}) 626 ksec \textit{Chandra} X-ray observation of $0.2 - 10$ keV emission in the innermost $\sim250\times250$ kpc$^2$ of the cluster. The X-ray data have been convolved with a Gaussian gradient magnitude (GGM) filter (e.g., \citealt{sanders16b}) to better show ripples and cavities. The optical Petrosian radius of the BCG's stellar component is (roughly) marked by the gray dashed ellipse. Brackets indicate the relative fields of view shown in the surrounding panels. (\textit{b}), (\textit{c}), and (\textit{d}) the same data, slightly zoomed in, with MUSE H$\alpha$, ALMA CO(2-1), and 8.4 GHz radio contours overlaid in green, blue, and gray, respectively. Moving inward, the ALMA contours in panel \textit{d} show emission that is $3\sigma$, $5\sigma$, $10\sigma$, and $20\sigma$ over the background RMS noise level. With the caveat that projection effects complicate interpretation, the H$\alpha$ nebula shows strong circumstantial evidence that at least some of the filaments are draped around the edges of the buoyant X-ray cavities marked by arrows. 
} \label{fig:deepchandra} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=0.47\textwidth]{Fig_musealmavel.pdf} \end{center} \vspace*{-3mm} \caption{A side-by-side comparison of the MUSE H$\alpha$ and ALMA CO(2-1) LOS velocity maps, shown on the same spatial and velocity scales. Insets show a zoom-in on the nuclear region for both maps. Black contours again show the 8.4 GHz radio source. Where they overlap, the MUSE and ALMA velocity maps look very similar to one another. These maps are more quantitatively compared in \autoref{fig:MuseALMARatios}. } \label{fig:MuseALMAVelocity} \end{figure} \subsection{The Fountain's ``Plume''} There is little doubt that this pump injects an enormous amount of kinetic energy into the hot $\sim10^{7}-10^{8}$ K phase. In \autoref{fig:deepchandra} we present a new, deep \textit{Chandra} X-ray map of the A2597 BCG and its outskirts, made by combining the new observations from our recent Cycle 18 Large Program with the archival exposures previously published by \citet{mcnamara01,clarke05} and \citet{tremblay12a}. The new map contains 1.54 million source counts collected over 626 ksec of total integration time, enabling an exquisitely deep look at the X-ray cavity network that permeates the innermost 30 kpc of the cool core. The figure makes use of a Gaussian Gradient Magnitude (GGM) filter as an edge-detector \citep{sanders16b,walker17}, revealing the X-ray cavities in sharp relief. Detailed X-ray morphology, along with deep spectral maps, will be discussed in a forthcoming paper (Tremblay et al.~in prep). We preview the map here because it makes obvious the need to consider uplift by buoyant hot cavities as a primary sculptor of morphology in the cold and warm nebulae. 
To the north and south, H$\alpha$ filaments (green contours on \autoref{fig:deepchandra}) appear draped over the edges of the inner X-ray cavities, as if they have either been uplifted as they buoyantly rise, or have formed \textit{in situ} along their wakes and rims (e.g., \citealt{brighenti15,mcnamara16}). The northernmost H$\alpha$ filament has a morphology and X-ray cavity correspondence that is reminiscent of the Northwestern ``horseshoe'' filament in Perseus (e.g., \citealt{hatch06,fabian08,gendron-marsolais18}). In projection, the southern H$\alpha$ filaments reach a terminus at the rim of the southern cavity, forking like a snake's tongue into two thinner filaments. As seen in the H$\alpha$ velocity map (\autoref{fig:MuseShowcase}), one filament approaches and the other recedes, yet both have a coherent bulk line of sight velocity that is similar to the expected terminal velocity of the buoyantly rising hot bubble with which they are cospatial (roughly half the sound speed in the hot gas, or $\sim375$ km s\mone, \citealt{tremblay12a}). A similar ``snake's tongue'' split is seen in the redshifted northern filaments, whose $\gae15$ kpc outskirts at the edges of cavities show the fastest LOS velocities of any optical emission line in the galaxy ($+400$ km s\mone). The cospatial and comoving components of the warm and cold nebulae likely trace the same population of clouds, as we have argued repeatedly throughout this paper, and as has been suggested by many authors over many years (e.g., \citealt{odea94,jaffe97,jaffe05,wilman06,emonts13,anderson17}). In \autoref{fig:MuseALMAVelocity} we compare the H$\alpha$ and CO(2-1) line of sight velocity maps side-by-side. Where they overlap, the projected velocity of the molecular gas matches that of the warm gas, consistent with the hypothesis that much of the Balmer emission stems from warm ionized envelopes of cold molecular cores, tracing their interface with the ambient hot gas. 
As projected on the sky, the H$\alpha$ nebula shows line of sight velocities consistently in excess of mean CO(2-1) velocities only at galaxy-centric radii that are greater than the outermost extent of the detected CO(2-1) emission. Were we able to detect CO(2-1) at these large radii, we would likely find it at similar LOS velocities as the H$\alpha$. The fact that the latter shows a factor of two broader linewidth, then, is not necessarily surprising. Perhaps simply because of a sensitivity floor, cold molecular gas is confined to smaller radii. Any given line of sight therefore intersects a smaller volume occupied by CO(2-1)-bright clouds -- and therefore smaller-scale turbulent eddies -- which in turn have smaller velocity dispersions. H$\alpha$ is vastly brighter (i.e., easier to detect at large radii) than CO(2-1) relative to the sensitivity limits of our optical and mm observations, respectively. Moreover, CO(2-1)-bright molecular clouds can dissociate easily, absent sufficient shielding, and so may be more vulnerable to destruction at larger galaxy-centric radii. \ion{H}{1} in A2597, as mapped in detail by \citet{odea94}, shows broader linewidths more consistent with those found in H$\alpha$, supporting this notion. In any case, if a substantial component of the H$\alpha$ filaments has been buoyantly uplifted in the rise of the X-ray cavities, then so too must be the molecular filaments. Assuming (hypothetically) that coupling efficiency is not an issue, simple energetics arguments suggest that the cavity network in Abell 2597 is powerful enough to uplift the entirety of the cold molecular nebula. Archimedes' principle dictates that the bubbles cannot lift more mass than they displace (e.g., \citealt{mcnamara14,russell17,vantyghem16}). 
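A back-of-envelope version of this uplift budget can be sketched numerically (the mass and energy values are those adopted in the surrounding text, from \citealt{tremblay12b}; the $\sim250$ km s\mone\ characteristic cloud speed is our illustrative assumption, chosen to match the quoted kinetic energy):

```python
M_SUN_G = 1.989e33          # solar mass [g]

def kinetic_energy_erg(m_sol, v_kms):
    """(1/2) M v^2 in erg, for a gas mass given in solar masses."""
    return 0.5 * m_sol * M_SUN_G * (v_kms * 1e5) ** 2

m_mol = 3.2e9               # molecular nebula mass [Msun]
m_displaced = 7e9           # hot gas mass displaced by the cavities [Msun]
e_cavity = 4e58             # 4pV mechanical energy of the cavity system [erg]

ke = kinetic_energy_erg(m_mol, 250.0)     # assumed ~250 km/s cloud speed
print(f"nebula kinetic energy: {ke:.1e} erg")
print(f"Archimedes allows uplift: {m_mol < m_displaced}")
print(f"cavity energy / nebula KE: {e_cavity / ke:.0f}x")
```

With these numbers the nebula's kinetic energy is $\sim2\times10^{57}$ erg, roughly an order of magnitude below the cavity energy budget, consistent with the argument above.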
The mass of hot gas displaced in the inflation of the cavity network is at least $\sim7\times10^9$ \Msol\ (using X-ray gas density and cavity size measurements from \citealt{tremblay12b}, assuming spherical cavity geometry, and adopting the arguments in \citealt{gitti11}), while the total cold gas mass in the molecular nebula is less than this ($\sim3.2\times10^9$ \Msol). Moreover, the cavity system has an estimated $4pV$ mechanical energy of $\sim4\times10^{58}$ ergs \citep{tremblay12b}, while the total kinetic energy in the cold molecular nebula (e.g., $\frac{1}{2} M_\mathrm{mol} v^2$) is about an order of magnitude lower, at roughly $\sim2\times10^{57}$ ergs. Therefore, if we ignore coupling efficiency, uplift of the entire mass of the molecular nebula would be safely within the kinetic energy budget of the system. Any such uplift would be temporary. The escape speed from the galaxy, which is roughly twice the circular speed at any given radius, is far in excess of any observed line of sight velocity in the system. After decoupling from either the cavity wake or jet entrainment layer that has lifted them to higher altitudes, cold clouds should fall back inward at their terminal speed, drifting in the hot gas velocity field as they descend. These infalling clouds may join the population we observe in absorption, powering black hole activity once again, and keeping the fountain long-lived. \acknowledgments This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2012.1.00988.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. We are grateful to the European ALMA Regional Centres, particularly those in Garching and Manchester, for their dedicated end-to-end support of data associated with this paper. 
We have also received immense support from the National Radio Astronomy Observatory, a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This work is also based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 094.A-0959 (PI: Hamer). We also present observations made with the NASA/ESA \textit{Hubble Space Telescope}, obtained from the data archive at the Space Telescope Science Institute (STScI). STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. GRT thanks EST and AST for educating him on Nature's many sources of uplift. GRT also acknowledges support from the National Aeronautics and Space Administration (NASA) through \textit{Chandra} Award Number GO7-8128X as well as Einstein Postdoctoral Fellowship Award Number PF-150128, issued by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. BJW, SWR, JAZ, PEJN, RPK, WRF, CJ, and YS also acknowledge the financial support of NASA contract NAS8-03060 (Chandra X-ray Center). FC acknowledges the European Research Council for the Advanced Grant Program \# 267399-Momentum. MG is supported by NASA through Einstein Postdoctoral Fellowship Award Number PF5-160137, as well as \textit{Chandra} grant GO7-18121X. The work of SAB, CPO, and BRM was supported by a generous grant from the Natural Sciences and Engineering Research Council of Canada. HRR and TAD acknowledge support from a Science and Technology Facilities Council (STFC) Ernest Rutherford Fellowship. ACE acknowledges support from STFC grant ST/P00541/1. ACF acknowledges support from ERC Advanced Grant `Feedback'. MNB acknowledges funding from the STFC. Basic research in radio astronomy at the Naval Research Laboratory is supported by 6.1 Base funding. 
This research made use of \texttt{Astropy}\footnote{\url{http://www.astropy.org/}}, a community-developed core Python package for Astronomy \citep{astropypaper, astropypaper2}. Some MUSE data reduction and analysis was conducted on \textit{Hydra}, the Smithsonian Institution's High Performance Cluster (SI/HPC). \facilities{ CXO (ACIS-S), HST (ACS, NICMOS), VLT: Yepun } \software{% \texttt{Astropy} \citep{astropypaper, astropypaper2}, \texttt{CASA} \citep{mcmullin07}, \texttt{CIAO} \citep{fruscione06}, \texttt{IPython} \citep{ipython}, \texttt{Matplotlib} \citep{matplotlib}, \texttt{NumPy} \citep{numpy}, \texttt{PySpecKit} \citep{pyspeckit}, \texttt{scipy} \citep{scipy} } \dataset[DOI-linked Software Repository for this Paper]{http://doi.org/10.5281/zenodo.1233825}
\section{Introduction} Rapid experimental advancements have spawned an international race towards the first experimental {\it quantum supremacy} demonstration---in which a quantum computer outperforms a classical one at some task \cite{ bremner2010classical, aaronson2011computational,preskill2012quantum, farhi2016quantum,2016arXiv160800263B, harrow2017quantum, bremner2017achieving, 2017arXiv171205384B,2017arXiv171005867P,Aaronson:2017:CFQ:3135595.3135617, 2018arXiv180404797L,2018arXiv180501450C, bouland_et_al:LIPIcs:2018:8867, bermejo2018architectures, bouland2018quantum, dalzell2018many,2018arXiv190710749}. There is likewise interest in understanding the effectiveness of low-depth quantum circuits for e.g.~machine learning \cite{2017Natur.549..195B} and quantum simulation \cite{bermejo2018architectures}. Missing in this theory is a quantification of the entanglement (manifest in correlations between problem variables) that a given quantum computation can support \cite{2013JPhA...46U5301B}. Quantification of the minimal-sized circuits needed to produce---even in principle---maximally correlated quantum states fills a gap in the theory of quantum supremacy and low-depth circuits in general. Indeed, such minimal-depth circuits---as predicted by our theory---seem to be the most difficult small quantum circuits to simulate classically. The goal of quantum supremacy is to perform a task that is beyond the capability of any known classical computer. A naive starting point would be to consider the evident memory limitations of classical computers. If we consider an ideal quantum state, we must store at most $2^{n+1}\cdot 16$ bytes of information, assuming 32-bit precision. This upper bound reaches $80$ terabytes (TB) at just less than 43 qubits and $2.2$ petabytes (PB) at just under 47. 
Eighty TB and $2.2$ PB are commonly referenced as the maximum memory storage capacity of a rapid supercomputing node and of Trinity, the supercomputer with the world's largest memory, respectively. And so quantum supremacy might already be possible with $\geq$ 47 qubits (strong simulation). The problem is that creating states with $2^{n+1}$ independent degrees of freedom requires $O(\exp[n])$ gates, well beyond the coherence time of any device outside of the fault-tolerance threshold. We must therefore search for another supremacy protocol, one which requires lower-depth circuits. Broadly speaking, the leading proposals for quantum supremacy can be divided into two categories: (i) those that provide strong complexity-theoretic evidence of classical intractability (based, for example, on the non-collapse of the polynomial hierarchy) and (ii) those that promise to be imminent candidates for experimental realization. Examples in the former category include sampling from (a) boson sampling circuits \cite{aaronson2011computational}, (b) IQP circuits \cite{bremner2010classical}, and (c) DQC1 circuits \cite{PhysRevLett.120.200502}. A leading example in the latter category is the problem of sampling from random quantum circuits. The existence of an efficient classical algorithm that can simulate random quantum circuits seems unlikely. In particular, it would imply the violation of the Quantum Threshold Assumption (QUATH) \cite{Aaronson:2017:CFQ:3135595.3135617}. However, this says nothing of the number of qubits and the depths of the circuits required to first show this separation between quantum and classical computational devices. To address this, all arguments to date have extrapolated---based on numerics or counting resources---where the classical intractability crossover point will occur. 
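The qubit-count crossovers quoted above follow directly from the $2^{n+1}\cdot 16$ byte estimate (a sketch assuming decimal terabytes and petabytes; the search helper is ours):

```python
def state_memory_bytes(n):
    """Bytes needed to store an ideal n-qubit state, per the
    2^(n+1) * 16 byte estimate used in the text."""
    return 2 ** (n + 1) * 16

def smallest_n_exceeding(limit_bytes):
    """Smallest qubit count whose state no longer fits in limit_bytes."""
    n = 1
    while state_memory_bytes(n) <= limit_bytes:
        n += 1
    return n

print(smallest_n_exceeding(80e12))    # 80 TB node    -> 42 (just under 43)
print(smallest_n_exceeding(2.2e15))   # 2.2 PB (Trinity) -> 46 (just under 47)
```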
Indeed, our theory fills in a missing gap by providing lower bounds under the empirically established assumption that random circuits producing highly correlated states are difficult to simulate. Interestingly, we found a window where the maximal possible amount of entanglement is strictly upper-bounded by half the number of qubits in ebits---prior numerical findings are positioned inside this narrow window. An observation of central importance is that existing quantum processors rely on qubits where the restriction is that these qubits interact on the 2D planar lattice. In the long-term, the specific layout will be of less consequence. However, for low-depth circuits a subtle implication is that the lattice embodies a small-world property, in which long-range correlations must be induced as a sequence of nearest neighbor operations. Indeed, the Hilbert space describing the quantum processor is entirely induced by a tensor network \cite{2015JSP...160.1389B} with the same underlying grid-geometry of the Hamiltonian governing the quantum processor itself. Our bounds are formulated in this setting and are generally applicable across all current quantum supremacy protocols. \section*{Background} The state-space describing a contemporary quantum computer can be induced by the underlying geometry of the system Hamiltonian's coupling matrix, with entries $J_{ij}$---we argue that this is particularly relevant for low-depth circuits. Contemporary processors sequence local and nearest neighbor gates on a rectangular array of qubits: the corresponding Hilbert space will be formed accordingly. We define $Q$ as the support of the matrix formed by the $J_{ij}$'s. A quantum process is hence a space-time diagram codified by a triple of natural numbers $l \times m \times g$ where we assume $n = l\cdot m$ qubits enumerate the nodes of a rectangular lattice $Q$ and $g$ is the gate-depth of circuits acting on $Q$. 
As will be seen, the variation over all circuits of depth at most $g$ acting on an $l \times m$ qubit grid lifts to a state-space. Here the edges of $Q$ connect $2(\sqrt{n}-1)\sqrt{n}$ horizontal (otherwise vertical) nearest neighbor pairs---where $\sqrt{n}$ will be deformed later so as to deviate from a perfect square and hence capture the rectangular structure of certain contemporary quantum information processors (see \ref{appendix:grid}). We will fix a canonical basis found from iterating all possible binary values of the qubits positioned on the nodes of $Q$, which is given by the complex linear extension of the domain $\{0,1\}^l\times \{0,1\}^m$. This assignment lifts the internal legs of $Q$ to linear operators between external (qubit) nodes and hence fully defines our state-space. Indeed, the grid structure induces a dichotomy between tensors of (i) valence (3,1) and (ii) valence (4,1) where the first is of type $\mathbb{C}_\chi^{\otimes 3} \rightarrow \mathbb{C}_2$ and the second is $\mathbb{C}_\chi^{\otimes 4} \rightarrow \mathbb{C}_2$. The parameter $\chi$ will be defined later as the internal bond dimension. We note that the minimum edge cut bipartitioning $n$ qubits into two halves is $\mathrm{mincut}(Q) = \sqrt{n}$, which will become a quantity of significance. Rank is the Schmidt number (the number of non-zero singular values) across any of the bipartitions into $\lceil n/2 \rceil$ qubits on a grid. Rank provides an upper-bound on the bipartite entanglement that a quantum state can support---as will be seen, a rank-$k$ state has at most $\log_2(k)$ ebits of entanglement. This provides an entanglement coarse-graining which we use to quantify circuits. An ebit is a unit of entanglement contained in a maximally entangled two-qubit (Bell) state. A quantum state with $q$ ebits of entanglement (quantified by any entanglement measure) contains the same amount of entanglement (in that measure) as $q$ Bell states. 
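The grid combinatorics above (edge count and minimum cut of $Q$) are easy to verify with a short sketch (function names are ours):

```python
import math

def grid_edges(l, m):
    """Number of nearest-neighbor couplers on an l x m rectangular lattice."""
    return l * (m - 1) + m * (l - 1)

def grid_mincut(l, m):
    """Minimum edge cut bipartitioning the l*m qubits into two halves."""
    return min(l, m)

# For a square grid of n = l*m qubits this reduces to 2(sqrt(n) - 1)sqrt(n)
# edges and a minimum cut of sqrt(n), as stated in the text:
n = 49
s = math.isqrt(n)
assert grid_edges(s, s) == 2 * (s - 1) * s   # 84 couplers on a 7x7 grid
assert grid_mincut(s, s) == s                # mincut(Q) = sqrt(n) = 7
```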
If a task requires $r$ ebits, it can be done with $r$ or more Bell states, but not with fewer. Maximally entangled states in $\mathbb{C}^d\otimes \mathbb{C}^d$ have $\log_2(d)$ ebits of entanglement. The question is then to upper bound the maximum amount of entanglement a given quantum computation can generate, turning to the aforementioned entanglement coarse-graining to classify quantum algorithms in terms of both the circuit depth, as well as the maximum ebits possible. For low-depth circuits, these arguments are surprisingly relevant. To understand this, we note that the maximum number of ebits generated by a fully entangling two-qubit gate acting on a pair of qubits is never more than a single ebit. We then consider that the maximum qubit partition with respect to ebits is into two (ideally) equal halves, which is never more than $\lceil n/2 \rceil$. We then arrive at the general result that a $g$-depth quantum circuit on $n$ qubits never applies more than $\min\{\lceil n/2 \rceil, g\}$ ebits of entanglement. This in turn (see \ref{appendix:lb}) puts a lower-bound of $\log_2 \chi = \sqrt{n}/2$ on the two-qubit gate-depth to potentially drive a system into a state supporting the maximum possible ebits of entanglement. However, the grid structure requires the two-qubit gates acting on each qubit to be stacked, immediately arriving at $\sim \sqrt{4n}$ as the lower-bound for a circuit to even in principle generate $\lceil n/2 \rceil$ ebits of entanglement. This lower bound is just below the gate-depths of interest which were successfully simulated in the literature (see Figure \ref{fig:gate-count} and the Discussion). Under our coarse grained definition, we don't increase entanglement by the addition of local gates. Adding local gates is possible before and after each two qubit gate, again multiplying the gate depth by a factor of four, yielding $\sim 8\sqrt{n}$. 
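The entanglement bookkeeping above can be restated as a pair of helper functions (a sketch; the function names are ours):

```python
import math

def max_ebits(n, g):
    """A depth-g circuit on n qubits supports at most min(ceil(n/2), g) ebits."""
    return min(math.ceil(n / 2), g)

def max_ebit_depth_window(n):
    """Rough two-qubit gate-depth window [sqrt(4n), 8 sqrt(n)] for a circuit
    on a sqrt(n) x sqrt(n) grid to reach the maximal ceil(n/2) ebits."""
    return math.sqrt(4 * n), 8 * math.sqrt(n)

assert max_ebits(49, 10) == 10        # shallow circuits are depth-limited
assert max_ebits(49, 100) == 25       # deep circuits are half-system limited
assert max_ebit_depth_window(49) == (14.0, 56.0)
```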
The derived interval domain $[\sqrt{4n}, 8\sqrt{n}]$ casts a narrow window enclosing reported data---save one data-point which is inside the 47 qubit strong simulation threshold. For further generalizations see \ref{appendix:lb}. \section*{Results} Our central result is the entanglement coarse-graining derived above: a $g$-depth quantum circuit on $n$ qubits supports at most $\min\{\lceil n/2 \rceil, g\}$ ebits of entanglement, and a two-qubit gate-depth inside the window $[\sqrt{4n}, 8\sqrt{n}]$ is required for a circuit on a grid to even in principle reach the maximal $\lceil n/2 \rceil$ ebits. \section*{Discussion} Figure \ref{fig:gate-count} presents a summary plot of our findings. Data-points included follow the prescription of quantum circuit simulation introduced by Google \cite{2016arXiv160800263B}. The gate set used in this prescription comprises: $\mathrm{CZ}$, $\mathrm{T}$, $\sqrt{\mathrm{X}}$, $\sqrt{\mathrm{Y}}$, $\mathrm{H}$. 
While some of these simulations (e.g.~those done on the Sunway TaihuLight supercomputer \cite{2018arXiv180404797L}) involve the calculation of the amplitudes of all output bitstrings (all $2^{46}$ bit strings, in the case of Sunway TaihuLight), others, such as Alibaba's \cite{2017arXiv171005867P}, involve only the calculation of a single amplitude. The data points were obtained from different simulations reported recently \cite{2016arXiv160800263B, 2017arXiv171205384B, 2018arXiv180501450C, 2018arXiv180404797L, 2017arXiv171005867P, 2018arXiv180206952C, Haner:2017:PSQ:3126908.3126947}. It is interesting to note that the reported numerical simulations fall inside the interval (pink online) depicted as the $n/2$ ebit window. Such circuits are thought to be the most difficult low-depth circuits to simulate classically. We have also included a heat map with an estimation of the running time based on state-of-the-art algorithms from Alibaba \cite{2017arXiv171005867P}. To estimate the running time, we made use of the following upper bound by Markov and Shi \cite{markov2008simulating}: any $\alpha$-local interacting quantum circuit of size $M$ and depth $g$ can be strongly simulated in time $t(M,g) = 10^{-17} \cdot M^{O(1)} \exp[O(\alpha g)]$, where a factor of $10^{-17}$ has been included so that the running time of the simulation is in units of seconds. For this factor, we assumed that a classical computer is capable of performing $10^{17}$ floating-point operations per second. In the case of our tensor network $G$ representing a $\sqrt{n} \times \sqrt{n}$ grid, $\alpha= \sqrt{n}$, since the quantum circuit that $G$ represents is $\sqrt{n}$-local interacting. We also estimate the total number of gates naively as the number of couplers in the grid multiplied by the gate depth: $M(n,g) = 2(\sqrt{n}-1)\sqrt{n}g$.
Hence, we consider the equation \begin{equation} \label{eq:markov_fit} \begin{split} t(n,g) &= 10^{-17} \cdot M(n,g)^{a_1} 2^{a_2 g \sqrt{n}} \\ &= 10^{-17} \cdot [2(\sqrt{n}-1)\sqrt{n}g]^{a_1} 2^{a_2 g \sqrt{n}} \end{split} \end{equation} and fit it to the numerical results of Alibaba \cite{2017arXiv171005867P}. From our fit, we obtained the parameters $a_1 = 4.36063901$ and $a_2 = 0.04315488$. With this fit we are able to give an estimation of the gate depth that can be simulated in $1$ month, $1$ year, $10$ years and $100$ years. An important remark here is that the Alibaba simulations calculate only a single amplitude out of an exponentially large number of possible bit strings. The algorithm is a modification of that of Boixo et al.~\cite{2017arXiv171205384B}, based on using treewidth to measure contraction complexity, as shown by Markov and Shi. Thus, this estimation should be considered an approximation. Lastly, we include a pair of vertical lines corresponding to the quantum computers built by IBM and Google with 50 and 72 qubits, respectively. The estimations of achievable gate depths on classical computers for a given threshold are shown in Table \ref{table:runtimes}. While preparing this manuscript, work by Markov et al.~appeared \cite{2018arXiv190710749} in which the prescription for how gates are applied was changed. One of these changes is the inclusion of the $\mathrm{iSwap}$ gate. They estimate that the gate depth that can be simulated in a given runtime is roughly halved for state-of-the-art algorithms under this new benchmark. We do not include simulations from this latest work in Figure \ref{fig:gate-count}, so that only algorithms tested on the same benchmark are compared. Considering this, we show how the estimations are modified in Table \ref{table:runtimes_modbench}. In conclusion, we observe a nonlinear tradeoff between the number of qubits and gate depth, with the fleeting resource (with exponential dependency) being the gate depth.
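The exponential dependence on depth can be made concrete by evaluating the fitted model directly. The sketch below is ours (not the authors' fitting code); it plugs the reported parameters $a_1$ and $a_2$ into Eq.~(\ref{eq:markov_fit}), and the $(n,g)$ values chosen are illustrative, not data points from the Alibaba benchmark.

```python
import math

# A sketch (ours, not the authors' fitting code) that plugs the reported fit
# parameters a_1 and a_2 into the runtime model t(n, g) above. The (n, g)
# values used below are illustrative, not benchmark data points.

A1, A2 = 4.36063901, 0.04315488

def gate_count(n, g):
    """Naive gate count M(n, g): couplers of a sqrt(n) x sqrt(n) grid times depth."""
    s = math.sqrt(n)
    return 2 * (s - 1) * s * g

def runtime_seconds(n, g):
    """Estimated strong-simulation time t(n, g) on a 10^17 flop/s machine."""
    return 1e-17 * gate_count(n, g) ** A1 * 2 ** (A2 * g * math.sqrt(n))

# The 2^(a_2 * g * sqrt(n)) factor dominates: at fixed qubit count, the
# estimate grows exponentially with depth -- the "fleeting resource" above.
for g in (20, 40, 80):
    print(f"n=49, depth={g}: ~{runtime_seconds(49, g):.3g} s")
```

Doubling the depth at fixed $n$ multiplies the estimate by $2^{a_2 g\sqrt{n}}$ on top of the polynomial factor, which is why the achievable depth, not the qubit count, saturates first.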
We hence remark that quantum supremacy demonstrations (assuming completely random circuit families) should involve circuits of depth at least 50---if not more---for 80 to 150 qubits. Interestingly, circuits of such depth would lie inside the coarse-grained entanglement window derived in this study. \section*{Acknowledgements} We thank Igor Markov and Mark Saffman for useful comments and Sergio Boixo for insightful discussions regarding our work. \begin{figure*} \centering \includegraphics[width=\textwidth]{1.pdf} \caption{Summary of findings. Qubits vs.~gate depths superimposed on runtimes. In pink (online), the coarse-grained entanglement interval containing existing numerical data (depicted as the $n/2$ ebit bound). [Data points from Google \cite{2016arXiv160800263B, 2017arXiv171205384B}, Alibaba \cite{2018arXiv180501450C}, Sunway TaihuLight \cite{2018arXiv180404797L}, IBM \cite{2017arXiv171005867P}, USTC \cite{2018arXiv180206952C}, ETH \cite{Haner:2017:PSQ:3126908.3126947}].} \label{fig:gate-count} \end{figure*} \bibliographystyle{naturemag} \onecolumngrid
\section{\textbf{Introduction}} In the study of sequence spaces and, especially, in the construction of new sequence spaces, the matrix domain $\mu _{A}$ of an infinite matrix $A$, defined by $\mu _{A}=\{x=(x_{k})\in w:Ax\in \mu \}$, is generally used. In most cases, the new sequence space $\mu _{A}$ generated by a sequence space $\mu $ is an expansion or a contraction of the original space $\mu $; in some cases, these spaces may overlap. Indeed, one can easily see that the inclusion $\mu _{S}\subset \mu $ strictly holds for $\mu \in \{\ell _{\infty },c,c_{0}\}$. Similarly, one can deduce that the inclusion $\mu \subset \mu _{\Delta }$ also strictly holds for $\mu \in \{\ell _{\infty },c,c_{0}\}$, where $S$ and $\Delta $ are matrix operators. Recently, in \cite{MursaleenNoman}, Mursaleen and Noman constructed new sequence spaces by using a matrix domain over a normed space. They also studied some topological properties and inclusion relations of these spaces. It is well known that paranormed spaces have more general properties than normed spaces. In this work, we generalize the normed sequence spaces defined by Mursaleen and Noman \cite{MursaleenNoman} to paranormed spaces. Furthermore, we introduce a new sequence space over a paranormed space. Next, we investigate the topological properties and inclusion relations of this space. Finally, we give certain matrix transformations on this sequence space and determine its duals. In the literature, many authors have defined new sequence spaces by using matrix domains over paranormed spaces; some of them are as follows.
For example, Choudhary and Mishra \cite{ChMis} defined the sequence space $\ell \overline{\left( p\right)}$ of sequences whose $S$-transform is in $\ell \left( p\right)$; Basar and Altay \cite{PBasarAltay1,PBasarAltay2} defined the spaces $\lambda \left( u,v;p\right) =\left\{ \lambda \left( p\right) \right\}_{G}$ for $\lambda \in \left\{ \ell _{\infty },c,c_{0}\right\}$ and $\ell \left( u,v;p\right) =\left\{ \ell \left( p\right) \right\}_{G}$, respectively; and Altay and Basar \cite{AltayBasar1} defined the spaces $r_{\infty }^{t}\left( p\right), r_{c}^{t}\left( p\right), r_{0}^{t}\left( p\right)$. In \cite{PKarakayaPolat}, Karakaya and Polat defined and examined the spaces $e_{0}^{r}\left( \Delta ;p\right), e^{r}\left( \Delta ;p\right), e_{\infty }^{r}\left( \Delta ;p\right)$, and Karakaya, Noman and Polat \cite{PKarakayaNH} have recently introduced and studied the spaces $\ell _{\infty }\left( \lambda ,p\right)$, $c\left( \lambda ,p\right) $, $c_{0}\left( \lambda, p\right)$; here $R^{t}$ and $E^{r}$ denote the Riesz and the Euler means, respectively, $\Delta $ denotes the band matrix of the difference operator, and $\Lambda$ and $G$ are defined in \cite{MursaleenNoman} and \cite{Makowskysavas}, respectively. By $w$, we denote the space of all real-valued sequences. Any vector subspace of $w$ is called a sequence space. By $\ell _{1}$, $cs$ and $bs$, we denote the spaces of all absolutely convergent, convergent and bounded series, respectively.
A linear topological space $X$ over the real field $\mathbb{R}$ is said to be a paranormed space if there is a subadditive function $h:X\rightarrow \mathbb{R}$ such that $h\left( \theta \right) =0$, $h\left( x\right) =h\left( -x\right) $ and scalar multiplication is continuous, i.e., $\left\vert \alpha _{n}-\alpha \right\vert \rightarrow 0$ and $h\left( x_{n}-x\right) \rightarrow 0$ imply $h\left( \alpha _{n}x_{n}-\alpha x\right) \rightarrow 0$ for all $\alpha $ in $\mathbb{R}$ and $x$ in $X$, where $\theta $ is the zero of the linear space $X$. Let $\mu ,\nu $ be any two sequence spaces and let $A=\left( a_{nk}\right) $ be an infinite matrix of real numbers $a_{nk}$, where $n,k\in \mathbb{N}$ with $\mathbb{N}=\left\{ 0,1,2,\ldots\right\} $. Then we say that $A$ defines a matrix mapping from $\mu $ into $\nu $, written $A:\mu \rightarrow \nu $, if for every sequence $x=\left( x_{k}\right) \in \mu $ the sequence $Ax=\left( A_{n}\left( x\right) \right) $, the $A$-transform of $x$, is in $\nu $, where
\begin{equation}
A_{n}\left( x\right) =\sum\limits_{k}a_{nk}x_{k}\text{ \ \ }\left( n\in \mathbb{N}\right) \text{.}  \label{1.1}
\end{equation}
By $\left( \mu ,\nu \right) $, we denote the class of all matrices $A$ such that $A:\mu \rightarrow \nu $. Thus, $A\in \left( \mu ,\nu \right) $ if and only if the series on the right-hand side of $\left( 1.1\right) $ converges for each $n\in \mathbb{N}$ and every $x\in \mu $, and $Ax\in \nu $ for all $x\in \mu $. A sequence $x$ is said to be $A$-summable to $a$ if $Ax$ converges to $a$, which is called the $A$-limit of $x$. Assume hereafter that $\left( p_{k}\right) $ and $\left( q_{k}\right) $ are bounded sequences of strictly positive real numbers with $\sup p_{k}=H$ and $M=\max \left( 1,H\right) $; also let $\grave{p}_{k}=\frac{p_{k}}{p_{k}-1}$ for $1<p_{k}<\infty $ and for all $k\in \mathbb{N}$.
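The $A$-transform in $(1.1)$ can be illustrated with a finite truncation. The sketch below is our own example (not from this paper), using the classical Cesàro matrix $C_{1}$, $a_{nk}=\frac{1}{n+1}$ for $k\leq n$ and $0$ otherwise, as the matrix $A$:

```python
# An illustrative finite truncation (ours, not from the paper) of the matrix
# transform A_n(x) = sum_k a_nk x_k, using the Cesaro matrix C_1 as A:
# a_nk = 1/(n+1) for k <= n, and 0 otherwise.

def cesaro_row(n, length):
    """Row n of the Cesaro matrix C_1, truncated to `length` columns."""
    return [1.0 / (n + 1) if k <= n else 0.0 for k in range(length)]

def a_transform_term(row, x):
    """One term A_n(x) of the A-transform, for a truncated row."""
    return sum(a * xk for a, xk in zip(row, x))

x = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]   # bounded but not convergent
Ax = [a_transform_term(cesaro_row(n, len(x)), x) for n in range(len(x))]
print(Ax)
# The partial C_1-means drift toward 1/2, so x is C_1-summable to 1/2
# even though x itself does not converge.
```

This is exactly the sense in which a non-convergent sequence can be $A$-summable: $Ax$ converges (here to $1/2$) while $x$ does not.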
The linear space $\ell (p)$ was defined by Maddox \cite{Maddox1967} as follows:
\begin{equation*}
\ell (p)=\left\{ x=\left( x_{n}\right) \in w:\sum\limits_{n=0}^{\infty }\left\vert x_{n}\right\vert ^{p_{n}}<\infty \right\}
\end{equation*}
which is a complete space paranormed by
\begin{equation*}
h\left( x\right) =\left( \sum\limits_{n=0}^{\infty }\left\vert x_{n}\right\vert ^{p_{n}}\right) ^{\frac{1}{M}}.
\end{equation*}
Throughout this work, by $\digamma $ and $N_{k}$, respectively, we shall denote the collection of all subsets of $\mathbb{N}$ and the set of all $n\in \mathbb{N}$ such that $n\geq k$; also $e=\left( 1,1,1,\ldots\right) $. \section{\textbf{The sequence space} $\ell \left(\protect\lambda, p\right)$} In this section, we define the sequence space $\ell \left( \lambda ,p\right) $ and prove that it is a complete paranormed linear space with respect to its paranorm. In \cite{MursaleenNoman}, Mursaleen and Noman defined the matrix $\Lambda =\left( \lambda _{nk}\right) _{n,k=0}^{\infty }$ by
\begin{equation}
\lambda _{nk}=\left\{
\begin{array}{cc}
\frac{\lambda _{k}-\lambda _{k-1}}{\lambda _{n}} & ,\left( 0\leq k\leq n\right) \\
0 & ,\left( k>n\right)
\end{array}
\right.  \label{2.1}
\end{equation}
where $\lambda =\left( \lambda _{k}\right) _{k=0}^{\infty }$ is a strictly increasing sequence of positive reals tending to $\infty $, that is, $0<\lambda _{0}<\lambda _{1}<\cdots$ and $\lambda _{k}\rightarrow \infty $ as $k\rightarrow \infty $. Now, by using $(2.1)$ we define a new sequence space as follows:
\begin{equation*}
\ell \left( \lambda ,p\right) =\left\{ x=\left( x_{k}\right) \in w:\sum\limits_{n=0}^{\infty }\left\vert \frac{1}{\lambda _{n}}\sum\limits_{k=0}^{n}\left( \lambda _{k}-\lambda _{k-1}\right) x_{k}\right\vert ^{p_{n}}<\infty \right\}.
\end{equation*}
For any $x=(x_{n})\in w$, we define the sequence $y=(y_{n})$, which will frequently be used, as the $\Lambda $-transform of $x$, i.e., $y=\Lambda (x)$, and hence
\begin{equation}
y_{n}=\sum_{k=0}^{n}\left( \frac{\lambda _{k}-\lambda _{k-1}}{\lambda _{n}}\right) x_{k}~~~~(n\in \mathbb{N}).  \label{2.2}
\end{equation}
We may now begin with the following theorem.
\begin{theorem}
The sequence space $\ell \left( \lambda ,p\right) $ is a complete linear metric space with respect to the paranorm defined by
\begin{equation*}
h\left( x\right) =\left( \sum\limits_{n=0}^{\infty }\left\vert \frac{1}{\lambda _{n}}\sum\limits_{k=0}^{n}\left( \lambda _{k}-\lambda _{k-1}\right) x_{k}\right\vert ^{p_{n}}\right) ^{\frac{1}{M}}.
\end{equation*}
\end{theorem}
\begin{proof}
The linearity of $\ell \left( \lambda ,p\right) $ with respect to coordinatewise addition and scalar multiplication follows from the following inequalities, which are satisfied for $x,t\in \ell \left( \lambda ,p\right) $ (see \cite{Maddoxelmt1988}):
\begin{equation}
\left( \sum\limits_{n=0}^{\infty }\left\vert \frac{1}{\lambda _{n}}\sum\limits_{k=0}^{n}\left( \lambda _{k}-\lambda _{k-1}\right) \left( x_{k}+t_{k}\right) \right\vert ^{p_{n}}\right) ^{\tfrac{1}{M}}\leq \left( \sum\limits_{n=0}^{\infty }\left\vert \frac{1}{\lambda _{n}}\sum\limits_{k=0}^{n}\left( \lambda _{k}-\lambda _{k-1}\right) x_{k}\right\vert ^{p_{n}}\right) ^{\tfrac{1}{M}}+\left( \sum\limits_{n=0}^{\infty }\left\vert \frac{1}{\lambda _{n}}\sum\limits_{k=0}^{n}\left( \lambda _{k}-\lambda _{k-1}\right) t_{k}\right\vert ^{p_{n}}\right) ^{\tfrac{1}{M}}  \label{2.3}
\end{equation}
and, for any $\alpha \in \mathbb{R}$ (see \cite{I.J.Maddox1968}),
\begin{equation}
\left\vert \alpha \right\vert ^{p_{k}}\leq \max \left\{ 1,\left\vert \alpha \right\vert ^{M}\right\} .  \label{2.4}
\end{equation}
It is clear that $h\left( \theta \right) =0$ and $h\left( x\right) =h\left( -x\right) $ for all $x\in \ell \left( \lambda ,p\right) $. Again, the inequalities (2.3) and (2.4) yield the subadditivity of $h$ and hence $h\left( \alpha x\right) \leq \max \left\{ 1,\left\vert \alpha \right\vert ^{M}\right\} h\left( x\right) $. Let $\left\{ x^{m}\right\} $ be any sequence of points $x^{m}\in \ell \left( \lambda ,p\right) $ such that $h\left( x^{m}-x\right) \rightarrow 0$, and let $\left( \alpha _{m}\right) $ be any sequence of scalars such that $\alpha _{m}\rightarrow \alpha $. Then, since the inequality
\begin{equation*}
h\left( x^{m}\right) \leq h\left( x\right) +h\left( x^{m}-x\right)
\end{equation*}
holds by the subadditivity of $h$, the sequence $\left\{ h\left( x^{m}\right) \right\} $ is bounded, and we thus have
\begin{eqnarray*}
h\left( \alpha _{m}x^{m}-\alpha x\right) &=&\left( \sum\limits_{n=0}^{\infty }\left\vert \frac{1}{\lambda _{n}}\sum\limits_{k=0}^{n}\left( \lambda _{k}-\lambda _{k-1}\right) \left( \alpha _{m}x_{k}^{m}-\alpha x_{k}\right) \right\vert ^{p_{n}}\right) ^{\tfrac{1}{M}} \\
&\leq &\left\vert \alpha _{m}-\alpha \right\vert ^{\frac{1}{M}}h\left( x^{m}\right) +\left\vert \alpha \right\vert ^{\frac{1}{M}}h\left( x^{m}-x\right)
\end{eqnarray*}
which tends to zero as $m\rightarrow \infty $. Therefore, scalar multiplication is continuous, and hence $h$ is a paranorm on the space $\ell \left( \lambda ,p\right) $. It remains to prove the completeness of the space $\ell \left( \lambda ,p\right) $. Let $\left\{ x^{j}\right\} $ be any Cauchy sequence in the space $\ell \left( \lambda ,p\right) $, where $x^{j}=\left\{ x_{0}^{\left( j\right) },x_{1}^{\left( j\right) },x_{2}^{\left( j\right) },\ldots\right\} $. Then, for a given $\varepsilon >0$, there exists a positive integer $m_{0}\left( \varepsilon \right) $ such that $h\left( x^{j}-x^{i}\right) <\frac{\varepsilon }{2}$ for all $i,j>m_{0}\left( \varepsilon \right) $.
Using the definition of $h$, we obtain for each fixed $n\in \mathbb{N}$ that
\begin{equation}
\left\vert \Lambda _{n}\left( x^{j}\right) -\Lambda _{n}\left( x^{i}\right) \right\vert \leq \left( \sum\limits_{n=0}^{\infty }\left\vert \Lambda _{n}\left( x^{j}\right) -\Lambda _{n}\left( x^{i}\right) \right\vert ^{p_{n}}\right) ^{\tfrac{1}{M}}<\frac{\varepsilon }{2}  \label{2.5}
\end{equation}
for every $i,j>m_{0}\left( \varepsilon \right) $, which leads us to the fact that $\left\{ \Lambda _{n}\left( x^{0}\right) ,\Lambda _{n}\left( x^{1}\right) ,\Lambda _{n}\left( x^{2}\right) ,\ldots\right\} $ is a Cauchy sequence of real numbers for every fixed $n\in \mathbb{N}$. Since $\mathbb{R}$ is complete, it converges, say $\Lambda _{n}\left( x^{i}\right) \rightarrow \Lambda _{n}\left( x\right) $ as $i\rightarrow \infty $. Using these infinitely many limits, we may write the sequence $\left\{ \Lambda _{0}\left( x\right) ,\Lambda _{1}\left( x\right) ,\Lambda _{2}\left( x\right) ,\ldots\right\} $. From $(2.5)$, as $i\rightarrow \infty $, we have
\begin{equation*}
\left\vert \Lambda _{n}\left( x^{j}\right) -\Lambda _{n}\left( x\right) \right\vert <\frac{\varepsilon }{2}\qquad \left( j\geq m_{0}\left( \varepsilon \right) \right)
\end{equation*}
for every fixed $n\in \mathbb{N}$. Since $x^{j}=\left( x_{k}^{\left( j\right) }\right) \in \ell \left( \lambda ,p\right) $ for each $j\in \mathbb{N}$, there exists $m_{0}\left( \varepsilon \right) \in \mathbb{N}$ such that $\left( \sum\limits_{n=0}^{\infty }\left\vert \Lambda _{n}\left( x^{j}\right) \right\vert ^{p_{n}}\right) ^{\tfrac{1}{M}}<\frac{\varepsilon }{2}$ for every $j\geq m_{0}\left( \varepsilon \right) $ and for each $n\in \mathbb{N}$.
By taking a fixed $j\geq m_{0}\left( \varepsilon \right) $, we obtain by $(2.5)$ that
\begin{equation*}
\left( \sum\limits_{n=0}^{\infty }\left\vert \Lambda _{n}\left( x\right) \right\vert ^{p_{n}}\right) ^{\tfrac{1}{M}}\leq \left( \sum\limits_{n=0}^{\infty }\left\vert \Lambda _{n}\left( x^{j}\right) -\Lambda _{n}\left( x\right) \right\vert ^{p_{n}}\right) ^{\tfrac{1}{M}}+\left( \sum\limits_{n=0}^{\infty }\left\vert \Lambda _{n}\left( x^{j}\right) \right\vert ^{p_{n}}\right) ^{\tfrac{1}{M}}<\infty .
\end{equation*}
Hence, we get $x\in \ell \left( \lambda ,p\right) $, so the space $\ell \left( \lambda ,p\right) $ is complete.
\end{proof}
\begin{theorem}
The sequence space $\ell \left( \lambda ,p\right) $ of non-absolute type is linearly isomorphic to the space $\ell \left( p\right) $, where $0<p_{k}\leq H<\infty $.
\end{theorem}
\begin{proof}
To prove the theorem, we show the existence of a linear bijection between the spaces $\ell \left( \lambda ,p\right) $ and $\ell \left( p\right) $. With the notation of $(2.2)$, define the transformation $T$ from $\ell \left( \lambda ,p\right) $ to $\ell \left( p\right) $ by $x\rightarrow y=Tx$. The linearity of $T$ is trivial. Furthermore, it is obvious that $x=\theta $ whenever $Tx=\theta $, and hence $T$ is injective. Let $y\in \ell \left( p\right) $ and define the sequence $x=\left( x_{n}\right) $ by
\begin{equation*}
x_{n}\left( \lambda \right) =\sum\limits_{k=n-1}^{n}\left( \left( -1\right) ^{n-k}\frac{\lambda _{k}}{\lambda _{n}-\lambda _{n-1}}\right) y_{k}\qquad \left( n\in \mathbb{N}\right) .
\end{equation*}
Then, we have
\begin{equation*}
h_{\ell \left( \lambda ,p\right) }\left( x\right) =\left( \sum\limits_{n=0}^{\infty }\left\vert \frac{1}{\lambda _{n}}\sum\limits_{k=0}^{n}\left( \lambda _{k}-\lambda _{k-1}\right) x_{k}\right\vert ^{p_{n}}\right) ^{\tfrac{1}{M}}=\left( \sum\limits_{n=0}^{\infty }\left\vert y_{n}\right\vert ^{p_{n}}\right) ^{\tfrac{1}{M}}=h_{\ell \left( p\right) }\left( y\right) .
\end{equation*}
Thus, $x\in \ell \left( \lambda ,p\right) $, and consequently $T$ is surjective. Hence, $T$ is a linear bijection, which shows that the spaces $\ell \left( \lambda ,p\right) $ and $\ell \left( p\right) $ are linearly isomorphic. This completes the proof.
\end{proof}
\section{\textbf{Some inclusion relations}}
In this section, we give some inclusion relations concerning the space $\ell \left( \lambda ,p\right) $. Before giving the theorems of this section, we state a lemma from \cite{MursaleenNoman}.
\begin{lemma}
For any sequence $x=\left( x_{k}\right) \in w$, the equalities
\begin{equation}
S_{n}\left( x\right) =x_{n}-\Lambda _{n}\left( x\right)  \label{3.1}
\end{equation}
and
\begin{equation*}
S_{n}\left( x\right) =\frac{\lambda _{n-1}}{\lambda _{n}-\lambda _{n-1}}\left[ \Lambda _{n}\left( x\right) -\Lambda _{n-1}\left( x\right) \right]
\end{equation*}
hold, where the sequence $S\left( x\right) =\left\{ S_{n}\left( x\right) \right\} $ is defined by
\begin{equation*}
S_{0}\left( x\right) =0\text{ and }S_{n}\left( x\right) =\frac{1}{\lambda _{n}}\sum\limits_{k=1}^{n}\lambda _{k-1}\left( x_{k}-x_{k-1}\right) \text{ \ \ }\left( n\geq 1\right).
\end{equation*}
\end{lemma}
\begin{theorem}
The inclusion $\ell \left( \lambda ,p\right) \subset c_{0}\left( \lambda ,p\right) $ strictly holds.
\end{theorem}
\begin{proof}
Let $x\in \ell \left( \lambda ,p\right) $. Then $\Lambda x\in \ell \left( p\right) $. By the definition of the space $\ell \left( p\right) $, $\Lambda _{n}\left( x\right) \rightarrow 0$ as $n\rightarrow \infty $, so we obtain $\Lambda x\in c_{0}$ and hence $x\in c_{0}\left( \lambda ,p\right) $. To show the strictness of the inclusion, taking $x_{n}^{\prime }=\frac{1}{n+1}$ and $p_{n}=1+\frac{1}{n+1}$, we consider the sequence $|x|^{p}=\left( |x_{k}|^{p_{k}}\right) _{k=0}^{\infty }$. Then it is easy to see that $\Lambda \left( |x|^{p}\right) \in c_{0}\left( p\right) $. Since $c_{0}\left( p\right) \subset c_{0}\left( \lambda ,p\right) $, we have $x\in c_{0}\left( \lambda ,p\right) $ (see \cite{PKarakayaNH}). Hence
\begin{equation*}
\left\vert \Lambda _{n}\left( x\right) \right\vert \geq \frac{1}{\left( n+1\right) ^{\frac{1}{1+\frac{1}{n+1}}}} .
\end{equation*}
This shows that $\Lambda x\notin \ell \left( p\right) $ and hence $x\notin \ell \left( \lambda ,p\right) $. Thus the sequence $x$ is in $c_{0}\left( \lambda ,p\right) $ but not in $\ell \left( \lambda ,p\right) $.
\end{proof}
\begin{theorem}
The inclusion $\ell \left( \lambda ,p\right) \subset \ell \left( p\right) $ holds if and only if $S\left( x\right) \in \ell \left( p\right) $ for every sequence $x\in \ell \left( \lambda ,p\right) $, where $1\leq p_{k}\leq H$.
\end{theorem}
\begin{proof}
We suppose that $\ell \left( \lambda ,p\right) \subset \ell \left( p\right) $ holds and take any $x\in \ell \left( \lambda ,p\right) $. Then $x\in \ell \left( p\right) $ by hypothesis.
Thus we obtain from $\left( 3.1\right) $ that
\begin{equation*}
\left[ h\left( S\left( x\right) \right) \right] _{\ell \left( p\right) }\leq \left[ h\left( x\right) \right] _{\ell \left( p\right) }+\left[ h\left( \Lambda x\right) \right] _{\ell \left( p\right) }=\left[ h\left( x\right) \right] _{\ell \left( p\right) }+\left[ h\left( x\right) \right] _{\ell \left( \lambda ,p\right) }
\end{equation*}
which yields that $S\left( x\right) \in \ell \left( p\right) $. Conversely, let $x\in \ell \left( \lambda ,p\right) $ be given. Then we have by the hypothesis that $S\left( x\right) \in \ell \left( p\right) $. Again by using $\left( 3.1\right) $,
\begin{equation*}
\left[ h\left( x\right) \right] _{\ell \left( p\right) }\leq \left[ h\left( S\left( x\right) \right) \right] _{\ell \left( p\right) }+\left[ h\left( \Lambda x\right) \right] _{\ell \left( p\right) }=\left[ h\left( S\left( x\right) \right) \right] _{\ell \left( p\right) }+\left[ h\left( x\right) \right] _{\ell \left( \lambda ,p\right) }
\end{equation*}
which shows that $x\in \ell \left( p\right) $. Hence the inclusion $\ell \left( \lambda ,p\right) \subset \ell \left( p\right) $ holds. This completes the proof.
\end{proof}
\begin{theorem}
$\left( i\right) $ If $p_{n}>1$ for all $n\in \mathbb{N}$, then the inclusion $\ell _{p}^{\lambda }\subset \ell \left( \lambda ,p\right) $ holds.
$\left( ii\right) $ If $p_{n}<1$ for all $n\in \mathbb{N}$, then the inclusion $\ell \left( \lambda ,p\right) \subset \ell _{p}^{\lambda }$ holds.
\end{theorem}
\begin{proof}
$\left( i\right) $ Let $x\in \ell _{p}^{\lambda }$. It is clear that $\Lambda \left( x\right) \in \ell _{p}$. One can find $m\in \mathbb{N}$ such that $\left\vert \Lambda _{n}\left( x\right) \right\vert <1$ for all $n\geq m$. Under the condition $\left( i\right) $, we have $\left\vert \Lambda _{n}\left( x\right) \right\vert ^{p_{n}}<\left\vert \Lambda _{n}\left( x\right) \right\vert $ for all $n\geq m$. Hence we get $x\in \ell \left( \lambda ,p\right) $.
$\left( ii\right) $ Suppose that $x\in \ell \left( \lambda ,p\right) $. Then $\Lambda \left( x\right) \in \ell \left( p\right) $ and there exists $m\in \mathbb{N}$ such that $\left\vert \Lambda _{n}\left( x\right) \right\vert ^{p_{n}}<1$ for all $n\geq m$. To obtain the result, we consider the following inequality:
\begin{equation*}
\left\vert \Lambda _{n}\left( x\right) \right\vert =\left( \left\vert \Lambda _{n}\left( x\right) \right\vert ^{p_{n}}\right) ^{\frac{1}{p_{n}}}<\left\vert \Lambda _{n}\left( x\right) \right\vert ^{p_{n}}
\end{equation*}
for all $n\geq m$. So, we get $x\in \ell _{p}^{\lambda }$.
\end{proof}
\section{\textbf{Some matrix transformations and duals of the space }$\ell \left( \protect\lambda ,p\right) $}
In this section, we give the theorems determining the $\alpha $-, $\beta $- and $\gamma $-duals of the space $\ell \left( \lambda ,p\right) $. In proving the theorems, we apply the technique used in \cite{PBasarAltay1}. We also give some matrix transformations from the space $\ell \left( \lambda ,p\right) $ into the paranormed spaces $\ell \left( q\right) $ by using the matrix given in \cite{MursaleenNoman}. For the sequence spaces $\mu $ and $\nu $, the set $S\left( \mu ,\nu \right) $ defined by
\begin{equation*}
S\left( \mu ,\nu \right) =\left\{ a=\left( a_{k}\right) \in w:ax\in \nu \text{ for all }x\in \mu \right\}
\end{equation*}
is called the multiplier space of $\mu $ and $\nu $.
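The dual computations below repeatedly pass between $x$ and its $\Lambda$-transform $y$. As a quick numerical illustration (our own sketch, for the hypothetical concrete choice $\lambda_{k}=k+1$ with $\lambda_{-1}=0$, under which $\Lambda$ is the Cesàro mean), the inversion formula from the proof of Theorem 2 can be checked to recover $x$ from $y=\Lambda(x)$:

```python
# A numerical sanity check (ours), with the concrete choice lambda_k = k + 1
# (lambda_{-1} = 0). Lambda from (2.1) becomes the Cesaro mean, and the
# two-term inverse used in the proof of Theorem 2,
#   x_n = sum_{k=n-1}^{n} (-1)^{n-k} lambda_k / (lambda_n - lambda_{n-1}) y_k,
# recovers x from y = Lambda(x).

def lam(k):
    return 0.0 if k < 0 else float(k + 1)

def forward(x):
    """y_n = (1/lambda_n) * sum_{k=0}^{n} (lambda_k - lambda_{k-1}) * x_k."""
    return [sum((lam(k) - lam(k - 1)) * x[k] for k in range(n + 1)) / lam(n)
            for n in range(len(x))]

def inverse(y):
    """Invert the Lambda-transform via the two-term formula above."""
    x = []
    for n in range(len(y)):
        step = lam(n) - lam(n - 1)
        total = lam(n) / step * y[n]        # k = n term
        if n >= 1:
            total -= lam(n - 1) / step * y[n - 1]  # k = n - 1 term
        x.append(total)
    return x

x = [3.0, -1.0, 4.0, 1.0, -5.0, 9.0]
assert all(abs(a - b) < 1e-12 for a, b in zip(inverse(forward(x)), x))
print("Lambda round trip OK")
```

For this choice of $\lambda$ the inverse reduces to $x_{n}=(n+1)y_{n}-ny_{n-1}$, which is exactly the bidiagonal structure exploited by the matrix $D^{a}$ in Theorem 6.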
The $\alpha $-, $\beta $- and $\gamma $-duals of a sequence space $\mu $, which are respectively denoted by $\mu ^{\alpha }$, $\mu ^{\beta }$ and $\mu ^{\gamma }$, are defined by
\begin{equation*}
\mu ^{\alpha }=S\left( \mu ,\ell _{1}\right) ,\qquad \mu ^{\beta }=S\left( \mu ,cs\right) ,\qquad \mu ^{\gamma }=S\left( \mu ,bs\right) .
\end{equation*}
We may begin with the following theorem, which computes the $\alpha $-dual of the space $\ell \left( \lambda ,p\right) $.
\begin{theorem}
Let $K_{1}=\left\{ k\in \mathbb{N}:p_{k}\leq 1\right\} $ and $K_{2}=\left\{ k\in \mathbb{N}:p_{k}>1\right\} $. Define the matrix $D^{a}=\left( d_{nk}^{a}\right) $ by
\begin{equation}
d_{nk}^{a}=\left\{
\begin{array}{cc}
\left( -1\right) ^{n-k}\frac{\lambda _{k}}{\lambda _{n}-\lambda _{n-1}}a_{n} & ,\left( n-1\leq k\leq n\right) \\
0 & ,\left( 0\leq k<n-1\right) \text{ or }\left( k>n\right)
\end{array}
\right. .  \label{4.1}
\end{equation}
Then
\begin{equation*}
\ell _{K_{1}}^{\alpha }\left( \lambda ,p\right) =\left\{ a=\left( a_{n}\right) \in w:D^{a}\in \left( \ell \left( p\right) ;\ell _{\infty }\right) \right\}
\end{equation*}
and
\begin{equation*}
\ell _{K_{2}}^{\alpha }\left( \lambda ,p\right) =\left\{ a=\left( a_{n}\right) \in w:D^{a}\in \left( \ell \left( p\right) ;\ell _{1}\right) \right\} .
\end{equation*}
\end{theorem}
\begin{proof}
We consider the following equality:
\begin{equation}
a_{n}x_{n}=\sum\limits_{k=n-1}^{n}d_{nk}^{a}y_{k}=\left( D^{a}y\right) _{n}\text{ \ }\left( n\in \mathbb{N}\right)  \label{4.2}
\end{equation}
where $D^{a}=\left( d_{nk}^{a}\right) $ is defined by $\left( 4.1\right) $. From $\left( 4.2\right) $, it can be seen that $ax=\left( a_{n}x_{n}\right) \in \ell _{1}$ or $ax=\left( a_{n}x_{n}\right) \in \ell _{\infty }$ whenever $x\in \ell \left( \lambda ,p\right) $ if and only if $D^{a}y\in \ell _{1}$ or $D^{a}y\in \ell _{\infty }$ whenever $y\in \ell \left( p\right) $. This means that $a\in \ell _{K_{1}}^{\alpha }\left( \lambda ,p\right) $ or $a\in \ell _{K_{2}}^{\alpha }\left( \lambda ,p\right) $ if and only if $D^{a}\in \left( \ell \left( p\right) ;\ell _{\infty }\right) $ or $D^{a}\in \left( \ell \left( p\right) ;\ell _{1}\right) $, respectively. This completes the proof.
\end{proof}
The result of the theorem above corresponds to Theorem 5.1 $\left( 0,8,12\right) $ given in \cite{Gro-Erd1993}. As a direct consequence of Theorem 6, we have the following.
\begin{corollary}
Let $K^{\ast }=\left\{ k\in \mathbb{N}:n-1\leq k\leq n\right\} \cap K$ for $K\in \digamma $. Then
$\left( i\right) $ $\ell _{K_{1}}^{\alpha }\left( \lambda ,p\right) =\left\{ a=\left( a_{n}\right) \in w:\sup_{N}\sup_{k\in \mathbb{N}}\left\vert \sum\limits_{n\in K^{\ast }}d_{nk}^{a}\right\vert ^{p_{k}}<\infty \right\} ;$
$\left( ii\right) $ $\ell _{K_{2}}^{\alpha }\left( \lambda ,p\right) =\bigcup\limits_{M>1}\left\{ a=\left( a_{n}\right) \in w:\sup_{K\in \digamma }\sum\limits_{k}\left\vert \sum\limits_{n\in K^{\ast }}d_{nk}^{a}M^{-1}\right\vert ^{\grave{p}_{k}}<\infty \right\} .$
\end{corollary}
In the following theorem, we characterize the $\beta $- and $\gamma $-duals of the space $\ell \left( \lambda ,p\right) $.
\begin{theorem}
Let $K_{1}=\left\{ k\in \mathbb{N}:p_{k}\leq 1\right\} $ and $K_{2}=\left\{ k\in \mathbb{N}:p_{k}>1\right\} $, and let $\Delta x_{k}=x_{k}-x_{k+1}$. Define the sequences $s^{1}=\left( s_{k}^{1}\right) $, $s^{2}=\left( s_{k}^{2}\right) $ and the matrix $B^{a}=\left( b_{nk}^{a}\right) $ by
\begin{equation*}
s_{k}^{1}=\Delta \left( \frac{a_{k}}{\lambda _{k}-\lambda _{k-1}}\right) \lambda _{k},\qquad s_{k}^{2}=\frac{a_{k}\lambda _{k}}{\lambda _{k}-\lambda _{k-1}},
\end{equation*}
\begin{equation*}
b_{nk}^{a}=\left\{
\begin{array}{cc}
s_{k}^{1} & ,\left( 0\leq k\leq n-1\right) \\
s_{k}^{2} & ,\left( k=n\right) \\
0 & ,\left( k>n\right)
\end{array}
\right.
\end{equation*}
for all $n,k\in \mathbb{N}$. Then
\begin{equation}
\ell _{K_{1}}^{\beta }\left( \lambda ,p\right) =\ell _{K_{1}}^{\gamma }\left( \lambda ,p\right) =\left\{ a=\left( a_{n}\right) \in w:B^{a}\in \left( \ell \left( p\right) ;\ell _{\infty }\right) \right\} ;  \label{4.3}
\end{equation}
and
\begin{equation*}
\ell _{K_{2}}^{\beta }\left( \lambda ,p\right) =\ell _{K_{2}}^{\gamma }\left( \lambda ,p\right) =\left\{ a=\left( a_{n}\right) \in w:B^{a}\in \left( \ell \left( p\right) ;c\right) \right\} .
\end{equation*}
\end{theorem}
\begin{proof}
Consider the equality
\begin{equation}
\sum\limits_{k=0}^{n}a_{k}x_{k}=\sum\limits_{k=0}^{n-1}s_{k}^{1}y_{k}+s_{n}^{2}y_{n}=\left( B^{a}y\right) _{n}.  \label{4.4}
\end{equation}
From $\left( 4.4\right) $, it can be seen that $ax=\left( a_{n}x_{n}\right) \in cs$ or $bs$ whenever $x=\left( x_{n}\right) \in \ell \left( \lambda ,p\right) $ if and only if $B^{a}y\in c$ or $\ell _{\infty }$ whenever $y=\left( y_{k}\right) \in \ell \left( p\right) $. This means that $a=\left( a_{n}\right) \in \ell _{K_{1}}^{\beta }\left( \lambda ,p\right) $ or $\ell _{K_{2}}^{\beta }\left( \lambda ,p\right) $ (respectively $\ell _{K_{1}}^{\gamma }\left( \lambda ,p\right) $ or $\ell _{K_{2}}^{\gamma }\left( \lambda ,p\right) $) if and only if $B^{a}\in \left( \ell \left( p\right) ;\ell _{\infty }\right) $ or $B^{a}\in \left( \ell \left( p\right) ;c\right) $, respectively. This completes the proof.
\end{proof}
We can write the following corollary from Theorem 7.
\begin{corollary}
Let $\grave{p}_{k}=\frac{p_{k}}{p_{k}-1}$ for $1<p_{k}<\infty $ and for all $k\in \mathbb{N}$. Then
$\left( i\right) $ $\ell _{K_{1}}^{\beta }\left( \lambda ,p\right) =\ell _{K_{1}}^{\gamma }\left( \lambda ,p\right) =\left\{ a=\left( a_{n}\right) \in w:s^{1},s^{2}\in \ell _{\infty }\left( p\right) \right\} ;$
$\left( ii\right) $ $\ell _{K_{2}}^{\beta }\left( \lambda ,p\right) =\ell _{K_{2}}^{\gamma }\left( \lambda ,p\right) =\bigcup\limits_{M>1}\left\{ a=\left( a_{n}\right) \in w:s^{1}M^{-1},s^{2}M^{-1}\in \ell \left( \grave{p}\right) \cap \ell _{\infty }\left( \grave{p}\right) \right\} .$
\end{corollary}
After this step, we can give our theorems on the characterization of some matrix classes concerning the sequence space $\ell \left( \lambda ,p\right) $. Let $x,y\in w$ be connected by the relation $y=\Lambda (x)$.
For an infinite matrix $A=(a_{nk})$, we have by using (4.4) of Theorem 7 that \begin{equation} \sum_{k=0}^{m}a_{nk}x_{k}=\sum_{k=0}^{m-1}\tilde{a}_{nk}y_{k}+\frac{\lambda _{m}}{\lambda _{m}-\lambda _{m-1}}a_{nm}y_{m}\qquad (m,n\in \mathbb{N}), \label{4.5} \end{equation} where \begin{equation*} \tilde{a}_{nk}=\left( \frac{a_{nk}}{\lambda _{k}-\lambda _{k-1}}-\frac{a_{n,k+1}}{\lambda _{k+1}-\lambda _{k}}\right) \lambda _{k}\qquad (n,k\in \mathbb{N}). \end{equation*} The necessary and sufficient conditions characterizing the matrix mappings on the sequence space $\ell \left( p\right) $ of Maddox were determined by Grosse-Erdmann \cite{Gro-Erd1993}. Let $L$ and $M$ be natural numbers, define the sets $K_{1}=\left\{ k\in \mathbb{N}:p_{k}\leq 1\right\} $ and $K_{2}=\left\{ k\in \mathbb{N}:p_{k}>1\right\} $, and put $\grave{p}_{k}=\frac{p_{k}}{p_{k}-1}$ for $1<p_{k}<\infty $ and for all $k\in \mathbb{N}.$ Before giving the theorems, let us suppose that $\left( q_{n}\right) $ is a non-decreasing bounded sequence of positive real numbers and consider the following conditions:
\begin{align}
&\sup_{N}\sup_{k\in K_{1}}\Big\vert \sum_{n\in N}\tilde{a}_{nk}\Big\vert ^{q_{n}}<\infty , \tag{4.6}\\
&\exists M,\ \sup_{N}\sum_{k\in K_{2}}\Big\vert \sum_{n\in N}\tilde{a}_{nk}M^{-1}\Big\vert ^{\grave{p}_{k}}<\infty , \tag{4.7}\\
&\exists M,\ \sup_{k}\sum_{n}\big\vert \tilde{a}_{nk}M^{-\frac{1}{p_{k}}}\big\vert ^{q_{n}}<\infty , \tag{4.8}\\
&\lim_{n}\left\vert \tilde{a}_{nk}\right\vert ^{q_{n}}=0\quad \left( \forall k\in \mathbb{N}\right) , \tag{4.9}\\
&\forall L,\ \sup_{n}\sup_{k\in K_{1}}\big\vert \tilde{a}_{nk}L^{\frac{1}{q_{n}}}\big\vert ^{p_{k}}<\infty , \tag{4.10}\\
&\forall L,\ \exists M,\ \sup_{n}\sum_{k\in K_{2}}\big\vert \tilde{a}_{nk}L^{\frac{1}{q_{n}}}M^{-1}\big\vert ^{\grave{p}_{k}}<\infty , \tag{4.11}\\
&\sup_{n}\sup_{k\in K_{1}}\left\vert \tilde{a}_{nk}\right\vert ^{p_{k}}<\infty , \tag{4.12}\\
&\exists M,\ \sup_{n}\sum_{k\in K_{2}}\big\vert \tilde{a}_{nk}M^{-1}\big\vert ^{\grave{p}_{k}}<\infty , \tag{4.13}\\
&\forall L,\ \sup_{n}\sup_{k\in K_{1}}\left( \left\vert \tilde{a}_{nk}-\tilde{a}_{k}\right\vert L^{\frac{1}{q_{n}}}\right) ^{p_{k}}<\infty , \tag{4.14}\\
&\lim_{n}\left\vert \tilde{a}_{nk}-\tilde{a}_{k}\right\vert ^{q_{n}}=0\quad \left( \forall k\in \mathbb{N}\right) , \tag{4.15}\\
&\forall L,\ \exists M,\ \sup_{n}\sum_{k\in K_{2}}\left( \left\vert \tilde{a}_{nk}-\tilde{a}_{k}\right\vert L^{\frac{1}{q_{n}}}M^{-1}\right) ^{\grave{p}_{k}}<\infty , \tag{4.16}\\
&\exists L,\ \sup_{n}\sup_{k\in K_{1}}\big\vert \tilde{a}_{nk}L^{-\frac{1}{q_{n}}}\big\vert ^{p_{k}}<\infty , \tag{4.17}\\
&\exists L,\ \sup_{n}\sum_{k\in K_{2}}\big\vert \tilde{a}_{nk}L^{-\frac{1}{q_{n}}}\big\vert ^{\grave{p}_{k}}<\infty , \tag{4.18}\\
&\left( \frac{\lambda _{k}}{\lambda _{k}-\lambda _{k-1}}a_{nk}\right) _{k=0}^{\infty }\in c_{0}\left( q\right) \quad \left( \forall n\in \mathbb{N}\right) , \tag{4.19}\\
&\left( \frac{\lambda _{k}}{\lambda _{k}-\lambda _{k-1}}a_{nk}\right) _{k=0}^{\infty }\in c\left( q\right) \quad \left( \forall n\in \mathbb{N}\right) , \tag{4.20}\\
&\left( \frac{\lambda _{k}}{\lambda _{k}-\lambda _{k-1}}a_{nk}\right) _{k=0}^{\infty }\in \ell _{\infty }\left( q\right) \quad \left( \forall n\in \mathbb{N}\right) . \tag{4.21}
\end{align}
By using $\left( 4.3\right) $, $\left( 4.5\right) $ and Corollary 2, we have the following results: \begin{theorem} We have

$\left( i\right) $ $A\in \left( \ell \left( \lambda ,p\right) :\ell \left( q\right) \right) $ if and only if $\left( 4.6\right) $, $\left( 4.7\right) $, $\left( 4.8\right) $ and $\left( 4.19\right) $ hold.

$\left( ii\right) $ $A\in \left( \ell \left( \lambda ,p\right) :c_{0}\left( q\right) \right) $ if and only if $\left( 4.9\right) $, $\left( 4.10\right) $, $\left( 4.11\right) $ and $\left( 4.19\right) $ hold.

$\left( iii\right) $ $A\in \left( \ell \left( \lambda ,p\right) :c\left( q\right) \right) $ if and only if $\left( 4.12\right) $, $\left( 4.13\right) $, $\left( 4.14\right) $, $\left( 4.15\right) $, $\left( 4.16\right) $ and $\left( 4.20\right) $ hold.

$\left( iv\right) $ $A\in \left( \ell \left( \lambda ,p\right) :\ell _{\infty }\left( q\right) \right) $ if and only if $\left( 4.17\right) $, $\left( 4.18\right) $ and $\left( 4.21\right) $ hold. \end{theorem}
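As a quick numerical sanity check of the summation-by-parts identity (4.5), the sketch below assumes the standard $\lambda$-mean transform $y_{n}=\frac{1}{\lambda _{n}}\sum_{k=0}^{n}\left( \lambda _{k}-\lambda _{k-1}\right) x_{k}$ with $\lambda _{-1}=0$; this definition of $\Lambda $ is an assumption, since it is not restated in the present excerpt.

```python
import numpy as np

# Verify identity (4.5) for one row a_{nk} of an infinite matrix:
# sum_{k<=m} a_k x_k = sum_{k<m} a_tilde_k y_k + lam_m/(lam_m-lam_{m-1}) a_m y_m,
# where y = Lambda(x) is the (assumed) lambda-mean of x.
rng = np.random.default_rng(0)
m = 8
lam = np.cumsum(rng.uniform(0.5, 2.0, m + 1))   # strictly increasing, positive
a = rng.standard_normal(m + 1)                  # one matrix row, k = 0..m
x = rng.standard_normal(m + 1)

dlam = np.diff(np.concatenate(([0.0], lam)))    # lambda_k - lambda_{k-1}
y = np.cumsum(dlam * x) / lam                   # y = Lambda(x)

lhs = np.sum(a * x)
# a_tilde_k = (a_k/(lam_k-lam_{k-1}) - a_{k+1}/(lam_{k+1}-lam_k)) * lam_k
a_tilde = (a[:-1] / dlam[:-1] - a[1:] / dlam[1:]) * lam[:-1]
rhs = np.sum(a_tilde * y[:-1]) + lam[-1] / dlam[-1] * a[-1] * y[-1]
assert np.isclose(lhs, rhs)
```

The identity follows from Abel summation after writing $x_{k}=\left( \lambda _{k}y_{k}-\lambda _{k-1}y_{k-1}\right) /\left( \lambda _{k}-\lambda _{k-1}\right) $.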
\section{Introduction} Binary black holes (BBHs) are important sources of gravitational waves for the current and future gravitational wave detectors such as LIGO, Virgo, LCGT~\cite{Barish:1999,Sigg:2008, Acernese:2008,Kuroda:2010} and LISA~\cite{lisa_revised:2009, Jennrich:2009}. Data-analysis of these gravitational wave detectors proceeds with matched filtering, which requires accurate knowledge of the expected waveforms. This motivates numerical simulations of the inspiral, merger and ringdown of two black holes. Starting with Pretorius' 2005 breakthrough~\cite{Pretorius2005a}, several research groups have developed numerical codes capable of simulating this process (see~\cite{Centrella:2010} for a recent review). BBH inspiral simulations for gravitational wave detectors must cover at least the last $\approx 10$ orbits of the inspiral, and possibly many more~\cite{Santamaria:2010yb,Hannam:2010,Damour:2010, MacDonald:2011ne,Boyle:2011dy}, requiring simulations significantly longer than the dynamical timescales of the individual black holes. This separation of temporal scales becomes particularly pronounced for a BBH with mass-ratio $q\gg 1$: The dynamical time of the smaller black hole shrinks proportional to $1/q$. Simultaneously, the inspiral proceeds slower and the time the binary spends in the strong-field regime lengthens proportionally to $q$. All published numerical simulations of BBH inspiral and merger employ {\em explicit timestepping} algorithms which are subject to the Courant-Friedrichs-Lewy (CFL) condition which limits the timestep size by the smallest spatial scale in the problem. Binary inspiral typically involves spatial scales (the spatial resolution required by a small or rapidly spinning hole) which are orders of magnitude smaller than the relevant (orbital, precession, and radiation-reaction) timescales characterizing the inspiral. 
In explicit binary evolutions the CFL condition then effectively fixes the timestep size to be the dynamical timescale (see the preceding paragraph) for one of the constituent holes. Such a timestep is orders of magnitude smaller than the relevant physical timescales for the binary as a whole, particularly when the binary has a large mass ratio (such as the simulations in Refs.~\cite{LoustoZlowchower:2011,Sperhake:2011ik}) or when at least one constituent hole has a high spin (since the horizon of the high-spin hole then requires higher spatial resolution). For instance, a simulation whose constituent holes have dimensionless spin magnitudes of $0.95$~\cite{Lovelace2010} required half a million timesteps over 12.5 orbits. Were the CFL restriction overcome, computation of BBH inspirals with higher mass ratios, higher spins, and more orbits could become feasible. Implicit timestepping is one way to overcome the CFL condition and take larger timesteps. Of course, larger timesteps correspond to larger temporal truncation errors; however, a small timestep is required in BBH inspirals for {\em stability} (CFL condition) rather than {\em accuracy} (since, as argued above, the accuracy of a BBH inspiral is typically limited by spatial resolution, not temporal resolution). For problems dominated by spatial rather than temporal error, implicit timestepping methods often reduce the total computational cost (without significant loss of accuracy), but fully implicit methods can be difficult to implement for nonlinear evolution systems like the Einstein equations. Implicit-explicit (IMEX) methods \cite{Dutt2000,Minion2003,LaytonMinion,HagstromZhou2006} are a compromise which we explore here. IMEX timestepping has been successfully applied to a variety of problems, including fluid-structure interaction~\cite{vanZuijlenEtAl:2007}, relativistic plasma astrophysics~\cite{PalenzuelaEtAl:2008}, and hydrodynamics with heat conduction~\cite{KadiogluKnoll:2010}.
In Ref.~\cite{LauPfeiffer2008}, Lau, Pfeiffer, and Hesthaven applied IMEX methods to evolve a forced scalar wave propagating on a curved spacetime (a Schwarzschild black hole), achieving stable evolutions with timestep sizes $\approx 1000$ times larger than with explicit methods. In this paper, we lay much of the groundwork toward applying IMEX methods to full binary-black-hole evolutions. We develop an IMEX algorithm for one particular formulation of Einstein's equations used in explicit BBH evolutions, the generalized harmonic formulation (see \cite{Lindblom2006} and references therein). We use our IMEX algorithm to perform the first IMEX evolutions of single black holes (both static and dynamically perturbed). Our single-black-hole evolutions demonstrate the stability of our IMEX method. Further numerical experiments also investigate our method's efficiency; the IMEX algorithm offers a computational cost competitive with explicit evolution for sufficiently large step sizes. (Note that improved efficiency does not automatically follow from an IMEX algorithm affording larger timesteps, since each IMEX timestep is more expensive than an explicit step.) We also discuss further efficiency improvements of our IMEX implementation, and provide an outlook toward simulation of black hole binaries with IMEX techniques. This paper is organized as follows. In Sec.~\ref{sec:math}, we derive the IMEX generalized harmonic equations and boundary conditions that we will use. In Sec.~\ref{sec:NumericalExperiments}, we explore numerical simulations using these equations, with a particular focus on the stability and efficiency gains of these simulations. We conclude in Sec.~\ref{sec:Discussion} by discussing the implications of our results, emphasizing the probable gains in computational efficiency when using IMEX in full binary-black-hole simulations. 
\section{IMEX formulation of Einstein's equations}\label{sec:math} The generalized harmonic formulation of Einstein's equations consists of ten coupled scalar wave equations. Therefore, the present discussion will borrow heavily from our earlier work on IMEX evolutions of scalar fields on curved backgrounds~\cite{LauPfeiffer2008}. \subsection{Generalized harmonic system} \label{sec:GH} Our goal is to solve Einstein's equations for the spacetime metric $\psi_{ab}$, where Latin indices from the start of the alphabet ($a, \ldots, f$) range over $0, 1, 2, 3$. The first order {\em generalized harmonic} formulation of the Einstein evolution equations given by Lindblom {\em et al} (Eqs.~(35)--(37) of Ref.~\cite{Lindblom2006}) is the following: \begin{subequations}\label{eq:GhSystem} \begin{align} \partial_t\psi_{ab} & = (1+\gamma_1)V^k \partial_k\psi_{ab} - N \Pi_{ab} - \gamma_1 V^k \Phi_{kab}\\ \label{eq:GH-Pi} \partial_t \Pi_{ab} & = V^k\partial_k \Pi_{ab} - N g^{jk} \partial_j \Phi_{kab} + \gamma_1\gamma_2 V^k \partial_k \psi_{ab} \nonumber \\ & + 2N \psi^{cd}\big(g^{jk}\Phi_{jca}\Phi_{kdb} - \Pi_{ca} \Pi_{db} - \psi^{ef}\Gamma_{ace}\Gamma_{bdf} \big)\nonumber\\ & - 2N \nabla_{(a}H_{b)} - {\textstyle \frac{1}{2}}N t^c t^d \Pi_{cd}\Pi_{ab} - N t^c \Pi_{cj} g^{jk} \Phi_{kab} \nonumber \\ & + \gamma_0 N\big(2\delta^{c}{}_{(a} t_{b)} - \psi_{ab}t^c\big)\big(H_c+\Gamma_c\big) - \gamma_1 \gamma_2 V^k \Phi_{kab}\\ \partial_t \Phi_{jab} & = V^k \partial_k \Phi_{jab} - N \partial_j\Pi_{ab} + N \gamma_2\partial_j \psi_{ab}\nonumber \\ & + {\textstyle \frac{1}{2}} N t^c t^d \Phi_{jcd}\Pi_{ab} + N g^{km} t^c \Phi_{jkc}\Phi_{mab} - N\gamma_2 \Phi_{jab}. \end{align} \end{subequations} Here, $N$, $V^k$, and $g_{jk}$ are the spacetime metric's associated lapse function, shift vector, and spatial metric induced on level-$t$ slices. Latin indices from the middle of the alphabet $i, j, \ldots=1,2,3$ range only over spatial dimensions. 
As a one-form, $t_a = -N \partial_a t$ is the unit normal to the temporal foliation defined by the coordinate time $t$. The other fundamental variables $\Pi_{ab} \equiv -t^c \partial_c\psi_{ab}$ and $\Phi_{kab} \equiv \partial_k\psi_{ab}$ arise from the reduction of the generalized harmonic equations to first order form. The latter definition leads to the auxiliary constraint \begin{equation}\label{eq:3indexC} \mathcal{C}_{kab} \equiv \partial_k \psi_{ab} - \Phi_{kab}=0. \end{equation} The variable $\Gamma_a = \psi^{bc}\Gamma_{abc}$ represents a contraction of the Christoffel symbols $\Gamma_{abc}$ of the spacetime metric $\psi_{ab}$. Time derivatives $\partial_t\psi_{ab}$ inside $\Gamma_{abc}$ are evaluated in terms of $N$, $V^k$, $\Pi_{ab}$, and $\Phi_{kab}$~\cite{Lindblom2006}. The functions $H_c$ are freely specifiable and embody the coordinate-freedom of Einstein's equations~\cite{Lindblom2006}. Einstein's equations can be written as a set of constrained evolution equations; in the generalized harmonic formulation, the fundamental constraint takes the form \begin{equation}\label{eq:1indexC} \mathcal{C}_a \equiv H_a + \Gamma_a=0. \end{equation} Constraint damping~\cite{Gundlach2005,Pretorius2005a,Lindblom2006,Holst2004} is used to enforce both the fundamental constraint~(\ref{eq:1indexC}) and the auxiliary constraint~(\ref{eq:3indexC}). Those terms in Eqs.~(\ref{eq:GhSystem}) proportional to $\gamma_0$ damp the fundamental constraint~(\ref{eq:1indexC}). Those terms proportional to $\gamma_{1}$ and $\gamma_{2}$ in Eqs.~(\ref{eq:GhSystem}) damp the constraint \eqref{eq:3indexC}. Our IMEX formulation converts to second order variables and so the auxiliary constraint is trivially satisfied. Therefore, in the rest of this paper, we set $\gamma_1 = 0 = \gamma_2$ in all IMEX evolutions. 
\subsection{First-order implicit equations and second-order implicit equation for the metric} \label{sec:ImexFoshSplit} Although \eqref{eq:GhSystem} is a system of partial differential equations (PDEs), we formally view it as an ordinary differential equation (ODE) initial value problem, \begin{equation}\label{eq:IVP} \frac{d\boldsymbol{u}}{dt} = \boldsymbol{f}(t,\boldsymbol{u}), \quad \boldsymbol{u}(t_0) = \boldsymbol{u}_0, \end{equation} so that our notation conforms with the literature \cite{Dutt2000,Minion2003,LaytonMinion,HagstromZhou2006} on IMEX ODE methods. [Otherwise, we would have used partial time differentiation in \eqref{eq:IVP}.] The system \eqref{eq:GhSystem} is actually also solved as an initial boundary value problem; however, we defer the issue of boundary conditions to a later subsection. In this view $\boldsymbol{u}$ represents the collection $(\psi_{ab},\Pi_{ab},\Phi_{kab})$ of fundamental fields. Furthermore, we assume there exists a splitting \begin{equation} \boldsymbol{f}(t,\boldsymbol{u}) = \boldsymbol{f}^I(t,\boldsymbol{u}) + \boldsymbol{f}^E(t,\boldsymbol{u}) \end{equation} of the right-hand side $\boldsymbol{f}$ into an explicit sector $\boldsymbol{f}^E$ and an implicit sector $\boldsymbol{f}^I$. In this paper, as in Ref.~\cite{LauPfeiffer2008}, we {\em split by equation}. That is, we choose which terms on the right-hand side of Eq.~\eqref{eq:GhSystem} are to be treated implicitly. To take a timestep, we choose an IMEX timestepping algorithm, such as ImexEuler, Additive Runge Kutta (ARK) \cite{Kennedy-Carpenter:2003}, or semi-implicit spectral-deferred correction (SISDC) \cite{Dutt2000,Minion2003,LaytonMinion,HagstromZhou2006}. We note that while ARK was used almost exclusively in Ref.~\cite{LauPfeiffer2008}, we have encountered stability issues with its use in the work presented here, and therefore focus here on SISDC. 
As explained in Sec.~II A of Ref.~\cite{LauPfeiffer2008}, each of these algorithms requires that we are able to solve (multiple times per timestep) an implicit equation of the form \begin{equation} \boldsymbol{u} - \alpha\boldsymbol{f}^I(t,\boldsymbol{u}) = \boldsymbol{B}, \end{equation} where $\alpha$ is proportional to the step size $\Delta t$ and the inhomogeneity $\boldsymbol{B}$ is defined by the algorithm. For example, the corresponding equation for ImexEuler integration, \begin{equation} \boldsymbol{u}_{n+1} - \Delta t\boldsymbol{f}^I(t_{n+1},\boldsymbol{u}_{n+1}) = \boldsymbol{u}_n + \Delta t\boldsymbol{f}^E(t_n,\boldsymbol{u}_n), \end{equation} is solved to advance the solution from time $t_n$ to time $t_{n+1}$. Concrete expressions for $\boldsymbol{B}$ are given in Ref.~\cite{LauPfeiffer2008} for ARK and in Appendix~\ref{sec:SISDC} for SISDC. The IMEX splitting of the system \eqref{eq:GhSystem} that we chose is analogous to the ``case (ii)'' equations for the scalar-wave system given as Eqs.~(15a)--(15c) in Ref.~\cite{LauPfeiffer2008}. Specifically, we treat implicitly the entire right-hand sides of Eqs.~(\ref{eq:GhSystem}a) and (\ref{eq:GhSystem}c). However, a fully implicit treatment of the equation for $\Pi_{ab}$ has turned out to be prohibitively complicated. Therefore, of the terms appearing in the right-hand side of Eq.~(\ref{eq:GhSystem}b), we have chosen to include in the implicit sector only the principal-part terms and, possibly, the constraint damping term proportional to $\gamma_0$. The principal-part terms are the stiff terms which most constrain the timestep size, and, as we shall see later, the constraint damping term is also stiff. Implicit treatment of the remaining terms on the right-hand side of Eq.~(\ref{eq:GhSystem}b) would be difficult because the implicit equation which results from their inclusion has an extremely complicated variation.
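To make the ImexEuler update concrete, the following sketch applies it to a split linear model problem $du/dt = \lambda_I u + \lambda_E u$, with the stiff part treated implicitly. The model problem and the rate constants are illustrative assumptions, not quantities from this paper.

```python
# One ImexEuler step for du/dt = f_I(u) + f_E(u), with f_I(u) = lam_I*u
# (stiff, implicit) and f_E(u) = lam_E*u (nonstiff, explicit). For a
# linear implicit sector the implicit solve reduces to a division.
def imex_euler_step(u, dt, lam_I, lam_E):
    # Solve u_new - dt*lam_I*u_new = u + dt*lam_E*u for u_new.
    rhs = u + dt * lam_E * u
    return rhs / (1.0 - dt * lam_I)

lam_I, lam_E = -1000.0, -1.0   # stiff and nonstiff decay rates (assumed)
dt, u = 0.01, 1.0              # dt far above the explicit limit ~ 2/|lam_I|
for _ in range(100):
    u = imex_euler_step(u, dt, lam_I, lam_E)
assert 0.0 < u < 1.0           # stable decay despite dt*|lam_I| = 10
```

Note that the explicit part alone would be unconditionally unstable at this step size; treating the stiff term implicitly yields a per-step amplification of $(1+\Delta t\,\lambda_E)/(1-\Delta t\,\lambda_I)\approx 0.09$.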
This variation would be required were the resulting equation solved (as part of the overall system) via Newton iteration. Our splitting of Eq.~(\ref{eq:GhSystem}b) could be improved upon. Indeed, with $\boldsymbol{f}_{\Pi_{ab}}(t,\boldsymbol{u})$ representing the right-hand side of the evolution equation (\ref{eq:GhSystem}b) for $\Pi_{ab}$, a binary evolution based on the dual-frames approach will have $\boldsymbol{f}_{\Pi_{ab}} = \mathcal{O}(\omega)$, where $\omega$ is the orbital frequency (a small quantity). However, for our described splitting both $\boldsymbol{f}^I_{\Pi_{ab}}$ and $\boldsymbol{f}^E_{\Pi_{ab}}$ would be $\mathcal{O}(1)$. Although their combination is small, each individual term on the right-hand side of (\ref{eq:GhSystem}b) need not be. In other words, there appears to be no natural {\em splitting by equation} for Eqs.~(\ref{eq:GhSystem}), as there often is for, say, advection-diffusion problems. While we do not yet fully appreciate the consequences of the splitting we shall employ here, we are considering approaches to mitigate potential problems with our splitting-by-equation approach. Among these is a fully implicit implementation of Eq.~(\ref{eq:GhSystem}b), with other possibilities discussed in the conclusion of Ref.~\cite{LauPfeiffer2008}. Our choices above correspond to the following first-order implicit equation for $\Pi_{ab}$: \begin{align}\label{eq:ImpPiequation} \begin{split} \Pi_{ab} & - \alpha \big[V^k\partial_k \Pi_{ab} - N g^{jk}\partial_j\Phi_{kab}\\ & + \gamma_0^I N\big(2\delta^{c}{}_{(a} t_{b)} - \psi_{ab}t^c\big)\big(H_c+\Gamma_c\big)\big] = B_{\Pi_{ab}}. \end{split} \end{align} Here we have split the damping parameter as $\gamma_0 = \gamma_0^I + \gamma_0^E$, which in general allows for part of the damping term to be treated implicitly (if $\gamma_0^I \neq 0$) and part explicitly (if $\gamma_0^E \neq 0$). 
In Eq.~\eqref{eq:ImpPiequation} we view $\Gamma_e$ as \begin{equation}\label{eq:decompGamma} \Gamma_e = \underbrace{\frac{\partial \Gamma_e}{ \partial\Pi_{cd}} \Pi_{cd}}_{\text{terms with }\Pi_{ab}} + \underbrace{\left[\Gamma_e - \frac{\partial \Gamma_e}{\partial \Pi_{cd}} \Pi_{cd}\right]}_{\text{terms without }\Pi_{ab}}, \end{equation} with the details of this decomposition given in Appendix~\ref{sec:GammaDecomp}. The reason for the decomposition is given immediately after Eq.~\eqref{eq:alfaNPi}. In all, our first-order implicit equations corresponding to the evolution system \eqref{eq:GhSystem} are then as follows: \begin{subequations}\label{eq:ImexGhCase2} \begin{align} \psi_{ab} & - \alpha\big(V^k \partial_k\psi_{ab} - N \Pi_{ab}\big) = B_{\psi_{ab}}\\ \Pi_{ab} & - \alpha \big( V^k\partial_k \Pi_{ab} - N g^{jk} \partial_j \Phi_{kab} + N\mathcal{Q}_{ab}{}^{cd}\Pi_{cd} \nonumber \\ & + N \mathcal{G}_{ab} \big) = B_{\Pi_{ab}}\\ \Phi_{jab} & - \alpha\big(V^k \partial_k \Phi_{jab} - N \partial_j\Pi_{ab} + {\textstyle \frac{1}{2}} N t^c t^d \Phi_{jcd}\Pi_{ab} \nonumber \\ & + N g^{km} t^c \Phi_{jkc}\Phi_{mab}\big) = B_{\Phi_{jab}}, \end{align} \end{subequations} where \begin{align}\label{eq:QandG} \mathcal{Q}_{ab}{}^{cd} & \equiv \gamma_0^I \big(2\delta^{e}{}_{(a} t_{b)} - \psi_{ab}t^e\big)\frac{\partial\Gamma_e}{\partial \Pi_{cd}} \nonumber \\ \mathcal{G}_{ab} & \equiv \gamma_0^I \big(2\delta^{e}{}_{(a} t_{b)} - \psi_{ab}t^e\big)\left[H_e+\Gamma_e - \frac{\partial\Gamma_e}{\partial \Pi_{cd}}\Pi_{cd}\right]. \end{align} To solve these equations, we first take a combination of them to obtain a single second-order equation for $\psi_{ab}$. In terms of $\xi_{ab} \equiv \psi_{ab} - \alpha V^k \partial_k\psi_{ab}$, we express (\ref{eq:ImexGhCase2}a) as \begin{equation}\label{eq:xifirsteqn} \alpha N \Pi_{ab} = B_{\psi_{ab}} - \xi_{ab}.
\end{equation} Multiplication of Eq.~(\ref{eq:ImexGhCase2}b) by $\alpha N$, followed by a substitution with \eqref{eq:xifirsteqn}, yields \begin{align}\label{eq:alfaNPi} & \alpha N \Pi_{ab} - \alpha^2 NV^k\partial_k \Pi_{ab} + \alpha^2 N^2 g^{jk} \partial_j \Phi_{kab} \nonumber \\ & - \alpha N \mathcal{Q}_{ab}{}^{cd}(B_{\psi_{cd}} - \xi_{cd}) - \alpha^2 N^2 \mathcal{G}_{ab} = \alpha N B_{\Pi_{ab}}. \end{align} The decomposition (\ref{eq:decompGamma}) ensures that the substitution with Eq.~\eqref{eq:xifirsteqn} is also made for the $\Pi_{cd}$ terms in $\Gamma_e$. We subtract the last equation from (\ref{eq:ImexGhCase2}a) to reach \begin{align}\label{eq:Piresult} & \psi_{ab} - \alpha V^k \partial_k \psi_{ab} + \alpha^2 NV^k\partial_k \Pi_{ab} - \alpha^2 N^2 g^{jk} \partial_j \Phi_{kab} \nonumber \\ & - \alpha N \mathcal{Q}_{ab}{}^{cd}\xi_{cd} + \alpha^2 N^2 \mathcal{G}_{ab} \nonumber \\ & = B_{\psi_{ab}} - \alpha N B_{\Pi_{ab}} -\alpha N \mathcal{Q}_{ab}{}^{cd}B_{\psi_{cd}}. \end{align} We must eliminate the term $\alpha^2 N V^k \partial_k\Pi_{ab}$ from the result. To this end, we contract Eq.~(\ref{eq:ImexGhCase2}c) into $\alpha V^j$, thereby finding \begin{align} & \alpha V^j \Phi_{jab} - \alpha^2 V^k V^j \partial_k \Phi_{jab} + \alpha^2 N V^j \partial_j\Pi_{ab} \nonumber \\ & - {\textstyle \frac{1}{2}} \alpha^2 N t^c t^d V^j \Phi_{jcd}\Pi_{ab} - \alpha^2 N g^{km} t^c V^j \Phi_{jkc}\Phi_{mab} \nonumber \\ & = \alpha V^j B_{\Phi_{jab}}, \end{align} which, using Eq.~(\ref{eq:xifirsteqn}), we rewrite as \begin{align} & \alpha V^j \Phi_{jab} - \alpha^2 V^k V^j \partial_k \Phi_{jab} + \alpha^2 N V^j \partial_j\Pi_{ab} \nonumber \\ & + {\textstyle \frac{1}{2}} \alpha t^c t^d V^j \Phi_{jcd} \xi_{ab} - \alpha^2 N g^{km} t^c V^j \Phi_{jkc}\Phi_{mab} \nonumber \\ & = \alpha V^j B_{\Phi_{jab}} + {\textstyle \frac{1}{2}} \alpha t^c t^d V^j \Phi_{jcd} B_{\psi_{ab}}. 
\end{align} Subtracting the last equation from \eqref{eq:Piresult} and making substitutions with the constraint \eqref{eq:3indexC}, we arrive at the following second--order equation: \begin{align} \psi_{ab} & - 2\alpha V^k \partial_k \psi_{ab} - \alpha^2 \big(N^2 g^{jk} - V^j V^k\big) \partial_j\partial_k \psi_{ab} \nonumber \\ & - {\textstyle \frac{1}{2}} \alpha t^c t^d V^j (\partial_j \psi_{cd}) (\psi_{ab} - \alpha V^k \partial_k \psi_{ab}) \nonumber \\ & + \alpha^2 N g^{km} t^c V^j (\partial_j \psi_{kc})(\partial_m \psi_{ab}) \nonumber \\ & - \alpha N \mathcal{Q}_{ab}{}^{cd} (\psi_{cd} - \alpha V^k \partial_k \psi_{cd}) + \alpha^2 N^2 \mathcal{G}_{ab} \nonumber \\ & = \big(1-{\textstyle \frac{1}{2}} \alpha t^c t^d V^j \partial_j \psi_{cd}\big)B_{\psi_{ab}} - \alpha N B_{\Pi_{ab}} - \alpha V^k B_{\Phi_{kab}} \nonumber \\ & -\alpha N \mathcal{Q}_{ab}{}^{cd}B_{\psi_{cd}} + \text{ terms homogeneous in } \mathcal{C}_{kab}. \label{eq:psiEquation} \end{align} To solve the system \eqref{eq:ImexGhCase2}, we first solve \eqref{eq:psiEquation}, subject to boundary conditions discussed in Sec.~\ref{sec:BC}. Next, we recover $\Pi_{ab}$ algebraically from (\ref{eq:ImexGhCase2}a). Finally, we set $\Phi_{kab} = \partial_k \psi_{ab}$, i.e., we enforce that the constraint $\mathcal{C}_{kab} = 0$. We stress that, as a linear and undifferentiated combination of Eqs.~\eqref{eq:ImexGhCase2} for the first-order system, Eq.~\eqref{eq:psiEquation} actually contains no second-order derivatives of $\psi_{ab}$. Indeed, all of the $B$-terms on the right-hand side of Eq.~\eqref{eq:psiEquation} appear undifferentiated, indicating that we have not differentiated the first-order system \eqref{eq:ImexGhCase2}. Each second-order derivative of $\psi_{ab}$ on the left-hand side of \eqref{eq:psiEquation} is precisely canceled by a corresponding term appearing in one of the constraint terms on the right-hand side [not shown explicitly in Eq.~(\ref{eq:psiEquation})]. 
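The three-step procedure just described (solve the second-order equation for the metric, recover $\Pi_{ab}$ algebraically, then enforce the auxiliary constraint) can be mimicked in miniature for the flat-space scalar-wave analog ($N=1$, $V^k=0$) on a one-dimensional periodic grid. The scalar system and the inhomogeneities below are illustrative stand-ins, not the generalized harmonic equations themselves.

```python
import numpy as np

# Scalar-wave analog of the solve-then-recover procedure. The implicit
# system is psi + a*Pi = B_psi, Pi + a*dx(Phi) = B_Pi, with constraint
# Phi = dx(psi); eliminating Pi and Phi (and dropping constraint terms)
# gives the second-order equation (1 - a^2 d^2/dx^2) psi = B_psi - a*B_Pi.
n = 64
x = 2.0 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers on [0, 2*pi)
a = 0.3                                 # alpha, proportional to the timestep

B_psi, B_Pi = np.sin(x), np.cos(2.0 * x)   # illustrative inhomogeneities
rhs = B_psi - a * B_Pi
# Step 1: solve the second-order implicit equation spectrally.
psi = np.real(np.fft.ifft(np.fft.fft(rhs) / (1.0 + a**2 * k**2)))
# Step 2: recover Pi algebraically from the psi-equation.
Pi = (B_psi - psi) / a
# Step 3: enforce the auxiliary constraint Phi = dx(psi).
Phi = np.real(np.fft.ifft(1j * k * np.fft.fft(psi)))

# Consistency: the first-order Pi-equation is satisfied to roundoff.
dPhi = np.real(np.fft.ifft(1j * k * np.fft.fft(Phi)))
assert np.allclose(Pi + a * dPhi, B_Pi)
```

Because the data are band-limited, the spectral derivatives are exact to roundoff and the recovered fields satisfy the original first-order implicit system identically.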
Now, when \emph{numerically} solving Eq.~\eqref{eq:psiEquation}, we set the constraint terms from the right-hand side to zero, thereby creating a genuinely second-order equation. We discuss the permissibility of this procedure in Sec.~\ref{sec:AuxCon} below. \subsection{Boundary conditions}\label{sec:BC} For black-hole evolutions which employ excision, the inner boundary lies within an apparent horizon. For this scenario we adopt no inner boundary condition, regardless of what condition is adopted at the outer boundary and despite the fact that Eq.~\eqref{eq:psiEquation} is a second-order equation. In the context of scalar fields on a fixed black-hole background, Ref.~\cite{LauPfeiffer2008} has discussed the motivation for and permissibility of this procedure. A similar analytical treatment of the coupled nonlinear system \eqref{eq:psiEquation} would be, we suspect, a difficult piece of mathematical analysis, one beyond the scope of this paper. Therefore, here we content ourselves both with the scalar field analogy and the observation that the lack of an inner boundary condition has caused no difficulties numerically. Nevertheless, the issue merits further study. The outer boundary condition that we apply to Eq.~\eqref{eq:psiEquation} is either (i) a fixed Dirichlet condition on each component $\psi_{ab}$ of the spacetime metric or (ii) the following condition. In terms of the incoming characteristic variable $U^{-}_{ab} \equiv \Pi_{ab} - n^k \Phi_{kab}$ (where $n^k$ is the unit outward-pointing normal vector to the boundary), we rewrite Eq.~(\ref{eq:ImexGhCase2}a) as \begin{equation}\label{eq:ImexFixUminus} \psi_{ab} + \alpha (N n^k - V^k) \partial_k\psi_{ab} = B_{\psi_{ab}} - \alpha N U^{-}_{ab} + \alpha N n^k \mathcal{C}_{kab}. \end{equation} We control $U^{-}_{ab}$ at the boundary; therefore, both $B_{\psi_{ab}}$ and $U^{-}_{ab}$ here appear as fixed quantities, and Eq.~(\ref{eq:ImexFixUminus}) represents a boundary condition on $\psi_{ab}$.
Moreover, when numerically enforcing this condition we also set the constraint term on the right-hand side to zero. \subsection{Implicit equation for the auxiliary constraint} \label{sec:AuxCon} Eqs.~(\ref{eq:ImexGhCase2}a) and (\ref{eq:ImexGhCase2}c) imply an implicit equation for the auxiliary constraint. Partial differentiation of (\ref{eq:ImexGhCase2}a) yields \begin{align} \partial_j\psi_{ab} & - \alpha\big[ (\partial_jV^k) (\partial_k\psi_{ab}) + V^k \partial_k\partial_j \psi_{ab} \nonumber \\ & - (\partial_j N) \Pi_{ab} - N \partial_j\Pi_{ab}\big] = \partial_j B_{\psi_{ab}}. \label{eq:psiIMPdiff} \end{align} To express the derivatives of the lapse and shift in terms of derivatives of the metric $\psi_{ab}$, we use the result \begin{equation} \delta\psi_{ab} = -2N^{-1}t_a t_b \delta N -2N^{-1} g_{k(a} t_{b)} \delta V^k +g^i_{(a} g^k_{b)} \delta g_{ik}, \end{equation} which in turn yields \begin{equation} \delta N = -{\textstyle \frac{1}{2}} N t^c t^d \delta\psi_{cd},\quad \delta V^k = N g^{km} t^c \delta\psi_{mc}. \end{equation} Insertion of these results (with the variation $\delta \rightarrow \partial_j$) into \eqref{eq:psiIMPdiff} gives \begin{align} \partial_j\psi_{ab} & - \alpha\big[ N g^{km} t^c (\partial_j\psi_{mc})(\partial_k\psi_{ab}) + V^k \partial_k\partial_j \psi_{ab} \nonumber \\ & +{\textstyle \frac{1}{2}}N t^c t^d (\partial_j\psi_{cd}) \Pi_{ab} - N \partial_j\Pi_{ab}\big] = \partial_j B_{\psi_{ab}}. \end{align} Finally, we subtract (\ref{eq:ImexGhCase2}c) from the last equation and make substitutions with the constraint to reach \begin{align}\label{eq:ImpEquationC} \mathcal{C}_{jab} & - \alpha\big[ V^k \partial_k \mathcal{C}_{jab} + N g^{km} t^c (\Phi_{jkc}\mathcal{C}_{mab} +\mathcal{C}_{jkc}\partial_m\psi_{ab}) \nonumber \\ & +{\textstyle \frac{1}{2}} N t^c t^d \mathcal{C}_{jcd}\Pi_{ab}\big] =\partial_j B_{\psi_{ab}} - B_{\Phi_{jab}}.
\end{align} This equation is analogous to Eq.~(20) of Ref.~\cite{LauPfeiffer2008}, \begin{equation} \bar{\mathcal{C}}_j - \alpha \pounds_V \bar{\mathcal{C}}_j = \partial_j B_\psi - B_{\Phi_j}, \label{eq:scalarimpeq} \end{equation} for scalar waves on a fixed curved background, where the overbar on $\bar{\mathcal{C}}_j$ serves to differentiate this constraint from the generalized harmonic constraint $\mathcal{C}_a$ in Eq.~\eqref{eq:1indexC} (which carries a spacetime rather than spatial index in any case). Specifically, in the scalar wave scenario the variables $(\psi,\Pi,\Phi_k)$ are analogous to the generalized harmonic variables $(\psi_{ab},\Pi_{ab},\Phi_{kab})$, and the auxiliary constraint is $\bar{\mathcal{C}}_j \equiv \partial_j\psi - \Phi_j$. Starting with a prescribed $\bar{\mathcal{C}}_j$ at the outer boundary, we may integrate Eq.~(\ref{eq:scalarimpeq}) along the integral curves of the shift vector. This independent integration of $\bar{\mathcal{C}}_j$ proved important toward understanding in what sense solving the second-order implicit equation for $\psi$ [analogous to Eq.~\eqref{eq:psiEquation}] was equivalent to solving the first-order system for $(\psi,\Pi,\Phi_k)$ [analogous to Eq.~\eqref{eq:ImexGhCase2}]. Such an independent integration of \eqref{eq:ImpEquationC} is clearly not possible. Nevertheless, provided both $\mathcal{C}_{jab} = 0$ on the outer boundary and a vanishing right-hand source in \eqref{eq:ImpEquationC}, the equation formally determines $\mathcal{C}_{jab} = 0$ along the integral curves of $V^k$. This motivates our neglecting the terms homogeneous in $\mathcal{C}_{jab}$ in Eq.~(\ref{eq:psiEquation}). Consideration of our steps above for solving \eqref{eq:ImexGhCase2} shows that the constraint $\mathcal{C}_{jab}$ remains exactly zero throughout our IMEX scheme. We are then effectively evolving only the variables $\psi_{ab}$ and $\Pi_{ab}$. Our reasons for nevertheless retaining $\Phi_{jab}$ in the formalism are twofold. 
First, {\tt SpEC} ---the software project we have used for simulations--- chiefly supports first order symmetric hyperbolic systems. Second, as described in the conclusion, for the binary problem we envision a {\em split by region} approach, in which outer subdomains are treated explicitly and inner subdomains (spherical shells) immediately near the holes are treated by IMEX methods. Since explicit evolutions in {\tt SpEC} currently require a first order system, the variable $\Phi_{jab}$ must be present in the outer subdomains. Coupling between the outer and inner subdomains is then facilitated by having $\Phi_{jab}$ also available on the inner subdomains. There has been recent progress in applying spectral methods to evolve second order in space partial differential equations~\cite{Taylor:2010ki}. If these techniques work for the generalized harmonic system, it should be possible to abandon $\Phi_{jab}$ entirely. \section{Numerical Experiments} \label{sec:NumericalExperiments} Through numerical simulations of single black holes, we now examine the behavior of the scheme presented above. We evolve initial data representing both (i) the static Schwarzschild solution in Kerr-Schild coordinates and (ii) the same solution with a superposed ingoing pulse of gravitational radiation. The latter is a vacuum problem with non-trivial evolution. As the gravitational wave pulse travels inward, it hits and perturbs the black hole. Most of the pulse is absorbed by the black hole, increasing its mass; the rest is scattered and propagates away. This test features initial dynamics on short timescales (moving pulse of radiation, perturbed black hole), with relaxation to time-independence. Eventually, the black hole settles down to a stationary black hole, and the scattered radiation leaves the computational domain through the outer boundary. Technical details for the dynamical case (ii) are summarized in Appendix~\ref{App:ID}. 
\subsection{Long-time stability of IMEX evolutions} \label{sec:stability} In this subsection we demonstrate the stability of our IMEX algorithm by evolving the static Schwarzschild solution in Kerr-Schild coordinates to late times (up to $10^4M$), adopting fixed Dirichlet conditions, that is with $\psi_{ab}$ fixed as the analytical solution on the outer boundary. We note that the radiation conditions (\ref{eq:ImexFixUminus}), with $U^{-}_{ab}$ determined by the analytical solution on the outer boundary, apparently give rise to an extremely weak instability. Indeed, with Eq.~(\ref{eq:ImexFixUminus}) a slowly growing instability appears after (sometimes well after) time $10^3 M$. We specify no inner boundary condition (cf. Sec.~\ref{sec:BC}). Our domain, a single spherical shell with Cartesian center $(0.01,-0.0097,0.003)$, is determined by a top spherical harmonic index $\ell_\mathrm{max} = 7$ and the radial interval $1.9 \leq r \leq 11.9$, with $N_r = 15$ radial collocation points and an exponential mapping of the radial coordinate (see Eq.~(48) of \cite{LauPfeiffer2008}). Results for Cartesian center $(0,0,0)$ are qualitatively similar, but with the corresponding errors a few orders of magnitude smaller. For constraint damping parameters, we have taken $\gamma_0^I = 1$ and $\gamma_0^E = 0$. We have performed IMEX evolutions with an ImexEuler timestepper (first order accurate and requiring one solution of the system (\ref{eq:ImexGhCase2}) per timestep), 3-point (substep) Gauss-Lobatto SISDC (GLoSISDC3, fourth order accurate, eight implicit solves per timestep), and 2-point (substep) Gauss-Radau-right SISDC (GRrSISDC2, third order accurate, six implicit solves per timestep). Since the geometry is time-independent, numerical solution of \eqref{eq:psiEquation} will be achieved without any iterations in the Newton-Raphson algorithm, assuming that the solution at the previous timestep serves as an initial guess. 
To prevent this trivial convergence, we have rescaled the initial guess $\psi^0_{ab} \rightarrow 1.00001 \psi^0_{ab}$ before each implicit solve. For GLoSISDC3 and GRrSISDC2 respectively, Figs.~\ref{fig:LongTimeErrorsGLo} and \ref{fig:LongTimeErrorsGRr} depict error histories for the metric $\psi_{ab}$ as measured against the exact solution. Each plot exhibits long-time stability for the larger timesteps considered but weak instability for some of the smaller timesteps. \begin{figure} \includegraphics[width=0.45\textwidth]{LongTimeErrorsGLo.eps} \caption{Error histories for GLoSISDC3. $\|\cdot\|$ represents the 1-norm with respect to the Cartesian coordinate measure over the spherical shell, i.~e.~$\|f\| = \int_V |f|dxdydz$, and $V$ is the improper (coordinate) volume of the spherical shell. As mentioned in the text, $\Delta\psi_{ab}$ denotes the difference between the numerical metric and the exact solution. } \label{fig:LongTimeErrorsGLo} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{LongTimeErrorsGRr.eps} \caption{Error histories for GRrSISDC2. See the caption of Figure~\ref{fig:LongTimeErrorsGLo} for an explanation of the figure labels. } \label{fig:LongTimeErrorsGRr} \end{figure} Examination of the stability diagrams for these methods suggests a heuristic explanation of our results. The diagram for a given (either explicit or implicit) ODE method is determined by its application to the model problem $du/dt = \lambda u$, where $\lambda = \xi + \mathrm{i}\eta$. Subject to the initial condition $u_0 = 1$, a single timestep for a given method produces an update $u_{\Delta t} = \mathrm{Amp}(\lambda \Delta t)$, the {\em amplification factor} which is a function of the complex variable $\lambda \Delta t$. The {\em region of absolute stability} for a given method is then the domain in the $(\lambda \Delta t)$-plane for which $|\mathrm{Amp}(\lambda \Delta t)| \leq 1$. 
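The amplification-factor machinery can be made concrete for backward Euler, which is the implicit sector of the ImexEuler timestepper used above. For $du/dt=\lambda u$ a backward Euler step gives $\mathrm{Amp}(z) = 1/(1-z)$ with $z=\lambda\Delta t$; the following minimal sketch scans the imaginary axis (the relevant axis for wave propagation):

```python
import numpy as np

# Amplification factor of backward Euler applied to du/dt = lambda*u:
#   u_{n+1} = u_n + dt*lambda*u_{n+1}  =>  Amp(z) = 1/(1 - z),  z = lambda*dt
def amp_backward_euler(z):
    return 1.0 / (1.0 - z)

# Scan the imaginary axis z = i*eta*dt, relevant for wave propagation.
eta_dt = np.linspace(-10.0, 10.0, 2001)
amps = np.abs(amp_backward_euler(1j * eta_dt))

# |Amp(i*eta*dt)| = 1/sqrt(1 + (eta*dt)^2) <= 1: backward Euler is
# absolutely stable (and dissipative) for purely imaginary lambda.
print(amps.max())
```

This reproduces the standard fact, invoked below, that the stability region of backward Euler contains the entire imaginary axis and is dissipative there.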
Figures \ref{fig:ImpGLoStabilityDiagram} and \ref{fig:ImpGRrStabilityDiagram} respectively depict the stability diagrams for GLoSISDC3 and GRrSISDC2, with the model problem treated fully implicitly, i.e.~with $f^I = \lambda u$ and $f^E = 0$. For both diagrams, our interest lies with the imaginary axis, since the system \eqref{eq:GhSystem} of equations we evolve supports the propagation of waves. For GLoSISDC3, the imaginary axis lies within the region of absolute stability, except for a portion around the origin. The bottom panel of Fig.~\ref{fig:ImpGLoStabilityDiagram} shows that $|\mathrm{Amp}(\mathrm{i}\eta\Delta t)|>1$ for $|\eta\Delta t|\lesssim 1.28$, with the maximum at $\eta\Delta t \approx \pm 1$. Note also that $|\eta\Delta t|\lesssim 0.35$ corresponds to an essentially conservative method, since then $|\mathrm{Amp}(\mathrm{i}\eta\Delta t)|$ is very close to unity. Therefore, assuming $\lambda$ in the model problem is purely imaginary, we expect growth in the numerical solution for timesteps $\Delta t \lesssim 1.28|\lambda|^{-1}$, and absolute stability for $\Delta t \gtrsim 1.28|\lambda|^{-1}$. Figure~\ref{fig:ImpGRrStabilityDiagram} provides the analogous information for GRrSISDC2; the bottom plot indicates growth for timesteps $\Delta t \lesssim 0.51|\lambda|^{-1}$ but absolute stability for $\Delta t \gtrsim 0.51|\lambda|^{-1}$. We now attempt to identify $\lambda$ in the model problem with characteristic speeds for the evolution system \eqref{eq:GhSystem}. \begin{figure} \includegraphics[width=0.45\textwidth]{ImpGLoStabilityDiagram.eps} \caption{Diagram for implicit sector of GLoSISDC3. The bottom plot depicts the cross section of the top plot along the imaginary axis, with $\lambda = \mathrm{i}\eta \in \mathrm{i}\mathbb{R}$. 
} \label{fig:ImpGLoStabilityDiagram} \end{figure} Given an outward-pointing unit normal $n^k$ (often to the boundary of a computational domain or subdomain), the characteristic variables of Eqs.~(\ref{eq:GhSystem}) are \begin{equation}\label{eq:CharFields} \psi_{ab},\quad \Pi_{ab} \pm n^k\Phi_{kab},\quad (\delta^k_j - n_j n^k)\Phi_{kab}, \end{equation} and their respective characteristic speeds are \begin{equation}\label{eq:CharSpeeds} -n_kV^k,\quad -n_kV^k\pm N,\quad -n_kV^k. \end{equation} Equations~(\ref{eq:CharFields}) and~(\ref{eq:CharSpeeds}) are derived in~\cite{Lindblom2006} [see Eqs.~(32)--(34) of that reference and the text thereafter, but set $\gamma_2 = 0 = \gamma_1$ as is the case here]. For the Schwarzschild solution in Kerr-Schild coordinates (see Eq.~(34) of \cite{LauPfeiffer2008}), the characteristic speeds for propagation orthogonal to an $r=\mbox{const}$ sphere reduce to \begin{subequations} \begin{align}\label{eq:AdvectionSpeed} n_k V^k & = \frac{2M}{\sqrt{r^2+2Mr}},\\ n_k V^k \pm N & = \frac{2M}{\sqrt{r^2+2Mr}} \pm \sqrt{\frac{r}{r + 2M}}, \end{align} \end{subequations} where these expressions correspond to coordinate spheres adapted to the spherical symmetry, i.e.~to Cartesian center $(0,0,0)$. The smallest speeds (in magnitude) are $n_k V^k$ near the outer boundary ($r$ large), and $n_k V^k - N$ near the horizon ($r = 2M$). \begin{figure} \includegraphics[width=0.45\textwidth]{ImpGRrStabilityDiagram.eps} \caption{Diagram for implicit sector of GRrSISDC2. See relevant comments given in the caption of Fig.~\ref{fig:ImpGLoStabilityDiagram}. } \label{fig:ImpGRrStabilityDiagram} \end{figure} An instability driven by the speed Eq.~(\ref{eq:AdvectionSpeed}) evaluated at the outer boundary appears consistent with the stability diagrams Figs.~\ref{fig:ImpGLoStabilityDiagram} and~\ref{fig:ImpGRrStabilityDiagram} in the following sense: At the outer boundary $r=11.9$, $n_k V^k \approx 0.16$. 
Assuming wave solutions propagating with this characteristic speed, we have $\lambda = \mathrm{i}0.16$ in the model problem above. Our simple analysis predicts instability when $\Delta t \lesssim 8.0$ for GLoSISDC3 and $\Delta t \lesssim 3.2$ for GRrSISDC2, with stability for $\Delta t$ larger than these estimates. The results depicted in Figs.~\ref{fig:LongTimeErrorsGLo} and \ref{fig:LongTimeErrorsGRr} are consistent with these predictions. Note that the bottom panels of Figs.~\ref{fig:ImpGLoStabilityDiagram} and~\ref{fig:ImpGRrStabilityDiagram} indicate better stability properties for $|\eta\Delta t|$ close to zero. However, even if the characteristic speeds at the outer boundary correspond to this ``near-stable'' portion of the imaginary axis in the relevant stability diagram, the characteristic speeds normal to $r=\mbox{const}$ surfaces are larger at smaller radii $r$, and therefore place $|\mbox{Amp}(\mathrm{i}\eta\Delta t)|$ nearer its maximum. Moreover, the predictions of our stability analysis appear at least qualitatively correct when the location of the outer boundary is moved to larger radii, where $n_kV^k$ is smaller. As $n_kV^k$ decreases, larger timesteps $\Delta t$ should become unstable. Indeed, with GLoSISDC3 for example, we find that $\Delta t=8$ is unstable for $r = 18.9$ (and apparently independent of radial resolution). By similarly pushing the outer boundary outward, we can render $\Delta t = 4$ unstable for GRrSISDC2. Finally, we note that the standard stability region for backward Euler contains the entire imaginary axis, and is dissipative for imaginary $\lambda$. All of our evolutions with ImexEuler have proved correspondingly stable, even for small timesteps (with $\Delta t = 1/2$ the smallest considered).
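The characteristic-speed estimates above are easy to verify numerically. The sketch below evaluates the Kerr-Schild expressions for $n_kV^k$ and $N$ at the outer boundary $r=11.9$ (with $M=1$) and combines them with the stability-diagram thresholds $|\eta\Delta t|\lesssim 1.28$ (GLoSISDC3) and $|\eta\Delta t|\lesssim 0.51$ (GRrSISDC2):

```python
import numpy as np

# Characteristic speeds for Schwarzschild in Kerr-Schild coordinates,
# normal to an r = const sphere (M = 1):
def advection_speed(r, M=1.0):          # n_k V^k = 2M / sqrt(r^2 + 2Mr)
    return 2.0 * M / np.sqrt(r**2 + 2.0 * M * r)

def lapse(r, M=1.0):                    # N = sqrt(r / (r + 2M))
    return np.sqrt(r / (r + 2.0 * M))

nV = advection_speed(11.9)              # ~ 0.16 at the outer boundary

# Timesteps below which the model problem with lambda = i*nV predicts
# growth, using the thresholds read off the stability diagrams:
dt_glo = 1.28 / nV                      # approximately 8 (GLoSISDC3)
dt_grr = 0.51 / nV                      # approximately 3.2 (GRrSISDC2)
print(nV, dt_glo, dt_grr)

# The speed n_k V^k - N vanishes exactly at the horizon r = 2M,
# consistent with the smallest speeds occurring there.
print(advection_speed(2.0) - lapse(2.0))
```

The printed thresholds agree with the instability boundaries quoted in the text to the accuracy of the $n_kV^k\approx 0.16$ rounding.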
\subsection{Convergence of the IMEX method} We now verify both the temporal and spatial convergence of our scheme, using the perturbed initial data [case (ii)] described both above and in more detail in Appendix~\ref{App:ID}. We continue to use $(\gamma_0^I,\gamma_0^E) = (1,0)$, and to adopt exponential mappings for all radial intervals. To verify temporal convergence, we first construct an accurate reference solution by evolving the perturbed-black-hole initial data to final time $t_F = 15.0$ with an explicit Dormand-Prince 5 (DP5) timestepper and timestep $\Delta t = 0.015625$. The spatial domain is determined by a top spherical harmonic index $\ell_\mathrm{max} = 15$ and $1.9 \leq r \leq 81.9$, and is divided into 8 equally spaced concentric shells, each with $N_r = 21$ radial collocation points. Next, for each timestep in a sequence of successively smaller timesteps we perform an analogous IMEX evolution using the GLoSISDC3 timestepper, which is fourth order accurate. One complication involves boundary conditions: we must ensure that the choices for the explicit and IMEX evolutions are consistent. For both we have chosen a ``frozen'' condition, in which the incoming characteristic is fixed to its initial value, i.e.~we freeze $U_{ab}^-$ in Eq.~\eqref{eq:ImexFixUminus} to its initial value. We compute the error, \begin{equation}\label{eq:psiInfErr} \|\Delta \psi\|_\infty = \max_{a,b}\|\psi^\mathrm{GLoSISDC3}_{ab} - \psi^\mathrm{DP5}_{ab}\|_\infty, \end{equation} and plot it in Figure \ref{fig:TemporalCvgTest}. For intermediate $\Delta t$, we observe the predicted fourth-order convergence rate. We remark that all timesteps shown in Fig.~\ref{fig:TemporalCvgTest}, except the largest, correspond to $\Delta t \ll |\lambda|^{-1}$ from the standpoint of the model problem analyzed in Section \ref{sec:stability}. However, we have encountered no stability issues with these short-time evolutions.
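The convergence order is extracted as the slope of a least-squares line through the error points in log-log space. A minimal sketch with synthetic data (not the actual simulation errors), assuming an exactly fourth-order error model $e(\Delta t) = C\,\Delta t^4$:

```python
import numpy as np

# Synthetic illustration: errors of a fourth-order method for a
# sequence of successively halved timesteps, e(dt) = C * dt^4.
dts = np.array([0.5, 0.25, 0.125, 0.0625, 0.03125])
errors = 2.0e-3 * dts**4            # C = 2e-3 is an arbitrary constant

# Convergence order = slope of a least-squares fit in log-log space.
slope, _ = np.polyfit(np.log(dts), np.log(errors), 1)
print(slope)   # approximately 4.0 for exactly fourth-order data
```

In practice one fits only the intermediate-$\Delta t$ points, since the largest timesteps sit outside the asymptotic regime and the smallest are contaminated by spatial error.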
\begin{figure}[t] \includegraphics[width=0.5\textwidth]{ExplicitImexComparisonEightDomain.eps} \caption{Temporal convergence test. Error points (circles) have been computed using \eqref{eq:psiInfErr} in the text. The straight line in the plot and its indicated slope have been computed by a least squares fit of the third through fifth error points. } \label{fig:TemporalCvgTest} \end{figure} \begin{figure} \includegraphics[width=0.4\textwidth]{ConstraintsSpatialCVG.eps} \caption{Spatial convergence test. This plot depicts histories for the constraint energy norm $\sqrt{\mathcal{E}_c}$ described in the text. } \label{fig:ConstraintsSpatialCvg} \end{figure} We test spatial convergence as follows. Our spatial domain, determined by $\ell_\mathrm{max} = 15$ and $1.9 \leq r \leq 41.9$, is divided into 4 equally spaced concentric shells. For a fixed $\Delta t = 0.0625$, we then evolve the perturbed-black-hole initial data for different number $N_r$ of radial collocation points in each shell. We compute the root-mean-square sum of all constraint violations $\sqrt{\mathcal{E}_c}$ (see Eq.~(53) of Ref.~\cite{Lindblom2006} for the precise definition), and plot it in Fig.~\ref{fig:ConstraintsSpatialCvg}. The figure indicates that the solution is dominated by spatial error, and exhibits convergence with increased spatial resolution. A plot of the dimensionless constraint norm $\|\mathcal{C}\|$ defined in Eq.~(71) of \cite{Lindblom2006} is qualitatively the same. \subsection{Treatment of constraint damping terms} \begin{figure} \includegraphics[scale=0.5]{ConstraintDampingTest} \caption{\label{fig:ConstraintDampingTest} Stability of various timesteppers when the constraint damping terms are treated explicitly or implicitly. Plotted are constraint violations $\sqrt{\mathcal{E}_c}$. The top two panels show explicit treatment of the constraint damping terms. 
This is stable for small timesteps $\Delta t\le 1.024$ (top panel) and unstable for large timesteps, $\Delta t\ge 2.048$ (middle panel). The lowest panel shows implicit treatment of the constraint damping term, resulting in stable evolutions for all timesteps. } \end{figure} As described in Sec.~\ref{sec:GH}, the generalized harmonic equations~(\ref{eq:GhSystem}) are modified by constraint damping terms proportional to $\gamma_0$ in Eq.~(\ref{eq:GH-Pi}). These terms cause constraint violations to decay exponentially. Because these terms are stiff, they require attention when choosing the IMEX splitting, as we now demonstrate. We perform runs similar to Fig.~\ref{fig:LongTimeErrorsGLo} but for explicit ($\gamma_0^E=1, \gamma_0^I=0$) and implicit ($\gamma_0^E=0, \gamma_0^I=1$) constraint damping. The computational domain is the same as in Fig.~\ref{fig:LongTimeErrorsGLo} but with Cartesian center $(0,0,0)$, $N_r=17$, and $L=9$. Our final evolution time for these runs is short enough that the weak instabilities (associated with small GLoSISDC3 timesteps) observed in Fig.~\ref{fig:LongTimeErrorsGLo} do not arise. Figure~\ref{fig:ConstraintDampingTest} shows the constraints for various timesteps and three different IMEX timesteppers. From the lowest panel, we see that the system is well-behaved for all considered timesteps if the constraint-damping terms are treated {\em implicitly}. The upper two panels show that for {\em explicit} handling of the constraint damping terms, the timestep matters: for small $\Delta t$ the simulations behave well; for large $\Delta t$ they blow up. This is consistent with a Courant limit for the explicit sector of the timestepper, arising from the constraint-damping term.
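A scalar caricature illustrates why the splitting of the damping term matters. Treat $dc/dt = -\gamma_0 c$ with forward Euler (explicit) versus backward Euler (implicit); the actual SDC-based timesteppers will have somewhat different thresholds, so this is only an indicative toy model:

```python
# Toy model of a stiff constraint-damping term, dc/dt = -gamma0 * c.
gamma0 = 1.0   # matches the gamma_0^{E,I} = 1 used in the runs

def amp_explicit(dt):
    # Forward Euler: c_{n+1} = (1 - gamma0*dt) c_n; stable iff dt <= 2/gamma0.
    return 1.0 - gamma0 * dt

def amp_implicit(dt):
    # Backward Euler: c_{n+1} = c_n / (1 + gamma0*dt); unconditionally stable.
    return 1.0 / (1.0 + gamma0 * dt)

for dt in (1.024, 2.048, 8.0):
    print(dt, abs(amp_explicit(dt)) <= 1.0, abs(amp_implicit(dt)) <= 1.0)
```

With $\gamma_0=1$ the forward-Euler limit $\Delta t \le 2/\gamma_0$ falls between the observed stable ($\Delta t\le 1.024$) and unstable ($\Delta t\ge 2.048$) timesteps, while the implicit treatment is stable for every timestep, qualitatively matching Fig.~\ref{fig:ConstraintDampingTest}.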
\subsection{Adaptive timestepping and comparison to explicit timestepper} \begin{figure} \includegraphics[width=0.45\textwidth]{IMEX_ShapeAndCourant_Final} \caption{Demonstration of IMEX evolution of a single perturbed black hole using GRrSISDC2 with adaptive timestepping. \emph{Top panel:} the minimum and maximum of the horizon's dimensionless intrinsic scalar curvature $M^2 R$, which characterizes the horizon shape. \emph{Bottom panel:} the Courant factor $\Delta t / \Delta x_{\rm min}$, where $\Delta t$ is the size of each timestep and $\Delta x_{\rm min}$ is the minimum spacing between grid-points, for an IMEX evolution and for an analogous explicit evolution. Both evolutions use the same spatial resolution (with approximately $43^3$ grid-points). \label{fig:Adapt_ShapeAnddt}} \end{figure} In this subsection, we demonstrate adaptive timestepping in an IMEX evolution by applying an adaptive timestepper to the perturbed-black-hole initial data from Appendix~\ref{App:ID}. We evolved this initial data on a set of 16 concentric spherical shells with Cartesian center $(0,0,0)$ and with $1.9 \leq r \leq 161.9$, $N_r=17$, and $L=11$. A gravitational-wave pulse falls into a nonspinning black hole of mass $M=1$ shortly after $t=0$, causing a time-dependent deformation of the hole's horizon. The top panel of Fig.~\ref{fig:Adapt_ShapeAnddt} shows the minimum and maximum values of the intrinsic scalar curvature $R$ of the horizon: as the wave falls into the hole, the horizon shape oscillates and then relaxes back to the Schwarzschild value $M^2 R = 1/2$, which holds for the curvature of a sphere of Schwarzschild radius $r=2M$. The bottom panel of Fig.~\ref{fig:Adapt_ShapeAnddt} plots the Courant factor $\Delta t/\Delta x_{\rm min}$ chosen by the adaptive timestepper for an IMEX evolution and an analogous explicit evolution of the same initial data. The explicit timestepper chooses an essentially constant $\Delta t$, right at its CFL stability limit.
During the initial perturbation, the IMEX step size decreases to a local minimum; as the hole relaxes to its final time-independent configuration, the step size increases, eventually reaching an artificially imposed upper limit. (This upper limit was chosen to guarantee that the elliptic solver would converge in a reasonable amount of wallclock time.) During the initial time-dependent perturbation, the IMEX evolution is usually able to take significantly larger timesteps than the analogous explicit evolution. In the explicit evolution, the Courant factor is limited to $\Delta t/\Delta x_{\rm min}\approx 3$, which is comparable to the minimum of the IMEX evolution's Courant factor. We remark that the above IMEX simulations exhibit some instability: the IMEX run shows slow constraint growth, perhaps because we did not impose a constraint-preserving boundary condition on the outer boundary. However, the analogous explicit evolution exhibits no instability, and the IMEX and explicit evolutions' constraint violations are comparable in size when we terminate the simulations (after time $t=2000 M$, which is long after the spacetime has relaxed to its final, stationary state). \section{Discussion} \label{sec:Discussion} \subsection{Results obtained in the present work} \label{sec:DiscussionA} In this article, we have further developed IMEX-techniques applied to hyperbolic systems. Specifically, we have moved beyond the model problem of a scalar wave~\cite{LauPfeiffer2008} to the study of the full non-linear Einstein's equations for single black hole spacetimes. Many results of the model problem presented in~\cite{LauPfeiffer2008} carry over to Einstein's equations in generalized harmonic form~\cite{Lindblom2006}: We continue to rewrite the implicit equation in second order form to utilize an existing elliptic solver~\cite{Pfeiffer2003}. Furthermore, as in the scalar-field case, we do not impose a boundary condition at the excision boundary inside the black hole. 
Uniqueness of the solution of the second order implicit equation is enforced, we believe, by the demand that the solution be {\em regular} across the horizon. In contrast to the model problem, the generalized harmonic evolution system contains physical constraints\footnote{These are in addition to the auxiliary constraints arising from the reduction to first order form.} which in explicit simulations are handled with constraint damping~\cite{Gundlach2005,Pretorius2005a,Lindblom2006}. We have introduced analogous constraint damping terms in the IMEX formulation, namely the terms proportional to $\gamma_0^I$ in Eqs.~(\ref{eq:ImexGhCase2}) and~(\ref{eq:QandG}). We have found that these constraint damping terms are essential for stability. Treating the constraint damping terms explicitly incurs a Courant limit due to their stiffness, and so we recommend an implicit treatment of these terms ($\gamma_0^E=0; \gamma_0^I=\gamma_0$). We have focused our investigation on spectral deferred correction schemes~\cite{Dutt2000,Minion2003,LaytonMinion,HagstromZhou2006}, utilizing 3 Gauss-Lobatto and 2 Gauss-Radau-right quadrature points: GLoSISDC3 and GRrSISDC2, respectively. These schemes generally work well; however, we find a weak instability for small timesteps which may be related to the stability region of the implicit sector of these IMEX schemes. We also have investigated ImexEuler and third order Additive Runge Kutta (ARK3). While ImexEuler proved robustly stable, our simulations with ARK3 showed a linear growing instability. The origin of this instability remains an open question. The most demanding scenario that we have considered is a perturbed single black hole that rings down to a quiescent state. We have evolved this configuration with explicit and IMEX techniques. 
The explicit evolution used a fifth order Dormand-Prince timestepper with adaptive timestepping; however, because of the necessarily small grid-spacing close to the black hole, the explicit simulation uses an essentially constant timestep at its Courant limit, cf.~Fig.~\ref{fig:Adapt_ShapeAnddt}. The IMEX method uses a small timestep for the early, dynamic part of the simulation, and then chooses increasingly larger timesteps, until it exceeds the explicit timestep by about a factor of 200. For very large timesteps, the convergence rate of our elliptic solver deteriorates, and overall efficiency drops. Therefore, so far we have limited the IMEX timestep to $\approx 200$ times the explicit timestep. For these timesteps, the computational efficiencies of the implicit and explicit codes are comparable, for the example shown in Fig.~\ref{fig:Adapt_ShapeAnddt}. We are confident that improved preconditioning will accelerate convergence of the implicit solver, allowing us to utilize yet larger timesteps in IMEX at lower computational cost. Besides improved preconditioning, several aspects of our future work will increase the efficiency of the IMEX code: we plan to implement a more accurate starting method for the prediction phase of an SISDC timestep. We further plan to perform a detailed analysis of the required tolerances in the implicit solve (in the present work we set tolerances near numerical round-off to eliminate spurious instabilities due to insufficient accuracy), and we plan to optimize the C++ code implementing Eq.~(\ref{eq:psiEquation}). We expect these steps to significantly increase the efficiency of the IMEX code; in contrast, the explicit code is already highly optimized. In the next subsection, we discuss additional code improvements relevant to IMEX evolutions of binary black holes.
\subsection{Prospects for binary black hole evolutions} \label{sec:4B} Long and accurate binary black hole simulations are needed for optimal signal-processing of current and future gravitational-wave detectors~\cite{Hannam:2010,Damour:2010,MacDonald:2011ne,Boyle:2011dy}; this provides the motivation for the present work. While the results obtained here are very encouraging, additional work will be necessary to apply IMEX to black hole binaries. First, the formalism must be adapted to the dual-frame approach~\cite{Scheel2006} used in binary black hole simulations with {\tt SpEC}. The corotating coordinates implemented via the dual-frame technique are essential for implicit time-stepping, because they localize the black holes in the computational coordinates. Without corotating coordinates, the black holes would move across the grid, resulting in rapid time-variability of the solution (on timescales $M/v$, where $v$ denotes the velocity of the black hole with mass $M$). This variability would necessitate a small timestep to achieve small time-discretization error. The dual-frame technique merely adds a new advection term to the evolution equations; therefore, we expect the extension to dual frames to be straightforward. Second, the implicit solver must remain efficient despite the more complicated computational domain. And third, good outer boundary conditions will be necessary. We expect that the second and third issues can be addressed simultaneously with the following ideas: {\tt SpEC} evolves binary black holes on a domain decomposition consisting of ``inner'' spherical shells around each of the black holes, which are surrounded by a complicated structure of ``outer'' subdomains (cylinders, distorted blocks and spherical shells, the latter of which extend to a large outer radius). The inner spherical shells require the highest resolution and therefore determine the Courant condition for fully explicit evolutions.
To simulate binary black holes with IMEX methods, we envision a split-by-region approach \cite{Kanevsky2007}, where the inner spherical shells are treated with the IMEX techniques described in this paper and the outer subdomains are handled explicitly. The split-by-region approach has two important advantages: First, implicit equations will have to be solved only on a series of concentric shells. This is the case considered here, for which {\tt SpEC}'s elliptic solver is already reasonably efficient, with further efficiency improvements possible as discussed in Sec.~\ref{sec:DiscussionA}. In contrast, solution of the implicit equations on the entire (rather complicated) domain decomposition would likely be less efficient because of difficulties in preconditioning the inter-subdomain boundary conditions. Second, for explicit evolutions non-reflecting and constraint-preserving outer boundary conditions are available~\cite{Lindblom2006,Rinne2007,Rinne2008b}. Explicit treatment of the region near the outer boundary will allow us to reuse these boundary conditions. In contrast, similarly sophisticated boundary conditions have not yet been investigated in an IMEX setting. Because the outer subdomains will be handled explicitly, the split-by-region scheme will still be subject to a Courant condition, based on the minimum grid-spacing $\Delta x_{\rm outer}$ in the explicitly evolved region. Because the minimum grid-spacing in the outer subdomains is larger than the minimum grid-spacing $\Delta x_{\rm inner}$ near the black holes, the envisioned split-by-region approach should allow for timesteps larger by a factor \begin{equation}\label{eq:RDeltat} R_{\rm \Delta t} \equiv \frac{\Delta x_{\rm outer}}{\Delta x_{\rm inner}}\gg 1.
\end{equation} We shall assume that the cost-per-timestep is proportional to the number of collocation points, with different constants for explicit and IMEX cases: \begin{align} C_{\rm explicit} &= C(N_{\rm outer}+N_{\rm inner})\\ C_{\rm IMEX} &= C N_{\rm outer} + C R_{\rm step} N_{\rm inner} \end{align} Here, $R_{\rm step}$ is the ratio of the cost of an IMEX-timestep to a fully explicit timestep. The simulations presented in Sec.~\ref{sec:NumericalExperiments} give $R_{\rm step} \approx 100$, with $R_{\rm step}$ being somewhat larger for very large $\Delta t$ and somewhat smaller for small $\Delta t$. For temporal integration to a fixed final time, the number of timesteps for a fully explicit scheme will be proportional to $1/\Delta x_{\rm inner}$, whereas for the IMEX split-by-region scheme, the number of timesteps will be proportional to $1/\Delta x_{\rm outer}$. Therefore, the IMEX split-by-region scheme should require the following fractional amount of CPU resources relative to a completely explicit evolution (a smaller number indicates advantage for IMEX): \begin{align} R_{\rm BBH} \equiv \frac{\Delta x_{\rm inner}}{\Delta x_{\rm outer}} \frac{C_{\rm IMEX} }{C_{\rm explicit}} =\frac{1}{R_{\rm \Delta t}} \frac{N_{\rm outer}+R_{\rm step}N_{\rm inner}}{N_{\rm outer}+N_{\rm inner}}. \end{align} When $R_{\rm step}N_{\rm inner} \gg N_{\rm outer}$, this simplifies to \begin{equation} R_{\rm BBH} \approx\frac{R_{\rm step}}{R_{\Delta t}}\,\frac{N_{\rm inner}}{N_{\rm inner}+N_{\rm outer}}. \label{eq:BBH-cost} \end{equation} As expected, the question is whether the larger timestep, encoded in $R_{\rm \Delta t}$, can compensate for the additional cost per timestep, encoded in $R_{\rm step}$. However, split-by-region mitigates the effect of $R_{\rm step}$ by an extra factor $N_{\rm inner}/N_{\rm total}$. 
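This cost estimate is easily evaluated numerically. The sketch below implements $R_{\rm BBH}$ and checks it against the mass-ratio $q=6$ figures quoted in the text ($N_{\rm outer}=219222$, $N_{\rm inner}=147288$, $R_{\Delta t}=34$, $R_{\rm step}\approx 100$):

```python
# Relative CPU cost of the split-by-region IMEX scheme vs. a fully
# explicit evolution, Eq. (R_BBH); values below 1 favor IMEX.
def r_bbh(n_outer, n_inner, r_step, r_dt):
    return (n_outer + r_step * n_inner) / (r_dt * (n_outer + n_inner))

cost = r_bbh(n_outer=219222, n_inner=147288, r_step=100, r_dt=34)
print(round(cost, 2))   # -> 1.2: IMEX marginally more expensive at q = 6

# With R_dt ~ 6q, the same formula predicts IMEX becoming cheaper as
# the mass ratio grows, e.g. at q = 24:
print(r_bbh(n_outer=219222, n_inner=147288, r_step=100, r_dt=6 * 24))
```

The first value reproduces the $R_{\rm BBH}=1.2$ estimate quoted for $q=6$; the second illustrates the predicted $\sim 8/q$ scaling.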
To make this discussion concrete, a recent mass-ratio $q\!=\!6$ simulation of non-spinning black holes used $N_{\rm outer}\!=\!219222$, $N_{\rm inner}\!=\!147288$, and $R_{\Delta t}\!=\!34$. With these values Eq.~(\ref{eq:BBH-cost}) gives $R_{\rm BBH} = 1.2$, i.e.~an IMEX evolution should be marginally more expensive than a fully explicit one. As the mass-ratio is further increased, the grid-spacing needed to resolve the smaller black hole decreases proportionally. Therefore, $\Delta x_{\rm inner}$ will decrease proportional to $1/q$, and $R_{\Delta t}$ will increase proportional to $q$. The constant of proportionality can be determined from $R_{\Delta t}=34$ at $q=6$, so that $R_{\Delta t} \approx 6q$. The number of grid-points will change only modestly, so we assume $N_{\rm inner} \approx N_{\rm outer}$. Then from Eq.~(\ref{eq:BBH-cost}) we estimate an efficiency increase for IMEX of \begin{equation} R_{\rm BBH} \approx \frac{100}{6q}\,\frac{1}{2}\approx \frac{8}{q}. \end{equation} Therefore, with increasing mass-ratio, IMEX will become increasingly more efficient than the explicit evolution code. The additional efficiency gains for IMEX discussed in Sec.~\ref{sec:DiscussionA} are not taken into account in this estimate. Furthermore, a more judicious choice of domain decomposition with a more carefully tuned number of collocation points in the inner spheres would reduce the ratio $N_{\rm inner}/N_{\rm total}$. Finally, we have not accounted for the fact that BBH evolutions require additional CPU resources for interpolation. Because interpolation occurs only in the outer subdomains, this will reduce $R_{\rm step}$. On the other hand, at this point we do not know how accurately the implicit equations must be solved in the binary case; if higher accuracy is required to control secularly accumulating phase-errors, then each implicit solve would become more expensive.
Furthermore, the binary simulations utilize a dual-frame method which will add some overhead to the implicit solutions. In summary, we believe that IMEX schemes offer the promise of faster binary black-hole simulations, but many interesting issues (such as those outlined in this section) deserve further investigation. \subsection{Applicability to other computational techniques} The results in this paper were obtained for the generalized harmonic formulation of Einstein's equations using pseudo-spectral methods. IMEX methods might also be implemented for other formulations of the Einstein equations, such as the Baumgarte-Shapiro-Shibata-Nakamura (BSSN) formulation \cite{shibata95,Baumgarte1998} or the recent conformal decompositions of the Z4 formulation \cite{Alic:2011gg}. Indeed, for such systems specification of the first-order implicit system [analogous to Eqs.~\eqref{eq:ImexGhCase2}] corresponding to a single time-step is straightforward. However, relative to the analogous reduction performed for the generalized harmonic formulation in this paper, the reduction of such a first-order system to a second-order system involving, presumably, some subset of the system variables would seem to be more involved. A second impediment arises from the need to use corotating coordinates. In corotating coordinates, temporal timescales are long, allowing large time-steps with sufficiently small time-discretization error (cf. Sec.~\ref{sec:4B}). To our knowledge, none of the BSSN/Z4 codes currently utilize corotating coordinates, although, in principle, the dual-frame approach~\cite{Scheel2006} could be applied in such codes. Provided the existence of efficient solvers for the resulting discretized implicit equations, the IMEX methods developed here should also be applicable to other spatial discretizations, e.g.~finite differences, finite elements, or other Galerkin spectral-element approaches. 
The presence of a horizon and the replacement of an inner boundary condition by a regularity condition (cf. Sec.~\ref{sec:BC}) are points demanding particular attention. In our approach each component of the apparent horizon is covered by a single subdomain. Therefore, in our pseudo-spectral treatment the metric in the vicinity of the horizon is expanded in terms of a single set of basis functions, with regularity of the solution an automatic consequence. Guaranteed regularity of the solution might be lost for either a finite-difference method or an unstructured-mesh method, but further studies of these possibilities are clearly warranted. \acknowledgments We are pleased to thank Saul Teukolsky, Larry Kidder, Jan Hesthaven, and Mike Minion for helpful discussions. This work was supported in part by the Sherman Fairchild foundation, NSF grants Nos.~PHY-0969111 and PHY-1005426, and NASA grant No.~NNX09AF96G at Cornell; and by NSF grant No.~PHY 0855678 to the University of New Mexico. H.P. gratefully acknowledges support from the NSERC of Canada, from the Canada Research Chairs Program, and from the Canadian Institute for Advanced Research. Some computations in this paper were performed using the GPC supercomputer at the SciNet HPC Consortium; SciNet is funded by: the Canada Foundation for Innovation under the auspices of Compute Canada; the Government of Ontario; Ontario Research Fund --- Research Excellence; and the University of Toronto. Some computations in this paper were performed using Pequena at the UNM Center for Advanced Research Computing.
\section{Introduction} In the years after the discovery of $D+1=3+1$ Supergravity by Freedman, Ferrara, and van Nieuwenhuizen in 1976 \cite{FreedmanProgressTowardsA}, there was a great deal of activity in the newly formed field of Supergravity, driven by the hope of constructing a theory of quantum gravity free of the shortcoming of perturbative non-renormalisability. In 1977, Werner Nahm classified all possible Supergravities, arriving at the result that, under certain assumptions, $d=11$ was the maximal number of Minkowski-signature spacetime dimensions in which Supergravities could exist \cite{NahmSupersymmetriesAndTheir}. In the following year, $d=11$ Supergravity was constructed by Cremmer, Julia and Scherk \cite{CremmerSupergravityTheoryIn} in order to obtain $d=4$, $N=8$ maximal Supergravity by dimensional reduction. Various forms of Supergravity were derived in dimensions $d \leq 11$ and relations among them were discovered in the subsequent years \cite{SalamSupergravitiesInDiverse}. While the initial hope placed in perturbative Supergravity faded due to results suggesting its non-renormalisability \cite{DeserNonrenormalizabilityOfLast} and the community turned to Superstring theory, a new candidate theory of quantum gravity, Loop Quantum Gravity (LQG), started to emerge after Ashtekar discovered his new variables in 1986 \cite{AshtekarNewVariablesFor}. Over the next ten years, the initially complex variables were cast into a real form by Barbero \cite{BarberoRealAshtekarVariables}, rigorous techniques for the construction of a Hilbert space were developed \cite{AshtekarRepresentationsOfThe, AshtekarRepresentationTheoryOf, AshtekarDifferentialGeometryOn, AshtekarProjectiveTechniquesAnd, MarolfOnTheSupport, AshtekarQuantizationOfDiffeomorphism} and a representation of the constraint operators on the Hilbert space was defined \cite{ThiemannQSD1}. 
The strengths of the theory are, among others, its entirely non-perturbative and background-independent formulation as well as the suggested appearance of a quantum geometry at the Planck scale. It is therefore in a sense dual to the perturbative descriptions coming from conventional quantum (Super)gravities and Superstring- / M-theory, and it would be very interesting to compare and merge the results coming from these two different approaches to quantum gravity. The main conceptual obstacle in comparing these two methods of quantisation has been the spacetime they are formulated in. While the Ashtekar-Barbero variables are only defined in $3+1$ dimensions, where also an extension to supersymmetry exists, Superstring- / M-theory favours $9+1$ / $10+1$ dimensions and is regarded as a quantisation of the respective Supergravities. It is therefore interesting to study quantum Supergravity as a means of probing the low-energy limit of Superstring- / M-theory with different quantisation techniques, both perturbative and non-perturbative. A somewhat different approach has been taken in \cite{ThiemannTheLQGString1}, where the closed bosonic string has been quantised using rigorous background-independent techniques, resulting in a new solution of the representation problem which differs from standard String theory. Also, the Hamiltonian formulation of the algebraic first order bosonic string and its relation to self-dual gravity have recently been investigated in \cite{FairbairnCanonicalAnalysisOf, FairbairnEquivalenceOfThe}. Apart from contact with Superstring- / M-theory, new results from perturbative $d=4$, $N=8$ Supergravity \cite{BernUltraviolettBehaviorOf, GreenNonrenormalizationConditionsIn, KalloshOnUVFiniteness} suggest that the theory might be renormalisable, contrary to prior beliefs. It is therefore interesting in its own right to study the loop quantised $d=4$, $N=8$ theory and compare the results with the perturbative expansion. 
The main obstacle in the connection formulation of General Relativity with or without SUSY lies in the gravitational variables. Starting from the plain Einstein-Hilbert term in a (Super)gravity action, one obtains the ADM variables $q_{ab}$ and $P^{ab}$ \cite{ArnowittTheDynamicsOf}. In order to incorporate fermions, it is mandatory to use tetrads and their higher dimensional analogues (vielbeins) to construct a representation of the curved spacetime Dirac (Clifford) algebra. At this point, it is convenient to use the time gauge \cite{SchwingerQuantizedGravitationalField} to obtain a densitised $D$-bein $E^a_i$ and its canonically conjugate momentum $K_a^i$ as canonical variables. Since the time gauge fixes the boost part of the internal gauge group SO$(1,D)$, we are left with a SO$(D)$ gauge theory. The problem, however, is that the Gau{\ss} constraint generating the internal rotations transforms $E^a_i$ and $K_a^i$ as internal vectors, and thus not as a connection and a vector. It was shown in \cite{BTTI, BTTIV} that starting from this formulation, one can construct a canonical transformation which leads to a formulation in terms of a SO$(D+1)$ connection $A_{aIJ}$ and its canonically conjugate momentum $\pi^{aIJ}$, where it was necessary to enlarge the dimension of the internal space by one space-like dimension. The purpose of this paper is to generalise this transformation to Supergravity theories. The problems arising in these generalisations are not so much linked to the appearance of additional tensor fields and spin $1/2$ fermions, but to the Rarita-Schwinger field which obeys a Majorana condition. It is well known \cite{DeserHamiltonianFormulationOf} that in order to have simple and metric-independent Poisson brackets for the Rarita-Schwinger field $\psi^\alpha_a$, one should use half-densitised internally projected fields $\phi^\alpha_i := \sqrt[4]{q} e^a_i \psi^\alpha_a$. 
This field redefinition has to be changed in order to work in the new internal space; more specifically, we have to ensure that the number of degrees of freedom still matches by imposing suitable constraints. Also, the Majorana conditions are sensitive to the dimensionality and signature of spacetime. We thus have to ensure that no inconsistencies arise when using SO$(D+1)$ instead of SO$(1,D)$ as the internal gauge group. Concretely, this will be achieved in dimensions where a Majorana representation of the $\gamma$-matrices exists, which covers many interesting Supergravity theories ($d=4, 8, 9, 10, 11$). The presence of additional tensors, vectors, scalars and spin $1/2$ fermions in various SUGRA theories does not pose any problems for this classical canonical transformation. However, we must provide background-independent representations for these fields in the quantum theory which, to the best of our knowledge, has not been done yet for all of them. As an example, in our companion paper \cite{BTTVII} we consider the quantisation of Abelian $p$-form fields such as the 3-index photon present in $11d$ SUGRA with Chern-Simons term. Scalars, fermions and connections of compact, possibly non-Abelian, gauge groups have already been treated in \cite{ThiemannKinematicalHilbertSpaces}.\\ \\ The article is organised as follows:\\ \\ Section 2 is subdivided into two parts. In the first we review prior work on canonical Supergravity theories in various dimensions and identify their common structural elements. We also mention the basic difficulties in our goal of matching these canonical formulations to the reformulations \cite{BTTI,BTTII} of the graviton sector. In the second we display canonical Supergravity explicitly in the time gauge, paying special attention to the Rarita-Schwinger sector. Section 3 is also subdivided into two parts. 
In the first we display the symplectic structure of the Rarita-Schwinger field in the time gauge in convenient variables which will be crucial for a later quantisation of the theory. In the second, following the strategy in \cite{BTTI,BTTIV}, we will perform an extension of the phase space subject to additional second class constraints ensuring that we are dealing with the same theory while the internal gauge group can be extended from SO$(D)$ to SO$(D+1)$. In section 4 we construct a representation of the Dirac anti-bracket geared to Majorana spinor fields rather than Dirac spinor fields. In section 5 we show that our formalism easily extends without additional complications to chiral Supergravities (Majorana-Weyl spinors) and to spin $1/2$ Majorana fields which are present in some Supergravity theories. In section 6 we summarise and conclude. Finally, in the appendix, we supply the details of the formulation of higher dimensional connection General Relativity with linear simplicity constraints in terms of a normal field $N^I$, which is convenient in order to resolve the aforementioned tension between SO$(1,D)$ and \mbox{SO$(D+1)$} Majorana spinors, and we provide a Hilbert space representation of the normal field sector. \section{Review of Canonical Supergravity} In the first part of this section we summarise the status of canonical Supergravity and its quantisation. In the second we display the details of the theory to the extent we need it, which will settle the notation. \subsection{Status of Canonical Supergravity} Hamiltonian formulations of Supergravity are a tedious business due to the complexity of the Lagrangians and the appearance of constraints. Nevertheless, the emerging canonical structure is very similar across the explicitly known Hamiltonian formulations. 
To the best of our knowledge, the $D+1$ split for $D \geq 3$ has been explicitly performed for $D+1=3+1$, $N=1$ \cite{DeserHamiltonianFormulationOf, FradkinHamiltonianFormalismQuantization, PilatiTheCanonicalFormulation, SenjanovicHamiltonianFormulationAnd}, $D+1=9+1$, $N=1$ \cite{HenneauxHamiltonianFormulationOf}, and $D+1=10+1$, $N=1$ \cite{DiazHamiltonianFormulationOf}. The algebra of constraints of $D+1=3+1$ Supergravity was first computed by Henneaux \cite{TeitelboimSupergravityAndSquare} up to terms quadratic in the constraints \cite{HenneauxPoissonBracketsOf}. The same method was applied by Diaz \cite{DiazConstraintAlgebraIn} to $D+1=10+1$ Supergravity, also neglecting terms quadratic in the constraints. Sawaguchi performed an explicit calculation of the constraint algebra of $D+1=3+1$ Supergravity in \cite{SawaguchiCanonicalFormalismOf} where a term quadratic in the Gau{\ss} constraint appears in the Poisson bracket of two supersymmetry constraints. The constraint algebra for $D+1=9+1$, $N=1$ Supergravity coupled to supersymmetric Yang-Mills theory was calculated by de Azeredo Campos and Fisch in \cite{CamposHamiltonianFormulationOf}. Shortly after the introduction of the complex Ashtekar variables, Jacobson generalised the construction to $d=4$, $N=1$ Supergravity \cite{JacobsonNewVariablesFor}. In the following, different authors including F\"ul\"op \cite{FulopAboutASuperAshtekar}, Gorobey and Lukyanenko \cite{GorobeyTheAshtekarComplex}, as well as Matschull \cite{MatschullAboutLoopStates}, explored the subject further. Armand-Ugon, Gambini, Obr\'egon and Pullin \cite{Armand-UgonTowardsALoop} formulated the theory in terms of a GSU$(2)$ connection and thus unified bosonic and fermionic variables in a single connection. 
Building on these works, Ling and Smolin published a series of papers on the subject \cite{LingSupersymmetricSpinNetworks, LingHolographicFormulationOf, LingElevenDimensionalSupergravity}, where, among other topics, supersymmetric spin networks coming from the GSU$(2)$ connection were studied in detail. In the above works, complex Ashtekar variables are employed for which the methods developed in \cite{AshtekarRepresentationsOfThe, AshtekarRepresentationTheoryOf, AshtekarDifferentialGeometryOn, AshtekarProjectiveTechniquesAnd, MarolfOnTheSupport, AshtekarQuantizationOfDiffeomorphism} are not available. Also, the Ashtekar variables are restricted to four spacetime dimensions and thus not applicable to higher dimensional Supergravities. Aiming at a unification of String theory and LQG, Smolin explored non-perturbative formulations of certain parts of eleven dimensional Supergravity \cite{SmolinChernSimonsTheory, SmolinAQuantizationOf}. The generalisation of the Loop Quantum Gravity methods to antisymmetric tensors was considered by Arias, di Bartolo, Fustero, Gambini, and Trias \cite{AriasSecondQuantizationOf}. The full canonical analysis of $d=4$, $N=1$ Supergravity using real Ashtekar-Barbero variables was first performed by Sawaguchi \cite{SawaguchiCanonicalFormalismOf}. Kaul and Sengupta \cite{SenguptaCanonicalSupergravityWith} considered a Lagrangian derivation of this formulation using the Nieh-Yan topological density. An attempt to construct Ashtekar-type variables for $d=11$ Supergravity has already been made by Melosch and Nicolai using an $\text{SO}(1,2) \times \text{SO}(16)$ invariant reformulation of the original CJS theory \cite{MeloschNewCanonicalVariables}. In this formulation the connection is not Poisson commuting thus forbidding LQG techniques. 
In a paper on canonical Supergravity in $2+1$ dimensions \cite{MatschullCanonicalQuantumSupergravity}, Matschull and Nicolai discovered a similar noncommutativity property, which they avoided by adding a purely imaginary fermionic bilinear to the connection, leading to a complexified gauge group. As was observed in \cite{ThiemannKinematicalHilbertSpaces}, this problem can be avoided by using half-densitised fermions as canonical variables. The general picture emerging is that the canonical decomposition $S = \int dt \, (p \dot{q} - H)$ in the time gauge leads to \begin{eqnarray} S = \int_{\sigma} d^Dx \, \int dt \, \biggl( && \dot{E}^a_i K_a^i + i \sqrt{q} \bar{\psi}_a \gamma^{a \perp b} \dot{\psi}_b + \text{tensors} +\text{vectors}+ \text{spin }1/2 +\text{scalars} \nonumber \\ && - N \mathcal{H} - N^a \mathcal{H}_a - \lambda_{ij} G^{ij} - \bar{\psi}_t \mathcal{S} -\text{tensor constraints}\biggr) \text{,} \end{eqnarray} where the Hamiltonian constraint $\mathcal{H}$, the spatial diffeomorphism constraint $\mathcal{H}_a$, the Spin$(D)$ Gau{\ss} constraint $G^{ij}$, the supersymmetry constraint $\mathcal{S}$ and the tensor constraints form a first class algebra. $E^a_i$ is the densitised vielbein and $K_a^i$ its canonical momentum. $\psi_a$ denotes the Rarita-Schwinger field with suppressed spinor indices. $N$, $N^a$, $\lambda_{ij}$ and $\bar{\psi}_t$ are Lagrange multipliers for the respective constraints. By tensor constraints we mean constraints acting only on additional tensor fields such as the three-index photon of $D+1=10+1$ Supergravity. The remaining terms in the first line are kinetic terms appearing in the decomposition of the action. Since we will not deal with them in this paper, we refer to \cite{HenneauxHamiltonianFormulationOf, DiazHamiltonianFormulationOf} for details. In order to apply the techniques developed for Loop Quantum Gravity to this system, we have to turn it into a connection formulation in the spirit of the Ashtekar variables. 
Concerning the purely gravitational part, this has been achieved in \cite{BTTI, BTTII} and extended to the case of spin $1/2$ fermions in \cite{BTTIV}. The Rarita-Schwinger field turns out to be more difficult to deal with than the spin $1/2$ fermions. On the one hand, it leads to second class constraints \cite{PilatiTheCanonicalFormulation}, which encode the reality conditions, with a structure which is different from the case of Dirac spinors\footnote{While for Dirac spinors, the second class constraints are of the form $\pi_{\bar{\psi}} \propto \psi$, $\pi_{\psi} \propto \bar{\psi}$, in the Majorana case we obtain an equation of the form $\pi_{\psi} \propto \psi$, where $\pi_{x}$ denotes the momentum conjugate to $x$.}. On the other hand, like the other fermions, it has to be treated as a half-density in order to commute with $K_a^i$ \cite{DeserHamiltonianFormulationOf}. Apart from the conventional canonical analysis, where time and space are treated differently, there exists a covariant canonical formalism treating space and time on an equal footing \cite{DaddaCovariantCanonicalFormalism}. It has been applied to vielbein gravity \cite{NelsonCovariantCanonicalFormalism}, $d=4$, $N=1$ Supergravity \cite{LerdaCovariantCanonicalFormalism}, $d=5$ Supergravity and higher dimensional pure gravity \cite{FoussatsCanonicalCovariantFormalism, FoussatsHamiltonianFormalismForHigher} and $d=10$, $N=1$ Supergravity coupled to supersymmetric Yang-Mills theory \cite{FoussatsHamiltonianFormalismFor, FoussatsSecondOrderHamiltonian}. The relation between the covariant canonical formalism and the conventional canonical analysis is discussed in \cite{FoussatsAlgebraOfConstraints} using the example of four dimensional Supergravity coupled to supersymmetric Yang-Mills theory. 
\subsection{Canonical Supergravity in the Time Gauge} We will illustrate the $3+1$ split of $N=1$ Supergravity in first order formulation as performed by Sawaguchi \cite{SawaguchiCanonicalFormalismOf} in order to give the reader a feeling for what is happening during the canonical decomposition. The resulting picture generalises to all dimensions. The symplectic potential derived in this context is exemplary for the Supergravity theories of our interest and we will continue with the general treatment in the next section. We remark that in $3+1$ spacetime dimensions, the relations $C^T = -C$ and $C\gamma^{I}C^{-1} = - (\gamma^{I})^T$ hold, where $C$ denotes the charge conjugation matrix. The action for $3+1$, $N=1$ first order Supergravity is given by \begin{equation} S = \int_{\mathcal{M}} d^{4}X \left( \frac{s}{2} e e^{\mu I} e^{\nu J} F_{\mu \nu IJ} (A)+i s ~ e \bar{\psi}_{\mu} \gamma^{\mu \rho \sigma} \nabla_{\rho}(A) \psi_{\sigma} \right) \text{.} \end{equation} Using the conventions introduced above and $\gamma^{\mu \rho \sigma} = \gamma^{IJK} e^{\mu}_I e^{\rho}_J e^{\sigma}_K$, one can explicitly check that the action is real. The $3+1$ decomposition is done like in the previous papers, and the notation used can be found there (if not above). We obtain \begin{eqnarray} S &=& \int_{\mathcal{M}} d^{4}X \left( \frac{s}{2} e e^{\mu I} e^{\nu J} F_{\mu \nu IJ} (A)+ s \,i\, e \bar{\psi}_{\mu} \gamma^{\mu \rho \sigma} \nabla_{\rho}(A) \psi_{\sigma} \right) \nonumber \\ &=& \int_{\mathbb{R}} dt \int_{\sigma} d^{3}x \left[ \frac{1}{2} \pi^{aIJ} \mathcal{L}_T A_{aIJ} - i \sqrt{q} \; \bar{\psi}_{a} \gamma^{\perp ab} \mathcal{L}_T \psi_b - \utilde{N}\left(\mathcal{H}^{\text{grav}} - i \sqrt{q} \; \bar{\psi}_a \gamma^{abc} \nabla_b(A) \psi_c \right) \right. \nonumber \\ &~& ~~~ \left. 
-N^a \left( \mathcal{H}_a^{\text{grav}} + 3 i \sqrt{q} \bar{\psi}_{[a} \gamma^{\perp bc} \nabla_{b}(A) \psi_{c]} \right) + \frac{1}{2} A_{t IJ} \left( G^{IJ}_{\text{grav}} - i \sqrt{q} \bar{\psi}_a \gamma^{\perp ab} [i\Sigma^{IJ}] \psi_b \right) \right. \nonumber \\ &~& ~~~\left. -i \bar{\psi}_t \left( \sqrt{q} \gamma^{\perp ab} \nabla_b(A) \psi_a + \sqrt{q} \nabla_b(A)(\gamma^{\perp ab}\psi_a )\right) \right] \text{.} \end{eqnarray} From there, one can read off the constraints $\mathcal{H}$, $\mathcal{H}_a$, $G^{IJ}$ and $\mathcal{S}$. We will choose time gauge $n^I = \delta^{I}_0$ at this point to simplify the further discussion. For the symplectic potential, we find \begin{eqnarray} &~& \int_{\mathbb{R}} dt \int_{\sigma} d^{3}x \left( \frac{1}{2} \pi^{aIJ} \mathcal{L}_T A_{aIJ} - i \sqrt{q} \bar{\psi}_{a} \gamma^{\perp ab} \mathcal{L}_T \psi_b \right) \nonumber \\ &\rightarrow& \int_{\mathbb{R}} dt \int_{\sigma} d^{3}x \left( \dot E^{ai} K_{ai} - i \phi^{\dagger}_{a} \gamma^{ab} \dot \phi_b \right) \nonumber \\ &=& \int_{\mathbb{R}} dt \int_{\sigma} d^{3}x \left( \dot E^{ai} K_{ai} - i \phi^{\dagger}_{i} \gamma^{ij} \left[ \dot \phi_j - \dot {(E^{b}_{j})} E_b^k \phi_k \right] \right) \nonumber \\ &=& \int_{\mathbb{R}} dt \int_{\sigma} d^{3}x \left( \dot E^{ai} K_{ai} - \pi^j \left[ \dot \phi_j - \dot {(E^{b}_{j})} E_b^k \phi_k \right] \right) \nonumber \\ &=& \int_{\mathbb{R}} dt \int_{\sigma} d^{3}x \left( \dot E^{ai} {(K_{ai} + \pi_i E_a^j \phi_j)} - \pi^j \dot \phi_j \right) \nonumber \\ &=& \int_{\mathbb{R}} dt \int_{\sigma} d^{3}x \left( \dot E^{ai} K'_{ai} - \pi^j \dot \phi_j \right) \text{,} \label{eq:SymplecticPotential} \end{eqnarray} where we successively defined \begin{equation} \label{eq:definitions1} \phi_a := \sqrt[4]{q} \psi_a ~~\text{,}~~ \phi_i := \frac{1}{\sqrt{q}} E^{a}_i \phi_a ~~\text{,}~~ \pi^i := i \phi_j^{\dagger} \gamma^{ji} ~~\text{and}~~ K'_{ai} := K_{ai}+ \pi_i E_a^j \phi_j~~\text{.} \end{equation} In the second line, we chose time 
gauge and half-densities as fermionic variables \cite{ThiemannKinematicalHilbertSpaces}. Then, we transformed the spatial index of the fermions into an internal one using the vielbein, but preserving the fermionic density weight \cite{DeserHamiltonianFormulationOf}. This second transformation also affects the extrinsic curvature and we have to define a new variable $K'_{ai}$. The Gau{\ss} constraint becomes under these changes of variables \begin{eqnarray} G^{ij} &=& 2 K_a^{[i} E^{a|j]} - \pi^k \left[ i \Sigma^{ij}\right] \phi_k \nonumber \\ &=& 2 \left(K'_a\mbox{}^{[i} - \pi^{[i} E_a^k \phi_k \right) E^{a|j]} - \pi^k \left[ i \Sigma^{ij}\right] \phi_k \nonumber \\ &=& 2 K'_a\mbox{}^{[i} E^{a|j]} - 2 \pi^{[i} \phi^{j]} - \pi^k \left[ i \Sigma^{ij}\right] \phi_k \text{.} \end{eqnarray} The generator of spatial diffeomorphisms $\tilde{\mathcal{H}}_a$ is given by the following linear combination of constraints \begin{equation} \tilde{\mathcal{H}}_a := \mathcal{H}_a + \frac{1}{2} A_{aij} G^{ij} + i\, \bar{\psi}_a \mathcal{S} \text{.} \end{equation} It becomes \begin{eqnarray} \tilde{\mathcal{H}}_a &=& E^{bj} \partial_a K_{bj} - \partial_b \left( E^{bj} K_{aj} \right) - \pi^b \partial_a \phi_b + \partial_b \left( \pi^b \phi_a \right) \nonumber \\ &=& E^{bj} \partial_a \left(K'_{bj} + \pi_j E_b^k \phi_k\right) - \partial_b \left( E^{bj} \left(K'_{aj}+ \pi_j E_a^k \phi_k \right) \right) \nonumber \\ & & - \frac{1}{\sqrt[4]{q}} \pi_i E^{bi} \partial_a \left( \sqrt[4]{q} \phi^j E_{bj} \right) + \partial_b \left( \pi_i E^{bi} \phi^j E_{aj} \right) \nonumber \\ &=& E^{bj} \partial_a K'_{bj} - \partial_b \left( E^{bj} K'_{aj} \right) + \sqrt[4]{q} \partial_a \left( \frac{1}{\sqrt[4]{q}}\pi_i \right) \phi^i \nonumber \\ &=& E^{bj} \partial_a K'_{bj} - \partial_b \left( E^{bj} K'_{aj} \right) + \frac{1}{2} \partial_a \left(\pi_i \right) \phi^i - \frac{1}{2} \pi_i \partial_a \phi^i \text{.} \end{eqnarray} For the last step, note that $\pi^i \phi_i = 0$. 
Thus, these constraints transform exactly as one would expect under the change of variables performed. The other constraints can also be rewritten in terms of the new variables, but this is less instructive and their explicit form is not important for what follows. We only want to remark that they depend on the contorsion $K_{aij}$, which is not dynamical and has to be solved for in terms of $\phi_i$. This can be done explicitly. \section{Phase Space Extension} In this section we focus on the symplectic structure of the Rarita-Schwinger sector. In the time gauge this is a SO$(D)$ theory, which is the subject of the first part. In the second part we will perform a phase space extension to a SO$(D+1)$ theory, where special attention must be paid to the reality conditions. \subsection{Symplectic Structure in the SO$(D)$ Theory} The $3+1$ split described above generalises directly to higher dimensions. We will always impose the time gauge $n^I = \delta^I_0$ prior to the $D+1$ split and restrict to dimensions where a Majorana representation of the $\gamma$-matrices exists, which we will use. This allows us to set $C = \gamma^0$, which simplifies the following analysis. The generic terms appearing in Supergravity theories that are important for this paper are \begin{equation} S_{grav. + RS} = \int_{\mathcal{M}} d^{D+1}X \left( \frac{s}{2} e e^{\mu I} e^{\nu J} F_{\mu \nu IJ} (A)+ i s ~ e \bar{\psi}_{\mu} \gamma^{\mu \rho \sigma} \nabla_{\rho}(A) \psi_{\sigma} \right) \end{equation} in case of a first order formulation and analogous terms for a second order formulation. 
This difference in defining the theory will not be important in what follows, since, as demonstrated above for the $3+1$ dimensional case, the symplectic potential of these actions in the time gauge turns out to be \begin{eqnarray} && \int_{\mathbb{R}} dt \int_{\sigma} d^{D}x \left( - E^{ai} \mathcal{L}_T K_{ai} - i \sqrt{q} \bar{\psi}_{a} \gamma^{\perp ab} \mathcal{L}_T \psi_b \right) \nonumber \\ &=& \int_{\mathbb{R}} dt \int_{\sigma} d^{D}x \left( \dot E^{ai} K'_{ai} - \pi^j \dot \phi_j \right) \text{,} \label{eq:SymplecticPotentialD} \end{eqnarray} where we used the same definitions as in (\ref{eq:definitions1}). From (\ref{eq:SymplecticPotentialD}) we can read off the non-vanishing Poisson brackets\footnote{More precisely, we should call them Poisson anti-brackets, which are symmetric under exchange of the arguments and are to be quantised by anticommutators. We will call them Poisson brackets anyway for notational simplicity in what follows, with the usual rules for the interplay between the Poisson brackets for integral and half-integral spin respectively. See e.g. \cite{HenneauxQuantizationOfGauge, GitmanQuantizationOfFields} for an account.} \begin{equation} \left\{E^{ai}, {K'}_{bj}\right\} = \delta^a_b \delta^i_j ~~\text{and}~~ \left\{\phi_{i}^{\alpha}, \pi^j_{\beta}\right\} = -\delta^{\alpha}_{\beta} \delta_i^j \text{.} \end{equation} Additionally, we have the following second class constraints and reality conditions \begin{equation} \Omega^i := \pi^i + i \phi_{j}^{T} C \gamma^0 \gamma^{ji} = 0 ~~\text{and}~~ \phi_{i}^{\dagger} = -\phi_{i}^{T}C\gamma^0 \text{.} \label{eq:RealityConditions} \end{equation} In order to be able to introduce a connection variable along the lines of \cite{BTTI}, we need to enlarge the internal space, i.e. replace the gauge group SO$(D)$ by either SO$(1,D)$ or SO$(D+1)$. In view of subsequent quantisation, SO$(D+1)$ is favoured because of its compactness and will be our choice in the following. 
This enlargement can be done consistently if additional spinorial degrees of freedom are added, together with additional constraints which remove the newly introduced fermions. Finally, the extension has to be consistent with the reality conditions. All this turns out to be rather hard to achieve, and the final version of the theory looks rather different from what a ``first guess'' might have been. To motivate it, we will review the whole process of finding the theory, showing where the straightforward ideas lead to dead ends, and how they can be modified to arrive at a consistent theory. We will only discuss the fermionic variables; the gravitational part is treated in the appendix. Before we enlarge the internal space, we will get rid of the second class constraints. To this end, we calculate the Dirac matrix \begin{equation} C^{ij} = \left\{\Omega^i, \Omega^j\right\} = - 2 i C\gamma^0\gamma^{ij} ~~\text{,}~~ (C^{-1})_{ij} = -\gamma^0 \frac{i}{2(D-1)} \left( \left(2-D\right)\eta_{ij} + \gamma_{ij} \right) C^{-1}\text{,} \end{equation} and thus find for the Dirac bracket \begin{equation} \left\{\phi_i, \phi_j \right\}_{DB} = -\left\{\phi_i, \Omega^k \right\} (C^{-1})_{kl} \left\{\Omega^l ,\phi_j\right\} = - (C^{-1})_{ij} \text{.} \label{eq:DiracBracket} \end{equation} To simplify the subsequent discussion, in the following we will consider real representations of the Dirac matrices only, which implies $C = \gamma^0$. Then the above equations read \begin{equation} C^{ij} = 2i \gamma^{ij} ~~\text{,}~~ (C^{-1})_{ij} = -\frac{i}{2(D-1)} \left( \left(2-D\right)\eta_{ij} + \gamma_{ij} \right) ~~\text{,}~~ \left\{\phi_i, \phi_j \right\}_{DB} = -(C^{-1})_{ij} \text{.} \end{equation} Now we can either (a) try to enlarge the internal space and afterwards choose new variables which have simpler brackets, or (b) simplify the Dirac bracket before enlarging the internal space. (a) immediately leads to problems. 
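As an aside, the inverse formula for the Dirac matrix in its real-representation form above uses nothing beyond the Clifford relations, so it can be sanity-checked numerically (our own check, not part of the original derivation) in $D=3$ using the Pauli matrices as a stand-in Euclidean Clifford algebra; they are not a Majorana representation, but the identity being verified does not require one.

```python
import numpy as np

D = 3
# Pauli matrices furnish a D = 3 Euclidean Clifford algebra {g_i, g_j} = 2 delta_ij.
# Not a real (Majorana) representation, but sufficient for this check.
g = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

def gamma2(i, j):
    """Antisymmetrised product gamma^{ij} = (1/2)[gamma^i, gamma^j]."""
    return 0.5 * (g[i] @ g[j] - g[j] @ g[i])

# Dirac matrix C^{ij} = 2i gamma^{ij} and the claimed inverse
C = [[2j * gamma2(i, j) for j in range(D)] for i in range(D)]
Cinv = [[-1j / (2 * (D - 1)) * ((2 - D) * (i == j) * I2 + gamma2(i, j))
         for j in range(D)] for i in range(D)]

# Verify sum_j C^{ij} (C^{-1})_{jk} = delta^i_k as a spinor-space identity
for i in range(D):
    for k in range(D):
        prod = sum(C[i][j] @ Cinv[j][k] for j in range(D))
        assert np.allclose(prod, (i == k) * I2)
print("(C^{-1}) inverts the Dirac matrix C^{ij} for D = 3")
```

The same check goes through for any $D$, since $\gamma^{ij}\gamma_{jk} = (D-2)\gamma^i{}_k + (D-1)\delta^i_k$ by the Clifford relations alone.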
The symmetry of the Poisson brackets $\left\{\phi^{\alpha}_I, \phi_J^{\beta}\right\} \propto (\tilde C^{-1})_{IJ}^{\alpha \beta}$ implies that the matrix $\tilde{C}^{-1}$ is symmetric under the exchange of $(I, \alpha) \leftrightarrow (J,\beta)$. The naive extension $(C^{-1})_{IJ} = -\frac{i}{2(D-1)} \left( \left(2-D\right)\eta_{IJ} + \gamma_{IJ} \right)$, however, does not have this symmetry. Its symmetric part $\tilde{C}^{-1} + (\tilde{C}^{-1})^T$ is not invertible. Of course, one can extend $C^{-1}$ in different, more ``unnatural'' ways, e.g. containing terms like $\gamma_J^T \gamma_I$ etc. and ``cure'' this problem for a moment, but also the Gau{\ss} constraint will be problematic. The SO$(D)$ constraint contains $C^{ij}$ (since we used $\pi^{i} = -\frac{1}{2} \phi_j^T C^{ji}$) and this matrix should also be replaced by some $\hat C^{IJ}$, such that $\phi_I$ transforms covariantly and $G^{IJ}$ reduces correctly to $G^{ij}$ if we choose time gauge and solve its boost part. This implies restrictions on $\hat{C}$ and further restrictions on $\tilde{C}^{-1}$. We did not succeed in finding matrices which fulfil all these requirements. In the following, we will therefore follow the second route (b) and simplify the Dirac brackets before doing the enlargement of the internal space.\\ \\ There are several possible ways to simplify the Dirac brackets: \begin{enumerate} \item Note that the matrix $C^{-1}$ on the right hand side of the Dirac brackets is imaginary and symmetric, hence there always exists a real, orthogonal matrix $O^i\mbox{}_j$ such that under the change of variables $\phi^i \rightarrow \phi'\mbox{}^i := O^i\mbox{}_j\phi^j$ the brackets become $i$ times a real diagonal matrix. However, now the new fundamental degrees of freedom $\phi'\mbox{}^i$ in general do not transform nicely under SO$(D)$ gauge transformations; only $(O^{-1})^i\mbox{}_j \phi'\mbox{}^j$ do. 
More severely, it is unclear how the extension $O^i\mbox{}_j \rightarrow O^I\mbox{}_J$ should be done. \item To ensure that the fundamental degrees of freedom still transform nicely under SO$(D)$ transformations, we can use the Ansatz $\phi'\mbox{}^i := M^{ij} \phi_j$ with $M^{ij} := (\alpha \delta^{ij} \mathbb{1}+ \beta \Sigma^{ij})$. Matrices of this form are in general invertible (cf. point 3 below for two exceptions) and, since they are constructed from intertwining matrices, $\phi'\mbox{}^i$ will transform nicely under gauge transformations. Moreover, now there is a chance to generalise the matrix to one dimension higher. For the Dirac brackets to become diagonal, $\alpha$ and $\beta$ have to be determined by solving $MC^{-1}M^T = i \mathbb{1}$. The problem is that there is no solution for both parameters being real; at least one is necessarily complex. More general Ans\"atze for $M^{ij}$ (e.g. involving $\gamma_{\text{five}}$ in even dimensions) share the same problem. Thus we have exchanged the problem of complicated brackets for one of complicated reality conditions, which again are hard to quantise. \item The third route, which will lead to the consistent theory, in the end implies the introduction of additional fermionic degrees of freedom already before the enlargement of the internal space. Given the difficulties just mentioned, the optimal approach in the desire to simplify the Poisson brackets is to find orthogonal projections onto subspaces of the real Gra{\ss}mann vector space which are built from $\delta_{ij}\mathbb{1}$ and $\Sigma_{ij}$ such that the symplectic structure becomes block diagonal on those subspaces. One can then define simple Poisson brackets and add the projection constraints as secondary constraints, which leads to corresponding Dirac brackets proportional to those projectors. 
As we will see, the fact that these are projectors makes it possible to find a Hilbert space representation of the corresponding Dirac bracket.\\ \\ We define in any dimension $D$ \begin{eqnarray} \mathbb{P}^{ij}_{\alpha \beta} &:=& \eta^{ij} \delta_{\alpha \beta} - \frac{1}{D} (\gamma^i \gamma^j)_{\alpha \beta} = \frac{D-1}{D} \eta^{ij} \delta_{\alpha \beta} - \frac{2i}{D} \Sigma^{ij}_{\alpha \beta} \text{,} \\ \mathbb{Q}^{ij}_{\alpha \beta} &:=& \frac{1}{D} (\gamma^i \gamma^j)_{\alpha \beta} = \frac{1}{D} \eta^{ij} \delta_{\alpha \beta} + \frac{2i}{D} \Sigma^{ij}_{\alpha \beta} \text{.} \end{eqnarray} Those matrices are both real (we are using Majorana representations) and built from intertwiners, but they are not invertible. It is easy to check that \begin{equation} \mathbb{P}^{ij}_{\alpha \beta} \mathbb{Q}^{\beta \gamma}_{j k} = 0 \text{,}~~~\mathbb{P}^{ij}_{\alpha \beta} \mathbb{P}^{\beta \gamma}_{j k} = \mathbb{P}^{i \gamma}_{\alpha k} \text{,}~~~ \mathbb{Q}^{ij}_{\alpha \beta} \mathbb{Q}^{\beta \gamma}_{j k} = \mathbb{Q}^{i \gamma}_{\alpha k}, ~~~\text{and}~~~\mathbb{P} + \mathbb{Q} = \mathbb{1} \eta \text{,} \end{equation} i.e. the above equations define projectors. By construction, $\mathbb{P}$ projects on ``trace-free" components w.r.t. $\gamma_i$, i.e. $\mathbb{P}^{ij}_{\alpha \beta} \gamma^{\beta}_j = 0 = \gamma_{i}^{\alpha} \mathbb{P}^{ij}_{\alpha \beta}$. Using these projectors, we can decompose the Rarita-Schwinger field as follows \begin{equation} \phi_i = \mathbb{P}_{ij}\phi^j + \mathbb{Q}_{ij}\phi^j =: \rho_i + \frac{1}{D} \gamma_i \sigma\text{,} \label{eq:DecompositionFermions} \end{equation} with $\rho_i := \mathbb{P}_{ij}\phi^j$ and $\sigma := \gamma^i \phi_i$\footnote{When considering the free Rarita-Schwinger action, this decomposition also appears to isolate the physical degrees of freedom, cf. e.g. \cite{DeserHamiltonianFormulationOf}. The ``trace part" $\sigma$ is unphysical for the free field.}. 
Using the reality conditions (\ref{eq:RealityConditions}) for $\phi_i$, we find \begin{equation} \bar{\rho}_i = \rho_i^T C ~~\text{and}~~\bar{\sigma} = \sigma^T C \text{.} \label{eq:RealityConditions2} \end{equation} Moreover, using \begin{equation} \gamma^{ij} = - \mathbb{P}^{ij} + (D-1) \mathbb{Q}^{ij} \text{,} \end{equation} the symplectic potential becomes \begin{eqnarray} -\pi^i \dot \phi_i &=& - i \phi^{\dagger}_j \gamma^{ji} \dot \phi_i \nonumber \\ &=& - i\phi^{\dagger}_j \left( - \mathbb{P}^{ji} +(D-1) \mathbb{Q}^{ji} \right) \dot \phi_i \nonumber \\ &=& - i\phi^{\dagger}_j \left( - \mathbb{P}^{j}\mbox{}_k \mathbb{P}^{ki} +(D-1) \mathbb{Q}^{j}\mbox{}_k \mathbb{Q}^{ki} \right) \dot \phi_i \nonumber \\ &=& i\left( \mathbb{P}_k\mbox{}^j \phi_j \right)^{\dagger} \dot{\left(\mathbb{P}^{ki} \phi_i\right)} - i(D-1)\left( \mathbb{Q}_k\mbox{}^{j}\phi_j\right)^{\dagger} \dot{\left(\mathbb{Q}^{ki} \phi_i\right)} \nonumber \\ &=& i\rho^{\dagger}_i \dot \rho^i - i\frac{D-1}{D} \sigma^{\dagger} \dot \sigma \nonumber \\ &=& - i\rho^{T}_i C \gamma^0 \dot \rho^i + i\frac{D-1}{D} \sigma^T C \gamma^0 \dot \sigma \nonumber \\ &=& i\rho^{T}_i \dot \rho^i - i\frac{D-1}{D} \sigma^T \dot \sigma \text{,} \end{eqnarray} where in the second to last line we used the reality conditions (\ref{eq:RealityConditions2}) and in the last line we restricted to a real representation, $C = \gamma^0$. \end{enumerate} This motivates the definition of the brackets \begin{equation} \left\{ \rho_j, \rho^i \right\} = -\frac{i}{2} \mathbb{1} \delta^i_j ~~\text{and}~~ \left\{\sigma, \sigma \right\} = i \frac{D}{2(D-1)} \mathbb{1} \text{,} \label{eq:PoissonBrackets2} \end{equation} together with the reality conditions $\rho^*_i = \rho_i$, $\sigma^* = \sigma$ (cf. 
(\ref{eq:RealityConditions2})) and additionally introduced constraints to account for the superfluous fermionic degrees of freedom, \begin{equation} \Lambda_{\alpha} := \gamma^i_{\alpha \beta} \rho^{\beta}_i \approx 0 \text{.} \end{equation} We need to check that the extension is valid, i.e. that the Poisson brackets of the $\phi_i$, considered as functions on the extended phase space, are equal to the Dirac brackets (\ref{eq:DiracBracket}) of the system before we did the extension. Using $\phi_i = \mathbb{P}_{ij}\rho^j + \frac{1}{D} \gamma_i \sigma$ (cf. (\ref{eq:DecompositionFermions})) and the Poisson brackets (\ref{eq:PoissonBrackets2}), this can be checked explicitly (this calculation shows why the factors of $\frac{1}{2}$ in (\ref{eq:PoissonBrackets2}) are needed). Using this, we can express the constraints $\mathcal{H}$ and $\mathcal{S}$ in terms of the new variables in the obvious way and know that their algebra is unchanged. In particular, since the projectors are built from intertwiners, we find for the fermionic part of the Gau{\ss} constraint \begin{equation} G^{ij} = ... + \left( i \rho^{kT}\right) \left[ 2\eta^{[i}_k\eta_l^{j]} + i \Sigma^{ij} \eta_{kl}\right] \rho^l + \left(- i \frac{D-1}{D}\sigma^T \right) \left[i\Sigma^{ij}\right]\sigma \text{,} \end{equation} which allows for an easy generalisation to SO$(D+1)$ or SO$(1,D)$ as a gauge group. Furthermore, since $\rho^i$ in the other constraints only appears in the combination $\mathbb{P}_{ij}\rho^j$, they automatically Poisson commute with $\Lambda_{\alpha}$. Note that if we were now to calculate the Dirac bracket, we would get $\{\rho_i,\rho_j\}_{DB} = -\frac{i}{2} \mathbb{P}_{ij}$, which again is non-trivial.
Instead, we directly enlarge the phase space from $\{\rho^i, \sigma\}$ to $\{\rho^I, \sigma\}$, with, as a first guess, the brackets $\left\{ \rho_I, \rho_J\right\} = -\frac{i}{2} \eta_{IJ} \mathbb{1}$, $\left\{\sigma,\sigma\right\} = i \frac{D}{2(D-1)} \mathbb{1}$, the reality conditions $\rho_I^* = \rho_I$, $\sigma^* = \sigma$ and the constraints \begin{equation} N^I \rho_I \approx 0 ~~\text{and}~~ \gamma^I \rho_I \approx 0 \text{.} \label{eq:NewConstraints} \end{equation} Unfortunately, this immediately leads to an inconsistency in the case of the compact gauge group SO$(D+1)$, since for our choice of Dirac matrices, $\gamma^0$ is necessarily complex in the Euclidean case. Therefore, the reality conditions are again not SO$(D+1)$ covariant and the constraints (\ref{eq:NewConstraints}) are only consistent in the time gauge $N^I = \delta^I_0$\footnote{$\gamma^I \rho_I \approx 0$ is a complex constraint and thus equal to two real constraints. Only in the time gauge is its imaginary part already solved by demanding $N^I \rho_I \approx 0 $.}. With a more elaborate choice of reality condition it is possible to define a consistent theory, which will be the subject of the next section. \subsection{SO$(D+1)$ Gauge Supergravity Theory} As we have just seen, the remaining obstacle on our road of extending the internal gauge group from SO$(D)$ to SO$(D+1)$ is that the real vector space $V$ of real SO$(1,D)$ Majorana spinors is not preserved under SO$(D+1)$, whose spinor representations are necessarily on complex vector spaces. Let $V_{\mathbb{C}}$ be the complexification of $V$. Now SO$(D+1)$ acts on $V_{\mathbb{C}}$, but the theory we started from is not $V_{\mathbb{C}}$ but rather the SO$(D+1)$ orbit of $V$. This is the real vector subspace \begin{equation} \label{1} V_{\mathbb{R}}=\{\theta\in V_{\mathbb{C}};\;\;\exists \; \rho\in V,\;g\in \text{SO}(D+1)\;\;\ni \;\;\theta=g\cdot \rho\} \text{,} \end{equation} where $g\cdot$ denotes the respective representation of SO$(D+1)$.
This defines a reality structure on $V_{\mathbb{C}}$, that is, $V_{\mathbb{C}}=V_{\mathbb{R}}\oplus i V_{\mathbb{R}}$. The mathematical problem left is therefore to add the reality condition that we are dealing with $V_{\mathbb{R}}$ rather than $V_{\mathbb{C}}$. In order to implement this, recall that any $g\in \text{SO}(D+1)$ can be written as $g=BR$ where $B$ is a ``Euclidean boost'' in the $0j$ planes and $R$ a rotation that preserves the internal vector $n^I_0:=\delta^I_0$. The spinor representation of $R$ needs only the $\gamma_j$, which are real valued. It follows that (\ref{1}) can be replaced by \begin{equation} \label{2} V_{\mathbb{R}}=\{\theta\in V_{\mathbb{C}};\;\;\exists \; \rho\in V,\;B\in \text{SO}(D+1)\;\;\ni \;\;\theta=B\cdot \rho\} \text{.} \end{equation} The problem boils down to extracting from a given $\theta\in V_{\mathbb{R}}$ the boost $B$ and the element $\rho\in V$, that is, we need a kind of polar decomposition. If $V_{\mathbb{C}}$ were just a vector subspace of some $\mathbb{C}^n$, we could do this by standard methods. But this involves taking square roots of and dividing by complex numbers, and these operations are ill-defined for our $V_{\mathbb{C}}$ since Gra{\ss}mann numbers are nilpotent. Thus, we need to achieve this by different methods. The natural solution lies in the observation that if we use the linear simplicity constraint, then the $D$ boost parameters can be extracted from the $D$ rotation angles in the normal $N^I=B_{IJ} n_0^J$, to which we have access because $N$ is part of the extended phase space. To be explicit, let $e^{(A)}$ be the standard base of $\mathbb{R}^{D+1}$, that is, $e^{(A)}_I=\delta^A_I$.
We construct another orthonormal basis $b^{(A)}$ of $\mathbb{R}^{D+1}$ as follows:\\ Let $b^{(0)}:=N$ and \begin{equation} \label{3} b^{(0)}_0=\sin(\phi_1)..\sin(\phi_D),\;\; b^{(0)}_j=\sin(\phi_1)..\sin(\phi_{D-j})\cos(\phi_{D+1-j});\;j=1..D \text{,} \end{equation} with $\phi_1,..\phi_{D-1}\in [0,\pi]$ and $\phi_{D}\in [0,2\pi]$ modulo the usual identifications and singularities of polar coordinates. Define \begin{equation} \label{4} b^{(j)}_I=\frac{\partial b^{(0)}_I/\partial \phi_j}{||\partial b^{(0)}/\partial \phi_j||} \text{,} \end{equation} where the denominator denotes the Euclidean norm of the numerator. Then it may be checked by straightforward computation that \begin{equation} \label{5} \delta^{IJ}\; b^{(A)}_I\; b^{(B)}_J=\delta^{AB} \text{.} \end{equation} We consider now the SO$(D+1)$ matrix \begin{equation} \label{6} (A(N)^{-1})_{IJ} :=\sum_{A=0}^D\; b^{(A)}_I\; e^{(A)}_J \text{,} \end{equation} which has the property that $A(N)^{-1}\cdot e^{(0)}=N$. Now starting from the time gauge, $g\in \text{SO}(D+1)$ acts on $V$ and produces $N=g\cdot e^{(0)}$ and $\theta=g\cdot \rho$. We decompose $g=A(N)^{-1} R(N)$ where $A(N)^{-1}$ is the boost defined above and $R\cdot e^{(0)}=e^{(0)}$ is a rotation preserving $e^{(0)}$. It follows that we may parametrise any pair $(N,\theta)$ with $||N||=1$ and $\theta\in V_{\mathbb{R}}$ as $A(N)^{-1}\cdot (e^{(0)}, \rho)$ where $\rho\in V$. We need to investigate how SO$(D+1)$ acts on this parametrisation. On the one hand we have \begin{equation} \label{7} [g\;A(N)^{-1}]_{IJ}=\sum_A\; (g b^{(A)})_I\; e^{(A)}_J \text{.} \end{equation} On the other hand we can construct $A(g \cdot N)^{-1}$ by following the above procedure, that is, computing the polar coordinates $\phi_{g j}$ of $g\cdot N$ and defining the $b^{(A)}(g\cdot N)$ via the derivatives with respect to the $\phi_{g j}$. The common element of both bases is $g\cdot N=g\cdot b^{(0)}$.
Therefore, there exists an element $R(g,N)\in \text{SO}(D)$ such that \begin{equation} \label{8} g\cdot b^{(j)}(N)=R_{kj}(g,N) b^{(k)}(g\cdot N) \text{,} \end{equation} or with $R_{00}=1,\; R_{0i}=R_{i0}=0$ \begin{equation} \label{9} g\cdot b^{(A)}(N)=R_{BA}(g,N) b^{(B)}(g\cdot N) \end{equation} defines a rotation in SO$(D+1)$ preserving $e^{(0)}$. Putting these findings together, we obtain \begin{eqnarray} \label{10} && [g\cdot A(N)^{-1}]_{IJ} =\sum_{A,B}\; R_{BA}(g,N) \; b^{(B)}_I(g\cdot N)\;e^{(A)}_J =\sum_{A}\; R_{AJ}(g,N) \; b^{(A)}_I(g\cdot N) \nonumber\\ & =& \sum_{A}\; R_{KJ}(g,N) \; b^{(A)}_I(g\cdot N) \; \delta^{A}_K =[A(g\cdot N)^{-1} R(g,N)]_{IJ} \text{.} \end{eqnarray} ~\\ Hence the matrix $A(N)^{-1}$ plays the role of a filter in the sense that the action of SO$(D+1)$ on $A(N)^{-1} \cdot \rho$ can be absorbed into the matrix $A^{-1}$ parametrised by $g\cdot N$ modulo a rotation that preserves $V$, and thus altogether the decomposition $V_{\mathbb{R}}=\{A(N)^{-1}\cdot V;\;||N||=1\}$ is preserved with the expected covariant action of SO$(D+1)$ on $N$. It therefore makes sense to impose the reality condition that $A(N)\, \theta$ is a real spinor. In the subsequent construction, this idea will be implemented together with an extension of the phase space $\rho_j\to \rho_I$ subject to the constraint $N^I \rho_I=0$. All these constraints and the reality conditions are second class and we will show explicitly that the symplectic structure reduces to the time gauge theory. Despite the fact that we end up with a non-trivial Dirac (anti-)bracket, it can nevertheless be quantised and non-trivial Hilbert space representations can be found, as we will demonstrate in the next section.\\ \\ We define $A(N) \in \text{SO}(D+1)$ quite generally\footnote{There exist other possible choices apart from the construction using polar coordinates which might be better suited for certain problems.
In $D=3$, we can, e.g., construct $A(N)$ as a linear function of the components of $N^I$ by using $A_{0I} = N_I$ and subsequently interchanging the components of $N^I$ with appropriate signs for the remaining columns of $A(N)$.} in the spin 1 representation by the equation \begin{equation} A^I\mbox{}_J N^J = \delta^{I}_0 \text{.} \end{equation} It is determined up to SO$(D)$ rotations. From the above equation, it follows that \begin{equation} A_{0I} = N_I~~\text{and}~~ A_{IJ} \bar{X}^J = \delta_I^i A_{iJ} \bar{X}^J \label{eq:PropertiesOfA} \end{equation} for arbitrary $X^J$. The corresponding rotation on spinors will be denoted by $A$. This matrix rotates the normal $N^I$ into its time gauge value $\delta^I_0$ without imposing the time gauge explicitly, which we will use to circumvent the reality problems of the SO$(D+1)$ theory mentioned above, which appear if we do not choose the time gauge. We introduce the set of variables $(A_{aIJ}, \pi^{bKL}, N^I, P_J, \rho_I, \rho^*_J, \sigma, \sigma^*)$ together with the following non-vanishing Poisson brackets \begin{alignat}{3} \left\{A_{aIJ}(x), \pi^{bKL}(y) \right\} &= 2 \delta_a^b \delta_I^{[K} \delta_J^{L]} \delta^D(x-y) \text{,}&~~~~~\left\{N^I(x), P_J(y) \right\} &= \delta_J^I \delta^D(x-y)\text{,}~~ \nonumber \\ \left\{\rho_I(x), \rho^*_J(y) \right\} &= -i \eta_{IJ} \mathbb{1} \delta^D(x-y)\text{,}&~~~\left\{\sigma(x), \sigma^*(y) \right\}& = i \frac{D}{D-1} \mathbb{1} \delta^D(x-y) \text{,} \end{alignat} and the reality conditions \begin{equation} \chi_I := A \rho_I - (A \rho_I)^* = 0 \text{,}~~~~ \chi := A\sigma - (A\sigma)^* = 0~~\text{,} \label{eq:NewRealityConditions} \end{equation} which just say that the fermionic variables are real as soon as the normal $N^I$ gets rotated into the time gauge. Notice that before imposing the constraints, $\rho, \sigma$ are complex Gra{\ss}mann variables and only the Poisson brackets between these and their complex conjugates are non-vanishing.
The non-vanishing brackets among the variables themselves from the previous section will be recovered when replacing the above Poisson brackets by the corresponding Dirac brackets. Additionally, we want the variables to transform nicely under spatial diffeomorphisms and gauge transformations; thus we add \begin{eqnarray} G^{IJ} &:=& D_a \pi^{aIJ} + 2P^{[I}N^{J]} + 2i \rho^{\dagger[I}\rho^{J]} + i\rho^{\dagger}_K [i\Sigma^{IJ}] \rho^K - i\left(\frac{D-1}{D} \sigma^{\dagger}\right) [i\Sigma^{IJ}] \sigma + \hdots \\ \tilde{\mathcal{H}}_a &:=& \frac{1}{2} \pi^{bIJ} \partial_a A_{bIJ} - \frac{1}{2}\partial_b\left(\pi^{bIJ}A_{aIJ}\right) + P^I \partial_a N_I \nonumber \\ &\mbox{}& - \frac{i}{2} \partial_a(\rho^{\dagger I}) \rho_I + \frac{i}{2} \rho^{\dagger I} \partial_a \rho_I +i \frac{D-1}{2D} \partial_a(\sigma^{\dagger}) \sigma - i \frac{D-1}{2D} \sigma^{\dagger} \partial_a \sigma + \hdots \text{.} \end{eqnarray} The old variables are expressed in terms of the new ones by \begin{eqnarray} E^{ai} := \zeta A^{iJ} \bar{\eta}_{JK} \pi^{aIK} N_I \text{,}&\mbox{}& K_{ai} = \zeta A_{i}\mbox{}^I \bar{\eta}_{IK} (A_{aKJ} - \Gamma_{aKJ}(\pi)) N^J \text{,} \nonumber \\ \rho_i = \frac{1}{2} A_{iJ} \bar{\eta}^{JK} \left(A\rho_K + A^* \rho^*_K\right) \text{,}&\mbox{}& \sigma = \frac{1}{2} \left(A\sigma+ A^* \sigma^*\right) \text{,} \label{eq:OldVariablesNew} \end{eqnarray} where the bar here means rotational components w.r.t.\ $N^I$, $\bar{\eta}_{IJ} := \eta_{IJ} - \zeta N_I N_J$.
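All of these replacements involve the rotation $A(N)$. As a standalone numerical illustration (not part of the derivation), the polar-coordinate construction of the basis $b^{(A)}$ above can be checked explicitly for $D=2$, i.e. $N \in S^2 \subset \mathbb{R}^3$; the angles below are an arbitrary generic choice away from the coordinate singularities.

```python
import numpy as np

def basis(phi1, phi2):
    # b^(0) = N in polar coordinates (D = 2, so N ∈ S^2 ⊂ R^3)
    b0 = np.array([np.sin(phi1) * np.sin(phi2),
                   np.sin(phi1) * np.cos(phi2),
                   np.cos(phi1)])
    # b^(j): normalised partial derivatives of b^(0) w.r.t. the angles
    d1 = np.array([np.cos(phi1) * np.sin(phi2),
                   np.cos(phi1) * np.cos(phi2),
                   -np.sin(phi1)])
    d2 = np.array([np.sin(phi1) * np.cos(phi2),
                   -np.sin(phi1) * np.sin(phi2),
                   0.0])
    return b0, d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)

phi1, phi2 = 0.7, 1.9               # generic angles, away from the singular points
b0, b1, b2 = basis(phi1, phi2)
B = np.column_stack([b0, b1, b2])   # A(N)^{-1}: its columns are the b^(A)

assert np.allclose(B.T @ B, np.eye(3))              # orthonormal basis
assert np.allclose(B @ np.array([1.0, 0, 0]), b0)   # A(N)^{-1} e^(0) = N
A = B.T                                             # A(N), by orthogonality
assert np.allclose(A @ b0, np.array([1.0, 0, 0]))   # A^I_J N^J = δ^I_0
```

The last assertion is exactly the defining property of $A(N)$ in the spin 1 representation, here realised through the explicit polar construction.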
To remove unnecessary degrees of freedom, we need the constraints \begin{eqnarray} S^a_{I\overline{M}} &:=& \epsilon_{IJKL\overline{M}} N^J \pi^{aKL} \text{,} \nonumber \\ \mathcal{N} &:=& N^IN_I - \zeta \text{,} \nonumber \\ \Lambda &:=& \gamma^{I} A_{IJ} \bar{\eta}^{JK} (A \rho_K + A^*\rho^*_K) = A \gamma_{J} \bar{\eta}^{JK} (\rho_K + A^{-1}A^*\rho^*_K) \text{,} \nonumber \\ \Theta &:=& N^I (A\rho_I + A^*\rho^*_I) \text{,} \label{eq:NewConstraintsFinal} \end{eqnarray} together with the Hamilton and supersymmetry constraints, where we replace the old by the new variables as shown above. To prove that this theory is equivalent to Supergravity and can possibly be quantised, we have to answer the following questions: \begin{itemize} \item Are the reality conditions (\ref{eq:NewRealityConditions}) consistent? That is, do they transform under gauge transformations in a sensible way and do they (weakly) Poisson commute with the other constraints? \item Are the Poisson brackets of the old variables, when expressed in terms of the new ones (\ref{eq:OldVariablesNew}), equal to those on the old phase space? Does the constraint algebra close, i.e. do the newly introduced constraints (\ref{eq:NewConstraintsFinal}) fit ``nicely'' into the set of the old constraints? If not, do at least the constraints which were first class before the enlargement of the gauge group retain this property? \item Do the constraints, especially the Gau{\ss} and spatial diffeomorphism constraint, reduce correctly? \item Which Dirac brackets arise from the reality conditions? In view of a later quantisation, can we find variables such that the Dirac brackets become simple? \end{itemize} We will answer these questions in the order they were posed above. \begin{itemize} \item The orthogonal matrix $A_{IJ}$ is a function of $N^I$ only, as we have seen above. We have $A_{0K} = N_K$, but the remaining components of the matrix are complicated functions of the components of the vector $N^I$.
Thus, the whole matrix $A_{IJ}$ will have a rather awkward transformation behaviour under the action of $G^{IJ}$. The reality conditions (\ref{eq:NewRealityConditions}) as a whole, however, transform in a ``nice'' way under SO$(D+1)$ gauge transformations (we will discuss $\rho_I$ in the following, $\sigma$ can be treated analogously). For $g \in \text{SO}(D+1)$, the reality condition transforms as follows: \begin{equation} A(N)\rho^J = A(N)^* \rho^{J*} \longrightarrow g^{JK} A(g\cdot N) g\rho_K = g^{*JK} A(g\cdot N)^* g^* \rho^*_K \text{.} \end{equation} Since $g^{IJ}$ is real, it is sufficient to consider the transformation behaviour of the spinor $A\rho^I$, so we will skip the action on internal indices in the following. Note that every rotation can be split into a part which leaves $N^I$ invariant and a ``Euclidean boost'' changing $N^I$. For the rotations, $A$ is invariant and we find using $A\bar{\gamma}^I A^{-1} = \bar{\eta}^I\mbox{}_JA^{-1}_{JK} \gamma^K$ and $\Sigma^{ij*} = -\Sigma^{ij}$ \begin{eqnarray} \delta_{\bar{\Lambda}}A\rho_I &=& i\bar{\Lambda}_{JK} A\bar\Sigma^{JK}\rho_I = i\bar{\Lambda}_{JK} A\bar\Sigma^{JK}A^{-1}A\rho_I = i\bar{\Lambda}^{JK} A^{-1}_{[J|L} A^{-1}_{K]M} \Sigma^{LM} A \rho_I = \nonumber \\ &=& iA_{L[J} A_{M|K]} \bar{\Lambda}^{JK} \Sigma^{LM} A \rho_I = iA_{l[J} A_{m|K]} \bar{\Lambda}^{JK} \Sigma^{lm} A \rho_I \text{,} \\ \delta_{\bar{\Lambda}}(A\rho_I)^* &=& (i\bar{\Lambda}_{JK} A\bar\Sigma^{JK}\rho_I)^* = (iA_{l[J} A_{m|K]} \bar{\Lambda}^{JK} \Sigma^{lm} A \rho_I)^* = \nonumber \\ &=& - iA_{l[J} A_{m|K]} \bar{\Lambda}^{JK} \Sigma^{lm*} A^* \rho^*_I = iA_{l[J} A_{m|K]} \bar{\Lambda}^{JK} \Sigma^{lm} A^* \rho^*_I \text{.} \end{eqnarray} For finite transformations $\bar{g} \in \text{SO}(D)_N$ stabilising $N^I$, we thus have $A\rho_I \rightarrow A \bar{g} \rho_I = g_0 A \rho_I$, where $g_0 \in \text{SO}(D)_0$ stabilises the zeroth component and thus is, with our choice of representation, a real matrix.
Hence, reality conditions transform again into reality conditions under rotations. For a boost $b$ the situation is a bit more complicated. Under a boost $A_{IJ}$ will transform intricately, but we know that a) the matrix remains orthogonal by construction, and b) that $A_{0K} = N_K \rightarrow \Lambda_K\mbox{}^L N_L = - A_{0L} \Lambda^L\mbox{}_K$. The most general transformation compatible with the above is $A_{IJ}\rightarrow (g_0)_{IK} A^{KL} (\bar{g}^{-1})_{LN} (b^{-1})^N\mbox{}_M (\mbox{}^b\bar g^{-1})^M\mbox{}_J$ where $g_0 \in \text{SO}(D)_0$ is some group element which does not change the zeroth component, $\bar{g} \in \text{SO}(D)_N$ is in the stabiliser of $N^I$ and $\mbox{}^b\bar{g} \in \text{SO}(D)_{b\cdot N}$. Since we have SO$(D)_N = b^{-1}\text{SO}(D)_{b \cdot N} b$, we can eliminate $\mbox{}^b\bar{g}$ by a redefinition of $\bar{g}$. By definition of a representation, we then also have $A \rightarrow g_0A\bar{g}^{-1} b^{-1}$ and thus \begin{equation} A \rho_I\rightarrow g_0A\bar{g}^{-1} b^{-1} b \rho_I = g_0 A \bar{g}^{-1} \rho_I = \tilde{g}_0 A \rho^I \text{,} \end{equation} where in the last step we used the result we obtained for rotations above. Since $\tilde{g}_0 \in \text{SO}(D)_0$ is real, we see that under a ``Euclidean boost" the reality condition can only get rotated. What remains to be checked is that the reality condition Poisson commutes with all other constraints. It transforms covariantly under spatial diffeomorphisms by inspection and, as we have just proven, it forms a closed algebra with SO$(D+1)$ gauge transformations. Concerning all other constraints, note that they, by construction, depend only on $\Re(A\rho^J)$ (cf. the replacement (\ref{eq:OldVariablesNew}) and the new constraints (\ref{eq:NewConstraintsFinal})), while the reality condition demands that $\Im(A\rho^J)$ vanishes. 
But real and imaginary parts Poisson commute, which can be checked explicitly, \begin{equation} \big\{ \left(A \rho_I- A^* \rho^*_I\right) , \left(A \rho_J + A^* \rho^*_J\right) \big\} = -i \eta_{IJ} \left[+ A A^{\dagger} - A^* A^T \right] = 0\text{.} \end{equation} \item The brackets between $E^{ai}$ and $K_{bj}$ have already been shown to yield the right results in \cite{BTTIV}. The only modifications in the case at hand are a) the replacement of $n^I(\pi)$ by $N^I$ and the corresponding replacement of the quadratic by the linear simplicity constraint, which, in fact, simplifies the calculations, and b) the matrix $A_{IJ}$, which does not lead to problems because of its orthogonality. For the fermionic variables, we find using\footnote{Because of orthogonality, we trivially have $A_{IJ} A_K\mbox{}^J = \eta_{IK}$. Additionally, $A_{iJ} \bar{\eta}^{J}_{K} = A_{iK}$, which can be seen from $A_{iK} N^{K} = 0$. Therefore, $A_{i}\mbox{}^I A_{j}\mbox{}^J \bar{\eta}_{IJ} = A_{i}\mbox{}^I A_{jI} = \eta_{ij}$.} $A_{i}\mbox{}^I A_{j}\mbox{}^J \bar{\eta}_{IJ} = \eta_{ij}$ and $A^{\dagger}A = \mathbb{1}$ \begin{eqnarray} \left\{\rho_i(x), \rho_j(y)\right\} &=& -\frac{i}{4} A_{i}\mbox{}^I A_j\mbox{}^J \bar{\eta}_I^K \bar{\eta}_J^L \left[ \left\{A\rho_K(x), A^*\rho^*_L(y) \right\}+ \left\{A^*\rho^*_K(x), A\rho_L(y) \right\}\right] \nonumber \\ &=& -\frac{i}{4} A_{i}\mbox{}^I A_{j}\mbox{}^J \bar{\eta}_{IJ} \left[ AA^{\dagger} + A^* A^T\right] \delta^D(x-y) \nonumber \\ &=& -\frac{i}{4} \delta_{ij} \left[ \mathbb{1} + \mathbb{1}^T\right] \delta^D(x-y) \nonumber \\ &=& - \frac{i}{2} \delta_{ij} \mathbb{1} \delta^D(x-y) \text{,} \\ \left\{\sigma(x), \sigma(y) \right\} &=& i \frac{D}{2(D-1)}\mathbb{1} \delta^D(x-y) \text{.} \end{eqnarray} This automatically implies that the algebra of $\mathcal{H}$ and $\mathcal{S}$ remains unchanged if we replace the old variables by (\ref{eq:OldVariablesNew}). 
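The matrix identities invoked in the footnote above ($A_{IJ}A_K{}^J = \eta_{IK}$, $A_{iK}N^K = 0$, $A_i{}^I A_j{}^J \bar{\eta}_{IJ} = \eta_{ij}$) hold for any orthogonal $A(N)$ with $A^I{}_J N^J = \delta^I_0$, and can be illustrated numerically. The sketch below builds such an $A(N)$ for a random unit normal via Gram-Schmidt rather than the polar construction (an assumption made purely for illustration), in Euclidean signature ($\zeta = +1$, $\eta = \mathbb{1}$).

```python
import numpy as np

rng = np.random.default_rng(3)
Dp1 = 4                                   # D+1, Euclidean signature (ζ = +1, η = 1)

# a random unit normal N and an A(N)^{-1} whose first column is N (Gram-Schmidt)
N = rng.normal(size=Dp1)
N /= np.linalg.norm(N)
M = np.column_stack([N, rng.normal(size=(Dp1, Dp1 - 1))])
Ainv, _ = np.linalg.qr(M)
if Ainv[:, 0] @ N < 0:                    # QR fixes columns only up to a sign
    Ainv[:, 0] = -Ainv[:, 0]
A = Ainv.T                                # so that A^I_J N^J = δ^I_0

assert np.allclose(A @ N, np.eye(Dp1)[0])
assert np.allclose(A @ A.T, np.eye(Dp1))  # orthogonality: A_{IJ} A_K^J = η_{IK}

# footnote identities: the spatial rows A_i^I annihilate N, and
# A_i^I A_j^J η̄_{IJ} = δ_ij with η̄_{IJ} = η_{IJ} - ζ N_I N_J
eta_bar = np.eye(Dp1) - np.outer(N, N)
As = A[1:, :]
assert np.allclose(As @ N, 0)
assert np.allclose(As @ eta_bar @ As.T, np.eye(Dp1 - 1))
```

These are exactly the identities that collapse the internal indices in the bracket computation above.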
From (\ref{eq:OldVariablesNew}), it is also clear that $\mathcal{H}$ and $\mathcal{S}$ Poisson commute with $S^a_{I\overline{M}}$ and $\mathcal{N}$. By inspection, all constraints transform covariantly under spatial diffeomorphisms. More surprisingly, all constraints Poisson commute with $G^{IJ}$. This can be seen quite easily for $G^{IJ}$, $\tilde{\mathcal{H}}_a$, $S^a_{I\overline{M}}$, $\mathcal{N}$ and also for $\Lambda$ and $\Theta$ (note that $A$, $A_{IJ}$ are invertible and that $\left(\rho_I + A^{-1} A^* \rho^*_I \right)$ transforms like $\rho_I$ which can be shown using the methods above). But for $\mathcal{H}$ and $\mathcal{S}$ this is, at first sight, a small miracle, since the replacement rules (\ref{eq:OldVariablesNew}) of all old variables depend on $A(N)$, which is known to transform oddly. But the matrices $A$ are placed such that they, in fact, either $a)$ appear in the combinations $(\rho_I + A^{-1}A^*\rho^*_I)$ or $(\rho^{\dagger}_I + \rho^T_I A^TA)$, which can easily be shown to transform like $\rho^I$ and $\rho^{\dagger}_I$ respectively with the methods above, or $b)$ all cancel out! The general situation is the following: $\rho_i$ is replaced by $\rho_i = A_i\mbox{}^J \bar{\eta}_J\mbox{}^I A (\rho_I + A^{-1}A^*\rho^*_I)$, $\rho_i^T$ by $\rho_i^T = A_i\mbox{}^J \bar{\eta}_J\mbox{}^I (\rho_I^{\dagger} + \rho^{T}_I A^T A)A^{-1}$, where the expressions in brackets transform sensibly (cf. above). The free internal indices of $E^{ai}$, $K_{bj}$ and $\rho^k$ are either contracted with each other, in which case the $A_{iJ}$s in the replacement will cancel because of orthogonality, or with $\gamma^i$, which will be contracted from both sides\footnote{Strictly speaking, this is true only for $\mathcal{H}$, since it has no free indices. For $S$ we may change the definition of the Lagrange multiplier $\bar{\psi}_t \rightarrow \bar{\psi}_t A$ to make it hold.} with $A(N)$ and all $A$s cancel due to $(A^{-1})_{IJ} A^{-1}\gamma^J A = \gamma_I$.
Cancelling the $A$s makes $\mathcal{H}$ gauge invariant and $\mathcal{S}$ gauge covariant by inspection, if we replace all $\gamma^0$ by $i\slashed N$. Thus we are left with $\Theta$ and $\Lambda$, which are their own second class partners but Poisson commute with everything else, which can be seen as follows. For $\Theta$, note that $\mathcal{H}$, $\mathcal{S}$ and $\Lambda$ only depend on $\bar\eta^{JK}(A\rho_K + A^* \rho^*_K)$, which Poisson commutes with $\Theta$ due to the projector $\bar{\eta}$. For $\Lambda$, the situation again is more complicated. Remember that $\mathcal{H}$ and $\mathcal{S}$ in the time gauge only depended on $X^i \mathbb{P}_{ij} \rho^j$ for some $X^i$. Whatever $X^i$ may be, under (\ref{eq:OldVariablesNew}) it will be replaced by something of the form $A^{IJ} \bar{X}_J$ and the whole expression will become $\bar{X}_I A^{JI} \mathbb{P}_{JK} A^{KL} \bar{\eta}_{L}\mbox{}^{M} \left(A \rho_M + A^*\rho^*_M \right)$ with $\mathbb{P}_{IJ} = \eta_{IJ} - \frac{1}{D} \gamma_I \gamma_J$. Crucial for the following calculation is the property (\ref{eq:PropertiesOfA}), which will be used several times. Then we find that the generic term is Poisson commuting with $\Lambda$, \begin{eqnarray} & &\left\{ \bar{X}_I A^{JI} \mathbb{P}_{JK} A^{KL} \bar{\eta}_{L}\mbox{}^M \left(A\rho_M + A^* \rho^*_M\right), \gamma^{N} A_{NO} \bar{\eta}^{OP} (A \rho_P + A^*\rho^*_P) \right\} \nonumber \\ &=& -i\bar{X}_I A^{JI} \mathbb{P}_{JK} A^{KL} \bar{\eta}_{L}\mbox{}^M \left(\gamma^{N}\right)^T A_{NM} \nonumber \\ &=& -i\bar{X}_I A^{jI} \mathbb{P}_{jk} A^{kL} \bar{\eta}_{L}\mbox{}^M \left(\gamma^{n}\right)^T A_{nM} \nonumber \\ &=& -i\bar{X}_I A^{jI} \mathbb{P}_{jk} A^{kL} \gamma^{n} A_{nL}= -i \bar{X}_I A^{jI} \mathbb{P}_{jk} \gamma^{k} = 0\text{.} \end{eqnarray} The constraint algebra is summarised in table \ref{tab:Constraints}. 
\begin{table}[h] \renewcommand{\arraystretch}{2}\addtolength{\tabcolsep}{0.1pt} \begin{center} \begin{tabular}{|c|c|} \hline First class constraints & Second class constraints \\ \hline $G^{IJ}$, $\tilde{\mathcal{H}}_a$, $\mathcal{H}$, $\mathcal{S}$, $S^a_{I\overline{M}}$ and $\mathcal{N}$ & $\Lambda$, $\Theta$, $\chi_I$ and $\chi$ \\ \hline \end{tabular} \caption{List of first and second class constraints.} \label{tab:Constraints} \end{center} \end{table} \item By construction, $\mathcal{H}$ and $\mathcal{S}$ reduce correctly if we choose the time gauge $N^I = \delta^I_0$, which automatically implies $A_{IJ} \rightarrow (g_0)_{IJ} \in \text{SO}(D)_0$. Since the theory is SO$(D)_0$ invariant, a gauge transformation $g_0 \rightarrow 1$ can be performed, which implies $\rho^I = \rho^I_r$. From this one easily deduces that $G^{IJ}$ and $\tilde{\mathcal{H}}_a$ also reduce correctly. Since the theory was SO$(D+1)$ invariant in the beginning, these results do not depend on the gauge choice. \item For the Dirac matrix, we find\footnote{Note that the Dirac matrix is block diagonal. Therefore, we do not need to consider the full Dirac matrix at once.} \begin{eqnarray} C_{IJ} &=& \left\{ A \rho_I - (A \rho_I)^*,A \rho_J- (A \rho_J)^*\right\} \nonumber \\ &=& -i \eta_{IJ} \left[- AA^{\dagger} - A^*A^T \right] = 2i \mathbb{1} \eta_{IJ} \\ (C^{-1})^{IJ} &=& \frac{1}{2i}\mathbb{1}\eta^{IJ} \\ \left\{\rho_I,\rho_J\right\}_{DB} &=& - \left\{\rho_I,A \rho_K - (A \rho_K)^*\right\} (C^{-1})^{KL} \left\{A \rho_L - (A \rho_L)^*,\rho_J\right\} = \nonumber \\ &=& -\frac{i}{2} \eta_{IJ} A^{\dagger}A^* \text{,} \end{eqnarray} and for $\sigma$ analogously. We can now choose new variables which have simpler brackets.
Motivated by the original replacement (\ref{eq:OldVariablesNew}), we define \begin{alignat}{3} \rho_r^I &:= A^{IJ} A \rho_J\text{,}&~~~~ \sigma_r &:=A \sigma \text{,}\\ \left(\rho_r^I\right)^* &= A^{IJ} A^* \rho^*_J = A^{IJ} A^* ((A^*)^{-1}A\rho_J) = \rho_r^I \text{,}&~~~~~~~~ \sigma_r^* &= \sigma_r\text{,} ~~~~~~~~~~~~~~~~ \end{alignat} with the Dirac brackets \begin{eqnarray} \left\{\rho_r^I,\rho_r^J\right\}_{DB} &=& \left\{A^{IK} A \rho_K,A^{JL} A \rho_L\right\}_{DB} = -\frac{i}{2} \eta^{IJ} AA^{\dagger}A^*A^T = -\frac{i}{2} \eta^{IJ}\mathbb{1}\text{,} \label{eq:DiracBracketRhoR} \\ \left\{\sigma_r,\sigma_r\right\}_{DB} &=& i\frac{D}{2(D-1)}\mathbb{1}\text{.} \end{eqnarray} Thus, the Dirac brackets of the $\rho_r^I$, $\sigma_r$ are simple, as are the reality conditions. Only the transformation behaviour of the new variables under SO$(D+1)$ rotations is complicated because of the appearance of the rotation $A$ in their definition. Note that $\left\{P^I, P^J\right\}_{DB}$, $\left\{P^I, \rho^J_r \right\}_{DB}$ and $\left\{P^I, \sigma_r \right\}_{DB}$ will also be non-zero. Therefore, we also choose a new variable $\tilde{P}^I$ with simple Dirac brackets, which can most easily be found by performing the symplectic reduction. After that, we can simply read it off the symplectic potential.
We find using $\rho_I = A_{JI} A^{-1} \rho^J_r$ and $\rho_I^{\dagger} = A_{JI} (\rho^{J}_r)^T A$ \begin{eqnarray} &\mbox{}& + i \rho_I^{\dagger}\dot{\rho}^I - i \frac{D-1}{D} \sigma^{\dagger} \dot{\sigma} + P^I \dot{N}_I \nonumber \\ &=& i A_{J}\mbox{}^I (\rho^{J}_r)^T A \dot{(A_{KI} A^{-1} \rho^K_r)} - i\frac{D-1}{D} \sigma^{T}_r A \dot{(A^{-1} \sigma_r)} + P^I \dot{N}_I \nonumber \\ &=& i (\rho^{J}_r)^T \dot{\rho}_{Jr} -i \frac{D-1}{D} \sigma_r^T\dot{\sigma}_r + P^I \dot{N}_I + \nonumber \\ &\mbox{}& + i \left( A_{J}\mbox{}^{L} (\rho^{J}_r)^T \frac{\partial A_{KL}}{\partial N_I} \rho^K_r + (\rho^{J}_r)^T A \frac{\partial A^{-1}}{\partial N_I} \rho_{Jr} -\frac{D-1}{D} \sigma_r^T A \frac{\partial A^{-1}}{\partial N_I} \sigma_r \right) \dot{N}_I \nonumber \\ &=& i(\rho^{J}_r)^T \dot{\rho}_{Jr} -i \frac{D-1}{D} \sigma_r^T \dot\sigma_r + \tilde{P}^I \dot{N}_I \text{,} \end{eqnarray} with $\tilde{P}^I := P^I + i A_{J}\mbox{}^L (\rho^{J}_r)^T \frac{\partial A_{KL}}{\partial N_I} \rho^K_r + i (\rho^{J}_r)^T A \frac{\partial A^{-1}}{\partial N_I} \rho_{Jr} - i \frac{D-1}{D} \sigma_r^T A \frac{\partial A^{-1}}{\partial N_I} \sigma_r $. It can be checked explicitly that $\tilde{P}^I$, expressed in the old variables $(P^I, N_J, \rho^{\dagger}_I, \rho^J, \sigma^{\dagger}, \sigma)$, Poisson commutes with the reality conditions and with itself, and therefore has nice Dirac brackets. For the spatial diffeomorphism constraint, a short calculation yields \begin{eqnarray} \tilde{\mathcal{H}}_a &=& P^I \partial_a N_I - \frac{i}{2} \partial_a(\rho^{\dagger I}) \rho_I +\frac{i}{2} \rho^{\dagger I} \partial_a \rho_I +i\frac{D-1}{2D} \partial_a(\sigma^{\dagger}) \sigma -i\frac{D-1}{2D} \sigma^{\dagger} \partial_a \sigma + \hdots = \nonumber \\ &=& \tilde{P}^I \partial_a N_I + i(\rho^{I}_r)^T \partial_a \rho_{Ir} - i\frac{D-1}{D}\sigma^{T}_r \partial_a \sigma_r + \hdots \text{,} \end{eqnarray} which by inspection generates spatial diffeomorphisms on the new variables. 
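The collapse of the Dirac bracket (\ref{eq:DiracBracketRhoR}) to a constant matrix relied only on unitarity of the spinor rotation $A$ and orthogonality of $A_{IJ}$, not on their specific form. This can be illustrated with random matrices (a toy check with placeholder dimensions, independent of the actual representation):

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 4  # placeholder spinor dimension, for illustration only

# a random unitary spinor rotation A (QR of a random complex matrix)
Araw = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
A, _ = np.linalg.qr(Araw)

# the spinor part of {ρ_I, ρ_J}_DB is -i/2 A† A*; after ρ_r^I = A^{IK} A ρ_K it
# becomes A (A† A*) A^T, which collapses to the identity:
assert np.allclose(A @ A.conj().T, np.eye(dim))   # A A† = 1
assert np.allclose(A.conj() @ A.T, np.eye(dim))   # A* A^T = (A A†)* = 1
assert np.allclose(A @ A.conj().T @ A.conj() @ A.T, np.eye(dim))

# the internal factor A^{IK} A^{JL} η_{KL} = η^{IJ} follows from orthogonality
# of A_{IJ} (Euclidean η = 1 here)
O, _ = np.linalg.qr(rng.normal(size=(5, 5)))      # a random orthogonal A_{IJ}
assert np.allclose(O @ O.T, np.eye(5))
```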
The constraints $\Lambda$ and $\Theta$ become \begin{equation} \Lambda = \gamma_i \rho_r^i \approx 0 ~~\text{and}~~ \Theta = \rho_r^0 \approx 0\text{,} \end{equation} which look utterly non-covariant, but which by construction still Poisson commute with the SO$(D+1)$ Gau{\ss} constraint. The Gau{\ss} constraint therefore has to take a complicated form. We find \begin{eqnarray} G^{IJ} &=& 2P^{[I}N^{J]} + 2i\rho^{\dagger[I}\rho^{J]} + i\rho^{\dagger}_K [i\Sigma^{IJ}] \rho^K - i\left(\frac{D-1}{D}\sigma^{\dagger}\right) [i\Sigma^{IJ}] \sigma + \hdots \nonumber \\ &=& 2\tilde{P}^{[I}N^{J]} + 2i \rho^{TK}_r A_{K}\mbox{}^{[I} A_{L}\mbox{}^{|J]} \rho^{L}_r + i \rho^{T}_{Kr} A[i\Sigma^{IJ}]A^{-1} \rho^K_r \nonumber \\ & &- i\frac{D-1}{D}\sigma^{T}_r A [i\Sigma^{IJ}]A^{-1} \sigma_r + 2i \left( A_{M}\mbox{}^{L} (\rho^{M}_r)^T \frac{\partial A_{KL}}{\partial N_{[I}} \rho^K_r + \right. \nonumber \\ & &+\left. (\rho^{N}_r)^T A \frac{\partial A^{-1}}{\partial N_{[I}} \rho_{Nr} - \frac{D-1}{D}\sigma_r^T A \frac{\partial A^{-1}}{\partial N_{[I}} \sigma_r \right) N^{J]} + \hdots \end{eqnarray} \end{itemize} Finally, we solve the remaining second class constraints $\Lambda$ and $\Theta$, which after a short calculation results in the final Dirac brackets \begin{equation} \left\{\rho_r^i, \rho_r^j\right\}_{DB} = - \frac{i}{2} \mathbb{P}^{ij} \text{,} ~~~ \left\{\rho_r^0, \rho_r^j\right\}_{DB} =0 \text{,} ~~ ~ \left\{\rho_r^0, \rho_r^0\right\}_{DB}= 0 \text{.} \end{equation} As a consistency check, we can consider \begin{equation} \left\{\phi^i, \phi^j\right\}_{DB} = \left\{\rho_r^i + \frac{1}{D} \gamma^i \sigma_r,~ \rho_r^j+ \frac{1}{D} \gamma^j \sigma_r\right\}_{DB} = -( C^{-1})^{ij} \text{,} \end{equation} which coincides with the Dirac brackets obtained in (\ref{eq:DiracBracket}). The form of the Hamiltonian and supersymmetry constraints $\mathcal{H}$, $\mathcal{S}$ strongly depends on the Supergravity theory under consideration.
As an example, we cite the supersymmetry constraint in $D=3$, $N=1$ Supergravity from \cite{SawaguchiCanonicalFormalismOf} adapted to our notation, \begin{eqnarray} \mathcal{S} &=& - \frac{i}{2} \epsilon^{abc} \gamma_{5} \left[ \gamma_k e_a^k \hat{D}_b \left( \frac{1}{\sqrt[4]{q}} e_c^l \phi_l\right) + \hat{D}_b \left( \frac{1}{\sqrt[4]{q}} \gamma_k e_a^k e_c^l \phi_l \right) \right] \nonumber \\ && + \frac{1}{2 \sqrt[4]{q}} \epsilon^{abc} \epsilon_{ijk} e_a^i \left(K'_b\ ^j + i \bar\phi_l \gamma^0 \gamma^{lj} E_{b}^m \phi_m\right) \gamma^k e_c^n \phi_n \text{,} \end{eqnarray} where $\hat{D}_a \phi_i = \partial_a \phi_i + \hat\omega_{aij} \phi^j + \frac{i}{2} \hat\omega_{akl} \Sigma^{kl} \phi_i$, $\hat\omega_{aij} = \Gamma_{aij} + \frac{i}{4 \sqrt{q}} e_a^k \left( \bar{\phi}_i \gamma_k \phi_j + 2 \bar{\phi}_{[i} \gamma_{j]} \phi_k \right)$, and $\Gamma_{aij}$ is the spin-connection annihilating the triad. An explicit expression for $\mathcal{S}$ in terms of the extended variables $(A,\pi,N,P,\rho,\rho^*, \sigma, \sigma^*)$ can be found using (\ref{eq:DecompositionFermions}), (\ref{eq:OldVariablesNew}). The corresponding constraint operator is obtained using the methods in sections \ref{sec:Quantisation} and \ref{sec:KinematicalHilbertSpace} and in \cite{ThiemannQSD5, BTTIII}. \section{Background Independent Hilbert Space Representations for Majorana Fermions} \label{sec:Quantisation} Background independent Hilbert space representations for Dirac spinor fields were constructed in \cite{ThiemannKinematicalHilbertSpaces}.
One might think that the case of the Rarita-Schwinger field, or more generally of Majorana fermion fields, can be reduced to this construction as follows: consider the variables \begin{equation} \xi^{I \alpha} = \frac{1}{\sqrt{2}} \left( \rho_r^{2\alpha+1} + i \rho_r^{2\alpha+2} \right), ~~~ \pi^{I \alpha} = \frac{-i}{\sqrt{2}} \left( \rho_r^{2\alpha+1} - i \rho_r^{2\alpha+2} \right), ~~~ \alpha = 1, \ldots, 2^{\lfloor (D+1)/2 \rfloor } \text{,} \end{equation} which have the non-vanishing Dirac brackets \begin{equation} \left\{ \xi^{I \alpha}(x), \pi^{J \beta}(y) \right\} = -i \eta^{IJ} \delta^{\alpha \beta} \delta^{(D)}(x-y) \end{equation} and the simple reality condition \begin{equation} \bar{\pi} = - i \xi \text{.} \end{equation} The elements of the Hilbert space are field theoretic extensions of holomorphic functions (i.e. functions depending only on $\theta^\alpha$) on the Gra{\ss}mann space spanned by the Gra{\ss}mann numbers $\theta^\alpha$ and their adjoints $\bar{\theta}^\alpha$; the operators corresponding to the phase space variables act as \begin{equation} \hat{\xi} f := \theta f, ~~~ \hat{\pi} f := i \frac{d}{d \theta} f, \end{equation} and the scalar product \begin{equation} <f, g> := \int e^{\bar{\theta} \theta} \bar{f} g \, d \bar{\theta} \, d \theta \end{equation} faithfully implements the reality conditions. There are, however, two drawbacks to this:\\ 1. Due to the arbitrary split of the variables into two halves, the scalar product is not SO$(D)$ invariant, which makes it difficult to solve the Gau{\ss} constraint.\\ 2. The scalar product given above fails to implement the Dirac bracket resulting from the second class constraints, that is, $\{\rho^{r\alpha}_i,\rho^{r\beta}_j\}_{DB}= -i/2\; \mathbb{P}^{\alpha\beta}_{ij}$.
Recall that one must solve the second class constraints before quantisation, hence it is not sufficient to consider the quantisation of the Poisson bracket as was done above.\\ \\ In what follows we develop a background independent Hilbert space representation that is SO$(D)$ invariant, implements the Dirac bracket and is geared to real valued (Majorana) spinor fields. We begin quite generally with $N$ real valued Gra{\ss}mann variables $\theta_A,\;A=1,..,N;\; \theta_{(A} \theta_{B)}=0;\; \theta_A^\ast=\theta_A$. We consider the finite dimensional, complex vector space $V$ of polynomials in the $\theta_A$ with complex valued coefficients. Notice that $f\in V$ depends on all real Gra{\ss}mann coordinates; it is not holomorphic as in the case of the Dirac spinor field \cite{ThiemannKinematicalHilbertSpaces}. Thus the complex dimension of $V$ is $\dim(V)=2^N$. We may write a polynomial $f\in V$ in several equivalent ways which are useful in different contexts. Let $f^{(n)}_{A_1..A_n},\;0\le n\le N$, be a completely skew complex valued tensor (an $n$-form); then $f$ can be written as \begin{equation} \label{11} f=\sum_{n=0}^N \;\frac{1}{n!}\; f^{(n)}_{A_1..A_n}\; \theta^{A_1}..\theta^{A_n} =\sum_{n=0}^N \; \sum_{1\le A_1<..<A_n\le N} \; f^{(n)}_{A_1..A_n}\; \theta^{A_1}..\theta^{A_n} \text{.} \end{equation} An equivalent way of writing $f$ is by considering for $\sigma_k\in \{0,1\}$ and $A_1<..<A_n$ the relabelled coefficients \begin{equation} \label{22} f_{\sigma_1..\sigma_N}:=f^{(n)}_{A_1..A_n},\;\;\; \sigma_k:= \left\{ \begin{array}{cc} 1 & k\in\{A_1,..,A_n\} \\ 0 & {\rm else} \text{.} \end{array} \right. \end{equation} It follows \begin{equation} \label{33} f=\sum_{\sigma_1,..,\sigma_N\in \{0,1\}}\; f_{\sigma_1..\sigma_N}\; \theta_1^{\sigma_1}.. \theta_N^{\sigma_N} \end{equation} with the convention $\theta_A^0:=1$.
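The counting $\dim(V)=2^N$ and the relabelling (\ref{22}) between the $n$-form presentation (\ref{11}) and the occupation-label presentation (\ref{33}) can be made concrete in a few lines of code (a minimal sketch; the helper name `to_sigma` is ours, not notation from the text):

```python
from itertools import combinations

N = 3
# monomials theta_{A_1} .. theta_{A_n} with A_1 < .. < A_n, labelled by
# their ascending index tuples; together they span V, so dim V = 2^N
monomials = [t for n in range(N + 1)
             for t in combinations(range(1, N + 1), n)]
assert len(monomials) == 2 ** N

def to_sigma(t, N):
    """Relabel an ascending index tuple by occupation numbers sigma_k."""
    return tuple(1 if k in t else 0 for k in range(1, N + 1))

assert to_sigma((1, 3), 3) == (1, 0, 1)   # theta_1 theta_3
# the relabelling is a bijection between the two presentations
assert len({to_sigma(t, N) for t in monomials}) == 2 ** N
```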
On $V$ we define the obvious positive definite sesqui-linear form \begin{equation} \label{44} <f,f'>:=\sum^N_{n=0}\sum_{A_1<..<A_n} \; \overline{f^{(n)}_{A_1..A_n}}\; f^{(n)\prime}_{A_1..A_n} =\sum_{\sigma_1..\sigma_N}\; \overline{f_{\sigma_1..\sigma_N}} \; f'_{\sigma_1..\sigma_N} \end{equation} as well as the operators \begin{equation} \label{55} [\theta_A \cdot f](\theta):=\theta_A\; f(\theta),\;\;\;\; [\partial_A \cdot f](\theta):=\partial^l f(\theta)/\partial\theta_A \text{,} \end{equation} where the latter denotes the left derivative on Gra{\ss}mann space (see, e.g., \cite{HenneauxQuantizationOfGauge} for precise definitions). Notice the relations $\partial_{(A} \partial_{B)}=0,\;\; 2\partial_{(A} \theta_{B)}=\delta_{AB}$ which can be verified by applying them to arbitrary polynomials $f$. We claim that the operators (\ref{55}) satisfy the adjointness relation \begin{equation} \label{5a} \theta_A^\dagger=\partial_A \text{.} \end{equation} The easiest way to verify this is to use the presentation (\ref{33}). We find explicitly \begin{eqnarray} \label{66} \theta_A\cdot f &=& \sum_{\sigma_1,..,\sigma_N}\; f_{\sigma_1..\sigma_N}\; (-1)^{\sigma_1+..+\sigma_{A-1}}\; \delta_{\sigma_A,0}\;\theta_1^{\sigma_1}..\theta_A.. \theta_N^{\sigma_N} \nonumber\\ &=& \sum_{\sigma_1,..,\sigma_N}\; [f_{\sigma_1..\sigma_A-1..\sigma_N}\; (-1)^{\sigma_1+..+\sigma_{A-1}}\; \delta_{\sigma_A,1}]\;\theta_1^{\sigma_1}.. \theta_N^{\sigma_N} \nonumber\\ &=:& \sum_{\sigma_1,..,\sigma_N}\; \tilde{f}^A_{\sigma_1..\sigma_N}\; \theta_1^{\sigma_1}..\theta_N^{\sigma_N} \text{,} \nonumber\\ \partial_A\cdot f &=& \sum_{\sigma_1,..,\sigma_N}\; f_{\sigma_1..\sigma_N}\; (-1)^{\sigma_1+..+\sigma_{A-1}}\; \delta_{\sigma_A,1}\;\theta_1^{\sigma_1}..\widehat{\theta_A}.. \theta_N^{\sigma_N} \nonumber\\ &=& \sum_{\sigma_1,..,\sigma_N}\; [f_{\sigma_1..\sigma_A+1..\sigma_N}\; (-1)^{\sigma_1+..+\sigma_{A-1}}\; \delta_{\sigma_A,0}]\;\theta_1^{\sigma_1}..
\theta_N^{\sigma_N} \nonumber\\ &=:& \sum_{\sigma_1,..,\sigma_N}\; \hat{f}^A_{\sigma_1..\sigma_N}\; \theta_1^{\sigma_1}..\theta_N^{\sigma_N} \text{,} \end{eqnarray} where the wide hat in the fourth line denotes omission of the variable. We conclude \begin{eqnarray} \label{77} <f,\theta_A f'> &=& \sum_{\sigma_1,..,\sigma_N}\; \overline{f_{\sigma_1..\sigma_N}}\; \tilde{f}^{A\prime}_{\sigma_1..\sigma_N} \nonumber\\ &=& \sum_{\sigma_1,..,\sigma_N}\; \overline{f_{\sigma_1..\sigma_N}\; (-1)^{\sigma_1+..+\sigma_{A-1}}}\; f'_{\sigma_1..\sigma_A-1..\sigma_N} \delta_{\sigma_A,1} \nonumber\\ &=& \sum_{\sigma_1,..,\sigma_N}\; \overline{f_{\sigma_1..\sigma_{A}+1..\sigma_N}\; (-1)^{\sigma_1+..+\sigma_{A-1}} \delta_{\sigma_A,0}}\; f'_{\sigma_1..\sigma_N} \nonumber\\ &=& \sum_{\sigma_1,..,\sigma_N}\; \overline{\hat{f}^A_{\sigma_1..\sigma_N}}\; f'_{\sigma_1..\sigma_N}=<\partial_A f,f'> \text{.} \end{eqnarray} Although not strictly necessary, it is interesting to see whether the scalar product (\ref{44}) can be expressed in terms of a Berezin integral, perhaps with a non trivial measure as in \cite{ThiemannKinematicalHilbertSpaces} for complex Gra{\ss}mann variables. The answer turns out to be negative: the most general Ansatz for a ``measure'', $d\mu=d\theta_1..d\theta_N\; \mu(\theta)$ with $\mu\in V$, fails to reproduce (\ref{44}) if we apply the usual rule for the Berezin integral\footnote{Rather a linear functional on $V$ which is of course also a non Abelian Gra{\ss}mann algebra.} $\int d\theta\; \theta^\sigma=\delta_{\sigma,1}$. Notice that from this one deduces $\int d\theta_A\; d\theta_B=-\int d\theta_B \; d\theta_A$, as one quickly verifies by applying it to $V$.
However, there exists a non-trivial differential kernel \begin{equation} \label{88} K:=(\theta_1+(-1)^{N-1}\partial_1)..(\theta_N+(-1)^{N-1}\partial_N) \end{equation} such that \begin{equation} \label{99} <f,f'>=\int\; d\theta_N..d\theta_1\; f^\ast\;K\; f' \text{,} \end{equation} where we emphasise that $f^\ast$ is the Gra{\ss}mann involution \begin{equation} \label{1010} f^\ast=\sum_{\sigma_1..\sigma_N} \overline{f_{\sigma_1..\sigma_N}} \theta_N^{\sigma_N}..\theta_1^{\sigma_1} =\sum_{\sigma_1..\sigma_N} \overline{f_{\sigma_1..\sigma_N}} \;(-1)^{\sum_{k=1}^{N-1} \sigma_k \sum_{l=k+1}^N \sigma_l}\; \theta_1^{\sigma_1}..\theta_N^{\sigma_N} \end{equation} and not just complex conjugation of the coefficients of $f$. Notice also that due to total antisymmetry we may rewrite (\ref{99}) in the form \begin{equation} \label{1111} <f,f'>=\frac{(-1)^{N(N-1)/2}}{N!} \int d\theta_{A_1}..d\theta_{A_N} \; f^\ast D_{A_1}..D_{A_N} \; f' \end{equation} where \begin{equation} \label{12} D_A=\theta_A+(-1)^{N-1} \partial_A \text{.} \end{equation} The presentation (\ref{1111}) establishes that the linear functional is invariant under U$(N)$ acting on $V$ by \begin{equation} \label{13} f\mapsto U\cdot f;\;\;\;\; [U\cdot f]^{(n)}_{A_1..A_N}= f^{(n)}_{B_1..B_N} U_{B_1 A_1} .. U_{B_N A_N} \text{,} \end{equation} which is of course also clear from (\ref{44}). Notice that (\ref{13}) formally corresponds to $\theta_A \mapsto U_{AB} \theta_B$, but this is not an action on real Gra{\ss}mann variables unless $U$ is real valued. If we want to have an action on the linear polynomials with real coefficients then we must restrict U$(N)$ to O$(N)$ or a subgroup thereof, which will be precisely the case in our application. In this case it is sufficient to restrict to real valued coefficients in $f$ and now the real dimension of $V$ is $2^N$.\\ \\ We sketch the proof that (\ref{88}) accomplishes (\ref{99}). We introduce the notation for $k=1,..,N$ \begin{equation} \label{14} F_{\sigma_1 ..
\sigma_k}:=\sum_{\sigma_{k+1}.. \sigma_N} \; f_{\sigma_1..\sigma_N}\; \theta_{k+1}^{\sigma_{k+1}}.. \theta_N^{\sigma_N} \text{,} \end{equation} whence $F_{\sigma_1..\sigma_N}=f_{\sigma_1..\sigma_N}$. Notice that $F_{\sigma_1..\sigma_k}$ no longer depends on $\theta_1,..,\theta_k$. Using this we compute with $d^N\theta:=d\theta_N..d\theta_1$ and using anticommutativity at various places \begin{eqnarray} \label{15} && \int \; d^N\theta\;f^\ast\; K\; f' \nonumber\\ &=& \int \; d^N\theta\; [F_0^\ast+F_1^\ast \; \theta_1]\; (\theta_1+(-1)^{N-1}\partial_1)\;D_2\;..\; D_N\; [F'_0+\theta_1 \; F'_1] \nonumber\\ &=& \int \; d^N\theta\; \Big\{F_0^\ast\; (-1)^{N-1}\; D_2..D_N (\theta_1+(-1)^{N-1}\partial_1) [F'_0+\theta_1 F'_1] \nonumber\\ && ~~~~~~~~~~~~+F_1^\ast \;(-1)^{N-1}\;D_2\;..\; D_N\;\theta_1\partial_1 [F'_0+\theta_1 \; F'_1] \Big\} \nonumber\\ &=& \int \; d^N\theta\; \Big\{F_0^\ast\; (-1)^{N-1}\; D_2..D_N \; [\theta_1 F'_0+(-1)^{N-1} F'_1] +F_1^\ast \;(-1)^{N-1}\;D_2\;..\; D_N\;\theta_1\; F'_1\Big\} \nonumber\\ &=& \int \; d^N\theta\; \Big\{F_0^\ast\; \theta_1\; D_2..D_N \;F'_0 +F_1^\ast \; \theta_1 \;D_2\;..\; D_N \; F'_1\Big\} \text{,} \end{eqnarray} where we used that the second term no longer is linear in $\theta_1$ and therefore drops out from the Berezin integral. The calculation explains why the factor $(-1)^{N-1}$ in (\ref{12}) is necessary. Next consider the first term in the last line of (\ref{15}). 
We have \begin{eqnarray} \label{16} && \int \; d^N\theta\; F_0^\ast\; \theta_1\; D_2..D_N \;F'_0 \nonumber\\ &=& \int \; d^N\theta\; [F_{00}^\ast+F_{01}^\ast \theta_2] \; \theta_1\; (\theta_2+(-1)^{N-1}\partial_2)\; D_3..D_N \; [F'_{00}+\theta_2 F'_{01}] \nonumber\\ &=& \int \; d^N\theta\; \Big\{F_{00}^\ast\;(-1)^{N-2}\;\theta_1\; D_3..D_N\;(\theta_2+(-1)^{N-1}\partial_2) [F'_{00}+\theta_2 F'_{01}] \nonumber\\ && ~~~~~~~~~~~~+F_{01}^\ast \; (-1)^{N-2}\; \theta_1\; D_3..D_N \;\theta_2\partial_2\; [F'_{00}+\theta_2 F'_{01}] \Big\} \nonumber\\ &=& \int \; d^N\theta\; \Big\{F_{00}^\ast\;(-1)^{N-2}\;\theta_1\; D_3..D_N\; [\theta_2 F'_{00}+(-1)^{N-1} F'_{01}] +F_{01}^\ast \; (-1)^{N-2}\; \theta_1\; D_3..D_N \;\theta_2\; F'_{01}] \Big\} \nonumber\\ &=& \int \; d^N\theta\; \Big\{F_{00}^\ast \; \theta_1\theta_2\; D_3..D_N\; F'_{00} +F_{01}^\ast \; \theta_1 \theta_2\; D_3..D_N \; F'_{01}\Big\} \text{.} \end{eqnarray} Similarly for the second term in (\ref{15}) \begin{equation} \label{17} \int \; d^N\theta\; F_1^\ast\; \theta_1\; D_2..D_N \;F'_1 = \int \; d^N\theta\; \Big\{F_{10}^\ast \; \theta_1\theta_2\; D_3..D_N\; F'_{10} +F_{11}^\ast \; \theta_1 \theta_2\; D_3..D_N \; F'_{11}\Big\} \text{.} \end{equation} It is transparent how the computation continues: We continue expanding $F_{\sigma_1..\sigma_k}=F_{\sigma_1..\sigma_k 0}+\theta_{k+1} F_{\sigma_1 .. \sigma_k 1}$ and see by exactly the same computation as above that\footnote{A strict proof would proceed by induction which we leave as an easy exercise for the interested reader.} the signs match up to the effect that \begin{equation} \label{18} \int \; d^N\theta\; F_{\sigma_1..\sigma_k}^\ast\; \theta_1..\theta_k\; D_{k+1}..D_N \; F'_{\sigma_1..\sigma_k} =\sum_{\sigma_{k+1}}\; \int \; d^N\theta\; F_{\sigma_1..\sigma_{k+1}}^\ast\; \theta_1..\theta_{k+1}\; D_{k+2}..D_N \; F'_{\sigma_1..\sigma_{k+1}} \text{,} \end{equation} from which the claim follows using $\int\; d^N\theta \; \theta_1..\theta_N=1$. 
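Both the adjointness relation $\theta_A^\dagger=\partial_A$ and the kernel representation of the scalar product can be verified numerically with a small symbolic implementation of the Gra{\ss}mann algebra. The following Python sketch is our own illustration (monomials are stored as ascending index tuples, an implementation choice); it checks both identities for $N=1,2,3$ with random complex coefficients:

```python
import numpy as np
from itertools import combinations

def gmul(f, g):
    """Product in the Grassmann algebra. Elements are dicts mapping
    ascending index tuples (monomials) to complex coefficients."""
    out = {}
    for ma, ca in f.items():
        for mb, cb in g.items():
            if set(ma) & set(mb):
                continue  # theta_A^2 = 0
            merged = ma + mb
            # sign = parity of the permutation sorting the indices
            inv = sum(1 for i in range(len(merged))
                      for j in range(i + 1, len(merged))
                      if merged[i] > merged[j])
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0) + (-1) ** inv * ca * cb
    return out

def partial(A, f):
    """Left Grassmann derivative with respect to theta_A."""
    out = {}
    for m, c in f.items():
        if A in m:
            k = m.index(A)  # number of thetas to the left of theta_A
            key = m[:k] + m[k + 1:]
            out[key] = out.get(key, 0) + (-1) ** k * c
    return out

def star(f):
    """Grassmann involution: conjugate coefficients, reverse monomials."""
    return {m: np.conj(c) * (-1) ** (len(m) * (len(m) - 1) // 2)
            for m, c in f.items()}

def berezin(f, N):
    """Berezin integral d theta_N .. d theta_1: picks the coefficient
    of the top monomial theta_1 .. theta_N."""
    return f.get(tuple(range(1, N + 1)), 0)

def inner(f, g):
    """The sesquilinear form: sum of conj(f_m) g_m over monomials."""
    return sum(np.conj(f.get(m, 0)) * g.get(m, 0) for m in set(f) | set(g))

def K_apply(f, N):
    """Apply K = prod_A (theta_A + (-1)^{N-1} partial_A) to f,
    the rightmost factor acting first."""
    s, g = (-1) ** (N - 1), f
    for A in range(N, 0, -1):
        ga, gb = gmul({(A,): 1}, g), partial(A, g)
        g = {m: ga.get(m, 0) + s * gb.get(m, 0) for m in set(ga) | set(gb)}
    return g

rng = np.random.default_rng(0)
for N in (1, 2, 3):
    monos = [t for n in range(N + 1) for t in combinations(range(1, N + 1), n)]
    f = {m: complex(rng.normal(), rng.normal()) for m in monos}
    g = {m: complex(rng.normal(), rng.normal()) for m in monos}
    # adjointness: <f, theta_A g> = <partial_A f, g>
    for A in range(1, N + 1):
        assert np.isclose(inner(f, gmul({(A,): 1}, g)), inner(partial(A, f), g))
    # scalar product as a Berezin integral with kernel K
    assert np.isclose(inner(f, g), berezin(gmul(star(f), K_apply(g, N)), N))
```

Note that the check covers odd $N$ as well, consistent with the inductive argument sketched above.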
In our applications $N$ will be even so that $D_A=\theta_A-\partial_A$. \\ \\ For our application to the Rarita-Schwinger field we consider the compound index $A=(j,\alpha),\;j=1,..,D;\;\alpha=1,..,M$ with $M=2^{\lfloor (D+1)/2 \rfloor}$, or just $A=\alpha$, whence $N=DM$ or $N=M$ is even. We consider the auxiliary operator \begin{equation} \label{19} \hat{\rho}_j^\alpha:=\frac{\sqrt{\hbar}}{2}[\theta_j^\alpha+\partial_j^\alpha] \text{,} \end{equation} which by virtue of (\ref{5a}) is self adjoint and satisfies the anticommutator relation \begin{equation} \label{20} [\hat{\rho}_j^\alpha,\hat{\rho}_k^\beta]_+=\frac{\hbar}{2} \delta_{jk}\;\delta^{\alpha\beta} \text{.} \end{equation} However, $\hat{\rho}_j^\alpha$ is not yet a representation of $\rho^{r\alpha}_j$ which satisfies the Dirac antibracket $\{\rho_j^{r\alpha},\rho_k^{r\beta}\}_{{\rm DB}}= -\frac{i}{2}\mathbb{P}^{\alpha\beta}_{jk}$ and the reality condition $(\rho_j^{r\alpha})^\ast=\rho_j^{r\alpha}$. Similarly, $\{\sigma_\alpha,\sigma_\beta\}_{{\rm DB}}=i\frac{D}{2(D-1)}\delta_{\alpha\beta},\;\;\sigma_\alpha^\ast= \sigma_\alpha$. Correspondingly, what we need is a representation $\pi(\rho_j^\alpha),\;\pi(\sigma_\alpha)$ of the abstract CAR $^\ast$-algebra defined by canonical quantisation, that is, \begin{equation} \label{2222} [\rho_j^{r\alpha},\rho_k^{r\beta}]_+=\frac{\hbar}{2} \mathbb{P}^{\alpha\beta}_{jk},\;\;\;\; (\rho_j^{r\alpha})^\ast= \rho_j^{r\alpha},\;\;\;\; [\sigma^r_\alpha,\sigma^r_\beta]_+=\frac{D \hbar}{2(D-1)} \delta_{\alpha\beta},\;\;\;\; (\sigma^r_\alpha)^\ast =\sigma^r_\alpha \end{equation} all other anticommutators vanishing\footnote{This corresponds to the quantisation rule that the anticommutator is $+i\hbar$ times the Dirac bracket in the $\rho$ sector and $-i\hbar$ times the Dirac bracket in the $\sigma$ sector. This is the only possible choice of signs because the anticommutator of the same operator which in our case is self adjoint is a positive operator.
The other choice of signs would yield a mathematical contradiction.}. Fortunately, using that $\mathbb{P}_{jk}^{\alpha\beta}$ is a real valued projector (in particular symmetric and positive semidefinite) we can now write the following faithful representation of our abstract $^\ast$-algebra (\ref{2222}) on the Hilbert space ${\cal H}=V_{DM}\otimes V_M$ defined above: \begin{eqnarray} \label{23} \pi(\rho_j^{r\alpha}):=\mathbb{P}_{jk}^{\alpha\beta} \hat{\rho}_k^\beta,\;\;\;\; \pi(\sigma^r_\alpha):=\frac{1}{2} \sqrt{\frac{D \hbar}{D-1}}[\theta_\alpha+\partial_\alpha] \text{.} \end{eqnarray} ~\\ So far we have considered just one point on the spatial slice corresponding to a quantum mechanical system. The field theoretical generalisation now proceeds exactly as in \cite{ThiemannKinematicalHilbertSpaces} and consists in considering copies ${\cal H}_x$ of the Hilbert space just constructed, one for every spatial point $x$, and taking as representation space either the inductive limit of the finite tensor products ${\cal H}_{x_1,..,x_n}= \otimes_{k=1}^n\; {\cal H}_{x_k}$ \cite{ThiemannModernCanonicalQuantum} or the infinite tensor product \cite{ThiemannGCS4} ${\cal H}=\otimes_x {\cal H}_x$, of which the former is just a tiny subspace. The $^\ast$-algebra (\ref{2222}) is then simply extended by adding labels $x$ to the operators and by requiring that anticommutators between operators at points $x,y$ be proportional to $\delta_{x,y}$ in agreement with the classical bracket. It is easy to see that adding the label $x$ to (\ref{23}) correctly reproduces this Kronecker symbol and that they satisfy all relations on the Hilbert space\footnote{In the case of the inductive limit, a vector $v\in {\cal H}_{x_1,..,x_n}$ is embedded in any larger ${\cal H}_{x_1,.., x_n,y_1,..,y_m}$ by $v\mapsto v\otimes \;\otimes^m 1$ where $1$ is the constant polynomial equal to one. This way any operator at $x$ acts in a well defined way on any vector in the Hilbert space.}.
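As a sanity check of the auxiliary operator and of the projected representation, the Gra{\ss}mann operators can be realised as $2^N\times 2^N$ matrices (a Jordan-Wigner-type construction). The code below is our own illustration; in particular, the random rank-2 projector is a stand-in for the actual projector $\mathbb{P}^{\alpha\beta}_{jk}$, whose only relevant properties here are $\mathbb{P}^2=\mathbb{P}=\mathbb{P}^T$:

```python
import numpy as np
from functools import reduce

hbar = 1.0  # work in units where hbar = 1

def grassmann_ops(N):
    """theta_A (multiplication) and partial_A (left derivative) as
    2^N x 2^N matrices; the Z-strings implement the signs
    (-1)^{sigma_1 + .. + sigma_{A-1}} of the Grassmann calculus."""
    Z = np.diag([1.0, -1.0])
    cdag = np.array([[0.0, 0.0], [1.0, 0.0]])  # maps |0> to |1>
    I = np.eye(2)
    thetas = [reduce(np.kron, [Z] * A + [cdag] + [I] * (N - A - 1))
              for A in range(N)]
    partials = [th.T for th in thetas]  # partial_A = theta_A^dagger
    return thetas, partials

def anti(a, b):
    return a @ b + b @ a

N = 4
th, pa = grassmann_ops(N)
rho = [0.5 * np.sqrt(hbar) * (th[A] + pa[A]) for A in range(N)]

for A in range(N):
    assert np.allclose(rho[A], rho[A].T)  # self adjoint (real symmetric)
    for B in range(N):
        # [rho_A, rho_B]_+ = (hbar/2) delta_AB
        assert np.allclose(anti(rho[A], rho[B]),
                           0.5 * hbar * (A == B) * np.eye(2 ** N))

# a stand-in real symmetric rank-2 projector playing the role of P
v = np.linalg.qr(np.random.default_rng(1).normal(size=(N, 2)))[0]
P = v @ v.T
assert np.allclose(P @ P, P) and np.allclose(P, P.T)

pi_rho = [sum(P[A, B] * rho[B] for B in range(N)) for A in range(N)]
for A in range(N):
    for B in range(N):
        # [pi(rho_A), pi(rho_B)]_+ = (hbar/2) P_AB, using P^2 = P = P^T
        assert np.allclose(anti(pi_rho[A], pi_rho[B]),
                           0.5 * hbar * P[A, B] * np.eye(2 ** N))
```

The projected anticommutator follows from the unprojected one precisely because the projector is symmetric and idempotent, which is the property emphasised in the text.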
Finally notice that the corresponding scalar product is locally SO$(D)$ invariant. \section{Generalisations to Different Multiplets} \subsection{Majorana Spin $1/2$ Fermions} The above construction generalises immediately to Majorana spin $1/2$ fermions, which are also present in Supergravity theories, e.g. $D+1=9+1$, $N=2a$ non-chiral Supergravity \cite{GianiN=2SupergravityIn}. They are described by actions of the type \begin{equation} S_{\text{Majorana, } 1/2} = \int d^{D+1}X \, i \bar{\lambda} \gamma^\mu D_\mu \lambda \end{equation} which, using time gauge and a real representation for the $\gamma$-matrices, lead to the canonical brackets $\{ \lambda_\alpha, \lambda_\beta \} \sim i \delta_{\alpha \beta}$ with the reality conditions $\lambda^* = \lambda$. They can thus also be treated with the above techniques by substituting $\rho_i$ with $\lambda$ and removing the $A_{IJ}$ matrices as well as the $\bar{\eta}^{IJ}$ projectors. \subsection{Mostly Plus / Mostly Minus Conventions} The convention used for the internal signature, i.e. mostly plus or mostly minus and the associated purely real or purely imaginary representations of the $\gamma$-matrices, does not interfere with the above construction. The important property we are using is the reality of $i \Sigma_{IJ}$ for SO$(1,D)$, i.e. that the Gau{\ss} constraint is consistent with real spinors. The substitution $\gamma_I \rightarrow i \gamma_I$ necessary when changing the signature convention does not influence these considerations. \subsection{Weyl Fermions} In even dimensions $D+1$, we also need to consider the case of Weyl fermions.
To this end, we define \begin{equation} \gamma_{\text{five}} := i^{\frac{D(D+1)}{2} + 1} \gamma^L_0 \gamma_1 \hdots \gamma_{D} \end{equation} with the properties $\gamma_{\text{five}}^2 = 1$, $\gamma_{\text{five}}^{\dagger} = \gamma_{\text{five}}$ and $\left[\gamma_I, \gamma_{\text{five}}\right]_+ = 0$ (which follows from our conventions for the gamma matrices $(\gamma_0^L)^2 = -1$, $\gamma_i^2 = 1$, $\gamma_I^{\dagger} = \eta_{II} \gamma_I$). We introduce the chiral projectors \begin{equation} \mathcal{P}^{\pm} = \frac{1}{2} \left( 1 \pm \gamma_{\text{five}} \right) \text{,} \end{equation} which fulfil the relations $\mathcal{P}^{\pm}\mathcal{P}^{\pm} = \mathcal{P}^{\pm}$, $\mathcal{P}^{\pm} \mathcal{P}^{\mp} = 0$, $\mathcal{P}^{+}+ \mathcal{P}^{-} = 1$ and $(\mathcal{P}^{\pm})^{\dagger} = \mathcal{P}^{\pm}$. These follow directly from the properties of $\gamma_{\text{five}}$. \subsubsection{Spin $1/2$ Dirac-Weyl Fermions} The kinetic term of the action for a chiral Dirac spinor is given by \begin{equation} S_{\text{F}} = -\int_\mathcal{M} d^{D+1} X \left(\frac{i}{2} \overline{\Psi} e^\mu_I \gamma^I D_\mu \mathcal{P}^+ \Psi - \frac{i}{2} \overline{ D_\mu \Psi} e^\mu_I \gamma^I \mathcal{P}^+ \Psi \right) \text{.} \label{eq:action} \end{equation} The $D+1$ split is performed analogously to \cite{BTTIII}. Choosing time gauge, we obtain the non-vanishing Poisson brackets \begin{equation} \left\{ \Psi^{\pm}_{\alpha}, \Pi^{\pm}_{\beta} \right\} = -\mathcal{P}^{\pm}_{\alpha \beta}\text{,} \end{equation} where $\Pi^{\pm}_{\beta} = -i (\Psi^{\pm})^{\dagger}_{\beta}$, and the first class constraint \begin{equation} \chi_{\alpha} := \Pi^{-}_{\alpha} \text{,} \end{equation} where we used the notation $\Psi^{\pm} := \mathcal{P}^{\pm} \Psi$. The first class property of this constraint follows from the fact that the action (\ref{eq:action}) and therefore all resulting constraints do not depend on $\Psi^{-}$ at all.
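The stated properties of $\gamma_{\text{five}}$ and of the chiral projectors can be checked explicitly in $D+1=4$. The following sketch uses one particular real (Majorana) representation of the $\gamma$-matrices consistent with the conventions above; the specific representation is our choice for illustration, not the one used in the body of the paper:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
isy = np.array([[0.0, 1.0], [-1.0, 0.0]])  # i * sigma_y, a real matrix
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

# a real representation in D+1 = 4 with (gamma_0)^2 = -1, (gamma_i)^2 = +1,
# gamma_I^dagger = eta_II gamma_I, eta = diag(-1, +1, +1, +1)
g0 = np.kron(isy, I2)
g1 = np.kron(sx, sx)
g2 = np.kron(sz, I2)
g3 = np.kron(sx, sz)
gammas = [g0, g1, g2, g3]
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

for I in range(4):
    assert np.allclose(gammas[I].conj().T, eta[I, I] * gammas[I])
    for J in range(4):
        # Clifford algebra {gamma_I, gamma_J} = 2 eta_IJ
        assert np.allclose(gammas[I] @ gammas[J] + gammas[J] @ gammas[I],
                           2 * eta[I, J] * np.eye(4))

D = 3
gfive = 1j ** (D * (D + 1) // 2 + 1) * g0 @ g1 @ g2 @ g3

assert np.allclose(gfive @ gfive, np.eye(4))   # gamma_five^2 = 1
assert np.allclose(gfive.conj().T, gfive)      # hermitian
for g in gammas:
    assert np.allclose(gfive @ g + g @ gfive, 0)  # anticommutes with gamma_I

Pp = 0.5 * (np.eye(4) + gfive)
Pm = 0.5 * (np.eye(4) - gfive)
assert np.allclose(Pp @ Pp, Pp) and np.allclose(Pm @ Pm, Pm)
assert np.allclose(Pp @ Pm, 0) and np.allclose(Pp + Pm, np.eye(4))
assert np.allclose(Pp.conj().T, Pp) and np.allclose(Pm.conj().T, Pm)

# in a real representation gamma_five^T = (-1)^{D(D+1)/2 + 1} gamma_five;
# for D = 3 this gives gamma_five^T = -gamma_five, so transposition
# exchanges the two chiral projectors
assert np.allclose(gfive.T, -gfive)
assert np.allclose(Pp.T, Pm)
```

The last two assertions illustrate the dimension dependence of the transpose behaviour of the chiral projectors in real or purely imaginary representations.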
In the quantum theory, the Hilbert space for the chiral fermions can be constructed similarly to the case of non-chiral ones \cite{ThiemannKinematicalHilbertSpaces}. We obtain a faithful representation of the Poisson algebra by replacing the operators $\hat \theta_{\alpha}$ (acting by multiplication) and $\hat {\bar{ \theta}}_{\alpha} = \frac{d}{d\theta_{\alpha}}$ defined in \cite{ThiemannKinematicalHilbertSpaces} by $\hat \theta^+_{\alpha} := \mathcal{P}^+_{\alpha \beta} \hat{\theta}_{\beta}$ and $\hat{\bar{ \theta}}^+_{\alpha} := \hat {\bar{ \theta}}_{\beta} \mathcal{P}^+_{ \beta \alpha}$, as can be seen from \begin{equation} \left[\hat \theta^+_{\alpha}, \hat{\bar{ \theta}}^+_{\beta}\right]_+ = \mathcal{P}^+_{\alpha \beta}\text{.} \end{equation} The reality conditions are implemented if we use the unique measure constructed in \cite{ThiemannKinematicalHilbertSpaces}. We then have to impose the condition \begin{equation} \hat{\bar{\theta}}^-_{\alpha} f(\{\theta_{\beta}\}) = 0 \text{,} \end{equation} which restricts the Hilbert space to functions $f$ such that $f(\{\theta_{\alpha}\})= f(\{\mathcal{P}^{+}_{\alpha \beta} \theta_{\beta}\})$. Classically, observables do not depend on $\Psi^-_{\alpha}$. In the quantum theory, they become operators which do not contain $\hat{\theta}^-_{\alpha}$ and therefore commute with $\hat{\bar{\theta}}^-_{\alpha}$. \subsubsection{Spin $3/2$ Majorana-Weyl Fermions} Majorana-Weyl spin $3/2$ fermions appear in chiral Supergravity theories, e.g., $D+1=9+1$, $N=1$ \cite{ChamseddineN=4}. In general, in a real representation ($\gamma_I^T = \eta_{II} \gamma_I$) or in a completely imaginary representation ($\gamma_I^T = -\eta_{II} \gamma_I$) we have $\gamma_{\text{five}}^T = (-1)^{\frac{D(D+1)}{2}+1} \gamma_{\text{five}}$. Therefore, if $\frac{D+1}{2}$ is odd, we have $(\mathcal{P}^{\pm})^T = \mathcal{P}^{\pm}$, and if $\frac{D+1}{2}$ is even, $(\mathcal{P}^{\pm})^T = \mathcal{P}^{\mp}$.
In the case at hand ($D=9$), there exists a real representation and the chiral projectors will be symmetric, $(\mathcal{P}^{\pm})^T = \mathcal{P}^{\pm}$. Again, we will just consider the kinetic term for a chiral Rarita-Schwinger field, \begin{equation} S = \int_{\mathcal{M}} d^{D+1}X \left(is ~ e \overline{\psi}_{\mu} \gamma^{\mu \rho \sigma} D_{\rho} \mathcal{P}^{+}\psi_{\sigma} \right) \text{.} \end{equation} The $D+1$ split is performed as above. We find the second class constraint $\pi_i^{+} = i (\phi_j^+)^T \gamma^{ji} \mathcal{P}^+$ and the first class constraint $\pi^{-}_i = 0$. We introduce a second class partner $\phi_i^- = 0$ for the first class constraint. Then we can solve all the constraints using the Dirac bracket \begin{equation} \left\{\phi_i^+, \phi_j^+\right\}_{DB} = - \mathcal{P}^+ (C^{-1})_{ij} \mathcal{P}^+ \text{,} \label{eq:DiracBracketsMajorana} \end{equation} while all other brackets vanish. From here, we can copy the enlargement of the internal space from above, which results in the same theory with all variables projected with $\mathcal{P}^+$. (Note that equations like e.g. $\rho_i^+ = \frac{1}{2} A_{iJ} \bar{\eta}^{JK} \mathcal{P}^+ \left(A\rho_K + A^* \rho^*_K\right) = \frac{1}{2} A_{iJ} \bar{\eta}^{JK} \left(A\rho^+_K + A^* (\rho^+)^*_K\right)$ are consistent. This follows from the fact that the matrix $A(N)$ can be written as an infinite sum of even powers of gamma matrices, $A(N) \propto \exp(i {\Lambda_{IJ}(N) \Sigma^{IJ}})$, and therefore commutes with the projectors $\mathcal{P}^{\pm}$.) The quantisation of the resulting theory with variables $\rho^+_r\mbox{}_I$ and $\sigma^+$ is similar to the non-chiral case, with chiral projectors $\mathcal{P}^+$ added in observables, and modifications of the Hilbert space similar to the ones given above for Dirac-Weyl fermions.
\section{Conclusions} In the present paper we have demonstrated that the complications arising when trying to extend canonical Supergravity in the time gauge from the gauge group SO$(D)$ to SO$(D+1)$, in order to achieve a seamless match to the canonical connection formulation of the graviton sector outlined in \cite{BTTI,BTTII}, can be resolved. Since we worked with a Majorana representation of the $\gamma$-matrices, our analysis is restricted to those dimensions where this representation is available, which, however, covers many interesting supergravity theories ($d=4,8,9,10,11$). The price to pay for the enlargement of the gauge group is that the phase space requires an additional normal field $N$ and that the constraints depend non trivially on a matrix $A(N)$ which transforms in a complicated fashion under SO$(D+1)$, but which in the present formulation is crucial in order to formulate the reality conditions for the Majorana fermions in the SO$(D+1)$ theory. One would expect the field $N$ to be superfluous and the matrix $A(N)$ to simply drop out when performing an extension to SO$(1,D)$, because then no non trivial reality conditions need to be imposed. One would further expect that one only needs the quadratic and not the linear simplicity constraint and that, just as happened in the graviton sector \cite{BTTI,BTTII}, the Hamiltonian phase space extension method simply coincides with the direct Hamiltonian formulation obtained by a $D+1$ split of the SO$(1,D)$ action followed by a gauge unfixing step in order to obtain a first class formulation. Surprisingly, this is not the case. The basic difficulty is that when performing the $D+1$ split without time gauge, the symplectic structure turns out to be unmanageable. A treatment similar to the one carried out in this paper is possible but turns out to be of similar complexity.
It therefore appears that there is no advantage of the SO$(1,D)$ extension as compared to the SO$(D+1)$ one, even as far as the classical theory is concerned. We hope to communicate our findings in a forthcoming publication. Of course, the quantum theory of the SO$(1,D)$ extension is beyond any control at this point. The solution presented in this paper to the tension between having real Majorana spinors coming from SO$(1,D)$ on the one hand, and an SO$(D+1)$ extension of the theory which actually needs complex valued spinors on the other, is most probably neither unique nor the most elegant one. Several other solutions have suggested themselves in the course of our analysis, but the corresponding reformulation is not yet complete at this point. Hence, we may revisit this issue in the future and simplify the presentation. Furthermore, it would be interesting to investigate whether the extension of the gauge group SO$(D) \rightarrow$ SO$(D+1)$ is also possible in the case of symplectic Majorana fermions, which would permit access to even more supergravity theories. To the best of our knowledge, the background independent Hilbert space representation of the Rarita-Schwinger field presented in section 4 is also new. Apart from the fact that the construction has to be performed for half-density valued Majorana spinors whose tensor index is transformed into an external one by contracting with a vielbein, the main difference to the case of Dirac spinors is that there is no representation in terms of holomorphic functions \cite{ThiemannKinematicalHilbertSpaces} of the Gra{\ss}mann variables, and one has to deal with the non trivial Dirac bracket.\\ \\ \\ \\ {\bf\large Acknowledgements}\\ NB and AT thank Christian Fitzner for discussions about fermionic variables and the German National Merit Foundation for financial support. We thank two anonymous referees for helpful comments.
The part of the research performed at the Perimeter Institute for Theoretical Physics was supported in part by funds from the Government of Canada through NSERC and from the Province of Ontario through MEDT. \newpage \begin{appendix} \section{Linear Simplicity Constraints} As outlined in the main text, the most convenient SO$(D+1)$ extension of SO$(D)$ Lorentzian Supergravity in the time gauge employs a normal vector field $N$, for which we have to provide a symplectic structure and additional constraints, and whose interplay with the quadratic simplicity constraint has to be clarified in order to make sure that the physical content of the theory remains unaltered. In effect, the results of \cite{BTTI, BTTII} are reformulated in terms of a linear simplicity constraint in the spirit of the new Spin Foam models \cite{EngleTheLoopQuantum, LivineNewSpinfoamVertex, EngleFlippedSpinfoamVertex, EngleLoopQuantumGravity, FreidelANewSpin, KaminskiSpinFoamsFor}. Therefore, a dynamical unit-length scalar field $N^I$ will be introduced which, if the simplicity constraints hold, has the interpretation of the normal to the (spatial pullback of the spacetime $(D+1)$-) vielbein in the internal (Lorentzian or Euclidean) space. It will be shown that the constraints comprise a first class system and that the theory in any dimension is equivalent to the ADM formulation of General Relativity. The results are shown to extend to the coupling of fermionic matter treated in \cite{BTTIV}. As in \cite{BTTI,BTTII}, we can choose either SO$(D+1)$ or SO$(1,D)$ as gauge group for Lorentzian gravity. However, only in the compact case are we able to construct the Hilbert space $\mathcal{H}_N$ for the normal field $N^I$. In a companion paper \cite{BTTV}, we will comment on the implementation of the linear simplicity constraint operators on the Hilbert space $\mathcal{H}_T = \mathcal{H}_\text{grav} \otimes \mathcal{H}_N$.
\subsection{Introductory Remarks} In \cite{BTTI,BTTII}, gravity in any dimension $D$ has been formulated as a gauge theory of SO$(1,D)$ or of the compact group SO$(D+1)$, irrespective of the spacetime signature. The resulting theory has been obtained by two different routes: a Hamiltonian analysis of the Palatini action making use of the procedure of gauge unfixing\footnote{See \cite{MitraGaugeInvariantReformulation, AnishettyGaugeInvarianceIn, VytheeswaranGaugeUnfixingIn} for original literature on gauge unfixing.}, and, on the canonical side, an extension of the ADM phase space. The additional constraints appearing in this formulation, the simplicity constraints, are well known. They constrain bivectors to be simple, i.e. to be the antisymmetrised product of two vectors. Originally introduced in Plebanski's formulation \cite{PlebanskiOnTheSeparation} of General Relativity as constrained $BF$ theory in $3+1$ dimensions, they have been generalised to arbitrary dimension in \cite{FreidelBFDescriptionOf}. Moreover, discrete versions of the simplicity constraints are a standard ingredient of the covariant approaches to Loop Quantum Gravity called Spin Foam models \cite{BarrettRelativisticSpinNetworks, EngleFlippedSpinfoamVertex, FreidelANewSpin} and were recently also used in group field theory \cite{BaratinGroupFieldTheory}. Two different versions of simplicity constraints are considered in the literature, which are either quadratic or linear in the bivector fields. The quantum operators corresponding to the quadratic simplicity constraints have been found to be anomalous both in the covariant \cite{EngleLoopQuantumGravity} and in the canonical picture \cite{WielandComplexAshtekarVariables, BTTIII}.
On the covariant side, this led to one of the major points of critique about the Barrett-Crane model \cite{BarrettRelativisticSpinNetworks}: The anomalous constraints are imposed strongly\footnote{Strongly here means that the constraint operator annihilates physical states, $\hat C \left|\psi\right\rangle = 0 ~ \forall \left| \psi \right\rangle \in \mathcal{H}_{\text{phys}}$.}, which may imply erroneous elimination of physical degrees of freedom \cite{DiracLecturesOnQuantum}. This led to the development of the new Spin Foam models \cite{EngleTheLoopQuantum, LivineNewSpinfoamVertex, EngleFlippedSpinfoamVertex, EngleLoopQuantumGravity, FreidelANewSpin, KaminskiSpinFoamsFor}, in which the quadratic simplicity constraints are replaced by linear simplicity constraints. The linear version of the constraint is slightly stronger than the quadratic constraint, since in $3+1$ dimensions the topological solution is absent. The corresponding quantum operators are still anomalous (unless the Immirzi parameter takes the values $\beta = \pm \sqrt{\zeta}$, where $\zeta$ denotes the internal signature). Therefore, in the new models (parts of) the simplicity constraints are implemented weakly to account for the anomaly. To make contact with the covariant formulation, it is therefore of interest to ask whether, from the canonical point of view, $(a)$ the theory of \cite{BTTI,BTTII} can be reformulated using a linear simplicity constraint, and $(b)$ if so, whether the linear version of the constraint can be quantised without anomalies. Both of these questions will be answered affirmatively: the answer to $(a)$ in this appendix and the answer to $(b)$ in our companion paper \cite{BTTV}. As we have shown in the present paper, the use of the linear simplicity constraints (already at the classical level) is probably the most convenient approach towards constructing a connection formulation for Supergravity theories in $D+1$ dimensions with compact gauge group.
To answer $(a)$ we will follow the second route as in \cite{BTTI} and construct the theory with linear simplicity constraint by an extension of the ADM phase space. Note that the linear constraints have already been introduced in a continuum theory in \cite{GielenClassicalGeneralRelativity}, yet the considerations there are rather different. The authors reformulate the action of the Plebanski formulation of General Relativity using constraints which involve an additional three form and which are linear in the bivectors, without giving a Hamiltonian formulation. This paper on the other hand will deal exclusively with the Hamiltonian framework. Notice that we denote by $s$ the space time signature and by $\zeta$ the internal signature, which can be chosen independently as in \cite{BTTI}. In particular, the gauge group SO$(\eta)$ (with $\eta = \text{diag}(\zeta,1,1,...)$) can be chosen compact, irrespective of the space time signature. This will be exploited when quantising the theory in \cite{BTTV}, where we fix $\zeta = 1$ and therefore do not have to bother with the non-compact gauge group SO$(1,D)$. There, we employ the Hilbert space representation for the normal field derived in section \ref{sec:KinematicalHilbertSpace} of this paper, and then we find quantum operators corresponding to the linear simplicity constraint and show that $(b)$ these operators are in fact {\it first class} and therefore can be implemented strongly. \\ \\ This appendix is organised as follows. Since the construction of the new theory closely follows the treatment in \cite{BTTI}, in section \ref{sec:QuadraticSimplicity} we will briefly review the extensions of the ADM phase space introduced there, highlighting those details which will become important in the case of linear simplicity constraints.
In section \ref{sec:LinearSimplicity} the new theory is presented and proved to be equivalent to the ADM formulation, which already implies that solving the linear simplicity constraints classically (section \ref{sec:Solution}) leads back to the (extended) ADM phase space and its constraints. Next, we show that the framework can be extended to the coupling of fermionic matter (section \ref{sec:Fermions}). Finally, we construct a background independent Hilbert space representation for the normal field $N^I$ in section \ref{sec:KinematicalHilbertSpace}, which exploits the fact that $N^I$ on-shell is a unit vector and therefore valued in a compact set. \subsection{Review: Quadratic Simplicity Constraints} \label{sec:QuadraticSimplicity} \subsubsection{Step 1: $\left\{ K_{aIJ}, \pi^{bKL}\right\}$ - Theory} \label{sec:KPi} In \cite{BTTI}, the ADM phase space is extended using the variables $\pi^{aIJ}$ and $K_{bKL}$, which are related to the ADM variables via \begin{eqnarray} \pi^{aIJ} \pi^{b}\mbox{}_{IJ} &:=& 2 \zeta q q^{ab} \label{eq:metric} \text{,} \\ K^{ab} &:=& - \frac{s}{4 \sqrt{q}} \pi^{bKL} K_{cKL} q^{ac}\left(\pi\right) \text{,} \\ P^{ab} &=& -s \sqrt{q} \left( K^{ab} - q^{ab} K^c \mbox{}_c\right) = \frac{1}{4} \left( q^{ac}\left(\pi\right) \pi^{bKL} K_{cKL} - q^{ab}\left(\pi\right) \pi^{cKL} K_{cKL} \right) \text{.} \end{eqnarray} The ADM constraints expressed in these variables\footnote{Note that by calculating the determinant of equation (\ref{eq:metric}), we can express both $q$ and $q^{ab}$ in terms of $\pi^{aIJ}$, and via the formula for the inverse matrix we also obtain an expression for $q_{ab}(\pi)$. All metric-related quantities, like e.g. the Levi-Civita connection $\Gamma_{ab}^c := \frac{1}{2} q^{cd} \left( \partial_a q_{bd} + \partial_b q_{ad} - \partial_d q_{ab} \right)$ and the Ricci scalar $R$, can now be expressed in terms of $\pi^{aIJ}$ and will automatically be understood as functions of $\pi^{aIJ}$ in the following.
To keep notation simple, the $\pi^{aIJ}$ - dependence will not be made explicit.} are given by \begin{eqnarray} \mathcal{H}_a &=& -2 q_{ac} \nabla_b P^{bc} = -\frac{1}{2} \nabla_b \left( K_{aIJ} \pi^{bIJ} - \delta_a^b K_{cIJ} \pi^{cIJ} \right) \text{,} \\ \mathcal{H} &=& - \left[ \frac{s}{\sqrt{q}} \left( q_{ac} q_{bd} - \frac{1}{D-1} q_{ab} q_{cd} \right) P^{ab} P^{cd} + \sqrt{q} R \right] \nonumber \\ &=& - \frac{s}{8\sqrt{q}} \left( \pi^{[a|IJ} \pi^{b]KL} K_{bIJ} K_{aKL}\right) - \sqrt{q} R \text{,} \end{eqnarray} where $\nabla_a$ is the covariant derivative annihilating the spatial metric. In order to have the right number of physical degrees of freedom, the Gau{\ss} and the (quadratic) simplicity constraints are introduced, \begin{eqnarray} G^{IJ} &:=& 2 K_{a}\mbox{}^{[I}\mbox{}_{K} \pi^{aK|J]}\text{,} \label{eq:gauss}\\ S^{ab}_{\overline{M}} &:=& \frac{1}{4} \epsilon_{IJKL\overline{M}} \pi^{aIJ} \pi^{bKL} \text{.} \label{eq:simplicity} \end{eqnarray} Using the Poisson brackets \begin{equation} \left\{ K_{aIJ} , \pi^{bKL} \right\} = \delta_a^b \left(\delta_I^K \delta_J^L - \delta_I^L \delta_J^K \right) \text{,} ~~\left\{ \pi^{aIJ}, \pi^{bKL} \right\} = \left\{ K_{aIJ}, K_{bKL} \right\} = 0 \end{equation} for the extended variables, it has been verified in \cite{BTTI} that the ADM Poisson brackets \begin{equation} \left\{ q_{ab}, q_{cd} \right\}_{(K,\pi)} \approx 0 \approx \left\{ P^{ab}, P^{cd} \right\}_{(K,\pi)} \text{,} ~~ \left\{ q_{ab}, P^{cd} \right\}_{(K,\pi)} \approx \delta^{c}_{(a} \delta ^{d}_{b)} \text{,} \end{equation} are reproduced in the extended system up to terms which vanish if the constraints (\ref{eq:gauss},\ref{eq:simplicity}) hold. Actually, only the rotational part of the Gau{\ss} constraint (\ref{eq:gauss}) is needed for the above Poisson brackets to hold. Without going into the details of the calculation (cf.
\cite{BTTI} for further details), we want to point out that (\ref{eq:gauss}) is only needed for the $\{ P, P\}$ - bracket, where terms $K^{[a}\mbox{}_{IJ} \pi^{b]IJ} \approx K_{a~K}^{[I} \pi^{aK|J]} \pi^{[b}\mbox{}_{IM} \pi^{c]~M}_{~J} \propto \bar{G}^{IJ}$ appear (the bar denotes rotational components, see below for notation). This will become important when proving the validity of the theory with linear simplicity constraint in section \ref{sec:LinearSimplicity}. Since the ADM brackets are recovered, and in particular the Dirac algebra of $\mathcal{H}_a$ and $\mathcal{H}$ is reproduced in the extended system, the whole system of constraints $\left\{\mathcal{H}_a,\mathcal{H}, G^{IJ}, S^{ab}_{\overline{M}}\right\}$ can easily be shown to be first class \cite{BTTI}. For later comparison with the solution of the linear simplicity constraints in section \ref{sec:Solution}, we review the solution of the quadratic simplicity constraints as given in \cite{PeldanActionsForGravity, BTTI}. The solution to the quadratic simplicity constraint is in any dimension given by \cite{FreidelBFDescriptionOf, BTTI} \begin{equation} \pi^{aIJ} = 2 n^{[I} E^{a|J]} \text{,} \label{eq:solsimp} \end{equation} where $n^I$ is the unit normal to the vielbein, defined (up to sign) by the equations $n^I n_I = \zeta$ and $n_I E^{aI} = 0$. We can use $n^I$ and the projector $\bar{\eta}_{IJ} := \eta_{IJ} - \zeta n_I n_J $ to decompose any bivector $X_{IJ}$ into its rotational ($\bar{X}_{IJ} := \bar{\eta}_I^K \bar{\eta}_J^L X_{KL}$) and boost parts (${\bar{X}_I := - \zeta n^J X_{IJ}}$) with respect to $n^I$.
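The two statements above, the solution (\ref{eq:solsimp}) and the decomposition of a bivector into rotational and boost parts, lend themselves to a direct numerical check. The following sketch (an illustrative aid, not part of the derivation; it assumes NumPy and fixes $D = 4$ with Euclidean internal signature $\zeta = 1$, so that internal indices are raised and lowered trivially) verifies that simple bivectors annihilate the quadratic simplicity constraint (\ref{eq:simplicity}), and that a generic bivector is recovered from its parts via $X_{IJ} = \bar{X}_{IJ} + 2 n_{[I} \bar{X}_{J]}$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
D = 4                      # spatial dimension; internal indices I = 0, ..., D
dim = D + 1                # zeta = +1: eta_IJ = delta_IJ

# Levi-Civita symbol eps_{IJKLM} in D + 1 = 5 internal dimensions
eps = np.zeros((dim,) * dim)
for perm in itertools.permutations(range(dim)):
    eps[perm] = np.linalg.det(np.eye(dim)[list(perm)])  # sign of the permutation

# unit normal n^I and a vielbein E^{aI} with n_I E^{aI} = 0
n = rng.standard_normal(dim)
n /= np.linalg.norm(n)
E = rng.standard_normal((D, dim))
E -= np.einsum('aI,I,J->aJ', E, n, n)

# simple bivector pi^{aIJ} = 2 n^[I E^{a|J]} = n^I E^{aJ} - n^J E^{aI}
pi = np.einsum('I,aJ->aIJ', n, E) - np.einsum('J,aI->aIJ', n, E)

# quadratic simplicity S^{ab}_M = (1/4) eps_{IJKLM} pi^{aIJ} pi^{bKL} vanishes
S = 0.25 * np.einsum('IJKLM,aIJ,bKL->abM', eps, pi, pi)
assert np.allclose(S, 0.0)

# decomposition of a generic bivector X_{IJ} with respect to n (zeta = 1)
eta_bar = np.eye(dim) - np.outer(n, n)        # projector orthogonal to n
A = rng.standard_normal((dim, dim))
X = A - A.T
X_rot = eta_bar @ X @ eta_bar                 # rotational part X_bar_{IJ}
X_boost = -X @ n                              # boost part X_bar_I = -n^J X_{IJ}
X_rec = X_rot + np.outer(n, X_boost) - np.outer(X_boost, n)
assert np.allclose(X, X_rec)                  # X_{IJ} = X_bar_{IJ} + 2 n_[I X_bar_J]
```

The contraction of the symmetric combination $n^{[I}E^{a|J]}n^{[K}E^{b|L]}$ with the totally antisymmetric $\epsilon$ is what makes $S$ vanish identically.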
Using (\ref{eq:solsimp}), the symplectic potential reduces to \cite{PeldanActionsForGravity} \begin{eqnarray} &\mbox{}& \frac{1}{2} \pi^{aIJ} \dot{K}_{aIJ} \nonumber \\ &\approx& - \zeta \bar{K}_{aJ} \dot{E}^{aJ} - \bar{K}_{aIJ} E^{aJ} \dot{n}^{I} \nonumber \\ &\approx& \left[- \zeta \bar{K}_{aJ} - n_J E_{aI} \bar{K}_{bK}\mbox{}^{I} E^{bK} \right] \dot{E}^{aJ} \nonumber \\ &:=& K'_{aJ} \dot{E}^{aJ} \text{,} \label{eq:SymplecticReduction1} \end{eqnarray} where we have dropped total time derivatives and divergences. The inverse vielbein $E_{aI}$ is defined by the equations $E_{aI} E^{a}\mbox{}_{J} = \bar{\eta}_{IJ}$ and $E_{aI} E^{bI} = \delta^b_a$ . For the constraints, we find in terms of these variables \begin{eqnarray} \frac{1}{2} \lambda_{IJ} G^{IJ} &=& - \lambda_{IJ} E^{a[I} K'_a\mbox{}^{J]} \text{,} \label{eq:GaussConstraint1} \\ N^a \mathcal{H}_a &\approx& 2 \zeta N^a \nabla_{[a} E^{bI} K'_{b]I} \text{,} \label{eq:DiffeoConstraint1} \\ N\mathcal{H} &\approx& N\left( \frac{s}{2} E^{aI} E^{bJ} R_{abIJ} - E^{a[I} E^{b|J]} K'_{aI} K'_{bJ} \right) \text{,} \label{eq:HamConstraint1} \end{eqnarray} where terms proportional to the Gau{\ss} constraint (\ref{eq:GaussConstraint1}) as well as total derivatives were dropped in (\ref{eq:DiffeoConstraint1}). Thus we arrive at an already well-established Hamiltonian formulation of General Relativity \cite{PeldanActionsForGravity}, which leads to the ADM formulation after solving the SO$(\eta)$ Gau{\ss} constraint. \subsubsection{Step 2: $\left\{ \mbox{}^{(\beta)}A_{aIJ}, \mbox{}^{(\beta)}\pi^{bKL}\right\}$ - Theory} \label{sec:APi} Having this extension of the ADM phase space at our disposal, we can turn it into a connection formulation in a second step. 
We define the spin connection constructed of the $\pi^{aIJ}$ by \begin{equation} \Gamma_{aIJ} := \frac{2}{D-1} \pi_{bKL} n^K n_{[I} \partial_a \pi^{b} \mbox{}_{J]} \mbox{}^L + \zeta \bar{\eta}_{[I}^M \bar{\eta}_{J]K} \pi_{bLM} \partial_a \pi^{bLK} + \zeta \Gamma_{ab}^c \pi^{b} \mbox{}_{K[I} \pi_{c|J]} \mbox{}^K \text{,} \label{eq:Spinconnection} \end{equation} where we used the abbreviations \begin{eqnarray} n^I n_J &:=& \frac{1}{D-1} \left( \pi^{aKI} \pi_{aKJ}-\zeta \eta^{I} \mbox{}_J \right) \text{,}~~~ \bar{\eta}_{IJ} := \eta_{IJ} - \zeta n_I n_J, ~~\text{and} \nonumber \\ \pi_{aIJ} &:=& \left(\frac{\zeta}{2} \pi^{aKL} \pi^{b}\mbox{}_{KL}\right)^{(-1)} \pi^{b}\mbox{}_{IJ} = \frac{1}{q} q_{ab} \pi^{b}\mbox{}_{IJ} \text{.} \end{eqnarray} One can check that $\Gamma_{aIJ}$ satisfies weakly\footnote{Note that when solving the simplicity constraint ($\pi^{aIJ} = 2 n^{[I}E^{a|J]}$), $\Gamma_{aIJ}$ reduces to the hybrid spin connection introduced by Peldan \cite{PeldanActionsForGravity} which annihilates $E^{aI}$ (and, since $n^I$ is a function of $E^{aI}$, also annihilates $n^I$). When additionally choosing time gauge $n^I = \delta^I_0$, the hybrid connection furthermore reduces to the familiar SO$(D)$ spin connection annihilating the SO$(D)$ vielbein $E^{ai}$ ($i = 1,...,D$).} the following identity \cite{BTTI} \begin{equation} \partial_a \pi^{aIJ} + \left[\Gamma_a, \pi^a \right]^{IJ} \approx 0 \text{.}\label{eq:Spinconnection2} \end{equation} Moreover, it transforms as a connection under gauge transformations \begin{equation} \left\{ \frac{1}{2} G^{KL}\left[\Lambda_{KL}\right], \Gamma_{aIJ} \right\} \approx \partial_a \Lambda_{IJ} +\left[\Gamma_{a},\Lambda\right]_{IJ} \end{equation} if the simplicity constraint holds. 
This suggests the introduction of the following connection variables \begin{equation} \mbox{}^{(\beta)} A_{aIJ} := \Gamma_{aIJ} + \beta K_{aIJ} ~~\text{and}~~ \mbox{}^{(\beta)}\pi^{aIJ} := \frac{1}{\beta} \pi^{aIJ} \text{,} \end{equation} with a free parameter\footnote{Since $\Gamma_{aIJ}$ is a homogeneous function of degree zero in $\pi^{aIJ}$ and its derivatives, it is unaffected by the constant rescaling $\pi^{aIJ} \rightarrow \mbox{}^{(\beta)}\pi^{aIJ}$.} $\beta \in \mathbb{R} \setminus \{0\}$ and Poisson brackets given by \begin{equation} \left\{ \mbox{}^{(\beta)}A_{aIJ} , \mbox{}^{(\beta)}\pi_{bKL} \right\} = \delta_a^b \left(\delta_I^K \delta_J^L - \delta_I^L \delta_J^K \right) \text{,} ~~\left\{ \mbox{}^{(\beta)}\pi^{aIJ}, \mbox{}^{(\beta)}\pi^{bKL} \right\} = \left\{ \mbox{}^{(\beta)}A_{aIJ}, \mbox{}^{(\beta)} A_{bKL} \right\} = 0 \text{.} \label{eq:PoissonAPi} \end{equation} Using equation (\ref{eq:Spinconnection2}), we can rewrite the Gau{\ss} constraint to obtain a familiar expression for the generator of gauge transformations \begin{equation} G^{IJ} = 0 + \left[\mbox{}^{(\beta)} K_a, \mbox{}^{(\beta)} \pi^a \right]^{IJ} \approx \partial_a \mbox{}^{(\beta)} \pi^{aIJ} + \left[\mbox{}^{(\beta)}A_a, \mbox{}^{(\beta)} \pi^a \right]^{IJ} \text{.} \end{equation} Now we can repeat the above analysis, again expressing the ADM variables and constraints in terms of the new ones, \begin{equation} \label{eq:ExtendedAdmVariables2} \mbox{}^{(\beta)} \pi^{aIJ}\mbox{}^{(\beta)} \pi^{b}\mbox{}_{IJ} := \frac{2\zeta}{\beta^2} q q^{ab} ~~~\text{and}~~~ \sqrt{q} K_a \mbox{}^b := -\frac{s}{4} \mbox{}^{(\beta)} \pi^{bIJ} \left( \mbox{}^{(\beta)} A-\Gamma \right)_{aIJ} \text{,} \end{equation} \begin{eqnarray} \mathcal{H}_a &=& -\frac{1}{2} \nabla_b \left( \left( \mbox{}^{(\beta)} A-\Gamma \right)_{aIJ} \mbox{}^{(\beta)}\pi^{bIJ} - \delta_a^b \left( \mbox{}^{(\beta)} A-\Gamma \right)_{cIJ} \mbox{}^{(\beta)}\pi^{cIJ} \right) \text{,} \\ \mathcal{H} &=& - \frac{s}{8\sqrt{q}} \left(
\mbox{}^{(\beta)}\pi^{[a|IJ} \mbox{}^{(\beta)}\pi^{b]KL} \left( \mbox{}^{(\beta)} A-\Gamma \right)_{bIJ} \left( \mbox{}^{(\beta)} A-\Gamma \right)_{aKL}\right) - \sqrt{q} R \label{eq:HamConstraint} \text{,} \end{eqnarray} and checking that the ADM Poisson brackets are reproduced on the extended phase space $\{\mbox{}^{(\beta)}A,\mbox{}^{(\beta)}\pi\}$ up to Gau{\ss} and (quadratic) simplicity constraints \cite{BTTI}. From the classical point of view, this formulation is a genuine connection formulation of General Relativity. In the quantum theory, the quadratic simplicity constraint leads to anomalies both in the covariant \cite{EngleLoopQuantumGravity} as well as in the canonical approach \cite{BTTII}. Therefore, we want to introduce a linear simplicity constraint in the canonical theory in the next section, inspired by the new Spin Foam models \cite{EngleTheLoopQuantum, LivineNewSpinfoamVertex, EngleFlippedSpinfoamVertex, EngleLoopQuantumGravity, FreidelANewSpin, KaminskiSpinFoamsFor}. \subsection{Introducing Linear Simplicity Constraints} \label{sec:LinearSimplicity} Recall that the solution to the (quadratic) simplicity constraint in dimensions $D \geq 3$ is given by \cite{FreidelBFDescriptionOf}\footnote{In $D = 3$, an additional topological sector exists \cite{FreidelBFDescriptionOf}. The above results hold in $D = 3$ only if this sector is excluded by hand.} $S^{ab}_{\overline M} = 0 \Leftrightarrow \mbox{}^{(\beta)}\pi^{aIJ} = \frac{2}{\beta} n^{[I} E^{a|J]} $ and that $n^I$ is not an independent field but is determined by the vielbein $E^{aI}$.
We now postulate a new field $N^I$, which will play the role of this normal, together with its conjugate momentum $P_I$, subject to the linear simplicity and normalisation constraints \begin{eqnarray} S^{a}_{I \overline M} &:=& \epsilon_{IJKL\overline M} ~ N^J ~ \mbox{}^{(\beta)}\pi^{a KL} \text{,} \label{eq:LinearSimplicity} \\ \mathcal{N} &:=& N^I N_I - \zeta \text{.} \label{eq:Normalization} \end{eqnarray} The solution to the linear simplicity constraints in any dimension $D \geq 3$ is given by\footnote{Using the linear simplicity constraints, we automatically exclude the topological sector in $D = 3$.} $\mbox{}^{(\beta)}\pi^{aIJ} = \frac{2}{\beta} N^{[I} E^{a|J]}$, with $N_I E^{aI} = 0$. We see that on the solutions, the physical information of $\mbox{}^{(\beta)}\pi^{aIJ}$ is encoded in the vielbein $E^{aI}$, which in turn fixes the direction of $N^I$ completely. The remaining freedom in choosing its length is fixed by the normalisation constraint $\mathcal{N}$ and we find $N^I = n^I(E)$, i.e. the $N^I$ carry no physical degrees of freedom. The same has to be ensured for the momenta $P_I$, i.e. we should add the additional constraints $P^I = 0$. However, these extra conditions can be interpreted as (partial) gauge fixing conditions for (\ref{eq:LinearSimplicity},\ref{eq:Normalization}), which can then be removed by applying the procedure of gauge unfixing. We will take a short-cut and directly ``guess'' the theory such that the constraints (\ref{eq:LinearSimplicity},\ref{eq:Normalization}) are implemented as first class, and we will show that when solving these constraints, the momenta $P_I$ are automatically removed from the theory. The theory we want to construct is very similar to the $\{\mbox{}^{(\beta)} A_{aIJ} , \mbox{}^{(\beta)} \pi^{bKL} \}$ - theory of section \ref{sec:APi}.
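As an aside, the statement that the linear constraint (\ref{eq:LinearSimplicity}) is solved precisely when $N^I$ aligns with the normal determined by the vielbein can be illustrated numerically. The sketch below (assuming NumPy; $D = 4$, $\zeta = 1$ and the value of $\beta$ are chosen purely for illustration) confirms that $S^a_{I\overline{M}}$ vanishes for $N^I = n^I(E)$ but not for a generic unit vector:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
D = 4
dim = D + 1
beta = 0.7                                    # illustrative value of the free parameter

# Levi-Civita symbol in D + 1 = 5 internal dimensions
eps = np.zeros((dim,) * dim)
for perm in itertools.permutations(range(dim)):
    eps[perm] = np.linalg.det(np.eye(dim)[list(perm)])

n = rng.standard_normal(dim)
n /= np.linalg.norm(n)                        # the normal determined by the vielbein
E = rng.standard_normal((D, dim))
E -= np.einsum('aI,I,J->aJ', E, n, n)         # enforce n_I E^{aI} = 0

# solution pi^{aIJ} = (2/beta) n^[I E^{a|J]}
pi = (np.einsum('I,aJ->aIJ', n, E) - np.einsum('J,aI->aIJ', n, E)) / beta

def S_lin(N):
    """Linear simplicity S^a_{IM} = eps_{IJKLM} N^J pi^{aKL}."""
    return np.einsum('IJKLM,J,aKL->aIM', eps, N, pi)

assert np.allclose(S_lin(n), 0.0)             # vanishes for N^I = n^I(E)

N_generic = rng.standard_normal(dim)
N_generic /= np.linalg.norm(N_generic)
assert not np.allclose(S_lin(N_generic), 0.0) # a generic unit N fails the constraint
```

This makes explicit that, unlike the quadratic constraint, the linear constraint ties $N^I$ rigidly to the bivector field rather than leaving the normal implicit.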
It is defined by the Poisson brackets (\ref{eq:PoissonAPi}) and \begin{equation} \left\{ N^I, P_J \right\} = \delta_J^I \text{,} ~~\left\{ N^I,N^J \right\} = \left\{ P_I,P_J \right\} = 0 \text{,} \end{equation} and, apart from the linear simplicity and normalisation constraints (\ref{eq:LinearSimplicity},\ref{eq:Normalization}), is subject to \begin{eqnarray} G^{IJ} &=& \frac{1}{2} \mbox{}^{(\beta)}D_a \mbox{}^{(\beta)}\pi^{aIJ} + P^{[I}N^{J]} \text{,} \\ \mathcal{H}_a &=& \frac{1}{2} \mbox{}^{(\beta)}\pi^{bIJ} \partial_a \mbox{}^{(\beta)}A_{bIJ} - \frac{1}{2} \partial_b \left( \mbox{}^{(\beta)}\pi^{bIJ} \mbox{}^{(\beta)}A_{aIJ} \right) + P_I \partial_a N^I \text{,} \label{eq:VectorConstraint} \\ \mathcal{H} &=& - \frac{s}{8\sqrt{q}} \left[ \mbox{}^{(\beta)}\pi^{[a|IJ} \mbox{}^{(\beta)}\pi^{b]KL} \left(\mbox{}^{(\beta)}A-\Gamma\right)_{bIJ} \left(\mbox{}^{(\beta)}A-\Gamma\right)_{aKL}\right] - \sqrt{q} R \text{.} \end{eqnarray} Note that the Hamilton constraint is the same\footnote{In particular, we want to point out that it is not the Hamilton constraint for gravity coupled to standard scalar fields $\phi$, which would contain additional terms $\sim \frac{p^2}{\sqrt{\det{q}}} + \sqrt{\det{q}} q^{ab} \phi_{,a} \phi_{,b}$ for the scalar field $\phi$ and its conjugate momentum $p$, terms which are missing here. In fact, these terms would spoil the constraint algebra, since $\{\mathcal{H}, S^a_{I\overline{M}}\}$ and $\{\mathcal{H}, \mathcal{N}\}$ would not vanish weakly.} as in equation (\ref{eq:HamConstraint}), whereas the Gau{\ss} and vector constraint differ and are chosen such that they obviously generate SO$(\eta)$ gauge transformations and spatial diffeomorphisms, respectively, on all phase space variables. In the following, we prove its equivalence to the ADM formulation.
First of all, we will show that the Poisson brackets of the ADM variables $K^{ab}$, $q_{ab}$ in terms of the new variables $\mbox{}^{(\beta)}A_{aIJ}$, $\mbox{}^{(\beta)}\pi^{aIJ}$, $N^I$, and $P_I$ are still reproduced on the extended phase space up to constraints. This is non-trivial, even though the expressions for $K^{ab}(\mbox{}^{(\beta)}\pi,\mbox{}^{(\beta)}A)$, $q_{ab}(\mbox{}^{(\beta)}\pi)$ are given by (\ref{eq:ExtendedAdmVariables2}) as in the previous section, since we changed both the simplicity and the Gau{\ss} constraint. For the linear simplicity and normalisation constraints, the solution for $\mbox{}^{(\beta)}\pi^{aIJ}$ is the same as in the case of the quadratic simplicity constraint (neglecting the topological sector), $\mbox{}^{(\beta)}\pi^{aIJ} = \frac{2}{\beta} n^{[I}E^{a|J]}$, and terms which vanished due to the quadratic simplicity constraint still vanish in the case at hand. For the Gau{\ss} constraint, note that the only terms appearing in the calculation are of the form $(\mbox{}^{(\beta)}A-\Gamma)^{[a}\mbox{}_{IJ}\mbox{}^{(\beta)}\pi^{b]IJ}$, which already vanish on the surface defined by the vanishing of the rotational parts of the Gau{\ss} constraint (cf. section \ref{sec:KPi}). Now, if the linear simplicity and normalisation constraints hold, we know that $N^I = n^I(E)$, i.e. the modification of the Gau{\ss} constraint $P^{[I}N^{J]} \approx \bar{P}^{[I}n^{J]}$ on-shell just changes the boost part of the Gau{\ss} constraint. Thus, the ADM brackets are reproduced on the surface defined by the vanishing of $G^{IJ}$, $S^{a}_{I \overline M}$ and $\mathcal{N}$. Next, we will show that the algebra is first class. Note that since $G^{IJ}$ and $\mathcal{H}_a$ generate gauge transformations and spatial diffeomorphisms by inspection, their algebra with all other constraints obviously closes. The algebra of the linear simplicity and the normalisation constraint is trivial.
Moreover, the Hamilton constraint Poisson-commutes trivially with the normalisation constraint and, since it depends only on the combination $\mbox{}^{(\beta)}\pi^{aIJ} \mbox{}^{(\beta)}A_{bIJ}$, we find $\left\{\mathcal{H}[N], S^{a}_{I\overline{M}}[s_a^{I\overline{M}}]\right\} = S^{a}_{I\overline{M}}[...]$. We are left with the Poisson bracket of two Hamilton constraints. Since on-shell the ADM brackets are reproduced, the result is \begin{equation} \left\{ \mathcal{H}[M], \mathcal{H}[N]\right\} \approx \mathcal{H}'_a[q^{ab}\left(MN_{,b} - NM_{,b}\right)] \text{,} \end{equation} where $\mathcal{H}'_a = -2 q_{ac} \nabla_b P^{bc}$ now denotes the ADM diffeomorphism constraint. As we will see in the next section \ref{sec:Solution}, the vector constraint (\ref{eq:VectorConstraint}) correctly reduces to the ADM diffeomorphism constraint if the Gau{\ss} and simplicity constraint hold, $\mathcal{H}_a \approx \mathcal{H}_a'$. Therefore, the algebra closes. It remains to show that the Hamilton constraint $\mathcal{H}$ also reproduces the ADM Hamilton constraint on-shell, which will be made explicit when solving the constraints in the next section \ref{sec:Solution}. Note that, because of the modified Gau{\ss} and simplicity constraints, this is again non-trivial even though $\mathcal{H}$ is identical to (\ref{eq:HamConstraint}). The counting of the number of physical degrees of freedom goes as follows: The full phase space consists of $\left|\left\{A,\pi,N,P\right\}\right| = 2 \frac{D^2(D+1)}{2} + 2 (D+1)$ degrees of freedom, which are subject to \mbox{$\left|\left\{\mathcal{H}_a,\mathcal{H},G^{IJ},S^{a}_{I \overline M},\mathcal{N}\right\}\right|$} = $(D+1) + \frac{D(D+1)}{2} + \frac{D^2(D-1)}{2} + 1$ first class constraints.
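The counting can be checked mechanically. The following sketch (plain Python, for illustration only) confirms that the numbers just quoted leave $(D+1)(D-2)$ physical phase space degrees of freedom for any $D \geq 3$, i.e. the ADM value (for $D = 3$: four, the two graviton polarisations together with their momenta), and that the same value results from Peldan's extended ADM formulation reviewed at the end of section \ref{sec:KPi}:

```python
def dof_linear(D):
    """Physical dof of the {A, pi, N, P} theory with its first class constraints."""
    phase_space = D * D * (D + 1) + 2 * (D + 1)      # |{A, pi}| + |{N, P}|
    constraints = ((D + 1)                           # H_a and H
                   + D * (D + 1) // 2                # Gauss constraint G^{IJ}
                   + D * D * (D - 1) // 2            # linear simplicity S^a_{I M}
                   + 1)                              # normalisation constraint N
    return phase_space - 2 * constraints             # first class: remove 2 dof each

def dof_peldan(D):
    """Physical dof of Peldan's extended ADM formulation {E, K}."""
    return 2 * D * (D + 1) - 2 * ((D + 1) + D * (D + 1) // 2)

for D in range(3, 12):
    # both countings agree with the ADM result (D + 1)(D - 2)
    assert dof_linear(D) == dof_peldan(D) == (D + 1) * (D - 2)
    # the extra phase space dof are removed exactly by simplicity + normalisation
    extra = (D * D * (D + 1) + 2 * (D + 1)) - 2 * D * (D + 1)
    assert extra == 2 * (D * D * (D - 1) // 2 + 1)
```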
It is most convenient to compare this to Peldan's \cite{PeldanActionsForGravity} extended ADM formulation given at the end of section \ref{sec:KPi}, with $\left|\left\{E,K\right\}\right| = 2D(D+1)$ phase space degrees of freedom and the first class constraints $\left|\left\{\mathcal{H}_a,\mathcal{H},G^{IJ}\right\}\right| = (D+1) + \frac{D(D+1)}{2}$. In any dimension, the difference in phase space degrees of freedom is exactly removed by the simplicity and normalisation constraint, $\left|\left\{A,\pi,N,P\right\}\right| - \left|\left\{E,K\right\}\right| = 2 \left|\left\{S^{a}_{I \overline M},\mathcal{N}\right\}\right|$. We remark that related formulations of General Relativity, where a time normal appears as an independent dynamical field, have already appeared in the literature \cite{SaHamiltonianAnalysisOf, AlexandrovSU(2)LoopQuantum, GeillerALorentzCovariant}. The difference is that our formulation features both the simplicity constraint and the time normal at the same time, whereas in those approaches the time normal only appears in the process of solving the simplicity constraint without solving the boost part of the Gau{\ss} constraint. In other words, the time normal is an integral part of the simplicity constraint in our approach, not a concept emerging after its solution. \subsection{Classical Solution of the Linear Simplicity Constraints} \label{sec:Solution} Solving the linear simplicity and normalisation constraints proceeds similarly to section \ref{sec:KPi}. As already pointed out in section \ref{sec:LinearSimplicity}, solving these constraints leads to \begin{equation} \mbox{}^{(\beta)}\pi^{aIJ} = \frac{2}{\beta} n^{[I} E^{a|J]} ~~~\text{and}~~~ N^I = n^I(E) \text{.} \end{equation} We make the Ansatz $\mbox{}^{(\beta)}A_{aIJ} = \Gamma_{aIJ} + \beta K_{aIJ}$ with $\Gamma_{aIJ}$ defined as in (\ref{eq:Spinconnection}).
Then, the symplectic potential reduces to \cite{PeldanActionsForGravity} \begin{eqnarray} &\mbox{}& \frac{1}{2} \mbox{}^{(\beta)}\pi^{aIJ} \mbox{}^{(\beta)}\dot{A}_{aIJ} + P_I \dot{N}^I \nonumber \\ &\approx& \bar{K}_{aJ} \dot{E}^{aJ} - \bar{K}_{aIJ} E^{aJ} \dot{n}^{I} + \bar{P}_I \dot{n}^I \nonumber \\ &\approx& \left[ \bar{K}_{aJ} - n_J E_{aI} \left( \bar{K}_{bK}\mbox{}^{I} E^{bK} + \bar{P}^I\right)\right] \dot{E}^{aJ} \nonumber \\ &:=& K''_{aJ} \dot{E}^{aJ} \text{,} \end{eqnarray} where we have dropped total time derivatives and divergences. Note that, compared to (\ref{eq:SymplecticReduction1}), the result is the same up to the additional $\bar{P}^I$ term appearing in the definition of $K''_{aI}$. For the constraints, we find the same expressions as in (\ref{eq:GaussConstraint1}, \ref{eq:DiffeoConstraint1}, \ref{eq:HamConstraint1}) with $K'_{aI}$ replaced by $K''_{aI}$ and again arrive at Peldan's extended ADM formulation without time gauge \cite{PeldanActionsForGravity}, which leads to the ADM formulation after solving the SO$(\eta)$ Gau{\ss} constraint. \subsection{Linear Simplicity Constraints for Theories with Dirac Fermions} \label{sec:Fermions} In order to incorporate fermions into the framework, tetrads and their higher dimensional analogues (vielbeins) have to be used at the Lagrangian level to construct a representation of the spacetime Clifford algebra. Therefore, the extension of the ADM phase space introduced above is not applicable here. In \cite{BTTIV} it is shown that the symplectic reduction of the extension of the phase space $\left(K_{ai}, E^{bj}\right)$ with SO$(D)$ Gau{\ss} constraint to $\left(A_{aIJ},\pi^{bKL}\right)$ with SO$(\eta)$ Gau{\ss} and (quadratic) simplicity constraint gives back the unextended theory. We may now apply this fact to theories with fermions.
The explicit construction is \begin{equation} \bar{E}^{aI} = \zeta \bar{\eta}^I\mbox{}_J\pi^{aKJ}n_K\text{,}~~~~ \bar{K}_{aI} = \zeta \bar{\eta}_I\mbox{}^J(A-\Gamma)_{aKJ}n^K \text{,} \end{equation} where $\Gamma_{aIJ}$, $\bar\eta_{IJ}$ and $n^I$ are understood as functions of $\pi^{aIJ}$ (cf. \cite{BTTIV} for more details). The extension of $\left(K_{ai}, E^{bj}\right)$ with SO$(D)$ Gau{\ss} constraint to $\left(A_{aIJ},\pi^{bKL}, N^I, P_J \right)$ with SO$(\eta)$ Gau{\ss}, linear simplicity and normalisation constraint works exactly the same way. We can even choose to simplify the replacement of the vielbein and extrinsic curvature using the normal $N^I$, \begin{equation} \bar{E}^{aI} = \zeta \bar{\eta}^I\mbox{}_J\pi^{aKJ}N_K\text{,}~~~~ \bar{K}_{aI} = \zeta \bar{\eta}_I\mbox{}^J(A-\Gamma)_{aKJ}N^K \text{,} \end{equation} where $\bar\eta_{IJ}$ now is understood as a function of $N^I$. The calculations are completely analogous to those in \cite{BTTIV} and therefore will not be detailed here. \subsection{Kinematical Hilbert Space for $N^I$} \label{sec:KinematicalHilbertSpace} We restrict to the case $\zeta = 1$ in the following, because the kinematical Hilbert space for canonical Loop Quantum Gravity has been defined rigorously only for compact gauge groups like SO$(D+1)$. For scalar fields like the Higgs field, two different constructions to obtain a kinematical Hilbert space have been given. In the first one \cite{ThiemannKinematicalHilbertSpaces}, a crucial role is played by point holonomies $U_x(\Phi) := \exp\left(\Phi^{IJ}(x) \tau_{IJ}\right)$. The field $\Phi^{IJ}$, which is assumed to transform according to the adjoint representation of $G$, is contracted with the basis elements $\tau_{IJ}$ of the Lie algebra of $G$ and then exponentiated. Point holonomies are better suited for background independent quantisation than the field variables $\Phi^{IJ}$ themselves, since the latter are real valued rather than valued in a compact set. 
Therefore, a Gau{\ss}ian measure would be a natural choice for the inner product for $\Phi^{IJ}$, but this is in conflict with diffeomorphism invariance \cite{ThiemannKinematicalHilbertSpaces}. In the case at hand, this framework is not applicable, since $N^I$ transforms in the defining representation of SO$(D+1)$ and therefore, there is no (or, at least no obvious) way to construct point holonomies from $N^I$ in such a way that the exponentiated objects transform ``nicely'' under gauge transformations. The second avenue \cite{ThiemannModernCanonicalQuantum} for background independent quantisation of scalar fields leads to a diffeomorphism invariant Fock representation and can be applied in principle. However, in the case at hand there is a more natural route. On the constraint surface $\mathcal{N} = N^I N_I - 1 = 0$, $N^I$ is valued in the compact set $S^{D}$. In this case the measure problems can be circumvented by solving $\mathcal{N}$ classically. The obvious choice of Hilbert space is then of course the space of square-integrable functions on the $D$-sphere.\\ \\ To solve $\mathcal{N}$, we choose a second class partner $\tilde{\mathcal{N}} := N^I P_I$, \begin{equation} \left\{N^I(x) N_I(x) - 1, N^J(y) P_J(y) \right\} = N^I(x) N_I(x) \delta^D(x-y) \approx \delta^D(x-y)\text{,} \end{equation} where terms $\propto \mathcal{N}$ have been dropped. $\tilde{\mathcal{N}}$ weakly Poisson commutes with the constraints: it is Gau{\ss} invariant and transforms covariantly under diffeomorphisms, it trivially Poisson commutes with $\mathcal{H}$ (which neither depends on $N^I$ nor on $P_I$), and a short calculation yields \begin{equation} \left\{S^a_{I \overline{M}}[s_a^{I\overline{M}}], \tilde{\mathcal{N}}[\tilde{n}]\right\} = S^{a}_{I\overline{M}}[\tilde{n} s_a^{I\overline{M}}] \text{.} \end{equation} Therefore, the algebra of the remaining constraints is unchanged when we solve $\mathcal{N}$, $\tilde{\mathcal{N}}$ using the Dirac bracket.
Let $\bar{\eta}_{IJ}=\eta_{IJ}-N_I N_J/||N||^2$, whence $\bar{\eta}_{IJ} N^J=0$ also when $||N||\not=1$. Then $\bar{P}_I=\bar{\eta}_{IJ} P^J$ Poisson commutes with the normalisation constraint and thus is an observable just like $N^I$. Since the Dirac matrix is diagonal, the Dirac brackets of $\bar{P}_I,N^I$ coincide with their Poisson brackets. We find \begin{eqnarray} \left\{N^I(x), N_J(y)\right\}_{DB} &=& 0\text{,} \nonumber \\ \left\{N^I(x), \bar{P}_J(y)\right\}_{DB} &=& \bar{\eta}^I\mbox{}_J(x) \delta^{D}(x-y) \text{,} \nonumber \\ \left\{\bar{P}^I(x), \bar{P}^J(y)\right\}_{DB} &=& 2 \bar{P}^{[I}(x)N^{J]}(x) \delta^{D}(x-y)\text{,} \end{eqnarray} while the remaining brackets are unaffected. We see that unfortunately the Poisson algebra of the $N^I$ and $\bar{P}_I$ does not close: it automatically generates also the rotation generator $L_{IJ}=2 N_{[I}P_{J]}=2 N_{[I} \bar{P}_{J]}$. We therefore have to include it in our algebra. On the other hand, obviously $\{L_{IJ},\mathcal{N}\}=0$, so that $L_{IJ}$ is also an observable. Moreover, the $L_{IJ}$ generate the Lie algebra so$(D+1)$, while $\{L_{IJ},N_K\} =-2 N_{[I} \delta_{J]K}$, so that the algebra of the $N_I,L_{IJ}$ already closes. Finally, we have the identity $L_{IJ} N^J=-||N||^2 \bar{P}_I$, so that the $N^I, L_{IJ}$ already determine $\bar{P}_I$. We conclude that nothing is gained by considering the $\bar{P}_I$ and that it is better to consider the overcomplete set of observables $N^I, L_{IJ}$ instead.\\ \\ Consider, in analogy with LQG, cylindrical functions $F[N]$ of the form $F[N]=F_{p_1,..,p_n}(N(p_1),..,N(p_n))$ where $F_{p_1.. p_n}$ is a polynomial with complex coefficients of the $N^I(p_k),\;k=1,..,n;\;I=0,..,D$. We define the operator $\hat{N}_I(x)$ to be multiplication by $N_I(x)$ on this space.
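The classical relations just used, the closure of the algebra generated by the $L_{IJ}$ and the identity $L_{IJ} N^J = -||N||^2 \bar{P}_I$, can be checked numerically. The following sketch (assuming NumPy; $D = 4$ chosen for illustration, $\zeta = 1$) verifies the so$(D+1)$ commutation relations in the defining matrix representation, whose commutators mirror the Poisson brackets of the $L_{IJ}$, and checks the identity for random $N^I$, $P_I$:

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 5                        # so(D + 1) for D = 4

def tau(I, J):
    """Defining-representation generator (tau_{IJ})_{KL} = d_{IK} d_{JL} - d_{JK} d_{IL}."""
    T = np.zeros((dim, dim))
    T[I, J] += 1.0
    T[J, I] -= 1.0
    return T

# so(D+1) commutation relations:
# [tau_{IJ}, tau_{KL}] = d_{JK} tau_{IL} - d_{IK} tau_{JL} - d_{JL} tau_{IK} + d_{IL} tau_{JK}
d = np.eye(dim)
for I in range(dim):
    for J in range(dim):
        for K in range(dim):
            for L in range(dim):
                comm = tau(I, J) @ tau(K, L) - tau(K, L) @ tau(I, J)
                rhs = (d[J, K] * tau(I, L) - d[I, K] * tau(J, L)
                       - d[J, L] * tau(I, K) + d[I, L] * tau(J, K))
                assert np.allclose(comm, rhs)

# L_{IJ} N^J = -||N||^2 P_bar_I, so N^I and L_{IJ} already determine P_bar_I
N = rng.standard_normal(dim)
P = rng.standard_normal(dim)
Lmat = np.outer(N, P) - np.outer(P, N)            # L_{IJ} = 2 N_[I P_J]
P_bar = P - N * (N @ P) / (N @ N)                 # P_bar_I = eta_bar_{IJ} P^J
assert np.allclose(Lmat @ N, -(N @ N) * P_bar)
```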
Let also $\Lambda^{IJ}$ be a smooth antisymmetric matrix valued function of compact support and define the operator \begin{equation} \label{a} \hat{L}[\Lambda]:=2\int\; d^Dx \; \Lambda^{IJ}(x) \; \hat{N}_{[I}(x)\; \hat{P}_{J]}(x) \text{,} \end{equation} where $\hat{P}_J(x)=i\delta/\delta N_J(x)$. Notice that no factor ordering problems arise. The operator $\hat{L}[\Lambda]$ has a well defined action on cylindrical functions, specifically \begin{equation} \label{b} \hat{L}[\Lambda] \; F[N]=2i\sum_{k=1}^n\; \Lambda^{IJ}(p_k) \hat{N}_{[I}(p_k) \partial/\partial N_{J]}(p_k)\; F[N] \end{equation} and annihilates constant functions. Let $d\nu(N):=c_D d^{D+1}N \delta(||N||^2-1)$ be the SO$(D+1)$ invariant measure on $S^D$, where the constant $c_D$ makes it a probability measure. For a function cylindrical over the finite point set $\{p_1,..,p_n\}$ we define the following positive linear functional \begin{equation} \label{bb} \mu[F]:=\int\; d\nu(N_1)\;..\;d\nu(N_n)\; F_{p_1..p_n}(N_1,..,N_n) \text{.} \end{equation} Just as in LQG it is straightforward to show that the measure is consistently defined and thus has a unique $\sigma-$additive extension to the projective limit of the finite Cartesian products of copies of $S^D$ which in this case is just the infinite Cartesian product $\overline{{\cal N}}:=\prod_x S^D$ of copies of $S^D$ \cite{YamasakiMeasuresOnInfinite}, one for each spatial point. This space can be considered as a space of distributional normals because a generic point in it is a collection of vectors $(N(x))_x$ without any continuity properties. The operator $\hat{N}_I(x)$ is bounded and trivially self-adjoint because $N_I(x)$ is real valued and $S^D$ is compact. To see that $\hat{L}[\Lambda]$ is self-adjoint we let $g_\Lambda(p)=\exp(\Lambda^{IJ}(p)\tau_{IJ})$ where $\tau_{IJ}$ are the generators of so$(D+1)$.
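The single-point ingredients of this construction can be probed by Monte Carlo (a numpy sketch for $D=2$; sample size, seed and tolerances are arbitrary choices): normalised Gaussian vectors sample the SO$(D+1)$-invariant probability measure $d\nu$ on $S^D$, and the expectation of a cylindrical (polynomial) test function is invariant under a rigid rotation of the sample.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 2                    # check the case S^2 inside R^3
n = 200_000

# sample d\nu: normalised Gaussian vectors are uniform on the sphere S^D
N = rng.normal(size=(n, D + 1))
N /= np.linalg.norm(N, axis=1, keepdims=True)

# probability measure: first moment vanishes, second moment is isotropic
assert np.allclose(N.mean(axis=0), 0.0, atol=0.01)
second = np.einsum('ni,nj->ij', N, N) / n
assert np.allclose(second, np.eye(D + 1) / (D + 1), atol=0.01)

# SO(D+1) invariance: the mean of a polynomial test function is unchanged
# (up to Monte Carlo error) when every sample vector is rigidly rotated
th = 0.7
g = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])
F = lambda M: M[:, 0] * M[:, 1] ** 2     # a cylindrical test polynomial
assert abs(F(N).mean() - F(N @ g.T).mean()) < 0.02
```

The tolerances reflect the $O(n^{-1/2})$ Monte Carlo error; they are not sharp bounds.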
We define the operator \begin{equation} \label{c} \left( \hat{U}(\Lambda) F \right)[N]:=F_{p_1..p_n} \left(g_\Lambda(p_1) N(p_1),..,g_\Lambda(p_n) N(p_n)\right) \text{,} \end{equation} which can be verified to be unitary and strongly continuous in $\Lambda$. It may be verified explicitly that \begin{equation} \label{d} \hat{L}[\Lambda]=\frac{1}{i}\; [\frac{d}{dt}]_{t=0} \hat{U}[t\Lambda] \text{,} \end{equation} whence $\hat{L}[\Lambda]$ is self-adjoint by Stone's theorem \cite{ReedBook1}. Finally, it is straightforward to check that, besides the $^\ast$-relations, also the commutator relations hold, i.e., the commutators reproduce $i$ times the classical Poisson brackets.\\ \\ We conclude that we have found a suitable background independent representation of the normal field sector. \\ \\ At each point $p \in \Sigma$, an orthonormal basis in the Hilbert space $\mathcal{H}_p=L_2(S^D,d \nu)$ is given by the generalisations of spherical harmonics to higher dimensions $\Xi_l^{\vec K}(N) $, which are briefly introduced in the appendix of our companion paper \cite{BTTV} (see \cite{VilenkinSpecialFunctionsAnd} for a comprehensive treatment). An orthonormal basis for $\mathcal{H}_N$ is given by spherical harmonic vertex functions $F_{\vec{v}, \vec{l}, \vec{\vec{K}}}(N) : = \prod_{v \in \vec v} \Xi_{l_v}^{\vec{K}_v}(N)$. Any cylindrical function $F_{\vec v}$ can be written as a mean-convergent series of the form \begin{equation} F_{\vec v}(N) = \sum_{\vec{l},\vec{\vec K}} a_{\vec{v}, \vec{l}, \vec{\vec K}} F_{\vec{v}, \vec{l}, \vec{\vec{K}}}(N) \end{equation} for complex coefficients $a_{\vec{v},\vec{l},\vec{\vec{K}}}$. The sum here runs for each $v \in \vec{v}$ over all values $l \in \mathbb{N}_0$ and for each $l$ over all $\vec{K}$ compatible with $l$.
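For $D=2$ the zonal ($\vec K = 0$) sector of these hyperspherical harmonics reduces to Legendre polynomials in $u=\cos\theta$, whose marginal under the probability measure $d\nu$ is $du/2$ on $[-1,1]$; their orthonormality can then be verified exactly with Gauss-Legendre quadrature (a numpy sketch; the truncation at $l<6$ is arbitrary):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# Zonal harmonics on S^2 with respect to the *probability* measure d\nu:
# Xi_l(N) = sqrt(2l+1) P_l(N_3) are orthonormal, since u = N_3 = cos(theta)
# is uniformly distributed on [-1, 1] under d\nu.
u, w = leggauss(20)          # quadrature exact for polynomials up to degree 39

def Xi(l, u):
    return np.sqrt(2 * l + 1) * Legendre.basis(l)(u)

G = np.array([[0.5 * np.sum(w * Xi(l, u) * Xi(m, u)) for m in range(6)]
              for l in range(6)])
assert np.allclose(G, np.eye(6))     # Gram matrix is the identity
```

The non-zonal harmonics and general $D$ require the Gegenbauer-based construction of \cite{VilenkinSpecialFunctionsAnd} and are not covered by this check.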
Following the construction in \cite{ThiemannKinematicalHilbertSpaces} we obtain the combined Hilbert space for the scalar field and the connection simply by the tensor product, $\mathcal{H}_T = \mathcal{H}_{\text{grav}} \otimes \mathcal{H}_N = L_2(\overline{\mathcal{A}}^{\text{SO}(D+1)}, d\mu_{AL}^{\text{SO}(D+1)}) \otimes L_2(\overline{{\cal N}}, d\mu_N)$. An orthonormal basis in this space is given by a slight generalisation of the usual gauge-variant spin network states (cf., e.g., \cite{ThiemannKinematicalHilbertSpaces}), where each vertex is labelled by an additional simple SO$(D+1)$ irreducible representation coming from the normal field. This of course leads to an obvious modification of the definition of the intertwiners which also have to contract the indices coming from this additional representation. \end{appendix} \newpage
\section{Introduction and summary} For classical mechanics (field theory in $0{+}1$ dimensions) there exists a rich landscape of ${\cal N}{=}\,8$ supersymmetric models, distinguished by the number~$b$ of propagating bosonic degrees of freedom and by the nature of the supersymmetry transformations (linear or nonlinear) \cite{BeIvKrLe1,BeIvKrLe2,IvLeSu}. Restricting to the linear type, the notation $(b,{\cal N},{\cal N}{-}b)$ counts their propagating bosonic, fermionic and auxiliary components. As was already observed in~\cite{IvKrPa,DoPaJuTs}, an important role is played by a potential inhomogeneity in the supersymmetry transformation of the fermions. The parameters appearing there may be viewed as a constant shift of the auxiliary components and are introduced through the superfield constraints. Together with Fayet-Iliopoulos terms, they create a bosonic potential and lead to central charges and partial supersymmetry breaking. To accommodate these inhomogeneous terms, we apply the techniques discussed in \cite{PaTo} and~\cite{KuRoTo} and produce the most general inhomogeneous linear supermultiplets compatible with the ordinary supersymmetry algebra $\{Q_i,Q_j\}=\delta_{ij} H$ (without central extensions). Here, we concentrate on the classical mechanics of a (2,8,6)~particle. The Lagrangian and Hamiltonian of this model have been formulated for a general prepotential~$F$ in~\cite{BeKrNe} (without inhomogeneity) and in~\cite{BeKrNeSh} (with inhomogeneity). Here, we specialize to the conformal case and investigate the classical dynamics of the conformal (2,8,6)~particle. The inhomogeneous (2,8,6) ${\cal N}{=}\,8$ supermultiplet, under the requirement of scale-invariance for the action, defines a unique superconformal mechanical system. The only free parameters are the scale-setting Fayet-Iliopoulos coupling and the dimensionless shift entering the inhomogeneous supersymmetry transformations.
We review the inhomogeneous supersymmetry transformations for ${\cal N}{\le}\,8$ and rederive the invariant conformal action for the inhomogeneous (2,8,6) multiplet including Fayet-Iliopoulos terms, without using superspace technology. After eliminating the auxiliary components we arrive at a very specific (non-isotropic and indefinite) Weyl factor and bosonic potential in the two-dimensional target space. It proves to be legitimate (at least classically) to restrict to a (positive-definite) half-space, where we present some typical particle trajectories. The inhomogeneous supersymmetry transformations that we investigate here close the ordinary supersymmetry algebra without central extensions. This is the case because we work within the Lagrangian framework. Central extensions of the supersymmetry algebra can arise, both in the classical and quantum cases, as a consequence of the Hamiltonian formulation and the closure of the Noether-(super)charge algebra under the Poisson bracket structure~\cite{IvKrPa}. It is tempting to push the idea of this paper to even higher-extended supersymmetry. For example, by coupling two inhomogeneous (2,8,6) multiplets linked by an extra, ninth, supersymmetry, one should be able to construct an ${\cal N}{=}\,9$ superconformal mechanics model with a four-dimensional target. This might be related to the standard reduction of ${\cal N}{=}\,4$ super Yang-Mills to an off-shell multiplet of type (9,16,7) in one dimension. \newpage \section{Inhomogeneous minimal linear supermultiplets} Minimal linear supermultiplets of extended supersymmetry in one dimension are usually formulated with homogeneous transformations for their component fields. However, in some cases it is possible to extend the supersymmetry transformations by the addition of an inhomogeneous term.
This is admissible at \begin{itemize} \addtolength{\itemsep}{-6pt} \item ${\cal N}{=}\,2$ for the supermultiplet $(0,2,2)$ \item ${\cal N}{=}\,4$ for the supermultiplets $(0,4,4)$ and $(1,4,3)$ \item ${\cal N}{=}\,8$ for the supermultiplets $(0,8,8)$ and $(1,8,7)$ and $(2,8,6)$ \end{itemize} The remaining ${\cal N}=2,4,8$ supermultiplets do not admit an inhomogeneous extension, as can easily be verified by investigating the closure of the ordinary ${\cal N}$-extended supersymmetry algebra. \par Let $x$ and $y$ be physical bosons, let $\psi$, $\psi_i$, $\lambda$ and $\lambda_i$ denote fermions, and let $g$, $g_i$, $f$ and $f_i$ describe auxiliary fields. Here, the isospin index $i$ runs over a range depending on the number of supersymmetries. The presence of an inhomogeneous term requires the following mass dimensions for the fields: \begin{equation} [t]=-1 \qquad\longrightarrow\qquad [x]=-1\ ,\quad [\psi]=-\sfrac12\ ,\quad [g]=0\ . \end{equation} In all the above cases, by a suitable R~transformation, the inhomogeneous terms can be rotated to point only in a specific iso-direction. We choose the one with the highest iso-index, i.e.~$i=2,3$ or~$7$, depending on the case. With this choice, let us list the supersymmetry transformations~$Q_i$ for the six cases above. {\bf (0,2,2).}\quad For the inhomogeneous ${\cal N}{=}\,2$ $(0,2,2)$ supermultiplet, the two supersymmetry transformations, without loss of generality, can be expressed as ($j,k=1,2$, $\epsilon_{12}=1$) \begin{eqnarray} & \begin{array}{ll} Q_1 \psi_j =g_j\ , & Q_1 g_j={\dot \psi_j}\ , \\[4pt] Q_2 \psi_j = \epsilon_{jk} {\tilde g_k}\ ,\quad & Q_2 g_j= \epsilon_{jk} {\dot \psi_k} \ , \end{array}& \end{eqnarray} where the inhomogeneous extension hides in \begin{equation} \tilde g_k\ :=\ g_k+c_k \qquad\textrm{with}\quad c_k\in\mathbb R\ , \end{equation} and we rotate to $c_1=0$, $c_2\equiv c>0$.
{\bf (0,4,4).}\quad For the ${\cal N}{=}\,4$ $(0,4,4)$ multiplet, we have ($i,j,k=1,2,3$, $\epsilon_{123}=1$) \begin{eqnarray} & \begin{array}{llll} Q_0\psi=g\ , &Q_0 \psi_j =g_j\ , & Q_0 g= {\dot \psi}\ , & Q_0 g_j={\dot \psi_j}\ , \\[4pt] Q_i\psi = g_i\ ,\ &Q_i \psi_j = -\delta_{ij} {g}+\epsilon_{ijk} {\tilde g_k}\ ,\ & Q_i g= -{\dot \psi_i}\ ,&\ Q_i g_j= \delta_{ij} {\dot\psi}-\epsilon_{ijk} {\dot\psi_k}\ , \end{array}& \end{eqnarray} and we may choose \begin{equation} \tilde g_1=g_1\ ,\quad \tilde g_2=g_2\quad\textrm{but}\quad\tilde g_3=g_3+c\ . \end{equation} {\bf (1,4,3).}\quad The ${\cal N}{=}\,4$ $(1,4,3)$ multiplet looks slightly different, \begin{eqnarray} & \begin{array}{llll} Q_0 x=\psi\ , &Q_0\psi={\dot x}\ , &Q_0 \psi_j =g_j\ , & Q_0 g_j={\dot \psi_j}\ , \\[4pt] Q_i x=\psi_i\ ,\quad& Q_i\psi=-g_i\ ,\quad& Q_i\psi_j=\delta_{ij}{\dot x}+\epsilon_{ijk} {\tilde g_k}\ ,\quad& Q_i g_j= -\delta_{ij} {\dot\psi}-\epsilon_{ijk} {\dot \psi_k}\ , \end{array}& \end{eqnarray} with the same $\tilde g_k$ as in (0,4,4). {\bf (0,8,8).}\quad Without loss of generality, we can generate the ${\cal N}{=}\,8$ multiplets from the ${\cal N}{=}\,4$ ones by replacing the quaternionic structure constants~$\epsilon_{ijk}$ by the (totally antisymmetric) octonionic structure constants~$c_{ijk}$, with $i,j,k=1,\ldots,7$ and \begin{equation} c_{123}=c_{147}=c_{165}=c_{246}=c_{257}=c_{354}=c_{367}=1\ , \end{equation} together with $c_{ijk}=0$ for all index combinations not related to these by a permutation. Therefore, the case of (0,8,8) yields \begin{eqnarray} & \begin{array}{llll} Q_0\psi=g\ , &Q_0 \psi_j =g_j\ , & Q_0 g= {\dot \psi}\ , & Q_0 g_j={\dot \psi_j}\ , \\[4pt] Q_i\psi = g_i\ ,\quad&Q_i \psi_j = -\delta_{ij} g+ c_{ijk} {\tilde g_k}\ ,\quad& Q_i g= -{\dot \psi_i}\ ,\quad & Q_i g_j= \delta_{ij} {\dot\psi}-c_{ijk} {\dot \psi_k}\ , \end{array}& \end{eqnarray} and we take \begin{equation} \tilde g_k = g_k + \delta_{k,7}\,c\ .
\end{equation} {\bf (1,8,7).}\quad In analogy with (1,4,3), we get \begin{eqnarray} & \begin{array}{llll} Q_0 x=\psi\ , &Q_0\psi={\dot x}\ , &Q_0 \psi_j =g_j\ , & Q_0 g_j={\dot \psi_j}\ , \\[4pt] Q_i x=\psi_i\ ,\quad& Q_i\psi = -g_i\ ,\quad& Q_i \psi_j = \delta_{ij} {\dot x}+c_{ijk} {\tilde g_k}\ ,\quad& Q_i g_j= -\delta_{ij} {\dot\psi}-c_{ijk} {\dot \psi_k}\ , \end{array}& \end{eqnarray} and again $\tilde g_k=g_k$ except for $\tilde g_7=g_7+c$ with $c>0$. {\bf (2,8,6).}\quad This is the most interesting multiplet. It is convenient to present it in quaternionic form, by fusing $(1,4,3)\oplus(1,4,3)=(2,8,6)$, with components labeled by $(x,\psi_{(i)},g_{(i)})$ and $(y,\lambda_{(i)},f_{(i)})$, respectively, where $i=1,2,3$. The supersymmetry transformations are collected in the following table, {\tiny \begin{eqnarray}\label{table1} & \begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}\hline &x&g_1&g_2&g_3&y&f_1&f_2&f_3&\psi&\psi_1&\psi_2&\psi_3&\lambda&\lambda_1&\lambda_2& \lambda_3\\ \hline Q_0&\psi&{\dot \psi_1}&{\dot\psi_2}&{\dot\psi_3}&\lambda&{\dot\lambda_1}&{\dot\lambda_2}&{\dot\lambda_3}& {\dot x}&g_1&g_2&g_3&{\dot y}&f_1&f_2&f_3\\\hline Q_1&\psi_1&-{\dot\psi}&-{\dot\psi_3}&{\dot\psi_2}&\lambda_1& -{\dot\lambda}&{\dot\lambda_3}&-{\dot\lambda_2}&-g_1&{\dot x}&{\tilde g_3}&-{\tilde g_2}&-f_1&{\dot y}&-{\tilde f_3}&{\tilde f_2}\\\hline Q_2&\psi_2&{\dot \psi_3}&-{\dot \psi}&-{\dot\psi_1}&\lambda_2&-{\dot\lambda_3}&-{\dot\lambda}&{\dot\lambda_1}& -g_2&-{\tilde g_3}&{\dot x}&{\tilde g_1}&-f_2&{\tilde f_3}&{\dot y}&-{\tilde f_1}\\\hline Q_3&\psi_3&-{\dot \psi_2}&{\dot\psi_1}&-{\dot\psi}&\lambda_3&{\dot\lambda_2}&-{\dot\lambda_1}&-{\dot\lambda}& -g_3&{\tilde g_2}&-{\tilde g_1}&{\dot x}&-f_3&-{\tilde f_2}&{\tilde f_1}&{\dot y}\\\hline Q_4&\lambda&-{\dot\lambda_1}&-{\dot\lambda_2}&-{\dot\lambda_3}&-\psi&{\dot\psi_1}& {\dot\psi_2}&{\dot\psi_3}&-{\dot y}&f_1&f_2&f_3&{\dot x}&-g_1&-g_2&-g_3\\\hline Q_5&\lambda_1&{\dot\lambda}&{\dot
\lambda_3}&-{\dot\lambda_2}&-\psi_1&-{\dot\psi}&{\dot\psi_3}&-{\dot\psi_2}&-f_1&-{\dot y}&-{\tilde f_3}&{\tilde f_2}&g_1&{\dot x}&-{\tilde g_3}&{\tilde g_2}\\\hline Q_6&\lambda_2&-{\dot\lambda_3}&{\dot\lambda}&{\dot\lambda_1}&-\psi_2&-{\dot\psi_3}& -{\dot\psi}&{\dot\psi_1}&-f_2&{\tilde f_3}&-{\dot y}&-{\tilde f_1}&g_2&{\tilde g_3}&{\dot x}&-{\tilde g_1}\\\hline Q_7&\lambda_3&{\dot\lambda_2}&-{\dot\lambda_1}&{\dot\lambda}&-\psi_3& {\dot\psi_2}&-{\dot\psi_1}&-{\dot\psi}&-f_3&-{\tilde f_2}&{\tilde f_1}&-{\dot y}&g_3&-{\tilde g_2}&{\tilde g_1}&{\dot x}\\\hline \end{array} &\nonumber \end{eqnarray} } Inspection shows that $Q_0,Q_1,Q_2,Q_3$ act within each of the two (1,4,3) submultiplets, while the additional supersymmetries $Q_4,Q_5,Q_6,Q_7$ mix the two. Having SO(3)-rotated inside each (1,4,3) submultiplet to \begin{equation} \tilde g_k = g_k + \delta_{k3}\,c \qquad\text{and}\qquad \tilde f_k = f_k + \delta_{k3}\,c' \end{equation} we may employ a further SO(2) rotation, acting on the $\psi_3\lambda_3$ and $g_3f_3$ planes, to remove the $c'$ contribution and align the inhomogeneity with one of the two ${\cal N}{=}\,4$ submultiplets. \section{Invariant action for a (2,8,6) particle} To investigate the dynamics of superconformal particles on a line, based on the various inhomogeneous supermultiplets, we shall need to construct invariant actions for them. For ${\cal N}{\ge}\,4$ and the presence of at least one physical boson, there exists a canonical method~\cite{KuRoTo} to generate such actions, by setting \begin{equation} \label{N4action} {\cal S} \= \int\!\mathrm d t\;{\cal L} \= \int\!\mathrm d t\ Q_1Q_2Q_3Q_4\,F(x,y,\ldots)\ , \end{equation} where $F(x,y,\ldots)$ is an unconstrained prepotential. In order to obtain conformally invariant mechanics, the action should not contain any dimensionful coupling parameter, and therefore, due to $[Q_i]=\sfrac12$, we demand that $[F]=-1$. One can prove that the ensuing scale invariance extends to full conformal invariance. 
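Before constructing actions, the octonionic structure constants quoted above can be sanity-checked numerically: built from the seven nonzero triples, they are totally antisymmetric and satisfy the standard contraction identity $c_{ijk}c_{ljk}=6\,\delta_{il}$ (a numpy sketch; the identity itself is a known property of these constants, not derived in the text):

```python
import numpy as np

# the seven nonzero triples c_{123} = c_{147} = c_{165} = ... = 1 from above
triples = [(1, 2, 3), (1, 4, 7), (1, 6, 5), (2, 4, 6),
           (2, 5, 7), (3, 5, 4), (3, 6, 7)]

c = np.zeros((8, 8, 8))      # index 0 unused, so that i, j, k = 1..7
for i, j, k in triples:
    # extend each triple by total antisymmetry: +1 on cyclic permutations,
    # -1 on anticyclic ones, 0 on everything else
    for (a, b, d), s in [((i, j, k), 1), ((j, k, i), 1), ((k, i, j), 1),
                         ((j, i, k), -1), ((i, k, j), -1), ((k, j, i), -1)]:
        c[a, b, d] = s

C = c[1:, 1:, 1:]
assert np.allclose(C, -C.transpose(1, 0, 2))                     # antisymmetry
assert np.allclose(np.einsum('ijk,ljk->il', C, C), 6 * np.eye(7))  # c c = 6 delta
```

The off-diagonal vanishing of the contraction reflects the Fano-plane structure of the triples: any pair of indices lies in exactly one triple.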
Without the inhomogeneous extension, (\ref{N4action}) yields only a kinetic term with some metric. It is the inhomogeneous term which will give rise to a Calogero-type potential. The action may be complemented by the addition of a Fayet-Iliopoulos term \begin{equation} {\cal S}_{\textrm{FI}} \= \int\!\mathrm d t\;\sum_i(q_i g_i + r_i f_i) \qquad\textrm{with}\quad [q_i]=[r_i]=1\ , \end{equation} introducing dimensionful couplings compatible with conformal invariance. These Fayet-Iliopoulos terms produce an oscillator-type potential, in the spirit of the DFF trick of conformal mechanics~\cite{AlFuFu}. For the (1,4,3) multiplet (only $x$ and $g_i$, no $y$ or $f_i$), the proper choice for the prepotential is \begin{equation} F(x)\= \sfrac14\,x\ln x \qquad\longrightarrow\qquad {\cal L}+{\cal L}_{\textrm{FI}}\= F''(x)\bigl({\dot x}^2+g_i^2\ +\ c\,g_3\bigr)+q_ig_i \ +\ \textrm{fermions}\ . \end{equation} After eliminating the auxiliary components $g_i$ via their equations of motion and setting the fermions to zero, one gets \begin{eqnarray} {\cal L}'_{\textrm{bos}}&=& F''(x)\bigl({\dot x}^2-\sfrac14c^2\bigr)\ -\ \sfrac14q_i^2/F''(x)\ -\ \sfrac12c\,q_3 \nonumber\\[4pt] &=&\sfrac14\bigl({\dot x}^2-\sfrac14c^2\bigr)/x\ -\ q_i^2x\ -\ \sfrac12c\,q_3 \label{143pot} \\[4pt] &=&\sfrac12{\dot w}^2-\sfrac18c^2w^{-2}\ -\ \sfrac12q_i^2w^2\ -\ \sfrac12c\,q_3\ ,\nonumber \end{eqnarray} and we have recovered the standard conformal action after the coordinate change $x=\sfrac12w^2$. Stepping up to ${\cal N}{=}\,8$, we change the iso-labelling to make $Q_0,Q_1,Q_2,Q_3$ manifest, \begin{equation} \label{N8action} {\cal S} \= \int\!\mathrm d t\;{\cal L} \= \int\!\mathrm d t\ Q_0Q_1Q_2Q_3\,F(x,y,\ldots)\ . \end{equation} Demanding invariance under the additional four supersymmetries by requiring \begin{equation} \label{N8constraint} Q_l{\cal L} \= \partial_t W_l \qquad\textrm{for}\quad l=4,5,6,7 \end{equation} imposes severe constraints on~$F$.
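The elimination of the auxiliary components leading to the first line of (\ref{143pot}) can be cross-checked numerically in the bosonic sector (a sketch with arbitrary sample values standing in for $F''(x)$, $c$ and $q_i$; fermions are set to zero):

```python
import numpy as np

rng = np.random.default_rng(2)
w = 0.8                          # stands in for F''(x) > 0, arbitrary sample value
c = 1.7
q = rng.normal(size=3)

# on-shell auxiliary fields from varying L + L_FI with respect to g_i:
#   2 F'' g_i + c F'' delta_{i3} + q_i = 0
g = -(q + c * w * np.array([0.0, 0.0, 1.0])) / (2 * w)

# g-dependent part of the Lagrangian, before and after elimination
L_aux = w * (g @ g + c * g[2]) + q @ g
L_target = -0.25 * c**2 * w - (q @ q) / (4 * w) - 0.5 * c * q[2]
assert np.isclose(L_aux, L_target)
```

With $F''=1/(4x)$ this reproduces the $x$-dependent terms of (\ref{143pot}) besides the kinetic term.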
In fact, for the (1,8,7) multiplet no action can be invariant under the inhomogeneous supersymmetry transformations.\footnote{ In the homogeneous case the constraint reads $F^{\prime\prime\prime\prime}(x)=0$, which produces ${\cal L}=(ax{+}b)\,\dot x^2+\ldots$.} However, the situation is much more interesting for the (2,8,6) multiplet. Here, the constraint~(\ref{N8constraint}) says that, like in the homogeneous case~\cite{GoRoTo}, the prepotential~$F(x,y)$ must be harmonic, \begin{equation} \Box F \ \equiv\ F_{xx}+F_{yy}\=0\ . \end{equation} The general solution is encoded in a meromorphic function~$H(z)$ via \begin{equation} \label{harmonicF} F(x,y) \= H(z) + \overline{H(z)} \= 2\,\mathrm{Re} H(z)\ , \end{equation} where it is convenient to pass to complex coordinates, \begin{eqnarray} & \begin{array}{llll} z=x+\mathrm i y\ ,\quad& \partial_z=\sfrac12(\partial_x-\mathrm i\partial_y)\ ,\quad& h_i=g_i+\mathrm i f_i\ ,\quad& \chi_{(i)}=\psi_{(i)}+\mathrm i\lambda_{(i)} \\[4pt] \bar z=x-\mathrm i y\ ,& \partial_{\bar z}=\sfrac12(\partial_x+\mathrm i\partial_y)\ ,& \bar h_i=g_i-\mathrm i f_i\ ,& \bar\chi_{(i)}=\psi_{(i)}-\mathrm i\lambda_{(i)}\ . \end{array}& \end{eqnarray} Inserting (\ref{harmonicF}) into~(\ref{N8action}), we obtain \begin{eqnarray} {\cal L}&=& 2\,\mathrm{Re}\,\bigl\{ H_{zz} (\dot{\bar z}\dot{z}\,+\,\bar h_ih_i\,+\,c\,h_3 \,+\,\sfrac12\dot{\bar\chi}\chi-\sfrac12\bar\chi\dot\chi \,+\,\sfrac12\dot{\bar\chi}_i\chi_i-\sfrac12\bar\chi_i\dot\chi_i) \nonumber\\[4pt]&+& H_{zzz} (\chi\chi_ih_i\,+\,\sfrac12\epsilon_{ijk}\chi_i\chi_j h_k\,+\,c\,\chi\chi_3) \ +\ \sfrac16 H_{zzzz} \epsilon_{ijk}\chi\chi_i\chi_j\chi_k \bigr\}\ , \end{eqnarray} where the inhomogeneous extension is clearly visible in the terms containing the parameter~$c$. The bosonic metric $g_{z\bar z}=H_{zz}{+}\bar H_{\bar z\bar z}$ is special K\"ahler of rigid type~\cite{fre}. 
Reverting to real notation and introducing the Weyl factors \begin{equation} \Phi \= 2\,\mathrm{Re} H_{zz}\=\sfrac12(F_{xx}{-}F_{yy}) \qquad\text{and}\qquad \widetilde\Phi\= -2\,\mathrm{Im} H_{zz}\=F_{xy}\ , \end{equation} the Lagrangian reads \begin{eqnarray} {\cal L}&=&\Phi\bigl({\dot x}^2+{\dot y}^2+{g_i}^2+{f_i}^2-\psi{\dot\psi} -\lambda{\dot\lambda}-\psi_i{\dot\psi_i}-\lambda_i{\dot\lambda_i}\bigr) \nonumber\\[4pt]&+& \Phi_x\bigl(\psi\psi_ig_i-\psi\lambda_if_i-\lambda\psi_if_i-\lambda\lambda_ig_i\,+\, \epsilon_{ijk}(\sfrac12 g_i\psi_j\psi_k-\sfrac12 g_i\lambda_j\lambda_k-f_i\lambda_j\psi_k) \bigr) \nonumber\\[4pt]&+& \Phi_y\bigl(\lambda\psi_ig_i-\lambda\lambda_if_i+\psi\psi_if_i+\psi\lambda_ig_i\,+\, \epsilon_{ijk}(\sfrac12 f_i\psi_j\psi_k-\sfrac12 f_i\lambda_j\lambda_k+g_i\lambda_j\psi_k) \bigr) \nonumber\\[4pt]&+& \sfrac12(\Phi_{xx}{-}\Phi_{yy})\epsilon_{ijk}\bigl(\sfrac16\psi\psi_i\psi_j\psi_k+\sfrac16\lambda\lambda_i\lambda_j\lambda_k-\sfrac12\psi\psi_i\lambda_j\lambda_k-\sfrac12\lambda\lambda_i\psi_j\psi_k\bigr) \nonumber\\[4pt]&+& \Phi_{xy}\,\epsilon_{ijk}\bigl(\sfrac16\lambda\psi_i\psi_j\psi_k-\sfrac16\psi\lambda_i\lambda_j\lambda_k+\sfrac12\psi\lambda_i\psi_j\psi_k-\sfrac12\lambda\psi_i\lambda_j\lambda_k)\bigr) \nonumber\\[4pt]&+& c\,\bigl( \Phi g_3 +{\widetilde \Phi}f_3 +\Phi_x(\psi\psi_3-\lambda\lambda_3)+\Phi_y(\lambda\psi_3+\psi\lambda_3) \bigr)\ , \end{eqnarray} to which we add the Fayet-Iliopoulos terms \begin{equation} {\cal L}_{\textrm{FI}} \= q_ig_i+r_if_i\ . 
\end{equation} The harmonic prepotential with the correct scaling dimension~$[H]=-1$ is \footnote{ Multiplying $H$ with a phase corresponds to an irrelevant rotation in the complex plane.} \begin{equation} H(z) \= \sfrac18\,z\ln z \qquad\longleftrightarrow\qquad F(x,y)\=\sfrac18\,x\ln(x^2{+}y^2)-\sfrac14\,y\arctan\sfrac{y}{x}\ , \end{equation} and the corresponding Weyl factors read \begin{equation} \Phi\=\sfrac14\,\mathrm{Re}\frac1z\=\sfrac14\frac{x}{x^2{+}y^2} \qquad\text{and}\qquad \widetilde{\Phi}\=-\sfrac14\,\mathrm{Im}\frac1z\=\sfrac14\frac{y}{x^2{+}y^2}\ . \end{equation} Note that the corresponding metric is an indefinite one, as it must be for any harmonic Weyl factor. In the bosonic limit, obtained by setting all fermions equal to zero, we obtain \begin{equation} {\cal L}_{\textrm{bos}}+{\cal L}_{\textrm{FI}}\= \Phi\,({\dot x}^2+{\dot y}^2+{g_i}^2+{f_i}^2) +c\,(\Phi\,g_3+{\widetilde\Phi}f_3) +q_ig_i+r_if_i\ . \end{equation} We eliminate the auxiliary fields via their algebraic equations of motion, \begin{eqnarray} & \begin{array}{lll} g_1=-\frac{q_1}{2\Phi}\ ,\quad& g_2=-\frac{q_2}{2\Phi}\ ,\quad& g_3=-\frac{q_3{+}c\Phi}{2\Phi}\ \\[4pt] f_1=-\frac{r_1}{2\Phi}\ ,\quad& f_2=-\frac{r_2}{2\Phi}\ ,\quad& f_3=-\frac{r_3{+}c{\widetilde\Phi}}{2\Phi}\ , \end{array}& \end{eqnarray} and arrive at \begin{eqnarray} {\cal L}'_{\textrm{bos}}&=& \Phi\,\bigl({\dot x}^2+{\dot y}^2\bigr)\ -\ \sfrac1{4\Phi}\bigl( q_1^2+q_2^2+(q_3{+}c\Phi)^2+r_1^2+r_2^2+(r_3{+}c\widetilde{\Phi})^2\bigr)\nonumber\\[4pt] &=& \frac{x}{x^2{+}y^2}\frac{{\dot x}^2+{\dot y}^2}{4}\ -\ \frac{(q_i^2{+}r_i^2)(x^2{+}y^2)}{x}\ -\ c\,\frac{q_3x{+}r_3y}{2x}\ -\ \frac{c^2}{16x}\\[6pt] &=:& K\ -\ V\ ,\nonumber \end{eqnarray} making explicit the effect of both the inhomogeneous supersymmetry transformation ($c$) and the Fayet-Iliopoulos terms ($q_i,r_i$) on the potential~$V$. 
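The harmonicity of this prepotential and the closed forms of the Weyl factors can be checked by finite differences (a numpy sketch at an arbitrary point with $x>0$, where $\arctan(y/x)$ agrees with $\mathrm{atan2}(y,x)$; step size and tolerances are ad hoc):

```python
import numpy as np

def F(x, y):
    # prepotential F = 2 Re H with H(z) = (1/8) z ln z
    return 0.125 * x * np.log(x**2 + y**2) - 0.25 * y * np.arctan2(y, x)

x, y, h = 0.7, 0.3, 1e-5

# central second differences
d2 = lambda dx, dy: (F(x + dx*h, y + dy*h) - 2*F(x, y)
                     + F(x - dx*h, y - dy*h)) / h**2
Fxx, Fyy = d2(1, 0), d2(0, 1)
Fxy = (F(x+h, y+h) - F(x+h, y-h) - F(x-h, y+h) + F(x-h, y-h)) / (4*h**2)

r2 = x**2 + y**2
assert abs(Fxx + Fyy) < 1e-5                       # F is harmonic
assert abs(0.5*(Fxx - Fyy) - 0.25*x/r2) < 1e-5     # Phi = (Fxx - Fyy)/2
assert abs(Fxy - 0.25*y/r2) < 1e-5                 # Phi-tilde = Fxy
```

The check also confirms the sign of the indefiniteness: $F_{yy}=-F_{xx}$ pointwise, as required for any harmonic Weyl factor.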
It is tempting to perform the same coordinate change as for the (1,4,3) multiplet, $x=\sfrac12w^2$, which yields \begin{equation} \label{wy} {\cal L}'_{\textrm{bos}}\= \sfrac12(1{+}\gamma^2)^{-1}\Bigl({\dot w}^2+\frac{{\dot y}^2}{w^2}\Bigr)\ -\ \sfrac12(1{+}\gamma^2)(q_i^2{+}r_i^2)w^2\ -\ \sfrac12\,c\,(q_3{+}r_3\gamma)\ -\ \frac{c^2}{8w^2}\ , \end{equation} where $\gamma=2y/w^2$. This form reveals both the oscillator and Calogero terms, but also shows the added complexity in two dimensions (mostly hidden in~$\gamma$). Putting $y\equiv0$ (also $\gamma{=}0$) brings back the (1,4,3) result~(\ref{143pot}). \section{Trajectories of a (2,8,6) particle} Without loss of generality, let us drop inessential Fayet-Iliopoulos terms and put \begin{equation} q_1=q_2=r_1=r_2=0 \qquad\text{and}\qquad q_3=:q\ ,\quad r_3=:r\ ,\quad q{+}\mathrm i r=:s\ . \end{equation} In complex coordinates, the kinetic and potential energies then read \begin{eqnarray} K&=& \Phi\,{\dot z}\dot{\bar z}\= \sfrac18\frac{z{+}\bar z}{z\bar z}\,{\dot z}\dot{\bar z}\ ,\\[4pt] V&=& \bigl( (q{+}c\Phi)^2+(r{+}c\widetilde\Phi)^2\bigr)/4\Phi\= \sfrac18\frac{1}{z{+}\bar z}\bigl(4s\bar z+c\bigr)\bigl(4\bar s z+c\bigr)\ . \end{eqnarray} \begin{figure}[ht] \centerline{ \lower2ex\hbox{ \includegraphics[width=9cm]{pot.eps} } \hfill \includegraphics[width=5cm]{lev.eps} } \caption{Potential~$V$ and its level curves for $(c,q,r)=(4,1,2) \quad\longrightarrow\quad z_{\textrm{min}}=\sfrac15(1{-}2\mathrm i)$. } \label{fig:1} \end{figure} The level curves of this potential are circles with center and radius \begin{equation} z_0(V)=\frac{2V-c\,s}{4(q^2{+}r^2)} \qquad\text{and}\qquad r(V)=\frac{\sqrt{V(V{-}c\,q)}}{2(q^2{+}r^2)}\ , \end{equation} respectively, and its only minimum $V_{\textrm{min}}=cq$ is located at \begin{equation} z_{\textrm{min}}=z_0(cq)=\frac{c\,\bar s}{4(q^2{+}r^2)}\ . \end{equation} The parameter~$r$ governs the asymmetry under $y\to-y$. The reflection $x\to-x$ flips the sign of $V{-}\sfrac12cq_3$.
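These formulas can be confirmed numerically for the parameter choice of Figure~1 (a numpy sketch; the small-displacement probe is an ad hoc check that the critical point is a genuine minimum):

```python
import numpy as np

c, q, r = 4.0, 1.0, 2.0
s = q + 1j * r

def V(z):
    # V = (4 s zbar + c)(4 sbar z + c) / (8 (z + zbar)); the product is real
    return ((4*s*np.conj(z) + c) * (4*np.conj(s)*z + c)
            / (8*(z + np.conj(z)))).real

z_min = c * np.conj(s) / (4 * (q**2 + r**2))
assert np.isclose(z_min, (1 - 2j) / 5)      # the value quoted in Figure 1
assert np.isclose(V(z_min), c * q)          # V_min = c q

# the critical point is a genuine minimum: V grows in every direction
for dz in 0.01 * np.exp(1j * np.linspace(0, 2*np.pi, 12, endpoint=False)):
    assert V(z_min + dz) > V(z_min)
```

The same function evaluated at $x<0$ returns negative values, consistent with the sign flip under $x\to-x$ discussed next.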
Due to the factor of $z{+}\bar z=2x$, both the Weyl factor and the potential are strictly positive on the right half-space $x{>}0$ and strictly negative for $x{<}0$. Therefore, the (2,8,6) particle is a reasonable dynamical system only if its trajectories do not cross the $x{=}0$ dividing line. Seen from the right half-space, the potential barrier for $x{\to}0$ has a hole at $y{=}0$ if $c{=}0$, but the Weyl factor explodes precisely there. For large coordinate values, the potential grows linearly with~$x$ and quadratically with~$y$, so the $x{>}0$ trajectories remain bounded. The equation of motion takes the form \begin{eqnarray} 0&=&\Phi^3\ddot z\ +\ \Phi^2\Phi_z {\dot z}^2\ -\ \sfrac14\Phi_{\bar z}\bigl(q^2+(r+2\mathrm i cH_{zz})^2\bigr) \nonumber\\[4pt] &\propto &(z{+}\bar z)^3 z\bar z\,\ddot z\ -\ (z{+}\bar z)^2 \bar z^2{\dot z}^2\ +\ z^2\bar z^2\bigl( (4qz)^2+(4rz{+}\mathrm i c)^2\bigr)\ , \end{eqnarray} which in real coordinates reads \begin{eqnarray} 0&=& \ddot x-\frac{1}{2x}\frac{x^2{-}y^2}{x^2{+}y^2}({\dot x}^2{-}{\dot y}^2) -\frac{2y}{x^2{+}y^2}\,\dot x\,\dot y +\frac{x^2{+}y^2}{x^3}\bigl(2(q^2{+}r^2)(x^2{-}y^2)-cr\,y-\sfrac18 c^2\bigr) \ ,\nonumber\\[4pt] 0&=& \ddot y+\frac{y}{x^2{+}y^2}({\dot x}^2{-}{\dot y}^2) -\frac{1}{x}\frac{x^2{-}y^2}{x^2{+}y^2}\,\dot x\,\dot y +\frac{x^2{+}y^2}{x^3}\bigl(4(q^2{+}r^2)\,x\,y+cr\,x\bigr)\ . \end{eqnarray} The only constant of motion of this system is the energy $E=K+V$, so the generic particle motion is not integrable. Figure~2 shows the trajectory for the $(c,q,r)$-value chosen in Figure~1 and two initial conditions. \begin{figure}[ht] \centerline{ \includegraphics[width=7.7cm]{tra_1_0.eps} \hfill \includegraphics[width=6.3cm]{tra_0.1_1.eps} } \caption{Trajectories for $(c,q,r)=(4,1,2)$ with initial conditions $(z,\dot z)(0)=(1,0)$ (left) and $(z,\dot z)(0)=(\sfrac{1}{10}{+}\mathrm i,0)$ (right).
} \label{fig:2} \end{figure} One sees that the curve does not fill out the region~$V(x,y)\le E$, an effect of the position-dependent effective mass $M=2\Phi(x,y)$. It is also clear that the $x{=}0$ barrier is impenetrable. Therefore, it makes sense to substitute $w=\sqrt{2x}$ and study the dynamics in the $wy$-plane according to~(\ref{wy}). The trajectories of Figure~2 get somewhat distorted in these variables, but their qualitative behavior is unchanged. \bigskip \noindent {\bf Acknowledgements} \noindent O.L. thanks CBPF for warm hospitality. This work was partially supported by CNPq and by DFG grant Le-838/9-2. \newpage
\section{Introduction.} The motion of a particle in a Newtonian gravitational field is known as the Kepler problem. The equations of motion are $$\ddot{x}=-\frac{x}{\parallel x\parallel^3},$$ where $x \in \mathbb{R}^3\setminus\{0\}$ is the position vector. It is well known that the motions for the Kepler problem are planar. If we consider the plane $Ox_1x_2$, the equations of motion become $$ \left\{ \begin{array}{ll} \dot{x_1}=y_1\\ \dot{x_2}=y_2\\ \dot{y_1}=-\frac{x_1}{(x_1^2+x_2^2)^{3/2}}\\ \dot{y_2}=-\frac{x_2}{(x_1^2+x_2^2)^{3/2}}\\ \end{array} \right. $$ These equations are Hamiltonian with the standard symplectic form on $\mathbb{R}^4$ and the Hamiltonian function $H=\frac{1}{2}(y_1^2+y_2^2)-\frac{1}{\sqrt{x_1^2+x_2^2}}$. From Kepler's second law we have another conserved quantity given by $A=x_1y_2-x_2y_1$. For $a>0$, consider the conserved quantity $K=H+\frac{1}{a^3}A$. A straightforward computation shows that $\nabla K=0$ if and only if $y_1=\frac{1}{a^3}x_2$, $y_2=-\frac{1}{a^3}x_1$, and $||(x_1,x_2)||=a^2$. Equivalently, the set $\{\nabla K=0\}$ is equal to the set $\{(x_1,x_2,y_1,y_2)\mid(x_1,x_2)\cdot(y_1,y_2)=0,\,\,\, \textit{and}\,\,\, ||(x_1,x_2)||=a^2,\,\,\, \textit{and}\,\,\,||(y_1,y_2)||=\frac{1}{a}\,\,\,\textit{and}\,\,\,sgn(y_1)=sgn(x_2) \}$, which is invariant under the dynamics and is filled with solutions that represent uniform circular motions moving clockwise. These particular motions are also solutions for the linear Hamiltonian system $$ \left\{ \begin{array}{ll} \dot{x_1}=\frac{1}{a^3}x_2\\ \dot{x_2}=-\frac{1}{a^3}x_1\\ \dot{y_1}=\frac{1}{a^3}y_2\\ \dot{y_2}=-\frac{1}{a^3}y_1\,,\\ \end{array} \right. $$ where the Hamiltonian function is $-\frac{1}{a^3}A$. Given the above analysis, we can raise at least two questions. Is it true that for a dynamical system that admits conserved quantities, the set of points where the gradients of these conserved quantities vanish is an invariant set? When do two Hamiltonian systems have common solutions?
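The computation above can be verified numerically before turning to the general theory (a numpy sketch with an arbitrary value of $a$ and an arbitrary point on the circular orbit):

```python
import numpy as np

a = 1.3

# K = H + A/a^3 with H = (y1^2 + y2^2)/2 - 1/||x|| and A = x1 y2 - x2 y1
def grad_K(x1, x2, y1, y2):
    r3 = (x1**2 + x2**2) ** 1.5
    return np.array([x1 / r3 + y2 / a**3,     # dK/dx1
                     x2 / r3 - y1 / a**3,     # dK/dx2
                     y1 - x2 / a**3,          # dK/dy1
                     y2 + x1 / a**3])         # dK/dy2

# a point of the set {grad K = 0}: ||x|| = a^2, ||y|| = 1/a, x.y = 0,
# sgn(y1) = sgn(x2)  (uniform clockwise circular motion)
phi = 0.9
x1, x2 = a**2 * np.cos(phi), a**2 * np.sin(phi)
y1, y2 = np.sin(phi) / a, -np.cos(phi) / a

assert np.allclose(grad_K(x1, x2, y1, y2), 0.0)

# the point satisfies the Kepler equations with angular rate 1/a^3:
# xdot = y, and ydot = -x/||x||^3 = -x/a^6 on the circle
assert np.allclose([y1, y2], np.array([x2, -x1]) / a**3)
assert np.allclose(np.array([y2, -y1]) / a**3, -np.array([x1, x2]) / a**6)
```

The last two assertions check that the point simultaneously solves the Kepler system and the linear system, which is the observation motivating the second question.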
In what follows, we will give an answer to these questions. For the first question, the answer is positive and it is given in section two, where we will also discuss various generalizations of this answer. In section three, we will present the conditions under which we can give an answer to the second question. In section four we will illustrate these results for the example of the Toda lattice. Detailed computations are presented in the Appendix. \section{Invariant sets} Let $f:\mathbb{R}^n\rightarrow \mathbb{R}^n$ be a $C^q$, $q\geq 1$ function which generates the differential equation \begin{equation}\label{sys} \dot{x}=f(x). \end{equation} We suppose that equation (\ref{sys}) admits a $C^q$ vectorial conserved quantity $\mathbf{F}:\mathbb{R}^n\rightarrow \mathbb{R}^k$ with $k\leq n$. We denote by $F_1,...,F_k$ the components of $\mathbf{F}$. For $s\in \{0,...,k\}$ we introduce the sets: \begin{equation}\label{Mi} M_{(s)}^F=\{x\in \mathbb{R}^n\,|\,rank \nabla \mathbf{F}(x)=s\} \end{equation} where $\nabla\mathbf{F}(x)$ is the Jacobian matrix \begin{equation} \nabla \mathbf{F}(x)= \left( \begin{array}{ccc} \frac{\partial F_1}{\partial x_1}(x) & ... & \frac{\partial F_1}{\partial x_n}(x) \\ ... & ... & ... \\ \frac{\partial F_k}{\partial x_1}(x) & ... & \frac{\partial F_k}{\partial x_n}(x) \\ \end{array} \right). \end{equation} For $r\in \{1,...,q\}$ we introduce the sets: \begin{equation}\label{Nj} N_{(r)}^F=\{x\in \mathbb{R}^n\,|\,\partial^{\mathbf{\alpha}}F_i(x)=0,\,\,\forall i\in\{1,...,k\},\,\,\,\forall \mathbf{\alpha}\in \{1,...,n\}^l,\,\,\,\forall l\leq r \} \end{equation} where we write $\partial^{\mathbf{\alpha}}F_i=\frac{\partial ^l F_i}{\partial x_{\alpha_1}...\partial x_{\alpha_l}}$ for $\mathbf{\alpha}=(\alpha_1,...,\alpha_l)$. \begin{remark} We observe that \begin{equation} M_{(0)}^F=N_{(1)}^F=\{x\in \mathbb{R}^n\,|\,\frac{\partial F_i}{\partial x_j}(x)=0,\,\,i\in \{1,...,k\},\,\,j\in \{1,..,n\}\}.
\end{equation} \end{remark} The sets $\{M_{(s)}^F\}$ with $s\in \{0,...,k\}$ form a partition of $\mathbb{R}^n$. A critical point of $\mathbf{F}$ is a point in $\mathbb{R}^n$ at which the rank of the matrix $\nabla \mathbf{F}(x)$ is less than the maximum rank. A critical value is the image under $\mathbf{F}$ of a critical point. The set of critical points of $\mathbf{F}$ is \begin{equation} M_c^F=\cup_{s=0}^{k-1}M_{(s)}^F. \end{equation} Using Sard's Theorem (see \cite{sard}), the set of critical values $\mathbf{F}(M_c^F)$ has $k$-dimensional measure zero, provided that $q\geq n-k+1$. \begin{remark} We also have the obvious inclusions $N_{(q)}^F\subseteq N_{(q-1)}^F\subseteq ...\subseteq N_{(1)}^F.$ \end{remark} \begin{theorem}\label{principal theorem} The sets $M_{(s)}^F$ are invariant under the dynamics generated by the differential equation (\ref{sys}). \end{theorem} \begin{proof} Because $\mathbf{F}$ is a conserved quantity, we have $$\mathbf{F}(\Phi _t(x))=\mathbf{F}(x),$$ where $\Phi _t:\mathbb{R}^n\rightarrow \mathbb{R}^n$ is the flow generated by (\ref{sys}). Differentiating, we have \begin{equation}\label{formula principala} \nabla \mathbf{F}(\Phi_t(x))\nabla \Phi_t(x)=\nabla \mathbf{F}(x). \end{equation} As $\nabla \Phi_t(x)$ is an invertible matrix for any $x\in \mathbb{R}^n$ which is not an equilibrium point for (\ref{sys}) (see \cite{abra}), we have that $$rank \nabla \mathbf{F}(\Phi_t(x))=rank \nabla \mathbf{F}(x),$$ which implies the stated result. \end{proof} \medskip As a consequence, we obtain the following well-known result, which has been applied to study invariant sets of various mechanical systems; see for example \cite{irtegov}. \begin{corollary} The set of critical points of $\mathbf{F}$ is an invariant set of the dynamics generated by the differential equation \eqref{sys}. \end{corollary} \begin{theorem} The sets $N_{(r)}^F$ are invariant sets for the dynamics generated by the differential equation (\ref{sys}).
\end{theorem} \begin{proof} Let $i\in \{1,...,k\}$, $l\in \{1,...,q\}$ and $\mathbf{\alpha}=(\alpha_1,...,\alpha_l)\in \{1,...,n\}^l$. We will prove, by mathematical induction with respect to $l$, that \begin{equation}\label{relatia baza} \partial^{\mathbf{\alpha}}F_i(x)=\sum_{\beta_1,...,\beta_l=1}^n(\nabla\Phi_t)_{\alpha_1\beta_1}(x)... (\nabla\Phi_t)_{\alpha_l\beta_l}(x)\partial^{\mathbf{\beta}}F_i(\Phi_t(x))+S_{i,\mathbf{\alpha}}(t,x), \end{equation} where $\mathbf{\beta}=(\beta_1,...,\beta_l)$ and $S_{i,\mathbf{\alpha}}(t,x)$ is a sum with the property: \emph{``all the terms contain a factor of the form $\partial^{\mathbf{\gamma}}F_i(\Phi_t(x))$ with $|\gamma |<l$''}. Componentwise, the relation (\ref{formula principala}) implies our result for $l=1$. Let $\mathbf{\alpha}'=(\alpha_1,...,\alpha_l,\alpha_{l+1})\in \{1,...,n\}^{l+1}$ where $\mathbf{\alpha}=(\alpha_1,...,\alpha_l)\in \{1,...,n\}^l$. Using the induction hypothesis we have: $$\partial^{\mathbf{\alpha}'}F_i(x)=\frac{\partial}{\partial x_{\alpha_{l+1}}}(\sum_{\beta_1,...,\beta_l=1}^n (\nabla\Phi_t)_{\alpha_1\beta_1}(x)...(\nabla\Phi_t)_{\alpha_l\beta_l}(x)\partial^{\mathbf{\beta}}F_i(\Phi_t(x))) +\frac{\partial}{\partial x_{\alpha_{l+1}}}(S_{i,\mathbf{\alpha}}(t,x)).$$ By a straightforward computation we obtain $$\frac{\partial}{\partial x_{\alpha_{l+1}}}(\sum_{\beta_1,...,\beta_l=1}^n(\nabla\Phi_t)_{\alpha_1\beta_1}(x)...(\nabla\Phi_t)_{\alpha_l\beta_l}(x) \partial^{\mathbf{\beta}}F_i(\Phi_t(x)))=$$ $$=\sum_{\beta_1,...,\beta_l=1}^n(\nabla\Phi_t)_{\alpha_1\beta_1}(x)...(\nabla\Phi_t)_{\alpha_l\beta_l}(x) \frac{\partial}{\partial x_{\alpha_{l+1}}}\partial^{\mathbf{\beta}}F_i(\Phi_t(x))+$$ $$+\sum_{\beta_1,...,\beta_l=1}^n \frac{\partial}{\partial x_{\alpha_{l+1}}}((\nabla\Phi_t)_{\alpha_1\beta_1}(x)...(\nabla\Phi_t)_{\alpha_l\beta_l}(x))\partial^{\mathbf{\beta}} F_i(\Phi_t(x))=$$
$$=\sum_{\beta_1,...,\beta_l,\beta_{l+1}=1}^n(\nabla\Phi_t)_{\alpha_1\beta_1}(x)...(\nabla\Phi_t)_{\alpha_l\beta_l}(x) (\nabla\Phi_t)_{\alpha_{l+1}\beta_{l+1}}(x) \partial^{\mathbf{\beta}'}F_i(\Phi_t(x))+$$ $$+\sum_{\beta_1,...,\beta_l=1}^n \frac{\partial}{\partial x_{\alpha_{l+1}}}((\nabla\Phi_t)_{\alpha_1\beta_1}(x)...(\nabla\Phi_t)_{\alpha_l\beta_l}(x))\partial^{\mathbf{\beta}} F_i(\Phi_t(x)).$$ We set $$S_{i,\mathbf{\alpha}'}(t,x)=\sum_{\beta_1,...,\beta_l=1}^n \frac{\partial}{\partial x_{\alpha_{l+1}}}((\nabla\Phi_t)_{\alpha_1\beta_1}(x)...(\nabla\Phi_t)_{\alpha_l\beta_l}(x)) \partial^{\mathbf{\beta}}F_i(\Phi_t(x))+ \frac{\partial}{\partial x_{\alpha_{l+1}}}(S_{i,\mathbf{\alpha}}(t,x)).$$ All the terms of $S_{i,\mathbf{\alpha}'}(t,x)$ contain a factor of the form $\partial^{\mathbf{\gamma}}F_i(\Phi_t(x))$ with $|\gamma |<l+1$, which is what had to be proved. Let $\mathbf{\beta}=(\beta_1,...,\beta_l)\in \{1,...,n\}^l$ and let $\nabla\Phi_t ^{-1}(x)$ be the inverse matrix of $\nabla\Phi_t(x)$, where $x$ is not an equilibrium point for (\ref{sys}). Consequently, we have \begin{equation}\label{relatia baza 1} \partial^{\mathbf{\beta}}F_i(\Phi_t(x))=\sum_{\alpha_1,...,\alpha_l=1}^n(\nabla\Phi_t)_{\beta_1\alpha_1}^{-1}(x) ...(\nabla\Phi_t)_{\beta_l\alpha_l}^{-1}(x)[\partial^{\mathbf{\alpha}}F_i(x)-S_{i,\mathbf{\alpha}}(t,x)]. \end{equation} We now prove, again by mathematical induction, that the sets $N_{(j)}^F$ are invariant under the dynamics generated by the differential equation (\ref{sys}). For $j=1$ we have $N_{(1)}^F=M_{(0)}^F$, which is an invariant set (see Theorem \ref{principal theorem}). We suppose that for all $j\leq l$ the sets $N_{(j)}^F$ are invariant. Let $x\in N_{(l+1)}^F$ be arbitrarily chosen. Using \eqref{relatia baza 1} for derivatives of order $l+1$ and the induction hypothesis, we deduce that $\Phi_t(x)\in N_{(l+1)}^F,\,\,\,\forall t$. Summing up, the sets $N_{(j)}^F$ are invariant for all $j\in \{1,...,q\}$.
\end{proof} \section{Finding solutions using simpler dynamics} Let $F,G:\mathbb{R}^n\rightarrow \mathbb{R}$ be $C^q$ functions and consider the differential equations \begin{equation}\label{unu} \dot{x}=f(x,\nabla F(x)) \end{equation} and \begin{equation}\label{doi} \dot{x}=f(x,\nabla G(x)), \end{equation} where $f:\mathbb{R}^{2n}\rightarrow \mathbb{R}^n$ is a $C^q$ vectorial function. For $x\in \mathbb{R}^n$, we denote by $\Phi_t^F(x)$ and $\Phi_t^G(x)$ the solutions of (\ref{unu}) and (\ref{doi}) with the initial conditions $\Phi_0^F(x)=x$ and $\Phi_0^G(x)=x$. We introduce the following set $$E_1=\{x\in \mathbb{R}^n\,|\,\nabla F(x)=\nabla G(x)\}.$$ \begin{theorem} If $F-G$ is a conserved quantity for (\ref{unu}) and $x\in E_1$, then for all $t$ we have $$\Phi_t^F(x)=\Phi_t^G(x).$$ \end{theorem} \begin{proof} For $L=F-G$ we have the equality $E_1=M_{(0)}^L$. By Theorem \ref{principal theorem} the set $E_1$ is invariant under the dynamics of (\ref{unu}). For $x\in E_1$ we have $\nabla F(\Phi_t^F(x))=\nabla G(\Phi_t^F(x))$ for all $t$ and consequently $$\frac{d}{dt}\Phi_t^F(x)=f(\Phi_t^F(x),\nabla F(\Phi_t^F(x)))=f(\Phi_t^F(x),\nabla G(\Phi_t^F(x))).$$ The above equality shows that $\Phi_t^F(x)$ is also a solution for (\ref{doi}). Given the uniqueness of the solutions, we obtain the desired equality. \end{proof} \bigskip Next we will discuss a particular case of the result presented above. For this we take $f(x,y)=h(x)+g(y)$ and $F\equiv 0$. Thus we have the two dynamics \begin{equation}\label{neperturbat} \dot{x}=h(x) \end{equation} and the perturbed dynamics \begin{equation} \dot{x}=h(x)+g(\nabla G(x)). \end{equation} We denote by $\Phi_t^h$ the flow of \eqref{neperturbat}.
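This particular case can be checked numerically. The sketch below is an illustration that is not part of the text: the unperturbed field $h$ is a rigid rotation of the plane, $G=(x^2+y^2-1)^2/4$ is conserved by the rotation, the perturbation is $g(p)=-p$, and the fixed-step RK4 integrator is an arbitrary choice. Starting on the unit circle, where $\nabla G$ vanishes, the perturbed and unperturbed flows coincide; starting off the circle they visibly differ.

```python
import numpy as np

def rk4(field, x0, t1, steps=2000):
    """Integrate x' = field(x) with the classical Runge-Kutta scheme."""
    x, h = np.array(x0, float), t1 / steps
    for _ in range(steps):
        k1 = field(x)
        k2 = field(x + h / 2 * k1)
        k3 = field(x + h / 2 * k2)
        k4 = field(x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# Unperturbed dynamics: a rigid rotation of the plane.
h_field = lambda x: np.array([-x[1], x[0]])

# G = (x^2 + y^2 - 1)^2 / 4 is conserved by the rotation, and
# grad G = (x^2 + y^2 - 1) * (x, y) vanishes exactly on the unit circle.
grad_G = lambda x: (x[0] ** 2 + x[1] ** 2 - 1.0) * x

# Perturbed dynamics with the illustrative choice g(p) = -p.
pert_field = lambda x: h_field(x) - grad_G(x)

on_circle = np.array([1.0, 0.0])        # grad G = 0 at this point
a = rk4(h_field, on_circle, 5.0)
b = rk4(pert_field, on_circle, 5.0)
print(np.max(np.abs(a - b)))            # tiny: the two flows coincide

off_circle = np.array([1.5, 0.0])       # grad G != 0 at this point
c = rk4(h_field, off_circle, 5.0)
d = rk4(pert_field, off_circle, 5.0)
print(np.max(np.abs(c - d)))            # order one: the flows differ
```

The choice $g(p)=-p$ makes the perturbation dissipative off the circle, so the numerical comparison is stable over the integration interval.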
\begin{corollary}\label{neperturb} If $G$ is a conserved quantity for \eqref{neperturbat}, then for all initial conditions $x\in\{x\in \mathbb{R}^n\,|\,\nabla G(x)=0\}$ we have $$\Phi_t^h(x)=\Phi_t^G(x).$$ \end{corollary} Another particular case is when the function $f$ in \eqref{unu} and \eqref{doi} satisfies the equality $$\langle f(x,y),y\rangle =0,$$ where $\langle \cdot,\cdot\rangle $ is the Euclidean scalar product on $\mathbb{R}^n$. In this case $F$ is a conserved quantity for \eqref{unu} and $G$ is a conserved quantity for \eqref{doi}. The above equality holds, in particular, when $$f(x,y)=\Pi(x)y,$$ where $\Pi(x)$ is an antisymmetric matrix. This is the case for all almost Poisson manifolds (see \cite{ortega}). In this situation the differential equations \eqref{unu} and \eqref{doi} become \begin{equation}\label{poisson-unu} \dot{x}=\Pi(x)\nabla F(x) \end{equation} and \begin{equation}\label{poisson-doi} \dot{x}=\Pi(x)\nabla G(x). \end{equation} We observe that $F$ is a conserved quantity for \eqref{poisson-doi} if and only if $G$ is a conserved quantity for \eqref{poisson-unu}. \begin{corollary}\label{simplectic} If $G$ is a conserved quantity for \eqref{poisson-unu}, then for the initial conditions in $\{x\in \mathbb{R}^n\,|\,\nabla F(x)=\nabla G(x)\}$ we have $\Phi_t^F(x)=\Phi_t^G(x)$. \end{corollary} A particular case of Corollary \ref{simplectic} is when we have a symplectic manifold with $G$ a quadratic function. In this case, the solutions of the Hamiltonian vector field $X_F$ starting in $\{x\in \mathbb{R}^n\,|\,\nabla F(x)=\nabla G(x)\}$ are also solutions of the linear Hamiltonian system $X_G$. \bigskip Analogous results are valid in the more general case of vector valued conserved quantities and when the right-hand side of equations \eqref{unu} and \eqref{doi} depends on higher order derivatives. First, we introduce some notation. Let $\mathbf{F},\mathbf{G}:\mathbb{R}^n\rightarrow \mathbb{R}^k$ be $C^q$ vectorial functions with $q\geq 1$ and $k\leq n$.
If $F_1,...,F_k$ are the components of $\mathbf{F}$, we denote \begin{equation*} \Delta^1 \mathbf{F}(x)=(\frac{\partial F_1}{\partial x_1}(x),...,\frac{\partial F_1}{\partial x_n}(x),\frac{\partial F_2}{\partial x_1}(x),...,\frac{\partial F_2}{\partial x_n}(x),...,\frac{\partial F_k}{\partial x_1}(x),...,\frac{\partial F_k}{\partial x_n}(x))\in \mathbb{R}^{kn} \end{equation*} and \begin{equation*} \Delta^r \mathbf{F}(x)=(...,\partial^{\alpha}F_i(x),...)\in \mathbb{R}^{kn^r}, \end{equation*} where $r\in \{1,...,q\}$, $i\in \{1,...,k\}$, $\alpha\in \{1,...,n\}^r$ ($|\alpha|=r$) and the components appear in the lexicographical order of $(i,\alpha)$ in $\mathbb{N}^{r+1}$. For a fixed $r\in \{1,...,q\}$, we consider, as before, the two differential equations \begin{equation}\label{eqF} \dot{x}=f_r(x,\Delta^1 \mathbf{F}(x),...,\Delta^r \mathbf{F}(x)) \end{equation} and \begin{equation}\label{eqG} \dot{x}=f_r(x,\Delta^1 \mathbf{G}(x),...,\Delta^r \mathbf{G}(x)). \end{equation} For $x\in \mathbb{R}^n$, we denote by $\Phi_t^\mathbf{F}(x)$ and $\Phi_t^\mathbf{G}(x)$ the solutions of (\ref{eqF}) and (\ref{eqG}) with initial conditions $\Phi_0^\mathbf{F}(x)=x$ and $\Phi_0^\mathbf{G}(x)=x$. We introduce the following set $$E_r=\{x\in \mathbb{R}^n\,|\,\partial^{\alpha}\mathbf{F}(x)=\partial^{\alpha}\mathbf{G}(x),\,\,\,\forall |\alpha|\leq r\}.$$ \begin{theorem}\label{thprincipal} If $\mathbf{F}-\mathbf{G}$ is a conserved quantity for \eqref{eqF} and $x\in E_r$, then for all $t$ we have $$\Phi_t^\mathbf{F}(x)=\Phi_t^\mathbf{G}(x).$$ \end{theorem} The obvious extension of Corollary \ref{neperturb} also holds. \section{Invariant sets for Toda lattices} \hspace{0.5cm} The Toda lattice describes the one-dimensional motion of a chain of particles with nearest neighbor interactions.
For a chain of particles with equal masses $m$, Morikazu Toda proposed the interaction potential $$V(r)=e^{-r}+r-1.$$ The equations of motion read explicitly \begin{equation} m\ddot{x}_i=e^{-(x_i-x_{i-1})}-e^{-(x_{i+1}-x_i)},\,\,\,i\in \mathbb{Z}. \end{equation} This second order differential system is equivalent to the first order differential system \begin{equation} \left\{ \begin{array}{ll} \dot{x}_i=u_i \\ m\dot{u}_i=e^{-(x_i-x_{i-1})}-e^{-(x_{i+1}-x_i)},\,\,\,i\in \mathbb{Z}, \\ \end{array} \right. \end{equation} where $u_i$ is the velocity of the particle $i$. An equilibrium of the Toda lattice has the form $$x_i=x_0+\lambda i,\,\,\,u_i=0,\,\,\,\lambda\in \mathbb{R}^*,\,\,\,x_0\in \mathbb{R},\,\,\,i\in \mathbb{Z}.$$ Fix such an equilibrium and let $y_i=x_i-x_0-\lambda i$ be the displacement of the $i$-th particle from its equilibrium position. The system in the variables $y_i$ and $u_i$ is \begin{equation}\label{sistemul mecanic} \left\{ \begin{array}{ll} \dot{y}_i=u_i \\ \dot{u}_i=\frac{e^{-\lambda}}{m}(e^{-(y_i-y_{i-1})}-e^{-(y_{i+1}-y_i)}),\,\,\,i\in \mathbb{Z}. \\ \end{array} \right. \end{equation} Let us define \begin{equation}\label{schimbare coordonate} X_i:=\frac{e^{-\lambda}}{m}e^{-(y_{i+1}-y_i)}; \end{equation} then the equations of motion become \begin{equation}\label{sistemul de baza} \left\{ \begin{array}{ll} \dot{X}_i=X_i(u_i-u_{i+1}) \\ \dot{u}_i=X_{i-1}-X_i,\,\,\,i\in \mathbb{Z}\,. \\ \end{array} \right. \end{equation} The following particular cases are interesting: \noindent {\bf 1.} the case of a periodic lattice, $X_{i+n}=X_i\,\,\forall i\in \mathbb{Z}$; \noindent {\bf 2.} the case of a non-periodic lattice with the boundary conditions $X_0=0$ (corresponding to formally setting $y_0=-\infty$) and $X_n=0$ (corresponding to formally setting $y_{n+1}=\infty$). In both cases we investigate the motions of the particles $1$ to $n$ ($n\in \mathbb{N}^*$). \subsection{The case of a periodic lattice} \hspace{0.5cm} In this case {\it M.
H\'enon} proved in \cite{henon} that the following expressions are scalar conserved quantities \begin{equation} I_m=\sum u_{i_1}...u_{i_k}(-X_{j_1})...(-X_{j_l}), \end{equation} where $m\in \{1,...,n\}$ and the summation is extended to all terms which satisfy the following conditions: \noindent (i) the indices $i_1,...,i_k,j_1,j_1+1,...,j_l,j_l+1$, which appear in the term (either explicitly, or implicitly through a factor $X_j$), are all different (modulo $n$); \noindent (ii) the number of these indices is $m$, i.e. $k+2l=m$. Two terms differing only in the order of factors are not considered different, and therefore only one of them appears in the sum. In \cite{flaschkaperiodic}, Flaschka proved that the above functions are conserved quantities using a Lax formulation. This was generalized to arbitrary Lie algebras by Adler \cite{adler} and Kostant \cite{kostant}. The first three scalar conserved quantities, depending on the variables $(X_1,...,X_n,u_1,...,u_n)$, are \begin{equation} I_1=\sum_{1\leq i\leq n} u_i \end{equation} \begin{equation} I_2=\sum _{1\leq i_1<i_2\leq n} u_{i_1}u_{i_2}-\sum_{1\leq j\leq n} X_j \end{equation} \begin{equation} I_3=\sum _{1\leq i_1<i_2<i_3\leq n} u_{i_1}u_{i_2}u_{i_3}-\sum_{1\leq i,j\leq n,\ j\neq i,\,j\neq i-1\,(\mathrm{mod}\,n)} u_iX_j,\,\,\,(X_0=X_n). \end{equation} We introduce the vectorial conserved quantities \begin{equation} \mathbb{I}_{12}=(I_1,I_2),\,\,\mathbb{I}_{13}=(I_1,I_3),\,\,\mathbb{I}_{23}=(I_2,I_3),\,\,\mathbb{I}_{123}=(I_1,I_2,I_3). \end{equation} {\bf I. The case $n$ odd}\vspace{2mm} \noindent For this case we obtain as invariant sets only subsets of the set of equilibrium points or the empty set. More precisely, $M_{(0)}^{I_{1}}=M_{(0)}^{I_{2}}=M_{(0)}^{\mathbb{I}_{12}}=M_{(1)}^{\mathbb{I}_{12}}= M_{(0)}^{\mathbb{I}_{13}}=M_{(0)}^{\mathbb{I}_{23}}=M_{(0)}^{\mathbb{I}_{123}}=M_{(1)}^{\mathbb{I}_{123}}=\emptyset$.
The following are subsets of the set of equilibrium points: \begin{equation*} M_{(0)}^{I_3}=\{(0,...,0,0,...,0)\}, \end{equation*} \begin{equation*} M_{(1)}^{\mathbb{I}_{13}}=\{(X,...,X,0,...,0)\,|\,X\in \mathbb{R}\}, \end{equation*} \begin{equation*} M_{(1)}^{\mathbb{I}_{23}}=\{(-\frac{n-1}{2}u^2,...,-\frac{n-1}{2}u^2,u,...,u)\,|\,u\in \mathbb{R}\}, \end{equation*} \begin{equation*} M_{(2)}^{\mathbb{I}_{123}}=\{(X,...,X,u,...,u)\,|\,X,u\in \mathbb{R}\}. \end{equation*} {\bf II. The case $n$ even}\vspace{2mm} \noindent We have $M_{(0)}^{I_{1}}=M_{(0)}^{I_{2}}=M_{(0)}^{\mathbb{I}_{12}}=M_{(1)}^{\mathbb{I}_{12}}= M_{(0)}^{\mathbb{I}_{13}}=M_{(0)}^{\mathbb{I}_{23}}=M_{(0)}^{\mathbb{I}_{123}}=M_{(1)}^{\mathbb{I}_{123}}=\emptyset$. As nontrivial invariant sets we have the following: \begin{equation*} M_{(0)}^{I_3}=\{(X_1,X_2,...,X_1,X_2,u_1,u_2,...,u_1,u_2)\,|\,X_1+X_2=u_1u_2,\,u_1+u_2=0\}, \end{equation*} \begin{equation*} M_{(1)}^{\mathbb{I}_{13}}=\{(X_1,X_2,...,X_1,X_2,u_1,u_2,...,u_1,u_2)\,|\,u_1+u_2=0\}, \end{equation*} \begin{equation*} M_{(1)}^{\mathbb{I}_{23}}=\{(X_1,X_2,...,X_1,X_2,u_1,u_2,...,u_1,u_2)\,|\,X_1+X_2=-\frac{n}{4}(u_1+u_2)^2+u_1u_2\}, \end{equation*} \begin{equation*} M_{(2)}^{\mathbb{I}_{123}}=\{(X_1,X_2,...,X_1,X_2,u_1,u_2,...,u_1,u_2)\,|\,X_1,X_2,u_1,u_2\in \mathbb{R}\}. \end{equation*} The largest invariant set is $M_{(2)}^{\mathbb{I}_{123}}$, and the restricted dynamics is the dynamics of two particles \begin{equation*} \left\{ \begin{array}{ll} \dot{X}_1=X_1(u_1-u_2) \\ \dot{X}_2=X_2(u_2-u_1) \\ \dot{u}_1=X_2-X_1 \\ \dot{u}_2=X_1-X_2. \\ \end{array} \right. \end{equation*} On the invariant set $M_{(1)}^{\mathbb{I}_{23}}$ we have the above dynamics subject to $X_1+X_2=-\frac{n}{4}(u_1+u_2)^2+u_1u_2$. On the invariant set $M_{(1)}^{\mathbb{I}_{13}}$ we have the above dynamics subject to $u_1+u_2=0$, and on the invariant set $M_{(0)}^{I_{3}}$ we have the above dynamics subject to $u_1+u_2=0$ and $X_1+X_2=u_1u_2$.
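The invariance of $M_{(2)}^{\mathbb{I}_{123}}$ and the reduction to two particles can be verified numerically. The sketch below is an illustration, not part of the original computations; the lattice size $n=6$, the initial data and the fixed-step RK4 integrator are arbitrary choices. It integrates the periodic system \eqref{sistemul de baza} from a two-periodic initial condition and checks that the pattern $(X_1,X_2,...,X_1,X_2,u_1,u_2,...,u_1,u_2)$ persists, that $I_1=\sum u_i$ keeps its initial value, and that the motion agrees with the two-particle dynamics above.

```python
import numpy as np

def toda_field(z, n):
    """Periodic Toda lattice in the variables z = (X_1..X_n, u_1..u_n):
    Xdot_i = X_i (u_i - u_{i+1}),  udot_i = X_{i-1} - X_i  (indices mod n)."""
    X, u = z[:n], z[n:]
    x_dot = X * (u - np.roll(u, -1))       # roll(u, -1)_i = u_{i+1}
    u_dot = np.roll(X, 1) - X              # roll(X, 1)_i = X_{i-1}
    return np.concatenate([x_dot, u_dot])

def reduced_field(w):
    """Two-particle dynamics on the invariant set M_{(2)}^{I_123}."""
    X1, X2, u1, u2 = w
    return np.array([X1 * (u1 - u2), X2 * (u2 - u1), X2 - X1, X1 - X2])

def rk4(field, z0, t1, steps=4000):
    z, h = np.array(z0, float), t1 / steps
    for _ in range(steps):
        k1 = field(z)
        k2 = field(z + h / 2 * k1)
        k3 = field(z + h / 2 * k2)
        k4 = field(z + h * k3)
        z = z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

n = 6                                      # an even lattice size
X1, X2, u1, u2 = 0.7, 0.3, 0.4, -0.1       # arbitrary two-periodic data
z0 = np.concatenate([np.tile([X1, X2], n // 2), np.tile([u1, u2], n // 2)])

z = rk4(lambda s: toda_field(s, n), z0, 3.0)
X, u = z[:n], z[n:]

# The two-periodic pattern is preserved along the flow ...
pattern_error = max(np.max(np.abs(X[::2] - X[0])), np.max(np.abs(X[1::2] - X[1])),
                    np.max(np.abs(u[::2] - u[0])), np.max(np.abs(u[1::2] - u[1])))
# ... the conserved quantity I_1 keeps its initial value ...
i1_drift = abs(np.sum(u) - np.sum(z0[n:]))
# ... and the motion agrees with the reduced two-particle system.
w = rk4(reduced_field, np.array([X1, X2, u1, u2]), 3.0)
reduction_error = np.max(np.abs(w - np.array([X[0], X[1], u[0], u[1]])))
print(pattern_error, i1_drift, reduction_error)
```

Because the vector field commutes with the shift by two lattice sites, the two-periodic pattern is preserved by every RK4 stage as well, so all three reported errors stay at round-off level.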
For the sets $M_{(0)}^{I_3}$ and $M_{(1)}^{\mathbb{I}_{23}}$ the variables $X_i$ must also take negative values; this is correct from a mathematical point of view and yields solutions of the system \eqref{sistemul de baza}. However, since the mechanical system is given by \eqref{sistemul mecanic} and the variables are obtained through the change of variables \eqref{schimbare coordonate}, the $X_i$ have to be strictly positive in order to have a physical meaning. Consequently, from a mechanical point of view only the sets $M_{(1)}^{\mathbb{I}_{13}}$ and $M_{(2)}^{\mathbb{I}_{123}}$ are meaningful. The computations can be found in the Appendix. \subsection{The case of the non-periodic lattice} It is known that if we consider the matrices $$L=\left( \begin{array}{ccccc} u_1 & X_1 & 0 & ... & 0 \\ 1 & u_2 & X_2 & ... & 0 \\ ... & ... & ... & ... & ... \\ 0 & ... & 1 & u_{n-1} & X_{n-1} \\ 0 & ... & 0 & 1 & u_n \\ \end{array} \right)\,\,\,\texttt{and}\,\,\,B=\left( \begin{array}{ccccc} 0 & -X_1 & 0 & ... & 0 \\ 0 & 0 & -X_2 & ... & 0 \\ ... & ... & ... & ... & ... \\ 0 & ... & 0 & 0 & -X_{n-1} \\ 0 & ... & 0 & 0 & 0 \\ \end{array} \right),$$ then the system \eqref{sistemul de baza}, in this case, has the Lax form \begin{equation} \dot{L}=[B,L], \end{equation} where $[B,L]=BL-LB$. Using Flaschka's theorem (see \cite{flaschkaperiodic}) we have the following scalar conserved quantities depending on the variables $(X_1,...,X_{n-1},u_1,...,u_n)$: \begin{equation} F_k=\frac{1}{k}\mathrm{tr}(L^k),\,\,\,k\in\{1,...,n\}. \end{equation} For $k\in\{1,2,3\}$ we have $$F_1=\sum_{i=1}^n u_i$$ $$F_2=\sum_{i=1}^n (X_i+\frac{u_i^2}{2})$$ $$F_3=\sum_{i=1}^{n-1} X_i(u_i+u_{i+1})+\frac{1}{3}\sum_{i=1}^n u_i^3.$$ We introduce the vectorial conserved quantities \begin{equation} \mathbb{F}_{12}=(F_1,F_2),\,\,\mathbb{F}_{13}=(F_1,F_3),\,\,\mathbb{F}_{23}=(F_2,F_3),\,\,\mathbb{F}_{123}=(F_1,F_2,F_3). \end{equation} As before we will distinguish two cases.\vspace{2mm} {\bf I.
The case $n$ odd}\vspace{2mm} \noindent In this case we obtain as invariant sets only subsets of the set of equilibrium points or the empty set. More precisely, $M_{(0)}^{F_{1}}=M_{(0)}^{F_{2}}=M_{(0)}^{\mathbb{F}_{12}}=M_{(1)}^{\mathbb{F}_{12}}= M_{(0)}^{\mathbb{F}_{13}}=M_{(0)}^{\mathbb{F}_{23}}=M_{(0)}^{\mathbb{F}_{123}}=M_{(1)}^{\mathbb{F}_{123}}=\emptyset$. The following are subsets of the set of equilibrium points: \begin{equation*} M_{(0)}^{F_3}=M_{(1)}^{\mathbb{F}_{13}}=\{(\underbrace{0,...,0}_{n-1},\underbrace{0,...,0}_{n})\}, \end{equation*} \begin{equation*} M_{(1)}^{\mathbb{F}_{23}}=\{(\underbrace{0,...,0}_{n-1},\underbrace{u_1,0,...,u_1,0,u_1}_n)\,|\,u_1\in \mathbb{R}\}\cup\{(\underbrace{0,...,0}_{n-1}, \underbrace{0,u_2,...,0,u_2,0}_n)\,|\,u_2\in\mathbb{R}\}, \end{equation*} \begin{equation*} M_{(2)}^{\mathbb{F}_{123}}=\{(\underbrace{0,...,0}_{n-1},\underbrace{u_1,u_2,...,u_1}_{n})\,|\,u_1,u_2\in \mathbb{R}\}. \end{equation*} {\bf II. The case $n$ even}\vspace{2mm} \noindent We have $M_{(0)}^{F_{1}}=M_{(0)}^{F_{2}}=M_{(0)}^{\mathbb{F}_{12}}=M_{(1)}^{\mathbb{F}_{12}}= M_{(0)}^{\mathbb{F}_{13}}=M_{(0)}^{\mathbb{F}_{23}}=M_{(0)}^{\mathbb{F}_{123}}=M_{(1)}^{\mathbb{F}_{123}}=\emptyset$. As nontrivial invariant sets we have the following: \begin{equation*} M_{(0)}^{F_3}=\{(\underbrace{X,0,...,X,0,X}_{n-1},\underbrace{u_1,u_2,...,u_1,u_2}_n)\,|\,u_1+u_2=0,\,X=u_1u_2\}, \end{equation*} \begin{equation*} M_{(1)}^{\mathbb{F}_{13}}=\{(\underbrace{X,0,...,X,0,X}_{n-1},\underbrace{u_1,u_2,...,u_1,u_2}_n)\,|\,u_1+u_2=0\}, \end{equation*} \begin{equation*} M_{(1)}^{\mathbb{F}_{23}}=\{(\underbrace{X,0,...,X,0,X}_{n-1},\underbrace{u_1,u_2,...,u_1,u_2}_n)\,|\,X=u_1u_2\}, \end{equation*} \begin{equation*} M_{(2)}^{\mathbb{F}_{123}}=\{(\underbrace{X,0,...,X,0,X}_{n-1},\underbrace{u_1,u_2,...,u_1,u_2}_n)\,|\,X,u_1,u_2\in \mathbb{R}\}.
\end{equation*} We obtain $M_{(2)}^{\mathbb{F}_{123}}$ as the largest invariant set and the restricted dynamics is given by \begin{equation*} \left\{ \begin{array}{ll} \dot{X}=X(u_1-u_2) \\ \dot{u}_1=-X \\ \dot{u}_2=X \\ \end{array} \right. \end{equation*} On the invariant set $M_{(1)}^{\mathbb{F}_{23}}$ we have the above dynamics subject to $X=u_1u_2$. On the invariant set $M_{(1)}^{\mathbb{F}_{13}}$ we have the above dynamics subject to $u_1+u_2=0$, and on the invariant set $M_{(0)}^{F_{3}}$ we have the above dynamics subject to $u_1+u_2=0$ and $X=u_1u_2$. For the set $M_{(2)}^{\mathbb{F}_{123}}$ the variables $X_i$ with $i$ even are all equal to zero, which is correct from a mathematical point of view. As before, the mechanical system is given by \eqref{sistemul mecanic} and the variables are obtained through the change of variables \eqref{schimbare coordonate}; consequently, the variables $X_i$ have to be strictly positive in order to have a physical meaning. The computations can be found in the Appendix. \section{Appendix} {\bf The computations for the case of the periodic lattice.}\vspace{2mm} \noindent We introduce the notations \begin{equation} U=\sum_{1\leq i\leq n}u_i,\,\,\,V=\sum_{1\leq i_1<i_2\leq n}u_{i_1}u_{i_2},\,\,\,Y=\sum_{1\leq j\leq n}X_j. \end{equation} We observe that \begin{equation}\label{2V} 2V=U^2-\sum_{i=1}^n u_i^2. \end{equation} With these notations we have the following: \begin{equation} \nabla I_1=(0,...,0,1,...,1) \end{equation} \begin{equation} \nabla I_2=(-1,...,-1,\underbrace{U-u_1}_{n+1},...,\underbrace{U-u_k}_{n+k},...,\underbrace{U-u_n}_{2n}) \end{equation} \begin{equation} \nabla I_3=(...,\underbrace{-(U-u_k-u_{k+1})}_{k},...,\underbrace{V-u_k(U-u_k)-(Y-X_{k-1}-X_k)}_{n+k},...). \end{equation} \noindent {\bf The study of $M_{(0)}^{I_3}$}.
\noindent The elements of $M_{(0)}^{I_3}$ are the solutions of the algebraic system \begin{equation}\label{sistemul1} \left\{ \begin{array}{ll} U-u_k-u_{k+1}=0, \\ V-u_k(U-u_k)-(Y-X_{k-1}-X_k)=0,\,\,\,\forall k\in\{1,...,n\}. \end{array} \right. \end{equation} Adding the first $n$ equations, we obtain $U=0$ and $u_1+u_2=u_2+u_3=...=u_{n-1}+u_n=u_n+u_1.$ We deduce the following results:\vspace{2mm} \noindent {\bf The case $n\in 2\mathbb{N}+1$}. In this situation we have $u_1=u_2=...=u_n=0$. Adding the last $n$ equations of \eqref{sistemul1} we obtain $Y=0\,\,\,\texttt{and}\,\,\,X_1+X_2=X_2+X_3=...=X_{n-1}+X_n=X_n+X_1.$ This implies that $X_1=...=X_n=0$ and consequently \begin{equation*} M_{(0)}^{I_3}=\{(0,...,0,0,...,0)\}. \end{equation*} \noindent {\bf The case $n\in 2\mathbb{N}$}. In this situation $u_i=(-1)^{i+1}u,\,\,\,u\in \mathbb{R}$. In this case, using \eqref{2V}, we have $V=-\frac{n}{2}u^2$. The last $n$ equations of \eqref{sistemul1} become $$-Y+X_{k-1}+X_k=(\frac{n}{2}-1)u^2,\,\,\,\forall k\in \{1,...,n\}.$$ Adding these relations we obtain $Y=-\frac{n}{2}u^2$ and consequently $X_1+X_2=X_2+X_3=...=X_{n-1}+X_n=X_n+X_1=-u^2$, which implies $X_1=X_3=...=X_{n-1}\,\,\,\texttt{and}\,\,\,X_2=X_4=...=X_{n}.$ In this case we have \begin{equation*} M_{(0)}^{I_3}=\{(X_1,X_2,...,X_1,X_2,u_1,u_2,...,u_1,u_2)\,|\,X_1+X_2=u_1u_2,\,u_1+u_2=0\}. \end{equation*} \noindent {\bf The study of $M_{(0)}^{\mathbb{I}_{ij}}$, $M_{(1)}^{\mathbb{I}_{ij}}$ with $(i,j)\in \{(1,2),(1,3),(2,3)\}$.}\vspace{2mm} \noindent A point $(X_1,...,X_n,u_1,...,u_n)\in M_{(1)}^{\mathbb{I}_{13}}$ if and only if we have, for all $k,q\in \{1,...,n\}$, \begin{equation} \left\{ \begin{array}{ll} U-u_k-u_{k+1}=0 \\ V-u_k(U-u_k)-(Y-X_{k-1}-X_k)=V-u_q(U-u_q)-(Y-X_{q-1}-X_q). \\ \end{array} \right. \end{equation} Adding the first $n$ equations we obtain $U=0$, $u_1+u_2=u_2+u_3=...=u_{n-1}+u_n=u_n+u_1$ and consequently $u_i=(-1)^{i+1}u,\,\,\,u\in \mathbb{R}$.
The last $n$ equations become $X_1+X_2=X_2+X_3=...=X_{n-1}+X_n=X_n+X_1$. \vspace{2mm} \noindent {\bf The case $n\in 2\mathbb{N}+1$}. In this case we have $u_1=u_2=...=u_n=0$ and $X_1=...=X_n=X\in \mathbb{R}$, hence \begin{equation*} M_{(1)}^{\mathbb{I}_{13}}=\{(X,...,X,0,...,0)\,|\,X\in \mathbb{R}\}. \end{equation*} \noindent {\bf The case $n\in 2\mathbb{N}$}. It is easy to see that $X_1=X_3=...=X_{n-1}\,\,\,\texttt{and}\,\,\,X_2=X_4=...=X_{n}.$ In this case we have \begin{equation*} M_{(1)}^{\mathbb{I}_{13}}=\{(X_1,X_2,...,X_1,X_2,u_1,u_2,...,u_1,u_2)\,|\,u_1+u_2=0\}. \end{equation*} A point belongs to the set $M_{(1)}^{\mathbb{I}_{23}}$ if and only if \begin{equation}\label{conditii} \det(A_{kq})=0,\,\,\,\det(B_{kq})=0,\,\,\,\det(C_{kq})=0,\,\,\,\forall k,q\in \{1,...,n\}, \end{equation} where \begin{equation*} A_{kq}=\left( \begin{array}{cc} -1 & -1 \\ -(U-u_k-u_{k+1}) & -(U-u_q-u_{q+1}) \\ \end{array} \right) \end{equation*} \begin{equation*} B_{kq}=\left( \begin{array}{cc} -1 & U-u_q \\ -(U-u_k-u_{k+1}) & V-u_q(U-u_q)-(Y-X_{q-1}-X_q) \\ \end{array} \right) \end{equation*} \begin{equation*} C_{kq}=\left( \begin{array}{cc} U-u_k & U-u_q \\ V-u_k(U-u_k)-(Y-X_{k-1}-X_k) & V-u_q(U-u_q)-(Y-X_{q-1}-X_q) \\ \end{array} \right). \end{equation*} Using the expression of $A_{kq}$ we deduce that $u_1+u_2=u_2+u_3=...=u_{n-1}+u_n=u_n+u_1$.
\vspace{2mm} \noindent {\bf The case $n\in 2\mathbb{N}+1$.} In this case we have $u_1=u_2=...=u_n=u$, $U=nu$, $V=\frac{(n-1)n}{2}u^2$ and $$B_{kq}=\left( \begin{array}{cc} -1 & (n-1)u \\ -(n-2)u & \frac{(n-2)(n-1)}{2}u^2-(Y-X_q-X_{q-1}) \\ \end{array} \right).$$ Using the condition $\det(B_{kq})=0$, we obtain $$Y-X_q-X_{q-1}=-\frac{(n-2)(n-1)}{2}u^2,\,\,\,\forall q\in \{1,...,n\}.$$ Adding these relations we have $$Y=-\frac{(n-1)n}{2}u^2,\,\,\,\texttt{and}\,\,\,X_1=X_2=...=X_n=X\,\,\,\texttt{and}\,\,\,X=-\frac{n-1}{2}u^2.$$ With this relation we have $$C_{kq}=\left( \begin{array}{cc} (n-1)u & (n-1)u \\ (n-2)(n-1)u^2 & (n-2)(n-1)u^2 \\ \end{array} \right).$$ We observe that the equality $\det(C_{kq})=0$ is satisfied. In conclusion we have \begin{equation*} M_{(1)}^{\mathbb{I}_{23}}=\{(-\frac{n-1}{2}u^2,...,-\frac{n-1}{2}u^2,u,...,u)\,|\,u\in \mathbb{R}\}. \end{equation*} \noindent {\bf The case $n\in 2\mathbb{N}$.} In this case $u_1=u_3=...=u_{n-1}$, $u_2=u_4=...=u_n$ and we have $U=\frac{n}{2}(u_1+u_2)$. Using \eqref{2V} we obtain $V=\frac{n(n-2)}{8}(u_1^2+u_2^2)+\frac{n^2}{4}u_1u_2$. From the relation $\det(B_{kq})=0$, we have \begin{equation}\label{relatieY} Y-X_{q-1}-X_q=V-(U-u_q)(\frac{n-2}{n}U+u_q). \end{equation} Consequently, $X_1+X_2=X_3+X_4=...=X_{n-1}+X_n\,\,\,\texttt{and}\,\,\,X_2+X_3=X_4+X_5=...=X_{n}+X_1$, which further implies that $X_1=X_3=...=X_{n-1}$, $X_2=X_4=...=X_n\,\,\,\texttt{and}\,\,\,Y=\frac{n}{2}(X_1+X_2).$ By substitution into \eqref{relatieY} we obtain $X_1+X_2=-\frac{n}{4}(u_1+u_2)^2+u_1u_2.$ We have \begin{equation*} M_{(1)}^{\mathbb{I}_{23}}=\{(X_1,X_2,...,X_1,X_2,u_1,u_2,...,u_1,u_2)\,|\,X_1+X_2=-\frac{n}{4}(u_1+u_2)^2+u_1u_2\}. \end{equation*} \noindent {\bf The study of $M_{(2)}^{\mathbb{I}_{123}}$}.
\vspace{2mm} \noindent We introduce, for $k,q,r\in \{1,...,n\}$, the matrices \begin{equation*} A_{kqr}=\left( \begin{array}{ccc} 0 & 0 & 1 \\ -1 & -1 & U-u_r \\ -(U-u_k-u_{k+1}) & -(U-u_q-u_{q+1}) & V_r \\ \end{array} \right) \end{equation*} \begin{equation*} B_{kqr}=\left( \begin{array}{ccc} 0 & 1 & 1 \\ -1 & U-u_q & U-u_r \\ -(U-u_k-u_{k+1}) & V_q & V_r \\ \end{array} \right) \end{equation*} \begin{equation*} C_{kqr}=\left( \begin{array}{ccc} 1 & 1 & 1 \\ U-u_k & U-u_q & U-u_r \\ V_k & V_q & V_r \\ \end{array} \right), \end{equation*} where $V_s=V-u_s(U-u_s)-(Y-X_{s-1}-X_s)$. We observe that $rank (\nabla \mathbb{I}_{123})=2\,\,\Leftrightarrow\,\,\det(A_{kqr})=\det(B_{kqr})=\det(C_{kqr})=0\,\,\forall k,q,r\in\{1,...,n\}$. The equations $\det(A_{kqr})=0\,\,\forall k,q,r\in\{1,...,n\}$ give us $$u_1+u_2=u_2+u_3=...=u_{n-1}+u_n=u_n+u_1.$$ \noindent {\bf The case $n\in 2\mathbb{N}+1$}. We have $u:=u_1=...=u_n$. With this notation we obtain $U=nu$, $V=\frac{n(n-1)}{2}u^2$ and $V_k=\frac{(n-1)(n-2)}{2}u^2-(Y-X_{k-1}-X_k)$. It is easy to see that $\det(C_{kqr})=0$. We have the equivalences $$\det(B_{kqr})=0\,\,\Leftrightarrow\,\,\det \left( \begin{array}{ccc} 0 & 0 & 1 \\ -1 & 0 & (n-1)u \\ V_k & V_q-V_r & V_r \\ \end{array} \right)=0$$ $$\Leftrightarrow\,\,V_r=V_q\,\,\Leftrightarrow\,\,X_{r-1}+X_r=X_{q-1}+X_q.$$ Because $n\in 2\mathbb{N}+1$, we obtain $X_1=X_2=...=X_n.$ In this case we have \begin{equation*} M_{(2)}^{\mathbb{I}_{123}}=\{(X,...,X,u,...,u)\,|\,X,u\in \mathbb{R}\}. \end{equation*} \noindent {\bf The case $n\in 2\mathbb{N}$}. We have $u_1=u_3=...=u_{n-1}$, $u_2=u_4=...=u_n$. By a direct computation we obtain $$\det(C_{kqr})=0\,\,\Leftrightarrow\,\,\det \left( \begin{array}{ccc} 1 & 0 & 0 \\ U-u_k & u_k-u_q & u_k-u_r \\ V_k & V_q-V_k & V_r-V_k \\ \end{array} \right)=0.$$ If $q=k+1$ and $r=k+2$ we obtain $V_k=V_{k+2}$, which implies that $X_1+X_2=X_3+X_4=...=X_{n-1}+X_n$ and $X_2+X_3=X_4+X_5=...=X_n+X_1$.
Consequently, we have $X_1=X_3=...=X_{n-1}$ and $X_2=X_4=...=X_n.$ We observe that if $s-t\in 2\mathbb{Z}$, then $u_s=u_t$ and $V_s=V_t$, which implies that $\det(C_{kqr})=0\,\,\,\forall k,q,r\in \{1,...,n\}$. For the matrices $B_{kqr}$ we have the following properties: \noindent i) $\det B_{111}=\det B_{112}=0$ (by direct computation); \noindent ii) $\det B_{kqr}=-\det B_{krq}$; \noindent iii) if $q_1-q_2\in 2\mathbb{N}$ and $r_1-r_2\in 2\mathbb{N}$, then $\det B_{k_1q_1r_1}=\det B_{k_2q_2r_2}$. \noindent Using these properties we deduce that $\det B_{kqr}=0,\,\,\,\forall k,q,r\in \{1,...,n\}$ and we obtain \begin{equation*} M_{(2)}^{\mathbb{I}_{123}}=\{(X_1,X_2,...,X_1,X_2,u_1,u_2,...,u_1,u_2)\,|\,X_1,X_2,u_1,u_2\in \mathbb{R}\}. \end{equation*} \noindent {\bf The computations for the case of the non-periodic lattice.}\vspace{2mm} \noindent If we consider the variables $(X_1,...,X_{n-1},u_1,...,u_n)$ we obtain \begin{equation} \nabla F_1=(\underbrace{0,...,0}_{n-1},\underbrace{1,...,1}_n) \end{equation} \begin{equation} \nabla F_2=(\underbrace{1,...,1}_{n-1},u_1,...,u_n) \end{equation} \begin{equation} \nabla F_3=(u_1+u_2,...,u_{n-1}+u_n,X_1+u_1^2,X_2+X_1+u_2^2,...,X_{n-2}+X_{n-1}+u_{n-1}^2,X_{n-1}+u_n^2). \end{equation} \noindent {\bf The study of $M_{(0)}^{F_3}$}. \noindent The elements of $M_{(0)}^{F_3}$ are the solutions of the system $$\left\{ \begin{array}{ll} u_1+u_2=u_2+u_3=...=u_{n-1}+u_n=0 \\ X_1+u_1^2=X_2+X_1+u_2^2=...=X_{n-2}+X_{n-1}+u_{n-1}^2=X_{n-1}+u_n^2=0. \\ \end{array} \right.$$ \noindent {\bf The case $n\in 2\mathbb{N}+1$}. We obtain $$M_{(0)}^{F_3}=\{(\underbrace{0,...,0}_{n-1},\underbrace{0,...,0}_n)\}.$$ \noindent {\bf The case $n\in 2\mathbb{N}$}. In this situation $u_1=u_3=...=u_{n-1}=u,\,\,\,u_2=u_4=...=u_n=-u$ and $X_k=-u^2$ if $k\in 2\mathbb{N}+1$ and $X_k=0$ if $k\in 2\mathbb{N}$.
We have $$M_{(0)}^{F_3}=\{(\underbrace{X,0,...,X,0,X}_{n-1},\underbrace{u_1,u_2,...,u_1,u_2}_n)\,|\,u_1+u_2=0,\,X=u_1u_2\}.$$ \noindent {\bf The study of $M_{(0)}^{\mathbb{F}_{ij}}$, $M_{(1)}^{\mathbb{F}_{ij}}$ with $(i,j)\in \{(1,2),(1,3),(2,3)\}$}.\vspace{2mm} \noindent The elements of $M_{(1)}^{\mathbb{F}_{13}}$ are the solutions of the system $$\left\{ \begin{array}{ll} u_1+u_2=u_2+u_3=...=u_{n-1}+u_n=0 \\ X_1+u_1^2=X_2+X_1+u_2^2=...=X_{n-2}+X_{n-1}+u_{n-1}^2=X_{n-1}+u_n^2. \\ \end{array} \right.$$ \noindent {\bf The case $n\in 2\mathbb{N}+1$}. We obtain $$M_{(1)}^{\mathbb{F}_{13}}=\{(\underbrace{0,...,0}_{n-1},\underbrace{0,...,0}_n)\}.$$ \noindent {\bf The case $n\in 2\mathbb{N}$}. In this situation $u_1=u_3=...=u_{n-1}=u,\,\,\,u_2=u_4=...=u_n=-u$ and $X_k=X$ if $k\in 2\mathbb{N}+1$ and $X_k=0$ if $k\in 2\mathbb{N}$. We have $$M_{(1)}^{\mathbb{F}_{13}}=\{(\underbrace{X,0,...,X,0,X}_{n-1},\underbrace{u_1,u_2,...,u_1,u_2}_n)\,|\,u_1+u_2=0\}.$$ \noindent For a point of the set $M_{(1)}^{\mathbb{F}_{23}}$ we have \begin{equation}\label{conditii2} \det(A_{kq})=0,\,\,\,\det(B_{kq})=0,\,\,\,\det(C_{kq})=0,\,\,\,\forall k,q, \end{equation} where \begin{equation} A_{kq}=\left( \begin{array}{cc} 1 & 1 \\ u_k+u_{k+1} & u_q+u_{q+1} \\ \end{array} \right),\,\,\,k,q\in\{1,...,n-1\} \end{equation} \begin{equation} B_{kq}=\left( \begin{array}{cc} 1 & u_q \\ u_k+u_{k+1} & X_{q-1}+X_q+u_q^2 \\ \end{array} \right),\,\,\,k\in\{1,...,n-1\}\,\,\,\texttt{and}\,\,\,q\in\{1,...,n\} \end{equation} \begin{equation} C_{kq}=\left( \begin{array}{cc} u_k & u_q \\ X_{k-1}+X_k+u_k^2 & X_{q-1}+X_q+u_q^2 \\ \end{array} \right),\,\,\,k,q\in\{1,...,n\}. \end{equation} Using the expression of $A_{kq}$ we deduce that $u_1+u_2=u_2+u_3=...=u_{n-1}+u_n$ and consequently $u_k=u_q$ if $k-q\in \,2\mathbb{Z}$.
The matrices $B_{kq}$ have the form $$B_{kq}=\left( \begin{array}{cc} 1 & u_1 \\ u_1+u_2 & X_{q-1}+X_q+u_1^2 \\ \end{array} \right)\,\,\,\texttt{if}\,\,\,q\in 2\mathbb{N}+1$$ and $$B_{kq}=\left( \begin{array}{cc} 1 & u_2 \\ u_1+u_2 & X_{q-1}+X_q+u_2^2 \\ \end{array} \right)\,\,\,\texttt{if}\,\,\,q\in 2\mathbb{N}.$$ We have $\det B_{kq}=0$ if and only if $X_1=X_1+X_2=...=X_{n-2}+X_{n-1}=X_{n-1}=u_1u_2$.\vspace{2mm} \noindent {\bf The case $n\in 2\mathbb{N}+1$}. It is easy to see that $X_1=X_2=...=X_{n-1}=0$ and $u_1u_2=0$. We observe that $$M_{(1)}^{\mathbb{F}_{23}}=\{(\underbrace{0,...,0}_{n-1},\underbrace{u_1,0,...,u_1,0,u_1}_n)\,|\,u_1\in \mathbb{R}\}\cup\{(\underbrace{0,...,0}_{n-1}, \underbrace{0,u_2,...,0,u_2,0}_n)\,|\,u_2\in\mathbb{R}\}.$$ \noindent {\bf The case $n\in 2\mathbb{N}$}. We deduce that $X_1=X_3=...=X_{n-1}=X$, $X_2=X_4=...=X_{n-2}=0$ and $u_1u_2=X$. In this case we have $$M_{(1)}^{\mathbb{F}_{23}}=\{(\underbrace{X,0,...,X,0,X}_{n-1},\underbrace{u_1,u_2,...,u_1,u_2}_{n})\,|\,X=u_1u_2\}.$$ \noindent {\bf The study of $M_{(2)}^{\mathbb{F}_{123}}$}.
\noindent For a point of the set $M_{(2)}^{\mathbb{F}_{123}}$ we have \begin{equation}\label{rang2} \det(A_{kqr})=0,\,\,\,\det(B_{kqr})=0,\,\,\,\det(C_{kqr})=0,\,\,\,\forall k,q,r \end{equation} where \begin{equation} A_{kqr}=\left( \begin{array}{ccc} 0 & 0 & 1 \\ 1 & 1 & u_r \\ u_k+u_{k+1} & u_q+u_{q+1} & X_{r-1}+X_r+u_r^2 \\ \end{array} \right),\,\,\,k,q\in\{1,...,n-1\},\,\,\,r\in\{1,...,n\} \end{equation} \begin{equation} B_{kqr}=\left( \begin{array}{ccc} 0 & 1 & 1 \\ 1 & u_q & u_r \\ u_k+u_{k+1} & X_{q-1}+X_q+u_q^2 & X_{r-1}+X_r+u_r^2 \\ \end{array} \right),\,\,\,k\in\{1,...,n-1\},\,q,r\in\{1,...,n\} \end{equation} \begin{equation} C_{kqr}=\left( \begin{array}{ccc} 1 & 1 & 1 \\ u_k & u_q & u_r\\ X_{k-1}+X_k+u_k^2 & X_{q-1}+X_q+u_q^2 & X_{r-1}+X_r+u_r^2 \\ \end{array} \right),\,\,\,k,q,r\in\{1,...,n\}. \end{equation} Using the expression of $A_{kqr}$ we deduce that $u_1+u_2=u_2+u_3=...=u_{n-1}+u_n$ and consequently $u_k=u_q$ if $k-q\in \,2\mathbb{Z}$. For $k\in \{1,...,n-1\}$, we have $$\det B_{k,k+1,k}=\det \left( \begin{array}{ccc} 0 & 0 & 1 \\ 1 & u_{k+1}-u_k & u_k \\ u_k+u_{k+1} & X_{k+1}-X_{k-1}+u_{k+1}^2-u_k^2 & X_{k-1}+X_k+u_k^2 \\ \end{array} \right)=0$$ and we deduce that $X_{k+1}=X_{k-1}$.\vspace{2mm} \noindent {\bf The case $n\in 2\mathbb{N}+1$}. It is easy to see that $X_1=X_2=...=X_{n-1}=0$, $u_1=u_3=...=u_n$ and $u_2=u_4=...=u_{n-1}$. All the conditions \eqref{rang2} are verified and the set is $$M_{(2)}^{\mathbb{F}_{123}}=\{(\underbrace{0,...,0}_{n-1},\underbrace{u_1,u_2,...,u_1}_{n})\,|\,u_1,u_2\in \mathbb{R}\}.$$ \noindent {\bf The case $n\in 2\mathbb{N}$}. It is easy to see that $X_1=X_3=...=X_{n-1}=X$, $X_2=X_4=...=X_{n-2}=0$, $u_1=u_3=...=u_{n-1}$ and $u_2=u_4=...=u_n$. All the conditions \eqref{rang2} are verified and the set is $$M_{(2)}^{\mathbb{F}_{123}}=\{(\underbrace{X,0,...,X,0,X}_{n-1},\underbrace{u_1,u_2,...,u_1,u_2}_{n})\,|\,X,u_1,u_2\in \mathbb{R}\}.$$ {\bf Acknowledgments}.
Petre Birtea has been supported by CNCSIS--UEFISCSU, project number PN II - IDEI 1081/2008.
\subsection{Introduction} The first few steps in all approaches to the semantics of dependent type theories remain insufficiently understood. The constructions which have been worked out in detail in the case of a few particular type systems by dedicated authors are being extended to the wide variety of type systems under consideration today by analogy. This is not acceptable in mathematics. Instead we should be able to obtain the required results for new type systems by {\em specialization} of general theorems formulated and proved for abstract objects the instances of which combine together to produce a given type system. One such class of objects is the class of C-systems introduced in \cite{Cartmell0} (see also \cite{Cartmell1}) under the name ``contextual categories''. A modified axiomatics of C-systems and the construction of new C-systems as sub-objects and regular quotients of the existing ones in a way convenient for use in type-theoretic applications are considered in \cite{Csubsystems}. Modules over monads were introduced in \cite{HM2007} in the context of syntax with binding and substitution. In the present paper, after some general comments about monads on $Sets$ and their modules, we construct for any such monad $R$ and a left module $LM$ over $R$ a C-system (contextual category) $CC(R,LM)$. We describe, using the results of \cite{Csubsystems}, all the C-subsystems of $CC(R,LM)$ in terms of objects directly associated with $R$ and $LM$. We then define two additional operations $\sigma$ and $\widetilde{\sigma}$ on $CC(R,LM)$ and describe the regular congruence relations (see \cite{Csubsystems}) on C-subsystems of $CC(R,LM)$ which are compatible in a certain sense with $\sigma$ and $\widetilde{\sigma}$. 
Of particular interest is the case of ``syntactic'' pairs where $R(\{x_1,\dots,x_n\})$ and $LM(\{x_1,\dots,x_n\})$ are the sets of expressions of some kind with free variables from $\{x_1,\dots,x_n\}$ modulo an equivalence relation such as $\alpha$-equivalence. The simplest class of syntactic pairs where $LM=R$ arises from signatures considered in \cite[p.228]{HM2007}. To any such signature $\Sigma$ one associates a class of expressions with bindings and $R(\{x_1,\dots,x_n\})$ is the set of such expressions with free variables from the set $\{x_1,\dots,x_n\}$ modulo the $\alpha$-equivalence. Suppose now that we are given a type theory based on the syntax of expressions specified by $\Sigma$ that is formulated in terms of the four kinds of basic judgements originally introduced by Per Martin-L\"of in \cite[p.161]{ML78}. Since we are only interested in the $\alpha$-equivalence classes of judgements we may assume that the variables declared in the context are taken from the set of natural numbers such that the first declared variable is $1$, the second is $2$ etc. Then, the set of judgements of the form $$(1:A_1,\dots,n:A_n\vdash A\, type)$$ (in the notation of Martin-L\"of ``$A\,type\,(1\in A_1,\dots,n\in A_n)$'') can be identified with the set of judgements of the form $$(1:A_1,\dots,n:A_n, n+1:A\rhd)$$ stating that the context $(1:A_1,\dots,n:A_n, n+1:A)$ is well-formed.
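The renumbering convention can be illustrated concretely. The following Python toy (the nested-tuple representation of type expressions and the labels `U`, `El` are hypothetical, not the paper's formalism) renames the $i$-th declared variable of a context to the number $i$, so that $\alpha$-equivalent contexts acquire the same normal form.

```python
# Toy illustration of the numbered-variable convention: the i-th declared
# variable of a context is renamed to the number i, so alpha-equivalent
# contexts become literally equal.  Representation is hypothetical.

def rename(expr, table):
    """Replace variable-name leaves of a nested tuple according to table."""
    if isinstance(expr, tuple):
        return tuple(rename(e, table) for e in expr)
    return table.get(expr, expr)

def normalize(context):
    """context: list of (variable_name, type_expression) pairs."""
    table, out = {}, []
    for i, (x, ty) in enumerate(context, start=1):
        out.append(rename(ty, table))  # ty may mention earlier variables only
        table[x] = i
    return out

# Two alpha-equivalent contexts get the same normal form.
c1 = [("x", ("U",)), ("y", ("El", "x"))]
c2 = [("a", ("U",)), ("b", ("El", "a"))]
assert normalize(c1) == normalize(c2) == [("U",), ("El", 1)]
```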
With this identification the type theory is specified by four sets $C,\widetilde{C},Ceq$ and $\widetilde{Ceq}$ where $$C \subset \coprod_{n\ge 0} LM(\emptyset)\times\dots\times LM(\{1,\dots,n-1\})$$ $$\widetilde{C}\subset \coprod_{n\ge 0} LM(\emptyset)\times\dots\times LM(\{1,\dots,n-1\})\times R(\{1,\dots,n\})\times LM(\{1,\dots,n\})$$ $$Ceq \subset \coprod_{n\ge 0} LM(\emptyset)\times\dots\times LM(\{1,\dots,n-1\})\times LM(\{1,\dots,n\})^2$$ $$\widetilde{Ceq} \subset \coprod_{n\ge 0} LM(\emptyset)\times\dots\times LM(\{1,\dots,n-1\})\times R(\{1,\dots,n\})^2\times LM(\{1,\dots,n\})$$ On the other hand we show that any pair $(CC,\sim)$, where $CC$ is a sub-C-system of $CC(R,LM)$ and $\sim$ is a regular congruence relation on $CC$, defines four subsets of such form. Proposition \ref{2014.07.10.prop1} spells out the necessary and sufficient conditions that the sets $C,\widetilde{C},Ceq,\widetilde{Ceq}$ should satisfy in order to correspond to a pair $(CC,\sim)$. A wider class of syntactic pairs $(R,LM)$ that arises from nominal signatures is considered in Section \ref{2014.07.22.sec}. This is one of the papers extending the material which I started to work on in \cite{NTS}. I would like to thank the Institut Henri Poincar\'e in Paris and the organizers of the ``Proofs'' trimester for their hospitality during the preparation of this paper. The work on this paper was facilitated by discussions with Richard Garner and Egbert Rijke. Notation: For morphisms $f:X\rightarrow Y$ and $g:Y\rightarrow Z$ we denote their composition as $f\circ g$. For functors $F:{\cal C}\rightarrow {\cal C}'$, $G:{\cal C}'\rightarrow {\cal C}''$ we use the standard notation $G\circ F$ for their composition. \subsection{Left modules over monads} Recall (cf.
\cite{HM2007}) that a monad on a category $\cal C$ is a functor $R:{\cal C}\rightarrow {\cal C}$ together with two families of morphisms: \begin{enumerate} \item for any $X\in {\cal C}$, a morphism $\eta_X:X \rightarrow R(X)$, \item for any $X\in {\cal C}$, a morphism $\mu_X:R(R(X))\rightarrow R(X)$ \end{enumerate} which satisfy certain conditions. For objects $X$, $X'$ and a morphism $f:X'\rightarrow R(X)$, the composition $R(X') \stackrel{R(f)}{\rightarrow} R(R(X))\stackrel{\mu_X}{\rightarrow} R(X)$ is a morphism $bind(f):R(X')\rightarrow R(X)$. This allows one to describe monads as follows: \begin{lemma} \llabel{2014.06.30.l1} The construction outlined above defines an equivalence between (the type of) monads on $\cal C$ and (the type of) collections of data of the form: \begin{enumerate} \item for every object $X$ an object $R(X)$, \item for every object $X$ a morphism $\eta_X: X \rightarrow R(X)$, \item for every two objects $X$, $X'$ and a morphism $f:X\rightarrow R(X')$, a morphism $ bind(f):R(X)\rightarrow R(X')$ \end{enumerate} which satisfy the following conditions: \begin{enumerate} \item for an object $X$, $ bind(\eta_X)=id_{R(X)}$, \item for a morphism $f:X\rightarrow R(X')$, $\eta_X\circ bind(f)=f$, \item for two morphisms $f:X\rightarrow R(X')$, $g:X'\rightarrow R(X'')$, $ bind(f \circ bind(g))= bind(f)\circ bind(g)$. \end{enumerate} \end{lemma} \begin{proof} Straightforward. Cf. \cite{Moggi91}, \cite[Prop. 1]{HM2010}. \end{proof} \begin{lemma} \llabel{2014.07.28.l2} Let $R$ be a monad on the product category ${\cal C}\times {\cal D}$. Let $A\in {\cal D}$. Then the functor $R_{A,1}:X\mapsto pr_{\cal C}(R(X,A))$ has a natural structure of a monad on $\cal C$.
\end{lemma} \begin{proof} One defines the morphisms $\eta_X:X\rightarrow R_{A,1}(X)$ by $$\eta_X := pr_{\cal C}(\eta_{(X,A)})$$ and morphisms $ bind(f):R_{A,1}(X)\rightarrow R_{A,1}(X')$ for $f:X\rightarrow R_{A,1}(X')$ by $$ bind(f) := pr_{\cal C}( bind(f,pr_{\cal D}(\eta_{(X,A)})))$$ The verification of the conditions of Lemma \ref{2014.06.30.l1} is straightforward. \end{proof} The concept of a module over a monad was first explicitly introduced in \cite{HM2007}. \begin{definition} \llabel{2014.07.26.d1} Let $R$ be a monad on a category ${\cal C}$. A left module over $R$ with values in a category $\cal D$ is a functor $LM:{\cal C}\rightarrow {\cal D}$ together with, for all $X, X'\in {\cal C}$ and $f:X\rightarrow R(X')$, a morphism $\rho(f):LM(X)\rightarrow LM(X')$ such that \begin{enumerate} \item $\rho(\eta_X)=Id_{LM(X)}$, \item for $f:X\rightarrow R(X')$, $g:X'\rightarrow R(X'')$, $\rho(f)\circ \rho(g)=\rho(f\circ bind(g))$. \end{enumerate} \end{definition} One verifies easily (cf. \cite[Def. 9]{HM2010}) that a left $R$-module structure on $LM$ is the same as a natural transformation $LM\circ R\rightarrow LM$ which satisfies the expected compatibility conditions with respect to $Id\rightarrow R$ and $R\circ R\rightarrow R$. \begin{lemma} \llabel{2014.07.28.l1} Let $R$ be a monad on a category $\cal C$. Then one has: \begin{enumerate} \item If $LM_1$, $LM_2$ are left $R$-modules with values in ${\cal D}_1$ and ${\cal D}_2$ respectively then the functor $X\mapsto (LM_1(X),LM_2(X))$ has a natural structure of a left $R$-module with values in ${\cal D}_1\times {\cal D}_2$. \item If $LM$ is a left $R$-module with values in $\cal D$ and $F:{\cal D}\rightarrow {\cal D'}$ is a functor then $F\circ LM$ has a natural structure of a left $R$-module with values in $\cal D'$. \end{enumerate} \end{lemma} \begin{proof} Straightforward.
\end{proof} \begin{lemma} \llabel{2014.07.28.l3} Under the assumptions and in the notation of Lemma \ref{2014.07.28.l2} the morphisms $$\rho(f:X\rightarrow R_{A,1}(X'))= bind(f, pr_{\cal D}(\eta_{(X',A)})) : R(X,A)\rightarrow R(X',A)$$ define a structure of a left $R_{A,1}$-module with values in ${\cal C}\times {\cal D}$ on the functor $$M_{A,1}:X\mapsto R(X,A)$$ \end{lemma} \begin{proof} Direct verification of the conditions of Definition \ref{2014.07.26.d1}. \end{proof} In the case of a monad $R$ on $Sets$ and a left $R$-module $LM$ with values in $Sets$, for $E\in LM(\{x_1,\dots,x_n\})$ and $f:\{x_1,\dots,x_n\}\rightarrow R(X')$ such that $f(x_i)=f_i$ we write $\rho(f)(E)$ as $E(f_1/x_1,\dots,f_n/x_n)$. For $E\in LM(\{1,\dots,m\})$ and $n\ge 1$ we set: $$t_n(E)=E[n+1/n,n+2/n+1,\dots,m+1/m]$$ $$s_n(E)=E[n/n+1,n+1/n+2,\dots,m-1/m]$$ If we were numbering elements of a set with $n$ elements from $0$ then we would have $t_n=LM(\partial_{n-1})$ and $s_n=LM(\sigma_{n-1})$ where $\partial_i$ and $\sigma_i$ are the usual generators of the simplicial category. For a monad $R$ on $Sets$ we let $R-cor$ (``R-correspondences'') be the full subcategory of the Kleisli category of $R$ whose objects are finite sets. Recall that the set of morphisms from $X$ to $Y$ in $R-cor$ is the set of maps from $X$ to $R(Y)$ i.e. $R(Y)^X$ (in other words, $R-cor$ is the category of free, finitely generated $R$-algebras). We further let $C(R)$ denote the pre-category\footnote{See the introduction to \cite{Csubsystems}.} with $$Ob(C(R))={\bf N\rm}$$ $$Mor(C(R))=\coprod_{m,n\in{\bf N\rm}} R(\{1,\dots,m\})^n$$ which is equivalent, as a category, to $(R-cor)^{op}$. \begin{remark}\rm A finitary monad (on sets) is a monad $R:Sets\rightarrow Sets$ that, as a functor, commutes with filtering colimits.
Since any set is, canonically, the colimit of the filtering diagram of its finite subsets, a functor $Sets \rightarrow Sets$ that commutes with filtering colimits can be equivalently described as a functor $FSets \rightarrow Sets$ where $FSets$ is the category of finite sets. Furthermore, Lemma \ref{2014.06.30.l1} can be used to show that finitary monads on $Sets$ can be defined as collections of data of the form: \begin{enumerate} \item for every finite set $X$ a set $R(X)$, \item for every finite set $X$ a function $\eta_X: X \rightarrow R(X)$, \item for every finite sets $X$, $X'$ and a function $f:X\rightarrow R(X')$, a function $ bind(f):R(X)\rightarrow R(X')$ \end{enumerate} which satisfy the conditions: \begin{enumerate} \item for a finite set $X$, $ bind(\eta_X)=id_{R(X)}$, \item for a function $f:X\rightarrow R(X')$, $\eta_X\circ bind(f)=f$, \item for two functions $f:X\rightarrow R(X')$, $g:X'\rightarrow R(X'')$, $ bind(f\circ bind(g))= bind(f)\circ bind(g)$. \end{enumerate} This description shows that for any monad $R$ the restriction of $R$ to a functor $R^{fin}:FSets\rightarrow Sets$ is a finitary monad. Similar observations apply to left $R$-modules. The constructions of this paper, while done for a general pair $(R,LM)$, only depend on the corresponding finitary pair $(R^{fin},LM^{fin})$. \end{remark} \begin{remark}\rm The correspondence $R\mapsto C(R)$ defines an equivalence between the type of the finitary monads on $Sets$ and the type of the pre-category structures on ${\bf N\rm}$ that extend the pre-category structure of finite sets and where the addition remains the coproduct. \end{remark} \begin{remark}\rm A finitary sub-monad of $R$ is the same as a sub-pre-category in $C(R)$ which contains all objects. Intersection of two sub-monads is a sub-monad which allows one to speak of sub-monads generated by a set of elements.
\end{remark} \subsection{The C-system $CC(R,LM)$.} Let $R$ be a monad on $Sets$ and $LM$ a left module over $R$ with values in $Sets$. Let $CC(R,LM)$ be the pre-category whose set of objects is $Ob(CC(R,LM))=\amalg_{n\ge 0} Ob_n$ where $$Ob_n=LM(\emptyset)\times\dots\times LM(\{1,\dots,n-1\})$$ and the set of morphisms is $$Mor(CC(R,LM))=\coprod_{m,n\ge 0} Ob_m\times Ob_n\times R(\{1,\dots,m\})^n$$ with the obvious domain and codomain maps. The composition of morphisms is defined in the same way as in $C(R)$ such that the mapping $Ob(CC(R,LM))\rightarrow {\bf N\rm}$ which sends all elements of $Ob_n$ to $n$, is a functor from $CC(R,LM)$ to $C(R)$. The associativity of compositions follows immediately from the associativity of compositions in $R-cor$. Note that if $LM(\emptyset)=\emptyset$ then $CC(R,LM)=\emptyset$ and otherwise the functor $CC(R,LM)\rightarrow C(R)$ is an equivalence, so that in the second case $C(R)$ and $CC(R,LM)$ are indistinguishable as categories. However, as pre-categories they are quite different unless $LM=(X\mapsto pt)$ in which case the functor $CC(R,LM)\rightarrow C(R)$ is an isomorphism. The pre-category $CC(R,LM)$ is given the structure of a C-system as follows. The final object is the only element of $Ob_0$, the map $ft$ is defined by the rule $$ft(T_1,\dots,T_n)=(T_1,\dots,T_{n-1}).$$ The canonical pull-back square defined by an object $(T_1,\dots,T_{n+1})$ and a morphism $$(f_1,\dots,f_{n}):(R_1,\dots,R_m)\rightarrow (T_1,\dots,T_{n})$$ is of the form: \begin{eq} \label{2009.11.05.oldeq1} \begin{CD} (R_1,\dots,R_m,T_{n+1}(f_1/1,\dots, f_{n}/n)) @>(f_1,\dots,f_{n},m+1)>> (T_1,\dots,T_{n+1})\\ @V(1,\dots,m)VV @VV(1,\dots,n)V\\ (R_1,\dots,R_m) @>(f_1,\dots,f_{n})>> (T_1,\dots,T_{n}) \end{CD} \end{eq} \begin{proposition} \llabel{2009.10.01.prop2} With the structure defined above $CC(R,LM)$ is a C-system. \end{proposition} \begin{proof} Straightforward. 
\end{proof} \begin{remark}\rm There is another construction of a pre-category from $(R,LM)$ which takes as an additional parameter a set $Var$ which is called the set of variables. Let $F_n(Var)$ be the set of sequences of length $n$ of pair-wise distinct elements of $Var$. Define the pre-category $CC(R,LM,Var)$ as follows. The set of objects of $CC(R,LM,Var)$ is $$Ob(CC(R,LM,Var))= \amalg_{n\ge 0} \amalg_{(x_1,\dots,x_n)\in F_n(Var)} LM(\emptyset)\times\dots\times LM(\{x_1,\dots,x_{n-1}\})$$ For compatibility with the traditional type theory we will write the elements of $Ob(CC(R,LM,Var))$ as sequences of the form $x_1:E_1,\dots,x_n:E_n$. The set of morphisms is given by $$Hom_{CC(R,LM,Var)}((x_1:E_1,\dots,x_m:E_m),(y_1:T_1,\dots,y_n:T_n))=R(\{x_1,\dots,x_m\})^n$$ The composition is defined in such a way that the projection $$(x_1:E_1,\dots,x_n:E_n)\mapsto (E_1,E_2(1/x_1),\dots,E_n(1/x_1,\dots,n-1/x_{n-1}))$$ is a functor from $CC(R,LM,Var)$ to $CC(R,LM)$. This functor is clearly an equivalence of categories but not an isomorphism of pre-categories. There is an obvious final object and an obvious map $ft$ on $CC(R,LM,Var)$. There is however a real problem in making it into a C-system which is due to the following. Consider an object $(y_1:T_1,\dots,y_{n+1}:T_{n+1})$ and a morphism $(f_1,\dots,f_n):(x_1:R_1,\dots,x_m:R_m)\rightarrow (y_1:T_1,\dots,y_{n}:T_{n})$. In order for the functor to $CC(R,LM)$ to be a C-system morphism the canonical square built on this pair should have the form $$ \begin{CD} (x_1:R_1,\dots,x_m:R_m,x_{m+1}:T_{n+1}(f_1/1,\dots, f_{n}/n)) @>(f_1,\dots,f_{n},x_{m+1})>> (y_1:T_1,\dots,y_{n+1}:T_{n+1})\\ @VVV @VVV\\ (x_1:R_1,\dots,x_m:R_m) @>(f_1,\dots,f_{n})>> (y_1:T_1,\dots,y_n:T_{n}) \end{CD} $$ where $x_{m+1}$ is an element of $Var$ which is distinct from each of the elements $x_1,\dots,x_m$.
Moreover, we should choose $x_{m+1}$ in such a way that the resulting construction satisfies the C-system axioms for $(f_1,\dots,f_{n})=Id$ and for the compositions $(g_1,\dots,g_m)\circ (f_1,\dots,f_n)$. One can easily see that no such choice is possible for a finite set $Var$. At the moment it is not clear to me whether or not it is possible for an infinite $Var$. \end{remark} Recall from \cite{Csubsystems} that for a C-system $CC$ one defines $\widetilde{Ob}(CC)$ as the subset of $Mor(CC)$ which consists of morphisms $s$ of the form $ft(X)\rightarrow X$ such that $l(X)>0$ and $s\circ p_X=Id_{ft(X)}$. \begin{lemma} \llabel{2014.06.30.l2} One has: $$\widetilde{Ob}(CC(R,LM))\cong \coprod_{n\ge 0} LM(\emptyset)\times\dots\times LM(\{1,\dots,n\})\times R(\{1,\dots,n\})$$ \end{lemma} \begin{proof} An element of $\widetilde{Ob}(CC(R,LM))$ is a section $s$ of the canonical morphism $p_{\Gamma}:\Gamma\rightarrow ft(\Gamma)$. It follows immediately from the definition of $CC(R,LM)$ that for $\Gamma=(E_1,\dots,E_{n+1})$, a morphism $(f_1,\dots,f_{n+1})\in R(\{1,\dots,n\})^{n+1}$ from $ft(\Gamma)$ to $\Gamma$ is a section of $p_{\Gamma}$ if and only if $f_i=i$ for $i=1,\dots,n$. Therefore, any such section is determined by its last component $f_{n+1}$ and mapping $((E_1,\dots,E_n), (E_1,\dots,E_{n+1}), (f_1,\dots,f_{n+1}))$ to $(E_1,\dots,E_n,E_{n+1},f_{n+1})$ we get a bijection \begin{eq} \llabel{2009.10.15.eq2} \widetilde{Ob}(CC(R,LM))\cong \coprod_{n\ge 0} LM(\emptyset)\times\dots\times LM(\{1,\dots,n\})\times R(\{1,\dots,n\}) \end{eq} \end{proof} Using the notations of type theory we can write elements of $Ob(CC(R,LM))$ as $$\Gamma=(T_1,\dots,T_n\rhd)$$ where $T_i\in LM(\{1,\dots,i-1\})$ and the elements of $\widetilde{Ob}(CC(R,LM))$ as $${\cal J} = (T_1,\dots,T_n\vdash t:T)$$ where $T_i\in LM(\{1,\dots,i-1\})$, $T\in LM(\{1,\dots,n\})$ and $t\in R(\{1,\dots,n\})$.
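As a concrete toy model of this notation one can take the monad $R(X)=X\amalg\{*\}$, which also appears among the examples below. The Python sketch that follows (all representation choices are assumptions of the sketch, not the paper's construction) encodes morphisms of $C(R)$ as tuples, composes them by substitution, and checks the characterization of sections from Lemma \ref{2014.06.30.l2}: a section of $p_\Gamma$ is exactly a morphism $(f_1,\dots,f_{n+1})$ with $f_i=i$ for $i\le n$, hence determined by its last component.

```python
# A minimal concrete model (illustration only): the monad R(X) = X + {*}.
# An element of R({1,...,m}) is an integer 1..m or the extra constant "*";
# a morphism m -> n of C(R) is an n-tuple (f_1,...,f_n), f_j in R({1..m}).

STAR = "*"

def subst(r, f):
    """bind: substitute f_i for the variable i in r."""
    return r if r == STAR else f[r - 1]

def compose(f, g):
    """Composition f o g (f first, as in the paper): f: k -> m and
    g: m -> n give a morphism k -> n."""
    return tuple(subst(g_j, f) for g_j in g)

def is_section(f, n):
    """Sections of p_Gamma: (f_1,...,f_{n+1}) with f_i = i for i <= n,
    so each section is determined by its last component f_{n+1}."""
    return len(f) == n + 1 and all(f[i] == i + 1 for i in range(n))

n = 2
s = (1, 2, STAR)          # a section; its last component is the term t
p = (1, 2)                # the projection p_Gamma : n+1 -> n
assert is_section(s, n)
assert compose(s, p) == (1, 2)     # s o p_Gamma = Id on ft(Gamma)
```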
In this notation the operations $T,\widetilde{T},S,\widetilde{S}$ and $\delta$ which were introduced in \cite{Csubsystems} take the form: \begin{enumerate} \item $T((\Gamma,T_{n+1}\rhd),(\Gamma,\Delta\rhd))=(\Gamma,T_{n+1},t_{n+1}(\Delta)\rhd)$ when $l(\Gamma)=n$, \item $\widetilde{T}((\Gamma,T_{n+1}\rhd),(\Gamma,\Delta\vdash r:R))=(\Gamma,T_{n+1},t_{n+1}(\Delta)\vdash t_{n+1}(r:R))$ when $l(\Gamma)=n$, \item $S((\Gamma\vdash s:S),(\Gamma,S,\Delta\rhd))=(\Gamma,s_{n+1}(\Delta[s/n+1])\rhd)$ when $l(\Gamma)=n$, \item $\widetilde{S}((\Gamma\vdash s:S),(\Gamma,S,\Delta\vdash r:R))=(\Gamma,s_{n+1}(\Delta[s/n+1])\vdash s_{n+1}((r:R)[s/n+1]))$ when $l(\Gamma)=n$, \item $\delta(\Gamma,T\rhd)=(\Gamma,T\vdash (n+1):T)$ when $l(\Gamma)=n$. \end{enumerate} \begin{remark}\rm \llabel{2014.09.28.rm1} One can easily equip the function $(R,LM)\mapsto CC(R,LM)$ with the structure of a functor from the ``large module category'' of \cite{HM2008} to the category of C-systems and their homomorphisms. \end{remark} \subsection{C-subsystems of $CC(R,LM)$.} Let $CC$ be a C-subsystem of $CC(R,LM)$. By \cite{Csubsystems} $CC$ is determined by the subsets $C=Ob(CC)$ and $\widetilde{C}=\widetilde{Ob}(CC)$ in $Ob(CC(R,LM))$ and $\widetilde{Ob}(CC(R,LM))$. For $\Gamma=(E_1,\dots,E_n)$ we write $(\Gamma\rhd_{C})$ if $(E_1,\dots,E_n)$ is in $C$ and $(\Gamma\vdash_{\widetilde{C}} t:T)$ if $(E_1,\dots,E_n,T,t)$ is in $\widetilde{C}$. The following result is an immediate corollary of \cite[Proposition 4.3]{Csubsystems} together with the description of the operations $T,\widetilde{T},S,\widetilde{S}$ and $\delta$ for $CC(R,LM)$ which is given above. \begin{proposition} \llabel{2009.10.16.prop3} Let $(R,LM)$ be a monad on $Sets$ and a left module over it with values in $Sets$.
A pair of subsets $$C\subset \coprod_{n\ge 0} \prod_{i=0}^{n-1} LM(\{1,\dots,i\})$$ $$\widetilde{C}\subset \coprod_{n\ge 0} (\prod_{i=0}^{n} LM(\{1,\dots,i\}))\times R(\{1,\dots,n\})$$ corresponds to a C-subsystem $CC$ of $CC(R,LM)$ if and only if the following conditions hold: \begin{enumerate} \item $(\rhd_{C})$ \item $(\Gamma, T\rhd_{C})\Rightarrow (\Gamma\rhd_{C})$ \item $(\Gamma\vdash_{\widetilde{C}} r:R)\Rightarrow (\Gamma,R\rhd_{C})$ \item $(\Gamma, T\rhd_{C})\wedge(\Gamma,\Delta\vdash_{\widetilde{C}} r:R)\Rightarrow (\Gamma, T, t_{n+1}(\Delta)\vdash_{\widetilde{C}} t_{n+1} (r: R))$ where $n=l(\Gamma)$, \item $(\Gamma\vdash_{\widetilde{C}} s:S)\wedge (\Gamma,S,\Delta\vdash_{\widetilde{C}} r:R)\Rightarrow (\Gamma, s_{n+1}(\Delta[s/n+1]) \vdash_{\widetilde{C}} s_{n+1} (( r : R ) [s/n+1]))$ where $n=l(\Gamma)$, \item $(\Gamma,T\rhd_{C})\Rightarrow (\Gamma,T\vdash_{\widetilde{C}} n+1:T)$ where $n=l(\Gamma)$. \end{enumerate} \end{proposition} Note that conditions (4) and (5) together with condition (6) and condition (3) imply the following \begin{description} \item[{\em 4a}] $(\Gamma, T\rhd_{C})\wedge (\Gamma,\Delta\rhd_{C})\Rightarrow (\Gamma, T, t_{n+1}(\Delta)\rhd_{C})$ where $n=l(\Gamma)$, \item[{\em 5a}] $(\Gamma\vdash_{\widetilde{C}} s:S)\wedge (\Gamma,S,\Delta\rhd_{C})\Rightarrow (\Gamma, s_{n+1}(\Delta[s/n+1])\rhd_{C})$ where $n=l(\Gamma)$. \end{description} Note also that modulo condition (2), condition (1) is equivalent to the condition that $C\ne\emptyset$.
\begin{remark}\rm\llabel{2010.08.07.rem1} If one re-writes the conditions of Proposition \ref{2009.10.16.prop3} in the form, more familiar in type theory, where the variables introduced in the context are named rather than directly numbered, one arrives at the following rules: \begin{center} $$\frac{}{\rhd_{C}}\,\,\,\,\,\,\,\,\,\, \frac{x_1:T_1,\dots,x_n:T_n\rhd_{C}}{x_1:T_1,\dots,x_{n-1}:T_{n-1}\rhd_{C}} \,\,\,\,\,\,\,\,\,\, \frac{x_1:T_1,\dots,x_n:T_n\vdash_{\widetilde{C}} t:T}{x_1:T_1,\dots,x_n:T_n, y:T\rhd_{C}}$$ $$\frac{x_1:T_1,\dots,x_n:T_n, y:T\rhd_{C}\,\,\,\,\,\,\,x_1:T_1,\dots,x_n:T_n,x_{n+1}:T_{n+1},\dots, x_m:T_m\vdash_{\widetilde{C}} r:R}{x_1:T_1,\dots,x_n:T_n, y:T, x_{n+1}:T_{n+1},\dots,x_m:T_m\vdash_{\widetilde{C}} r:R}$$ $$\frac{x_1:T_1,\dots,x_n:T_n\vdash_{\widetilde{C}} s:S\,\,\,\,\,\,\,x_1:T_1,\dots,x_n:T_n,y:S,x_{n+1}:T_{n+1},\dots,x_m:T_m\vdash_{\widetilde{C}} r:R} {x_1:T_1,\dots,x_n:T_n,x_{n+1}:T_{n+1}[s/y],\dots,x_m:T_m[s/y]\vdash_{\widetilde{C}} (r:R)[s/y]}$$ $$\frac{x_1:E_1,\dots,x_n:E_n\rhd_{C}}{x_1:E_1,\dots,x_n:E_n\vdash_{\widetilde{C}} x_n:E_n}$$ \end{center} which are similar (and probably equivalent) to the ``basic rules of DTT'' given in \cite[p.585]{Jacobs1}. The advantage of the rules given here is that they are precisely the ones which are necessary and sufficient for a given collection of contexts and judgements to define a C-system. \end{remark} \begin{lemma} \llabel{2009.11.05.l1} Let $CC$ be as above and let $(E_1,\dots, E_m), (T_1,\dots,T_n)\in Ob(CC)$ and $(f_1,\dots,f_n)\in R(\{1,\dots,m\})^n$. Then $$(f_1,\dots,f_n)\in Hom_{CC}((E_1,\dots, E_m), (T_1,\dots,T_n))$$ if and only if $(f_1,\dots,f_{n-1})\in Hom_{CC}((E_1,\dots, E_m), (T_1,\dots,T_{n-1}))$ and $$E_1,\dots,E_m\vdash_{\widetilde{C}} f_n : T_{n}(f_1/1,\dots,f_{n-1}/n-1)$$ \end{lemma} \begin{proof} Straightforward using the fact that the canonical pull-back squares in $CC(R,LM)$ are given by (\ref{2009.11.05.oldeq1}).
\end{proof} \begin{example}\rm The category $CC(R,R)$ for the identity monad is empty. For the monad of the form $R(X)=pt$ the C-system $CC(R,R)$ has only two subsystems: itself and the trivial one for which $C=\{pt\}$. The first non-trivial example is the monad $R(X)=X\amalg \{*\}$. We conjecture that in this case the set of all subsystems of $CC(R,R)$ is {\em uncountable}. One can probably show this as follows. Let $\epsilon:{\bf N\rm}\rightarrow\{0,1\}$ be a sequence of $0$'s and $1$'s. Consider the C-subsystem $CC_{\epsilon}$ of $CC(R,R)$ which is generated by the set of elements of the form $(*, 1, 2, \dots, n\rhd)\in Ob(CC(R,R))$ for all $n\ge 0$ and elements $(*,1,\dots,n+1\vdash n+2:*)\in \widetilde{Ob}(CC(R,R))$ for $n$ such that $\epsilon(n)=1$. It should be possible to show that $CC_{\epsilon}\ne CC_{\epsilon'}$ for $\epsilon\ne \epsilon'$ which would imply the conjecture. \end{example} \subsection{Operations $\sigma$ and $\widetilde{\sigma}$ on $CC(R,LM)$.} C-systems of the form $CC(R,LM)$ have an important additional structure which will play a role in the next section. This structure is given by two operations: \begin{enumerate} \item for $\Gamma=(T_1,\dots,T_n,\dots,T_{n+i})$ and $\Gamma'=(T_1',\dots,T'_{n})$ we set $$\sigma(\Gamma,\Gamma')=(T_1',\dots,T'_n,T_{n+1},\dots,T_{n+i})$$ This gives us an operation with values in $Ob$ defined on the subset of $Ob\times Ob$ which consists of pairs $(\Gamma,\Gamma')$ such that $l(\Gamma)>l(\Gamma')$, \item for ${\cal J}=(T_1,\dots,T_{n-1},\dots,T_{n-1+i}\vdash t:T_{n+i})$, $\Gamma'=(T_1',\dots,T_n')$ we set $$\widetilde{\sigma}({\cal J},\Gamma')= \left\{ \begin{array}{ll} (T_1',\dots,T_n',T_{n+1},\dots,T_{n+i-1}\vdash t:T_{n+i})&\mbox{\rm for $i>0$}\\ (T_1',\dots,T_{n-1}'\vdash t:T_n')&\mbox{\rm for $i=0$} \end{array} \right.
$$ This gives us an operation with values in $\widetilde{Ob}$ defined on the subset of $\widetilde{Ob}\times Ob$ which consists of pairs $({\cal J},\Gamma')$ such that $l(\Gamma')\le l(\partial({\cal J}))$. \end{enumerate} \subsection{Regular sub-quotients of $CC(R,LM)$.} Let $(R,LM)$ be as above and $$Ceq\subset \coprod_{n\ge 0} (\prod_{i=0}^{n-1} LM(\{1,\dots,i\}))\times LM(\{1,\dots,n\})^2$$ $$\widetilde{Ceq}\subset \coprod_{n\ge 0} (\prod_{i=0}^{n} LM(\{1,\dots,i\}))\times R(\{1,\dots,n\})^2$$ be two subsets. For $\Gamma=(T_1,\dots,T_n)\in Ob(CC(R,LM))$ and $S_1,S_2\in LM(\{1,\dots,n\})$ we write $(\Gamma\vdash_{Ceq} S_1=S_2)$ to signify that $(T_1,\dots,T_n,S_1,S_2)\in Ceq$. Similarly for $S\in LM(\{1,\dots,n\})$ and $o,o'\in R(\{1,\dots,n\})$ we write $(\Gamma\vdash_{\widetilde{Ceq}} o=o':S)$ to signify that $(T_1,\dots,T_n,S,o,o')\in \widetilde{Ceq}$. When no confusion is possible we will omit the subscripts $Ceq$ and $\widetilde{Ceq}$ at $\vdash$. Similarly we will write $\rhd$ instead of $\rhd_C$ and $\vdash$ instead of $\vdash_{\widetilde{C}}$ if the subsets $C$ and $\widetilde{C}$ are unambiguously determined by the context.
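Before turning to the congruence conditions, note that the operations $\sigma$ and $\widetilde{\sigma}$ introduced above admit a direct reading as tuple surgery. The Python sketch below (the representation of contexts as tuples of opaque labels is an assumption of the sketch) replaces an initial segment of a context, treating a judgement $(\Gamma\vdash t:T)$ through its underlying sequence $(\Gamma,T)$:

```python
# One possible reading of sigma and sigma-tilde as tuple surgery: a context
# is a tuple of types, a judgement is a triple (context, t, T).  The
# concrete representation here is hypothetical.

def sigma(gamma, gamma_p):
    """Replace the first l(gamma_p) types of gamma by those of gamma_p."""
    n = len(gamma_p)
    assert len(gamma) > n          # sigma is defined only in this case
    return gamma_p + gamma[n:]

def sigma_tilde(judgement, gamma_p):
    """Replace the first l(gamma_p) entries of the sequence (context, T);
    when gamma_p covers the whole sequence, the type T is replaced too."""
    ctx, t, T = judgement
    seq = ctx + (T,)
    n = len(gamma_p)
    assert n <= len(seq)           # gamma_p may not exceed the sequence
    new = gamma_p + seq[n:]
    return (new[:-1], t, new[-1])

# i > 0: only the context is affected.
assert sigma_tilde((("A", "B"), "t", "C"), ("A1",)) == (("A1", "B"), "t", "C")
# i = 0: the type is replaced as well.
assert sigma_tilde((("A", "B"), "t", "C"), ("A1", "B1", "C1")) == (("A1", "B1"), "t", "C1")
```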
\begin{definition} \llabel{simandsimeq} Given subsets $C$, $\widetilde{C}$, $Ceq$, $\widetilde{Ceq}$ as above define relations $\sim$ on $C$ and $\simeq$ on $\widetilde{C}$ as follows: \begin{enumerate} \item for $\Gamma=(T_1,\dots,T_n)$, $\Gamma'=(T_1',\dots,T_n')$ in $C$ we set $\Gamma\sim\Gamma'$ iff $ft(\Gamma)\sim ft(\Gamma')$ and $$T_1,\dots,T_{n-1}\vdash T_n=T_n',$$ \item for $(\Gamma\vdash o:S)$, $(\Gamma'\vdash o':S')$ in $\widetilde{C}$ we set $(\Gamma\vdash o:S)\simeq(\Gamma'\vdash o':S')$ iff $(\Gamma,S)\sim(\Gamma',S')$ and $$(\Gamma\vdash o=o':S).$$ \end{enumerate} \end{definition} \begin{proposition} \llabel{2014.07.10.prop1} Let $C$, $\widetilde{C}$, $Ceq$, $\widetilde{Ceq}$ be as above and suppose in addition that one has: \begin{enumerate} \item $C$ and $\widetilde{C}$ satisfy conditions (1)-(6) of Proposition \ref{2009.10.16.prop3} which are referred to below as conditions (1.1)-(1.6) of the present proposition, \item $$ \begin{array}{l} (a){\,\,\,\,\,\,\,}(\Gamma\vdash T=T'){\Rightarrow} (\Gamma,T\rhd)\\ (b){\,\,\,\,\,\,\,}(\Gamma,T\rhd){\Rightarrow} (\Gamma\vdash T=T)\\ (c){\,\,\,\,\,\,\,}(\Gamma\vdash T=T'){\Rightarrow}(\Gamma\vdash T'=T)\\ (d){\,\,\,\,\,\,\,}(\Gamma\vdash T=T')\wedge(\Gamma\vdash T'=T''){\Rightarrow}(\Gamma\vdash T=T'') \end{array} $$ \item $$ \begin{array}{l} (a){\,\,\,\,\,\,\,}(\Gamma\vdash o=o':T){\Rightarrow} (\Gamma\vdash o:T)\\ (b){\,\,\,\,\,\,\,}(\Gamma\vdash o:T){\Rightarrow} (\Gamma\vdash o=o:T)\\ (c){\,\,\,\,\,\,\,}(\Gamma\vdash o=o':T){\Rightarrow}(\Gamma\vdash o'=o:T)\\ (d){\,\,\,\,\,\,\,} (\Gamma\vdash o=o':T)\wedge(\Gamma\vdash o'=o'':T){\Rightarrow}(\Gamma\vdash o=o'':T) \end{array} $$ \item $$ \begin{array}{l} (a){\,\,\,\,\,\,\,} (\Gamma_1\vdash T=T')\wedge(\Gamma_1,T,\Gamma_2\vdash S=S'){\Rightarrow}(\Gamma_1,T',\Gamma_2\vdash S=S')\\ (b){\,\,\,\,\,\,\,} (\Gamma_1\vdash T=T')\wedge(\Gamma_1,T,\Gamma_2\vdash o=o':S){\Rightarrow}(\Gamma_1,T',\Gamma_2\vdash o=o':S)\\ (c){\,\,\,\,\,\,\,} (\Gamma\vdash S=S')\wedge(\Gamma\vdash o=o':S){\Rightarrow}(\Gamma\vdash o=o':S') \end{array} $$ \item $$ \begin{array}{ll} (a){\,\,\,\,\,\,\,} (\Gamma_1,T\rhd)\wedge(\Gamma_1,\Gamma_2\vdash S=S'){\Rightarrow}(\Gamma_1,T,t_{i+1}\Gamma_2\vdash t_{i+1}S=t_{i+1}S')& i=l(\Gamma_1)\\ (b){\,\,\,\,\,\,\,} (\Gamma_1,T\rhd)\wedge(\Gamma_1,\Gamma_2\vdash o=o':S){\Rightarrow}(\Gamma_1,T,t_{i+1}\Gamma_2\vdash t_{i+1}o=t_{i+1}o':t_{i+1}S)& i=l(\Gamma_1) \end{array} $$ \item $$ \begin{array}{ll} (a){\,\,\,\,\,\,\,} (\Gamma_1,T,\Gamma_2\vdash S=S')\wedge(\Gamma_1\vdash r:T){\Rightarrow}&\\ (\Gamma_1,s_{i+1}(\Gamma_2[r/i+1])\vdash s_{i+1}(S[r/i+1])=s_{i+1}(S'[r/i+1]))&i=l(\Gamma_1)\\ (b){\,\,\,\,\,\,\,} (\Gamma_1,T,\Gamma_2\vdash o=o':S)\wedge(\Gamma_1\vdash r:T){\Rightarrow}&\\ (\Gamma_1,s_{i+1}(\Gamma_2[r/i+1])\vdash s_{i+1}(o[r/i+1])=s_{i+1}(o'[r/i+1]):s_{i+1}(S[r/i+1]))&i=l(\Gamma_1) \end{array} $$ \item $$ \begin{array}{ll} (a){\,\,\,\,\,\,\,} (\Gamma_1,T,\Gamma_2,S\rhd)\wedge(\Gamma_1\vdash r=r':T){\Rightarrow}&\\ (\Gamma_1,s_{i+1}(\Gamma_2[r/i+1])\vdash s_{i+1}(S[r/i+1])=s_{i+1}(S[r'/i+1]))&i=l(\Gamma_1)\\ (b){\,\,\,\,\,\,\,} (\Gamma_1,T,\Gamma_2\vdash o:S)\wedge(\Gamma_1\vdash r=r':T){\Rightarrow}&\\ (\Gamma_1,s_{i+1}(\Gamma_2[r/i+1])\vdash s_{i+1}(o[r/i+1])=s_{i+1}(o[r'/i+1]):s_{i+1}(S[r/i+1]))&i=l(\Gamma_1) \end{array} $$ \end{enumerate} Then the relations $\sim$ and $\simeq$ are equivalence relations on $C$ and $\widetilde{C}$ which satisfy the conditions of \cite[Proposition 5.4]{Csubsystems} and therefore they correspond to a regular congruence relation on the C-system defined by $(C,\widetilde{C})$. \end{proposition} \begin{lemma} \llabel{iseqrelsiml1} One has: \begin{enumerate} \item If conditions (1.2), (4a) of the proposition hold then $(\Gamma\vdash S=S')\wedge(\Gamma\sim\Gamma'){\Rightarrow} (\Gamma'\vdash S=S')$. \item If conditions (1.2), (1.3), (4a), (4b), (4c) hold then $(\Gamma\vdash o=o':S)\wedge((\Gamma,S)\sim(\Gamma',S')){\Rightarrow} (\Gamma'\vdash o=o':S')$.
\end{enumerate} \end{lemma} \begin{proof} By induction on $n=l(\Gamma)=l(\Gamma')$. (1) For $n=0$ the assertion is obvious. Therefore by induction we may assume that $(\Gamma\vdash S=S')\wedge(\Gamma\sim\Gamma'){\Rightarrow} (\Gamma'\vdash S=S')$ for all contexts of length $<n$ and all appropriate $\Gamma$, $\Gamma'$, $S$ and $S'$ and that $(T_1,\dots,T_n\vdash S=S')\wedge(T_1,\dots,T_n\sim T'_1,\dots,T'_n)$ holds and we need to show that $(T'_1,\dots,T'_n\vdash S=S')$ holds. Let us show by induction on $j$ that $(T'_1,\dots,T'_j,T_{j+1},\dots,T_n\vdash S=S')$ for all $j=0,\dots,n$. For $j=0$ it is a part of our assumptions. By induction we may assume that $(T'_1,\dots,T'_j,T_{j+1},\dots,T_n\vdash S=S')$. By definition of $\sim$ we have $(T_1,\dots,T_j\vdash T_{j+1}=T'_{j+1})$. By the inductive assumption we have $(T'_1,\dots,T'_j\vdash T_{j+1}=T'_{j+1})$. Applying (4a) with $\Gamma_1=(T'_1,\dots,T'_j)$, $T=T_{j+1}$, $T'=T'_{j+1}$ and $\Gamma_2=(T_{j+2},\dots,T_n)$ we conclude that $(T'_1,\dots,T'_{j+1},T_{j+2},\dots,T_n\vdash S=S')$. (2) By the first part of the lemma we have $\Gamma'\vdash S=S'$. Therefore by (4c) it is sufficient to show that $(\Gamma\vdash o=o':S)\wedge(\Gamma\sim\Gamma'){\Rightarrow} (\Gamma'\vdash o=o':S)$. The proof of this fact is similar to the proof of the first part of the lemma using (4b) instead of (4a). \end{proof} \begin{lemma} \llabel{iseqrelsim} One has: \begin{enumerate} \item Assume that conditions (1.2), (2b), (2c), (2d) and (4a) hold. Then $\sim$ is an equivalence relation. \item Assume that the conditions of the previous part of the lemma as well as conditions (1.3), (3b), (3c), (3d), (4b) and (4c) hold. Then $\simeq$ is an equivalence relation. \end{enumerate} \end{lemma} \begin{proof} By induction on $n=l(\Gamma)=l(\Gamma')$. (1) Reflexivity follows directly from (1.2) and (2b). For $n=0$ the symmetry is obvious. Let $(\Gamma,T)\sim(\Gamma',T')$. By induction we may assume that $\Gamma'\sim\Gamma$.
By Lemma \ref{iseqrelsiml1}(1) we have $(\Gamma'\vdash T=T')$ and by (2c) we have $(\Gamma'\vdash T'=T)$. We conclude that $(\Gamma',T')\sim(\Gamma,T)$. The proof of transitivity is by a similar induction. (2) Reflexivity follows directly from reflexivity of $\sim$, (1.3) and (3b). Symmetry and transitivity are also easy using Lemma \ref{iseqrelsiml1}. \end{proof} From this point on we assume that all conditions of Proposition \ref{2014.07.10.prop1} hold. Let $C'=C/\sim$ and $\widetilde{C}'=\widetilde{C}/\simeq$. It follows immediately from our definitions that the functions $ft:C\rightarrow C$ and $\partial:\widetilde{C}\rightarrow C$ define functions $ft':C'\rightarrow C'$ and $\partial':\widetilde{C}'\rightarrow C'$. \begin{lemma} \llabel{surjl1} The conditions (3) and (4) of \cite[Proposition 5.4]{Csubsystems} hold for $\sim$ and $\simeq$. \end{lemma} \begin{proof} 1. We need to show that for $(\Gamma,T\rhd)$ and $\Gamma\sim\Gamma'$ there exists $(\Gamma',T'\rhd)$ such that $(\Gamma,T)\sim(\Gamma',T')$. It is sufficient to take $T'=T$. Indeed by (2b) we have $\Gamma\vdash T=T$, by Lemma \ref{iseqrelsiml1}(1) we conclude that $\Gamma'\vdash T=T$ and by (1a) that $\Gamma',T\rhd$. 2. We need to show that for $(\Gamma\vdash o:S)$ and $(\Gamma,S)\sim(\Gamma',S')$ there exists $(\Gamma'\vdash o':S')$ such that $(\Gamma'\vdash o':S')\simeq(\Gamma\vdash o:S)$. It is sufficient to take $o'=o$. Indeed, by (3b) we have $(\Gamma\vdash o=o:S)$, by Lemma \ref{iseqrelsiml1}(2) we conclude that $(\Gamma'\vdash o=o:S')$ and by (2a) that $(\Gamma'\vdash o:S')$. \end{proof} \begin{lemma} \llabel{TSetc} The equivalence relations $\sim$ and $\simeq$ are compatible with the operations $T,\widetilde{T},S,\widetilde{S}$ and $\delta$.
\end{lemma} \begin{proof} (1) Given $(\Gamma_1,T\rhd)\sim(\Gamma_1',T'\rhd)$ and $(\Gamma_1,\Gamma_2\rhd)\sim(\Gamma_1',\Gamma_2'\rhd)$ we have to show that $$(\Gamma_1,T,t_{n+1}\Gamma_2)\sim (\Gamma'_1,T',t_{n+1}\Gamma'_2)$$ where $n=l(\Gamma_1)=l(\Gamma_1')$. Proceed by induction on $l(\Gamma_2)$. For $l(\Gamma_2)=0$ the assertion is obvious. Let $(\Gamma_1,T\rhd)\sim(\Gamma_1',T'\rhd)$ and $(\Gamma_1,\Gamma_2,S\rhd)\sim(\Gamma_1',\Gamma_2',S'\rhd)$. The latter condition is equivalent to $(\Gamma_1,\Gamma_2\rhd)\sim(\Gamma_1',\Gamma_2'\rhd)$ and $(\Gamma_1,\Gamma_2\vdash S=S')$. By the inductive assumption we have $(\Gamma_1,T,t_{n+1}\Gamma_2)\sim (\Gamma'_1,T',t_{n+1}\Gamma'_2)$. By (5a) we conclude that $(\Gamma_1,T,t_{n+1}\Gamma_2\vdash t_{n+1}S=t_{n+1}S')$. Therefore by definition of $\sim$ we have $(\Gamma_1,T,t_{n+1}\Gamma_2,t_{n+1}S)\sim(\Gamma'_1,T',t_{n+1}\Gamma'_2, t_{n+1}S')$. (2) Given $(\Gamma_1,T\rhd)\sim(\Gamma_1',T'\rhd)$ and $(\Gamma_1,\Gamma_2\vdash o:S)\simeq(\Gamma_1',\Gamma_2'\vdash o':S')$ we have to show that $(\Gamma_1,T,t_{n+1}\Gamma_2\vdash t_{n+1}o:t_{n+1}S)\simeq (\Gamma'_1,T',t_{n+1}\Gamma'_2\vdash t_{n+1}o':t_{n+1}S')$ where $n=l(\Gamma_1)=l(\Gamma_1')$. We have $(\Gamma_1,\Gamma_2,S)\sim(\Gamma_1',\Gamma'_2,S')$ and $(\Gamma_1,\Gamma_2\vdash o=o':S)$. By (5b) we get $(\Gamma_1,T, t_{n+1}\Gamma_2\vdash t_{n+1}o=t_{n+1}o':t_{n+1}S)$. By (1) of this lemma we get $(\Gamma_1,T,t_{n+1}\Gamma_2,t_{n+1}S)\sim(\Gamma'_1,T',t_{n+1}\Gamma'_2,t_{n+1}S')$ and therefore by definition of $\simeq$ we get $(\Gamma_1,T,t_{n+1}\Gamma_2\vdash t_{n+1}o:t_{n+1}S)\simeq (\Gamma'_1,T',t_{n+1}\Gamma'_2\vdash t_{n+1}o':t_{n+1}S')$. (3) Given $(\Gamma_1\vdash r:T)\simeq(\Gamma_1'\vdash r':T')$ and $(\Gamma_1,T,\Gamma_2\rhd)\sim(\Gamma_1',T',\Gamma_2'\rhd)$ we have to show that $$(\Gamma_1,s_{n+1}(\Gamma_2[r/n+1]))\sim(\Gamma'_1,s_{n+1}(\Gamma'_2[r'/n+1]))$$ where $n=l(\Gamma_1)=l(\Gamma_1')$. Proceed by induction on $l(\Gamma_2)$.
For $l(\Gamma_2)=0$ the assertion follows directly from the definitions. Let $(\Gamma_1\vdash r:T)\simeq(\Gamma_1'\vdash r':T')$ and $(\Gamma_1,T,\Gamma_2,S\rhd)\sim(\Gamma_1',T',\Gamma_2',S'\rhd)$. The latter condition is equivalent to $(\Gamma_1,T,\Gamma_2\rhd)\sim(\Gamma_1',T',\Gamma_2'\rhd)$ and $(\Gamma_1,T,\Gamma_2\vdash S=S')$. By the inductive assumption we have $(\Gamma_1,s_{n+1}(\Gamma_2[r/n+1]))\sim(\Gamma'_1,s_{n+1}(\Gamma'_2[r'/n+1]))$. It remains to show that $(\Gamma_1,s_{n+1}(\Gamma_2[r/n+1])\vdash s_{n+1}(S[r/n+1])=s_{n+1}(S'[r'/n+1]))$. By (2d) it is sufficient to show that $(\Gamma_1,s_{n+1}(\Gamma_2[r/n+1])\vdash s_{n+1}(S[r/n+1])=s_{n+1}(S'[r/n+1]))$ and $(\Gamma_1,s_{n+1}(\Gamma_2[r/n+1])\vdash s_{n+1}(S'[r/n+1])=s_{n+1}(S'[r'/n+1]))$. The first relation follows directly from (6a). To prove the second one it is sufficient by (7a) to show that $(\Gamma_1,T,\Gamma_2,S'\rhd)$ which follows from our assumption through (2c) and (2a). (4) Given $(\Gamma_1\vdash r:T)\simeq(\Gamma_1'\vdash r':T')$ and $(\Gamma_1,T,\Gamma_2\vdash o:S)\simeq(\Gamma_1',T',\Gamma_2'\vdash o':S')$ we have to show that $$(\Gamma_1,s_{n+1}(\Gamma_2[r/n+1])\vdash s_{n+1}(o[r/n+1]):s_{n+1}(S[r/n+1]))\simeq$$ $$ (\Gamma'_1,s_{n+1}(\Gamma'_2[r'/n+1])\vdash s_{n+1}(o'[r'/n+1]):s_{n+1}(S'[r'/n+1]))$$ where $n=l(\Gamma_1)=l(\Gamma_1')$, or equivalently that $$(\Gamma_1,s_{n+1}(\Gamma_2[r/n+1]),s_{n+1}(S[r/n+1]))\sim(\Gamma'_1,s_{n+1}(\Gamma'_2[r'/n+1]), s_{n+1}(S'[r'/n+1]))$$ and $(\Gamma_1,s_{n+1}(\Gamma_2[r/n+1])\vdash s_{n+1}(o[r/n+1])=s_{n+1}(o'[r'/n+1]):s_{n+1}(S[r/n+1]))$. The first statement follows from part (3) of the lemma. To prove the second statement it is sufficient by (3d) to show that $(\Gamma_1,s_{n+1}(\Gamma_2[r/n+1])\vdash s_{n+1}(o[r/n+1])=s_{n+1}(o'[r/n+1]):s_{n+1}(S[r/n+1]))$ and $(\Gamma_1,s_{n+1}(\Gamma_2[r/n+1])\vdash s_{n+1}(o'[r/n+1])=s_{n+1}(o'[r'/n+1]):s_{n+1}(S[r/n+1]))$. The first assertion follows directly from (6b).
To prove the second one it is sufficient in view of (7b) to show that $(\Gamma_1,T,\Gamma_2\vdash o':S)$ which follows from conditions (3c) and (3a). (5) Given $(\Gamma,T)\sim(\Gamma',T')$ we need to show that $(\Gamma,T\vdash (n+1):T)\simeq(\Gamma',T'\vdash (n+1):T')$ or equivalently that $(\Gamma,T,T)\sim(\Gamma',T',T')$ and $(\Gamma,T\vdash (n+1)=(n+1):T)$. The second part follows from (3b). To prove the first part we need to show that $(\Gamma,T\vdash T=T')$. This follows from our assumption by (5a). \end{proof} \begin{lemma} \llabel{2014.07.12.l1} Let $C$ be a subset of $Ob(CC(R,LM))$ which is closed under $ft$. Let $\le$ be a transitive relation on $C$ such that: \begin{enumerate} \item $\Gamma\le \Gamma'$ implies $l(\Gamma)=l(\Gamma')$, \item $\Gamma\in C$ and $ft(\Gamma)\le F$ implies $\sigma(\Gamma,F)\in C$ and $\Gamma\le \sigma(\Gamma,F)$. \end{enumerate} Then $\Gamma\in C$ and $ft^i(\Gamma)\le F$ for some $i\ge 1$ implies that $\Gamma\le \sigma(\Gamma,F)$. \end{lemma} \begin{proof} Simple induction on $i$. \end{proof} \begin{lemma} \llabel{2014.07.12.l2} Let $C$ and $\le$ be as in Lemma \ref{2014.07.12.l1}. Then one has: \begin{enumerate} \item $(\Gamma,T)\le (\Gamma,T')$ and $\Gamma\le \Gamma'$ implies that $(\Gamma,T)\le (\Gamma',T')$, \item if $\le$ is $ft$-monotone (i.e. $\Gamma\le \Gamma'$ implies $ft(\Gamma)\le ft(\Gamma')$) and symmetric then $(\Gamma,T)\le (\Gamma',T')$ implies that $(\Gamma,T)\le (\Gamma,T')$. \end{enumerate} \end{lemma} \begin{proof} The first assertion follows from $$(\Gamma,T)\le (\Gamma,T')\le \sigma((\Gamma,T'),\Gamma')=(\Gamma',T')$$ The second assertion follows from $$(\Gamma,T)\le (\Gamma',T')\le \sigma((\Gamma',T'),\Gamma)=(\Gamma,T')$$ where the second $\le$ requires $\Gamma'\le \Gamma$ which follows from $ft$-monotonicity and symmetry.
\end{proof} \begin{lemma} \llabel{2014.07.12.l3} Let $C,\le$ be as in Lemma \ref{2014.07.12.l1}, let $\widetilde{C}$ be a subset of $\widetilde{Ob}(CC(R,LM))$ and $\le'$ a transitive relation on $\widetilde{C}$ such that: \begin{enumerate} \item ${\cal J}\le' {\cal J}'$ implies $\partial({\cal J})\le\partial({\cal J}')$, \item ${\cal J}\in \widetilde{C}$ and $\partial({\cal J})\le F$ implies $\widetilde{\sigma}({\cal J},F)\in \widetilde{C}$ and ${\cal J}\le' \widetilde{\sigma}({\cal J},F)$. \end{enumerate} Then ${\cal J}\in \widetilde{C}$ and $ft^i(\partial({\cal J}))\le F$ for some $i\ge 0$ implies ${\cal J}\le' \widetilde{\sigma}({\cal J},F)$. \end{lemma} \begin{proof} Simple induction on $i$. \end{proof} \begin{lemma} \llabel{2014.07.12.l4} Let $C,\le$ and $\widetilde{C},\le'$ be as in Lemma \ref{2014.07.12.l3}. Then one has: \begin{enumerate} \item $(\Gamma\vdash o:T)\le' (\Gamma\vdash o':T)$ and $(\Gamma,T)\le (\Gamma',T')$ implies that $(\Gamma\vdash o:T)\le' (\Gamma'\vdash o':T')$, \item if $(\le,\le')$ is $\partial$-monotone (i.e. ${\cal J}\le' {\cal J}'$ implies $\partial({\cal J})\le \partial({\cal J}')$) and $\le$ is symmetric then $(\Gamma\vdash o:T)\le' (\Gamma'\vdash o':T')$ implies that $(\Gamma\vdash o:T)\le' (\Gamma\vdash o':T)$. \end{enumerate} \end{lemma} \begin{proof} The first assertion follows from $$(\Gamma\vdash o:T)\le' (\Gamma\vdash o':T)\le' \widetilde{\sigma}((\Gamma\vdash o':T) ,(\Gamma',T'))=(\Gamma'\vdash o':T')$$ The second assertion follows from $$(\Gamma\vdash o:T)\le' (\Gamma'\vdash o':T')\le' \widetilde{\sigma}((\Gamma'\vdash o':T'),(\Gamma,T))=(\Gamma\vdash o':T)$$ where the second $\le'$ requires $\Gamma'\le \Gamma$ which follows from $\partial$-monotonicity of $\le'$ and symmetry of $\le$. \end{proof} \begin{proposition} \llabel{2014.07.10.prop2} Let $(C,\widetilde{C})$ be subsets in $Ob(CC(R,LM))$ and $\widetilde{Ob}(CC(R,LM))$ respectively which correspond to a C-subsystem $CC$ of $CC(R,LM)$.
Then the constructions presented above establish a bijection between pairs of subsets $(Ceq,\widetilde{Ceq})$ which together with $(C,\widetilde{C})$ satisfy the conditions of Proposition \ref{2014.07.10.prop1} and pairs of equivalence relations $(\sim,\simeq)$ on $(C,\widetilde{C})$ such that: \begin{enumerate} \item $(\sim,\simeq)$ corresponds to a regular congruence relation on $CC$ (i.e., satisfies the conditions of \cite[Proposition 5.4]{Csubsystems}), \item $\Gamma\in C$ and $ft(\Gamma)\sim F$ implies $\Gamma\sim \sigma(\Gamma,F)$, \item ${\cal J}\in \widetilde{C}$ and $\partial({\cal J})\sim F$ implies ${\cal J}\simeq \widetilde{\sigma}({\cal J},F)$. \end{enumerate} \end{proposition} \begin{proof} One constructs a pair $(\sim,\simeq)$ from $(Ceq,\widetilde{Ceq})$ as in Definition \ref{simandsimeq}. This pair corresponds to a regular congruence relation by Proposition \ref{2014.07.10.prop1}. Conditions (2),(3) follow from Lemma \ref{iseqrelsiml1}. Let $(\sim,\simeq)$ be equivalence relations satisfying the conditions of the proposition. Define $Ceq$ as the set of sequences $(\Gamma,T,T')$ such that $(\Gamma,T), (\Gamma,T')\in C$ and $(\Gamma,T)\sim (\Gamma,T')$. Define $\widetilde{Ceq}$ as the set of sequences $(\Gamma,T,o,o')$ such that $(\Gamma,T,o),(\Gamma,T,o')\in \widetilde{C}$ and $(\Gamma,T,o)\simeq (\Gamma,T,o')$. Let us show that these subsets satisfy the conditions of Proposition \ref{2014.07.10.prop1}. Conditions (2a)-(2d) and (3a)-(3d) are obvious. Condition (4a) follows from (2) by Lemma \ref{2014.07.12.l1}. Conditions (4b) and (4c) follow from (3) by Lemma \ref{2014.07.12.l3}. Conditions (5a) and (5b) follow from the compatibility of $(\sim,\simeq)$ with $T$ and $\widetilde{T}$. Conditions (6a),(6b),(7a),(7b) follow from the compatibility of $(\sim,\simeq)$ with $S$ and $\widetilde{S}$.
\end{proof} \subsection{Pairs $(R,LM)$ associated with nominal signatures.} \llabel{2014.07.22.sec} The constructions of this paper produce C-systems from a pair $(R,LM)$ where $R$ is a monad on $Sets$ and $LM$ is a left $R$-module with values in $Sets$ together with sets $C$, $\widetilde{C}$, $Ceq$ and $\widetilde{Ceq}$. One class of such pairs is obtained by taking $R$ to be the monad defined by a signature as in \cite[p.228]{HM2007}. For example, the contextual category of the Martin-Lof type theory from 1972, $MLTT72$ defined in \cite{ML72}, is obtained by applying Proposition \ref{2014.07.10.prop1} in the case of the pair $(R,R)$ where $R$ is the monad defined by the signature that corresponds to the nominal signature of Example \ref{2014.08.ex}. The following construction that covers more examples associates a pair $(R,LM)$ to a quadruple $(\Sigma, Term, P, {\bf Type})$ where $\Sigma$ is a nominal signature with one name-sort $Var$ and a set of data-sorts ${\bf D}$, $Term\in {\bf D}$ is a data-sort, $P$ is a family of sets parametrized by ${\bf D}-\{Term\}$, and ${\bf Type}\subset {\bf D}$ is a subset of data-sorts. In most examples either ${\bf D}=\{Term\}$ or ${\bf D}=\{Term,Type\}$, ${\bf Type}=\{Type\}$ and $P=P_{Type}$ is the set of ``type-variables''. The only example which I know of where there are more than two data-sorts is the logic-enriched type theory of \cite{AczelGambino} where ${\bf D}=\{Term, Type, Prop\}$, ${\bf Type}=\{Type\}$, $P_{Type}$ is the set of type variables and $P_{Prop}$ is the set of propositional variables. The construction is as follows. A {\em nominal signature} (see \cite[Section 8.1]{Pitts}) starts with a set of name-sorts $\bf N$ and the set of data-sorts $\bf D$. We will be interested in the case when there is only one name-sort $Var$.
A compound sort $S$ is defined as an expression formed from $Var$, elements of $\bf D$, and the unit sort $1$ using two operations: one sending $S_1$ and $S_2$ to $(S_1,S_2)$ and another one sending $S$ to $Var.S$. For better notation one takes $(\_,\_)$ to associate on the left i.e. $(S_1,S_2,S_3)$ means $((S_1,S_2),S_3)$ and similarly for longer sequences. Let $CS$ be the set of compound sorts. An arity is a pair $(S,D)$ where $S\in CS$ and $D\in {\bf D}$. Let $A({\bf D})$ be the set of arities for the set of data-sorts $\bf D$. A nominal signature is defined as a set $Op$, which is called the set of operations, together with a function $Ar:Op\rightarrow A({\bf D})$ which assigns to any operation its ``arity''. One writes $O:S\rightarrow D$ to denote that operation $O$ has arity $(S,D)$. We let $Ar_{CS}$ and $Ar_{\bf D}$ denote the two components of the arity. For example, the nominal signature of the lambda calculus has one data-sort $Term$ and three operations $V$, $L$, and $A$ of the form: $$V:Var\rightarrow Term$$ $$L:Var.Term\rightarrow Term$$ $$A:(Term,Term)\rightarrow Term$$ The algebraic signature with one sort $Term$, one operation $S$ in one variable and one constant $O$ will, in this language, have {\em three} operations: $$V:Var\rightarrow Term$$ $$S: Term\rightarrow Term$$ $$O:1\rightarrow Term$$ More generally, an algebraic signature is a nominal signature where $$Op=Op_0\coprod\{v_D\}_{D\in {\bf D}}$$ with $$v_D:Var\rightarrow D$$ and for $O\in Op_0$ $$O:(D_1,\dots,D_n)\rightarrow D$$ for some $n\ge 0$ and $D_1,\dots,D_n,D\in {\bf D}$ where $n$ and $D$'s may depend on $O$. An example of a signature where variables are not terms is given in \cite{Pitts}. A nominal signature can be used to construct terms of all compound sorts in the more or less obvious way. Next one defines the notions of free and bound occurrences of variables in these terms and the notion of $\alpha$-equivalence.
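To make the notions of terms and $\alpha$-equivalence concrete, here is a small illustrative model in Python (not part of the paper; the class names `V`, `L`, `A` mirror the three operations of the $\lambda$-calculus signature above, and $\alpha$-equivalence is decided by the standard technique of converting bound names to de Bruijn indices):

```python
# Hypothetical sketch: terms over the lambda-calculus nominal signature
#   V : Var -> Term,   L : Var.Term -> Term,   A : (Term, Term) -> Term,
# with alpha-equivalence decided by converting bound names to de Bruijn indices.
from dataclasses import dataclass

@dataclass(frozen=True)
class V:                       # a variable used as a term
    name: str

@dataclass(frozen=True)
class L:                       # name-abstraction (lambda)
    var: str
    body: object

@dataclass(frozen=True)
class A:                       # application
    fun: object
    arg: object

def de_bruijn(t, env=()):
    """Replace bound names by their de Bruijn index; free names stay as-is."""
    if isinstance(t, V):
        return env.index(t.name) if t.name in env else t.name
    if isinstance(t, L):
        return ('lam', de_bruijn(t.body, (t.var,) + env))
    return ('app', de_bruijn(t.fun, env), de_bruijn(t.arg, env))

def alpha_eq(s, t):
    return de_bruijn(s) == de_bruijn(t)
```

For example, `alpha_eq(L('x', V('x')), L('y', V('y')))` holds, while two abstractions whose bodies are distinct free names are not identified.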
For a nominal signature $\Sigma$ and a compound sort $S$ one writes $\Sigma_{\alpha}(S)$ for the set of $\alpha$-equivalence classes of terms of sort $S$ built using $\Sigma$. In the case when $\Sigma$ is the $\lambda$-calculus signature one gets the usual set of $\alpha$-equivalence classes of $\lambda$-terms by considering $\Sigma_{\alpha}(Term)$. To any nominal signature $\Sigma$ one associates, following \cite{Pitts}, a functor $T_{\Sigma}:Nom^{\bf D}\rightarrow Nom^{\bf D}$ where $Nom$ is the category of nominal sets, as follows. First one associates a functor $[S]:Nom^{\bf D}\rightarrow Nom$ to any compound sort $S$ by the rule: $$[Var](X)={\bf A}$$ $$[D](X)=X_D$$ $$[1](X)=1$$ $$[(S_1,S_2)](X)=[S_1](X)\times [S_2](X)$$ $$[Var.S](X)=[{\bf A}]([S](X))$$ where ${\bf A}$ is the standard atomic nominal set (the set of names with the canonical action of the permutation group $Perm$) and $[{\bf A}]$ is the name-abstraction functor $Nom\rightarrow Nom$ which is defined in \cite[Section 4]{Pitts}. Let $Op_D$ for $D\in {\bf D}$ be the set of operations $O$ with the target sort $D$, i.e., such that $Ar_{\bf D}(O)=D$. Then one defines $T_{\Sigma}(X)$ by the rule $$T_{\Sigma}(X)_D=\coprod_{O\in Op_D} [Ar_{CS}(O)](X).$$ For example, if $\Sigma$ is the signature of $\lambda$-calculus then $$T_{\Sigma}(X)={\bf A}\coprod [{\bf A}](X)\coprod (X\times X)$$ One of the main results of \cite{Pitts} is that the functor $T_{\Sigma}$ has an initial algebra $I_{\Sigma}$ for any $\Sigma$ and $(I_{\Sigma})_D=\Sigma_{\alpha}(D)$. Let us extend this construction to a monad on $Nom^{\bf D}$ and then on $Sets^{\bf D}$. First observe that for any $X\in Nom^{\bf D}$ the functor $Y\mapsto T_{\Sigma}(Y)\coprod X$ is finitely presented and therefore it has an initial algebra. Let us denote this algebra by $NR_{\Sigma}(X)$. By \cite[pp. 243-244]{Awodey2010}, $NR_{\Sigma}$ is a monad on $Nom^{\bf D}$ whose category of algebras is equivalent to the category of $T_{\Sigma}$-algebras (i.e.
$NR_{\Sigma}$ is the free monad generated by $T_{\Sigma}$). The functor $Discr:Sets \rightarrow Nom$ which takes a set to the corresponding discrete nominal set has a right adjoint $Inv:Nom\rightarrow Sets$ which sends a nominal set $X$ to the set of its fixed points $X^{Perm}$. The functors $Discr^{\bf D}$ and $Inv^{\bf D}$ form an adjoint pair between the categories $Nom^{\bf D}$ and $Sets^{\bf D}$. Given a monad $R$ on a category $\cal C$ and an adjoint pair $(LF,RF)$ where $RF:{\cal C}\rightarrow{\cal C}'$ is the right adjoint, the composition $R'=RF\circ R\circ LF$ is a monad on ${\cal C}'$. Applying this fact to the monad $NR_{\Sigma}$ and the pair $(Discr^{\bf D},Inv^{\bf D})$ we conclude that the functor $$R_{\Sigma}:X\mapsto NR_{\Sigma}(Discr^{\bf D}(X))^{Perm}$$ is a monad on $Sets^{\bf D}$. For a family of sets $X$ the functor $T_{\Sigma}\coprod Discr^{\bf D}(X)$ is naturally isomorphic to the functor $T_{\Sigma+X}$ where $\Sigma+X$ is the signature with the set of operations $Op\coprod (\coprod_{D\in {\bf D}} X_D)$ and the arity function defined on $x\in X_D$ by $Ar(x)=(1,D)$ and $$R_{\Sigma}(X)=NR_{\Sigma}(Discr^{\bf D}(X))^{Perm}=I_{T_{\Sigma}\coprod Discr^{\bf D}(X)}^{Perm}=I_{T_{\Sigma+X}}^{Perm}.$$ Therefore $$(R_{\Sigma}(X))_D=(\Sigma+X)_{\alpha}(D)^{Perm}$$ is the set of invariants in the set of $\alpha$-equivalence classes of terms of sort $D$ with respect to the signature $\Sigma+X$ i.e. the set of $\alpha$-equivalence classes of closed terms of sort $D$ with respect to $\Sigma+X$. If $X_D=\{x_{1,D},\dots,x_{n_D,D}\}$ are finite sets, then the terms with respect to the signature $\Sigma+X$ can be seen as terms with respect to $\Sigma$ which depend on additional parameters $x_{i,D}$ of the corresponding sorts and the closed terms as the terms with respect to $\Sigma$ relative to the name space ${\bf A}^{\bf D}+X$ such that all the occurrences of names from ${\bf A}^{\bf D}$ are bound and all the occurrences of names from $X$ are free. 
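The description of $R_{\Sigma}(X)$ as $\alpha$-classes of closed terms with parameters from $X$ makes the monad structure concrete: the unit sends a parameter $x$ to the bare term $x$, and composition substitutes terms for parameters; since the substituted terms are closed apart from their own parameters, no capture avoidance is needed. A hypothetical Python sketch for the $\lambda$-calculus signature (representation and names are my own, not the paper's):

```python
# Hypothetical model: elements of R_Sigma(X) for the lambda-calculus signature,
# as de Bruijn terms whose only free leaves are parameters drawn from X.
# Such terms are closed apart from their parameters, so the monad's bind is
# plain leaf replacement -- no index shifting is needed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Param:                   # a parameter x in X (the unit of the monad)
    name: str

@dataclass(frozen=True)
class Ix:                      # a bound variable, as a de Bruijn index
    i: int

@dataclass(frozen=True)
class Lam:
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def bind(t, sigma):
    """Substitute the parameter-closed term sigma[x] for each Param(x)."""
    if isinstance(t, Param):
        return sigma[t.name]
    if isinstance(t, Ix):
        return t
    if isinstance(t, Lam):
        return Lam(bind(t.body, sigma))
    return App(bind(t.fun, sigma), bind(t.arg, sigma))
```

With `identity = Lam(Ix(0))` and `t = App(Param('x'), Param('y'))`, `bind(t, {'x': identity, 'y': Param('y')})` gives `App(identity, Param('y'))`, and binding with the unit substitution returns `t` itself, as the monad laws require.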
To obtain from this construction a pair $(R,LM)$ of a monad on $Sets$ and a left module over this monad with values in $Sets$ we will use Lemma \ref{2014.07.28.l3}. Let $Term\in{\bf D}$ and ${\bf Type}\subset{\bf D}$. Let $P$ be a family of sets parametrized by ${\bf D}-\{Term\}$. For a set $X$ let $(X,P)$ be the family such that $(X,P)_{Term}=X$ and $(X,P)_{D}=P_{D}$ for $D\ne Term$. Then $X\mapsto (R_{\Sigma}(X,P))_{Term}$ is a monad $R_{\Sigma,Term,P}$ on $Sets$ by Lemma \ref{2014.07.28.l2} and $$X\mapsto \coprod_{D\in {\bf Type}} (R_{\Sigma}(X,P))_D$$ is a left module $LM_{\Sigma,Term,P,{\bf Type}}$ over $R_{\Sigma,Term,P}$ by Lemmas \ref{2014.07.28.l3} and \ref{2014.07.28.l1}(b). \begin{example}\rm The C-systems of generalized algebraic theories (GATs) of \cite{Cartmell0},\cite{Cartmell1} (see also \cite{Garner}) are obtained by using algebraic signatures with two data sorts ${\bf D}=\{Term,Type\}$, ${\bf Type}=\{Type\}$ and $P=\emptyset$. The ``symbols'' of the GAT are operations of the corresponding algebraic signature. The term symbols of degree $n$ have arity $(Term,\dots,Term)\rightarrow Term$ and the type symbols of degree $n$ have arity $(Term,\dots,Term)\rightarrow Type$ where in both cases the length of the sequence $(Term,\dots,Term)$ is $n$.
\end{example} \begin{example} \llabel{2014.08.ex}\rm To define the Martin-Lof type theory MLTT72 of \cite{ML72} one needs to consider the case when ${\bf D}=\{Term\}$ and the nominal signature is of the form: $$v:Var\rightarrow Term$$ $$\Pi:(Term, Var.Term)\rightarrow Term{\,\,\,\,\,\,\,}\lambda:(Term,Var.Term)\rightarrow Term$$ $$app:(Term,Term)\rightarrow Term$$ $$\Sigma:(Term,Var.Term)\rightarrow Term{\,\,\,\,\,\,\,} pair:(Term,Term)\rightarrow Term$$ $$E:(Term,Var.(Var.Term))\rightarrow Term$$ $$+:(Term,Term)\rightarrow Term{\,\,\,\,\,\,\,} i:Term\rightarrow Term{\,\,\,\,\,\,\,} j:Term\rightarrow Term$$ $$D:((Term,Var.Term),Var.Term)\rightarrow Term$$ $$V:1\rightarrow Term$$ $$N_n:1\rightarrow Term{\,\,\,\,\,\,\,} i_n:1\rightarrow Term{\,\,\,\,\,\,\,} R_n:((Term,\dots,Term),\dots)\rightarrow Term$$ $$n\ge 0{\,\,\,\,\,\,\,} 1\le i \le n$$ $$N:1\rightarrow Term{\,\,\,\,\,\,\,} 0:1\rightarrow Term{\,\,\,\,\,\,\,} s:Term\rightarrow Term$$ $$R:((Term,Term),(Var.(Var.Term)))\rightarrow Term$$ Note that in fact $E$, $D$, $R_n$, and $R$ should also have the type family $C$ (see \cite[2.3.6, 2.3.8, 2.3.10, 2.3.12]{ML72}) as an argument which, in our notation, means an additional component of the form $Var.Term$ in their arities. In fact, the original definition from \cite{ML72} allows for additional ``type constants'' (see \cite[2.2.1]{ML72}) of various algebraic arities which are analogous to the predicate constants in the predicate logic. As such it is a definition of a family of type systems. The signatures underlying all type systems in this family are obtained by extending the signature described above by a set of operations of the form $P:(Term,\dots,Term)\rightarrow Term$. For the signature of the MLTT78 see \cite[p.
158]{ML78} \end{example} \begin{remark}\rm It is possible to ``encode'' a nominal signature in typed $\lambda$-calculus using the idea that closed terms are objects of a base type $term$, terms with one free variable are objects of the type $term\rightarrow term$, terms with two free variables are objects of $term\rightarrow term\rightarrow term$ etc. This encoding allows one to describe the substitutions of closed terms into terms with free variables as applications in the meta-theory. However, it does not allow one to describe the substitution of, e.g., terms with one free variable into terms with one free variable, i.e., the full monadic structure is not recoverable from such a description. This is why typed $\lambda$-calculus systems such as the Logical Framework are of limited use for the description of the syntax of dependent type theories. \end{remark} \comment{To be more precise, the input data consists of a nominal signature $\Sigma$ with one name-sort and a set of data-sorts $\bf D$, a distinguished data-sort $Term\in {\bf D}$, a subset of data-sorts ${\bf Type}\subset {\bf D}$ and a family of sets $(P_{D})_{D\in {\bf D}-\{Term\}}$ parametrized by data-sorts distinct from $Term$. For such a quadruple we describe a monad $R$ such that $R(X)$ is the set of $\alpha$-equivalence classes of expressions of sort $Term$ with variables from the name-space $$varnames := {\bf A}\amalg X\amalg (\amalg_{D\in {\bf D}-\{Term\}} P_D)$$ where ${\bf A}$ is a countable set, all occurrences of variables from ${\bf A}$ are bound and all occurrences of variables from $X\coprod (\coprod_{D\in {\bf D}-\{Term\}} P_D)$ are free. Note that $R(X)$ depends, up to a canonical isomorphism, only on $X$ but not on the choice of a countable set ${\bf A}$. We also describe a left module $LM$ over $R$ such that $LM(X)$ is the disjoint union of $\alpha$-equivalence classes of similar expressions of sorts $D$ for $D\in {\bf Type}$.
When $\Sigma$ Suppose now that we are given a type theory based on the syntax of expressions with free and bound variables specified by a nominal signature $\Sigma$ with one name-sort $var$ and one data-sort $Term$ that is formulated in terms of four kinds of basic judgements originally introduced by Per Martin-Lof in \cite[p.161]{ML78}. Choosing $\bf Type$ to be $\{Term\}$ we obtain a pair $(R,LM)$ where $LM$ is isomorphic to $R$ considered as a left module over itself. For a set $X$ the set $R(X)$ is the set of $\alpha$-equivalence classes of $\Sigma$-expressions over the name space ${\bf A}\amalg X$ where all occurrences of names from $\bf A$ are bound and all occurrences of names from $X$ are free. Since we are only interested in the $\alpha$-equivalence classes of judgements we may assume that the variables declared in the context are taken from the set of natural numbers such that the first declared variable is $1$, the second is $2$ etc. Then, the set of judgements of the form $(1:A_1,\dots,n:A_n\vdash A\, type)$ (in the notation of Martin-Lof ``$A\,type\,(1\in A_1,\dots,n\in A_n)$'') can be identified with the set of judgements of the form $(1:A_1,\dots,n:A_n, n+1:A\rhd)$ stating that the context $(1:A_1,\dots,n:A_n, n+1:A)$ is well-formed. 
With this identification the type theory is specified by four sets $C,\widetilde{C},Ceq$ and $\widetilde{Ceq}$ where $$C \subset \coprod_{n\ge 0} R(\emptyset)\times\dots\times R(\{1,\dots,n-1\})$$ $$\widetilde{C}\subset \coprod_{n\ge 0} R(\emptyset)\times\dots\times R(\{1,\dots,n-1\})\times R(\{1,\dots,n\})\times R(\{1,\dots,n\})$$ $$Ceq \subset \coprod_{n\ge 0} R(\emptyset)\times\dots\times R(\{1,\dots,n-1\})\times R(\{1,\dots,n\})^2$$ $$\widetilde{Ceq} \subset \coprod_{n\ge 0} R(\emptyset)\times\dots\times R(\{1,\dots,n-1\})\times R(\{1,\dots,n\})^2\times R(\{1,\dots,n\})$$ and Proposition \ref{2014.07.10.prop1} spells out the necessary and sufficient conditions that these sets should satisfy in order for it to be possible to construct from them a C-system. More generally, one may consider the case when $\Sigma$ has more than one data-sort as is for example the case in the description of type theories where there is a strict distinction between type expressions and term expressions. In order to define the associated C-system one only needs to have the substitution of expressions for variables when expression is a term expression i.e. an expression of the sort $Term$. Also not all data-sorts of the signature need to correspond to type expressions as for example in the case of logic enriched type theory (see \cite{AczelGambino}) where there is an additional data-sort of propositional expressions. This leads to the idea to consider $\bf Type$ in our construction as a subset of the set of data-sorts of the signature. In order to have everything properly defined one also needs to specify a family $P$. While one can always take it to be the family of empty sets it might be interesting to consider also non-empty cases which correspond to the C-systems determined by a choice of fixed sets of variables or parameters of data-sorts other than the $Term$ sort. } \def$'${$'$}
\section{Introduction} A map $f:W\to \mathbb R^2$ defined on a domain $W$ of ${\Bbb R}^2$ is called planar harmonic if both components of $f$ are harmonic functions. When $W$ is simply connected, identifying ${\Bbb R}^2$ with ${\Bbb C}$, the map $f$ can also be written in the complex form $p(z)+\overline{q(z)}$ where $p$ and $q$ are holomorphic functions on $W$. We are interested in germs $f$ of planar harmonic maps defined in a neighborhood of a point $a$ and we will mainly use the local complex form. For such a germ, we restrict ourselves to the situation given by the two following conditions: \begin{enumerate} \item The fiber at $f(a)$ is the single point $a$. \item The critical set ${\mathcal C}_f$ is a smooth curve at the point $a$. (The critical set is the vanishing locus of the Jacobian.) \end{enumerate} The first condition means that $f$ is light in the sense of Lyzzaik. In our paper, we will also present some remarks for the non-light case. The second condition implies that ${\mathcal C}_f$ is not a single point. Otherwise, one can prove that up to a $C^1$ change of coordinates, $f$ is holomorphic or anti-holomorphic. The critical value set ${\mathcal V}_f=f({\mathcal C}_f)$ will play an important role in our work. In our setting, the following natural equivalence relation is introduced: we say that two germs $f,g$ of planar harmonic maps defined respectively at points $a$ and $b$ are equivalent if there exists a germ of biholomorphism $u$ between neighborhoods of $a$ and $b$, and a real affine bijection $\ell$, such that $\ell\circ f\circ u= g$. For this relation, we obtain four numerical invariants and various normal forms. These tools allow one to understand analytic and topological facts about germs of harmonic maps. In particular they shed new light on the geometric models which appear in the work of Lyzzaik. In order to fully understand these invariants, the study of the complexification of $f$ plays a fundamental role.
Explicitly, the four numerical invariants are: \begin{itemize} \item[--] the absolute value $d$ of the local topological degree. In our situation, Lyzzaik's work shows that $d$ is $0$ or $1$, \item[--] the number $m$ which is the lowest degree of a non-constant monomial in the power series expansion of the harmonic germ $f$ at the point $a$, \item[--] the local multiplicity $\mu$ of the complexified map of $f$, defined as the cardinality of the generic fiber, \item[--] the number $j$ which is the valuation of the analytic curve ${\mathcal V}_f$ in a locally injective parametrization. \end{itemize} Inspired by Lyzzaik's models \cite{Ly-light}, we prove in a self-contained way that the germs $f$ are classified topologically by the numbers $m$ and $d$, or equivalently by $m$ and the parity of $m+j$. We give a simple description of these classes in terms of generalized folds and cusps. A main result of our work is that the conditions $j\geq m$ and $\mu= j+m^2$ are necessary and sufficient for the existence of a harmonic planar germ satisfying the conditions 1 and 2 above. The second relation was at first guessed from many computations. Our proof is based on the theory of the Milnor fibration for germs of holomorphic functions on ${\Bbb C}^2$. The number $j$ also occurs at another level. In fact we prove that the critical value set ${\mathcal V}_f$ can be parametrized by $x(t)= Ct^j +h.o.t.$ and $y(t)= C't^{j+1}+h.o.t.$, where $C$ and $C'$ are nonzero constants. In other words, ${\mathcal V}_f$ is a curve with Puiseux pair $(j, j+1)$, and Puiseux theory gives a topological characterization of the complexification of the critical value set. This pair also completes Lyzzaik's description of ${\mathcal V}_f$ itself. Every equivalence class of harmonic germs contains a normal form $p(z)^m-\overline{z}^m$, with $p$ a holomorphic germ tangent to the identity at $0$.
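For the simplest normal form ($m=1$, $f(z)=p(z)-\overline z$) the Jacobian can be written down explicitly: for a harmonic map $h(z)+\overline{g(z)}$ one has $J=|h'|^2-|g'|^2$, so here $J(z)=|p'(z)|^2-1$ and the critical set is the curve $\{|p'(z)|=1\}$. A hedged numeric check, with the sample choice $p(z)=z+z^2$ (my own choice, not taken from the paper):

```python
# Hedged numeric check (sample p is my own choice): for f(z) = p(z) - conj(z)
# the Jacobian is |p'(z)|^2 - 1, so the critical set is { |p'(z)| = 1 }.

def p(z):  return z + z * z        # tangent to the identity at 0
def dp(z): return 1 + 2 * z

def f(x, y):
    """f(z) = p(z) - conj(z), viewed as a map R^2 -> R^2."""
    w = p(complex(x, y)) - complex(x, -y)
    return (w.real, w.imag)

def jacobian(x, y, h=1e-6):
    """Finite-difference Jacobian determinant of f at (x, y)."""
    ux = (f(x + h, y)[0] - f(x - h, y)[0]) / (2 * h)
    uy = (f(x, y + h)[0] - f(x, y - h)[0]) / (2 * h)
    vx = (f(x + h, y)[1] - f(x - h, y)[1]) / (2 * h)
    vy = (f(x, y + h)[1] - f(x, y - h)[1]) / (2 * h)
    return ux * vy - uy * vx
```

For this $p$ the critical set $\{|1+2z|=1\}$ is a circle through the origin, so the origin is a smooth critical point, as in condition 2 above.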
When $m=1$, $\mu$ is equal to the order at the origin of the holomorphic germ $\overline{p}\circ p$, where $\overline{p}(z)$ is defined as $\overline{p(\overline{z})}$. As a by-product, one gets an algorithm to compute $\mu$ in the polynomial case. We also obtain relations between the numerical invariants of $p(z)-\overline{z}$ and $p(z)^m-\overline{z}^m$. The condition $m=1$ for a harmonic germ means that the gradient of its Jacobian does not vanish at the point $a$. In an appendix, we generalize results obtained for numerical invariants in the harmonic case to real and complex analytic planar germs satisfying the previous Jacobian condition. In this situation, we get that ${\mathcal V}_f$ still has Puiseux pair $(j, j+1)$, with the same geometric consequences. Moreover, as in the harmonic case, $\mu=j+1$. In the analytic case, we also get an algorithm to compute $\mu$, inspired by a fundamental work of Whitney \cite{Wh} on cusps and folds. At the end of the appendix, some examples are presented in order to compare the harmonic situation to the real analytic one. Lyzzaik has also studied the case of non-smooth critical sets. In a forthcoming work, we hope to extend some of our results to this more general situation. {\bf Acknowledgement}. We would like to thank J.H. Hubbard, F. Laudenbach, A. Parusi\'nski and H.H. Rugh for inspiring discussions. \section{Basic concepts and main results} Let $K=\mathbb R$ or $ \mathbb C$. Let $U$ be a domain in $K^2$. By convention a domain is a connected open set. Consider a planar mapping $$f: U\to K^2,\quad \left(\begin{matrix} x\\ y\end{matrix}\right) \mapsto \left(\begin{matrix} f_1(x,y)\\ f_2(x,y)\end{matrix}\right).$$ We say that $f$ is {\it a $K$-analytic map} if each of $f_1$ and $f_2$ can be expressed locally as convergent power series.
In this case denote by \\ -- $J_f$ the Jacobian of $f$,\\ -- ${\mathcal C}_f= \{ J_f=0\,\}$ the {\it critical set}, \\ -- ${\mathcal V}_f=f({\mathcal C}_f)$ the {\it critical value set}. We say that \\ -- $z_0\in {\mathcal C}_f$ is a {\it regular critical point} of $f$ if $\nabla J_f(z_0)\ne (0,0)$;\\ -- $z_0\in {\mathcal C}_f$ is a {\it smooth critical point} of $f$ if ${\mathcal C}_f$ is a 1-dimensional submanifold near $z_0$. By the implicit function theorem a regular critical point is necessarily a smooth critical point. But the converse is not true. We will see many examples in the following. We say that $f$ is {\it a planar harmonic map} if $K=\mathbb R$ and each of $f_1$, $f_2$ is $C^2$ with Laplacian equal to zero. Recall that $\Delta f_j(x,y)=(\partial^2_x+\partial^2_y)f_j(x,y)$. Note that in this case each $f_i$ is locally the real part of a holomorphic map. Thus a planar harmonic map is in particular $\mathbb R$-analytic. The {\it order} of an analytic map $f: K^m\to K^n$ at a point $p$ in the source is the lowest total degree of a nonzero monomial in the coordinate-wise Taylor expansions of one of the components of $f-f(p)$ around $p$. For a $\mathbb C$-analytic map $F: W\to \mathbb C^n$ with $W$ an open set of $ \mathbb C^n$ and for a point $w_0\in W$, we define the {\it multiplicity} of $F$ at $w_0$ by (see \cite{Ch}) $$ \mu(F,w_0)=\limsup_{w \to w_0} \# F^{-1}F(w )\cap X$$ where $X$ is an open neighborhood of $w_0$, relatively compact in $W$, such that $F^{-1}F(w_0)\cap \overline X=\{w_0\}$. If no such $X$ exists, set $\mu(F,w_0)=\infty$. In the situation above there is an open neighborhood $U$ of $F(w_0)$ and an open dense subset $U_1\subset U$ such that for all $w\in U_1$, $\mu(F,w_0)=\# F^{-1}F(w )\cap X$. For a holomorphic map $\eta: U\to \mathbb C$ with $U$ an open set of $ \mathbb C$, and $w_0\in U$, the two notions coincide.
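For a concrete illustration of the fiber-counting definition, here is a minimal sympy sketch; the map $F(u,v)=(u^2,v^3)$ is a hypothetical example of ours, not taken from the text:

```python
import sympy as sp

u, v = sp.symbols('u v')

# Hypothetical illustration: F(u, v) = (u**2, v**3) at the origin.
# The multiplicity is the cardinality of a generic fiber near 0.
s, t = 4, 8                      # a generic target value (s, t) != (0, 0)
fiber = [(x, y) for x in sp.solve(u**2 - s, u)
                for y in sp.solve(v**3 - t, v)]
print(len(fiber))                # 6 generic preimages, so mu(F, 0) = 6

# Consistency check with the codimension count: the classes of the monomials
# u**a * v**b with a < 2 and b < 3 form a basis of C{u,v}/(u**2, v**3),
# whose dimension is likewise 2 * 3 = 6.
```

The same number, $6$, is obtained whether one counts generic preimages or the dimension of the local algebra, in line with the equivalence recalled in the text.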
For an $\mathbb R$-analytic map $f: U\to \mathbb R^2$, in particular a planar harmonic map, we define its multiplicity at a point to be the multiplicity of its holomorphic extension in $ \mathbb C^2$. We will also frequently use the well-known fact that for a holomorphic map $f: U\to \mathbb C^2$ its multiplicity at a point $p_0=(x_0,y_0)\in U$ is equal to the codimension, in the ring of power series, of the ideal defined by its components: \[ \mu(f,p_0)=\dim_{ \mathbb C}\frac{ \mathbb C\{u,v\}}{f_1(x_0+u,y_0+v),f_2(x_0+u,y_0+v)}. \] See for example \cite[theorem 6.1.4]{dJP}. One objective of this work is to show that harmonic maps around a smooth critical point of a given order have only two types of topological behaviour, depending on the parity of the multiplicity. Our investigation is based on Whitney's singularity theory of $C^\infty$ planar mappings, the multiplicity theory of holomorphic maps of two variables, and Lyzzaik's work on light harmonic mappings. A smooth 1-dimensional manifold in $\mathbb R^2$ admits a smooth parametrization. If the critical set of a harmonic mapping is smooth somewhere, there is actually a parametrization that is in some sense natural. This induces a natural parametrization $\beta(t)$ of the critical value set. \begin{definition} We denote by $R_{\ge k} (s)$ a convergent power series in $s$ whose lowest power in $s$ is at least $k$. A planar $K$-analytic curve $\beta: \{|s|<\varepsilon\}\ni s\mapsto \beta(s)\in K^2 $ is said to have the \emph{order pair $(j,k)$} at $ \beta(0)$, for some $1\le j<k\le \infty$, if up to reparametrization in the source and an analytic change of coordinates in the range $K^2$, the curve takes the form $\beta(s)=\beta(0)+ \left(\begin{matrix} {\rule[-2ex]{0ex}{4.5ex}{}} Cs^j + R_{\ge j+1} (s)\\ C's^k + R_{\ge k+1 }(s)\end{matrix}\right)$ with $C\cdot C'\ne 0$.
\end{definition} Let us assume that $k$ is not a multiple of $j$, and that the complexified parametrization of $\beta$ is locally injective. This is the case in particular if $(j,k)$ are co-prime. Then the order $j$, and more generally the order-pair $(j,k)\in \mathbb N^2$, is an analytic invariant of the curve, independent of such a parametrization: $j$ is the minimum, and $k$ the maximum, of the intersection multiplicities $(\beta,\gamma)$ among all smooth $K$-analytic curves $\gamma$. This order-pair is also a topological invariant of the complexified curve because $\frac1{\gcd(j,k)}(j,k)$ is its first Puiseux pair. Such a unique order-pair exists unless $j=1$ or $j=+\infty$. Our main goal is to establish a relationship between the order of the critical value curve and the multiplicity, and then to connect these invariants to Lyzzaik's topological models. More precisely, we will prove: \REFTHM{main} Let $f$ be a planar harmonic map in a neighborhood of $z_0$ with $z_0$ a smooth critical point. \begin{enumerate} \item (Critical value order-pair) The critical value curve has a natural parametrization and an order $j$ at $f(z_0)$. It has an order-pair of the form $(1,\infty)$ if $j=1$, and $(j,j+1)$ if $1<j<\infty$. \item (Critical value order and multiplicity) The three invariants $m$ (order of $f$), $j$ (order of the critical value curve) and $\mu$ (multiplicity of the complexified map on $( \mathbb C^2,z_0)$) are related by the (in)equalities: \REFEQN{triple} \left\{\begin{array}{l} \infty\ge j\ge m\ge 1\\ j+m^2=\mu.\end{array}\right. \end{equation} \item (Topological model) Assume $\mu<\infty$. Let $\mathbb D$ be the unit disc in $ \mathbb C$.
There is a neighborhood $\Delta$ of $z_0$, a pair of orientation preserving homeomorphisms $h_1: \Delta\to \mathbb D, \ z_0\mapsto 0, \quad h_2 : \mathbb C\to \mathbb C,\ f(z_0)\mapsto 0$, and a pair of positive odd integers $2n^\pm-1$ satisfying \Ref{N-plus} below, such that $$h_2\circ f\circ h_1^{-1}(re^{i\theta})=\left\{\begin{array}{ll} re^{i(2n^+-1)\theta} & 0\le \theta\le \pi\\ re^{-i(2n^--1)\theta} & \pi\le \theta\le 2\pi\ . \end{array}\right.$$ Moreover $\# f^{-1}(z)=n^++n^-$ or $n^+ + n^- -2$ depending on whether $z$ is in one sector or the other of $f(\Delta) {\smallsetminus } \beta$. \REFEQN{N-plus}\text{\fbox{$ \begin{array}{cl} \text{ $\mu$ even},& \left(\begin{matrix} {\rule[-2ex]{0ex}{4.5ex}{}} 2n^+-1\\ 2n^--1\end{matrix}\right) \in \left\{ \left(\begin{matrix} m \\ m \end{matrix}\right), \left(\begin{matrix} m+1 \\ m+1 \end{matrix}\right)\right\} \\ \text{$\mu$ odd}, & \left(\begin{matrix} {\rule[-2ex]{0ex}{4.5ex}{}} 2n^+-1\\ 2n^--1\end{matrix}\right) \in \left\{ \left(\begin{matrix} m+1 \\ m-1 \end{matrix}\right), \left(\begin{matrix} m-1 \\ m+1 \end{matrix}\right), \left(\begin{matrix} m \\ m+2 \end{matrix}\right), \left(\begin{matrix} m+2 \\ m \end{matrix}\right)\right\} \end{array}$}} \end{equation}\end{enumerate} \end{Thm} We want to emphasise that guessing and proving the relation $\mu=j+m^2$ was the main point of this work. Our starting point was the case $m=1$, which corresponds to the critically regular case. We could then establish $\mu=j+1$. But this case does not indicate a general formula. A considerable amount of numerical experimentation was necessary to reveal a plausible general relation, and results in singularity theory about Milnor fibres then had to be employed to actually prove the relation.
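The preimage counts $n^++n^-$ and $n^++n^--2$ in the topological model can be checked directly on the model map, by counting for each target angle the solutions $\theta$ on each half circle. A minimal Python sketch (the function name and the sampled angles are our illustrative choices, not the paper's notation):

```python
import math

def count_preimages(phi, n_plus, n_minus):
    """Count theta in (0, 2*pi) with model(theta) = phi on the unit circle,
    where model(theta) = (2*n_plus - 1)*theta on (0, pi) and
    model(theta) = -(2*n_minus - 1)*theta on (pi, 2*pi), angles taken mod 2*pi."""
    a, b = 2 * n_plus - 1, 2 * n_minus - 1
    count = 0
    for k in range(a + 1):                    # upper half circle
        theta = (phi + 2 * math.pi * k) / a
        if 0 < theta < math.pi:
            count += 1
    for k in range(b + 1):                    # lower half circle
        theta = (2 * math.pi * k - phi) / b
        if math.pi < theta < 2 * math.pi:
            count += 1
    return count

# n^+ = 2, n^- = 1: one sector has n^+ + n^- = 3 preimages, the other 3 - 2 = 1
print(count_preimages(math.pi / 2, 2, 1), count_preimages(3 * math.pi / 2, 2, 1))
```

The two printed counts, 3 and 1, match $n^++n^-$ and $n^++n^--2$ for the two sectors of $f(\Delta){\smallsetminus}\beta$; the same check works for any admissible pair $n^\pm$.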
We will deduce the topological model from our formula \Ref{triple} and a result of Lyzzaik \cite{Ly-light}. More precisely, we will parametrize the critical value curve $\beta$ in a natural way and then express its derivative, as did Lyzzaik, in the form $\beta'(t)=Ce^{it/2}\cdot R(t)$, with $C\ne 0$ and $R(t)$ a real-valued analytic function. Lyzzaik defined in his Definition 2.2 the singularity to be of the {\em first kind} if $R(t)$ changes sign at $0$, which is equivalent to $j$ being even, and of the {\em second kind} if $R(0)=0$ and $R(t)$ does not change sign at $0$, which is equivalent to $j>1$ being odd. He then deduced the local geometric shape of $\beta$ (cusp or convex) in his Theorem 2.3 according to the kind. What we do here is to push his calculation further to determine the order-pair of the critical value curve $\beta$, which then automatically gives its shape (cusp or convex). Lyzzaik then provided topological models in his Theorem 5.1 according to the parity of an integer $\ell$ (which corresponds to our $m-1$) and the kind (or the shape of $\beta$) of the singularity, corresponding in our setting to the parity of $m+j$. Thanks to our relation \Ref{triple}, we may then express Lyzzaik's topological model in terms of the parity of $\mu$. Lyzzaik's proof relies on previous results of Y. Abu Muhanna and A. Lyzzaik \cite{AL}. We will reestablish his models with a self-contained proof. As a by-product, we obtain the following existence result (which was a priori not obvious): \REFCOR{existence} Given any triple of integers $(m,j,\mu)$ satisfying \Ref{triple}, there is a harmonic map $g(z)$ with a smooth critical point $z_0$ such that $Ord_{z_0}(g) =m$, $\mu(g,z_0)=\mu$, and $(j, j+1)$ is the order-pair of the critical value curve at $g(z_0)$.
Given any pair of integers $n^\pm\ge 1$ satisfying $\left\{\begin{array}{ll} n^+=n^-\text{\ \ or}\\ |n^+-n^-|= 1\end{array}\right.$, there are two consecutive integers $k,k+1$ and harmonic maps with order $m=k$ and $m=k+1$ respectively realizing the topological model \Ref{N-plus} for the pair $n^\pm$ and the order $m$. \end{corollary} \section{Normal forms for planar harmonic mappings} Recall that any real harmonic function on a simply connected domain in $ \mathbb C$ is the real part of some holomorphic function. Therefore, if $U \subset \mathbb C$ is simply connected, and $f\,:\,U \to \mathbb C$ is a harmonic mapping, then $f=p+\overline{q}$ where $p$ and $q$ are holomorphic functions in $U$ that are unique up to additive constants. We will say that $p+\overline{q}$ is a {\it local expression} of $f$. In a study around a point $z_{0}$ we will often take the unique local expression in the form $\,f(z)=f(z_{0})+p(z)+\overline{q(z)}$ with $p(z_{0})=q(z_{0})=0$. \subsection{Existence and uniqueness of the normal forms} \begin{definition}[A natural equivalence relation] For $Z,W$ open sets in $ \mathbb C$, with $z_0\in Z$, $w_0\in W$, and for harmonic mappings $f:Z\to \mathbb C$ and $g:W\to \mathbb C$, we say that $(f,z_0)$ and $(g,w_0)$ are {\it equivalent}, and we write $$(f,z_0)\sim (g,w_0),$$ if there is a bijective $\mathbb R$-affine map $H: \mathbb C\to \mathbb C, z\mapsto az+b\overline z+c$ and a biholomorphic map $h:W'\to Z'$ with $z_0\in Z'\subset Z$, $w_0\in W'\subset W$ such that $h(w_0)=z_0$ and $g=H\circ f\circ h$ on $W'$. \end{definition} \REFLEM{local normal forms} Let $f$ be a non-constant harmonic map defined on a neighborhood of $z_0$. Then \REFEQN{Local} (f,z_0)\sim (g,0) \quad \text{for some}\quad g(z)=z^m-\overline{z^n(1+O(z))}\end{equation} with $\infty\ge n\ge m\ge 1$ (here $O(z)$ denotes a holomorphic map near $0$ vanishing at $0$).
Moreover if another map $G(z)=z^M-\overline{z^N(1+O(z))}$ with $\infty\ge N\ge M\ge 1$ satisfies $(G,0)\sim (g,0)$ then $(M,N)=(m,n)$. If $m<n$ then $g(z)=\dfrac1{c^m}G(cz)$ for $c$ an $(m+n)$-th root of unity. \end{lemma} {\noindent\em Proof.} We may assume $z_0=0$ and $f(0)=0$. I. We may assume that $f$ is harmonic on a simply connected open neighborhood $V$ of $0$. One can thus write $f(z)=p(z)-\overline{q(z)}$ with $p,q$ holomorphic on $V$, and we may assume $p(0)=q(0)=0$. Case 0. Assume $p\equiv 0$ or $q\equiv 0$. Replacing $f(z)$ by $\overline{f(z)}$ if necessary, we may assume $q\equiv 0$. In this case $p(z)=az^m(1+O(z))$ with $a\ne 0$ and there is a bi-holomorphic map $h$ so that $p(z)=(h(z))^m$. Therefore $f(z)=g(h(z))$ with $g(w)=w^m$. II. Assume now that neither $p$ nor $q$ is a constant function. Replacing $f(z)$ by $\overline{f(z)}$ if necessary, we may assume $p(z)=az^m(1+O(z))$ and $q(z)=bz^n(1+O(z))$ with $\infty> n\ge m\ge 1$ and $a\cdot b\ne 0$. Replacing $f$ by $(\overline{b\lambda})^{-n}f(\lambda z)$ changes $a$ to $(\overline{b\lambda})^{-n}a\cdot \lambda ^m$ and $b$ to $1$. We may thus assume $f(z)=Az^m(1+O(z))-\overline{z^n(1+O(z))}$, $A\ne 0$. Case 1. $m< n$. Choose $\rho$ so that $\dfrac {A\cdot \rho^m}{\overline \rho^n}=1$. Replacing $f$ by $\dfrac{f(\rho z)}{\overline \rho^n}$, we may assume $f(z)=z^m(1+O(z))-\overline{z^n(1+O(z))}$. Case 2. $m=n$. We choose $\tau$ so that $\dfrac {A\cdot \tau^m}{\overline \tau^n}=\dfrac {A\cdot \tau^m}{\overline \tau^m}\in \mathbb R_+^*$. We may thus assume $$f(z)=cz^m(1+ O(z))-\overline{ z^m(1+O(z))},\quad c>0.$$ If $c=1$ we stop. Assume $c\ne 1$. Then $H(z):=z+\dfrac1c \overline z$ is an invertible linear map. And as $c$ is real, we easily get: \[ H(f(z))= (c-\dfrac1c)z^m(1+O(z))-\overline{O(z^{m+1})}\qquad \text{with } c-\dfrac1c\ne 0\ .\] Replacing $f$ by $H\circ f$ we are reduced to Case 0 or Case 1.
Therefore in any case we may assume $$f(z)=z^m(1+O(z))-\overline{z^n(1+O(z))},\quad 1\le m\le n, \ m<\infty,\ n\le \infty.$$ Now there is a holomorphic map $h$ with $h(0)=0$, $h'(0)=1$ defined in a neighborhood of $0$ so that the holomorphic part of $f$ can be expressed as $h(z)^m$. Then $$f\circ h^{-1}(z)=z^m-\overline{z^n(1+O(z))},\quad 1\le m\le n, \ m<\infty,\ n\le \infty$$ on some neighborhood of $0$. This establishes the existence of normal forms. Let us now take a map $G(z)=z^M-\overline{z^N(1+O(z))}$ with $\infty\ge N\ge M\ge 1$ so that $(G,0)\sim (g,0)$ with $g(z)=z^m-\overline{z^n(1+O(z))}$ and $m\le n$. It is easy to see that $M=m$. Let now $h(z)=cz(1+O(z))$ be a holomorphic map with $c\ne 0$ and $H(z)=az+b\overline z$ so that $H\circ G\circ h(z)=g(z)$. Then $$a\cdot (h(z))^m -a\cdot \overline{( h(z))^N(1+ O(z))}+ b\cdot \overline{(h(z))^m} -b \cdot ( h(z))^N(1+ O(z))=z^m -\overline{z^n(1+ O(z))}. $$ Assume $n>m=M$. If $N=m$ then the terms $z^m$ and $\overline z^m$ have coefficients $(a-b)c^m$ and $(b-a)\bar c^m$ on the left-hand side, and $(1,0)$ on the right-hand side. This is impossible. So $N>m$ as well. Comparing the $\overline z^m$ terms on both sides we get $b=0$, and then from the $z^m$ terms we get $ac^m=1$. Comparing then the holomorphic parts of both sides we get $h(z)=cz$. Now the anti-holomorphic part gives $N=n$ and $\overline a c^n=1$. It follows that $\overline c^{-m} c^n=1$. So $|c|=1$, $c^{m+n}=1$ and $c^{-m}G(cz)=g(z)$. \hfill{q.e.d.} We remark that in the case $m=n$ the normal form is not unique. Here is an example: let $G(z)=z+ iz^2-\overline z$. For any $b$ with ${\rm Re}\, b\ne -\frac12$, the map is equivalent to $G(z)+bG(z)+b\overline{G(z)}=(z+iz^2+bi z^2)-\overline{z-\overline b i z^2}=w+ O(w^2) -\bar w=: g(w)$ for $w=z-\overline b i z^2$.
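The linear combination used in this remark can be verified formally by treating $\overline z$ as an independent variable; a minimal sympy sketch (the symbol `zbar` standing for $\overline z$ is our device, not the paper's notation):

```python
import sympy as sp

z, zb, b = sp.symbols('z zbar b')

G  = z + sp.I * z**2 - zb        # G(z) = z + i z^2 - \bar z, with \bar z formal
Gc = zb - sp.I * zb**2 - z       # the formal expression of \overline{G(z)}

lhs = sp.expand(G + b * G + b * Gc)
rhs = sp.expand(z + (1 + b) * sp.I * z**2 - zb - b * sp.I * zb**2)
print(sp.simplify(lhs - rhs))    # 0: the combination equals z + (1+b) i z^2 - zbar - b i zbar^2
```

The expansion makes the holomorphic and antiholomorphic parts of the combined map explicit, and the invertibility of $w\mapsto w+bw+b\overline w$ is exactly the condition ${\rm Re}\, b\ne -\frac12$.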
\subsection{Criterion and normal forms for critically smooth points} We say that a subset $Q$ of $ \mathbb C$ is a {\bf locally regular star} at $z_0$ of $\ell$-arcs if there is a neighborhood $U$ of $z_0$ and a univalent holomorphic map $\phi: U\to \mathbb C$ with $\phi(z_0)=0$ so that $Q\cap U=\{z, \phi(z)^\ell\in \mathbb R\}$. If $\ell=1$ then $Q$ is a smooth arc in $U$. \REFLEM{Smooth} Let $f$ be a harmonic map in a neighborhood of $z_0$. The following conditions are equivalent: \begin{enumerate}\item[1)] ${\mathcal C}_f$ is a non-constant smooth $\mathbb R$-analytic curve in a neighborhood of $z_0$. \item[2)] For $m:=Ord_{z_0}(f)$, in a local expression $f(z)=p(z)+\overline{q(z)}$, we have $m =Ord_{z_0}(p)=Ord_{z_0}(q)<\infty$, and the map $\psi(z):=\dfrac{p'(z)}{q'(z)}$ extends to a holomorphic map at $z_0$, with $|\psi(z_0)|=1$ and $\psi'(z_0)\ne 0$. \item[3)] $(f, z_0)\sim (g,0)$ with \REFEQN{degenerate} g(z)=z^{m}+bz^{m+1}+O(z^{m+2}) +\overline{z^{m}}, \quad |b|=1.\end{equation} \end{enumerate} Every equivalence class of such $(f,z_0)$ has a representative in any of the following forms (with any choice of signs): $ (f, z_0)\sim (h,0) $ with \REFEQN{plus minus}\ h(z)=\pm z^{m}+bz^{m+1}+O(z^{m+2}) \pm \overline{z^{m}} \text{\quad or\quad } h(z)=\pm \Big(z+bz^2+O(z^{3})\Big)^m \pm \overline{z^{m}}, \quad |b|=1.\\ \end{equation} Furthermore, $z_0$ is a regular critical point if and only if $m=1$. \end{lemma} {\noindent\em Proof.} Assume at first $f(z)=p(z)+\overline{q(z)}$, with $$p(z)=z^m+ bz^{m+k} + O(z^{m+k+1}),\quad q(z)=z^m,\quad k\ge 1, \ b\ne 0.$$ Note that $J_f=|p'|^2-|q'|^2$. Set $\psi(z)=\dfrac{p'(z)}{q'(z)}$. We have $${\mathcal C}_f=\{J_f=0\}=\{q'=0\}\cup \{ |\psi |=1\}=\{0\}\cup \psi^{-1}(S^1)=\psi^{-1}(S^1) \ .$$ But $\psi^{-1}(S^1)$ is a locally regular star at $0$ of $k$-arcs. So ${\mathcal C}_f$ is smooth at $0$ if and only if $k=1$, or equivalently, $\psi'(0)\ne 0$.
This proves in particular the implication 3)$\Longrightarrow$1). Let us prove 1)$\Longrightarrow$3). We may assume $f$ is in the local normal form \Ref{Local}. If $m\ne n$ then it is easy to see that $z_0$ is an isolated point of ${\mathcal C}_f$. This cannot happen under the smoothness assumption on ${\mathcal C}_f$. So $m=n$. Replacing $f$ by $\overline{f(az)/a^m}$ with $a^{2m}=-1$, we have $f(z)=p(z)+\overline{q(z)}$, with $$p(z)=z^m+ bz^{m+k} + O(z^{m+k+1}),\quad q(z)=z^m,\quad k\ge 1, \ b\ne 0.$$ Since ${\mathcal C}_f$ is smooth at $0$, by the argument above we have $k=1$. We may then replace $f$ by $f(\lambda z)/\lambda^m$ for $\lambda=\dfrac1{|b|}>0$ to get a normal form with $|b|=1$. This is \Ref{degenerate}. The rest of the proof is similar. We leave the details to the reader. \hfill{q.e.d.} \section{Order $j(f,z_0)$ of the critical value curve for a harmonic map} For a harmonic map near a smooth critical point, we will introduce what we call the natural parametrization of the critical value curve, and then compute its order-pair in this coordinate. Points 3, 4 and 5 of the following result are due to Lyzzaik \cite{Ly-light}. Just to be self-contained, we reproduce Lyzzaik's proof here (with a somewhat different presentation). \REFLEM{curves} Assume $f(z)=p(z)+\overline{q(z)}$ is a harmonic mapping which is critically smooth at $z_0$. Set $\psi(z)=\dfrac{p'(z)}{q'(z)}$ and $m=Ord_{z_0}f$. \begin{enumerate}\item We have $\lambda:=\psi(z_0)\in S^1$ and $\psi'(z_0)\ne 0$. The critical set ${\mathcal C}_f$ in a neighborhood of $z_0$ coincides with $\psi^{-1}(S^1)$ and is locally a smooth arc. We endow this arc with what we call the natural parametrization $\gamma(t):=\psi^{-1}(\lambda e^{it})$;\item We then endow the critical value set with what we call its natural parametrization $\beta(t):=f(\gamma(t))$. Set $j=Ord_0(\beta(t))$. Either $\beta(t)\equiv \beta(0)=f(z_0)$, in which case $j=+\infty$ by convention, or $\infty> j\ge m $.
\item For the line $L=\{f(z_0)+ s\sqrt\lambda,s\in \mathbb R\}$, the set $f^{-1}(L)$ is a locally regular star at $z_0$ with $2(m+1)$ branches. \item We have $\beta'(t)=\sqrt{\lambda e^{it}}R(t)$, with $R(t)=2{\rm Re} \Big( \sqrt{\lambda e^{it}} \dfrac d{dt}q(\gamma(t))\Big)$ an $\mathbb R$-analytic real function of $t$. \item In the case $\beta'(t)\not\equiv 0$, the curve $t\mapsto \beta(t)$ is locally injective, has a strictly positive curvature in a punctured neighborhood of $0$, always turns to the left, and is tangent to $L$ at $\beta(0)$. \item We have $j-1=Ord_0(R)$ and $\infty\ge j\ge m $. Either $j=\infty$ and $\beta\equiv \beta(0)$, or the curve $\beta$ has the order-pair $(j,j+1)$ at $0$. \end{enumerate} \end{lemma} {\noindent\em Proof.} Point 1. We have $${\mathcal C}_f=\{J_f=0\}=\{q'=0\}\cup \{ |\psi |=1\}=\{q'=0\}\cup \psi^{-1}(S^1)\ .$$ Since $q(z)$ is not constant (otherwise $\psi\equiv \infty$), we know that $\{q'=0\}$ is discrete and avoids a punctured neighborhood of $z_0$. Therefore, reducing $U$ if necessary, we have $\{J_f=0\}\cap U=\psi^{-1}(S^1)\cap U$, and we may choose a holomorphic branch of $\sqrt{\psi(z)}$ for $z\in U$. From Lemma \ref{Smooth} we know that $\psi(z_0)\in S^1$ and $\psi'(z_0)\ne 0$, so $\psi$ is locally injective. Reducing $U$ further if necessary, we see that $\{J_f=0\}\cap U$ is a smooth arc. We call the parametrization $\gamma(t)=\psi^{-1}(\psi(z_0)\cdot e^{it})$ the natural parametrization of ${\mathcal C}_f$. Point 2. We endow the critical value set with the natural image parametrization $\beta(t)=f(\gamma(t))$. Write $f(z)=p(z)+\overline{q(z)}$, $p(z)=a(z-z_0)^m+h.o.t.$ and $q(z)=q(z_0)+A(z-z_0)^{m}+ h.o.t.$ for some $a,A\ne 0$. Due to the smoothness of the critical set at $z_0$, we have $\gamma(t)= z_0+\gamma'(0)\cdot t+ h.o.t.$ with $\gamma'(0)\ne 0$.
So $$\beta(t)=f(\gamma(t))=p(\gamma(t))+\overline{q(\gamma(t))}=\beta(0)+(a \gamma'(0)^m+ \overline A \overline{\gamma'(0)^m}) t^m + h.o.t\ .$$ It follows that $j=Ord_{0}\beta(t)$ satisfies $m\le j\le +\infty$. Point 3. Without loss of generality we may assume $z_0=0$ and $f(z_0)=0$. Choose a local expression $f(z)=p(z)+\overline{q(z)}$ so that $p(0)=q(0)=0$. Then $p(z)=az^m + O(z^{m+1})$ and $q(z)=bz^m + O(z^{m+1})$ for some $a,b\ne 0$. We have $\dfrac ab= \psi_f(0)=\lambda$. Rewrite now $f$ in the form $$f(z)=\sqrt{\lambda}\Big(P(z) + \overline{Q(z)}\Big) = \sqrt{\lambda} \Big(P(z)-Q(z) + Q(z)+\overline{Q(z)}\Big)$$ with $P(z)=p(z)/\sqrt \lambda$. Then $P(z)$ and $Q(z)$ have the same coefficient for the term $z^m$ and $Ord_0(P)=Ord_0(Q)=m$. Set \REFEQN{F} F(z)=P(z)-Q(z),\ r(z)=Q(z)+\overline{Q(z)}\quad \text{so that}\quad f(z)=\sqrt\lambda \Big(F(z)+ r(z)\Big).\end{equation} Note that $r(z)$ is real-valued, and $F(z)$ is holomorphic of order greater than $m$. Write $F$ in the form $F(z)=c z^{m+n}(1+O(z))$ with $c\ne 0$ and $n\ge 1$. As $P(z)=Q(z)+ F(z)$, we have $$\psi_f(z)=\dfrac {\sqrt\lambda P'(z)}{\overline{\sqrt\lambda}Q'(z)}=\lambda\Big( 1+\dfrac{F'(z)}{Q'(z)}\Big) .$$ It follows that $n=Ord_0(\psi_f-\lambda)$. But $Ord_0(\psi_f-\lambda)=1$ by the smoothness assumption on the critical set. So $n=1$ and $F$ takes the form $F(z)=c z^{m+1}(1+O(z))$ with $c\ne 0$. Finally $f^{-1}L=\{f(z)\in \sqrt \lambda\cdot \mathbb R\}=\{F(z)+r(z)\in \mathbb R\}=\{F(z)\in \mathbb R\}=F^{-1}\mathbb R$. This set is therefore a locally regular star of $2(m+1)$ branches. Point 4. We follow the calculation of Lyzzaik. Let $z\in {\mathcal C}_f$. Then $|\psi(z)|=1$, so there are two choices of $\sqrt{\psi(z)}$. Fix a choice of the square root.
$$\begin{array}{rcl}Df|_z&=&p'(z)dz+\overline{q'(z) }d\overline z \\ &=&{\rule[-2ex]{0ex}{4.5ex}{}} q'(z)\psi(z)dz +\overline{q'(z) } d\overline z\\ &=& \sqrt{\psi(z)}\left(\sqrt{\psi(z)}q'(z)dz+ \overline{ \sqrt{\psi(z)}q'(z)}d\overline z \right)\\ &=&{\rule[-2ex]{0ex}{4.5ex}{}} \sqrt{\psi(z)}\,{\rm Re}\, \left(2\sqrt{\psi(z)}q'(z) dz \right).\end{array}$$ As $\gamma(t)$ is defined by $\psi(\gamma(t))=\lambda e^{it}$, we have $$\beta'(t)=Df|_{\gamma(t)}(\gamma'(t))= \sqrt{\lambda e^{it}}\, R(t),\quad \text{where}\quad R(t)= {\rm Re}\, \left(2\sqrt{\lambda e^{it}}\dfrac d{dt}q(\gamma(t)) \right).$$ Points 5 and 6. Assume that $\beta$ is not constant. Then $Ord_0(\beta)=j<\infty$, $R(t)\not\equiv 0$ and $Ord_0R=j-1$. So $\dfrac{\beta'(t)}{R(t)}\to \sqrt\lambda$ as $t\to 0$. It follows that $\beta(t)$ is tangent to $L$ at $\beta(0)$. Furthermore, $$R(t)=C(t^{j-1}+bt^j+O(t^{j+1}))\ , \ C\in \mathbb R^*,\ b\in \mathbb R\ .$$ A simple calculation shows that $\beta''(t)=\left(\dfrac{R'(t)}{R(t)} + \dfrac i2\right)\beta'(t)$. As $\dfrac{R'(t)}{R(t)}$ is real, we already see that the oriented angle from $\beta'$ to $\beta''$ is in $]0, \pi[$. One can also check the sign of the curvature of $\beta$: \REFEQN{Curvature} \kappa_\beta(t)=\dfrac{{\rm Im}\,(\overline{\beta'(t)} \cdot \beta''(t)) }{|\beta'(t)|^3}=\dfrac 1{2|\beta'(t)|} > 0,\quad t\in ]-\delta,\delta[ {\smallsetminus } \{0\}\ .\end{equation} This shows that there is some $\delta>0$ such that $\beta(t)$ is on the left of its tangent for any $t\in ]-\delta,\delta[ {\smallsetminus }\{0\}$ if $\beta'(0)=0$, and for any $t\in ]-\delta, \delta[$ if $\beta'(0)\ne 0$. Moreover, $$\beta(t)=\beta(0)+ \int_0^t \beta'(s) ds =\beta(0)+ C\sqrt{\lambda} \int_0^t e^{is/2}\left(s^{j-1}+b s^{j}+ h.o.t.\right)ds $$ $$=\beta(0)+ C\sqrt{\lambda} \int_0^t \left(s^{j-1}+ (b+\dfrac i 2) s^{j}+ h.o.t.
\right)ds.$$ So $${\rm Re} \dfrac{\beta(t)-\beta(0)}{C\sqrt{\lambda}}=\displaystyle \int_0^t s^{j-1}(1+ O( s^{j}) )ds,\quad {\rm Im} \dfrac{\beta(t)-\beta(0)}{C\sqrt{\lambda}}=\displaystyle \int_0^t \dfrac{s^{j}}2(1+ O( s^{j+1}) )ds.$$ It follows that $t\mapsto \beta(t)$ is locally injective and that $\beta$ has the order pair $(j,j+1)$ at $0$. \hfill{q.e.d.} \begin{definition} Let $f$ be a harmonic map and $z_0$ be a smooth critical point. We denote by $j(f,z_0)$ the integer such that the critical value curve has the order-pair $(j(f,z_0), j(f,z_0)+1)$ in its natural parametrization. We call $j(f,z_0)$ the {\it critical value order} of $f$ at $z_0$. Let us notice that $j(f,z_0)$ is an analytic invariant, hence a fortiori invariant under our equivalence relation on harmonic maps. \end{definition} \section{Between critical value order and multiplicity} The objective here is to prove the following \REFTHM{general} Given a harmonic map $G$ together with a smooth critical point $z_0$, the three local analytic invariants $m$ (order of $G$), $j$ (order of the critical value curve) and $\mu$ (multiplicity of the complexified map on $( \mathbb C^2,z_0)$) are related by the (in)equalities: $$ \left\{\begin{array}{l} \infty\ge j\ge m\ge 1\\ j+m^2=\mu.\end{array}\right. $$ \end{Thm} \subsection{A formula for the multiplicity $\mu$} For $p(z)=\sum a_iz^i$ we use $\overline p(z)$ to denote the power series $\overline p (z)= \sum \overline a_i z^i$. The following lemma provides a formula for the multiplicity, which in the case of a polynomial $p$ leads to an algorithm. \REFLEM{first} Let $p(z)$ be a holomorphic map with $p(0)=0$. Let $f(z)=p(z)-\bar z$ and $g(z)=p(z)^m-\overline z^m$ (with $m\ge 1$ an integer). Then $\mu(f,0)=Ord_0(\bar p \circ p(z)-z)$ and more generally: $$ \mu(g,0)= \sum_{\xi^m=\eta^m=1}Ord_0 \Big(\eta\, \overline p (\xi \, p (z))-z\Big).
$$ \end{lemma} {\noindent\em Proof.} Consider the following holomorphic extensions of $f$ and $g$ in $ \mathbb C^2$: $$M_f: \left(\begin{matrix} u\\ v\end{matrix}\right) \mapsto \left(\begin{matrix} p(u)-v\\ \overline p(v)-u\end{matrix}\right),\quad M_g: \left(\begin{matrix} u\\ v\end{matrix}\right) \mapsto \left(\begin{matrix} p(u)^m-v^m\\ (\overline p(v))^m-u^m \end{matrix}\right).$$ By definition $\mu(f,0)=\mu(M_f, {\bf 0})$ and $\mu(g,0)=\mu(M_g,{\bf 0})$. Let us work directly with $M_g$. It is known, for example by \cite[theorem 6.1.4]{dJP}, that $\mu(g,0)<\infty$ if and only if the germs of planar curves $p(u)^m-v^m=0$ and $(\overline p(v))^m-u^m=0$ have no branch in common. This condition means that there is a neighborhood of $(0,0)$ in $ \mathbb C^2$ in which $ \left(\begin{matrix} 0\\ 0\end{matrix}\right) $ is the only solution of the system of equations $M_g \left(\begin{matrix} u \\ v\end{matrix}\right) = \left(\begin{matrix} 0\\ 0\end{matrix}\right) $. Since this system is equivalent to the existence of $\xi,\eta$ such that \[ \xi^m=\eta^m=1,\text{ and } v=\xi p(u), \quad \eta\overline{p}(\xi p(u))-u=0, \] the condition $\mu(g,0)<\infty$ is indeed equivalent to the finiteness of the order in the right-hand side of the statement of Lemma \ref{first}. We denote by $\mu_1=\sum_{\xi^m=\eta^m=1}Ord_0 \Big(\eta\, \overline p (\xi \, p (z))-z\Big)$ this order. Solving the equation $M_g \left(\begin{matrix} u \\ v\end{matrix}\right) = \left(\begin{matrix} 0\\ t\end{matrix}\right) $, we get \begin{equation*} \left\{ \begin{aligned} v-\xi p(u)=0 \\ \prod_{\xi^m=\eta^m=1}(\eta\overline{p}(\xi p(u))-u)=t \end{aligned} \right. \end{equation*} There are $\mu_1$ distinct solutions in the variable $u$ for the second equation, hence $ \mu_1$ solutions for the system, which merge at the single solution $(0,0)$ when $t\to 0$.
These solutions are all simple, which means that $M_g$ is locally invertible at each of them. Applying again \cite[theorem 6.1.4]{dJP}, this proves that $\mu(M_g,{\bf 0})=\mu_1$. \hfill{q.e.d.} Note that $|p'(0)|\ne 1$ iff $\mu(f,0)=1$. Otherwise $\mu(f,0)\ge 2$. \subsection{Normalizations} \REFLEM{relation} Any harmonic map $G$ near a smooth critical point $z_0$ is equivalent to $(g, 0)$ with $g(z)=p(z)^m-\overline z^m$ for some integer $m\ge 1$ and some holomorphic function $p(z)=z + b z^2 + O(z^3)$, $|b|=1$. Furthermore, setting $f_\xi(z)=\xi \cdot p(z)-\overline z$, $\xi\in \mathbb C$, we have \[ \mu(G,z_0)=\mu(g,0)=\left\{\begin{array}{ll} m^2+m & \text{if } (-b^2)^m\ne 1, \\ \mu(f_{-b^2},0)+ (m-1)(m+2)> m^2+m& \text{otherwise.} \end{array} \right. \] \end{lemma} Note that in the particular case $m=1$, the above formula becomes $$\mu(G,z_0)=\mu(g,0)=\left\{\begin{array}{ll} 2 & \text{if } b^2\ne -1, \\ \mu(f_1,0)> 2& \text{otherwise.} \end{array} \right. $$ {\noindent\em Proof.} The existence of the model map $g$ follows from Lemma \ref{Smooth}. In the following, the sums are over the $m$-th roots of unity for both $\eta$ and $\xi$. By Lemma \ref{first}, \begin{eqnarray*}\mu(g ,0) &=& \sum_{\xi^m=\eta^m=1}Ord_0 \Big(\eta\, \overline p (\xi \, p (z))-z\Big)\\ &=& \left[\sum_{\eta\xi\ne 1}+ \sum_{ \eta\xi=1 }\right]Ord_0 \Big(\eta\, \overline p (\xi \, p (z))-z\Big) \\ &=& \sum_{\eta\xi\ne 1} Ord_0 \Big(\eta\, \overline p (\xi \, p (z))-z\Big) +\sum_{ \xi^m=1}Ord_0 \Big( \overline{\xi p} (\xi \, p (z))-z\Big)\\ &\overset{Lem. \ref{first}}=& \sum_{\eta\xi\ne 1} Ord_0 \Big(\eta\, \overline p (\xi \, p (z))-z\Big) +\sum_{ \xi^m=1,\xi\ne -b^2} Ord_0\Big( \overline{\xi p} (\xi \, p (z))-z\Big) + C\mu(f_{-b^2},0)\end{eqnarray*} where $C=0$ if $-b^2$ does not coincide with any $m$-th root of unity, and $C=1$ otherwise. There are $m(m-1)$ pairs $(\eta,\xi)$ with $\eta^m=1=\xi^m$ and $\eta\xi\ne 1$.
For each such pair, $Ord_0 \Big(\eta\, \overline p (\xi \, p (z))-z\Big)=1$. This gives $m(m-1)$ for the first sum above. Now for any $\xi$ with $\xi^m=1,\xi\ne -b^2$, we have $Ord_0\Big( \overline{\xi p} (\xi \, p (z))-z\Big)=2$. If $-b^2$ is not equal to any $m$-th root of unity, there are $m$ terms in the middle sum above, so $\mu(g,0)=m(m-1)+2m=m^2+m$. Otherwise there are $m-1$ terms, so $\mu(g,0)=m(m-1)+2(m-1) + \mu(f_{-b^2},0)=m^2+m-2 + \mu(f_{-b^2},0) $. In this case one can easily check that $ \mu(f_{-b^2},0)>2$. So $\mu(g,0)> m^2+m$. \hfill{q.e.d.} Consider now $$g(z)=p(z)^m-\overline z^m=\Big(z + b z^2 + O(z^3)\Big)^m-\overline z^m, \quad |b|=1 .$$ A direct calculation using the first term of $\gamma(t)=\psi_g^{-1}(-e^{it})$ shows that the critical value curve $\beta$ in its natural parametrization satisfies $\beta(t) = \dfrac {2i}{(m+1)^m} {\rm Im}\Big(\dfrac{i^m}{b^m} \Big)t^m + o(t^m)$. Clearly $m=Ord_0(g)$. Let $j$ be the order of $\beta(t)$ at $0$, and $\mu$ the multiplicity of $g$ at $0$. We want to prove $$j\ge m\quad \text{and\quad} \mu= j+m^2.$$ Note that for $|b|=1$, $$(-b^2)^m=1 \Longleftrightarrow \Big(\dfrac i{ \overline{b}}\Big)^{2m}=1 \Longleftrightarrow \Big(\dfrac i b\Big)^{2m}=1 \Longleftrightarrow \Big(\dfrac i b\Big)^m=\pm 1 \Longleftrightarrow {\rm Im}\Big(\dfrac{i^m}{b^m}\Big) =0\ .$$ This, together with Lemma \ref{relation}, gives: \REFCOR{generic case} (The generic case) For $p(z)=z + b z^2 + O(z^3)$, $|b|=1$ with $(-b^2)^m\ne 1$, and $g(z)=p(z)^m-\overline z^m$, we have $$j =m\text{\quad and \quad}\mu= j + m^2=m+m^2.$$ If $(-b^2)^m= 1$ then $j > m$. \end{corollary} It remains to work on the degenerate case $(-b^2)^m=1$.
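Lemma \ref{first} turns the computation of $\mu$ into order counts of one-variable power series; here is a minimal sympy sketch in the polynomial case. The sample germs $p(z)=z+iz^2$ and $b=(1+i)/\sqrt2$ are our hypothetical choices (the second satisfies $(-b^2)^2=-1\ne 1$, hence falls in the generic case):

```python
import sympy as sp

z = sp.symbols('z')

def ord0(expr):
    """Lowest exponent of a nonzero monomial of a polynomial at z = 0."""
    return min(k[0] for k in sp.Poly(sp.expand(expr), z).monoms())

def mu(p, pbar, m):
    """mu(p(z)^m - zbar^m, 0) via the sum over m-th roots of unity."""
    roots = [sp.exp(2 * sp.I * sp.pi * k / m) for k in range(m)]
    return sum(ord0(eta * pbar.subs(z, xi * p) - z)
               for xi in roots for eta in roots)

# m = 1: mu(f, 0) = Ord_0(pbar(p(z)) - z); here p(z) = z + i z^2 gives mu = 3
print(mu(z + sp.I * z**2, z - sp.I * z**2, 1))            # 3, so j = mu - 1 = 2

# m = 2, generic case b = (1+i)/sqrt(2): mu = m^2 + m
b = (1 + sp.I) / sp.sqrt(2)
print(mu(z + b * z**2, z + sp.conjugate(b) * z**2, 2))    # 6 = 2**2 + 2
```

Both outputs agree with the formulas of Lemma \ref{relation}: $\mu=j+m^2$ with $j=2$, $m=1$ in the first case, and $\mu=m^2+m$ in the generic case $m=2$.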
\REFLEM{b=i} Any harmonic map of the form $g(z)=(z+bz^2+O(z^3))^m-\overline z^m$ with $(-b^2)^{m}=1$ is equivalent to a map of the form $(z+iz^2+O(z^3))^m-\overline z^m$.\end{lemma} {\noindent\em Proof.} One just needs to replace $g$ by $g(\lambda z)/\overline{\lambda}^m$ for $\lambda=1/(-ib)$. \hfill{q.e.d.} \subsection{The normalised degenerate case} The following statement will complete the proof of Theorem \ref{general}. This is by far the hardest case. \REFTHM{key-relation} Let $p(z)=z + i z^2 + O(z^3)$ be a holomorphic map in a neighborhood of $0$ and let $m\ge 1$ be an integer. Set $g(z)=p(z)^m-\overline z^m$. Then $g$ is a harmonic map with $0$ as a smooth critical point. Let $j$ be the order of the critical value curve at $g(0)=0$ in its natural parametrization, and $\mu$ the multiplicity of $g$ at $0$. Then $$ j>m\quad\text{and}\quad \mu= j+m^2.$$ \end{Thm} {\noindent\em Proof.} We know already that $0$ is a smooth critical point of $g$ and that $j>m$ (Corollary \ref{generic case}). Let us look at the complexification of $g$: $G \left(\begin{matrix} u \\ v\end{matrix}\right)= \left(\begin{matrix} p(u)^m-v^m\\ -u^m+\overline p (v)^m \end{matrix}\right)= \left(\begin{matrix} u^m(1+iu+o(u))^m - v^m \\ -u^m+ v^m(1-iv+o(v))^m \end{matrix}\right)= \left(\begin{matrix} G_1\\ G_2\end{matrix}\right).$ The critical set of $G$ in $ \mathbb C^2$ contains the set $\{(uv)^{m-1}=0\}$, which consists of the two branches $u=0$ and $v=0$. The corresponding critical value branches are $G \left(\begin{matrix} 0 \\ v\end{matrix}\right) = \left(\begin{matrix} -v^m\\ v^m(1-iv + O(v^2))^m \end{matrix}\right) ,\ G \left(\begin{matrix} u \\ 0\end{matrix}\right) = \left(\begin{matrix} u^m(1+i u+O(u^2))^m\\ -u^m\end{matrix}\right).$ Both are plane curves with order pair $(m, m+1)$.
The other branch gives a critical value curve $\beta$ with order pair $(j, j+1)$, as we already know from the real calculation. By comparing the two parametrizations, an elementary calculation shows that these two branches are distinct. They are also distinct from the third branch since we shall prove that $j>m$, hence they have different first Puiseux pairs. The local behavior of $G$ at each of these critical branches, off the origin, is given by the following:

\REFLEM{local} The multiplicity of $G$ at a real critical branch point (off the origin) is 2, and the multiplicity of $G$ at a non-real critical branch point (off the origin) is $m$. \end{lemma}

{\noindent\em Proof.} The expression of $G$ at the point $ \left(\begin{matrix} 0 \\ v_0\end{matrix}\right)$ in local coordinates $ \left(\begin{matrix} u \\ w\end{matrix}\right)= \left(\begin{matrix} u \\ v-v_0\end{matrix}\right)$ is $$G \left(\begin{matrix} u \\ v_0+w\end{matrix}\right)- \left(\begin{matrix} -v_0^m\\ \overline p(v_0)^m\end{matrix}\right) = \left(\begin{matrix} p(u)^m-mv_0^{m-1}w+O(w^2)\\ -u^m+Q(v_0)w+O(w^2)\end{matrix}\right) . $$ Using the Taylor formula for $\overline{p}(v_0+w)^m$ we find $Q(v_0)=m\overline{p}(v_0)^{m-1}\overline{p}'(v_0)$. In order to see that the germ of $G$ at the point $ \left(\begin{matrix} 0 \\ v_0\end{matrix}\right)$ is equivalent by analytic coordinate changes to the germ $ \left(\begin{matrix} \hat{u} \\ \hat{v}\end{matrix}\right) \to \left(\begin{matrix} \hat{u}^m \\ \hat{v}\end{matrix}\right)$, it is sufficient to check that $Q(v_0)\neq mv_0^{m-1}$ for any small enough nonzero $v_0$. The proof for the branch $u\to G \left(\begin{matrix} u \\ 0\end{matrix}\right)$ is similar.
\hfill{q.e.d.}

The preimage $G^{-1}(S)$ of a small sphere $S$ centered at the origin is a smooth 3-manifold, and in fact we are going to prove a stronger result stating that the pair $(G^{-1}(B),G^{-1}(S))$ is diffeomorphic to the pair made of the standard ball and the standard sphere. We notice that $G^{-1}(S)$ is defined by the equation $N(u,v):=\left\Vert G \left(\begin{matrix} u \\ v\end{matrix}\right)\right\Vert^2= \epsilon^2 $, and is the boundary of $G^{-1}(B)=\left\{ \left(\begin{matrix} u \\ v\end{matrix}\right)\mid\left\Vert G \left(\begin{matrix} u \\ v\end{matrix}\right)\right\Vert^2\leq \epsilon^2\right\}$. The above result will then follow from a general statement, given in the next lemma, about a function $N$ defined on an open set of $\mathbb R^n$.

\begin{lemma}\label{isotopy} Let $N : W\to \mathbb R$ be a positive real analytic map defined in a neighborhood of $0\in \mathbb R^n$ such that $N^{-1}(0)=\{0\}$. Then there is $\epsilon_0 >0$ such that for $0<\epsilon\leq \epsilon_0$ the pair of sets \[ \left(\{x\in \mathbb R^n\mid N(x)\leq \epsilon^2\},\{x\in \mathbb R^n\mid N(x)= \epsilon^2\}\right) \] is diffeomorphic to the pair made of the standard ball and the standard sphere. \end{lemma}

{\noindent\em Proof.} First we prove that $N$ is a submersion outside the origin if we restrict to a small enough neighborhood of $0$: there is a constant $r_0>0$ such that if $0<\norm{x}\leq r_0$ we have \[ {\rm grad} N(x):=\left(\frac{\partial N}{\partial x_1},\dots,\frac{\partial N}{\partial x_n}\right)(x)\neq0. \] Indeed, if this were not true, we could by the curve selection lemma \cite[lemma 3.1]{Mi} find an analytic path $\gamma : [0,\eta_0[\longrightarrow W$ such that $ \gamma(0)=0$ and $\gamma(t)\neq0$ for $t\in ]0,\eta_0[$, and ${\rm grad} N(\gamma(t))=0$. But then we would have $\frac{d}{dt}(N(\gamma(t)))=\scal{\gamma'(t),{\rm grad} N(\gamma(t))}=0$.
But then $N(\gamma(t))$ would be constant, equal to $N(\gamma(0))=0$, and this contradicts $\gamma(t)\neq0$ for $t\neq0$. In a second step we show that the gradient of $N$ tends to point outward from $0$ when $t\to 0$. More precisely this means that given an analytic path $\gamma : [0,\eta_0[\,\longrightarrow W$ such that $ \gamma(0)=0$ and $\gamma(t)\neq0$ for $t\in ]0,\eta_0[$, we have: \[ \underset{t\to 0}{\lim\;\;}\frac{\scal{\gamma(t),{\rm grad} N(\gamma(t))}}{\norm{\gamma(t)}\cdot\norm{{\rm grad} N(\gamma(t))}}\geq0. \] Indeed let $\alpha,\beta$ be the valuations of $\gamma$ and ${\rm grad} N\circ\gamma$. We have power series expansions with initial vector coefficients $a,b\in \mathbb R^n$: \[ \gamma(t)=at^\alpha+o(t^\alpha), \quad {\rm grad} N(\gamma(t))=bt^\beta+o(t^\beta) \] and the limit above is $\frac{\scal{a,b}}{\norm{a}\norm{b}}.$ The expansion of the derivative of $\gamma$ is $\gamma'(t)=\alpha at^{\alpha-1}+o(t^{\alpha-1})$, and therefore $\frac{d}{dt}(N(\gamma(t)))=\scal{\gamma'(t),{\rm grad} N(\gamma(t))}=\alpha\scal{a,b}t^{\alpha+\beta-1}+o(t^{\alpha+\beta-1})$. Since $N(\gamma(t))>0$ for small enough positive $t$, this forces the inequality $\scal{a,b}\geq 0$ and we are done. We deduce a quantified version of this behaviour of the gradient vector field, showing that the angle between the vectors $x$ and ${\rm grad} N(x)$ is bounded away from $\pi$. Precisely, making the constant $r_0$ above smaller if necessary, we may assume that for $0<\norm{x}\leq r_0$: \[ \frac{\scal{x,{\rm grad} N(x)}}{\norm{x}\cdot\norm{{\rm grad} N(x)}}\geq -\frac12. \] This claim is a consequence of the curve selection lemma, because the limit property of ${\rm grad} N(x)$ implies that $0$ cannot be in the closure of the semi-analytic set \[ Z:=\{x\in W \mid 0<\norm{x}\leq r_0, \quad \scal{x,{\rm grad} N(x)}<-\frac12\norm{x}\cdot\norm{{\rm grad} N(x)}\}.
\] Our third and last step is to show that we have a homotopy between $\Sigma =N^{-1}(\epsilon ^2)$ and the standard sphere $\{\norm{x}^2=\epsilon ^2\}$, because the gradient of interpolations between $N$ and $\norm{x}^2$ never vanishes outside the origin. Indeed the choice we made for $r_0$ has the following consequence: for any $t\in [0,1]$ we have $2tx+(1-t){\rm grad} N(x)\neq 0$, and this implies that the relative gradient with respect to $(x_1,\dots, x_n)$ of the deformation $N(t,x):=t\norm{x}^2+(1-t)N(x)$ is non-zero for any $x\neq 0$: \[ \forall t\in [0,1],\forall x, 0<\norm{x}\leq r_0 , \quad {\rm grad}_xN(t,x)=\left(\frac{\partial N}{\partial x_1},\dots,\frac{\partial N}{\partial x_n}\right)(t,x)\neq0 . \] Using the continuity of $N$, let us choose $\epsilon_0<r_0$ such that $N(x)\leq \epsilon _0^2\Longrightarrow \norm{x}<r_0$, and fix $0<\epsilon\leq\epsilon_0$. Then the property we obtained on the gradient shows that, writing $N_t(x):=N(t,x)$, for each $t\in [0,1]$ the set $\Sigma_t:=N_t^{-1}(\epsilon^2)$ (resp.\ $\Sigma:=N^{-1}(\epsilon^2)$) is a submanifold of the open ball $B(0,r_0)$ (resp.\ of the product $[0,1]\times B(0,r_0)$). The set $B_t=N_t^{-1}([0,\epsilon^2])$ is a manifold with boundary $\Sigma_t$ and interior an open set of $\mathbb R^n$. Similarly $\Sigma$ is a part of the boundary of $B=N^{-1}([0,\epsilon^2])\subset [0,1]\times B(0,r_0)$, to be completed by $B_0\cup B_1$\footnote{One could easily avoid considering a manifold with a corner along $\Sigma_0\cup \Sigma_1$ by slightly enlarging the range of $t$ to an open interval $]-\eta,1+\eta[$.}. We notice that $(B_1,\Sigma_1)$ is the standard ball of radius $\epsilon$ with its boundary. Finally the restriction to $\Sigma $ of the projection $(t,x)\longrightarrow t$ is a submersion. This implies, by the version with boundary of a well-known theorem of Ehresmann \cite{Eh}, that the pair $(B,\Sigma)$ is locally trivial above $[0,1]$, which means that we have a diffeomorphism \[ (B,\Sigma)\longrightarrow [0,1]\times (B_0,\Sigma_0).
\] In particular we have a diffeomorphism $(B_0,\Sigma_0)\longrightarrow (B_1,\Sigma_1)$ as expected. \hfill{q.e.d.}

Let us now come back to the map $G$. Take a small round closed ball $D$ of radius $\epsilon_0$ and its preimage $B$, so that $G:B\to D$ is a covering of degree $\mu$ outside the critical value curves and $\partial D$ is transverse to the critical value set. It follows from lemma \ref{isotopy} applied to $N=\norm{G}^2$ that $\partial B$ is a smooth 3-manifold diffeomorphic to a sphere. We take $r_0$ as in this lemma and denote $B_\epsilon=\{x\mid N(x)\leq \epsilon^2\}\subset B(0,r_0)$ and $\Sigma_\epsilon=\partial B_\epsilon$ for all $0<\epsilon\leq\epsilon_0$. Let $\ell: \left(\begin{matrix} x_1\\ x_2\end{matrix}\right)\mapsto ax_1 + bx_2$ be a generic linear form. For $\epsilon_0$ small enough the disc $(\ell=0)\cap D$ intersects the critical value set ${\mathcal V}$ only at the origin. Therefore for $t$ a small enough non-zero complex number, the line $L_t$ with equation $\ell(u,v)=t$ is transversal to the boundary of $D$ and $L_t\cap D$ is a disc $\Delta_t$. Furthermore, if $t\neq0$, $L_t$ intersects the critical value set ${\mathcal V}$ at $j+2m$ points contained in the interior of $D$. Set $Y_t :=\{\ell(G_1,G_2)=t\}=G^{-1}(\{\ell=t\})$.

\begin{proposition} With well-chosen $r_0,\epsilon_0$ as in the proof of lemma \ref{isotopy}, and $t\neq0$ small enough, $X_t:=Y_t\cap B_{\epsilon_0}$ is diffeomorphic to the Milnor fiber of the function $\ell(G_1,G_2)$.
\end{proposition}

\begin{proof} In the proof of lemma \ref{isotopy} we may choose if necessary a smaller $r_0$ to guarantee that the standard ball $B'_{r_0}=\Big\{ \left(\begin{matrix} u \\ v\end{matrix}\right)\mid \Big\| \left(\begin{matrix} u \\ v\end{matrix}\right)\Big\|\leq r_0\Big\}$ is a Milnor ball, which means that $Y_0$ is transverse to the standard sphere $\partial B'_{r}$ for each $r\in ]0,r_0]$ and the Milnor fiber is by definition $Y_t\cap B'_{r_0}$ for $0<|t|\leq \eta_0$, with $\eta_0$ small enough. By this very definition $B'_r$ is also a Milnor ball and $Y_t\cap B'_r$ a Milnor fiber provided that we restrict the condition on $t$ to $0<|t|\leq \eta$ for an appropriate $\eta<\eta_0$. In fact for such a $t$ the inclusion $Y_t\cap B'_r\subset Y_t\cap B'_{r_0}$ yields a deformation retraction between two diffeomorphic varieties. Now we have the inclusion $B_{\epsilon_0}\subset B'_{r_0}$ and, choosing $r$ small enough to get $B'_r\subset B_{\epsilon_0}$, we can perform again the construction of lemma \ref{isotopy} and we get the chain of inclusions: \begin{equation}\label{1} B_{\epsilon}\subset B'_r\subset B_{\epsilon_0}\subset B'_{r_0}. \end{equation} Let us choose $\eta _0$ small enough both for the validity of the Milnor fibration and for the transversality of the intersections $L_t\cap \partial D$ as described above, with $D$ of radius ${\epsilon_0}$. We also have to notice that $L_0$ is transverse to $\Sigma_{\epsilon}$ for all $\epsilon\leq \epsilon _0$. Then at any point $y\in \partial X_t=Y_t\cap \Sigma_{\epsilon_0}$, the two varieties $Y_t$ and $\Sigma_{\epsilon_0}$ are also transversal, and so are $Y_0$ and $\Sigma_{\epsilon}$ for $0<\epsilon\leq \epsilon_0$. Indeed at such a point $y$ we have avoided ${\mathcal V}$ and the map $G$ is a local diffeomorphism.
Because of these transversalities we can construct Milnor fibrations with Milnor fiber $Y_t\cap B_\epsilon$, using ``pseudo Milnor balls'' $B_\epsilon$, which form a basis of neighborhoods of $0$. The arguments are exactly the same as with the standard Milnor fibration. It is known (see \cite{Le}, Theorem 3.3) that this Milnor fiber is diffeomorphic to the standard one. The proof uses the chain of inclusions (\ref{1}). Indeed we choose $t$ small enough for the intersections of $Y_t$ with the four terms in (\ref{1}) to be Milnor fibers. The two inclusions $Y_t\cap B_\epsilon\subset Y_t\cap B_{\epsilon_0}$ and $Y_t\cap B'_r \subset Y_t\cap B'_{r_0}$ are homotopy equivalences. Therefore, in the sequence of maps induced by restriction \[ \xymatrix{H^i(Y_t\cap B'_{r_0})\ar[r]^{\alpha_1}&H^i(Y_t\cap B_{\epsilon_0})\ar[r]^{\alpha_2}&H^i(Y_t\cap B'_r)\ar[r]^{\alpha_3}&H^i(Y_t\cap B_\epsilon)} \] the composites $\alpha_3\circ \alpha_2$ and $\alpha_2\circ \alpha_1$ are isomorphisms, and this forces the middle arrow to be an isomorphism for $i=0,1$. Since we work on surfaces with boundaries this is enough to obtain that they are diffeomorphic. \hfill{q.e.d.} \end{proof}

\begin{proposition} The surface $X_t$ is connected and $\chi(X_t)= 2m-m^2$. Furthermore its boundary has $m$ connected components and its genus is $g(X_t)=\frac{(m-1)(m-2)}2$. \end{proposition}

{\noindent\em Proof.} Since $X_t$ is a smooth real surface with boundary, connected by \cite{Mi}, its Euler characteristic is $\chi(X_t)=1-\dim H^1(X_t, \mathbb C)$. The first statement in the proposition is thus equivalent to the fact that the Milnor number $\mu(\ell\circ G)=\dim H^1(X_t, \mathbb C)$ equals $(m-1)^2$. To check this fact, recall that $\mu(\ell\circ G)$ is an analytic invariant (and even a topological one) of the function.
Let us calculate a standard form, up to an analytic change of coordinates, for $L:=\ell(G_1,G_2)$: \begin{eqnarray*} L(u,v)&=&a(p(u)^m-v^m)+b(-u^m+\overline p (v)^m)\\ &=&(a-b)u^m(1+O(u))- (a-b)v^m(1+O(v))=U^m-V^m \end{eqnarray*} where $U=\varphi(u)$ and $V=\psi(v)$ are given by a diagonal change of coordinates $\Phi: \left(\begin{matrix} u\\ v\end{matrix}\right) \mapsto \left(\begin{matrix} \varphi(u)\\ \psi(v)\end{matrix}\right) $. We can now check that $\mu(\ell(G_1,G_2))=(m-1)^2$ by the formula for the Milnor number as the codimension of the Jacobian ideal: $\mu(L)=\dim_{\mathbb C} \mathbb C\{u,v\}/(\frac{\partial L}{\partial u},\frac{\partial L}{\partial v})=\dim_{\mathbb C} \mathbb C\{U,V\}/(U^{m-1},V^{m-1})=(m-1)^2$. The statement on the boundary follows since the number of components of the boundary is the number of irreducible local components of the curve $L(u,v)=0$, namely $m$. The genus then follows from $\chi(X_t)=2-2g(X_t)-m$, which gives $g(X_t)=\frac{(m-1)(m-2)}2$. \hfill{q.e.d.}

Now we are ready to finish the proof of Theorem \ref{key-relation}. We already know that $G: X_t \to \Delta_t$ is a ramified cover of degree $\mu$ with $j+2m$ critical values and that above each critical value there is exactly one critical point. By the proof of Lemma \ref{local} we know that the germ of the map $G$, at a critical point different from $(0,0)$, is up to analytic changes of coordinates equivalent to one of the two germs $(z_1,z_2)\to (z_1,z_2^2)$ or $(z_1,z_2)\to (z_1,z_2^m)$. Since the disc $\Delta_t$ is transversal to the critical value curve, we deduce that for $G: X_t \to \Delta_t$ the critical points are simple on the smooth branch, and of local multiplicity $m$ above the phantom curves (each such point contributing $m-1$ to the ramification). By Riemann-Hurwitz, $\chi(X_t)+\sum_p(e_p-1)=\mu\,\chi(\Delta_t)=\mu$, where $e_p$ is the local multiplicity at the critical point $p$. So $1-(m-1)^2+\big(j+2m(m-1)\big)=\mu$. That is $\mu=j+m^2$.
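For the reader's convenience, the arithmetic of this last step is
$$\mu=\underbrace{2m-m^2}_{\chi(X_t)}+\underbrace{j}_{\text{simple critical points}}+\underbrace{2m(m-1)}_{\text{two branches of local multiplicity }m}=j+m^2\,.$$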
\hfill{q.e.d.}

Combining this with Lemma \ref{relation}, in which we plug in $b=i$ so that $\mu(f_{-b^2},0)=\mu(f,0)$, we get:

\REFCOR{with regular} For $m\ge 1$, $f(z)=(z+iz^2+O(z^3))-\overline z$ and $g(z)=(z+iz^2+O(z^3))^m-\overline z^m$, the four quantities $j(f),\mu(f), j(g), \mu(g)$ at $0$ are related as follows: $$\mu(g)=\mu(f)+m^2+m-2 ,\quad j(g)=\mu(g)-m^2=j(f)+m-1=\mu(f)+m-2.$$ In particular each of these numbers determines the other three. \end{corollary}

\section{Topological models for harmonic smooth critical points} Notice that due to the equality $\mu=j+m^2$, the integers $\mu$ and $m+j$ have the same parity. In this section we will reformulate Lyzzaik's topological model in terms of the parity of $m+j$. We provide a self-contained proof. We then show examples of harmonic maps with prescribed numerical invariants or with prescribed local models.

\subsection{Local models}

\REFTHM{Lyzzaik2} (topological model, inspired by Lyzzaik \cite{Ly-light}) Let $f$ be a harmonic map with $z_0$ a smooth critical point. Set $m=Ord_{z_0}(f)$. Let $j$ be the integer so that the critical value curve $\beta$ at $z_0$ has the order-pair $(j,j+1)$. Assume $j<\infty$.
In this case, define $n^\pm$ by the following table: \REFEQN{n-plus}\begin{array}{|c|| c|c|}\hline \left(\begin{matrix} {\rule[-2ex]{0ex}{4.5ex}{}} 2n^+-1\\ 2n^--1\end{matrix}\right) &{\rule[-2ex]{0ex}{4.5ex}{}} \text{ $\beta$ convex ($m\le j$ odd)} & \text{$\beta$ cusp ($m\le j$ even)} \\ \hline\hline m\text{ odd} & \left(\begin{matrix} m \\ m \end{matrix}\right) & \left(\begin{matrix} m+2 \\ m \end{matrix}\right) \quad \text{or}\quad { \left(\begin{matrix} m \\ m+2 \end{matrix}\right)} \\ \hline m\text{ even} & \left(\begin{matrix} m+1 \\ m-1 \end{matrix}\right) \quad \text{or}\quad { \left(\begin{matrix} m-1 \\ m+1 \end{matrix}\right)} & \left(\begin{matrix} m+1 \\ m+1 \end{matrix}\right) \\ \hline \end{array}\end{equation} Set $R_{n^+, n^-}(z)= R_{n^+, n^-}(re^{i\theta})=\left\{\begin{array}{ll} re^{i(2n^+-1)\theta} & 0\le \theta\le \pi\\ re^{-i(2n^- -1)\theta} & \pi\le \theta\le 2\pi\ . \end{array}\right.$ Then one of the choices of $R_{n^+, n^-}(z)$ (the choice is unique if $m+j$ is odd) is a local topological model of $f$, in the following sense: there are a neighborhood $U$ of $0$ and two orientation-preserving homeomorphisms $h_1: U\to \mathbb D, \ 0\mapsto 0$, and $h_2 : \mathbb C\to \mathbb C,\ 0\mapsto 0$, such that $$h_2\circ f\circ h_1^{-1}(z)=R_{n^+, n^-}(z).$$ Moreover $\# f^{-1}(z)=n^++n^-$ or $n^+ + n^- -2$ depending on whether $z$ is in one sector or the other of $f(U) {\smallsetminus } \beta$. \end{Thm}

Notice that only the parity, not the size, of $j$ plays a role, and that $n^+-n^-=0,1$ or $-1$.

{\noindent\em Proof.} By Lemma \ref{Smooth} we can assume $z_0=0$ and that $f$ takes the form $f(z)=p(z)+\overline{q(z)}$ with $$p(z)=z^m+ bz^{m+1} + O(z^{m+2}),\quad q(z)=z^m,\quad |b|= 1.$$ In this case $\psi(z_0)=1$.
From lemma \ref{curves} we know that $t\mapsto \beta(t)$ is locally injective and that the local shape of $\beta$ corresponds to that of $ u(t^j+it^{j+1})$. Therefore $\beta$ is a convex curve in one half plane if $j$ is odd, and is a cusp of the first kind, tangent to $\mathbb R$ with its tangent lines on the right, if $j$ is even. See Figure \ref{beta}. \begin{figure}\hspace{2cm} \scalebox{1} { \begin{pspicture}(0,-1.6780468)(11.592149,1.6780468) \definecolor{color215}{rgb}{0.0,0.2,1.0} \psline[linewidth=0.04cm,linecolor=red](5.470254,-0.040351562)(9.570254,-0.040351562) \psbezier[linewidth=0.04,linecolor=color215,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(7.610254,0.0)(8.770254,-0.040351562)(8.910254,0.29964843)(9.370254,1.0996485) \usefont{T1}{ptm}{m}{n} \rput(9.721709,1.2846484){$\beta^+=f(\gamma^+)$} \usefont{T1}{ptm}{m}{n} \rput(9.881709,-1.4553516){$\beta^-=f(\gamma^-)$} \psbezier[linewidth=0.04,linecolor=color215,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(7.610254,-0.06035156)(8.810254,-0.06035156)(8.890254,-0.34035155)(9.410254,-1.1803516) \usefont{T1}{ptm}{m}{n} \rput(6.5099807,-1.1953516){{\bf Case $j$ even}} \psline[linewidth=0.04cm,linecolor=red](0.25025392,0.0)(4.350254,0.0) \psbezier[linewidth=0.04,linecolor=color215,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.5302539,1.0754099)(1.2702539,-0.42035156)(3.510254,-0.25830027)(4.1702538,1.1596484) \usefont{T1}{ptm}{m}{n} \rput(4.061709,1.4846485){$\beta^+$} \usefont{T1}{ptm}{m}{n} \rput(0.701709,1.3446485){$\beta^-$} \usefont{T1}{ptm}{m}{n} \rput(1.2799804,-1.1553515){{\bf Case $j$ odd}} \end{pspicture} } \caption{The shape of the critical value curve}\label{beta} \end{figure} Write $f(z)=p(z)-q(z) +2 {\rm Re} q(z) = b(\kappa(z))^{m+1} + 2 {\rm Re} q(z)$ with $\kappa$ a holomorphic map tangent to the identity at $0$.
We may take $\kappa(z)$ as coordinate and transform $f$ into the following holomorphic+real normal form \REFEQN{Real-translation} f(z) =e^{i\theta}z^{m+1} + r(z)= F(z)+ r(z)\quad\text{with}\ F(z)=e^{i\theta}z^{m+1},\quad r(z)=2{\rm Re}(z^m+O(z^{m+1})). \end{equation}

Claim 0. In this form the critical value curve $\beta$ is either a convex curve in one half plane or a cusp of the first kind tangent to $\mathbb R$.

Proof. We have only changed the variable in the source plane, so this new normal form has the same critical value curve as before.

Claim 1. For a small enough round circle $C=\{|z|=s\}$ in the range, its preimage by $f$ contains a Jordan curve connected component bounding a neighborhood $U$ of $0$, with $f(U)\subset D_s$ (not necessarily equal) and $f:U\to D_s$ proper (see Figure \ref{U}). We give here a specific proof, to be compared with lemma \ref{isotopy}. Notice that the tangent of $\gamma$ at $0$ depends on the choice of $\theta$ in $b=e^{i\theta}$, whereas the tangent of $\beta$ at $0$ does not.
\begin{figure}\hspace{2cm} \scalebox{1} { \begin{pspicture}(0,-2.2791991)(10.141016,2.299199) \psline[linewidth=0.04cm,linecolor=red](0.26101562,-1.1991992)(3.7010157,0.9208008) \psline[linewidth=0.04cm,linecolor=red](1.9010156,1.8408008)(1.9210156,-2.2591991) \psline[linewidth=0.04cm,linecolor=red](0.12101562,0.84080076)(3.7410157,-1.1991992) \psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(2.5210156,1.8208008)(2.5210156,1.0208008)(1.3810157,-1.5591992)(0.9810156,-1.7791992) \psline[linewidth=0.04cm,linecolor=red](6.0210156,-0.2791992)(10.121016,-0.2791992) \usefont{T1}{ptm}{m}{n} \rput(2.7824707,2.1058009){$\gamma^+$} \usefont{T1}{ptm}{m}{n} \rput(0.8624707,-2.0541992){$\gamma^-$} \psdots[dotsize=0.12](8.121016,-0.25919923) \pscircle[linewidth=0.04,dimen=outer](8.141016,-0.23919922){1.3} \psbezier[linewidth=0.04](1.4410156,1.3408008)(2.4029737,1.6139978)(3.8369179,0.49721578)(3.5810156,-0.47919923)(3.3251133,-1.4556142)(1.5886531,-1.8633064)(0.84101564,-1.1991992)(0.093378216,-0.53509206)(0.47905752,1.0676037)(1.4410156,1.3408008) \usefont{T1}{ptm}{m}{n} \rput(0.7924707,-0.05419922){$U$} \usefont{T1}{ptm}{m}{n} \rput(9.662471,-0.8341992){$C$} \psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.2210155,0.5608008)(4.2610154,0.6808008)(5.4410157,1.0008007)(6.0210156,0.42080078) \usefont{T1}{ptm}{m}{n} \rput(5.102471,1.0658008){$f$} \end{pspicture} } \caption{The domain $U$ and $F^{-1}(\mathbb R)$}\label{U} \end{figure}

Proof. By the assumption $j<\infty$, the point $0$ is isolated in $f^{-1}(0)$. So there is $r>0$ such that $\{|z|\le r\}$ is contained in the domain of definition $\Omega$ of $f$ and $0\notin f(\{|z|=r\})$. There is therefore a small round open disc $D$ centred at $0$ in the range such that $D\cap f(\{|z|=r\})=\emptyset$. Let $W$ be an open connected subset of $D$ containing $0$. As $f$ is continuous, $f^{-1}(W)$ is open in $\Omega$.
Let $V$ be the connected component of $f^{-1}(W)$ containing $0$. Then $V$ is an open neighborhood of $0$ with $V\subset \{|z|<r\}\subset\subset \Omega$. We now claim that $f|_V: V\to W$ is proper. Proof. Let $V\ni z_n\to z\in \partial V$. We need to show $f(z_n)\to \partial W$. As $z\in \partial V\subset \Omega$, the map $f$ is defined and continuous at $z$. It follows that $W\ni f(z_n)\to f(z)\in \overline W=W\sqcup \partial W$. If $f(z)\in W$, then by continuity $f$ maps a small disc neighborhood $B$ of $z$ into $W$; consequently $$B\cup V \text{ is } \left\{\begin{array}{l} \text{connected,}\\ \text{strictly larger than $V$, and }\\ \text{a subset of $ f^{-1}(W)$.} \end{array}\right.$$ This contradicts the choice of $V$ as a connected component of $f^{-1}(W)$ and ends the proof of the claim. We now choose $W$ to be a small enough disc such that $t\mapsto |\beta(t)|$ is strictly increasing (resp.\ decreasing) as long as $t>0$ (resp.\ $t<0$) and $\beta(t)\in W$, and consider the proper map $f:=f|_V: V\to W$. Fix now $C=\{|z|=s\}$ contained in $W$ in the range. The map $f$ is a local homeomorphism at every point of $f^{-1}C {\smallsetminus } \gamma$. Due to the local fold model at points of $\gamma^*$ we may conclude that $f^{-1}C$ is a 1-dimensional topological manifold, which is actually piecewise smooth. It is also compact by properness, so it has only finitely many components, each of which is a Jordan curve. Let $I$ be an island, i.e. an open Jordan domain in $V$ bounded by a curve in $f^{-1}C$. We claim that $f(I)\subset D_s:=\{|z|<s\}$. Assume $f(I) {\smallsetminus } \overline D_s\ne \emptyset$. Then $|f|$ on the compact set $\overline I$ reaches its maximum at an interior point $x\in I$. The point $x$ cannot be outside $\gamma$, as $f$ is locally open outside $\gamma$. But if $x\in \gamma$ then $f(x)\in \beta$, and $|f|$ restricted to $\gamma$ cannot reach a local maximum since $|\beta(t)|$ is locally monotone. This is not possible. So $f(I)\subset \overline D_s$.
But if for some $x\in I$ we have $f(x)\in C=\partial D_s$, then $I$ contains a component (so a Jordan curve) of $f^{-1}C$. Choose a point $x'$ on this curve but not on $\gamma$. Then $f$ is a local homeomorphism on a small disc $B$ centred at $x'$ with $B\subset I$, and $f(B)$ contains points outside $\overline D_s$. This is not possible by the previous paragraph. So we may conclude that $f(I)\subset D_s$. We claim now that $0\in I$. Otherwise $0\not\in f(I)$ (recall $f^{-1}(0)=\{0\}$) and we may argue as above, using the minimum of $|f|$ on $\overline I$, to reach a contradiction. It follows that $f^{-1}C$ has only one component in $V$, bounding a Jordan domain $U$ containing $0$ with $f(U)\subset D_s$. As $f$ maps the boundary into the boundary (not necessarily onto), $f: U\to D_s$ is proper.

Claim 2. The set $F^{-1}\mathbb R^*$ is a regular star of $2(m+1)$ radial branches from $0$ to $\infty$, and $F^{-1}\mathbb R\cap U$ is connected (see Figure \ref{U}).

Proof. Otherwise there is a segment $L\subset F^{-1}\mathbb R^*$ connecting two boundary points of $U$. As $f(s)=F(s)+r(s)$ with $r$ real, $f(L)\subset \mathbb R$. But $f^{-1}(0)=0$. So $f(L)$ is a segment in $\mathbb R^*$ by the Intermediate Value Theorem. Now as $f$ has no turning points (critical points) in $L$, it maps $L$ bijectively onto a real segment with constant sign, and the two ends are in $f(\partial U)\subset C$. This contradicts the fact that $C$ is a round circle.

Claim 3. Each sector $S$ of $U {\smallsetminus } F^{-1}\mathbb R$ is mapped by both $f$ and $F$ into the same half plane of $\mathbb C {\smallsetminus } \mathbb R$. Each branch $\ell$ of $F^{-1}\mathbb R^*$ is mapped by $f$ to a real segment with constant sign (but not necessarily equal to the sign of $F(\ell)$). Two consecutive branches on the same side of $\gamma$ have images under $f$ with opposite signs, and two consecutive branches separated by $\gamma$ have images under $f$ with the same sign. Proof.
As $f(s)=F(s)+r(s)$ with $r$ real, and $F(S)$ is contained in either the upper or the lower half plane, the same is true for $f(S)$, with the same sign of the imaginary part. The fact that $F(\ell)\subset \mathbb R^*$ implies $f(\ell)\subset \mathbb R$. But $f^{-1}(0)=0$. So $f(\ell)$ is a segment in $\mathbb R^*$ by the Intermediate Value Theorem. Now as $f$ has no turning points (critical points) in $\ell$, it maps $\ell$ bijectively onto a real segment with constant sign. We now prove by contradiction that two consecutive branches on the same side of $\gamma$ have images under $f$ with opposite signs. Let $W$ be a small closed sector neighborhood of $0$ bounded by two consecutive branches on the same side of $\gamma$ and a small arc $\alpha$. Assume $f$ maps the two branches to segments of the same sign, say into $[0,\varepsilon]$. As $W\cap f^{-1}\mathbb R^*=W\cap F^{-1}\mathbb R^*=\emptyset$, the connected set $f(W)$ is disjoint from $\mathbb R^-$. And $f(\alpha)$ is disjoint from $0$. Since $f(W)$ is not entirely contained in $\mathbb R^+$, one can find $v\in \partial f(W) {\smallsetminus } \Big(f(\alpha)\cup \mathbb R^+\cup \{0\}\Big)$. So $v=f(w)$ for some interior point $w$ of $W$. This contradicts the fact that $f$ is a local homeomorphism at $w$. We may prove similarly that two consecutive branches of $F^{-1}\mathbb R^*$ separated by $\gamma$ have images under $f$ with the same sign, using the fact that $f$ realises a fold along $\gamma^*$.

Claim 4. Let $S$ be a sector of $U {\smallsetminus } F^{-1}\mathbb R$ disjoint from $\gamma$. Then $f$ maps $S$ homeomorphically onto one of the half discs $\{|z|<s, {\rm Im} z >0\}, \{|z|<s, {\rm Im} z <0\}$, and in $S$ the number of branches of $f^{-1}(f(\gamma))$ is equal to the number of branches of $F^{-1}(F(\gamma))$ (see Figure \ref{co-critical}).

Proof. The previous claim says that $f$ is a local homeomorphism on $S$, and that $f(S)$ is contained in one of the half discs, say $\{|z|<s, {\rm Im} z >0\}$.
We also know that $f:S\to \{|z|<s, {\rm Im} z >0\}$ is proper, so it is in fact a covering. As $S$ is simply connected, we conclude that $f$ on $S$ is a homeomorphism onto its image. We also need to prove that $f(S)$ is one of the half discs bounded by $C\cup \mathbb R$. For $t\in ]0,\varepsilon[$, set $\gamma^\pm(t)=\gamma(\pm t)$. Consider $\delta^\pm(t)=F(\gamma^\pm( t))$ and $\beta^\pm(t)=f(\gamma^\pm(t))$. By \Ref{Real-translation} we know that $\delta^-(t)$ and $\beta^-(t)$ are in the same half plane of $ \mathbb C {\smallsetminus } \mathbb R$, and likewise for the pair $\delta^+(t)$ and $\beta^+(t)$. Comparing with the shape of $\beta$ relative to $\mathbb R$, we see that $\delta^\pm(t)$ are in the same half plane if $\beta$ is convex and in opposite half planes otherwise.

Claim 5. The map $f$ sends each sector $S$ of the two sectors of $U {\smallsetminus } F^{-1}\mathbb R$ intersecting $\gamma$ onto a small sector $\chi$ with zero angle at $0$ of $ \mathbb C {\smallsetminus } (C\cup\beta\cup \mathbb R)$, and $S\cap f^{-1}(\beta)\subset \gamma$ (see Figure \ref{folding-side}).
\begin{figure}\hspace{2cm} \scalebox{1} { \begin{pspicture}(0,-4.859199)(11.041895,4.879199) \definecolor{color2287}{rgb}{0.0,0.2,1.0} \definecolor{color2854b}{rgb}{0.8,0.8,0.8} \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](0.0,2.460801)(4.92,2.4408007) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](2.36,4.4408007)(2.38,0.3408008) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](0.66,0.9208008)(4.18,4.0608006) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](0.82,4.0408006)(3.98,0.8008008) \psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(2.96,4.4408007)(2.96,3.6408007)(1.82,1.0608008)(1.42,0.84080076) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](4.45,2.0108008)(3.18,1.8608007)(1.62,2.6808007)(0.75,3.5108008) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](0.55,2.9308007)(1.35,2.9308007)(3.94,1.8008008)(4.15,1.3908008) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](5.56,2.4408007)(9.66,2.4408007) \psbezier[linewidth=0.04,linecolor=color2287,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.82,3.4976285)(6.56,2.0208008)(8.8,2.1808007)(9.46,3.5808008) \usefont{T1}{ptm}{m}{n} \rput(9.831455,3.7458007){$\beta^+$} \usefont{T1}{ptm}{m}{n} \rput(5.8114552,3.7058008){$\beta^-$} \usefont{T1}{ptm}{m}{n} \rput(3.201455,4.6858006){$\gamma^+$} \usefont{T1}{ptm}{m}{n} \rput(1.4814551,0.64580077){$\gamma^-$} \usefont{T1}{ptm}{m}{n} \rput(7.5497265,1.4658008){{\bf Case $m$ odd and $j$ odd}} \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](1.32,-3.7791991)(4.84,-1.5991992) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](2.96,-0.7391992)(2.98,-4.839199) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](1.16,-1.6991992)(4.8,-3.7791991) \psbezier[linewidth=0.04,arrowsize=0.05291667cm 
2.0,arrowlength=1.4,arrowinset=0.4]{<-}(4.62,-0.9591992)(4.42,-1.8391992)(2.48,-3.3191993)(1.18,-3.4591992) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](3.46,-4.699199)(3.1,-3.7791991)(3.2,-4.219199)(3.0,-2.799199) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](2.96,-2.7591991)(3.8380961,-3.381902)(3.913438,-3.3451452)(4.4,-4.279199) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](6.02,-2.9591992)(10.12,-2.9591992) \psbezier[linewidth=0.04,linecolor=color2287,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.0,-2.039199)(7.38,-3.3191993)(8.96,-3.2591991)(10.02,-1.8791993) \usefont{T1}{ptm}{m}{n} \rput(10.291455,-1.6341993){$\beta^+$} \usefont{T1}{ptm}{m}{n} \rput(5.911455,-1.8141992){$\beta^-$} \usefont{T1}{ptm}{m}{n} \rput(4.741455,-0.6741992){$\gamma^+$} \usefont{T1}{ptm}{m}{n} \rput(1.2214551,-3.0741992){$\gamma^-$} \usefont{T1}{ptm}{m}{n} \rput(8.039726,-4.0141993){{\bf Case $m$ even and $j$ odd}} \psbezier[linewidth=0.04](2.36,4.180801)(3.36,4.200801)(3.34,4.320801)(3.82,3.8208008) \psline[linewidth=0.04cm](2.64,4.140801)(3.36,3.4408007) \psline[linewidth=0.04cm](2.4,3.9408007)(3.12,3.2408009) \psline[linewidth=0.04cm](2.42,3.4408007)(2.88,3.0008008) \psline[linewidth=0.04cm](2.42,3.0608008)(2.66,2.8408008) \psline[linewidth=0.04cm](3.14,4.160801)(3.64,3.6608007) \psbezier[linewidth=0.04](9.28,3.3008008)(9.44,3.1408007)(9.66,2.7408009)(9.46,2.460801) \psline[linewidth=0.04cm](9.54,2.8008008)(9.24,2.4808009) \psline[linewidth=0.04cm](9.5,3.0008008)(8.92,2.4808009) \psline[linewidth=0.04cm](9.38,3.1808007)(8.58,2.460801) \psline[linewidth=0.04cm](8.56,2.6808007)(8.32,2.460801) \pscircle[linewidth=0.04,dimen=outer,fillstyle=solid,fillcolor=color2854b](1.72,1.2208008){0.2} \psbezier[linewidth=0.04,fillstyle=solid,fillcolor=color2854b](6.02,3.1408007)(5.96,2.8008008)(6.08,2.700801)(6.26,2.9008007) \usefont{T1}{ptm}{m}{n} \rput(9.991455,3.0058007){$\chi$} \end{pspicture} }
\caption{The folding sides for harmonic maps $f$}\label{folding-side} \end{figure} This is due to the harmonicity: $f$ folds a small neighborhood of $z\in \gamma^*$ onto a half neighborhood of $f(z)$ on the concave side of $\beta$ (see Figure \ref{folding-side}). As $\chi$ does not contain the other branch of $\beta$, the preimage $S$ contains no co-critical points other than $\gamma$. Claim 6. The critical curve $\gamma$ separates the branches of $F^{-1}\mathbb R^*$ into two parts whose numbers depend on the shape of $\beta$, by the following table: $$\begin{array}{|c||c|c|} \hline { \left(\begin{matrix} \#\text{right {\rule[-2ex]{0ex}{4.5ex}{}} branches of $F^{-1}\mathbb R^*$} \\ \#\text{left branches of $F^{-1}\mathbb R^*$}\end{matrix}\right)} & \text{$\beta$ convex} &\text{$\beta$ cusp} \\ \hline\hline m\text{ odd} & \text{equal number} & \text{differ by 2} \\ \hline m\text{ even} & \text{differ by 2} &\text{equal number} \\ \hline \end{array} $$ Proof. For $t\in ]0,\varepsilon[$, we have $\gamma^\pm(t)=\gamma(\pm t)$, $\delta^\pm(t)=F(\gamma^\pm(t))$ and $\beta^\pm(t)=f(\gamma^\pm(t))$. We need to know the relative positions between $\delta^\pm(t)$ and $\mathbb R$ in order to get the relative positions between $\gamma\subset F^{-1}(\delta^\pm(t))$ and $F^{-1}\mathbb R^*$. We know that $\delta^\pm(t)$ are in the same half plane if $\beta$ is convex and in opposite half planes otherwise. On the other hand, the two curves $\gamma^\pm(t), t\in [0,\varepsilon[$ make an angle $\pi$ at $\gamma(0)$. As $F(z)= e^{i\theta}z^{m+1}$, $$\text{angle}_0( \delta^\pm(t))= (m+1)\cdot \text{angle}_0(\gamma^\pm(t))=(m+1)\pi\!\!\!\mod\!2\pi =\left\{\begin{array}{ll} 0 & \text{if $m$ is odd}\\ \pi & \text{if $m$ is even.}\end{array}\right.$$ Pulling back these shapes by $F(z)=e^{i\theta} z^{m+1}$, we get the claim. See Figure \ref{cases}. Claim 7.
In any case, the number of sectors in $U {\smallsetminus } f^{-1}\beta$ is odd on each side of $\gamma$. Denoting them by $2n^\pm-1$, with $+$ for the right side of $\gamma$ and $-$ for the left side, one can relate them to the number of branches of $F^{-1}\mathbb R^*$ separated by $\gamma$ by: $$\begin{array}{|r|c|c|} \hline &{\rule[-2ex]{0ex}{4.5ex}{}} \text{$\beta$ convex, $j$ odd} &\text{$\beta$ cusp, $j$ even}\\ \hline \left(\begin{matrix} {\rule[-2ex]{0ex}{4.5ex}{}} 2n^+-1\\ 2n^--1\end{matrix}\right) = & \left(\begin{matrix} \#\text{right branches of $F^{-1}\mathbb R^*$}-1 {\rule[-2ex]{0ex}{4.5ex}{}} \\ \#\text{left branches of $F^{-1}\mathbb R^*$}-1\end{matrix}\right) & \left(\begin{matrix} \#\text{right {\rule[-2ex]{0ex}{4.5ex}{}} branches of $F^{-1}\mathbb R^*$} \\ \#\text{left branches of $F^{-1}\mathbb R^*$}\end{matrix}\right) \\ \hline \end{array} $$ Proof. The shape of $\beta$ is determined by the parity of $j$ in its order-pair $(j,j+1)$: if $j$ is odd then $\beta$ is convex, and if $j$ is even then $\beta$ is a cusp. In what follows only the shape of $\beta$ is relevant, not the value of $j$. It follows that if $m+j$ is odd, $F^{-1}\mathbb R$ contains the tangent line of $\gamma$ at $0$. See Figure \ref{co-critical}.
Now as the total number of branches of $F^{-1}\mathbb R^* $ is $2(m+1)$, we get, by Claim 6, $$\begin{array}{|c|c|c|} \hline { \left(\begin{matrix} \#\text{right {\rule[-2ex]{0ex}{4.5ex}{}} branches of $F^{-1}\mathbb R^*$} \\ \#\text{left branches of $F^{-1}\mathbb R^*$}\end{matrix}\right)} & \text{$\beta$ convex, $j$ odd} &\text{$\beta$ cusp, $j$ even} \\ \hline m\text{ odd} & { \left(\begin{matrix} m+1{\rule[-2ex]{0ex}{4.5ex}{}} \\ m+1\end{matrix}\right)} & \left(\begin{matrix} m+2 \\ m \end{matrix}\right) \quad \text{or}\quad { \left(\begin{matrix} m \\ m+2 \end{matrix}\right)} \\ \hline m\text{ even} & \left(\begin{matrix} m+2{\rule[-2ex]{0ex}{4.5ex}{}} \\ m \end{matrix}\right) \quad \text{or}\quad { \left(\begin{matrix} m \\ m+2 \end{matrix}\right)} & { \left(\begin{matrix} m+1 \\ m+1\end{matrix}\right)} \\ \hline \end{array} $$ We get \Ref{n-plus}. Claim 8. Now we forget about $F^{-1}\mathbb R$ and consider only the sectors in $U$ partitioned by $f^{-1}\beta$. The same arguments as above show that $f$ maps each sector homeomorphically onto one of the two sectors in $D_s {\smallsetminus } \beta$ in the range. To construct coordinate changes $h_1, h_2$ from $f$ to $R_{n^+, n^-}$, one proceeds as follows: first define an orientation-preserving homeomorphism $h_2: \overline D_s\to \overline{\mathbb D}$ mapping $0$ to $0$ and $\beta\cap \overline D_s$ onto $[-1,1]$. Note that $R_{n^+, n^-}^{-1}[-1,1]$ partitions $\overline{\mathbb D}$ into the same number of sectors as the partition of $U$ by $f^{-1}\beta$. It remains to construct $h_1$ sector by sector so that $R_{n^+, n^-}\circ h_1=h_2\circ f$ on each sector and $h_1$ is an orientation-preserving mapping from $\gamma\cap \overline D_s$ onto $[-1,1]$. We can see that $h_1$ is a homeomorphism from $U$ to $\mathbb D$.
\hfill{q.e.d.} Notice that the local topological degree of $f$ can be expressed in the following table: $$\begin{array}{|c||c|c|c|c|}\hline &\multicolumn{2}{|c|}{\beta(t) \text{ convex, $m\le j$ odd} }&\multicolumn{2}{|c|}{ \beta(t) \text{ cusp, $m\le j$ even} }\\ \hline &m \text{ odd}&m \text{ even} &m \text{ odd} &m \text{ even} \\ \hline f_{z=0}\sim \left(\!\!\begin{array}{c} {\rule[-2ex]{0ex}{4.5ex}{}} z^{2n^+ -1}\\\overline z^{2n^- -1} \end{array}\!\!\right) & \left(\!\!\begin{array}{c}z^{ m}\\ \overline z^{m} \end{array}\!\!\right) & \left(\!\!\begin{array}{c}z^{ {m+1} }\\ \overline z^{m-1} \end{array}\!\!\right) \text{ or } \left(\!\!\begin{array}{c}z^{ m-1}\\ \overline z^{{m+1} } \end{array}\!\!\right) {\rule[-2ex]{0ex}{4.5ex}{}} & \left(\!\!\begin{array}{c}z^{ m+2}\\ \overline z^{m} \end{array}\!\!\right) \text{ or } \left(\!\!\begin{array}{c}z^{ m}\\ \overline z^{m+2} \end{array}\!\!\right) & \left(\!\!\begin{array}{c}z^{ m+1}\\ \overline z^{m+1 } \end{array}\!\!\right)\\ \hline {\rule[-2ex]{0ex}{4.5ex}{}} \deg(f,0)= & 0 & \pm 1 & \pm 1 & 0\\ \hline \# f^{-1}(z) = & m+1, m-1 & m+1{\rule[-2ex]{0ex}{4.5ex}{}}, m-1 & m+2,m& m+2,m \\ \hline \mu(f,0)= &\multicolumn{4}{c|}{j+m^2}\\ \hline \end{array}$$ \REFCOR{generic again} In the generic case $f(z)=(z+bz^2+O(z^3))^m-\overline z^m$ with $(-b^2)^m\ne 1$, we have $$\begin{array}{|c||c|c|}\hline &\beta(t) \text{ convex, $m=j$ odd} &\beta(t) \text{ cusp, $m=j$ even}\\ \hline f_{z=0}\sim \left(\!\!\begin{array}{c} {\rule[-2ex]{0ex}{4.5ex}{}} z^{2n^+ -1}\\\overline z^{2n^- -1} \end{array}\!\!\right) & \left(\!\!\begin{array}{c}z^{ m}\\ \overline z^{m} \end{array}\!\!\right) & \left(\!\!\begin{array}{c}z^{ m+1}\\ \overline z^{m+1 } \end{array}\!\!\right)\\ \hline {\rule[-2ex]{0ex}{4.5ex}{}} \deg (f,0)= & 0 & 0\\ \hline \# f^{-1}(z) = & m+1, m-1 & m+2,m \\ \hline
\mu(f,0)= &\multicolumn{2}{c|}{m+m^2} \\ \hline \end{array}$$ \end{corollary} \begin{figure}\hspace{2cm} \scalebox{1} { \begin{pspicture}(0,-10.49)(11.68,10.51) \definecolor{color1623}{rgb}{0.0,0.2,1.0} \psline[linewidth=0.04cm,linecolor=red](0.72,7.01)(4.28,9.23) \psline[linewidth=0.04cm,linecolor=red](2.36,10.05)(2.38,5.95) \psline[linewidth=0.04cm,linecolor=red](0.7,9.21)(4.1,6.85) \psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(2.98,10.03)(2.98,9.23)(1.84,6.65)(1.44,6.43) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](3.7,6.47)(3.18,6.69)(1.66,8.87)(1.5,9.87) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](0.44,7.53)(1.68,8.17)(3.9,8.09)(4.48,7.87) \psline[linewidth=0.04cm,linecolor=red](5.2,7.99)(9.3,7.99) \psbezier[linewidth=0.04,linecolor=color1623,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(7.34,8.03)(8.5,7.99)(8.64,8.33)(9.1,9.13) \psbezier[linewidth=0.04](7.355484,7.983008)(7.34,8.883008)(6.984516,8.95)(6.82,9.05) \psbezier[linewidth=0.04](7.32,7.97)(7.26,7.29)(7.02,7.23)(6.68,6.93) \usefont{T1}{ptm}{m}{n} \rput(9.47,9.315){$\beta^+=f(\gamma^+)$} \usefont{T1}{ptm}{m}{n} \rput(9.63,6.575){$\beta^-=f(\gamma^-)$} \usefont{T1}{ptm}{m}{n} \rput(6.08,9.355){$\delta^+=F(\gamma^+)$} \usefont{T1}{ptm}{m}{n} \rput(6.2,6.575){$\delta^-=F(\gamma^-)$} \usefont{T1}{ptm}{m}{n} \rput(3.26,10.315){$\gamma^+$} \usefont{T1}{ptm}{m}{n} \rput(1.34,6.155){$\gamma^-$} \psbezier[linewidth=0.04,linecolor=color1623,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(7.34,7.97)(8.54,7.97)(8.62,7.69)(9.14,6.85) \usefont{T1}{ptm}{m}{n} \rput(7.46,10.275){{\bf Case $m$ even and $j$ even}} \psline[linewidth=0.04cm,linecolor=red](0.16,2.93)(5.08,2.91) \psline[linewidth=0.04cm,linecolor=red](2.52,4.91)(2.54,0.81) \psline[linewidth=0.04cm,linecolor=red](0.82,1.39)(4.34,4.53) \psline[linewidth=0.04cm,linecolor=red](0.98,4.51)(4.14,1.27)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(3.12,4.91)(3.12,4.11)(1.98,1.53)(1.58,1.31) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](2.2,1.09)(2.08,2.35)(2.9,3.85)(3.7,4.79) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](4.61,2.48)(3.34,2.33)(1.78,3.15)(0.91,3.98) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](0.71,3.4)(1.51,3.4)(4.1,2.27)(4.31,1.86) \psline[linewidth=0.04cm,linecolor=red](5.72,2.91)(9.82,2.91) \psbezier[linewidth=0.04,linecolor=color1623,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.98,3.9668276)(6.72,2.49)(8.96,2.65)(9.62,4.05) \psbezier[linewidth=0.04](7.78,2.9030082)(7.92,3.65)(7.44,3.83)(7.3,3.95) \psbezier[linewidth=0.04](7.8,2.97)(7.9,3.7238462)(8.1,3.686154)(8.44,3.95) \usefont{T1}{ptm}{m}{n} \rput(10.01,4.215){$\beta^+$} \usefont{T1}{ptm}{m}{n} \rput(5.99,4.175){$\beta^-$} \usefont{T1}{ptm}{m}{n} \rput(7.22,4.355){$\delta^+$} \usefont{T1}{ptm}{m}{n} \rput(8.66,4.375){$\delta^-$} \usefont{T1}{ptm}{m}{n} \rput(3.38,5.155){$\gamma^+$} \usefont{T1}{ptm}{m}{n} \rput(1.66,1.115){$\gamma^-$} \usefont{T1}{ptm}{m}{n} \rput(7.72,1.935){{\bf Case $m$ odd and $j$ odd}} \psline[linewidth=0.04cm](0.0,5.49)(11.54,5.51) \psline[linewidth=0.04cm](0.2,0.53)(11.64,0.51) \psline[linewidth=0.04cm,linecolor=red](0.24,-3.01)(5.16,-3.03) \psline[linewidth=0.04cm,linecolor=red](2.6,-1.03)(2.62,-5.13) \psline[linewidth=0.04cm,linecolor=red](0.9,-4.55)(4.42,-1.41) \psline[linewidth=0.04cm,linecolor=red](1.06,-1.43)(4.22,-4.67) \psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(2.22,-1.03)(2.64,-2.01)(2.84,-3.99)(2.08,-4.99) \psline[linewidth=0.04cm,linecolor=red](5.62,-3.03)(8.34,-3.03) \psbezier[linewidth=0.04,linecolor=color1623,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.3,-3.01)(7.84,-2.93)(7.8,-2.37)(8.16,-1.89) \psbezier[linewidth=0.04](6.3,-3.0369918)(7.44,-2.95)(7.56,-2.33)(7.68,-1.91) 
\psbezier[linewidth=0.04](6.3,-3.05)(7.18,-3.03)(7.64,-3.49)(7.86,-4.21) \usefont{T1}{ptm}{m}{n} \rput(8.53,-2.125){$\beta^+$} \usefont{T1}{ptm}{m}{n} \rput(8.63,-3.865){$\beta^-$} \usefont{T1}{ptm}{m}{n} \rput(7.74,-1.545){$\delta^+$} \usefont{T1}{ptm}{m}{n} \rput(7.72,-4.545){$\delta^-$} \usefont{T1}{ptm}{m}{n} \rput(2.32,-0.665){$\gamma^+$} \usefont{T1}{ptm}{m}{n} \rput(1.78,-5.285){$\gamma^-$} \psbezier[linewidth=0.04,linecolor=color1623,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(6.38,-3.05)(7.92,-3.13)(7.88,-3.69)(8.24,-4.17) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](0.64,-3.43)(1.62,-3.01)(3.6,-2.81)(4.6,-3.57) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](0.64,-2.65)(1.62,-3.07)(3.6,-3.27)(4.6,-2.51) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](3.0,-1.07)(2.58,-2.05)(2.38,-4.03)(3.14,-5.03) \usefont{T1}{ptm}{m}{n} \rput(8.09,-0.645){{\bf Case $m$ odd and $j$ even}} \psline[linewidth=0.04cm](0.3,-5.77)(11.66,-5.75) \psline[linewidth=0.04cm,linecolor=red](1.0,-9.41)(4.52,-7.23) \psline[linewidth=0.04cm,linecolor=red](2.64,-6.37)(2.66,-10.47) \psline[linewidth=0.04cm,linecolor=red](0.84,-7.33)(4.48,-9.41) \psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(4.3,-6.59)(4.1,-7.47)(2.16,-8.95)(0.86,-9.09) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](3.14,-10.33)(2.52,-9.41)(2.52,-7.45)(3.06,-6.25) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](0.86,-7.81)(2.2,-7.79)(3.78,-8.97)(4.22,-9.95) \psline[linewidth=0.04cm,linecolor=red](5.7,-8.59)(9.8,-8.59) \psbezier[linewidth=0.04,linecolor=color1623,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.68,-7.67)(7.06,-8.95)(8.64,-8.89)(9.7,-7.51) \usefont{T1}{ptm}{m}{n} \rput(9.99,-7.265){$\beta^+$} \usefont{T1}{ptm}{m}{n} \rput(5.61,-7.445){$\beta^-$} \usefont{T1}{ptm}{m}{n} \rput(9.24,-7.145){$\delta^+$} \usefont{T1}{ptm}{m}{n} \rput(6.48,-7.305){$\delta^-$} 
\usefont{T1}{ptm}{m}{n} \rput(4.44,-6.305){$\gamma^+$} \usefont{T1}{ptm}{m}{n} \rput(0.92,-8.705){$\gamma^-$} \psbezier[linewidth=0.04](6.26,-7.69)(7.0,-9.01)(8.9,-8.753513)(9.18,-7.55) \usefont{T1}{ptm}{m}{n} \rput(8.07,-9.465){{\bf Case $m$ even and $j$ odd}} \end{pspicture} } \caption{The left-hand figures are $F^{-1}(\mathbb R)$ (in red) and $F^{-1}(F(\gamma))$ (in black). The shape of $\beta^\pm$ is determined by the parity of $j$. The curves $\delta^\pm$ are in the same half planes as $\beta^\pm$ due to the fact that $F-f$ is real. The angle between $\delta^\pm$ is determined by the parity of $m$, as $F(z)=e^{i\theta}z^{m+1}$. }\label{cases}\end{figure} \begin{figure}\hspace{2cm} \scalebox{1} { \begin{pspicture}(0,-10.4892)(11.68,10.509199) \definecolor{color4115}{rgb}{0.0,0.2,1.0} \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](0.72,7.010801)(4.28,9.230801) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](2.36,10.0508)(2.38,5.950801) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](0.7,9.210801)(4.1,6.850801) \psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(2.98,10.030801)(2.98,9.230801)(1.84,6.6508007)(1.44,6.430801) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](3.7,6.470801)(3.18,6.6908007)(1.66,8.870801)(1.5,9.870801) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](0.44,7.530801)(1.68,8.170801)(3.9,8.090801)(4.48,7.870801) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](5.2,7.990801)(9.3,7.990801) \psbezier[linewidth=0.04,linecolor=color4115,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(7.34,8.030801)(8.5,7.990801)(8.64,8.330801)(9.1,9.130801) \usefont{T1}{ptm}{m}{n} \rput(9.451455,9.315801){$\beta^+=f(\gamma^+)$} \usefont{T1}{ptm}{m}{n} \rput(9.611455,6.575801){$\beta^-=f(\gamma^-)$} \usefont{T1}{ptm}{m}{n} \rput(3.241455,10.315801){$\gamma^+$}
\usefont{T1}{ptm}{m}{n} \rput(1.3214551,6.155801){$\gamma^-$} \psbezier[linewidth=0.04,linecolor=color4115,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(7.34,7.970801)(8.54,7.970801)(8.62,7.6908007)(9.14,6.850801) \usefont{T1}{ptm}{m}{n} \rput(7.4497266,10.275801){{\bf Case $m$ even and $j$ even}} \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](0.16,2.9308007)(5.08,2.9108007) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](2.52,4.910801)(2.54,0.8108008) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](0.82,1.3908008)(4.34,4.530801) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](0.98,4.510801)(4.14,1.2708008) \psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(3.12,4.910801)(3.12,4.1108007)(1.98,1.5308008)(1.58,1.3108008) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](4.61,2.4808009)(3.34,2.3308008)(1.78,3.1508007)(0.91,3.9808009) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](0.71,3.4008007)(1.51,3.4008007)(4.1,2.2708008)(4.31,1.8608007) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](5.72,2.9108007)(9.82,2.9108007) \psbezier[linewidth=0.04,linecolor=color4115,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.98,3.9676285)(6.72,2.4908009)(8.96,2.6508007)(9.62,4.050801) \usefont{T1}{ptm}{m}{n} \rput(9.991455,4.215801){$\beta^+$} \usefont{T1}{ptm}{m}{n} \rput(5.971455,4.175801){$\beta^-$} \usefont{T1}{ptm}{m}{n} \rput(3.361455,5.155801){$\gamma^+$} \usefont{T1}{ptm}{m}{n} \rput(1.641455,1.1158007){$\gamma^-$} \usefont{T1}{ptm}{m}{n} \rput(7.7097263,1.9358008){{\bf Case $m$ odd and $j$ odd}} \psline[linewidth=0.04cm](0.0,5.490801)(11.54,5.510801) \psline[linewidth=0.04cm](0.2,0.53080076)(11.64,0.5108008) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](0.24,-3.0091991)(5.16,-3.0291991) 
\psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](2.6,-1.0291992)(2.62,-5.129199) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](0.9,-4.549199)(4.42,-1.4091992) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](1.06,-1.4291992)(4.22,-4.669199) \psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(2.22,-1.0291992)(2.64,-2.0091991)(2.84,-3.9891992)(2.08,-4.989199) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](5.62,-3.0291991)(8.34,-3.0291991) \psbezier[linewidth=0.04,linecolor=color4115,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.3,-3.0091991)(7.84,-2.9291992)(7.8,-2.3691993)(8.16,-1.8891993) \usefont{T1}{ptm}{m}{n} \rput(8.511456,-2.1241992){$\beta^+$} \usefont{T1}{ptm}{m}{n} \rput(8.611455,-3.8641992){$\beta^-$} \usefont{T1}{ptm}{m}{n} \rput(2.301455,-0.66419923){$\gamma^+$} \usefont{T1}{ptm}{m}{n} \rput(1.761455,-5.284199){$\gamma^-$} \psbezier[linewidth=0.04,linecolor=color4115,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(6.38,-3.049199)(7.92,-3.1291993)(7.88,-3.6891992)(8.24,-4.169199) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](0.64,-3.4291992)(1.62,-3.0091991)(3.6,-2.8091993)(4.6,-3.5691993) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](0.64,-2.6491992)(1.62,-3.0691993)(3.6,-3.2691991)(4.6,-2.5091991) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](3.0,-1.0691992)(2.58,-2.049199)(2.38,-4.029199)(3.14,-5.029199) \usefont{T1}{ptm}{m}{n} \rput(8.079726,-0.6441992){{\bf Case $m$ odd and $j$ even}} \psline[linewidth=0.04cm](0.3,-5.7691994)(11.66,-5.7491994) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](1.0,-9.409199)(4.52,-7.2291994) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](2.64,-6.3691993)(2.66,-10.469199) 
\psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](0.84,-7.3291993)(4.48,-9.409199) \psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(4.3,-6.589199)(4.1,-7.469199)(2.16,-8.9492)(0.86,-9.089199) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](3.14,-10.329199)(2.78,-9.409199)(2.88,-9.849199)(2.68,-8.429199) \psbezier[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm](2.64,-8.389199)(3.5180962,-9.011902)(3.593438,-8.975145)(4.08,-9.909199) \psline[linewidth=0.04cm,linecolor=red,linestyle=dotted,dotsep=0.16cm](5.7,-8.589199)(9.8,-8.589199) \psbezier[linewidth=0.04,linecolor=color4115,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.68,-7.669199)(7.06,-8.9492)(8.64,-8.889199)(9.7,-7.509199) \usefont{T1}{ptm}{m}{n} \rput(9.971455,-7.2641993){$\beta^+$} \usefont{T1}{ptm}{m}{n} \rput(5.591455,-7.444199){$\beta^-$} \usefont{T1}{ptm}{m}{n} \rput(4.421455,-6.304199){$\gamma^+$} \usefont{T1}{ptm}{m}{n} \rput(0.9014551,-8.704199){$\gamma^-$} \usefont{T1}{ptm}{m}{n} \rput(8.059727,-9.464199){{\bf Case $m$ even and $j$ odd}} \end{pspicture} } \caption{The cocritical set $f^{-1}(f(\gamma))=f^{-1}(\beta)$. We have kept the red lines for reference. In each sector $S$ bounded by red lines, the number of branches of $f^{-1}(f(\gamma))$ is equal to that of $F^{-1}(F(\gamma))$ (refer to Figure \ref{cases}), except in the two sectors containing $\gamma^\pm$, where $f^{-1}(f(\gamma))\subset \gamma$. }\label{co-critical} \end{figure} \subsection{Prescribing numerical invariants or local models for harmonic mappings} Now we are ready to prove Corollary \ref{existence}. Due to the equality $j+m^2=\mu$, we only need to prove that, given two integers $\mu,m$ satisfying $m\ge 1$ and $\mu\ge m^2+m$, there exist harmonic maps of the form $g(z)=p(z)^m-\bar z^m$ such that $\mu(g,0)=\mu$. Assume that $p(z)= z+ bz^2+ o(z^2)$ with $|b|=1$.
In the case $\mu=m^2+m$, one can take $p$ such that $(-b^2)^m\ne 1$ and apply Lemma \ref{relation}. Assume now $\mu> m^2+m$, in particular $\mu> 2$. Choose $b=i$. Then $-b^2=1$ is an $m$-th root of unity for every $m$, and $f_{-b^2}(z)=f_1(z)=p(z)-\bar z$. Choose $p$ such that $\mu(f_1,0)-2=\mu-(m^2+m)$ and apply Lemma \ref{relation}. Now given a pair of positive integers $n^\pm$ with $n^+=n^-$, resp. with $|n^+-n^-|=1$, one can use the table \Ref{n-plus} to find a suitable pair $m$ and $j$, or the table \Ref{N-plus} to find a suitable pair $m$ and $\mu$, and proceed as above to find a harmonic map realising the model. \hfill{q.e.d.} Here are some concrete examples realizing a given pair $(\mu,m)$ with $\infty\ge \mu\ge m^2+m$. If $\infty>\mu=m^2+m$, take any $p(z)=z+bz^2$ with $|b|=1$ and $(-b^2)^m\ne 1$. Then $\mu(p(z)^m-\bar z^m,0)=\mu$. If $\infty>\mu> m^2+m$, set $\nu=\mu-(m^2+m)+2=\mu-(m-1)(m+2)$ and $p_\nu(z)=z\sum_{s= 0}^{\nu-2} (iz)^s + a z^\nu$ with ${\rm Re}\, a\ne 0,\pm 1$ and $g_\nu(z)=(p_\nu(z))^m-\overline z^m$. Then $\mu(g_\nu,0)=\mu$. If $\mu=\infty$, set $p(z)=-\dfrac{z}{1-z}$ and $g(z)=p(z)^m-\overline z^m$. We have $\overline p\circ p(z)=p\circ p(z)=z$, and $$\mu(g,0)= \sum_{\xi^m=1,\eta^m=1, \xi,\eta\ne 1}Ord_0 \Big(\eta\, \overline p(\xi \, p(z))-z\Big)+ Ord_0 \Big( \overline p(p(z))-z\Big)= \infty.$$ One can also check by hand that $j(g,0)=\infty$.
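As an aside, the involution $\overline p\circ p=p\circ p=z$ used in the last example can be verified in one line (recall that $p(z)=-z/(1-z)$ has real coefficients, so $\overline p=p$):

```latex
p(p(z)) \;=\; -\frac{p(z)}{1-p(z)}
        \;=\; -\frac{-\frac{z}{1-z}}{1+\frac{z}{1-z}}
        \;=\; \frac{z}{1-z}\cdot\frac{1-z}{(1-z)+z}
        \;=\; z.
```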
\section{Stability of Dual Solution} \label{appn:dual} We first note that $D(\alpha)$ is the dual function of a convex problem and has a unique minimizer $\alpha^*$. Hence, the function ${V(\alpha) = D(\alpha) - D(\alpha^*)}$ is non-negative and equals zero only at $\alpha = \alpha^*$. Differentiating $V(\alpha)$ with respect to time, we get \[\dot{V}(\alpha) = \frac{\partial V}{\partial \alpha}\dot{\alpha} = -\gamma \Big(\sum_{i}{h_i} - B \Big)^2 \le 0,\] with equality only at the optimum. Therefore, $V(\cdot)$ is a Lyapunov function, and the system state will converge to the optimum starting from any initial condition. \section{Stability of Primal Solution} \label{appn:primal} We first note that since $W(\mathbf{h})$ is a strictly concave function, it has a unique maximizer $\mathbf{h}^*$. Moreover, ${V(\mathbf{h}) = W(\mathbf{h}^*) - W(\mathbf{h})}$ is a non-negative function that equals zero only at $\mathbf{h} = \mathbf{h}^*$. Differentiating $V(\cdot)$ with respect to time, we obtain \begin{align*} \dot{V}(\mathbf{h}) &= \sum_{i}{\frac{\partial V}{\partial h_i}\dot{h_i}} \\ &= -\sum_{i}{\left( U'_i(h_i) - C'(\sum_{i}{h_i} - B) \right) \dot{h_i}}. \end{align*} For $\dot{h_i}$ we have \[\dot{h_i} = \frac{\partial h_i}{\partial t_i}\dot{t_i}.\] For non-reset and reset TTL caches we have \[\frac{\partial h_i}{\partial t_i} = \frac{\lambda_i}{(1+\lambda_i t_i)^2} \qquad\text{ and }\qquad \frac{\partial h_i}{\partial t_i} = \lambda_i e^{-\lambda_i t_i},\] respectively, and hence $\partial h_i / \partial t_i > 0$. From the controller for $t_i$ we have \[\dot{t_i} = k_i \left( U'_i(h_i) - C'(\sum_{i}{h_i} - B) \right).\] Hence, we get \[\dot{V}(\mathbf{h}) = -\sum_{i}{k_i \frac{\partial h_i}{\partial t_i} \left( U'_i(h_i) - C'(\sum_{i}{h_i} - B) \right)^2} \le 0,\] with equality only at $\mathbf{h} = \mathbf{h}^*$. Therefore, $V(\cdot)$ is a Lyapunov function\footnote{A description of Lyapunov functions and their applications can be found in~\cite{srikant13}.}, and the system state will converge to $\mathbf{h}^*$ starting from any initial condition.
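The positivity of $\partial h_i/\partial t_i$ for both TTL variants is easy to verify numerically. A minimal sketch (the closed forms $h_i=\lambda_i t_i/(1+\lambda_i t_i)$ and $h_i=1-e^{-\lambda_i t_i}$ are obtained by integrating the derivatives above with $h_i(0)=0$; the rate and timer values are illustrative):

```python
import math

def hit_prob_non_reset(lam, t):
    # Non-reset TTL cache: h = lam*t / (1 + lam*t)
    return lam * t / (1 + lam * t)

def hit_prob_reset(lam, t):
    # Reset TTL cache: h = 1 - exp(-lam*t)
    return 1 - math.exp(-lam * t)

def d_hit_non_reset(lam, t):
    # dh/dt = lam / (1 + lam*t)^2  (always positive)
    return lam / (1 + lam * t) ** 2

def d_hit_reset(lam, t):
    # dh/dt = lam * exp(-lam*t)  (always positive)
    return lam * math.exp(-lam * t)
```

A central finite difference confirms that the analytic derivatives match the hit-probability curves, and that both derivatives are strictly positive for finite timers.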
\section{Stability of Primal-Dual Solution} \label{appn:primal_dual} As discussed in Section~\ref{sec:opt}, the Lagrangian function for the optimization problem~\eqref{eq:opt} is expressed as \[\mathcal{L}(\mathbf{h}, \alpha) = \sum_{i}{U_i(h_i)} - \alpha(\sum_{i}{h_i} - B).\] Note that $\mathcal{L}(\mathbf{h}, \alpha)$ is concave in $\mathbf{h}$ and convex in $\alpha$, and hence the first-order conditions for optimality of $\mathbf{h}^*$ and $\alpha^*$ imply \begin{align*} \mathcal{L}(\mathbf{h}^*, \alpha) &\le \mathcal{L}(\mathbf{h}, \alpha) + \sum_{i}{\frac{\partial \mathcal{L}}{\partial h_i}(h_i^* - h_i)}, \\ \mathcal{L}(\mathbf{h}, \alpha^*) &\ge \mathcal{L}(\mathbf{h}, \alpha) + \frac{\partial \mathcal{L}}{\partial \alpha}(\alpha^* - \alpha) . \end{align*} Assume that the hit probability of a file can be expressed by $f(\cdot)$ as a function of the corresponding timer value $t_i$, \emph{i.e.}\ ${h_i = f(t_i)}$. The temporal derivative of the hit probability can therefore be expressed as \[\dot{h_i} = f'(t_i) \dot{t_i},\] or equivalently \[\dot{h_i} = f'(f^{-1}(h_i)) \dot{t_i},\] where $f^{-1}(\cdot)$ denotes the inverse of the function $f(\cdot)$. For notational brevity we define ${g(h_i) = f'(f^{-1}(h_i))}$. Note that, as discussed in Appendix~\ref{appn:primal}, $f(\cdot)$ is a strictly increasing function, and hence ${g(h_i) > 0}$.
In the remainder, we show that $V(\mathbf{h}, \alpha)$ defined below is a Lyapunov function for the primal-dual algorithm: \[V(\mathbf{h}, \alpha) = \sum_{i}{\int_{h_i^*}^{h_i}{\frac{x - h_i^*}{k_i g(x)}\mathrm{d} x}} + \frac{1}{2\gamma}(\alpha - \alpha^*)^2.\] Differentiating the above function with respect to time, we obtain \[\dot{V}(\mathbf{h}, \alpha) = \sum_{i}{\frac{h_i - h_i^*}{k_i g(h_i)}\dot{h_i}} + \frac{\alpha - \alpha^*}{\gamma}\dot{\alpha}.\] Based on the controllers defined for $t_i$ and $\alpha$ we have \[\dot{h_i} = g(h_i) \dot{t_i} = k_i g(h_i) \frac{\partial \mathcal{L}}{\partial h_i},\] and \[\dot{\alpha} = -\gamma\frac{\partial \mathcal{L}}{\partial \alpha}.\] Substituting $\dot{h_i}$ and $\dot{\alpha}$ in $\dot{V}(\mathbf{h}, \alpha)$, we obtain \begin{align*} \dot{V}(\mathbf{h}, \alpha) &= \sum_{i}{(h_i - h_i^*)\frac{\partial \mathcal{L}}{\partial h_i}} - (\alpha - \alpha^*)\frac{\partial \mathcal{L}}{\partial \alpha} \\ &\le \mathcal{L}(\mathbf{h}, \alpha) - \mathcal{L}(\mathbf{h}^*, \alpha) + \mathcal{L}(\mathbf{h}, \alpha^*) - \mathcal{L}(\mathbf{h}, \alpha) \\ &= \Big(\mathcal{L}(\mathbf{h}^*, \alpha^*) - \mathcal{L}(\mathbf{h}^*, \alpha)\Big) + \Big(\mathcal{L}(\mathbf{h}, \alpha^*) - \mathcal{L}(\mathbf{h}^*, \alpha^*)\Big) \\ &\le 0, \end{align*} where the last inequality follows from \[\mathcal{L}(\mathbf{h}, \alpha^*) \le \mathcal{L}(\mathbf{h}^*, \alpha^*) \le \mathcal{L}(\mathbf{h}^*, \alpha),\] for any $\mathbf{h}$ and $\alpha$. Moreover, $V(\mathbf{h}, \alpha)$ is non-negative and equals zero only at $(\mathbf{h}^*, \alpha^*)$. Therefore, $V(\mathbf{h}, \alpha)$ is a Lyapunov function, and the system state will converge to the optimum starting from any initial condition. \end{appendices} \section{Conclusion} \label{sec:conclusion} In this paper, we proposed the concept of utility-driven caching, and formulated it as an optimization problem with rigid and elastic cache storage size constraints.
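The primal-dual controllers above are easy to exercise numerically. The sketch below is a simple Euler discretization for a reset-TTL cache, where $f(t)=1-e^{-\lambda t}$ gives $g(h)=f'(f^{-1}(h))=\lambda(1-h)$; the utilities $U_i(h)=w_i\log h$ and all parameter values are illustrative choices for the sketch, not from the paper:

```python
# Euler discretization of the primal-dual controllers, reset-TTL case:
#   h_i_dot   = k * g(h_i) * dL/dh_i,  with g(h) = lam*(1 - h)
#   alpha_dot = -gamma * dL/dalpha = gamma * (sum_i h_i - B)
# Utilities U_i(h) = w_i*log(h), so dL/dh_i = w_i/h_i - alpha.
w   = [1.0, 2.0, 3.0]   # illustrative utility weights
lam = [1.0, 1.0, 1.0]   # illustrative request rates
B   = 1.5               # cache size
k, gamma, dt = 1.0, 1.0, 1e-3

h, alpha = [0.5, 0.5, 0.5], 0.0
for _ in range(300_000):
    for i in range(3):
        dL_dh = w[i] / h[i] - alpha                    # dL/dh_i
        h[i] += dt * k * lam[i] * (1 - h[i]) * dL_dh   # primal update
        h[i] = min(max(h[i], 1e-6), 1 - 1e-6)          # keep h_i in (0,1)
    alpha += dt * gamma * (sum(h) - B)                 # dual update

# At the fixed point, w_i/h_i = alpha and sum(h) = B, i.e. h_i = w_i*B/sum(w).
```

With these log utilities the fixed point is the weighted proportionally fair split $h_i = w_iB/\sum_j w_j$ and $\alpha^*=\sum_j w_j/B$, which the trajectory approaches, consistent with the Lyapunov argument.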
Utility-driven caching provides a general framework for defining caching policies that account for fairness among various groups of files, with implications for the market economy of (cache) service providers and content publishers. This framework can also model existing caching policies, such as FIFO and LRU, as utility-driven caching policies. We developed three decentralized algorithms that implement utility-driven caching policies in an online fashion and that can adapt to changes in file request rates over time. We proved that these algorithms are globally stable and converge to the optimal solution. Through simulations we illustrated the efficiency of these algorithms and the flexibility of our approach. \section{Discussion} \label{sec:discussion} In this section, we explore the implications of utility-driven caching for monetizing the caching service and discuss some future research directions. \subsection{Decomposition} The formulation of the problem in Section~\ref{sec:opt} assumes that the utility functions $U_i(\cdot)$ are known to the system. In reality, content providers might decide not to share their utility functions with the service provider. To handle this case, we decompose the optimization problem~\eqref{eq:opt} into two simpler problems. Suppose that the cache storage is offered as a service and the service provider charges content providers at a constant rate $r$ for storage space. Hence, a content provider needs to pay an amount of $w_i = rh_i$ to receive hit probability $h_i$ for file $i$.
The utility maximization problem for the content provider of file $i$ can then be written as \begin{align} \label{eq:opt_user} \text{maximize} \quad &U_i(w_i/r) - w_i \\ \text{such that} \quad &w_i \ge 0 \notag \end{align} Now, assuming that the service provider knows the vector $\mathbf{w}$, for a proportionally fair resource allocation, the hit probabilities should be set according to \begin{align} \label{eq:opt_network} \text{maximize} \quad &\sum_{i=1}^{N}{w_i\log{(h_i)}} \\ \text{such that} \quad &\sum_{i=1}^{N}{h_i} = B \notag \end{align} It was shown in~\cite{kelly97} that there always exist vectors $\mathbf{w}$ and $\mathbf{h}$, such that $\mathbf{w}$ solves~\eqref{eq:opt_user} and $\mathbf{h}$ solves~\eqref{eq:opt_network}; further, the vector $\mathbf{h}$ is the unique solution to~\eqref{eq:opt}. \subsection{Cost and Utility Functions} In Section~\ref{sec:soft}, we defined a penalty function denoting the cost of using additional storage space. One might also define cost functions based on the consumed network bandwidth. This is especially interesting in modeling in-network caches with network links that are likely to be congested. Optimization problem~\eqref{eq:opt} uses utility functions defined as functions of the hit probabilities. It is reasonable to define utility as a function of the hit \emph{rate}. Whether this makes any changes to the problem, \emph{e.g.}\ in the notion of fairness, is a question that requires further investigation. One argument in support of utilities as functions of hit rates is that a service provider might prefer pricing based on request rate rather than cache occupancy. Moreover, in designing hierarchical caches a service provider's objective could be to minimize the internal bandwidth cost. This can be achieved by defining the utility functions as $U_i = -C_i(m_i)$ where $C_i(m_i)$ denotes the cost associated with miss rate $m_i$ for file $i$. 
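Under log utilities both subproblems admit simple closed forms, which makes the decomposition concrete. A minimal sketch (the function names and the choice $U_i(h)=a_i\log h$ are illustrative assumptions, not from the paper): for~\eqref{eq:opt_user} with $U_i(h)=a_i\log h$ the first-order condition $a_i/w_i-1=0$ gives $w_i^*=a_i$ regardless of the rate $r$, and the network problem~\eqref{eq:opt_network} yields the weighted proportionally fair split $h_i=w_iB/\sum_j w_j$, valid whenever no $h_i$ exceeds one:

```python
def provider_payment(a, r):
    # Content-provider problem: maximize a*log(w/r) - w over w >= 0.
    # First-order condition a/w - 1 = 0 gives w* = a, independent of r.
    return a

def prop_fair_hit_probs(w, B):
    # Network problem: maximize sum_i w_i*log(h_i) subject to sum_i h_i = B.
    # Stationarity w_i/h_i = const gives h_i = w_i*B/sum(w), valid as long
    # as every resulting h_i stays below 1 (else the h_i <= 1 caps bind).
    total = sum(w)
    return [wi * B / total for wi in w]
```

The resulting hit probabilities are proportional to the payments, matching the proportional-fairness interpretation of~\eqref{eq:opt_network}.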
\subsection{Online Algorithms} In Section~\ref{sec:online}, we developed three online algorithms that can be used to implement utility-driven caching. Although these algorithms are proven to be stable and converge to the optimal solution, they have distinct features that can make one algorithm better suited than another to implementing a particular policy. For example, implementing the max-min fair policy based on the dual solution requires knowing/estimating the file request rates, while it can be implemented using the modified primal-dual solution without such knowledge. Moreover, the convergence rate of these algorithms may differ for different policies. The choice of non-reset or reset TTL caches also has implications for the design and performance of these algorithms. These are subjects that require further study. \section{Online Algorithms} \label{sec:online} In Section~\ref{sec:opt}, we formulated utility-driven caching as a convex optimization problem with either a fixed or an elastic cache size. However, it is not feasible to solve the optimization problem offline and then implement the optimal strategy. Moreover, the system parameters can change over time. Therefore, we need algorithms that can be used to implement the optimal strategy and adapt to changes in the system by collecting limited information. In this section, we develop such algorithms. \subsection{Dual Solution} \label{sec:dual} The utility-driven caching problem formulated in~\eqref{eq:opt} is a convex optimization problem, and hence the optimal solution can be obtained by solving the dual problem.
The Lagrange dual of the above problem is obtained by incorporating the constraints into the maximization by means of Lagrange multipliers \begin{align*} \text{minimize} \quad &D(\alpha, \boldsymbol{\nu}, \boldsymbol{\eta}) = \max_{h_i}\Bigg\{ \sum_{i=1}^{N}{U_i(h_i)} \\ &\qquad\qquad\qquad -\alpha\left[ \sum_{i=1}^{N}{h_i} - B\right] \\ &\qquad\qquad\qquad -\sum_{i=1}^{N}{\nu_i (h_i - 1)} + \sum_{i=1}^{N}{\eta_i h_i} \Bigg\}\\ \text{such that} \quad &\alpha \ge 0, \quad \boldsymbol{\nu}, \boldsymbol{\eta} \ge \mathbf{0}. \end{align*} Using timer-based caching techniques for controlling the hit probabilities with $0 < t_i < \infty$ ensures that $0 < h_i < 1$, and hence we have $\nu_i = 0$ and $\eta_i = 0$. Here, we consider an algorithm based on the dual solution to the utility maximization problem~\eqref{eq:opt}. Recall that we can write the Lagrange dual of the utility maximization problem as \[D(\alpha) = \max_{h_i}{\left\{ \sum_{i=1}^{N}{U_i(h_i)}-\alpha\left[ \sum_{i=1}^{N}{h_i} - B\right] \right\}},\] and the dual problem can be written as \[\min_{\alpha \ge 0}{D(\alpha)}.\] A natural decentralized approach to consider for minimizing $D(\alpha)$ is to gradually move the decision variables towards the optimal point using the gradient descent algorithm. The gradient can be easily computed as \[\frac{\partial D(\alpha)}{\partial\alpha} = -\Big(\sum_{i}{h_i} - B \Big),\] and since we are doing a gradient \emph{descent}, $\alpha$ should be updated according to the \emph{negative} of the gradient as \[\alpha \gets \max{\Big\{0, \alpha + \gamma \Big( \sum_{i}{h_i} - B \Big)\Big\}},\] where $\gamma > 0$ controls the step size at each iteration. Note that the KKT conditions require that $\alpha \ge 0$.
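As an illustration, the gradient update on $\alpha$ can be simulated directly. The sketch below uses a toy setup (hypothetical log-utilities $U_i(h) = w_i \log h$, for which ${U'_i}^{-1}(\alpha) = w_i/\alpha$, and made-up weights); it is not the paper's simulation setup, only a minimal instance of the update rule.

```python
# Toy illustration of the dual gradient descent on alpha (not the paper's
# exact setup). Assumes log-utilities U_i(h) = w_i * log(h), for which
# U_i'^{-1}(alpha) = w_i / alpha, clipped to stay strictly below 1.
w = [1.0 / (i + 1) for i in range(100)]   # hypothetical weights
B = 3.0                                   # target expected cache occupancy
gamma = 0.05                              # step size
alpha = 1.0                               # initial dual variable

def hit_probs(alpha):
    return [min(w_i / alpha, 1.0 - 1e-9) for w_i in w]

for _ in range(2000):
    h = hit_probs(alpha)
    # alpha moves along the negative gradient of D(alpha)
    alpha = max(0.0, alpha + gamma * (sum(h) - B))

h = hit_probs(alpha)   # at convergence, sum(h) settles at B
```

At the fixed point the hit probabilities are proportional to the weights, as expected for log-utilities.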
Based on the discussion in Section~\ref{sec:opt}, to satisfy the optimality condition we must have \[U'_i(h_i) = \alpha,\] or equivalently \[h_i = {U'_i}^{-1}(\alpha).\] The hit probabilities are then controlled through the timer parameters $t_i$, which can be set according to~\eqref{eq:non_reset_t} and~\eqref{eq:reset_t} for non-reset and reset TTL caches. Considering the hit probabilities as indicators of files residing in the cache, the sum $\sum_{i}{h_i}$ can be interpreted as the expected number of items in the cache, which we approximate by the current occupancy, denoted here as $B_{curr}$. We can thus summarize the control algorithm for a reset TTL cache as \begin{align} \label{eq:dual_sol} t_i &= -\frac{1}{\lambda_i} \log{\Big(1 - {U'_i}^{-1}(\alpha) \Big)}, \notag\\ \alpha &\gets \max{\{0, \alpha + \gamma ( B_{curr} - B )\}}. \end{align} We obtain an algorithm for a non-reset TTL cache by using the corresponding expression for $t_i$ in~\eqref{eq:non_reset_t}. Let $\alpha^*$ denote the optimal value for $\alpha$. We show in Appendix~\ref{appn:dual} that $D(\alpha) - D(\alpha^*)$ is a Lyapunov function and the above algorithm converges to the optimal solution. \subsection{Primal Solution} We now consider an algorithm based on the optimization problem in~\eqref{eq:opt_soft} known as the \emph{primal} formulation. Let $W(\mathbf{h})$ denote the objective function in~\eqref{eq:opt_soft} defined as \[W(\mathbf{h}) = \sum_{i=1}^{N}{U_i(h_i)} - C(\sum_{i=1}^{N}{h_i} - B).\] A natural approach to obtain the maximum value for $W(\mathbf{h})$ is to use the gradient ascent algorithm. The basic idea behind the gradient ascent algorithm is to move the variables $h_i$ in the direction of the gradient \[\frac{\partial W(\mathbf{h})}{\partial h_i} = U'_i(h_i) - C'(\sum_{i=1}^{N}{h_i} - B).\] Since the hit probabilities are controlled by the TTL timers, we move $h_i$ towards the optimal point by updating timers $t_i$. Let $\dot{h_i}$ denote the derivative of the hit probability $h_i$ with respect to time.
Similarly, define $\dot{t_i}$ as the derivative of the timer parameter $t_i$ with respect to time. We have \[\dot{h_i} = \frac{\partial h_i}{\partial t_i}\dot{t_i}.\] From~\eqref{eq:hit_non_reset} and~\eqref{eq:hit_reset}, it is easy to confirm that $\partial h_i/\partial t_i > 0$ for non-reset and reset TTL caches. Therefore, moving $t_i$ in the direction of the gradient also moves each $h_i$ in that direction. By gradient ascent, the timer parameters should be updated according to \[t_i \gets \max{\left\{0, t_i + k_i\Big[ U'_i(h_i) - C'(B_{curr} - B) \Big]\right\}},\] where $k_i > 0$ is the step-size parameter, and $\sum_{i=1}^{N}{h_i}$ has been replaced with $B_{curr}$ based on the same argument as in the dual solution. Let $\mathbf{h}^*$ denote the optimal solution to~\eqref{eq:opt_soft}. We show in Appendix~\ref{appn:primal} that $W(\mathbf{h}^*) - W(\mathbf{h})$ is a Lyapunov function, and the above algorithm converges to the optimal solution. \subsection{Primal-Dual Solution} Here, we consider a third algorithm that combines elements of the previous two algorithms. Consider the control algorithm \begin{align*} t_i &\gets \max{\{0, t_i + k_i [ U'_i(h_i) - \alpha]\}}, \\ \alpha &\gets \max{\{0, \alpha + \gamma (B_{curr} - B)\}}. \end{align*} Using Lyapunov techniques we show in Appendix~\ref{appn:primal_dual} that the above algorithm converges to the optimal solution. Now, rather than updating the timer parameters according to the above rule explicitly based on the utility function, we can have update rules based on a cache hit or miss. Consider the following differential equation \begin{equation} \label{eq:t} \dot{t_i} = \delta_m(t_i, \alpha)(1 - h_i)\lambda_i - \delta_h(t_i, \alpha)h_i\lambda_i, \end{equation} where $\delta_m(t_i, \alpha)$ and $-\delta_h(t_i, \alpha)$ denote the change in $t_i$ upon a cache miss or hit for file $i$, respectively.
More specifically, the timer for file $i$ is increased by $\delta_m(t_i, \alpha)$ upon a cache miss, and decreased by $\delta_h(t_i, \alpha)$ on a cache hit. The equilibrium for~\eqref{eq:t} happens when $\dot{t_i} = 0$, which solving for $h_i$ yields \[h_i = \frac{\delta_m(t_i, \alpha)}{\delta_m(t_i, \alpha) + \delta_h(t_i, \alpha)}.\] Comparing the above expression with $h_i = {U'_i}^{-1}(\alpha)$ suggests that $\delta_m(t_i, \alpha)$ and $\delta_h(t_i, \alpha)$ can be set to achieve desired hit probabilities and caching policies. Moreover, the differential equation~\eqref{eq:t} can be reorganized as \[\dot{t_i} = h_i \lambda_i \Big(\delta_m(t_i, \alpha)/h_i - [\delta_m(t_i, \alpha) + \delta_h(t_i, \alpha)]\Big),\] and to move $t_i$ in the direction of the gradient $U'_i(h_i) - \alpha$ a natural choice for the update functions can be \[\delta_m(t_i, \alpha) = h_i U'_i(h_i), \text{ and } \delta_m(t_i, \alpha) + \delta_h(t_i, \alpha) = \alpha.\] To implement proportional fairness for example, these functions can be set as \begin{equation} \label{eq:prop_pd} \delta_m(t_i, \alpha) = \lambda_i, \text{ and } \delta_h(t_i, \alpha) = \alpha - \lambda_i. \end{equation} For the case of max-min fairness, recall from the discussion in Section~\ref{sec:opt_identical} that a utility function that is content agnostic, \emph{i.e.}\ $U_i(h) = U(h)$, results in a max-min fair resource allocation. Without loss of generality we can have $U_i(h_i) = \log{h_i}$. Thus, max-min fairness can be implemented by having \[\delta_m(t_i, \alpha) = 1, \text{ and } \delta_h(t_i, \alpha) = \alpha - 1.\] Note that with these functions, max-min fairness can be implemented without requiring knowledge about request arrival rates~$\lambda_i$, while the previous approaches require such knowledge. \subsection{Estimation of $\lambda_i$} \label{sec:estimate} Computing the timer parameter $t_i$ in the algorithms discussed in this section requires knowing the request arrival rates for most of the policies. 
Estimation techniques can be used to approximate the request rates in case such knowledge is not available at the (cache) service provider. Let $r_i$ denote the remaining TTL time for file $i$. Note that $r_i$ can be computed based on $t_i$ and a time-stamp for the last time file~$i$ was requested. Let $X_i$ denote the random variable corresponding to the inter-arrival times for the requests for file~$i$, and $\bar{X_i}$ be its mean. We can approximate the mean inter-arrival time as $\hat{\bar{X_i}} = t_i - r_i$. Note that $\hat{\bar{X_i}}$ defined in this way is a one-sample unbiased estimator of $\bar{X_i}$. Therefore, $\hat{\bar{X_i}}$ is an unbiased estimator of $1/\lambda_i$. In the simulation section, we will use this estimator in computing the timer parameters for evaluating our algorithms. \section{Utility Functions and Fairness} \label{sec:fairness} Using different utility functions in the optimization formulation~\eqref{eq:opt} yields different timer values for the files. In this sense, each utility function defines a notion of fairness in allocating storage resources to different files. In this section, we study a number of utility functions that have important fairness properties associated with them. \subsection{Identical Utilities} \label{sec:opt_identical} Assume that all files have the same utility function, \emph{i.e.}\ $U_i(h_i) = U(h_i)$ for all $i$. Then, from~\eqref{eq:c} we obtain \[\sum_{i=1}^{N}{{U'}^{-1}(\alpha)} = N {U'}^{-1}(\alpha) = B,\] and hence \[{U'}^{-1}(\alpha) = B/N.\] Using~\eqref{eq:hu} for the hit probabilities we get \[h_i = B/N, \quad \forall{i}.\] Using a non-reset TTL policy, the timers should be set according to \[t_i = \frac{B}{\lambda_i (N - B)},\] while with a reset TTL policy, they must equal \[t_i = -\frac{1}{\lambda_i}\log{\left(1-\frac{B}{N}\right)}.\] The above calculations show that identical utility functions yield identical hit probabilities for all files. 
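The closed-form timers for the identical-utilities case are easy to verify numerically. The sketch below (with illustrative Zipf-like rates, not a setup from the paper) sets the reset-TTL timers as above and confirms that every file indeed gets hit probability $B/N$:

```python
import math

# Checking the identical-utilities case: the reset-TTL timers
# t_i = -(1/lambda_i) * log(1 - B/N) should give h_i = B/N for every file,
# via h_i = 1 - exp(-lambda_i * t_i). Rates below are illustrative only.
N, B = 1000, 100
lam = [1.0 / (i + 1) ** 0.8 for i in range(N)]

target = B / N
timers = [-math.log(1.0 - target) / l for l in lam]
hits = [1.0 - math.exp(-l * t) for l, t in zip(lam, timers)]
# every hit probability equals B/N, so the expected occupancy is exactly B
```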
Note that the hit probabilities computed above do not depend on the utility function. \subsection{$\boldsymbol{\beta}$-Fair Utility Functions} Here, we consider the family of $\beta$-fair (also known as \emph{isoelastic}) utility functions given by \[U_i(h_i) = \left\{ \begin{array}{ll} w_i\frac{h_i^{1-\beta}}{1-\beta} & \beta \ge 0, \beta \neq 1; \\ & \\ w_i \log{h_i} & \beta = 1, \end{array} \right. \] where the coefficient $w_i \ge 0$ denotes the weight for file $i$. This family of utility functions unifies different notions of fairness in resource allocation~\cite{srikant13}. In the remainder of this section, we investigate some of the choices for $\beta$ that lead to interesting special cases. \subsubsection{$\boldsymbol{\beta = 0}$}\hspace*{\fill} \\ With $\beta = 0$, we get $U_i(h_i) = w_i h_i$, and maximizing the sum of the utilities corresponds to \[\max_{h_i}{\sum_{i}{w_i h_i}}.\] The utility function defined above does not satisfy the requirements for a utility function mentioned in Section~\ref{sec:model}, as it is not strictly concave. However, it is easy to see that the sum of the utilities is maximized when \[h_i = 1, i=1,\ldots, B \quad \text{ and } \quad h_i = 0, i=B+1,\ldots, N,\] where we assume that weights are sorted as ${w_1 \ge \ldots \ge w_N}$. These hit probabilities indicate that the optimal timer parameters are \[t_i = \infty, i=1,\ldots, B \quad \text{ and } \quad t_i = 0, i=B+1,\ldots, N.\] Note that the policy obtained by implementing this utility function with $w_i = \lambda_i$ corresponds to the Least-Frequently Used (LFU) caching policy, and maximizes the overall throughput.
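In other words, the $\beta = 0$ optimum simply fills the cache with the $B$ highest-weight files. A minimal sketch (with made-up weights, for illustration):

```python
# Sketch of the beta = 0 allocation: with U_i(h) = w_i * h the optimum caches
# the B files with the largest weights. With w_i = lambda_i this is LFU.
def beta_zero_allocation(weights, B):
    """Return hit probabilities: 1 for the B largest weights, 0 otherwise."""
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    h = [0.0] * len(weights)
    for i in order[:B]:
        h[i] = 1.0
    return h

rates = [5.0, 1.0, 3.0, 0.5, 2.0]     # illustrative request rates
h = beta_zero_allocation(rates, B=2)  # caches the files with rates 5.0 and 3.0
```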
\subsubsection{$\boldsymbol{\beta = 1}$}\hspace*{\fill} \\ Letting $\beta = 1$, we get $U_i(h_i) = w_i \log{h_i}$, and hence maximizing the sum of the utilities corresponds to \[\max_{h_i}{\sum_{i}{w_i \log{h_i}}}.\] It is easy to see that ${U'_i}^{-1}(\alpha) = w_i / \alpha$, and hence using~\eqref{eq:c} we obtain \[\sum_{i}{{U'_i}^{-1}(\alpha)} = \sum_{i}{w_i} / \alpha = B,\] which yields \[\alpha = \sum_{i}{w_i} / B.\] The hit probability of file $i$ then equals \[h_i = {U'_i}^{-1}(\alpha) = \frac{w_i}{\sum_{j}{w_j}}B.\] This utility function implements a \emph{proportionally fair} policy~\cite{kelly98}. With $w_i = \lambda_i$, the hit probability of file $i$ is proportional to the request arrival rate $\lambda_i$. \subsubsection{$\boldsymbol{\beta = 2}$}\hspace*{\fill} \\ With $\beta = 2$, we get $U_i(h_i) = -w_i/h_i$, and maximizing the total utility corresponds to \[\max_{h_i}{\sum_{i}{\frac{-w_i}{h_i}}}.\] In this case, we get ${U'_i}^{-1}(\alpha) = \sqrt{w_i} / \sqrt{\alpha}$, therefore \[\sum_{i}{{U'_i}^{-1}(\alpha)} = \sum_{i}{\sqrt{w_i}} / \sqrt{\alpha} = B,\] and hence \[\alpha = \Big(\sum_{i}{\sqrt{w_i}}\Big)^2 / B^2.\] The hit probability of file $i$ then equals \[h_i = {U'_i}^{-1}(\alpha) = \frac{\sqrt{w_i}}{\sqrt{\alpha}} = \frac{\sqrt{w_i}}{\sum_{j}{\sqrt{w_j}}}B.\] The utility function defined above is known to yield minimum potential delay fairness. It was shown in~\cite{kelly98} that the TCP congestion control protocol implements such a utility function. \subsubsection{$\boldsymbol{\beta \rightarrow\infty}$}\hspace*{\fill} \\ With $\beta \rightarrow\infty$, maximizing the sum of the utilities corresponds to (see~\cite{mo00} for proof) \[\max_{h_i}{\min_{i}{h_i}}.\] This utility function does not comply with the rules mentioned in Section~\ref{sec:model} for utility functions, as it is not strictly concave. 
However, it is easy to see that the above utility function yields \[h_i = B/N, \quad \forall{i}.\] The utility function defined here maximizes the minimum hit probability, and corresponds to \emph{max-min fairness}. Note that using identical utility functions for all files resulted in the same hit probabilities as this case. A brief summary of the utility functions discussed here is given in Table~\ref{tbl:u}. \begin{table*}[] \centering \caption{$\beta$-fair utility functions family} \begin{tabular}{ | c | c | c | c |} \hline $\beta$ & $\max{\sum_{i}{U_i(h_i)}}$ & $h_i$ & implication \\ \hline 0 & $\max{\sum{w_i h_i}}$ & $h_i = 1, i\le B, h_i = 0, i \ge B+1$ & maximizing overall throughput \\ 1 & $\max{\sum{w_i \log{h_i}}}$ & $h_i = w_i B / \sum_{j}{w_j}$ & proportional fairness \\ 2 & $\max{-\sum{w_i / h_i}}$ & $h_i = \sqrt{w_i} B / \sum_{j}{\sqrt{w_j}}$ & minimize potential delay \\ $\infty$ & $\max{\min{h_i}}$ & $h_i = B/N$ & max-min fairness \\ \hline \end{tabular} \label{tbl:u} \end{table*} \section{Introduction} The increase in data traffic over the past years is predicted to continue even more aggressively, with global Internet traffic in 2019 estimated to reach 64 times its 2005 volume~\cite{cisco14}. The growth in data traffic is recognized to be due primarily to streaming of video on-demand content over cellular networks. However, traditional methods such as increasing the amount of spectrum or deploying more base stations are not sufficient to cope with this predicted traffic increase~\cite{Andrews12, Golrezaei12}. Caching is recognized, in current and future Internet architecture proposals, as one of the most effective means to improve the performance of web applications. By bringing content closer to users, caches greatly reduce network bandwidth usage, server load, and perceived service delays~\cite{borst10}.
With the trend toward ubiquitous computing and the emergence of new content publishers and consumers, the Internet is becoming an increasingly heterogeneous environment in which different content types have different quality-of-service requirements, depending on the content publisher/consumer. Such increasing diversity in service expectations argues for content delivery infrastructures with service differentiation among different applications and content classes. Service differentiation not only induces important technical gains, but also provides significant economic benefits~\cite{feldman02}. Despite a plethora of research on the design and implementation of \emph{fair} and \emph{efficient} algorithms for differentiated bandwidth sharing in communication networks, little work has focused on the provision of multiple levels of service in network and web caches. The available research has focused on designing controllers for partitioning cache space~\cite{ko03, lu04}, replacement policies biased towards particular content classes~\cite{kelly99}, or multiple levels of caches~\cite{feldman02}. These techniques either require additional controllers for fairness, or use the cache storage inefficiently. Moreover, traditional cache management policies such as LRU treat different contents in a strongly coupled manner that makes it difficult for (cache) service providers to implement differentiated services, and for content publishers to account for the valuation of their content delivered through content distribution networks. In this paper, we propose a utility-driven caching framework, where each content has an associated utility and content is stored and managed in a cache so as to maximize the aggregate utility for all content. Utilities can be chosen to trade off user satisfaction and cost of storing the content in the cache.
We draw on analytical results for time-to-live (TTL) caches~\cite{Nicaise14b} to design caches that associate utilities with individual (or classes of) contents. Utility functions also have implicit notions of fairness that dictate the time each content stays in cache. Our framework allows us to develop \emph{online} algorithms for cache management, which we prove converge to the optimal solution. Our framework has implications for distributed pricing and control mechanisms and hence is well-suited for designing cache market economic models. Our main contributions in this paper can be summarized as follows: \begin{itemize} \item We formulate a utility-based optimization framework for maximizing aggregate content publisher utility subject to buffer capacity constraints at the service provider. We show that existing caching policies, \emph{e.g.}\ LRU, LFU and FIFO, can be modeled as utility-driven caches within this framework. \item By reverse engineering the LRU and FIFO caching policies as utility maximization problems, we show how the \emph{characteristic time}~\cite{Che01} defined for these caches relates to the Lagrange multiplier corresponding to the cache capacity constraint. \item We develop online algorithms for managing cache content, and prove the convergence of these algorithms to the optimal solution using Lyapunov functions. \item We show that our framework can be used in revenue-based models where content publishers react to prices set by (cache) service providers without revealing their utility functions. \item We perform simulations to show the efficiency of our online algorithms using different utility functions with different notions of fairness. \end{itemize} The remainder of the paper is organized as follows. We review related work in the next section. Section~\ref{sec:model} explains the network model considered in this paper, and Section~\ref{sec:opt} describes our approach to designing utility maximizing caches.
In Section~\ref{sec:fairness} we elaborate on fairness implications of utility functions, and in Section~\ref{sec:reverse}, we derive the utility functions maximized by LRU and FIFO caches. In Section~\ref{sec:online}, we develop online algorithms for implementing utility maximizing caches. We present simulation results in Section~\ref{sec:simulation}, and discuss prospects and implications of the cache utility maximization framework in Section~\ref{sec:discussion}. Finally, we conclude the paper in Section~\ref{sec:conclusion}. \section{Model} \label{sec:model} Consider a set of $N$ files, and a cache of size $B$. We use the terms file and content interchangeably in this paper. Let $h_i$ denote the hit probability for content $i$. Associated with each content, $i=1,\ldots, N$, is a utility function $U_i:[0,1] \rightarrow \mathbb{R}$ that represents the ``satisfaction'' perceived by observing hit probability $h_i$. $U_i(\cdot)$ is assumed to be increasing, continuously differentiable, and strictly concave. Note that these properties imply that both $U_i(\cdot)$ and its derivative are invertible. We will treat utility functions that do not satisfy these constraints as special cases. \subsection{TTL Caches} In a TTL cache, each content is associated with a timer~$t_i$. Whenever a cache miss to content $i$ occurs, content $i$ is stored in the cache and its timer is set to $t_i$. Timers decrease at a constant rate, and a content is evicted from cache when its timer reaches zero. We can adjust the hit probability of a file by controlling the time a file is kept in cache. There are two TTL cache designs: \begin{itemize} \item Non-reset TTL Cache: TTL is only set at cache misses, \emph{i.e.}~TTL is not reset upon cache hits. \item Reset TTL Cache: TTL is set each time the content is requested.
\end{itemize} Previous work on the analysis of TTL caches~\cite{Nicaise14} has shown that the hit probability of file $i$ for these two classes of non-reset and reset TTL caches can be expressed as \begin{equation} \label{eq:hit_non_reset} h_i = 1 - \frac{1}{1 + \lambda_i t_i}, \end{equation} and \begin{equation} \label{eq:hit_reset} h_i = 1 - e^{-\lambda_i t_i}, \end{equation} respectively, where requests for file $i$ arrive at the cache according to a Poisson process with rate $\lambda_i$. Note that depending on the utility functions, different (classes of) files might have different or equal TTL values. \section{Cache Utility Maximization} \label{sec:opt} In this section, we formulate cache management as a utility maximization problem. We introduce two formulations, one where the buffer size introduces a hard constraint and a second where it introduces a soft constraint. \subsection{Hard Constraint Formulation} We are interested in designing a cache management policy that optimizes the sum of utilities over all files, more precisely, \begin{align} \label{eq:opt} \text{maximize} \quad &\sum_{i=1}^{N}{U_i(h_i)} \notag\\ \text{such that} \quad &\sum_{i=1}^{N}{h_i} = B \\ & 0 \le h_i \le 1, \quad i=1, 2, \ldots, N. \notag \end{align} Note that the feasible solution set is convex and since the objective function is strictly concave and continuous, a unique maximizer, called the optimal solution, exists. Also note that the buffer constraint is based on the {\em expected} number of files not exceeding the buffer size and not the total number of files. Towards the end of this section, we show that the buffer space can be managed in a way such that the probability of \emph{violating} the buffer size constraint vanishes as the number of files and cache size grow large. This formulation does not enforce any special technique for managing the cache content, and any strategy that can easily adjust the hit probabilities can be employed. 
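The two TTL hit-probability expressions~\eqref{eq:hit_non_reset} and~\eqref{eq:hit_reset} translate directly into code. A small sketch, assuming Poisson arrivals as above; note that for the same timer the reset variant always yields the higher hit probability, since $e^{-x} \le 1/(1+x)$:

```python
import math

def hit_non_reset(lam, t):
    """Non-reset TTL cache: h = 1 - 1 / (1 + lambda * t)."""
    return 1.0 - 1.0 / (1.0 + lam * t)

def hit_reset(lam, t):
    """Reset TTL cache: h = 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-lam * t)
```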
We use the TTL cache as our building block because it provides the means through setting timers to control the hit probabilities of different files in order to maximize the sum of utilities. Using timer-based caching techniques for controlling the hit probabilities with $0 < t_i < \infty$ ensures that $0 < h_i < 1$, and hence, disregarding the possibility of $h_i = 0$ or $h_i = 1$, we can write the Lagrangian function as \begin{align*} \mathcal{L}(\mathbf{h}, \alpha) &= \sum_{i=1}^{N}{U_i(h_i)}-\alpha\left[ \sum_{i=1}^{N}{h_i} - B\right] \\ &= \sum_{i=1}^{N}{\Big[ U_i(h_i)-\alpha h_i \Big]} + \alpha B, \end{align*} where $\alpha$ is the Lagrange multiplier. In order to achieve the maximum in $\mathcal{L}(\mathbf{h}, \alpha)$, the hit probabilities should satisfy \begin{equation} \label{eq:drv} \frac{\partial\mathcal{L}}{\partial h_i} = \frac{\mathrm{d} U_i}{\mathrm{d} h_i} - \alpha = 0. \end{equation} Let $U'_i(\cdot)$ denote the derivative of the utility function $U_i(\cdot)$, and define ${U'_i}^{-1}(\cdot)$ as its inverse function. From~\eqref{eq:drv} we get \[U'_i(h_i) = \alpha,\] or equivalently \begin{equation} \label{eq:hu} h_i = {U'_i}^{-1}(\alpha). \end{equation} Applying the cache storage constraint we obtain \begin{equation} \label{eq:c} \sum_{i}{h_i} = \sum_{i}{{U'_i}^{-1}(\alpha)} = B, \end{equation} and $\alpha$ can be computed by solving the fixed-point equation given above. As mentioned before, we can implement utility maximizing caches using TTL based policies. Using the expression for the hit probabilities of non-reset and reset TTL caches given in~\eqref{eq:hit_non_reset} and~\eqref{eq:hit_reset}, we can compute the timer parameters $t_i$, once $\alpha$ is determined from~\eqref{eq:c}.
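In general the fixed-point equation~\eqref{eq:c} has no closed form, but since each ${U'_i}^{-1}$ is decreasing in $\alpha$, the total occupancy is monotone and the equation can be solved by bisection. A sketch, assuming the hypothetical choice $U_i(h) = \lambda_i \log h$ (so that ${U'_i}^{-1}(\alpha) = \lambda_i/\alpha$, clipped at 1):

```python
# Bisection on the capacity fixed point sum_i U_i'^{-1}(alpha) = B.
# Illustrative choice: U_i(h) = lambda_i * log(h), so U_i'^{-1}(alpha) =
# lambda_i / alpha (clipped at 1); occupancy is then decreasing in alpha.
def solve_alpha(lam, B, lo=1e-9, hi=1e9, iters=200):
    def occupancy(alpha):
        return sum(min(l / alpha, 1.0) for l in lam)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if occupancy(mid) > B:
            lo = mid   # occupancy too large: alpha must grow
        else:
            hi = mid
    return (lo + hi) / 2.0

rates = [1.0 / (i + 1) for i in range(100)]  # hypothetical request rates
alpha = solve_alpha(rates, B=3.0)
h = [min(l / alpha, 1.0) for l in rates]     # resulting hit probabilities
```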
For non-reset TTL caches we obtain \begin{equation} \label{eq:non_reset_t} t_i = -\frac{1}{\lambda_i}\Big(1 - \frac{1}{1 - {U'_i}^{-1}(\alpha)}\Big), \end{equation} and for reset TTL caches we get \begin{equation} \label{eq:reset_t} t_i = -\frac{1}{\lambda_i}\log{\Big(1 - {U'_i}^{-1}(\alpha)\Big)}. \end{equation} \subsection{Soft Constraint Formulation} \label{sec:soft} The formulation in~\eqref{eq:opt} assumes a hard constraint on cache capacity. In some circumstances it may be appropriate for the (cache) service provider to increase the available cache storage at some cost to the file provider for the additional resources\footnote{One straightforward way of thinking about this is to turn cache memory disks on and off based on demand.}. In this case the cache capacity constraint can be replaced with a penalty function $C(\cdot)$ denoting the cost for the extra cache storage. Here, $C(\cdot)$ is assumed to be a convex and increasing function. We can now write the utility and cost driven caching formulation as \begin{align} \label{eq:opt_soft} \text{maximize} \quad &\sum_{i=1}^{N}{U_i(h_i)} - C(\sum_{i=1}^{N}{h_i} - B) \\ \text{such that} \quad &0 \le h_i \le 1, \quad i=1,2,\ldots, N. \notag \end{align} Note that the optimality condition for the above optimization problem states that \[U'_i(h_i) = C'(\sum_{i=1}^{N}{h_i} - B).\] Therefore, for the hit probabilities we obtain \[h_i = {U'_i}^{-1}\Big(C'(\sum_{i=1}^{N}{h_i} - B)\Big),\] and the optimal value for the cache storage can be computed using the fixed-point equation \begin{equation} \label{eq:elastic_B} B^* = \sum_{i=1}^{N}{{U'_i}^{-1}\Big(C'(B^* - B)\Big)}. \end{equation} \subsection{Buffer Constraint Violations} \label{sec:violation} Before we leave this section, we address an issue that arises in both formulations, namely how to deal with the fact that there may be more contents with unexpired timers than can be stored in the buffer.
This occurs in the formulation of (\ref{eq:opt}) because the constraint is on the {\em average} buffer occupancy and in (\ref{eq:opt_soft}) because there is no constraint. Let us focus on the formulation in (\ref{eq:opt}) first. Our approach is to provide a buffer of size $B(1+\epsilon )$ with $\epsilon > 0$, where a portion $B$ is used to solve the optimization problem and the additional portion $\epsilon B$ to handle buffer violations. We will see that as the number of contents, $N$, increases, we can get by growing $B$ in a sublinear manner, and allow $\epsilon$ to shrink to zero, while ensuring that content will not be evicted from the cache before their timers expire with high probability. Let $X_i$ denote whether content $i$ is in the cache or not; $P(X_i = 1) = h_i$. Now let $\mathbb{E}\bigl[\sum_{i=1}^N X_i\bigr] = \sum_{i=1}^N h_i = B$. We write $B(N)$ as a function of $N$, and assume that $B(N) = \omega (1)$. \begin{theorem} \label{thrm:violation} For any $\epsilon > 0$ \[ \mathbb{P}\bigl(\sum_{i=1}^N X_i \ge B(N)(1+\epsilon)\bigr) \le e^{-\epsilon^2 B(N)/3} . \] \end{theorem} The proof follows from the application of a Chernoff bound. Theorem~\ref{thrm:violation} states that we can size the buffer as $B(1+\epsilon)$ while using a portion $B$ as the constraint in the optimization. The remaining portion, $\epsilon B$, is used to protect against buffer constraint violations. It suffices for our purpose that ${\epsilon^2 B(N) = \omega (1)}$. This allows us to select $B(N) = o(N)$ while at the same time selecting $\epsilon = o(1)$. As an example, consider Zipf's law with $\lambda_i = \lambda/i^s$, $\lambda > 0$, $0 < s <1$, $i=1,\ldots, N$ under the assumption that $\max{\{t_i\}} = t$ for some $t <\infty$. In this case, we can grow the buffer as $B(N) = O(N^{1-s})$ while $\epsilon$ can shrink as $\epsilon = 1/N^{(1-s)/3}$. Analogous expressions can be derived for $s \ge 1$. Similar choices can be made for the soft constraint formulation.
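Theorem~\ref{thrm:violation} is straightforward to sanity-check by simulation. The toy experiment below (parameters chosen only for illustration) draws independent Bernoulli occupancy indicators and compares the empirical violation frequency against the Chernoff bound:

```python
import math, random

# Monte Carlo check of the buffer-violation bound: with independent
# Bernoulli(h_i) occupancy indicators summing to B in expectation, the
# frequency of exceeding B(1 + eps) should sit below exp(-eps^2 * B / 3).
random.seed(1)
N, B, eps = 2000, 80, 0.2
h = [B / N] * N                        # identical hit probabilities, sum = B
bound = math.exp(-eps ** 2 * B / 3.0)  # Chernoff bound from the theorem

trials, violations = 500, 0
for _ in range(trials):
    occupied = sum(1 for p in h if random.random() < p)
    if occupied >= B * (1 + eps):
        violations += 1
empirical = violations / trials        # well below the bound in practice
```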
\section{Related Work} \subsection{Network Utility Maximization} Utility functions have been widely used in the modeling and control of computer networks, from stability analysis of queues to the study of fairness in network resource allocation; see~\cite{srikant13, neely10} and references therein. Kelly~\cite{kelly97} was the first to formulate the problem of rate allocation as one of achieving maximum aggregate utility for users, and to describe how network-wide optimal rate allocation can be achieved by having individual users control their transmission rates. The work of Kelly~\emph{et al.}~\cite{kelly98} presents the first mathematical model and analysis of the behavior of congestion control algorithms for general topology networks. Since then, there has been extensive research in generalizing and applying Kelly's \emph{Network Utility Maximization} framework to model and analyze various network protocols and architectures. This framework has been used to study problems such as network routing~\cite{tassiulas92}, throughput maximization~\cite{eryilmaz07}, dynamic power allocation~\cite{neely03} and scheduling in energy harvesting networks~\cite{huang13}, among many others. Ma and Towsley~\cite{Ma15} have recently proposed using utility functions for the purpose of designing contracts that allow service providers to monetize caching. \subsection{Time-To-Live Caches} TTL caches, in which content eviction occurs upon the expiration of a timer, have been employed since the early days of the Internet with the Domain Name System (DNS) being an important application~\cite{Jung03}. More recently, TTL caches have regained popularity, largely because they admit a general analytical approach that can also be used to model replacement-based caching policies such as LRU.
The connection between TTL caches and replacement-based (capacity-driven) policies was first established for the LRU policy by Che~\emph{et al.}~\cite{Che01} through the notion of cache \emph{characteristic time}. The characteristic time was theoretically justified and extended to other caching policies such as FIFO and RANDOM~\cite{Fricker12}. This connection was further confirmed to hold for more general arrival models than Poisson processes~\cite{Bianchi13}. Over the past few years, several exact and approximate analyses have been proposed for modeling single caches in isolation as well as cache networks using the TTL framework~\mbox{\cite{Nicaise14, Berger14}}. In this paper, we use TTL timers as \emph{tuning knobs} for individual (or classes of) files to control the utilities observed by the corresponding contents, and to implement \emph{fair} usage of cache space among different (classes of) contents. We develop our framework based on two types of TTL caches described in the next section. \section{Reverse Engineering} \label{sec:reverse} In this section, we study the widely used replacement-based caching policies, FIFO and LRU, and show that their hit/miss behaviors can be duplicated in our framework through an appropriate choice of utility functions. It was shown in~\cite{Nicaise14} that, with a proper choice of timer values, a TTL cache can generate the same statistical properties, \emph{i.e.}~same hit/miss probabilities, as FIFO and LRU caching policies. In implementing these caches, non-reset and reset TTL caches are used for FIFO and LRU, respectively, with $t_i=T, i=1,\ldots,N$ where $T$ denotes the \emph{characteristic time}~\cite{Che01} of these caches. For FIFO and LRU caches with Poisson arrivals the hit probabilities can be expressed as $h_i = 1 - 1/(1+\lambda_iT)$ and $h_i = 1 - e^{-\lambda_i T}$, and $T$ is computed such that $\sum_{i}{h_i} = B$. 
For example for the LRU policy $T$ is the unique solution to the fixed-point equation \[\sum_{i=1}^{N}{\left(1 - e^{-\lambda_i T}\right)} = B.\] In our framework, we see from~\eqref{eq:hu} that the file hit probabilities depend on the Lagrange multiplier $\alpha$ corresponding to the cache size constraint in~\eqref{eq:opt}. This suggests a connection between $T$ and $\alpha$. Further note that the hit probabilities are increasing functions of $T$. On the other hand, since utility functions are concave and increasing, $h_i = {U'_i}^{-1}(\alpha)$ is a decreasing function of $\alpha$. Hence, we can denote $T$ as a decreasing function of $\alpha$, \emph{i.e.}~$T = f(\alpha)$. Different choices of function $f(\cdot)$ would result in different utility functions for FIFO and LRU policies. However, if we impose the functional dependence $U_i(h_i) = \lambda_i U_0(h_i)$, then the equation $h_i = {U'_i}^{-1}(\alpha)$ yields \[h_i = {U'_0}^{-1}(\alpha/\lambda_i).\] From the expressions for the hit probabilities of the FIFO and LRU policies, we obtain $T = 1/\alpha$. In the remainder of the section, we use this to derive utility functions for the FIFO and LRU policies. 
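The characteristic-time fixed point itself is easy to compute numerically: the occupancy $\sum_i (1 - e^{-\lambda_i T})$ is increasing in $T$, so bisection finds the unique root. A sketch with illustrative Zipf-like rates:

```python
import math

def characteristic_time(lam, B, lo=0.0, hi=1e9, iters=200):
    """Solve sum_i (1 - exp(-lambda_i * T)) = B for the LRU characteristic time."""
    def occupancy(T):
        return sum(1.0 - math.exp(-l * T) for l in lam)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if occupancy(mid) < B:
            lo = mid   # occupancy too small: T must grow
        else:
            hi = mid
    return (lo + hi) / 2.0

rates = [1.0 / (i + 1) ** 0.8 for i in range(1000)]  # illustrative rates
T = characteristic_time(rates, B=100)
occ = sum(1.0 - math.exp(-l * T) for l in rates)     # equals B at the root
```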
\subsection{FIFO} The hit probability of file $i$ with request rate $\lambda_i$ in a FIFO cache with characteristic time $T$ is \[h_i = 1 - \frac{1}{1 + \lambda_i T}.\] Substituting this into~\eqref{eq:hu} and letting $T = 1/\alpha$ yields \[{U'_i}^{-1}(\alpha) = 1 - \frac{1}{1 + \lambda_i / \alpha}.\] Computing the inverse of ${U'_i}^{-1}(\cdot)$ yields \[U'_i(h_i) = \frac{\lambda_i}{h_i} - \lambda_i,\] and integration of the two sides of the above equation yields the utility function for the FIFO cache \[U_i(h_i) = \lambda_i (\log{h_i} - h_i).\] \subsection{LRU} Taking $h_i = 1 - e^{-\lambda_i T}$ for the LRU policy and letting ${T = 1/\alpha}$ yields \[{U'_i}^{-1}(\alpha) = 1 - e^{-\lambda_i/\alpha},\] which yields \[U'_i(h_i) = \frac{-\lambda_i}{\log{(1-h_i)}}.\] Integration of the two sides of the above equation yields the utility function for the LRU caching policy \[U_i(h_i) = \lambda_i \text{li}(1-h_i),\] where $\text{li}(\cdot)$ is the logarithmic integral function \[\text{li}(x) = \int_0^x{\frac{\mathrm{d} t}{\ln{t}}}.\] It is easy to verify, using the approach explained in Section~\ref{sec:opt}, that the utility functions computed above indeed yield the correct expressions for the hit probabilities of the FIFO and LRU caches. We believe these utility functions are unique if restricted to be multiplicative in\footnote{We note that utility functions, defined in this context, are subject to affine transformations, \emph{i.e.}~$aU+b$ yields the same hit probabilities as $U$ for any constant $a>0$ and $b$.} $\lambda_i$. \section{Simulations} \label{sec:simulation} In this section, we evaluate the efficiency of the online algorithms developed in this paper. Due to space restrictions, we limit our study to four caching policies: FIFO, LRU, proportionally fair, and max-min fair. 
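Before turning to the simulations, here is a quick numerical sanity check (illustrative, not part of the paper's code) of the utilities derived in Section~\ref{sec:reverse}: the marginal utilities should evaluate to $\alpha$ at the FIFO and LRU hit probabilities when $T = 1/\alpha$.

```python
import math

def fifo_marginal_utility(h, lam):
    # U'_i(h) = lambda_i / h - lambda_i for the FIFO utility
    return lam / h - lam

def lru_marginal_utility(h, lam):
    # U'_i(h) = -lambda_i / log(1 - h) for the LRU utility
    return -lam / math.log(1.0 - h)

alpha, lam = 2.5, 0.7   # arbitrary illustrative values
T = 1.0 / alpha

# FIFO: h = 1 - 1/(1 + lam*T) should satisfy U'(h) = alpha
h_fifo = 1.0 - 1.0 / (1.0 + lam * T)
assert abs(fifo_marginal_utility(h_fifo, lam) - alpha) < 1e-9

# LRU: h = 1 - exp(-lam*T) should satisfy U'(h) = alpha
h_lru = 1.0 - math.exp(-lam * T)
assert abs(lru_marginal_utility(h_lru, lam) - alpha) < 1e-9
```

Both assertions hold identically in the underlying algebra: for FIFO, $\lambda(1-h)/h = 1/T$, and for LRU, $-\lambda/\log(1-h) = 1/T$.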
Per our discussion in Section~\ref{sec:reverse}, non-reset and reset TTL caches can be used with $t_i = T, i=1,\ldots,N$ to implement caches with the same statistical properties as FIFO and LRU caches. However, previous approaches require precomputing the cache characteristic time $T$. By using the online dual algorithm developed in Section~\ref{sec:dual} we are able to implement these policies with no a priori knowledge of $T$. We do so by implementing non-reset and reset TTL caches, with the timer parameters for all files set as $t_i = 1/\alpha$, where $\alpha$ denotes the dual variable and is updated according to~\eqref{eq:dual_sol}. For the proportionally fair policy, timer parameters are set to \[t_i = \frac{-1}{\lambda_i}\log{\left(1 - \frac{\lambda_i}{\alpha}\right)},\] and for the max-min fair policy we set the timers as \[t_i = \frac{-1}{\lambda_i}\log{\left(1 - \frac{1}{\alpha}\right)}.\] We implement the proportionally fair and max-min fair policies as reset TTL caches. In the experiments to follow, we consider a cache with the expected number of files in the cache set to $B=1000$. Requests arrive for ${N = 10^4}$ files according to a Poisson process with aggregate rate one. File popularities follow a Zipf distribution with parameter ${s=0.8}$,~\emph{i.e.}~${\lambda_i = 1/i^s}$. In computing the timer parameters we use estimated values for the file request rates as explained in Section~\ref{sec:estimate}. Figure~\ref{fig:dual} compares the hit probabilities achieved by our online dual algorithm with those computed numerically for the four policies explained above. It is clear that the online algorithms yield the exact hit probabilities for the FIFO, LRU and max-min fair policies. For the proportionally fair policy, however, the simulated hit probabilities do not exactly match numerically computed values. This is due to an error in estimating $\lambda_i, i=1,\ldots, N$. Note that we use a simple estimator here that is unbiased for $1/\lambda_i$ but biased for $\lambda_i$.
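Two small illustrations of the points above, as a sketch with made-up parameters (all names are ours). First, under Poisson arrivals a reset TTL cache has $h_i = 1 - e^{-\lambda_i t_i}$, so the timer formulas yield $h_i = \lambda_i/\alpha$ for the proportionally fair policy and $h_i = 1/\alpha$ for the max-min fair policy. Second, the sample mean $\bar{X}$ of $k$ exponential inter-arrival times is unbiased for $1/\lambda$, while $1/\bar{X}$ overestimates $\lambda$ (by Jensen's inequality; in fact $\mathbb{E}[1/\bar{X}] = k\lambda/(k-1)$).

```python
import math
import random
import statistics

def reset_ttl_hit(lam, t):
    # Hit probability of a reset TTL cache under Poisson(lam) arrivals.
    return 1.0 - math.exp(-lam * t)

def prop_fair_timer(lam, alpha):
    # Valid when lam < alpha; gives hit probability lam / alpha.
    return -math.log(1.0 - lam / alpha) / lam

def maxmin_fair_timer(lam, alpha):
    # Valid when alpha > 1; gives hit probability 1 / alpha for every file.
    return -math.log(1.0 - 1.0 / alpha) / lam

alpha = 3.0
for lam in (0.2, 0.9, 2.4):
    assert abs(reset_ttl_hit(lam, prop_fair_timer(lam, alpha)) - lam / alpha) < 1e-9
    assert abs(reset_ttl_hit(lam, maxmin_fair_timer(lam, alpha)) - 1.0 / alpha) < 1e-9

# Estimator bias: averaging k exponential inter-arrival times is
# unbiased for 1/lam, but its reciprocal is biased upward for lam.
random.seed(0)
lam, k, trials = 2.0, 5, 20000
xbars = [statistics.fmean(random.expovariate(lam) for _ in range(k))
         for _ in range(trials)]
mean_inv = statistics.fmean(xbars)                     # close to 1/lam = 0.5
mean_rate = statistics.fmean(1.0 / x for x in xbars)   # close to k*lam/(k-1) = 2.5
```

Note that the max-min fair timer depends on $\lambda_i$ only through the prefactor $1/\lambda_i$, which is why an estimator unbiased for $1/\lambda_i$ suffices there, whereas the proportionally fair timer also involves $\lambda_i/\alpha$ inside the logarithm.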
It is clear from the above equations that computing timer parameters for the max-min fair policy only requires estimates of $1/\lambda_i$, and hence the results are accurate. The proportionally fair policy, on the other hand, requires estimating $\lambda_i$ as well; hence, using a biased estimate of $\lambda_i$ introduces some error. To confirm the above reasoning, we also simulate the proportionally fair policy assuming perfect knowledge of the request rates. Figure~\ref{fig:prop_exact} shows that in this case simulation results exactly match the numerical values. \begin{figure}[h] \centering \begin{subfigure}[b]{0.50\linewidth} \centering\includegraphics[scale=0.20]{prop_dual_hit.eps} \caption{\label{fig:prop_exact}} \end{subfigure}% \begin{subfigure}[b]{0.50\linewidth} \centering\includegraphics[scale=0.20]{prop_pd_hit_est.eps} \caption{\label{fig:prop_pd}} \end{subfigure} \vspace{-0.25cm} \caption{Proportionally fair policy implemented using the (a) dual algorithm with exact knowledge of $\lambda_i$s, and (b) primal-dual algorithm with ${\delta_m(t_i, \alpha) = \lambda_i}$ and ${\delta_h(t_i, \alpha) = \alpha - \lambda_i}$, with approximate $\lambda_i$ values.} \label{fig:prop_fair} \end{figure} We can also use the primal-dual algorithm to implement the proportionally fair policy. Here, we implement this policy using the update rules in~\eqref{eq:prop_pd}, and estimated values for the request rates. Figure~\ref{fig:prop_pd} shows that, unlike the dual approach, the simulation results match the numerical values. This example demonstrates how one algorithm may be more desirable than others in implementing a specific policy. The algorithms explained in Section~\ref{sec:online} are proven to be globally asymptotically stable, and converge to the optimal solution. Figure~\ref{fig:lru_dual_var} shows the convergence of the dual variable for the LRU policy.
The red line in this figure shows $1/T=6.8\times 10^{-4}$ where $T$ is the characteristic time of the LRU cache computed according to the discussion in Section~\ref{sec:reverse}. Also, Figure~\ref{fig:lru_cache_size} shows how the number of contents in the cache is centered around the capacity $B$. The probability density and complementary cumulative distribution function (CCDF) for the number of files in cache are shown in Figure~\ref{fig:cs}. The probability of violating the capacity $B$ by more than $10\%$ is less than $2.5\times 10^{-4}$. For larger systems, \emph{i.e.}\ for large $B$ and $N$, the probability of violating the target cache capacity becomes infinitesimally small; see the discussion in Section~\ref{sec:violation}. This is what we also observe in our simulations. Similar behavior in the convergence of the dual variable and cache size is observed in implementing the other policies as well. \begin{figure}[h] \centering \begin{subfigure}[b]{0.5\linewidth} \centering\includegraphics[scale=0.21]{lru_dual_var.eps} \caption{\label{fig:lru_dual_var}} \end{subfigure}% \begin{subfigure}[b]{0.5\linewidth} \centering\includegraphics[scale=0.21]{cs_conv.eps} \caption{\label{fig:lru_cache_size}} \end{subfigure} \vspace{-0.25cm} \caption{Convergence and stability of dual algorithm for the utility function representing LRU policy.} \label{fig:lru_dual} \end{figure} \begin{figure}[h] \centering \begin{subfigure}[b]{0.5\linewidth} \includegraphics[scale=0.21]{cs_distr.eps} \end{subfigure}% \begin{subfigure}[b]{0.5\linewidth} \includegraphics[scale=0.21]{cs_ccdf.eps} \end{subfigure} \vspace{-0.25cm} \caption{Cache size distribution and CCDF from dual algorithm with the utility function representing LRU policy.} \label{fig:cs} \end{figure} \section{Final Remarks}
1508.06787
\section{Introduction} In many `reasonable' cohomology theories, one expects that the relative cohomology of a `fibration' $f: X\rightarrow S$ behaves as nicely as possible, that is that the higher direct image sheaves should be locally constant, and their fibres should be the cohomology groups of the fibres of $f$. For example, if one takes $f$ to be a smooth and proper morphism of algebraic varieties, then the higher direct images for de Rham cohomology (in characteristic zero) or $\ell$-adic \'{e}tale cohomology (in characteristic different from $\ell$) are `local systems' in the appropriate sense, with the expected fibres. Berthelot's conjecture is a version of this general philosophy for $p$-adic cohomology: roughly speaking, it states that if we take a smooth and proper morphism $f:X\rightarrow S$ of varieties in characteristic $p$, and an overconvergent $F$-isocrystal $E$ on $X$, then the higher direct images $\mathbf{R}^qf_*E$ should be overconvergent $F$-isocrystals on $S$. According to the various different perspectives that one can take on both the coefficient objects of $p$-adic cohomology and their push-forwards, there are many different ways to state Berthelot's conjecture, some stronger, some weaker, and some (currently) logically independent, and the aim of this short article is two-fold. Firstly, it is to act as a brief survey of the various forms that Berthelot's conjecture can take, and of the special cases and implications between them that are currently known. Secondly, it is to show some new (but reasonably straightforward) comparisons between different constructions of push-forwards in $p$-adic cohomology, which will then allow us to deduce some new cases of certain versions of Berthelot's conjecture. While the general form of Berthelot's conjecture still remains very open, the version of it that we manage to prove here still has some interesting applications, see for example \cite{ES15b} or \cite{Pal15b}.
In the first couple of sections, therefore, we review various definitions of coefficient objects and their push-forwards, concentrating on four main perspectives: that of convergent isocrystals, overconvergent isocrystals (in two different ways) and overholonomic $\cur{D}$-modules. For each of these perspectives on coefficient objects, there is a corresponding way to phrase Berthelot's conjecture, and we are thus led to consider four types of conjecture. Viewing overconvergent isocrystals simply as modules with integrable connection on some frame leads to the `B' type conjectures; if we view them as modules with overconvergent stratifications, or more generally as collections of realisations with comparison morphisms, then the most natural formulation gives what we call the `S' type conjectures. If we include Frobenius structures and view them as a full subcategory of convergent $F$-isocrystals then we obtain `O' type conjectures, and finally, considering them as certain kinds of overholonomic $\cur{D}$-modules gives `C' type conjectures. While there are reasonably clear implications between the `B', `S' and `O' type conjectures, the lack of good comparisons between push-forwards of $\cur{D}$-modules and push-forwards of overconvergent isocrystals in general means that there are few straightforward implications between the `C' conjectures and the others. Since it is the `C' conjectures for which, thanks to Caro's work, most cases are known (in particular, all quasi-projective cases), it is therefore especially disappointing that it is these `C' conjectures that are the most difficult to relate to the others.
Our rather modest contribution here is to note a few special cases of such comparison theorems between push-forwards, which enables us to deduce `O' type conjectures with a reasonably respectable level of generality (namely, for smooth \emph{projective} morphisms $X\rightarrow S$), and `B' type conjectures with a somewhat less respectable level of generality (see Corollary \ref{bp} for a precise statement). The main difficulty in extending these results is the somewhat indirect comparison between overconvergent isocrystals and overcoherent isocrystals (which are certain kinds of arithmetic $\cur{D}$-modules). The equivalence of categories constructed by Caro makes fundamental use of both resolution and gluing arguments, and therefore if one is to obtain the required comparisons between push-forwards, one needs to know certain cases of finiteness and base change for rigid higher direct images in order to push these objects through the construction; in other words, one needs to know certain cases of `S' or `B' type conjectures before one starts! The reason we could get our arguments to work here is essentially that we bootstrap the few cases in which one has a direct comparison between overconvergent and overcoherent isocrystals as far as possible; for `O' type conjectures this does in fact give reasonable results, but it is still rather inadequate for `B' or `S' type conjectures. One would hope that a direct comparison would lead to an easy implication from the `C' type conjectures proved by Caro to the conjectures of the other types. \subsection*{Notations and conventions} Throughout, $k$ will be a perfect field of characteristic $p>0$, $\cur{V}$ will be a complete DVR of mixed characteristic with residue field $k$ and fraction field $K$, and $\pi$ will be a uniformiser for $\cur{V}$.
A $k$-variety will mean a separated $k$-scheme of finite type, and a formal $\cur{V}$-scheme will mean a $\pi$-adic formal scheme, separated and of finite type over $\spf{\cur{V}}$. If $X$ is an ${\mathbb F}_p$-scheme, absolute Frobenius will mean some fixed power of the $p$-power absolute Frobenius on $X$. For any $k$-variety $X$ we will denote the reduced subscheme by $X_\mathrm{red}$. For any formal $\cur{V}$-scheme $\frak{X}$, we will denote the special fibre by $\frak{X}_0$, and the generic fibre by $\frak{X}_K$; this is a rigid space over $K$ in the sense of Tate. If $\cur{F}$ is an abelian sheaf on some site, we will denote by $\cur{F}_{\mathbb Q}$ the tensor product $\cur{F}\otimes_{{\mathbb Z}}{\mathbb Q}$. \section{Categories of isocrystals}\label{coeffs} In this section, we review the various categories of coefficients that are used in $p$-adic cohomology, and the various comparison theorems between them. We start with the category of convergent isocrystals, following Ogus \cite{Ogu84}. Let $X$ be a $k$-scheme. The convergent site of $X/\cur{V}$ consists of pairs $(\frak{T},z_\frak{T})$ where $\frak{T}$ is a flat formal $\cur{V}$-scheme and $z_\frak{T}:(\frak{T}_0)_\mathrm{red}\rightarrow X$ is a morphism of $k$-varieties. The topology is induced by the Zariski topology on $\frak{T}$, and the associated topos is denoted $(X/\cur{V})_\mathrm{conv}$. We will usually drop $z_\frak{T}$ from the notation, and refer to an object of the convergent site simply as $\frak{T}$. We can describe sheaves $E$ on this site as `realisations' $E_\frak{T}$ and transition morphisms $$ g^{-1}E_\frak{T}\rightarrow E_{\frak{T}'} $$ associated to $g:\frak{T}'\rightarrow \frak{T}$ in the usual way. In particular, we have the canonical sheaf $\cur{K}_{X/\cur{V}}$ whose realisation on $\frak{T}$ is $\cur{O}_{\frak{T},{\mathbb Q}}$.
\begin{definition} A convergent isocrystal on $X$ is a $\cur{K}_{X/\cur{V}}$-module $E$ such that each realisation $E_\frak{T}$ is a coherent $\cur{O}_{\frak{T},{\mathbb Q}}$-module, and the \emph{linearised} transition morphism $$ g^*E_\frak{T}\rightarrow E_{\frak{T}'} $$ associated to any $g:\frak{T}'\rightarrow \frak{T}$ is an isomorphism. The category of such objects is denoted $\mathrm{Isoc}(X/K)$. \end{definition} \begin{proposition}[\cite{Ogu84}, Theorem 2.15] \label{convcon} Suppose that $\frak{X}$ is a smooth formal $\cur{V}$-scheme with special fibre $X$. Then the realisation functor $E\mapsto E_\frak{X}$ induces a fully faithful functor from $\mathrm{Isoc}(X/K)$ to the category of coherent $\cur{O}_{\frak{X},{\mathbb Q}}$-modules with integrable connection. \end{proposition} \begin{remark} There is also a version of this proposition where we embed $X$ into a smooth formal $\cur{V}$-scheme. We will see this appearing later on. \end{remark} These objects are functorial in both $X$ and $\cur{V}$, in that a commutative diagram $$ \xymatrix{ Y \ar[r] \ar[d] & X \ar[d] \\ \spf{\cur{W}} \ar[r] & \spf{\cur{V}} } $$ induces a pullback functor $\mathrm{Isoc}(X/K)\rightarrow \mathrm{Isoc}(Y/L)$ (where $L=\mathrm{Frac}(\cur{W})$). In particular, after choosing a lift to $\cur{V}$ of the absolute Frobenius of $k$, we can talk about convergent isocrystals with Frobenius structure, the category of such objects being denoted $F\textrm{-}\mathrm{Isoc}(X/K)$, and there is an analogue of Proposition \ref{convcon} if $\frak{X}$ is equipped with a lift of absolute Frobenius. Next we introduce (partially) overconvergent isocrystals, following Berthelot \cite{Ber96b} and Le Stum \cite{LS07}. Before we do so we need to introduce pairs and frames, as well as Berthelot's functor $j^\dagger$ of overconvergent sections. \begin{definition} A $k$-pair consists of an open embedding $X\rightarrow \overline{X}$ of $k$-varieties. 
A $\cur{V}$-frame consists of a $k$-pair $(X,\overline{X})$ and a closed embedding $\overline{X}\rightarrow \frak{X}$ of $\overline{X}$ into a formal $\cur{V}$-scheme. We will say that a pair/frame is proper if $\overline{X}$ is, and that a frame is smooth if $\frak{X}$ is smooth in a neighbourhood of $X$. A morphism of pairs/frames is just a commutative diagram, and we will say a morphism of pairs $(X,\overline{X})\rightarrow (S,\overline{S})$ is Cartesian if the associated commutative square is. Smoothness/properness of a morphism of pairs or frames is defined as before. If $(X,\overline{X})$ is a pair, then a frame over $(X,\overline{X})$ is a frame $(Y,\overline{Y},\frak{Y})$ together with a morphism $(Y,\overline{Y})\rightarrow (X,\overline{X})$ of pairs. \end{definition} If we have a frame $(X,\overline{X},\frak{X})$, then we can consider the specialisation map $\mathrm{sp}:\frak{X}_K\rightarrow \frak{X}_0$, and for any locally closed subscheme $V\subset \frak{X}_0$ we define the tube $$ ]V[_\frak{X}:= \mathrm{sp}^{-1}(V). $$ If $]X[_\frak{X}\subset V \subset ]\overline{X}[_\frak{X}$ is an open subset of $]\overline{X}[_\frak{X}$, then we will call $V$ a strict neighbourhood of $]X[_\frak{X}$ if the covering $$ ]\overline{X}[_\frak{X} = V \cup ]\overline{X}\setminus X[_\frak{X} $$ is admissible for the $G$-topology. For any sheaf $\cur{F}$ on $]\overline{X}[_\frak{X}$ we define $$ j_X^\dagger\cur{F}:=\mathrm{colim}_V j_{V*}j_V^{-1}\cur{F} $$ where the colimit is taken over all strict neighbourhoods $V$, and $j_V:V\rightarrow ]\overline{X}[_\frak{X}$ denotes the inclusion. If $E$ is a $j^\dagger_X\cur{O}_{]\overline{X}[_\frak{X}}$-module, then an integrable connection on $E$ is just an integrable connection on $E$ as an $\cur{O}_{]\overline{X}[_\frak{X}}$-module. 
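To fix ideas, here is a standard illustrative example (not taken from the text above): consider the frame $({\mathbb G}_m,\,{\mathbb A}^1_k,\,\widehat{{\mathbb A}}^1_{\cur{V}})$, whose generic fibre is the closed unit disc with coordinate $z$.

```latex
% Illustrative example: tubes and strict neighbourhoods in the affine line.
% For the frame (G_m, A^1_k, \widehat{A}^1_V) the tubes are
\[
]\,{\mathbb A}^1_k\,[ \;=\; \{\,|z|\leq 1\,\},\qquad
]\,\{0\}\,[ \;=\; \{\,|z|<1\,\},\qquad
]\,{\mathbb G}_m\,[ \;=\; \{\,|z|=1\,\},
\]
% and the annuli
\[
V_\lambda \;=\; \{\,\lambda\leq |z|\leq 1\,\},\qquad \lambda<1,
\]
% form a cofinal family of strict neighbourhoods of ]G_m[, so that sections
% of j^\dagger O over the closed disc are analytic functions converging on
% some such annulus, i.e. overconvergent functions.
```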
\begin{definition} An overconvergent isocrystal on the pair $(X,\overline{X})$ consists of a collection $E_\frak{Y}$ of coherent $j^\dagger_Y\cur{O}_{]\overline{Y}[_\frak{Y}}$-modules, one for each frame $(Y,\overline{Y},\frak{Y})$ over $(X,\overline{X})$, together with isomorphisms $u^*E_\frak{Y}\rightarrow E_\frak{Z}$ associated to each morphism $u:(Z,\overline{Z},\frak{Z})\rightarrow (Y,\overline{Y},\frak{Y})$ of frames, satisfying the usual cocycle conditions. The category of such objects is denoted $\mathrm{Isoc}^\dagger((X,\overline{X})/K)$, and we refer to the $E_\frak{Y}$ as the realisations of $E$. \end{definition} \begin{proposition}[\cite{LS07}, Proposition 7.2.13] Suppose that $(X,\overline{X},\frak{X})$ is a smooth frame. Then the realisation functor $E\mapsto E_\frak{X}$ induces a fully faithful functor from $\mathrm{Isoc}^\dagger((X,\overline{X})/K)$ to the category $\mathrm{MIC}((X,\overline{X},\frak{X})/K)$ of coherent $j^\dagger\cur{O}_{]\overline{X}[_\frak{X}}$-modules with integrable connection. \end{proposition} Of course, as before, we have functoriality as well as a version with Frobenius structures, denoted $F\textrm{-}\mathrm{Isoc}^\dagger((X,\overline{X})/K)$. The category $(F\textrm{-})\mathrm{Isoc}(X/K)$ is local on $X$, and $(F\textrm{-})\mathrm{Isoc}^\dagger((X,\overline{X})/K)$ is local on both $X$ and $\overline{X}$. When the pair $(X,\overline{X})$ is proper, $(F\textrm{-})\mathrm{Isoc}^\dagger((X,\overline{X})/K)$ depends only on $X$; we will therefore write $(F\textrm{-})\mathrm{Isoc}^\dagger(X/K)$. \begin{definition} Let $(X,\overline{X},\frak{X})$ be a smooth frame. Then an overconvergent stratification on a coherent $j_X^\dagger\cur{O}_{]\overline{X}[_\frak{X}}$-module $E$ is an isomorphism $$ p_2^*E\cong p_1^*E $$ of $j_X^\dagger\cur{O}_{]\overline{X}[_{\frak{X}^2}}$-modules, where $\overline{X}$ is embedded in $\frak{X}^2$ via the diagonal, and $p_i:\frak{X}^2\rightarrow \frak{X}$ are the two projections.
This isomorphism is subject to the usual conditions, for example it should be the identity after being pulled back via $\Delta:\frak{X}\rightarrow \frak{X}^2$, and should satisfy a cocycle condition on $\frak{X}^3$. The category of coherent $j_X^\dagger\cur{O}_{]\overline{X}[_\frak{X}}$-modules with overconvergent stratification is denoted by $\mathrm{Strat}^\dagger((X,\overline{X},\frak{X})/K)$. There is an obvious restriction functor $$\mathrm{Isoc}^\dagger((X,\overline{X})/K)\rightarrow \mathrm{Strat}^\dagger((X,\overline{X},\frak{X})/K)$$ which is an equivalence by Proposition 7.2.2 of \cite{LS07}. \end{definition} By restricting to frames of the form $(\frak{Y}_0,\frak{Y}_0,\frak{Y})$ we also get a natural functor $$ (F\textrm{-})\mathrm{Isoc}^\dagger((X,X)/K) \rightarrow (F\textrm{-})\mathrm{Isoc}(X/K). $$ \begin{proposition}[\cite{Ber96b}, 2.3.4] This functor is an equivalence of categories. \end{proposition} \begin{remark} This gives an answer as to what the analogue of Proposition \ref{convcon} should be when $X$ is not smooth over $k$: convergent isocrystals form a full subcategory of the category of coherent $\cur{O}_{]X[_\frak{X}}$-modules with integrable connection, for any closed embedding $X\rightarrow \frak{X}$ into a smooth formal $\cur{V}$-scheme. In fact, it is this characterisation which is the key ingredient in the proof of the previous proposition. \end{remark} For any pair $(X,\overline{X})$ we get a canonical restriction functor $$ (F\textrm{-})\mathrm{Isoc}^\dagger((X,\overline{X})/K) \rightarrow (F\textrm{-})\mathrm{Isoc}(X/K) $$ and have the following theorem of Caro and Kedlaya. \begin{theorem}[\cite{Car11}, Th\'{e}or\`{e}me 2.2.1] The restriction functor $$ F\textrm{-}\mathrm{Isoc}^\dagger((X,\overline{X})/K)\rightarrow F\textrm{-}\mathrm{Isoc}(X/K) $$ is fully faithful. \end{theorem} \begin{remark}This is also conjectured to hold without Frobenius structures, but this is not currently known.
\end{remark} Finally, the most complicated category of coefficients is that of Berthelot's arithmetic $\cur{D}$-modules, as developed by Caro. Since the definitions and constructions are so involved, we will content ourselves with giving a brief overview, referring to Berthelot and Caro's work for the details. If $\frak{P}$ is a smooth formal $\cur{V}$-scheme, then we let $\cur{D}^\dagger_{\frak{P},{\mathbb Q}}$ denote the ring of overconvergent differential operators on $\frak{P}$, as constructed in \S2 of \cite{Ber96a}, and $D^b_\mathrm{coh}(\cur{D}^\dagger_{\frak{P},{\mathbb Q}})$ its (bounded, coherent) derived category. For any closed subscheme $T\subset \frak{P}_0$, we have functors $\mathbf{R}\underline{\Gamma}^\dagger_T$ and $(^\dagger T)$ of `sections with support in $T$' and `sections overconvergent along $T$' respectively, and an exact triangle $$ \mathbf{R}\underline{\Gamma}_T^\dagger\cur{E} \rightarrow \cur{E} \rightarrow (^\dagger T)\cur{E} \rightarrow \mathbf{R}\underline{\Gamma}_T^\dagger\cur{E}[1] $$ for any $\cur{E}\in D^b_\mathrm{coh}(\cur{D}^\dagger_{\frak{P},{\mathbb Q}})$. We let $D^b_\mathrm{surcoh}(\cur{D}^\dagger_{\frak{P},{\mathbb Q}})\subset D^b_\mathrm{coh}(\cur{D}^\dagger_{\frak{P},{\mathbb Q}})$ denote the full subcategory of overcoherent objects, as defined in \S3 of \cite{Car04}. We will also need a variant: if $T\subset \frak{P}_0$ is a divisor of the special fibre of $\frak{P}$, we may consider the ring $\cur{D}^\dagger_{\frak{P}}(^\dagger T)_{\mathbb Q}$ of differential operators with overconvergent singularities along $T$, as defined in \S4 of \cite{Ber96a}, as well as the categories $D^b_\mathrm{coh}(\cur{D}^\dagger_{\frak{P}}(^\dagger T)_{\mathbb Q})$ and $D^b_\mathrm{surcoh}(\cur{D}^\dagger_{\frak{P}}(^\dagger T)_{\mathbb Q})$ as before. 
There is a forgetful functor $$ D^b_\mathrm{coh}(\cur{D}^\dagger_{\frak{P}}(^\dagger T)_{\mathbb Q}) \rightarrow D^b_\mathrm{coh}(\cur{D}^\dagger_{\frak{P},{\mathbb Q}}) $$ which is fully faithful, with essential image those objects $\cur{E}$ such that $\cur{E}\cong(^\dagger T)\cur{E}$ (Lemme 1.2.1 4 of \cite{Car15a}). Be warned, however, that it does not respect the notion of overcoherence in general. Now let $(X,\overline{X})$ be a $k$-pair, and assume that we have an embedding $\overline{X}\hookrightarrow \tilde{\frak{P}}$ into a smooth and proper formal $\cur{V}$-scheme, and a divisor $\tilde{T}\subset \tilde{\frak{P}}_0$ such that $X=\overline{X}\setminus \tilde{T}$. Let $\frak{P}$ be an open formal subscheme of $\tilde{\frak{P}}$ such that $\overline{X}\rightarrow \tilde{\frak{P}}$ factors through a closed embedding $\overline{X}\rightarrow \frak{P}$, and let $T=\tilde{T}\cap \frak{P}$. Then the full subcategory of $D^b_\mathrm{surcoh}(\cur{D}^\dagger_{\frak{P}}(^\dagger T))$ consisting of objects with support in $\overline{X}$, i.e. such that $\cur{E}\cong \mathbf{R}\underline{\Gamma}^\dagger_{\overline{X}}\cur{E}$, is independent of all choices (i.e. it only depends on $(X,\overline{X})$); we therefore denote it by $D^b_\mathrm{\mathrm{surcoh}}(\cur{D}^\dagger_{(X,\overline{X})/K})$ and refer to it as the category of overcoherent $\cur{D}^\dagger$-modules on $(X,\overline{X})/K$. When $X=\overline{X}$ we will denote it instead by $D^b_\mathrm{\mathrm{surcoh}}(\cur{D}_{X/K})$. When $\overline{X}$ is proper, it depends only on $X$ and we will therefore instead write $D^b_\mathrm{surcoh}(\cur{D}^\dagger_{X/K})$.
\begin{remark} We formalise the hypothesis used on pairs in the previous paragraph as follows: a pair $(Y,\overline{Y})$ is said to be `properly $d$-realisable' if there exists a smooth and proper formal $\cur{V}$-scheme $\frak{P}$, a (not necessarily closed) immersion $\overline{Y}\rightarrow \frak{P}$ and a divisor $D$ of $\frak{P}_0$ such that $Y=\overline{Y}\setminus D$. \end{remark} \begin{lemma} \label{annoying} Let $\frak{S}$ be a smooth affine formal $\cur{V}$-scheme, with special fibre $S$. Then there exists a canonical equivalence of categories $$ D^b_{\mathrm{surcoh}}(\cur{D}^\dagger_{\frak{S},{\mathbb Q}}) \cong D^b_{\mathrm{surcoh}}(\cur{D}_{S/K}). $$ More generally, if $D\subset \overline{S}:=\frak{S}_0$ is a divisor, and $S=\overline{S}\setminus D$, then there exists a canonical equivalence of categories $$ D^b_{\mathrm{surcoh}}(\cur{D}^\dagger_{\frak{S}}(^\dagger D)_{\mathbb Q}) \cong D^b_{\mathrm{surcoh}}(\cur{D}^\dagger_{(S,\overline{S})/K}). $$ \end{lemma} \begin{proof} Note that this is not immediate from the definitions! The first is not difficult: if we choose an affine embedding $\frak{S}\hookrightarrow \widehat{{\mathbb A}}^n_\cur{V}$ then $D^b_{\mathrm{surcoh}}(\cur{D}_{S/K})$ can be identified with the full subcategory of $D^b_{\mathrm{surcoh}}(\cur{D}^\dagger_{\frak{S},{\mathbb Q}})$ consisting of objects with support in $\frak{S}$; the claimed equivalence therefore follows from the Berthelot-Kashiwara theorem (Th\'{e}or\`{e}me 3.1.6 of \cite{Car04}). The second is proved entirely similarly.
\end{proof} Whenever $X$ is smooth, we will let $\mathrm{Isoc}^{\dagger\dagger}((X,\overline{X})/K) \subset D^b_\mathrm{surcoh}(\cur{D}^\dagger_{(X,\overline{X})/K})$ denote the full subcategory of `overcoherent isocrystals' as in D\'{e}finition 1.2.4 of \cite{Car15a}; these are certain kinds of overcoherent $\cur{D}$-modules, and we will denote by $D^b_\mathrm{isoc}(\cur{D}^\dagger_{(X,\overline{X})/K})$ the full subcategory of $D^b_\mathrm{surcoh}(\cur{D}^\dagger_{(X,\overline{X})/K})$ consisting of objects whose cohomology sheaves are overcoherent isocrystals. We have a canonical equivalence of categories $$ \mathrm{sp}_{(X,\overline{X}),+} :\mathrm{Isoc}^\dagger((X,\overline{X})/K) \rightarrow \mathrm{Isoc}^{\dagger\dagger}((X,\overline{X})/K) $$ whose description we will need in the following two special cases. \begin{itemize} \item Assume that $\frak{X}$ is a smooth formal $\cur{V}$-scheme, so that we may identify objects of $\mathrm{Isoc}(X/K)$ with certain $\cur{O}_{\frak{X},{\mathbb Q}}$-modules with integrable connection, and objects of $D^b_\mathrm{surcoh}(\cur{D}_{X/K})$ with a full subcategory of $D^b_\mathrm{coh}(\cur{D}^\dagger_{\frak{X},{\mathbb Q}})$. Then $\mathrm{sp}_{(X,X),+}$ simply takes a module with integrable connection to the corresponding $\cur{D}$-module, as in Proposition 4.1.4 of \cite{Ber96a}. \label{sp1} \item Assume that $\frak{X}$ is a smooth formal $\cur{V}$-scheme, that $T\subset \overline{X}:=\frak{X}_0$ is a divisor, and set $X=\overline{X}\setminus T$. Let $\mathrm{sp}:\frak{X}_K\rightarrow\frak{X}$ be the specialisation map. Then thanks to Proposition 4.4.2 of \cite{Ber96a}, $\mathrm{sp}_*$ induces an equivalence of categories between $\mathrm{Isoc}^\dagger((X,\overline{X})/K)$ and certain coherent $\cur{O}_{\frak{X},{\mathbb Q}}(^\dagger T)$-modules with integrable connection.
Since we may identify objects of $D^b_\mathrm{surcoh}(\cur{D}^\dagger_{(X,\overline{X})/K})$ with a full subcategory of $D^b_\mathrm{coh}(\cur{D}^\dagger_{\frak{X}}(^\dagger T))$, the functor $\mathrm{sp}_{(X,\overline{X}),+}$ is again that taking an integrable connection to the associated $\cur{D}$-module, as in Th\'{e}or\`{e}me 4.4.5 of \emph{loc. cit}. \label{sp2} \end{itemize} \begin{remark} One can also construct $D^b_\mathrm{isoc}(\cur{D}^\dagger_{(X,\overline{X})})$ for non-smooth $X$, although the definition is more involved, and it is not clear whether or not we have the equivalence of categories $$ \mathrm{Isoc}^\dagger((X,\overline{X})/K) \cong \mathrm{Isoc}^{\dagger\dagger}((X,\overline{X})/K) $$ in this case. \end{remark} To define $D^b_\mathrm{\mathrm{surcoh}}(\cur{D}^\dagger_{(X,\overline{X})/K})$ and $D^b_\mathrm{isoc}(\cur{D}^\dagger_{(X,\overline{X})/K})$ in general we use Zariski descent: by localising on $\overline{X}$ and $X$ we may assume that we are in the `properly $d$-realisable' situation as above, and the corresponding categories glue. For more details on how to do this, see for example Remarque 3.2.10 of \cite{Car04}. \section{Push-forwards in $p$-adic cohomology} For the various different categories of $p$-adic coefficients, there are different ways of viewing higher direct images, and in this section we will review the basic constructions of such push-forwards. Again, we start with convergent isocrystals. Suppose that $X$ is a $k$-variety, $\frak{S}$ is a formal $\cur{V}$-scheme, and $f:X\rightarrow \frak{S}$ is a morphism of formal $\cur{V}$-schemes. Then we can define the category of convergent isocrystals on $X/\frak{S}$ exactly as in \S\ref{coeffs}, only taking formal schemes with a given structure morphism to $\frak{S}$. 
There is a canonical morphism of topoi $$ (X/\cur{V})_\mathrm{conv} \rightarrow (X/\frak{S})_\mathrm{conv} $$ induced by `forgetting' the structure morphism to $\frak{S}$; push-forward is exact and sends isocrystals to isocrystals. We then consider the morphism of topoi $$ f_{\frak{S},\mathrm{conv}}:(X/\frak{S})_\mathrm{conv} \rightarrow \frak{S}_\mathrm{Zar} $$ induced by the functor taking an open subset $\frak{U}$ of $\frak{S}$ to the object $(f^{-1}\frak{U},\frak{U})$ of $(X/\frak{S})_\mathrm{conv}$. For any convergent isocrystal on $X/\frak{S}$, we can therefore consider the $\cur{O}_{\frak{S},{\mathbb Q}}$-modules $$ \mathbf{R}^qf_{\frak{S},\mathrm{conv}*}E\in \frak{S}_\mathrm{Zar}. $$ \begin{proposition} \label{derham} Suppose that $f:\frak{X}\rightarrow\frak{S}$ is a smooth morphism of formal $ \cur{V}$-schemes lifting $f:X\rightarrow \frak{S}$. \begin{enumerate} \item There is an equivalence of categories $E\mapsto E_\frak{X}$ between convergent isocrystals on $X/\frak{S}$ and a full subcategory of the category of coherent $\cur{O}_{\frak{X},{\mathbb Q}}$-modules with integrable connection relative to $\frak{S}$. \item For any convergent isocrystal $E$ on $X/\frak{S}$, there is a canonical isomorphism $$ \mathbf{R}^qf_{\frak{S},\mathrm{conv}*}E \cong \mathbf{R}^qf_{*} ( E_\frak{X} \otimes\Omega^*_{\frak{X}/\frak{S}}). $$ \end{enumerate} \end{proposition} \begin{proof} Note that Proposition 2.21 of \cite{Shi08a} (in the exceptionally simple case where the log structure is trivial and $\cur{P}_0=X$) implies that there is an equivalence of categories between convergent isocrystals and coherent $\cur{O}_{\frak{X}_K}$-modules with a convergent integrable connection, which implies (1); (2) then follows immediately from Corollary 2.33 of \emph{loc. cit}.
\end{proof} Hence, by localising, whenever $X$ is smooth over $\frak{S}_0$, $\frak{S}$ is smooth, and $E\in \mathrm{Isoc}(X/K)$, the $\cur{O}_{\frak{S},{\mathbb Q}}$-modules $\mathbf{R}^qf_{\frak{S},\mathrm{conv}*}E$ are equipped with a canonical connection, the Gauss--Manin connection. In fact, the assumption that $X$ is smooth over $\frak{S}_0$ is unnecessary, but we will not need this directly. Also, if $\frak{S}$ comes equipped with a lift $\sigma_\frak{S}$ of the absolute Frobenius of $\frak{S}_0$, and $E\in F\textrm{-}\mathrm{Isoc}(X/K)$, then the sheaf $\mathbf{R}^qf_{\frak{S},\mathrm{conv}*}E$ has a natural Frobenius morphism $$ \sigma_\frak{S}^* \mathbf{R}^qf_{\frak{S},\mathrm{conv}*}E \rightarrow \mathbf{R}^qf_{\frak{S},\mathrm{conv}*}E $$ which is compatible with the connection. Now assume that we have a smooth and proper morphism $f:X\rightarrow S$ of $k$-varieties. Then in \S3 of \cite{Ogu84}, Ogus constructs, for any convergent $F$-isocrystal $E\in F\textrm{-}\mathrm{Isoc}(X/K)$, and $q\geq0$, a convergent $F$-isocrystal $\mathbf{R}^qf_{\mathrm{conv}*}E\in F\textrm{-}\mathrm{Isoc}(S/K)$ using crystalline cohomology (actually he only does this for the constant $F$-isocrystal, but essentially the same construction works in general, using Th\'{e}or\`{e}me 2.4.2 of \cite{Ber96b}). The construction is compatible with base change, in that if we have a Cartesian diagram $$ \xymatrix{ X'\ar[r]^{g'}\ar[d]_{f'} & X\ar[d]^f \\ S'\ar[r]^g & S } $$ then there is a natural isomorphism $$ g^*\mathbf{R}^qf_{\mathrm{conv}*}E\cong \mathbf{R}^qf'_{\mathrm{conv}*}g'^*E $$ in $F\textrm{-}\mathrm{Isoc}(S'/K)$. \begin{proposition} \label{pfconv} Suppose that $\frak{S}$ is smooth over $\cur{V}$, and that $\sigma_\frak{S}$ is a lift of Frobenius.
Then for any smooth and proper morphism $f:X\rightarrow\frak{S}_0$ of $k$-varieties, and any $E\in F\textrm{-}\mathrm{Isoc}(X/K)$, the realisation $(\mathbf{R}^qf_{\mathrm{conv}*}E)_\frak{S}$ is isomorphic to $\mathbf{R}^qf_{\frak{S},\mathrm{conv}*}E$ with its canonical Frobenius and connection. \end{proposition} \begin{proof} If we ignore Frobenius structures, then this just follows from the fact that $\mathbf{R}^qf_{\mathrm{conv}*}E$ is constructed using crystalline cohomology, and hence the induced connection is simply the Gauss--Manin connection. Compatibility with Frobenius then follows from compatibility with base change in general. \end{proof} Next we turn to push-forwards for overconvergent isocrystals. \begin{definition} Let $f:(X,\overline{X},\frak{X})\rightarrow (S,\overline{S},\frak{S})$ be a smooth morphism of smooth frames, and $E\in (F)\textrm{-}\mathrm{Isoc}^\dagger((X,\overline{X})/K)$. Then define $$ \mathbf{R}^qf_{\frak{S},\mathrm{rig}*}E := \mathbf{R}^qf_*(E_\frak{X}\otimes \Omega^*_{]\overline{X}[_\frak{X}/]\overline{S}[_\frak{S}}). $$ \end{definition} This only depends on the induced morphism $f:(X,\overline{X})\rightarrow (S,\overline{S},\frak{S})$ and not on the choice of $\frak{X}$, and if $\overline{X}$ is proper, then it in fact only depends on $f:X\rightarrow (S,\overline{S},\frak{S})$. The construction is local on $\overline{X}$, and hence to define $\mathbf{R}^qf_{\frak{S},\mathrm{rig}*}E$ for an arbitrary pair $(X,\overline{X})$ over $(S,\overline{S},\frak{S})$ (i.e. one not necessarily admitting an extension to a smooth morphism of frames $(X,\overline{X},\frak{X})\rightarrow (S,\overline{S},\frak{S})$) we can use descent. The details are somewhat tedious, so we will not go into them here; for a detailed description of how this works, see \cite{CT03}. Note that $\mathbf{R}^qf_{\frak{S},\mathrm{rig}*}E$ is a $j^\dagger_S\cur{O}_{]\overline{S}[_\frak{S}}$-module, and is equipped with a canonical connection, the Gauss--Manin connection.
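To illustrate why overconvergence is built into this definition, it may help to spell out the simplest possible case; the following is the standard Monsky--Washnitzer computation for the affine line, included here purely as an illustration and not taken from any of the references above. Take the base frame $(\mathrm{Spec}(k),\mathrm{Spec}(k),\mathrm{Spf}(\cur{V}))$ and the frame $({\mathbb A}^1_k,\P^1_k,\widehat{\P}^1_\cur{V})$ with constant coefficient $\cur{O}^\dagger_{{\mathbb A}^1_k/K}$. Global sections of $j^\dagger_{{\mathbb A}^1_k}\cur{O}$ are series converging on a disc of some radius $\rho>1$, so the complex computing the push-forward is the overconvergent de Rham complex:

```latex
% Standard sanity check (A^1 over a point): the push-forward is computed by
% the overconvergent de Rham complex of the weakly complete Tate algebra.
$$ \mathbf{R}^qf_{\mathrm{Spf}(\cur{V}),\mathrm{rig}*}\cur{O}^\dagger_{{\mathbb A}^1_k/K}
  \cong H^q\Big( K\langle t\rangle^\dagger
      \xrightarrow{\;d/dt\;} K\langle t\rangle^\dagger\,dt \Big),
\qquad
K\langle t\rangle^\dagger
  = \Big\{ \sum\nolimits_n a_n t^n : |a_n|\rho^n \rightarrow 0
      \text{ for some } \rho>1 \Big\}. $$
```

This gives $H^0=K$ and $H^1=0$: the antiderivative $\sum_n \frac{a_n}{n+1}t^{n+1}$ of an overconvergent series is again overconvergent, since $|n+1|^{-1}$ grows at most polynomially in $n$. On the Tate algebra $K\langle t\rangle$ itself, by contrast, $d/dt$ is not surjective, which is one way of seeing why the convergent theory produces pathologically large cohomology for open varieties.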
When $\frak{S}$ admits a lift of Frobenius, then these sheaves are also equipped with a Frobenius morphism. Using the restriction functor $$ (F\textrm{-})\mathrm{Isoc}^\dagger((X,\overline{X})/K)\rightarrow (F\textrm{-})\mathrm{Isoc}(X/K) $$ we can also define $\mathbf{R}^qf_{\frak{S},\mathrm{conv}*}E$ for any $E\in (F\textrm{-})\mathrm{Isoc}^\dagger((X,\overline{X})/K)$, and $\mathbf{R}^qf_{\mathrm{conv}*}E$ whenever $X\rightarrow S$ is smooth and proper and $E\in F\textrm{-}\mathrm{Isoc}^\dagger((X,\overline{X})/K)$. The relation between these is as follows. \begin{proposition}\label{occ} Let $(S,\overline{S},\frak{S})$ be a smooth frame, and suppose that $f:(X,\overline{X})\rightarrow (S,\overline{S})$ is a Cartesian morphism of pairs. Assume that $\frak{S}$ admits a lift of the absolute Frobenius of $\overline{S}$, and let $\frak{S}'$ be an open subset of $\frak{S}$, stable under Frobenius, such that $\frak{S}'\cap \overline{S}=S$. Then for any $E\in F\textrm{-}\mathrm{Isoc}^\dagger((X,\overline{X})/K)$ the restriction of $\mathbf{R}^qf_{\frak{S},\mathrm{rig}*}E$ to $]S[_\frak{S} = ]S[_{\frak{S'}}$ is isomorphic to the realisation of $\mathbf{R}^qf_{\mathrm{conv}*}E\in F\textrm{-}\mathrm{Isoc}(S/K)\cong F\textrm{-}\mathrm{Isoc}^\dagger((S,S)/K)$ on $(S,S,\frak{S}')$. \end{proposition} \begin{proof} We may assume that $S=\overline{S}$, and $\frak{S}'=\frak{S}$, where the claim follows from Corollary 2.34 of \cite{Shi08a}. (Shiho actually treats the more general case of log schemes, which includes the above as a special case). \end{proof} \begin{remark} Shiho's relative log convergent cohomology is used in \cite{CT14} to obtain a Clemens-Schmidt type exact sequence in $p$-adic cohomology. \end{remark} Again, the most involved of the push-forward constructions is the version in Berthelot's theory of arithmetic $\cur{D}$-modules, and we will only give the briefest of overviews here. 
So suppose that $f:\frak{P}'\rightarrow \frak{P}$ is a smooth morphism of formal $\cur{V}$-schemes. Then Berthelot constructs in (4.3.7) of \cite{Ber02} a $(f^{-1}\cur{D}^\dagger_{\frak{P},{\mathbb Q}},\cur{D}^\dagger_{\frak{P}',{\mathbb Q}})$-bimodule $\cur{D}^\dagger_{\frak{P}'\rightarrow \frak{P},{\mathbb Q}}$ and defines the push-forward of a complex $\cur{E}\in D^b_\mathrm{coh}(\cur{D}^\dagger_{\frak{P}',{\mathbb Q}})$ to be $$ f_+ \cur{E} := \mathbf{R}f_*(\cur{D}^\dagger_{\frak{P}'\rightarrow \frak{P},{\mathbb Q}}\otimes^\mathbf{L}_{\cur{D}^\dagger_{\frak{P}',{\mathbb Q}}} \cur{E}). $$ There is a similar version for differential operators overconvergent along a divisor, assuming that the pullback of the divisor on $\frak{P}$ is contained in that on $\frak{P}'$. If $f:(X,\overline{X})\rightarrow (S,\overline{S})$ is a \emph{proper} morphism of properly $d$-realisable pairs, then we may construct a diagram $$ \xymatrix{ \overline{X} \ar[r]\ar[d]_f & \frak{P}'\ar[r]\ar[d] & \tilde{\frak{P}}' \ar[d]^g \\ \overline{S} \ar[r] & \frak{P} \ar[r] & \tilde{\frak{P}} } $$ with both left hand horizontal arrows closed immersions, both right hand horizontal arrows open immersions, $\tilde{\frak{P}}'\rightarrow\tilde{\frak{P}}$ a smooth and proper morphism between smooth and proper formal $\cur{V}$-schemes, and the right hand square Cartesian, such that there exist divisors $T,D$ of $\tilde{\frak{P}}_0',\tilde{\frak{P}}_0$ respectively with $X=\overline{X}\setminus T$, $S=\overline{S}\setminus D$, and $g^{-1}(D)\subset T$, as in Lemme 4.2.8 of \cite{Car15a}.
We may then define the push-forward $$ f_+:D^b_\mathrm{surcoh}(\cur{D}^\dagger_{(X,\overline{X})/K}) \rightarrow D^b_\mathrm{surcoh}(\cur{D}^\dagger_{(S,\overline{S})/K}) $$ as simply the push-forward $g_+$ associated to the lift $g:\tilde{\frak{P}}'\rightarrow \tilde{\frak{P}}$: this does indeed land in the category $D^b_\mathrm{surcoh}(\cur{D}^\dagger_{(S,\overline{S})/K})\subset D^b_\mathrm{coh}(\cur{D}^\dagger_{\tilde{\frak{P}}}(^\dagger D)_{\mathbb Q})$, and does not depend on any of the choices (see Proposition 4.2.7 of \emph{loc. cit.}). \section{Versions of Berthelot's conjecture}\label{conjs} According to the different interpretations of the category of overconvergent ($F$)-isocrystals, there are correspondingly different versions of Berthelot's conjecture, each of which is most naturally adapted to a particular viewpoint on the category of isocrystals. In this section, we review some of the different versions of Berthelot's conjecture, and discuss some of the easier implications among them. We start with Berthelot's original formulation, which is the one most closely related to the viewpoint of overconvergent isocrystals as $j^\dagger\cur{O}$-modules with connection. \begin{conjectureu}[B(F), \cite{Ber86} \S4.3] Let $(S,\overline{S},\frak{S})$ be a smooth and proper $\cur{V}$-frame, and $f:X\rightarrow S$ a smooth and proper morphism of $k$-varieties. Then the $j_S^\dagger\cur{O}_{]\overline{S}[_\frak{S}}$-module with integrable connection $\mathbf{R}^qf_{\frak{S},\mathrm{rig}*}\cur{O}_{X/K}^\dagger$ arises from a unique overconvergent ($F$)-isocrystal $$ \mathbf{R}^qf_{\mathrm{rig}*}\cur{O}_{X/K}^\dagger \in (F)\textrm{-}\mathrm{Isoc}^\dagger(S/K). $$ In other words, $\mathbf{R}^qf_{\mathrm{rig}*}\cur{O}_{X/K}^\dagger$ is coherent, the connection is overconvergent, and the resulting object in $\mathrm{Isoc}^\dagger(S/K) \cong \mathrm{MIC}^\dagger((S,\overline{S},\frak{S})/K)$ only depends on $S$ and not on the choice of frame $(S,\overline{S},\frak{S})$.
(Moreover, this object has a canonical Frobenius structure.) \end{conjectureu} \begin{remark} When referring to this and any other form of Berthelot's conjecture, we will use, for example, `Conjecture B' to refer to the conjecture without Frobenius structure, and `Conjecture BF' to refer to the conjecture with Frobenius structure. \end{remark} We also have the following slightly more general formulation of Berthelot's original conjecture, due to Tsuzuki. \begin{conjectureu}[B1(F), \cite{Tsu03}] Suppose that $(X,\overline{X})\rightarrow (S,\overline{S})$ is a proper, Cartesian morphism of $k$-pairs with $X\rightarrow S$ smooth, and that $(S,\overline{S},\frak{S})$ is a smooth $\cur{V}$-frame. Let $E\in (F)\textrm{-}\mathrm{Isoc}^\dagger((X,\overline{X})/K)$. Then the $j_S^\dagger\cur{O}_{]\overline{S}[_\frak{S}}$-module with integrable connection $\mathbf{R}^qf_{\frak{S},\mathrm{rig}*}E$ arises from a unique overconvergent ($F$)-isocrystal $$ \mathbf{R}^qf_{\mathrm{rig}*}E \in (F)\textrm{-}\mathrm{Isoc}^\dagger((S,\overline{S})/K). $$ If $\overline{S}$ is proper, then $\mathbf{R}^qf_{\mathrm{rig}*}E$ moreover only depends on $f:X\rightarrow S$ and $E$. \end{conjectureu} Note that if we have a smooth and proper morphism $f:X\rightarrow S$, and extend to a morphism $\bar{f}:\overline{X}\rightarrow \overline{S}$ between compactifications, then the diagram $$ \xymatrix{ X\ar[r]\ar[d] & \overline{X} \ar[d] \\ S \ar[r] & \overline{S} } $$ is Cartesian, and hence Conjecture B1(F) does contain Conjecture B(F) as a special case. We can also think of overconvergent isocrystals as coherent $j^\dagger\cur{O}$-modules with an overconvergent stratification, and with this viewpoint, a more natural formulation is the following version, due to Shiho. 
\begin{conjectureu}[S(F), \cite{Shi08a} Conjecture 5.5] Suppose that $(X,\overline{X})\rightarrow (S,\overline{S})$ is a Cartesian morphism of pairs over $k$, with $\overline{X}\rightarrow \overline{S}$ proper and $X\rightarrow S$ smooth. Let $E$ be an overconvergent ($F$)-isocrystal on $(X,\overline{X})/K$, and $q\geq0$. Then there exists a unique overconvergent ($F$)-isocrystal $\tilde{E}$ on $(S,\overline{S})$ such that for all frames $(T,\overline{T},\frak{T})$ over $(S,\overline{S})$, with $\frak{T}$ smooth over $\cur{V}$ in a neighbourhood of $T$, the restriction of $\tilde{E}$ to $\mathrm{Strat}^\dagger(T,\overline{T},\frak{T})$ is given by $$ p_2^* \mathbf{R}^qf'_{\frak{T},\mathrm{rig}*} E|_{(X_T,\overline{X}_{\overline{T}})} \cong \mathbf{R}^qf'_{\frak{T}\times_\cur{V}\frak{T},\mathrm{rig}*} E|_{(X_T,\overline{X}_{\overline{T}})}\cong p_1^* \mathbf{R}^qf'_{\frak{T},\mathrm{rig}*} E|_{(X_T,\overline{X}_{\overline{T}})}. $$ Here $p_i:\frak{T}\times_\cur{V}\frak{T}\rightarrow \frak{T}$ are the projection maps, and $f'$ refers to the natural map of pairs $(X_T,\overline{X}_{\overline{T}}) \rightarrow (T,\overline{T})$. If $\overline{S}$ is proper, then $\tilde{E}$ only depends on $f:X\rightarrow S$ and $E$. \end{conjectureu} Next, by viewing overconvergent isocrystals as collections of $j^\dagger\cur{O}$-modules on each frame over $(S,\overline{S})$ we arrive at the following, stronger version of Shiho's conjecture. \begin{conjectureu}[S1(F)] Suppose that $(X,\overline{X})\rightarrow (S,\overline{S})$ is a proper, Cartesian morphism of $k$-pairs, with $X\rightarrow S$ smooth. Let $E$ be an overconvergent ($F$)-isocrystal on $(X,\overline{X})/K$, and $q\geq0$.
Then there exists a unique overconvergent ($F$)-isocrystal $\tilde{E}$ on $(S,\overline{S})$ such that for all frames $(T,\overline{T},\frak{T})$ over $(S,\overline{S})$, with $\frak{T}$ smooth over $\cur{V}$ in a neighbourhood of $T$, we have $$ \tilde{E}_{(T,\overline{T},\frak{T})} \cong \mathbf{R}^qf'_{\frak{T},\mathrm{rig}*} E|_{(X_T,\overline{X}_{\overline{T}})}, $$ with transition morphisms given by the natural base change morphisms. Here $f'$ is as above. If $\overline{S}$ is proper, then $\tilde{E}$ only depends on $f:X\rightarrow S$ and $E$. \end{conjectureu} At least with Frobenius structures, thanks to full faithfulness of the restriction functor from overconvergent to convergent $F$-isocrystals, we get the following (much weaker) form of the conjecture. \begin{conjectureu}[OF] Let $f:(X,\overline{X})\rightarrow (S,\overline{S})$ be a proper, Cartesian morphism of $k$-pairs, with $X\rightarrow S$ smooth, and $E\in F\textrm{-}\mathrm{Isoc}^\dagger((X,\overline{X})/K)$. Then $\mathbf{R}^qf_{\mathrm{conv}*}E$ is overconvergent along $\overline{S}\setminus S$, i.e. is in the essential image of the functor $$ F\textrm{-}\mathrm{Isoc}^\dagger((S,\overline{S})/K) \rightarrow F\textrm{-}\mathrm{Isoc}(S/K). $$ \end{conjectureu} Finally, by translating into the language of arithmetic $\cur{D}$-modules, we have the following version of Berthelot's conjecture (this has actually been essentially proven by Caro, as we shall see later, but we include it here as a conjecture for the purposes of exposition). \begin{conjectureu}[C(F)] Suppose that $f: (X,\overline{X})\rightarrow (S,\overline{S})$ is a Cartesian morphism of properly $d$-realisable pairs over $k$, with $\overline{X}\rightarrow \overline{S}$ proper and $X\rightarrow S$ smooth.
Then the functor $$ f_+ :(F\textrm{-})D^b_\mathrm{surcoh}(\cur{D}^\dagger_{(X,\overline{X})/K}) \rightarrow (F\textrm{-})D^b_\mathrm{surcoh}(\cur{D}^\dagger_{(S,\overline{S})/K}) $$ sends $(F\textrm{-})D^b_\mathrm{isoc}(\cur{D}^\dagger_{(X,\overline{X})/K})$ into $(F\textrm{-})D^b_\mathrm{isoc}(\cur{D}^\dagger_{(S,\overline{S})/K})$. \end{conjectureu} Thus we have five conjectures B, B1, S, S1, C relating to overconvergent isocrystals without Frobenius, and six conjectures BF, B1F, SF, S1F, OF, CF relating to those with Frobenius structures. We have the straightforward implications $$ \textrm{Conjecture S1(F)} \Rightarrow \textrm{Conjecture S(F)}\Rightarrow \textrm{Conjecture B1(F)}\Rightarrow\textrm{Conjecture B(F)}.$$ \begin{lemma} Conjecture \textrm{B1F} $\Rightarrow$ Conjecture \textrm{OF}. \end{lemma} \begin{proof} The question is local on $\overline{S}$; we may therefore assume that we are in the situation of Proposition \ref{occ}, and the lemma follows immediately. \end{proof} There are also some implications between the `Frobenius' conjectures and the conjectures without Frobenius: for example, we have the obvious observation that Conjecture BF $\Rightarrow$ Conjecture B, and the base change part of Conjecture S1 means that Conjecture S1 $\Rightarrow$ Conjecture S1F. Also, the existence of the commutative diagram $$ \xymatrix{ F\textrm{-}D^b_\mathrm{overhol}(\cur{D}^\dagger_{(X,\overline{X})/K}) \ar[r]\ar[d] & D^b_\mathrm{overhol}(\cur{D}^\dagger_{(X,\overline{X})/K}) \ar[d] \\ F\textrm{-}D^b_\mathrm{overhol}(\cur{D}^\dagger_{(S,\overline{S})/K}) \ar[r] & D^b_\mathrm{overhol}(\cur{D}^\dagger_{(S,\overline{S})/K}) } $$ shows that Conjecture C $\Rightarrow$ Conjecture CF.
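Purely for the reader's convenience, the implications noted so far may be collected into a single diagram; no implications beyond those just discussed are asserted:

```latex
% Summary of the implications discussed above (double arrows are implications):
$$ \xymatrix@R=1em{
\textrm{S1(F)} \ar@{=>}[r] & \textrm{S(F)} \ar@{=>}[r] & \textrm{B1(F)} \ar@{=>}[r] & \textrm{B(F)} \\
\textrm{B1F} \ar@{=>}[r] & \textrm{OF} & \textrm{BF} \ar@{=>}[r] & \textrm{B} \\
\textrm{S1} \ar@{=>}[r] & \textrm{S1F} & \textrm{C} \ar@{=>}[r] & \textrm{CF}
} $$
```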
To all of these conjectures we may also append a `base change' statement, which states that the resulting overconvergent $(F\textrm{-})$isocrystals satisfy a suitable base change property via morphisms of varieties $T\rightarrow S$, pairs $(T,\overline{T})\rightarrow (S,\overline{S})$ or triples $(T,\overline{T},\frak{T})\rightarrow (S,\overline{S},\frak{S})$. For example, in the case of Conjecture B1(F), this states that if $g:(T,\overline{T},\frak{T})\rightarrow (S,\overline{S},\frak{S})$ is a morphism of triples, and $E\in (F\textrm{-})\mathrm{Isoc}^\dagger((X,\overline{X})/K)$ with pullback $E_{(T,\overline{T})}\in (F\textrm{-})\mathrm{Isoc}^\dagger((X_T,\overline{X}_{\overline{T}})/K)$, then in addition to Conjecture B1(F) holding for both the pushforward of $E$ via $f:(X,\overline{X})\rightarrow (S,\overline{S},\frak{S})$ and the pushforward of $E_{(T,\overline{T})}$ via $f_{(T,\overline{T})}:(X_T,\overline{X}_{\overline{T}})\rightarrow (T,\overline{T},\frak{T})$, we have an isomorphism \[ g^*\mathbf{R}f_{\mathrm{rig}*} E \cong \mathbf{R}f_{(T,\overline{T})\mathrm{rig}*} E_{(T,\overline{T})}\] of overconvergent ($F\textrm{-}$)isocrystals on $(T,\overline{T})/K$. Similarly, in the case of Conjecture C(F) this states that if \[ \xymatrix{ (X_T,\overline{X}_{\overline{T}}) \ar[r]^{g'}\ar[d]_{f'} & (X,\overline{X})\ar[d]^f \\ (T,\overline{T})\ar[r]^g & (S,\overline{S}) } \] is a Cartesian square of properly $d$-realisable pairs, and $E\in (F\textrm{-})D^b_\mathrm{surhol}(\cur{D}^\dagger_{(X,\overline{X})/K})$ with pullback $g'^! E \in (F\textrm{-})D^b_\mathrm{surhol}(\cur{D}^\dagger_{(X_T,\overline{X}_{\overline{T}})/K})$, then in addition to Conjecture C(F) holding for both $E$ and $g'^!E$, we have an isomorphism \[ g^!f_+E \cong f'_+g'^!E \] in $(F\textrm{-})D^b_\mathrm{surhol}(\cur{D}^\dagger_{(T,\overline{T})/K})$ (for the definition of the extraordinary inverse image functors $g^!$ and $g'^!$ see for example \S4.3 of \cite{Ber02}).
We invite the reader to formulate precise `base change' versions of Conjectures B(F), S(F), S1(F) and OF. We will denote by `c' a conjecture including a base change statement, for example we will refer to Conjecture B1Fc. We therefore have the implications Conjecture B(1)c $\Rightarrow$ Conjecture B(1)Fc, Conjecture Sc $\Rightarrow$ Conjecture SFc and Conjecture S1(F)c $\Leftrightarrow $ Conjecture S1(F). \section{Previously known results} In this section, we collect together some previously known cases of the conjectures stated above. We start with the original case noted by Berthelot. \begin{theorem}[\cite{Ber86}, Th\'{e}or\`{e}me 5] Assume that there exists a morphism $f:\overline{\frak{X}}\rightarrow \overline{\frak{S}}$ of proper formal $\cur{V}$-schemes, and a smooth open formal subscheme $\frak{S}\subset \overline{\frak{S}}$ such that $f:\frak{X}:=f^{-1}\frak{S}\rightarrow \frak{S}$ is smooth and lifts the given morphism $f:X\rightarrow S$. Then Conjecture B(F) holds. \end{theorem} In the paper where he introduced Conjecture B1(F), Tsuzuki also proved the following case. \begin{theorem}[\cite{Tsu03}, Theorem 4.1.1] In the situation of Conjecture B1(F), suppose that there exists a smooth and proper morphism $(X,\overline{X},\frak{X})\rightarrow (S,\overline{S},\frak{S})$ of smooth $\cur{V}$-frames, such that the square $$ \xymatrix{ \overline{X} \ar[r] \ar[d] & \frak{X} \ar[d] \\ \overline{S} \ar[r] & \frak{S} } $$ is Cartesian. Then Conjecture B1(F)c holds. \end{theorem} Most recently, Caro has proven the following version of Conjecture C(F). \begin{theorem}[\cite{Car15a}, Th\'{e}or\`{e}mes 4.4.2 and 4.4.3] Conjecture C(F)c holds.\label{cfc} \end{theorem} It is also worth mentioning here that Shiho in \cite{Shi08a} proved a weaker version of Conjecture S(F), under certain assumptions on $f$ and $E$, whose statement is somewhat technical and which we will therefore not recall here. 
There is also a variant of Conjecture C(F) that Caro proved in \cite{Car15a}, which slightly relaxes the condition on $(X,\overline{X})$ and $(S,\overline{S})$ of being properly $d$-realisable, but depends on choices of embeddings into formal $\cur{V}$-schemes. There are a few more special cases of these conjectures which have been proven by Matsuda--Trihan and by Etesse. \begin{theorem}[\cite{MT04}, Corollaire 3 and \cite{Ete02}, Th\'{e}or\`{e}me 7] In the situation of Conjecture OF, assume that $S$ is smooth, $\overline{S}$ is proper, $E=\cur{O}_{(X,\overline{X})/K}^\dagger$ is the constant isocrystal, and that the ramification index of $\cur{V}$ is $\leq p-1$. Then Conjecture OF holds in either of the following two cases: \begin{enumerate}\item $S$ is an affine curve. \item $X$ is an abelian scheme over $S$. \end{enumerate} \end{theorem} \begin{theorem}[\cite{Ete12}, Th\'{e}or\`{e}me 3.4.8.2] In the situation of Conjecture B1(F), assume that $S$ is smooth, and that $(S,\overline{S},\frak{S})$ is a Monsky--Washnitzer frame, with $\cur{S}$ the associated lift of $S$ to a smooth scheme over $\cur{V}$. Assume that $X\rightarrow S$ lifts to a flat morphism $\cur{X}\rightarrow \cur{S}$. Then Conjecture B1(F) holds. \end{theorem} \begin{remark} If $S$ is smooth, and $f:X\rightarrow S$ is any smooth and proper morphism which locally lifts to a flat morphism $\cur{X}\rightarrow \cur{S}$ of $\cur{V}$-schemes (for example, if $X$ is a complete intersection in some projective space over $S$), then this is enough to guarantee the existence of unique higher direct image isocrystals $\mathbf{R}^qf_{\mathrm{rig}*}E\in (F\textrm{-})\mathrm{Isoc}^\dagger(S/K)$. \end{remark} \begin{theorem} If $S=\overline{S}$ then Conjectures BF, B1F, SF and S1F are true. \end{theorem} \begin{proof} This follows more or less immediately from Proposition \ref{occ}. \end{proof} \section{Main results} We start with the following lemma.
\begin{lemma} \label{compconv} Let $\frak{S}$ be a smooth affine formal $\cur{V}$-scheme with special fibre $S$, and let $f:X\rightarrow S$ be a smooth and projective morphism of $k$-varieties, of constant relative dimension $d$. Then for any convergent isocrystal $E\in \mathrm{Isoc}(X/K)$, with associated arithmetic $\cur{D}$-module $\tilde{E}\in D^b_{\mathrm{isoc}}(\cur{D}_{X/K})$, there is a natural isomorphism $$ \mathbf{R}^qf_{\frak{S},\mathrm{conv}*}E \cong \cur{H}^{q-d}(f_+\tilde{E}) $$ of $\cur{O}_{\frak{S},{\mathbb Q}}$-modules with integrable connection. This is moreover compatible with base change $\frak{T}\rightarrow \frak{S}$ when $\frak{T}$ is also smooth and affine. \end{lemma} \begin{remark} \begin{enumerate}\item The hypotheses imply that $(S,S)$ and $(X,X)$ are both properly $d$-realisable pairs, hence we do indeed have a push-forward functor $$ f_+: D^b_\mathrm{surcoh}(\cur{D}_{X/K})\rightarrow D^b_\mathrm{surcoh}(\cur{D}_{S/K}). $$ \item We have used Lemma \ref{annoying} to identify $D^b_\mathrm{surcoh}(\cur{D}_{S/K})$ with $D^b_\mathrm{surcoh}(\cur{D}^\dagger_{\frak{S},{\mathbb Q}})$; we may therefore consider $\cur{H}^{q-d}(f_+\tilde{E})$ as an $\cur{O}_{\frak{S},{\mathbb Q}}$-module with integrable connection. \end{enumerate} \end{remark} \begin{proof} Choose closed embeddings $\frak{S}\hookrightarrow \widehat{{\mathbb A}}^n_\cur{V}$ and $X\hookrightarrow \P^m_S$. Then we may identify $\tilde{E}$ with a certain $\cur{D}$-module on $\frak{P}:=\widehat{\P}^m_{\widehat{{\mathbb A}}^n_\cur{V}}$, supported on $X$, and the push-forward $f_+\tilde{E}$ is given in terms of the functor $$ g_+: D^b_\mathrm{coh}(\cur{D}^\dagger_{\frak{P},{\mathbb Q}}) \rightarrow D^b(\cur{D}^\dagger_{\widehat{{\mathbb A}}^n_{\cur{V}},{\mathbb Q}}) $$ where $g:\frak{P}\rightarrow \widehat{{\mathbb A}}^n_\cur{V}$ is the projection.
Now, the formation of $g_+\tilde{E}\in D^b(\cur{D}^\dagger_{\widehat{{\mathbb A}}^n_{\cur{V}},{\mathbb Q}})$ is local on $\frak{P}$, and the formation of $\mathbf{R}^qf_{\frak{S},\mathrm{conv}*}E$ is local on $X$; therefore we may replace $\frak{P}$ by an open formal subscheme, and $X$ by its intersection with this subscheme, such that $X\hookrightarrow \frak{P}$ lifts to a closed immersion $\frak{X}\hookrightarrow \frak{P}$ of smooth formal $\cur{V}$-schemes. But now by the compatibility of the Berthelot-Kashiwara equivalence with push-forwards (which is nothing more than (4.3.6.2) of \cite{Ber02}), we may replace the morphism $g:\frak{P}\rightarrow \widehat{{\mathbb A}}^n_\cur{V}$ by the induced morphism $g:\frak{X}\rightarrow \frak{S}$, and using the concrete description of $\mathrm{sp}_{(X,X),+}$ on page \pageref{sp1}, the claim follows immediately from Proposition \ref{derham} and (4.3.6) of \cite{Ber02}. \end{proof} \begin{remark} Note that compatibility with base change means that when $\frak{S}$ is equipped with a lift of the absolute Frobenius on $S$, we can promote the above isomorphism to an isomorphism $$ \mathbf{R}^qf_{\frak{S},\mathrm{conv}*}E \cong \cur{H}^{q-d}(f_+\tilde{E})(d) $$ of (realisations of) convergent $F$-isocrystals. \end{remark} Hence we get Conjecture OF in the projective case as follows. \begin{corollary} \label{ofp} In the situation of Conjecture OF assume that $X\rightarrow S$ is projective. Then Conjecture OF holds. \end{corollary} \begin{proof} We may assume that $X\rightarrow S$ has constant relative dimension. Thanks to Chow's lemma, we may blow up $\overline{X}$ outside of $X$ to obtain a projective morphism $\overline{X}\rightarrow \overline{S}$; this does not change the category $\mathrm{Isoc}^\dagger((X,\overline{X})/K)$, and we may therefore assume that $\overline{X}\rightarrow \overline{S}$ is projective.
By choosing a convenient alteration $\overline{S}'\rightarrow \overline{S}$ and using Th\'{e}or\`{e}me 2.1.3 of \cite{Car11}, together with base change for $\mathbf{R}^qf_{\mathrm{conv}*}E$, we may assume that $\overline{S}$ is smooth. The question is also local on $\overline{S}$, which we may therefore assume to have a smooth affine lift $\overline{\frak{S}}$ over $\cur{V}$, and that $\overline{\frak{S}}$ is equipped with a lift of the absolute Frobenius of $\overline{S}$. Let $\frak{S}$ denote the open subscheme of $\overline{\frak{S}}$ corresponding to $S$. The question is also local on $S$, hence (after further localising on $\overline{\frak{S}}$) we may assume that there exists a locally closed immersion $\overline{\frak{S}}\rightarrow \widehat{\P}^N_\cur{V}$ and a divisor $D\subset \P^N_k$ such that $S=\overline{S}\setminus D$. Hence $(S,\overline{S})$ is properly $d$-realisable. Since $\overline{X}\rightarrow \overline{S}$ is projective, and $$ \xymatrix{ X \ar[r] \ar[d] & \overline{X} \ar[d] \\ S\ar[r] & \overline{S} } $$ is Cartesian, it follows that the pair $(X,\overline{X})$ is also properly $d$-realisable. Hence for any $\tilde{E}\in F\textrm{-}D^b_\mathrm{isoc}(\cur{D}^\dagger_{(X,\overline{X})/K})$, we have $f_+\tilde{E}\in F\textrm{-}D^b_\mathrm{isoc}(\cur{D}^\dagger_{(S,\overline{S})/K})$. Therefore, using Lemma \ref{compconv} and the preceding remark, together with Proposition \ref{pfconv}, the realisation $\mathbf{R}^qf_{\frak{S},\mathrm{conv}*}E$ of the convergent $F$-isocrystal $\mathbf{R}^qf_{\mathrm{conv}*}E$ comes from an object in $F\textrm{-}D^b_\mathrm{isoc}(\cur{D}^\dagger_{(S,\overline{S})/K})$, and is thus overconvergent. \end{proof} We now turn to a version of Conjecture B1(F).
\begin{lemma} Let $(S,\overline{S},\frak{S})$ be a $\cur{V}$-frame such that $\frak{S}$ is smooth and affine, $\overline{S}=\frak{S}_0$, and $S=\overline{S}\setminus \tilde{D}$ for some divisor $\tilde{D}$ inside some projective embedding $\frak{S}\hookrightarrow \P^N_\cur{V}$. Let $(X,\overline{X})\rightarrow (S,\overline{S})$ be a Cartesian morphism of pairs, with $\overline{X}\rightarrow \overline{S}$ smooth and projective, of constant relative dimension $d$. Let $E\in \mathrm{Isoc}^\dagger((X,\overline{X})/K)$ with associated arithmetic $\cur{D}$-module $\tilde{E}\in D^b_\mathrm{isoc}(\cur{D}^\dagger_{(X,\overline{X})/K})$. Then there is a canonical isomorphism $$ \mathbf{R}^qf_{\frak{S},\mathrm{rig}*}E \cong \mathrm{sp}^*\cur{H}^{q-d}(f_+\tilde{E}) $$ of $j_S^\dagger\cur{O}_{\frak{S}_K}$-modules with integrable connection. \end{lemma} \begin{remark} \begin{enumerate} \item Note again that by Lemma \ref{annoying}, we may view $\cur{H}^{q-d}(f_+\tilde{E})$ as a coherent $\cur{D}^\dagger_{\frak{S}}(^\dagger D)_{\mathbb Q}$-module (where $D=\tilde{D}\cap \frak{S}$), and hence as an $\cur{O}_{\frak{S}}(^\dagger D)_{\mathbb Q}$-module with integrable connection. Using the morphism of ringed spaces $$ \mathrm{sp}: (\frak{S}_K,j_S^\dagger\cur{O}_{\frak{S}_K}) \rightarrow (\frak{S},\cur{O}_\frak{S}(^\dagger D)_{\mathbb Q}) $$ we may therefore view $\mathrm{sp}^*\cur{H}^{q-d}(f_+\tilde{E})$ as a $j^\dagger_S\cur{O}_{\frak{S}_K}$-module with integrable connection, and the statement of the lemma makes sense. \item Note that the hypotheses imply that both pairs $(S,\overline{S})$ and $(X,\overline{X})$ are properly $d$-realisable, and hence we are in the situation where Theorem \ref{cfc} holds.
\end{enumerate} \end{remark} \begin{proof} Exactly as in the proof of Lemma \ref{compconv}, we may embed $\overline{X}$ into a smooth and proper formal $\frak{S}$-scheme $\frak{P}$, and then localise on $\frak{P}$ to assume that we have a smooth morphism $f:\frak{X}\rightarrow \frak{S}$ lifting $f:\overline{X}\rightarrow \overline{S}$ (although $\overline{X}$ will no longer be proper). Let $D=\tilde{D}\cap \frak{S}$ and $T=f^{-1}D$. We may thus, using the explicit description of $\mathrm{sp}_{(X,\overline{X}),+}$ on page \pageref{sp2}, make the identifications $$ \tilde{E}\cong \mathrm{sp}_*E, \;\; E \cong \mathrm{sp}^*\tilde{E} $$ where $\mathrm{sp}:\frak{X}_K\rightarrow \frak{X}$ is the specialisation map. Here again $\mathrm{sp}^*$ refers to module pullback via the morphism $$ (\frak{X}_K,j_X^\dagger\cur{O}_{\frak{X}_K}) \rightarrow (\frak{X},\cur{O}_{\frak{X}}(^\dagger T)_{\mathbb Q}) $$ of ringed spaces. Hence using (4.3.6) of \cite{Ber02} again we get a canonical isomorphism $$ \cur{H}^{q-d}(f_+\tilde{E}) \cong \mathbf{R}^qf_*( \tilde{E} \otimes \Omega^*_{\frak{X}/\frak{S}}) $$ of $\cur{O}_{\frak{S}}(^\dagger D)_{\mathbb Q}$-modules with integrable connection. By using the spectral sequence for a complex, and the identification $$ \mathrm{sp}^*(\tilde{E}\otimes_{\cur{O}_{\frak{X},{\mathbb Q}}} \Omega^*_{\frak{X}/\frak{S}}) \cong E \otimes_{\cur{O}_{\frak{X}_K}} \Omega^*_{\frak{X}_K/\frak{S}_K}, $$ it therefore suffices to show that for any coherent $\cur{O}_{\frak{X}}(^\dagger T)_{\mathbb Q}$-module $\tilde{E}$, the base change morphism $$ \mathrm{sp}^*\mathbf{R}^qf_*\tilde{E} \rightarrow \mathbf{R}^qf_{K*} \mathrm{sp}^*\tilde{E} $$ is an isomorphism. Actually, since overconvergent isocrystals extend to some strict neighbourhood of $]X[_\frak{X}$, we may by Proposition 4.4.5 of \cite{Ber96a} assume that $\tilde{E}$ comes from a coherent $\cur{B}_\frak{X}(T,r_0)_{\mathbb Q}$-module for some $r_0\geq0$, i.e.
we have $$ \tilde{E}\cong (\mathrm{colim}_{r\geq r_0} E_r)_{\mathbb Q} $$ for coherent $\cur{B}_\frak{X}(T,r)$-modules $E_r$. (These $\cur{B}_\frak{X}(T,r)$ are essentially formal models for the ring of functions on a certain cofinal system of neighbourhoods of $]X[_\frak{X}$ inside $\frak{X}_K$; for more details see \S4 of \cite{Ber96a}.) Since $\mathrm{sp}^*,\mathbf{R}f_*$ and $\mathbf{R}f_{K*}$ commute with filtered direct limits ($f$ and $f_K$ are both quasi-compact), we can reduce to the case of a coherent $\cur{B}_\frak{X}(T,r)$-module $E_r$. But now we may replace $\frak{X}$ by the relative spectrum $\mathrm{Spf}(\cur{B}_\frak{X}(T,r))$; it therefore suffices to treat the case of a coherent $\cur{O}_\frak{X}$-module. By further localising on $\frak{X}$ we may assume it to be affine, whence it suffices to treat the case $q=0$. This then follows by direct calculation. \end{proof} Of course, as with Lemma \ref{compconv}, there exists a version with Frobenius, and we easily arrive at the following. \begin{corollary} \label{bp} In the situation of Conjecture B1(F), assume that $\frak{S}$ is smooth, $\overline{S}=\frak{S}_0$ and that the induced morphism $\overline{X}\rightarrow \overline{S}$ is smooth and projective. Then Conjecture B1(F) holds. \end{corollary} \begin{proof} Entirely similar to the proof of Corollary \ref{ofp}. \end{proof} \begin{remark} Note that by using Th\'{e}or\`{e}me 4.4.2 of \cite{Car15a} and Proposition 4.1.8 of \cite{Car09a} we also get a base change statement for morphisms $(T,\overline{T},\frak{T}) \rightarrow (S,\overline{S},\frak{S})$ where $(T,\overline{T},\frak{T})$ also satisfies the hypotheses of the corollary. \end{remark} \section*{Acknowledgements} The author would like to thank Ambrus P\'{a}l, whose interest in the current status of Berthelot's conjecture (in particular, the requirement of a result along the lines of Corollary \ref{ofp} to be used in \cite{Pal15b}) led to the writing of this article.
The author was supported by an HIMR fellowship. \bibliographystyle{mysty}
\section{Introduction \label{sec:intro}} Dark matter (DM) is a necessary component in the time evolution of our Universe, and its existence is supported by a tremendous amount of astrophysical and cosmological evidence, most of which is based on the gravitational interaction of DM (see Ref.~\cite{Bertone:2004pz} for a general review). In fact, none of the Standard Model (SM) particles can explain the various DM-related phenomena, so that the detection of any DM candidate would be not only exciting {\it per se} but also a strong sign of a new physics framework beyond the Standard Model (BSM). A great deal of experimental effort to detect DM signals has been made in three different directions: i) direct detection experiments, which observe the recoil energy of target nuclei scattered off by DM, ii) indirect detection experiments, which observe cosmic rays originating from DM annihilation or decay, and iii) collider searches (for example, at the Large Hadron Collider at CERN), which actively produce DM candidates and exploit their collider signatures. These three avenues to DM detection are complementary to one another, and have set bounds on the viable DM mass and the associated cross section (see Ref.~\cite{Agashe:2014kda} and references therein for a review).\footnote{Recently, Ref.~\cite{Davoudiasl:2015vba} proposed a general scenario, dubbed ``Inflatable DM models'', within the context of which many well-motivated DM models having too large a production of DM can be remedied, hence evading the bounds without tuning of the underlying parameters.} Of those experimental attempts, indirect detection experiments have received particular attention, as many of them have reported anomalous observations potentially signalling the presence of DM candidates at the locus of cosmic ray sources.
For instance, PAMELA~\cite{Adriani:2008zr}, Fermi-LAT~\cite{FermiLAT:2011ab}, and AMS-02~\cite{Aguilar:2013qda} found quite a marked rise of the positron fraction in the energy range from roughly 10 to 200 GeV, and similarly ATIC~\cite{Chang:2008aa}, Fermi-LAT~\cite{Abdo:2009zk}, and HESS~\cite{Aharonian:2009ah} reported an excess in the combined positron-electron energy spectrum between 100 and 1000 GeV. Several photon channels showed intriguing excesses such as the 3.5 keV line~\cite{Bulbul:2014sua,Boyarsky:2014jta}, the 511 keV line~\cite{Jean:2003ci}, the GeV bump~\cite{Goodenough:2009gk}, and the 130 GeV line~\cite{Bringmann:2012vr,Weniger:2012tx}. The positron excesses and the Galactic Center (GC) GeV $\gamma$-ray excess feature a continuum bump, while the other three X/$\gamma$-ray excesses showed a sharp peak within a very narrow energy range. The latter class of excesses is particularly interesting because it can be readily connected to a DM interpretation. As typical DM candidates behave non-relativistically, the photon energy from a DM pair annihilation (or 2-body decay) is monochromatic, being the same as (half) the DM mass.\footnote{The 511 keV $\gamma$-ray peak comes from positronium decay. Thus, for the explanation of the 511 keV line excess, what is required is a new source of positrons, which can be DM annihilation or decay.} In this context, many DM models to address those excesses have been introduced and studied in the literature: for example, Ref.~\cite{Huh:2007zw} for the 511 keV line, Refs.~\cite{Kyae:2012vi, Park:2012xq} for the 130 GeV line, and Ref.~\cite{Kong:2014gea} for the 3.5 keV line. In reality, the relevant signal spectrum does {\it not} appear as a $\delta$-function-like peak but is smeared to some extent because of imperfections in cosmic ray detectors. With the assumption of a Gaussian smearing, the resultant $\gamma$-ray energy spectrum (typically) becomes {\it symmetric} with respect to the nominal peak.
This broadening effect has motivated non-minimal DM scenarios that interpret the narrow width of peaks as physical, rather than as induced by instrumental uncertainties. The next-to-minimal DM models hypothesize the situation where DM particles annihilate or decay into on-shell intermediate particles that decay into photons \cite{Ibarra:2012dw, Boddy:2015efa}. As such an on-shell intermediate particle comes along with a fixed boost, the photon energy spectrum is characterized by a rectangular shape \cite{Ibarra:2012dw, Boddy:2015efa}. If the mass gap between the DM and the intermediate particle is sufficiently small, hence so is the boost factor, then the photon energy spectrum becomes narrow enough, potentially being indistinguishable from the signal spectrum of the minimal scenario. Following a similar philosophy and positing a DM interpretation, we here propose a {\it new} mechanism to develop a {\it narrow} continuum energy spectrum which would {\it fake} a sharp spike. The research program to explain the excesses in cosmic ray energy spectra with the ``{\it energy-peak}'' emerging under non-minimal DM frameworks was initiated by Ref.~\cite{Kim:2015usa}, in which various observations of the energy-peak made in the context of collider physics~\cite{Agashe:2012bn,Agashe:2012fs,Agashe:2013eba,Chen:2014oha,Agashe:2015wwa,massive} were applied to the GC $\gamma$-ray GeV excess. As in~\cite{Kim:2015usa}, we begin the discussion by noting that multiple DM species could exist in the Universe, and that DM models constructed upon such a framework can bring about not only nontrivial cosmological implications, e.g., ``{\it assisted freeze-out}''~\cite{Belanger:2011ww}, but also interesting phenomenology, e.g., ``{\it boosted DM}''~\cite{Agashe:2014yua,Berger:2014sqa,Kong:2014mia}. In this context, the proposed mechanism involves a non-minimal dark sector containing multiple DM particles.
For the purpose of simplicity, we introduce two DM species, one of which is assumed to be heavier than the other, henceforth denoting the former and the latter as $\chi_h$ and $\chi_l$, respectively. The heavier DM communicates with the SM sector not directly but through the lighter DM. In addition, the heavier DM pair-annihilates into a pair of on-shell intermediate states (denoted as $A$), each of which subsequently decays into the lighter DM together with a dark pion or an axion-like particle (ALP) (denoted as $a$) emitting two photons in the final state.\footnote{In general, $A$ and $\chi_l$ can be either dark or SM sector particles (and may even be unstable) unless they are stringently constrained by other observations. However, for the sake of simplicity, we assume that they are dark sector particles.} FIG.~\ref{fig:model} schematizes the DM scenario that we consider throughout this paper.\footnote{The main idea in this paper can be readily applied to the decaying DM scenario, but we keep the annihilating one as a concrete example.} We point out that although we employ the photon final state as a concrete example for elaborating our mechanism, the mechanism is straightforwardly extensible to other visible particle final states, e.g., $e^+e^-$. \begin{figure}[t] \centering \includegraphics[scale=0.65]{DMmodel.png} \caption{\label{fig:model} The dark matter scenario under consideration.} \end{figure} In this DM scenario, the on-shell intermediate particle $A$ comes with a fixed boost factor, leading to a rectangular energy spectrum, hence a rectangular boost distribution, for particle $a$. Due to variations in the boost factor of $a$, the emitted photons develop a {\it continuum} energy spectrum whose width is determined by the mass parameters involved in the entire process demonstrated in FIG.~\ref{fig:model}.
We remark that the relevant scenario can produce anything from a (relatively) narrow energy spectrum to a broadly distributed one, depending on the details of the DM models of interest. We shall briefly discuss the wide continuum bump signature in Sec.~\ref{sec:spectrum}, while primarily focusing on the ``peak-faking'' interpretation and the associated mass spectrum. Not surprisingly, a mass spectrum with $m_{\chi_h} \gtrsim m_A \gtrsim m_a + m_{\chi_l}$ renders a relatively narrow photon energy spectrum. As a characteristic feature, the resulting differential energy spectrum is {\it symmetric} on the logarithmic scale, and remarkably, its center position is identified as half the mass of particle $a$, $m_a/2$~\cite{Agashe:2012bn,1971NASSP.249.....S}. Therefore, these structural properties enable us not only to distinguish this DM model scenario from other standard DM interpretations, but also to probe/measure the mass parameters of some dark sector particles. To present our main idea, this paper is organized as follows. We begin with the DM model under consideration in the next section. In Sec.~\ref{sec:spectrum}, we discuss the energy spectra of the relevant visible particles arising from the DM model introduced in Sec.~\ref{sec:model}, mainly focusing on their functional structure. We then apply the main idea to a couple of photon peaks in Sec.~\ref{sec:applications}: i) the Fermi-LAT 130 GeV excess and ii) the 3.5 keV excess. Sec.~\ref{sec:conclusions} is reserved for our conclusions. \section{Dark matter model \label{sec:model}} We here discuss the class of DM models to which our main idea applies, and offer a viable DM model that realizes the relevant scenario. As briefly mentioned earlier, we imagine that the dark sector is non-minimal, meaning that there exists more than one DM candidate.
Although an arbitrary number of DM species could be introduced, we employ only two types of DM particles, the heavier ($\chi_h$) and the lighter ($\chi_l$), for simplicity. The lighter DM is assumed to communicate directly with the SM sector, whereas the heavier DM interacts with the SM sector only via the lighter DM. In this sense, the relic abundance of $\chi_h$ can be computed within the scheme of ``{\it assisted freeze-out}''~\cite{Belanger:2011ww}. We further assume that $\chi_h$ is connected to $\chi_l$ {\it not} directly {\it but} via an on-shell intermediate state $A$, i.e., a pair of heavier DM particles annihilate into a pair of $A$'s, each of which subsequently decays into $\chi_l$ together with a dark pion or an ALP as depicted in FIG.~\ref{fig:model}. Finally, the dark pion or the ALP decays into a photon pair whose energy spectrum is the major interest of this paper. We remark that particle $A$ need not be pair-produced, i.e., $A$ could be produced in association with another particle $A'$, as the detailed dynamics of $A'$ is irrelevant to the later argument and formalism. Similarly, $A$ does not need to decay directly into the lighter DM, i.e., $\chi_l$ could be replaced by other heavy particles whose detailed dynamics does not affect the later argument and formalism. In this context, the model set-up demonstrated in FIG.~\ref{fig:model} is a simplified version. A possible realization of the DM scenario at hand can be summarized as follows. We consider two fermionic DM species, $\chi_h$ and $\chi_l$, with an intermediate fermionic state $\psi_A$ and a singlet pseudo-scalar $a$ (e.g., a dark pion or an ALP as mentioned earlier).
Then, the effective Lagrangian required for the DM scenario exhibited in FIG.~\ref{fig:model} is simply described by the following operators: \begin{eqnarray} \mathcal{L}_{\rm DM} \supset \frac{1}{\Lambda^2}\, \overline{\chi}_h \chi_h \overline{\psi}_A \psi_A + \lambda\, a \overline{\psi}_A\gamma^5\chi_l +\frac{1}{f_a}\, a F_{\mu\nu}\widetilde{F}^{\mu\nu}, \label{eq:effL} \end{eqnarray} where $\widetilde{F}^{\mu\nu}$ denotes the dual field strength tensor as usual, and $\Lambda$ and $f_a$ describe the associated suppression scales whose details can be revealed by an appropriate UV completion. The first term ensures an $s$-wave annihilation of the heavier DM, i.e., $\overline{\chi}_h\chi_h \to \psi_A\overline{\psi}_A$, the second induces the decay of $\psi_A$ into $\chi_l$ and $a$, and the last corresponds to the two-photon decay of $a$ as in FIG.~\ref{fig:model}. The stability of $\chi_h$ and $\chi_l$ can be easily achieved with separate symmetries, e.g., U(1)$'\otimes {\rm U(1)}''$~\cite{Belanger:2011ww} or $Z_2'\otimes Z_2''$~\cite{Agashe:2014yua}. We again stress that the Lagrangian in Eq.~(\ref{eq:effL}) is a simple realization and there exists a host of other possibilities to accommodate the event topology in FIG.~\ref{fig:model}. Exhausting all of them is, however, beyond the scope of this paper. \section{Energy spectrum \label{sec:spectrum}} In this section, given the scenario in FIG.~\ref{fig:model}, we derive the analytic expression for the gamma-ray energy spectrum and discuss the properties that distinguish it from other (standard) scenarios. \subsection{Derivation of the analytic expression} Assuming that the heavier DM particles are non-relativistic, their pair annihilation into two $A$'s leads to a fixed boost of particle $A$ (denoted as $\gamma_A$), relating the two mass parameters by $\gamma_A =m_{\chi_h}/m_A$ with $m_i$ symbolizing the mass of particle species $i$.
Since $A$ obtains a non-zero boost factor, the energy of particle $a$ is not monochromatic, but given by a broad spectrum. The $a$ energy, $E_a$, measured in the laboratory frame is parameterized as \begin{eqnarray} E_a =E_a^* \left(\gamma_A +\frac{p_a^*}{E_a^*}\sqrt{\gamma_A^2-1}\cos\theta_a^*\right), \label{eq:Ea} \end{eqnarray} where $\theta_a^*$ is the emission angle of $a$ in the $A$ rest frame with respect to the boost direction of $A$ and $E_a^*$ is the fixed $a$ energy measured in the rest frame of particle $A$: \begin{eqnarray} E_a^*=\frac{m_A^2-m_{\chi_l}^2+m_a^2}{2m_A}. \end{eqnarray} If $A$ is either a scalar or produced in an {\it unpolarized} way, then $\cos\theta_a^*$ becomes a flat variable, resulting in a rectangular distribution in $E_a$ by a simple chain rule whose range is given by \begin{eqnarray} E_a \in \left[E_a^*\gamma_A-p_a^*\sqrt{\gamma_A^2-1},\;E_a^*\gamma_A+p_a^*\sqrt{\gamma_A^2-1} \right]. \label{eq:Eadist} \end{eqnarray} Similarly to Eq.~(\ref{eq:Ea}), the observed photon energy for a fixed $\gamma_a$ is expressed as \begin{eqnarray} E_{\gamma}=E_{\gamma}^*\left(\gamma_a+\sqrt{\gamma_a^2-1}\cos\theta_{\gamma}^*\right),\label{eq:Egamma} \end{eqnarray} where $\theta_{\gamma}^*$ denotes the intersecting angle between its emission direction and the boost direction of particle $a$ in the $a$ rest frame and $E_{\gamma}^*$ is the fixed photon energy measured in the $a$ rest frame, that is, half the mass of particle $a$: \begin{eqnarray} E_{\gamma}^*=\frac{m_a}{2}. \end{eqnarray} Unlike $\gamma_A$ in the case of particle $a$, $\gamma_a$ is not single-valued but distributed. 
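As a cross-check of Eqs.~(\ref{eq:Ea})--(\ref{eq:Eadist}), the rectangular $E_a$ spectrum can be reproduced numerically. The following Python sketch (an illustration, not part of the original analysis) uses the example mass spectrum $(m_{\chi_h},\,m_A,\,m_{\chi_l},\,m_a)=(237.5,\,200,\,50,\,100)$ GeV quoted later for FIG.~\ref{fig:theorycurves}:

```python
import math
import random

# Example mass spectrum (GeV) used later in the text for FIG. 2 (left panel)
m_chi_h, m_A, m_chi_l, m_a = 237.5, 200.0, 50.0, 100.0

gamma_A = m_chi_h / m_A                                  # fixed boost of A
E_a_star = (m_A**2 - m_chi_l**2 + m_a**2) / (2.0 * m_A)  # E_a in the A rest frame
p_a_star = math.sqrt(E_a_star**2 - m_a**2)               # |p_a| in the A rest frame

# Endpoints of the rectangular E_a spectrum, Eq. (eq:Eadist)
spread = p_a_star * math.sqrt(gamma_A**2 - 1.0)
E_a_min = E_a_star * gamma_A - spread
E_a_max = E_a_star * gamma_A + spread

# Flat cos(theta_a^*) maps linearly onto E_a, so E_a is uniform on [E_a_min, E_a_max]
random.seed(1)
E_a_samples = [E_a_star * gamma_A + spread * random.uniform(-1.0, 1.0)
               for _ in range(10_000)]

print(E_a_min, E_a_max)  # support of the rectangular spectrum
```

For these masses the spectrum spans $E_a \in [100,\,182.03]$ GeV, so the boost $\gamma_a = E_a/m_a$ of particle $a$ is indeed distributed rather than single-valued.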
Denoting its distribution by $g(\gamma_a)$, from Eq.~(\ref{eq:Eadist}) we find the unit-normalized expression (or equivalently, the probability distribution function) for $g(\gamma_a)$: \begin{eqnarray} g(\gamma_a)=\frac{m_a}{2p_a^*\sqrt{\gamma_A^2-1}}\Theta(\gamma_a-\gamma_a^-)\Theta(\gamma_a^+-\gamma_a), \end{eqnarray} where $\Theta(x)$ is the usual Heaviside step function, and $\gamma_a^{\pm}$ are defined by \begin{eqnarray} \gamma_a^{\pm}\equiv \frac{E_a^*}{m_a}\gamma_A\pm\frac{p_a^*}{m_a}\sqrt{\gamma_A^2-1}.\label{eq:gammaapm} \end{eqnarray} Here we used the fact that $\cos\theta_{a}^*$ is a flat variable, so that $g(\gamma_a)$ develops a rectangular distribution as well. We then find that for any fixed $\gamma_a$, the unit-normalized differential energy distribution is \begin{eqnarray} \left.\frac{1}{\Gamma}\frac{d\Gamma}{dE_{\gamma}}\right|_{{\rm fixed }\gamma_a}=\frac{1}{2E_{\gamma}^*\sqrt{\gamma_a^2-1}}\Theta(E_{\gamma}-E_{\gamma}^-)\Theta(E_{\gamma}^+-E_{\gamma}),\label{eq:Egammadistfixgamma} \end{eqnarray} where $E_{\gamma}^{\pm}$ can be obtained by setting $\cos\theta_{\gamma}^*$ to $\pm1$ in Eq.~(\ref{eq:Egamma}): \begin{eqnarray} E_{\gamma}^{\pm}\equiv E_{\gamma}^*(\gamma_a\pm\sqrt{\gamma_a^2-1}). \label{eq:EgammaRange} \end{eqnarray} Denoting it by $f(E_{\gamma})$, the unit-normalized total energy spectrum can be obtained by summing Eq.~(\ref{eq:Egammadistfixgamma}) over all relevant $\gamma_a$'s, that is, \begin{eqnarray} f(E_{\gamma})&=&\int^{\gamma_a^{\max}}_{\gamma_a^{\min}}d\gamma_a \frac{g(\gamma_a)}{2E_{\gamma}^*\sqrt{\gamma_a^2-1}} \label{eq:integralrep}\\ &=&\frac{m_a}{4E_{\gamma}^*p_a^*\sqrt{\gamma_A^2-1}}\left\{\log\left[\sqrt{(\gamma_a^{\max})^2-1}+\gamma_a^{\max}\right] \right. \nonumber \\ &&\left.
-\log\left[\sqrt{(\gamma_a^{\min})^2-1}+\gamma_a^{\min}\right] \right\}, \label{eq:analexpE} \end{eqnarray} where $\gamma_a^{\min}$ and $\gamma_a^{\max}$ are defined as \begin{eqnarray} \gamma_a^{\min}\equiv \max\left[\gamma_a^-,\frac{1}{2}\left(\frac{E_{\gamma}}{E_{\gamma}^*}+\frac{E_{\gamma}^*}{E_{\gamma}} \right) \right], \;\gamma_a^{\max}\equiv \gamma_a^+. \label{eq:gammaadef} \end{eqnarray} Since $g(\gamma_a)$ is upper-bounded, $\gamma_a^{+}$ determines the spanning range of $f(E_{\gamma})$ as follows: \begin{eqnarray} \frac{E_{\gamma}}{E_{\gamma}^*}\in \left[\gamma_a^{+}-\sqrt{(\gamma_a^{+})^2-1},\; \gamma_a^{+}+\sqrt{(\gamma_a^{+})^2-1}\right]. \end{eqnarray} We finally remark that in the actual data analysis with concrete examples, all prefactors in Eq.~(\ref{eq:analexpE}) are eventually absorbed into the overall normalization parameter $N$, and as a consequence the shape of $f(E_{\gamma})$ is completely determined by $\gamma_a^+$, $\gamma_a^-$, and $E_{\gamma}^*$, i.e., there are four independent fit parameters. \subsection{Functional properties and discussions} \begin{figure*}[t] \centering \includegraphics[width=8.5cm]{thCurve_Peak.png}\hspace{0.5cm} \includegraphics[width=8.5cm]{thCurve_Plateau.png} \caption{\label{fig:theorycurves} Left panel: the gamma-ray energy spectrum with a peak. The chosen mass spectrum is $(m_{\chi_h},\;m_A,\;m_{\chi_l},\;m_a)=(237.5,\; 200,\;50,\;100)$ GeV. The simulated data and corresponding theory expectation are represented by the blue histogram and red line, respectively. Right panel: the gamma-ray energy spectrum with a plateau. The chosen mass spectrum is the same as in the left panel with $m_A$ replaced by 170 GeV. } \end{figure*} To discuss functional properties of the photon energy spectrum, we first revisit the expression of $E_{\gamma}$ for a fixed $\gamma_a$ shown in Eq.~(\ref{eq:Egamma}). 
Since $\cos\theta_{\gamma}^*$ spans $-1$ to $+1$, the range of $E_{\gamma}$ is trivially given by \begin{eqnarray} \frac{E_{\gamma}}{E_{\gamma}^*}\in \left[\gamma_a-\sqrt{\gamma_a^2-1},\; \gamma_a+\sqrt{\gamma_a^2-1}\right], \end{eqnarray} as also expressed in Eq.~(\ref{eq:EgammaRange}). One remarkable feature of the above range is that the lower (upper) end of the right-hand side is smaller (greater) than 1, implying that $E_{\gamma}^*$ is the only energy value commonly included for {\it any} $\gamma_a$. Moreover, we observe that $E_{\gamma}^*$ is the geometric mean of the minimum and maximum energy values, i.e., $(E_{\gamma}^*)^2=E_{\gamma}^{\min}E_{\gamma}^{\max}$. This again implies that the $E_{\gamma}^*$ value is located at the center of the $E_{\gamma}$ distribution for a given $\gamma_a$ on the {\it logarithmic} scale. As mentioned before, the shape of the $E_{\gamma}$ distribution for a fixed $\gamma_a$ is rectangular due to the flatness of $\cos\theta_{\gamma}^*$ in Eq.~(\ref{eq:Egamma}). In order to obtain the overall energy distribution, one should ``stack up'' all such rectangles contributing to the energy distribution. This statement is already formulated in Eq.~(\ref{eq:integralrep}) as a Lebesgue-type integral representation. Hence, one would expect the resultant photon energy distribution to contain a unique peak at $E_{\gamma}=E_{\gamma}^*$, in conjunction with the observation made in the previous paragraph. However, there arises a subtlety here: the validity of this expectation depends on whether or not the smallest boost factor of particle $a$ reaches 1. The condition for $\gamma_a=1$ (i.e., solving Eq.~(\ref{eq:gammaapm}) with $\gamma_a^-$ set to 1) is $E_a^*=m_a\gamma_A$, representing a hypersurface formed by $m_A$, $m_{\chi_h}$, and $m_{\chi_l}$ for any fixed $m_a$, i.e., only delicately chosen mass spectra can attain this condition.
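The condition $E_a^*=m_a\gamma_A$ can be checked numerically. A minimal Python sketch (an illustration, not part of the original analysis) evaluates $\gamma_a^{\pm}$ of Eq.~(\ref{eq:gammaapm}) for the two mass spectra used in FIG.~\ref{fig:theorycurves}:

```python
import math

def gamma_a_pm(m_chi_h, m_A, m_chi_l, m_a):
    """Endpoints gamma_a^- / gamma_a^+ of the boost distribution g(gamma_a),
    Eq. (eq:gammaapm), for non-relativistic chi_h pair annihilation."""
    gamma_A = m_chi_h / m_A
    E_a_star = (m_A**2 - m_chi_l**2 + m_a**2) / (2.0 * m_A)
    p_a_star = math.sqrt(E_a_star**2 - m_a**2)
    shift = (E_a_star / m_a) * gamma_A
    spread = (p_a_star / m_a) * math.sqrt(gamma_A**2 - 1.0)
    return shift - spread, shift + spread

# "Peak" spectrum of FIG. 2 (left): these masses satisfy E_a^* = m_a * gamma_A,
# so gamma_a^- = 1 and a cusp appears at E_gamma^* = m_a / 2.
g_minus_peak, g_plus_peak = gamma_a_pm(237.5, 200.0, 50.0, 100.0)

# "Plateau" spectrum of FIG. 2 (right, m_A -> 170 GeV): gamma_a^- > 1, no cusp.
g_minus_flat, g_plus_flat = gamma_a_pm(237.5, 170.0, 50.0, 100.0)

print(g_minus_peak, g_minus_flat)
```

For the first mass spectrum one finds $\gamma_a^-=1$ exactly, confirming that it lies on the hypersurface described above, while the second gives $\gamma_a^->1$.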
Once $\gamma_a=1$ is attained, $E_{\gamma}^*$ clearly appears as a unique, cusp-structured peak~\cite{Agashe:2012bn,Chen:2014oha}. The left panel in FIG.~\ref{fig:theorycurves} demonstrates an example spectrum in this category on the logarithmic scale. The chosen mass spectrum is $(m_{\chi_h},\;m_A,\;m_{\chi_l},\;m_a)=(237.5,\; 200,\;50,\;100)$ GeV, and events were generated by pure phase space. The simulated events are binned into the blue histogram, and the associated theory prediction is shown by the red line. We clearly see that the theory expectation reproduces the data well, and that the spectrum is {\it symmetric} with respect to $E_{\gamma}^*=m_a/2=50$ GeV (indicated by a black dashed line) on this scale. We also remark that both sides of the distribution look like straight lines, which can be easily seen from Eq.~(\ref{eq:analexpE}) with $\gamma_a^{\min}=\frac{1}{2}\left( \frac{E_{\gamma}}{E_{\gamma}^*}+\frac{E_{\gamma}^*}{E_{\gamma}} \right)$ in logarithmic $E_{\gamma}$, so that the whole spectrum appears as an isosceles triangle. On the other hand, if $g(\gamma_a)$ starts from a value away from $\gamma_a=1$, {\it no} peak is developed in the middle of the energy distribution. Instead, a plateau region emerges because even the narrowest rectangle, corresponding to the smallest $\gamma_a$, has a finite-sized width. Nevertheless, the center of the relevant photon energy spectrum can still be identified as $E_{\gamma}^*$ on the logarithmic scale. These expectations are manifestly shown in the right panel of FIG.~\ref{fig:theorycurves}. The relevant mass spectrum is the same as in the previous case with $m_A$ replaced by 170 GeV. Also, the existence of a plateau region makes the whole energy spectrum appear as an isosceles trapezoid on the logarithmic scale. This plateau structure is a feature distinguishing it from other energy spectra.
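The shape function of Eq.~(\ref{eq:analexpE}) and its log-scale symmetry can be verified directly. The Python sketch below (an illustration under the parameter values of the left panel of FIG.~\ref{fig:theorycurves}) drops the overall prefactor, which is absorbed into the normalization anyway:

```python
import math

def f_shape(E, E_star, g_minus, g_plus):
    """Photon spectrum shape of Eq. (eq:analexpE), up to the overall
    normalization; gamma_a^{min/max} follow Eq. (eq:gammaadef)."""
    g_min = max(g_minus, 0.5 * (E / E_star + E_star / E))
    if g_min >= g_plus:
        return 0.0  # E lies outside the support of the spectrum
    acosh = lambda g: math.log(math.sqrt(g * g - 1.0) + g)
    return acosh(g_plus) - acosh(g_min)

# Values corresponding to the left panel of FIG. 2: E_gamma^* = m_a/2 = 50 GeV
E_star, g_minus, g_plus = 50.0, 1.0, 1.8203125

# Symmetry on the log scale: f(E* x) == f(E* / x) for any x > 0
for x in (1.1, 1.5, 2.0):
    lhs = f_shape(E_star * x, E_star, g_minus, g_plus)
    rhs = f_shape(E_star / x, E_star, g_minus, g_plus)
    assert abs(lhs - rhs) < 1e-12

# With gamma_a^- = 1 the maximum sits at E = E_gamma^*
assert f_shape(E_star, E_star, g_minus, g_plus) > f_shape(60.0, E_star, g_minus, g_plus)
```

The symmetry check makes explicit why the spectrum is symmetric about $E_{\gamma}^*$ only on the logarithmic scale: the two step-function arguments depend on $E_{\gamma}$ only through $E_{\gamma}/E_{\gamma}^*+E_{\gamma}^*/E_{\gamma}$, which is invariant under $E_{\gamma}\to (E_{\gamma}^*)^2/E_{\gamma}$.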
However, its presence may be invisible in actual indirect detection experiments due to the issue of energy resolution. Clearly, if the plateau is smaller than the relevant resolving power, its existence is hardly identifiable, so that the relevant spectrum easily fakes a unimodal distribution as in the previous case. Even for an energy spectrum with a sufficiently sizable plateau, its identification may not be possible with small statistics, i.e., more data accumulation may be required. As the proposed mechanism, which is denoted as Scenario iii), is used for explaining narrow peaks, it is interesting to compare and contrast it with the two conventional scenarios enumerated below. \begin{itemize} \itemsep3pt \parskip0pt \parsep0pt \item Scenario i): Photons directly come from DM annihilation/decay. \item Scenario ii): Photons are emitted as a decay product of the on-shell intermediate particle into which DM annihilates/decays. \end{itemize} In Scenario i), the width of the peak is typically caused by the intrinsic energy resolution of detectors, and thus the final energy spectrum is symmetric about the nominal peak. Even on the logarithmic scale, the spectrum is described by a smooth curve, though it is no longer symmetric-looking about the peak. In Scenario ii), the width of the energy spectrum is physical. However, identifying the peak position is ambiguous due to the box-like spectral behavior on both the linear and the logarithmic scales. Therefore, the symmetry property of the shape is not defined. The comparisons thus far are summarized in Table~\ref{tab:comparison}. One can easily see that their respective morphological features differ from one another, and therefore one is able to pin down the underlying DM scenario with a reasonable amount of signal statistics.
\begin{table}[t] \centering \begin{tabular}{c|c|c|c} \hline \hline & Scenario i) & Scenario ii) & Scenario iii) \\ \hline Peak existence & Always & Absent & Sometimes \\ Plateau existence & Absent & Always & Sometimes \\ Width & Instrumental & Physical & Physical \\ Symmetry in $E$ & Symmetric & Not available & Asymmetric \\ Symmetry in $\log E$ & Asymmetric & Not available & Symmetric \\ Shape in $E$ & Curved & Rectangular & Curved \\ Shape in $\log E$ & Curved & Rectangular & Oblique \\ \hline \hline \end{tabular} \caption{\label{tab:comparison} Comparison of structural properties of the energy spectrum among the three DM scenarios defined in the text. Symmetry properties are defined with respect to the relevant peak, if available.} \end{table} Before closing the current section, we briefly discuss the case of wide continuum bump spectra. Certainly, such types of spectra arise within a broad realm of the relevant parameter space, so that the theory prediction in Eq.~(\ref{eq:analexpE}) can also be employed to explain continuum cosmic ray excesses. Due to the unique morphological features discussed so far, the relevant signal spectrum could be easily distinguished from other continuum bumps and taken as strong evidence of a non-trivial dark sector. \section{Applications \label{sec:applications}} Armed with the argument in the previous section, we apply the basic idea to a couple of existing cosmic ray peaks: i) the 130 GeV line~\cite{Bringmann:2012vr,Weniger:2012tx} and ii) the 3.5 keV line~\cite{Bulbul:2014sua,Boyarsky:2014jta}. While claiming that these examples could be understood via the proposed mechanism, we admit that the underlying DM models for them might not fall into the scenario depicted in FIG.~\ref{fig:model}. Nevertheless, we emphasize that the applicability of the relevant technique is restricted neither to these two examples nor to gamma-ray spectra.
In other words, nothing precludes us from applying the DM interpretation at hand to any future excess in cosmic ray signals with a narrow peak. \subsection{130 GeV line} Our first example is the famous 130 GeV line, whose original data were collected by the Fermi-LAT collaboration. We conduct a fit to the spectrum of the observed 130 GeV gamma-ray excess with the expected shape in Eq.~(\ref{eq:analexpE}), with the overall normalization parameter $N_S$ added: \begin{eqnarray} f_S(E_{\gamma})&=&N_S\left\{\log\left[\sqrt{(\gamma_a^{\max})^2-1}+\gamma_a^{\max}\right] \right. \nonumber \\ &&\left. -\log\left[\sqrt{(\gamma_a^{\min})^2-1}+\gamma_a^{\min}\right] \right\} . \label{eq:sigtemp} \end{eqnarray} The relevant data points and errors are taken from Reg4 with the ULTRACLEAN event class in Ref.~\cite{Weniger:2012tx}. Since the measured bin counts contain contributions from backgrounds, the relevant fit is performed simultaneously with a background template. We assume that the backgrounds can be parameterized by a simple (gradually falling) power law such as \begin{eqnarray} f_B(E_{\gamma})=N_B\left( \frac{E_{\gamma}}{E_{\gamma}^*} \right)^{-p}, \label{eq:bgtemp} \end{eqnarray} where $N_B$ is the normalization parameter for backgrounds and $p$ encodes the background shape. Here $E_{\gamma}^*$ is the same $E_{\gamma}^*$ contained in $\gamma_a^{\min}$ of Eq.~(\ref{eq:sigtemp}). Therefore, the entire data set is fitted with the combination of $f_S(E_{\gamma})$ in Eq.~(\ref{eq:sigtemp}) and $f_B(E_{\gamma})$ in Eq.~(\ref{eq:bgtemp}), i.e., \begin{eqnarray} f_{\rm total}(E_{\gamma})=f_S(E_{\gamma})+f_B(E_{\gamma}).
\label{eq:tottemp} \end{eqnarray} \begin{figure*}[t] \centering \includegraphics[width=8.1cm]{fit_130GeV.png} \hspace{0.6cm} \includegraphics[width=8.1cm]{fit_3p5keV.png} \\ \vspace{0.3cm} \includegraphics[width=8.2cm]{Contour_130GeV.png} \hspace{0.4cm} \includegraphics[width=8.5cm]{Contour_3p5keV.png} \caption{\label{fig:fitresults} Upper-left panel: the 130 GeV $\gamma$-ray energy spectrum of the DM and background components taken from Ref.~\cite{Weniger:2012tx}. The fit is performed with 25 data points (black dots), and the best fit is represented by the red curve. Upper-right panel: the 3.5 keV X-ray energy spectrum of the DM components taken from Ref.~\cite{Boyarsky:2014jta}. The fit is performed with 17 data points. Lower-left panel: the allowed mass space for the 130 GeV line along with 1$\sigma$ variations of $m_a$ and $\gamma_a^+$ for three different masses of particle $\chi_l$. Lower-right panel: the allowed mass space for the 3.5 keV line.} \end{figure*} Our fit result is shown in the upper-left panel of FIG.~\ref{fig:fitresults}. The data points and associated error bars are represented by black dots and black lines, respectively, while the red solid curve shows the best-fit model. We clearly see that our fit is in rather good agreement with the gamma-ray energy spectrum, i.e., the template given in Eq.~(\ref{eq:tottemp}) reproduces the data sufficiently well. More quantitatively speaking, we find that the $\chi^2$ value is 19.7 for 19 degrees of freedom (i.e., 25 data points minus 6 fitting parameters, namely $N_S$, $N_B$, $\gamma_a^+$, $\gamma_a^-$, $E_{\gamma}^*$, and $p$) between 80 and 210 GeV. This number is quite comparable to those from conventional scenarios, suggesting that our DM scenario be considered (equally) plausible in explaining the observed data. The fit also tells us useful information.
First of all, we extract the mass of particle $a$ from the measurement of $E_{\gamma}^* (=m_a/2)$, that is, \begin{eqnarray} m_a^{\rm ext} = 258.1^{+ 4.2}_{-14.2} \hbox{ GeV}, \end{eqnarray} where the errors are evaluated as the 1$\sigma$ statistical uncertainty. The other mass parameters can also be estimated from the measured $\gamma_a^{+}$ and $\gamma_a^{-}$, for which the best-fit numbers are \begin{eqnarray} (\gamma_a^+)^{\rm ext} &=&1.0094^{+0.0418}_{-0.0046}, \\ (\gamma_a^-)^{\rm ext} &=&1.0000^{+0.0012}, \end{eqnarray} respectively. Again, the errors correspond to the 1$\sigma$ statistical uncertainty. Note that the lower error for $(\gamma_a^-)^{\rm ext}$ is not provided because there is no sensitivity to values of $\gamma_a^-<1$ according to the definition of $\gamma_a^{\min}$ in Eq.~(\ref{eq:gammaadef}). From Eq.~(\ref{eq:gammaapm}), we obtain the expressions for $E_a^*\gamma_A$ and $p_a^*\sqrt{\gamma_A^2-1}$ in terms of $\gamma_a^{\pm}$, \begin{eqnarray} E_a^*\gamma_A&=&\frac{m_a(\gamma_a^++\gamma_a^-)}{2}, \\ p_a^*\sqrt{\gamma_A^2-1}&=&\frac{m_a(\gamma_a^+-\gamma_a^-)}{2}, \end{eqnarray} and the difference between the former squared and the latter squared leads to the following mass relation: \begin{eqnarray} \frac{m_{\chi_h}^2}{m_A^2}-1+\left(\frac{m_A^2-m_{\chi_l}^2+m_a^2}{2m_A m_a}\right)^2=\gamma_a^+\gamma_a^-, \label{eq:contourbasic} \end{eqnarray} with which we perform a scan of the allowed mass space. As $\gamma_a^-$ shows the smallest error, we simply fix it to unity for phenomenological purposes. Instead of doing a three-dimensional scan, we choose three different $m_{\chi_l}$ values, 10, 100, and 1000 GeV, to find the allowed regions in terms of $m_{\chi_h}$ and $m_A$. The lower-left panel of FIG.~\ref{fig:fitresults} demonstrates the allowed parameter space in the plane of $(m_A-m_{\chi_l}-m_a)$ versus $(m_{\chi_h}-m_A-m_{\chi_l}-m_a)$. The red, green, and blue regions are for $m_{\chi_l}=10$, 100, and 1000 GeV, respectively.
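The scan over Eq.~(\ref{eq:contourbasic}) amounts to solving the relation for $m_{\chi_h}$ at fixed $(m_A, m_{\chi_l}, m_a)$. A minimal Python sketch (illustrative only, not the fitting code used for the figure):

```python
import math

def m_chi_h_from_relation(m_A, m_chi_l, m_a, g_product):
    """Solve the mass relation Eq. (eq:contourbasic) for m_chi_h, given
    g_product = gamma_a^+ * gamma_a^-; returns None when the required
    gamma_A would drop below 1 (kinematically excluded)."""
    E_a_star = (m_A**2 - m_chi_l**2 + m_a**2) / (2.0 * m_A)
    gamma_A_sq = 1.0 + g_product - (E_a_star / m_a) ** 2
    if gamma_A_sq < 1.0:
        return None
    return m_A * math.sqrt(gamma_A_sq)

# Consistency check against the example spectrum of FIG. 2 (left panel):
# gamma_a^+ = 1.8203125 and gamma_a^- = 1 should return m_chi_h = 237.5 GeV.
m_chi_h = m_chi_h_from_relation(200.0, 50.0, 100.0, 1.8203125 * 1.0)
print(m_chi_h)
```

Sweeping $m_A$ at fixed $m_{\chi_l}$ and $m_a$ with the fitted $\gamma_a^+\gamma_a^-$ traces out contours of the kind shown in the lower panels.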
Solid curves are the contours evaluated with the best-fit $m_a$ and $\gamma_a^+$. On the other hand, dashed (dot-dashed) curves are the ones evaluated with $m_a+\delta m_a$ and $\gamma_a^++\delta\gamma_a^+$ ($m_a-\delta m_a$ and $\gamma_a^+-\delta\gamma_a^+$), so that one can get some intuition on the mass spectra allowed by 1$\sigma$ variations of the relevant parameters. We observe that the viable mass spectra are more or less compact. This is not surprising because the narrow peak enforces a degenerate mass spectrum so as not to obtain too large a boost in any of the steps in FIG.~\ref{fig:model}. Dark sector scenarios featuring such a mass spectrum could be achieved by a symmetry. Building realistic DM models is, however, beyond the scope of this paper, so we do not pursue this direction further here. \subsection{3.5 keV line} Our next example is the well-known 3.5 keV line. As in the previous case, the fit is performed to the spectrum of the observed 3.5 keV X-ray excess with the signal template given in Eq.~(\ref{eq:sigtemp}). The relevant data points and associated errors are taken from the processed data for the MOS spectrum of the central region of the Andromeda galaxy (M31) found in Ref.~\cite{Boyarsky:2014jta}. These data contain only the DM component, i.e., the background component has already been subtracted. Therefore, we execute the fit procedure only with the signal template, without introducing any background template, unlike the previous case. The fit result is exhibited in the upper-right panel of FIG.~\ref{fig:fitresults}. Again, the data points and the error bars are represented by black dots and black lines, respectively, while the red solid line describes the best-fit model. Our fit is in very good agreement with the measured energy spectrum.
The resulting $\chi^2$ value is 4.61 for 13 degrees of freedom (i.e., 17 data points minus 4 fit parameters: $N_S$, $\gamma_a^+$, $\gamma_a^-$, and $E_{\gamma}^*$) between 3 and 4 keV. This is a significant improvement over standard interpretations, and therefore our DM framework can be considered a plausible scenario for accommodating the observed spectrum. Turning to the best-fit parameters, we first find that the extracted mass of particle $a$ is \begin{eqnarray} m_a^{\rm ext}=7.09^{+0.08}_{-0.06} \hbox{ keV}, \end{eqnarray} where the errors are 1$\sigma$ statistical uncertainties. The best-fit values for $\gamma_a^+$ and $\gamma_a^-$ are \begin{eqnarray} (\gamma_a^+)^{\rm ext}&=&1.0021^{+0.0015}_{-0.0006}, \\ (\gamma_a^-)^{\rm ext}&=&1.0000^{+0.0014}, \end{eqnarray} respectively, again with 1$\sigma$ statistical uncertainties. To obtain the allowed mass space within 1$\sigma$ variations of the relevant parameters, we follow the same procedure as in the previous case for three different $m_{\chi_l}$ masses, 1, 10, and 100 keV. The lower-right panel of FIG.~\ref{fig:fitresults} demonstrates the allowed region, again in the plane of $(m_A-m_{\chi_l}-m_a)$ versus $(m_{\chi_h}-m_A-m_{\chi_l}-m_a)$. The red, green, and blue regions correspond to $m_{\chi_l}=1$, 10, and 100 keV, respectively. \section{Conclusions \label{sec:conclusions}} Dark matter indirect detection offers an excellent opportunity to confirm the existence of DM. Several experimental collaborations have already reported anomalous phenomena in the relevant cosmic ray energy spectra. Particular attention has been paid to sharply-peaked signals because they admit a particularly clean DM interpretation. Typical models assume that (non-relativistic) DM particles directly annihilate or decay into visible ones.
In such minimal DM scenarios, the narrow width of the peak is typically attributed to the finite energy resolution of cosmic ray detectors. We have instead taken the viewpoint that this width can be physically induced, and proposed a {\it new} mechanism to realize it under the assumption of a non-minimal dark sector. Two DM species were introduced: the heavier one annihilates to an on-shell intermediate state, which subsequently decays into the lighter species and an unstable particle. This unstable particle further decays into a pair of visible particles, which can be the source of anomalous peaks in cosmic ray energy spectra. We showed that the signal spectrum can be narrow enough to mimic a sharp spike for a suitable choice of the associated mass parameters. The shape of the full signal spectrum was derived, and several interesting functional properties were discussed. We pointed out that the peak position, one of the fit parameters, is immediately identified as half the mass of the above-mentioned unstable particle, and we showed that the other mass parameters can be estimated from the remaining fit parameters. We then enumerated various morphological features that can be used to distinguish among DM scenarios giving rise to sharp peak signatures. The viability of this strategy was assessed with two real observational data sets, the 130 GeV line and the 3.5 keV line. We found that both can be well described by the theoretical expectation in our DM scenario, and that the allowed parameter space in each case is fairly large. Finally, we emphasize that our DM scenario and the associated data analysis are not restricted to photon energy peaks: peaks in any cosmic ray energy spectrum can be analysed in the same way. We encourage applying the approach developed in this paper whenever such peaks are observed. \section*{Acknowledgments} We thank Kaustubh Agashe for useful discussions. D.~K.
is supported by the LHC Theory Initiative postdoctoral fellowship (NSF Grant No. PHY-0969510), and J-C.~P. is supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education (NRF-2013R1A1A2061561). We thank CETUP* (Center for Theoretical Underground Physics and Related Areas) for its hospitality while this work was initiated.
1706.00106
\section{Introduction} \label{sec:intro} \subsection{Observational Background} Despite their diversity in mass, spatial extent, and stellar and gas content, disc galaxies both in the local and distant Universe show a striking range of regularities. Perhaps the most famous of these is the Kennicutt-Schmidt relation \citep[see reviews by][]{kennicutt98a, kennicutt12a, krumholz14c}, the observed correlation between the rate at which galaxies form stars and a combination of their gas content and their dynamical times. The rate of star formation implied by this relation is remarkably small: on average, galaxies turn only $\sim 1\%$ of their gas into stars per dynamical time of the gas \citep{zuckerman74a}. This correlation between gas content and star formation, and the remarkably low efficiency of star formation that it implies, was first observed on galactic scales and in the local Universe. However, subsequent work has shown that it continues to hold even at high redshift \citep[e.g.,][]{bouche07a, daddi08a, daddi10a, daddi10b, genzel10a, tacconi13a}, and on $\sim 1$ kpc scales in the local Universe \citep{kennicutt07a, bigiel08a, leroy08a, leroy13a, liu11a, momose13a}. Indeed, the correlation and inefficiency extend down to even $\sim 1$ pc scales. There are a number of lines of evidence in favour of this conclusion, including direct star counts in star-forming clouds near the Sun \citep{lada10a, heiderman10a, krumholz12a, evans14a, salim15a, heyer16a}, correlations between gas and indirect star formation tracers such as recombination lines to larger distances in the Milky Way \citep{vutisalchavakul16a}, and correlations between star formation and tracers of dense gas in both Galactic and extragalactic systems \citep{krumholz07e, garcia-burillo12a, usero15a}.\footnote{This conclusion has recently been questioned by \citet{lee16a}, but we argue in this paper that this is likely an artefact of their methodology, which differs from that of all the other authors. 
See below for details.} A second widely noted galactic-scale correlation concerns the velocity dispersions of the gas in galaxies. In both local and high redshift galaxies, this gas invariably displays superthermal linewidths indicative of transsonic or supersonic motion \citep[and references therein]{glazebrook13a}. This is true regardless of whether these motions are traced using the 21 cm line of H~\textsc{i} \citep{van-zee99a, petric07a, tamburro09a, burkhart10a, ianjamasimanana12a, ianjamasimanana15a, stilp13a, chepurnov15a}, the low-$J$ lines of CO \citep{caldu-primo13a, caldu-primo15a, meidt13a, pety13a}, or the recombination lines of ionised gas \citep{cresci09a, lehnert09a, lehnert13a, green10a, green14a, le-tiran11a, swinbank12a, arribas14a, genzel14a, moiseev15a}. Observed linewidths are relatively independent of radius within a given galaxy, but vary significantly from galaxy to galaxy in a way that is well-correlated with galaxies' rates of star formation. Galaxies with star formation rates below $\sim 1$ $M_\odot$ yr$^{-1}$, typical of the local Universe \citep{kennicutt12a}, all have roughly the same velocity dispersion of $\approx 10$ km s$^{-1}$. However, at the higher star formation rates found both in local starbursts and in main sequence star-forming galaxies at higher redshift, velocity dispersions increase roughly linearly, $\sigma \propto \dot{M}_*$ \citep{krumholz16a}, although with substantial scatter and subsidiary dependencies on quantities such as the galaxies' gas fractions, sizes, and rotational velocities. These velocity dispersions feed naturally into a third observed correlation, which is that galaxy discs tend to be in a state of marginal gravitational stability.
The gravitational stability of a disc can be characterised by the \citet{toomre64a} $Q$ parameter, defined by $Q \approx \kappa\sigma/(\pi G \Sigma)$, where $\kappa$ is the epicyclic frequency of the galaxy's rotation, $\sigma$ is the velocity dispersion, and $\Sigma$ is the surface mass density. Observed disc galaxies in both the local Universe and at high redshift tend to have $Q\approx 1$ throughout their discs \citep[e.g.,][]{martin02a, genzel10a, meurer13a, romeo13a, romeo17a}. A fourth and final observed correlation relates to the spatial distribution of gas and star formation in galaxy discs. Star formation correlates with molecular gas rather than total gas, and the H$_2$-rich regions of galaxies are preferentially located in their centres. Consequently, the scale length of the star formation is comparable to the stellar scale length, $\approx 2-4$ kpc, and a factor of $2-3$ smaller than the neutral gas scale length \citep{regan01a, leroy08a, schruba11a, bigiel12a}. Within the molecule-dominated region, the gas depletion time is $\sim 1-2$ Gyr \citep{bigiel08a, leroy13a}, much less than a Hubble time. The fact that star formation has not ceased in the centres of all galaxy discs implies either that we live at a special time when all local disc centres are about to quench, or that there is an ongoing gas supply to fuel star formation. Direct accretion of cold gas from the intergalactic medium \citep[e.g.,][]{keres05a, dekel06a, dekel09b, wetzel15a} and condensation from hot halos in low redshift galaxies \citep{marinacci10a, joung12a, fraternali13a, hobbs13a}, supplemented by mass returned by stellar evolution \citep{leitner11a}, likely provide a sufficient mass supply for star formation. However, they do not naturally provide it at the small galactocentric radii where star formation takes place.
Accretion from a hot corona is predicted to deliver most of its mass at radii of $\sim 3-4$ stellar scale lengths \citep[e.g.,][]{marasco12a}, and, at least at high redshift, cold accretion tends to join the disc at radii of $\sim 0.1 - 0.3$ virial radii, which is $\sim 10$ times the stellar scale length \citep{danovich15a}, though there are exceptions associated with loss of angular momentum by counter-rotating streams and major mergers, which tend to trigger ``compaction'' events \citep{zolotov15a, tacchella16a, tacchella16b}. Preventing quenching requires that this gas then flow radially inward. Such flows have recently been detected directly in a number of nearby galaxy discs \citep{schmidt16a}. \subsection{Theoretical Background} \label{ssec:theory} Any successful theory of the structure and evolution of disc galaxies ought to be able to explain all of these observed regularities, but at present no such theory is available. This is at least in part because theoretical modelling has tended to focus on one or two of the observed correlations, without attempting to unify all of them into a single, coherent picture. Several authors have attempted to develop theories that link the problems of velocity dispersion, marginal stability, and star formation fuelling \citep[e.g.,][]{bournaud07a, bournaud09a, agertz09b, dekel09a, ceverino10a, krumholz10c, vollmer11a, cacciato12a, forbes12a, forbes14a, goldbaum15a, goldbaum16a}. The central premise in these models is that gravitational instability produces torques that both move mass inward and drive turbulence, simultaneously regulating galaxies to $Q\approx 1$, producing supersonic velocity dispersions, and fuelling star formation. Models in this class naturally explain why $Q\approx 1$, why star formation is not quenched in modern galaxy centres, and why high redshift galaxies have high velocity dispersions.
If one couples them to an empirically-determined star formation relation, they can also do a reasonable job of explaining both galaxy-scale star formation laws and the high star formation rate portion of the $\sigma-\dot{M}_*$ correlation \citep{zheng13a, krumholz16a, wong16a}. However, these models do not naturally explain the minimum velocity dispersion to which galaxy discs seem to settle at $z\approx 0$. Even in quiescent galaxies similar to the Milky Way, observed ISM velocity dispersions are $\approx 10$ km s$^{-1}$ \citep[e.g.,][]{ianjamasimanana12a}, corresponding to bulk motions at a Mach number $\sim 1$ for gas at the typical warm neutral medium temperature of $\approx 7000$ K \citep{wolfire03a}. Some energy input is required to maintain transsonic flows of this sort, and models based purely on gravitational instability-driven torques do not naturally produce such an input in quiescent discs. Because such models do not naturally make any predictions about star formation rates on either large or small scales, they also do not explain the physical origins of the star formation law. More generally, these models usually do not include any specific treatment of star formation feedback or its coupling to the interstellar medium, an obvious omission. Other authors have instead chosen to focus on the observed correlation between star formation and gas. Some authors have attempted to derive this correlation using a ``bottom up'' approach, whereby one begins by attempting to explain the inefficiency of star formation on small scales, and then builds a galaxy-scale star formation relation as the sum of small-scale relations \citep{krumholz05c, krumholz09b, padoan12a, federrath12a, federrath13a, krumholz13c, federrath15b, burkhart18a}.
These small-scale relations, while theoretically-motivated, can be checked directly against numerical simulations of self-gravitating turbulence, and the agreement is generally good \citep[e.g.,][]{padoan11a, federrath12a, padoan14a}. This approach allows one to explain the star formation rate on both small and large scales, and naturally incorporates star formation feedback on small scales. Furthermore, if these models are supplemented by chemical models that capture the transition between the warm, H~\textsc{i} and cold, H$_2$ phases of the ISM \citep{krumholz09b, krumholz13c}, they also correctly capture the observed dependence of the star formation rate on the chemical phase and metallicity of the ISM \citep{bolatto11a, wong13a, shi14a, filho16a, jameson16a, rafelski16a}. On the other hand, these models are generally silent on the question of galaxies' velocity dispersions, gravitational stability, or long term fuelling. Conversely, some authors have attempted to derive the star formation rate and velocity dispersion using a ``top down'' method, the fundamental assumption of which is that the star formation rate is set by considerations of force and energy balance within a galactic disc \citep[e.g.,][]{thompson05a, ostriker10a, ostriker11a, hopkins11a, faucher-giguere13a, hayward17a}. In these models, one considers a disc of a prescribed gas content and gravitational potential, and asks what star formation rate is required for star formation feedback to be vigorous enough to keep the disc in vertical pressure balance and energy balance. This approach has the advantage that it is rooted in simple physical considerations that must hold at some level, and it is the first step in the approach that we shall pursue in this paper. Moreover, it enables one to make predictions that link star formation, velocity dispersion, and Toomre stability, and thus unify three of the observed correlations discussed above.
However, top-down models that work solely based on the balance between feedback, vertical gravity, and dissipation have proven difficult to make work in practice. First of all, unless one posits a source of star formation feedback for which the momentum injected per star formed increases with gas surface density \citep[e.g., as trapped infrared radiation pressure does in the model of][]{thompson05a}, the natural prediction of these models is that the star formation rate per unit area should rise as the square of the gas surface density (e.g., equation 13 of \citealt{ostriker11a} or equation 18 of \citealt{faucher-giguere13a}). The predicted correlation $\dot{\Sigma}_* \propto \Sigma^p$ with $p \approx 2$ is substantially steeper than the observed correlation, which ranges between $p\approx 1$ in spatially resolved patches of local galaxies to $p\approx 1.5$ for rapidly star-forming galaxies as a whole.\footnote{\citet{narayanan12a} and \citet{faucher-giguere13a} argue that one can steepen the relation and increase the value of $p$ by adopting a CO to H$_2$ conversion factor that scales strongly with galaxy star formation rate. However, even adopting such a scaling, fits to the more recent and larger data sets favour $p \approx 1.7$ rather than $2$ \citep[c.f.~figure 3 of][]{thompson16a}, and recent dust-based measurements of gas content that are independent of CO suggest that even this is too steep \citep{genzel15a}.} Second, because these models compute the star formation rate from the weight of the ISM, they naturally predict that the star formation rate at a given surface density is independent of the metallicity or chemistry of the ISM, since these factors do not alter the weight. They can be reconciled with the strong observational evidence that metallicity and chemical phase do affect the rate of star formation only by positing that the efficiency of star formation feedback is metallicity-dependent. 
For example, the model of \citet{ostriker10a} predicted that the regions of comparable gas surface density in the Small Magellanic Cloud and the Milky Way should form stars at nearly equal rates. \citet{bolatto11a} found that this prediction was incorrect, and proposed a modification to the theory in which the efficiency of photoelectric heating scales inversely with metallicity, and thus stars pressurise the ISM more efficiently in low-metallicity galaxies. While this does fix agreement with the observations, \citet{krumholz13c} points out that the physical mechanism proposed by \citet{bolatto11a} to produce the metallicity dependence is not correct, and, more generally, that there is no good reason to expect that feedback efficiencies will depend on metallicity in the ways required to explain the observations. In particular, supernovae are thought to be the dominant feedback mechanism in most galaxies, and supernova momentum injection is nearly independent of metallicity \citep[e.g.,][]{thornton98a, martizzi15a, gentry17a}. Third, these models do not naturally predict either the sub-galactic star formation law or the gravitational stability parameter, forcing one to adopt one or the other based on empirical observations. If one adopts the observed sub-galactic star formation rate \citep[e.g.,][]{ostriker11a}, then, as we shall show below, one predicts velocity dispersions and Toomre $Q$ parameters sharply at odds with what is observed. Conversely, one can posit that star formation rates are very sensitive to the Toomre $Q$ parameter, so that the star formation rate self-adjusts to maintain $Q\approx 1$ \citep[e.g.,][]{faucher-giguere13a, hayward17a}. 
By construction this produces the correct Toomre $Q$, but it still fails to reproduce the observed $\sigma-\dot{M}_*$ correlation (because the predicted star formation law is too steep -- \citealt{krumholz16a}), and it also predicts that star formation on small scales is very efficient in high surface density galaxies, contrary to observations. Just to give one example of this difficulty: if star formation efficiencies on small scales were higher in high surface density galaxies, then the ratio of infrared emission (a star formation tracer) to HCN luminosity (a tracer of dense gas on small scales) should increase with star formation rate, whereas the observed trend is the opposite \citep{garcia-burillo12a, usero15a}. This approach also runs into observational difficulty with its central assumption that galaxies' star formation rates are very sensitive to the value of the Toomre $Q$ parameter; observations strongly disfavour any such correlation \citep{leroy08a}. Instead, both observations and simulations \citep{agertz09a, goldbaum15a, goldbaum16a} seem to suggest that the response of a disc to a drop in Toomre $Q$ is that the disc becomes non-axisymmetric and moves mass inwards, rather than that its star formation rate dramatically increases. \subsection{This Work and Its Motivation} Our goal in this work is to unify models of galactic discs that focus on transport, star formation fuelling, and gravitational instability with those that focus on the energy and momentum balance of star formation feedback. We show below that this approach remedies many of the observational problems we have identified with the various theories that have been proposed to date. However, the need for such a synthesis can be driven home simply by more basic consideration of the observations and their energetic implications. 
The turbulent energy per unit area contained in a galactic disc of gas surface density $\Sigma_{\rm g}$ and velocity dispersion $\sigma_{\rm g}$ is \begin{equation} \left(\frac{dE}{dA}\right)_{\rm turb} \approx \frac{3}{2}\Sigma_{\rm g} \sigma_{\rm g}^2 = 3.1\times 10^9\, \Sigma_{\rm g,10} \sigma_{\rm g,10}^2\mbox{ erg cm}^{-2}, \end{equation} where $\Sigma_{\rm g,10} = \Sigma_{\rm g}/10$ $M_\odot$ pc$^{-2}$ and $\sigma_{\rm g,10} = \sigma_{\rm g}/10$ km s$^{-1}$; the scaling factors are typical values at the Solar Circle in the Milky Way. The energy should dissipate due to decay of turbulence over a timescale comparable to the crossing time, but, in a disc with $Q\approx 1$, this timescale is comparable to the galactic dynamical time $t_{\rm dyn} = r/v_\phi$, where $r$ is the galactocentric radius and $v_\phi$ is the rotation velocity. It is therefore of interest to consider possible sources of power that are capable of delivering this amount of energy per unit area over a timescale $t_{\rm dyn}$. As noted above, \citet{schmidt16a} directly detect flows of mass radially inwards through the discs of local spiral galaxies with mass fluxes $\dot{M}_{\rm in} \sim 1$ $M_\odot$ yr$^{-1}$. These observations are difficult due to the near cancellation of inflow and outflow rates around spiral arms and in outer regions where galaxies become significantly lopsided. At a minimum, the magnitude of the inflow should be regarded as significantly uncertain. However, we note that inflow rates of roughly this size must be ubiquitous to explain star formation fuelling.
In a galaxy with a flat rotation curve, the amount of gravitational potential energy per unit area per unit time released by this flow of mass down the potential well is \begin{equation} \frac{d^2 E}{dt\, dA} \approx \frac{\dot{M}_{\rm in} v_\phi^2}{2\pi r^2}, \end{equation} so over a galactic dynamical time the flow delivers an energy per unit area \begin{equation} \left(\frac{dE}{dA}\right)_{\rm inflow} \approx \frac{\dot{M}_{\rm in} v_\phi}{2\pi r} = 6.5\times 10^9\, \dot{M}_{\rm in,1} v_{\phi,200} r_{10}^{-1}\mbox{ erg cm}^{-2}, \end{equation} where $\dot{M}_{\rm in,1} = \dot{M}_{\rm in}/1$ $M_\odot$ yr$^{-1}$, $v_{\phi,200} = v_\phi/200$ km s$^{-1}$, and $r_{10} = r/10$ kpc. In comparison, star formation feedback is expected to inject energy at a rate per unit area \begin{equation} \frac{d^2 E}{dt\, dA} \approx \dot{\Sigma}_* \left\langle \frac{p_*}{m_*}\right\rangle \sigma_{\rm g}, \end{equation} where $\dot{\Sigma}_*$ is the star formation rate per unit area and $\langle p_*/m_*\rangle$ is the terminal momentum per unit mass delivered by star formation feedback. (We give a detailed explanation for the origin of this expression below, but intuitively it results simply from the assumption that motions driven by stellar feedback break up and add their energy to the turbulent background once their expansion velocities become comparable to the overall velocity dispersion, so the energy added per ``injection event'' is of order the momentum injected times the velocity dispersion.) Simulations suggest the momentum per unit mass is $\langle p_*/m_*\rangle \approx$ 3000 km s$^{-1}$ for single supernovae \citep{cioffi88a, thornton98a, martizzi15a, kim15a, walch15b, gentry17a, kim17a}.
Over a galactic dynamical time, and scaling to Solar Circle values again, \begin{eqnarray} \left(\frac{dE}{dA}\right)_{\rm sf} & \approx & \dot{\Sigma}_* \left\langle \frac{p_*}{m_*}\right\rangle \sigma_{\rm g} \frac{r}{v_\phi} \nonumber \\ & = & 3.1\times 10^9\, \dot{\Sigma}_{*,-3} \sigma_{\rm g,10} r_{10} v_{\phi,200}^{-1}\mbox{ erg cm}^{-2}, \end{eqnarray} where $\dot{\Sigma}_{*,-3} = \dot{\Sigma}_* / 10^{-3}$ $M_\odot$ pc$^{-2}$ Myr$^{-1}$. The implication of this calculation is that, at least at the order of magnitude level, inflow and star formation feedback are comparably important energetically in the Solar neighbourhood, and that both are capable of supplying enough energy to replenish the turbulence in the ISM over a galactic dynamical time. Moreover, if we were to repeat this calculation for other types of galaxies we might well get quite different results. The ratio of $(dE/dA)_{\rm inflow}$ to $(dE/dA)_{\rm sf}$ scales as $(\dot{M}_{\rm in}/\dot{M}_*) (v_\phi^2/\sigma_{\rm g})$, where $\dot{M}_*$ is the total star formation rate. We do not have direct measurements of $\dot{M}_{\rm in}$ except in local spirals, but assuming that $\dot{M}_{\rm in}/\dot{M}_* \sim 1$, as would be required to explain star formation fuelling and as is observed locally, star formation should be energetically dominant in galaxies with smaller $v_\phi$ (for example local dwarfs), while inflow should dominate those with larger $\sigma_g$ (for example high-$z$ galaxies). Clearly it is not reasonable to ignore either star formation feedback or inflows in building a model of galaxy discs, as has been the practice for most work up to this point. Below we build a minimal unified model that combines both of these processes. We show that, while simple, this model is far more successful than either feedback-only or inflow-only models at explaining the observed correlations obeyed by galaxy discs. 
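The three order-of-magnitude estimates above are easy to verify explicitly. The following snippet (a plain cgs evaluation of the scalings just quoted, not code from any analysis pipeline) reproduces the $3.1\times 10^9$, $6.5\times 10^9$, and $3.1\times 10^9\mbox{ erg cm}^{-2}$ figures at the fiducial parameter values.

```python
import math

# cgs constants and unit conversions
MSUN = 1.989e33            # g
PC = 3.086e18              # cm
KPC, KMS = 1e3 * PC, 1e5   # cm, cm/s
YR = 3.156e7               # s
MYR = 1e6 * YR

# Fiducial Solar Circle values from the text
Sigma_g = 10.0 * MSUN / PC**2               # gas surface density
sigma_g = 10.0 * KMS                        # gas velocity dispersion
Mdot_in = 1.0 * MSUN / YR                   # mass inflow rate
v_phi, r = 200.0 * KMS, 10.0 * KPC          # rotation speed, radius
Sigma_dot_star = 1e-3 * MSUN / PC**2 / MYR  # SFR per unit area
p_per_m = 3000.0 * KMS                      # feedback momentum per unit mass

E_turb = 1.5 * Sigma_g * sigma_g**2                    # (dE/dA)_turb
E_inflow = Mdot_in * v_phi / (2.0 * math.pi * r)       # (dE/dA)_inflow
E_sf = Sigma_dot_star * p_per_m * sigma_g * r / v_phi  # (dE/dA)_sf

for name, val in [("turbulent", E_turb), ("inflow", E_inflow), ("feedback", E_sf)]:
    print(f"{name}: {val:.2e} erg cm^-2")
```

All three come out within a few per cent of the quoted values, confirming that inflow and feedback are comparable energy sources at the Solar Circle.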
We derive the model in \autoref{sec:model}, and compare it to a variety of observations in \autoref{sec:observations}. We discuss the implications of our findings for galaxy formation in \autoref{sec:discussion}, and conclude in \autoref{sec:summary}. \section{Model} \label{sec:model} \begin{table*} \begin{tabular}{llll} \hline Symbol & Fiducial Value & Meaning & Defining equation \\ \hline \multicolumn{4}{c}{Inputs to model} \\ \hline $\Sigma_{\rm g}$ & - & Gas surface density & - \\ $\Sigma_*$ & - & Stellar surface density & - \\ $\sigma_{\rm g}$ & - & Gas velocity dispersion (total thermal plus non-thermal) & - \\ $\sigma_*$ & - & Stellar velocity dispersion & - \\ $\rho_{\rm d}$ & - & Dark matter density & - \\ $v_\phi$ & - & Galaxy rotation curve velocity & - \\ $\Omega$ & - & Galaxy angular velocity & - \\ $t_{\rm orb}$ & - & Galaxy orbital period, $t_{\rm orb} = 2\pi/\Omega$ & - \\ $\beta$ & 0 & Rotation curve index, $\beta = d\ln v_\phi / d\ln r$ & - \\ $f_{g,Q}$ & 0.5 & Fractional contribution of gas to $Q$ & \ref{eq:fgQ} \\ $f_{g,P}$ & 0.5 & Fractional contribution of gas self-gravity to midplane pressure & \ref{eq:fgP} \\ $f_{\rm sf}$ & - & Fraction of ISM in star-forming molecular phase & \ref{eq:sfr} \\ \hline \multicolumn{4}{c}{Physics parameters} \\ \hline $Q_{\rm min}$ & 1 & Minimum possible disc stability parameter & \ref{eq:Qdef} \\ $\phi_{\mathrm{mp}}$ & 1.4 & Ratio of total pressure to turbulent pressure at midplane & \ref{eq:phimp} \\ $\eta$ & 1.5 & Scaling factor for turbulent dissipation rate & \ref{eq:loss_rate} \\ $\phi_{\rm Q}$ & 2 & One plus ratio of gas to stellar $Q$ & \ref{eq:phiQ} \\ $\phi_{\rm nt}$ & 1 & Fraction of velocity dispersion that is non-thermal & \ref{eq:phith} \\ $\epsilon_{\rm ff}$ & 0.015 & Star formation efficiency per free-fall time & \ref{eq:sfr} \\ $t_{\rm sf,max}$ & 2 Gyr & Maximum star formation timescale & \ref{eq:sfr1} \\ $\phi_a$ & 2 & Offset between resolved and unresolved star formation law normalisations &
\ref{eq:sfr_averaged} \\ \hline \multicolumn{4}{c}{Model outputs} \\ \hline $\rho_{\rm min}$ & - & Minimum midplane density required to produce rotation curve & \ref{eq:rho_min} \\ $t_{\rm orb,T}$ & - & Orbital period at which galaxies switch from GMC to Toomre regime & \ref{eq:ttoomre} \\ $\sigma_{\rm sf}$ & - & Gas velocity dispersion that can be sustained by star formation alone & \ref{eq:sigma_sf} \\ $\Sigma_{\rm sf}$ & - & Gas surface density below which star formation alone can sustain turbulence & \ref{eq:Sigma_sf} \\ $\dot{M}_{\rm ss}$ & - & Steady-state mass inflow rate through the disc & \ref{eq:mdot_steady} \\ \hline \end{tabular} \caption{ \label{tab:quantities} Symbol definitions. The fiducial value listed is the value used in numerical evaluations and plots unless otherwise stated. } \end{table*} In this section we develop a model for a galactic disc in both vertical hydrostatic and energy equilibrium, where the sources of energy input include both star formation feedback and gravitational potential energy released by inward flow of gas through the disc. A central premise of our model is that the gas is dynamically important and capable of adjusting its inflow rate to maintain marginal stability, rather than simply acting as a passive tracer whose transport rate is dictated by the stellar potential independent of the dynamical state of the gas. This premise likely fails in regions where the gas contributes a negligible mass fraction even at the midplane, for example, the central $\sim 3$ kpc of the Milky Way where the Galactic bar dominates the dynamics \citep[e.g.,][]{binney91a}. We argue in \autoref{ssec:limitations} that the vast majority of the interstellar medium by both mass and star formation rate is not found in such regions, so that our model is applicable to the bulk of the ISM and star formation in the Universe. 
For now, however, we simply take as given that the transport rate is not dictated by a stellar bar or similar structures, but is able to self-adjust. For convenience we summarise all the quantities used in our model in \autoref{tab:quantities}. We treat our model galaxy as a thin, axisymmetric disc characterised at every radius $r$ by a total gas surface density $\Sigma_{\rm g}$ and 1D gas velocity dispersion $\sigma_{\rm g}$. In addition to gas, the disc contains stars and dark matter. The dark matter has a density $\rho_{\rm d}$, and we assume that its distribution is close to spherical. If there is a spheroidal stellar distribution, we also include its density in $\rho_{\rm d}$. Other stars are in a disc, characterised by a surface density $\Sigma_*$ and a 1D velocity dispersion $\sigma_*$. For simplicity we assume that both $\sigma_{\rm g}$ and $\sigma_*$ are isotropic. In real stellar discs at $z\approx 0$ this assumption fails at the factor of $\sim 2$ level. The gas and stars orbit within a steady gravitational potential, which we characterise by the velocity $v_\phi$ required for material in orbit to be in balance between centrifugal and gravitational forces in the co-rotating frame. The rotation curve has an index $\beta = d\ln v_\phi /d\ln r$, the angular velocity at radius $r$ is $\Omega = v_\phi/r$, and the orbital period is $t_{\rm orb} = 2 \pi r/v_\phi$. We provide the source code to perform the computations involved in the model, and produce all the plots included in the paper, at \url{https://bitbucket.org/krumholz/kbfc17}. \subsection{Gravitational Instability} A central ansatz of our model, following \citet{krumholz10c}, \citet{cacciato12a}, \citet{forbes12a}, and \citet{forbes14a}, is that gravitational instability-driven transport will prevent the disc from ever becoming more than marginally gravitationally unstable. 
If the disc begins to become unstable, the instability will break axisymmetry and the subsequent torques will drive mass inward until marginal stability is restored. We therefore begin by expressing this condition. Modern treatments of gravitational instability include the effects of multiple stellar populations as well as gas, along with the effects of finite thickness and the dissipative nature of gas \citep{rafikov01a, romeo10a, romeo11a, elmegreen11a, hoffmann12a, romeo13a}. In this work we use the simple approximation given by \citet{romeo13a}, \begin{equation} \label{eq:Qdef} Q \approx \left(Q_{\rm g}^{-1} + \frac{2\sigma_{\rm g}\sigma_*}{\sigma_{\rm g}^2+\sigma_*^2} Q_*^{-1}\right)^{-1} \end{equation} where \begin{equation} \label{eq:Qgas} Q_{\rm g} = \frac{\kappa\sigma_{\rm g}}{\pi G \Sigma_{\rm g}} \end{equation} and similarly for $Q_*$. Here $\kappa = \sqrt{2(\beta+1)}\Omega$ is the epicyclic frequency. This expression is valid as long as $Q_{\rm g} < Q_*$, the quasi-spherical dark matter halo contributes negligibly to the gravitational stability or instability of the system (i.e., $Q_{\rm d} \gg Q_*$, where $Q_{\rm d}$ is the dark matter $Q$), and the ratio of vertical to radial velocity dispersions for the gas and stars is $\gtrsim 0.5$. The latter two conditions hold broadly across all the galaxies we shall consider; the first requires a bit more discussion, which we defer to the end of this section. For convenience, we can rewrite \autoref{eq:Qdef} as \begin{equation} \label{eq:Q} Q = f_{g,Q} Q_{\rm g} \end{equation} where \begin{equation} \label{eq:fgQ} f_{g,Q} \equiv \frac{\Sigma_{\rm g}}{\Sigma_{\rm g} + [2\sigma_{\rm g}^2/(\sigma_{\rm g}^2+\sigma_*^2)] \Sigma_*}. \end{equation} The quantity $f_{g,Q}$ can be thought of as defining the effective gas fraction in the disc for the purposes of computing gravitational stability. 
It clearly behaves as we intuitively expect, in that $f_{g,Q} \rightarrow 1$ for $\Sigma_{\rm g} \gg \Sigma_*$, and $f_{g,Q} \rightarrow 0$ for $\Sigma_{\rm g} \ll \Sigma_*$. In the Solar neighbourhood, which has gas properties $\Sigma_{\rm g} \approx 14$ $M_\odot$ pc$^{-2}$ \citep{mckee15a}, $\sigma_{\rm g} \approx 7$ km s$^{-1}$ \citep{kalberla09a} and stellar properties $\Sigma_* \approx 33$ $M_\odot$ pc$^{-2}$ and $\sigma_* \approx 16$ km s$^{-1}$ \citep{mckee15a}, we have $Q_* \approx Q_{\rm g} \approx 1.5$ and $f_{g,Q} \approx 0.6$. The condition for stability is that $Q$ be larger than a value $Q_{\rm min}$ of order unity that depends on the thickness of the disc (thicker discs can be stable at lower $Q$) and the gas equation of state (more dissipative equations of state require higher $Q$ for stability). As a fiducial value we shall adopt $Q_{\rm min} = 1$, which is appropriate for discs that are relatively quiescent. There is some evidence from cosmological simulations that instability can set in at slightly higher $Q \sim 2 - 3$ in perturbed discs where a greater fraction of the turbulence is in compressive modes that do not support the gas \citep{inoue16a}, but since this is only a factor of $\sim 2$ effect, and then only in some of the galaxies with which we are concerned, we will neglect this complication. The case $Q_* < Q_{\rm g}$, where stars rather than gas are the most unstable component, requires a bit more attention. Because gas is dissipational, and thus usually has a lower velocity dispersion than stars, it tends to be the most unstable component in any gas-rich system. Thus we expect $Q_* > Q_{\rm g}$ to hold in local dwarfs and lower-mass spirals, in all star-forming galaxies at high redshift, and in all mergers and starbursts.
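The Solar-neighbourhood numbers quoted above follow directly from \autoref{eq:Qgas} and \autoref{eq:fgQ}. The sketch below reproduces them; in addition to the values given in the text, it assumes a flat rotation curve ($\beta = 0$) with $v_\phi = 220$ km s$^{-1}$ at $r = 8.2$ kpc, which are illustrative choices of ours rather than quantities specified by the model.

```python
# Sketch: check the quoted Solar-neighbourhood stability numbers.
# Assumed (not from the text): a flat rotation curve (beta = 0) with
# v_phi = 220 km/s at r = 8.2 kpc, so kappa = sqrt(2) * Omega.
import math

G = 4.301e-3                     # gravitational constant [pc Msun^-1 (km/s)^2]
Sigma_g, sigma_g = 14.0, 7.0     # gas surface density [Msun/pc^2] and dispersion [km/s]
Sigma_s, sigma_s = 33.0, 16.0    # stellar surface density and dispersion
Omega = 220.0 / 8200.0           # angular velocity [km/s/pc]
kappa = math.sqrt(2.0) * Omega   # epicyclic frequency for beta = 0

Q_g = kappa * sigma_g / (math.pi * G * Sigma_g)   # gas Toomre parameter
Q_s = kappa * sigma_s / (math.pi * G * Sigma_s)   # stellar analogue
f_gQ = Sigma_g / (Sigma_g + 2.0 * sigma_g**2 / (sigma_g**2 + sigma_s**2) * Sigma_s)

print(Q_g, Q_s, f_gQ)   # roughly 1.4, 1.4, 0.57
```

With these assumed rotation values the individual Toomre parameters come out near 1.4 and $f_{g,Q} \approx 0.57$, consistent with the $\approx 1.5$ and $\approx 0.6$ quoted above to within the precision of the inputs.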
However, massive local spirals like the Milky Way are sufficiently gas poor ($f_{\rm gas} \sim 10-20\%$ -- \citealt{saintonge11a}) that for the most part they have $Q_* < Q_{\rm g}$: \citet{romeo17a} find $Q_{\rm g}/Q_* \approx 0.5-10$ for the HERACLES / THINGS sample, with the bulk of the data at $Q_{\rm g}/Q_* \approx 3$. Our expression for $Q$ (\autoref{eq:Qdef}) assumes $Q_{\rm g} < Q_*$, but the equivalent expression for $Q_{\rm g} > Q_*$ \citep{romeo13a} differs only slightly when $Q_{\rm g}$ and $Q_*$ are within a factor of a few of one another. Quantitatively, using the Solar neighbourhood velocity dispersions quoted above ($\sigma_{\rm g} \approx 7$ km s$^{-1}$, $\sigma_* \approx 16$ km s$^{-1}$), the error produced by using \autoref{eq:Qdef} is 10\% for $Q_{\rm g}/Q_* = 3$, and 17\% for $Q_{\rm g}/Q_* = 10$. This is well below the factor of $\approx 2$ uncertainty in $Q_{\rm min}$, so for simplicity we use \autoref{eq:Qdef} in all cases, rather than using a different form for large local spirals than for all the other types of galaxies we will consider. One might also worry that, in the $Q_* < Q_{\rm g}$ regime, gravitational instabilities in the stars might not induce perturbations in the gas capable of driving transport. However, \citet{romeo17a} find that the local spirals with $Q_* < Q_{\rm g}$ are also in the regime where perturbations in the gas and stars are strongly coupled (e.g., see their Figure 5), so this is not a concern. \subsection{Vertical Force Balance} A second ansatz of our model, following a number of authors \citep[e.g.,][]{boulares90a, piontek07a, koyama09a, ostriker10a}, is that the gas is in vertical hydrostatic equilibrium.
The spatially-averaged momentum equation for a time-steady isothermal gas reads (\citealt{krumholz17b}, equation 10.9; also see \citealt{kim15b}) \begin{equation} \label{eq:vert_hydrostatic} \frac{\partial}{\partial z} \left\langle \rho_{\rm g} \left(\sigma_{\rm th}^2 + v_z^2 + v_A^2\right)\right\rangle - \frac{\partial}{\partial z}\left\langle\frac{B_z^2}{4\pi}\right\rangle -\left\langle\rho_{\rm g} g_z\right\rangle = 0 \end{equation} where $\rho_{\rm g}$ is the gas density, $\sigma_{\rm th}$ is the gas thermal velocity dispersion, $v_z$ is the vertical velocity, $v_A$ is the gas Alfv\'en speed, $B_z$ is the $z$ component of the magnetic field, $g_z$ is the vertical gravitational acceleration, and we have oriented our coordinate system so the disc midplane lies in the $xy$ plane; the angle brackets denote averaging over the area of the disc, where the area considered is small compared to the disc scale length, but large compared to the size of an individual molecular cloud or star-forming complex. The first term represents the force exerted by the gradient in thermal, turbulent, and magnetic pressure, the second represents the force due to magnetic tension, and the third represents the force due to gravity. Magnetic tension tends to be subdominant except for unusual, artificially-constructed magnetic field configurations, and thus we can generally drop the second term. This expression omits the contribution from cosmic ray pressure, which is likely comparable to magnetic pressure in importance \citep[e.g.,][]{boulares90a}.
Integrating \autoref{eq:vert_hydrostatic} from $z=0$ to $\infty$, and assuming that $\rho_{\rm g}\rightarrow 0$ and the Alfv\'en speed remains finite as $z\rightarrow\infty$, we have \begin{equation} \label{eq:vert_hydrostatic1} \rho_{\rm g,mp} \left(\sigma_{\rm g}^2 + v_A^2\right)_{\rm mp} = -\int_0^\infty \langle \rho_{\rm g} g_z\rangle \, dz, \end{equation} where the subscript mp indicates that a quantity is to be evaluated at the disc midplane, and where we have dropped the angle brackets and implicitly understand that midplane terms represent area averages over the midplane; in writing this expression, we have relied on our assumption that the gas velocity dispersion is isotropic, so $\langle \rho_{\rm g} v_z^2\rangle = \rho_{\rm g,mp} (\sigma_{\rm g}^2 - \sigma_{\rm th}^2)$. We write the left hand side as \begin{equation} \label{eq:phimp} \rho_{\rm g,mp} \left(\sigma_{\rm g}^2 + v_A^2\right)_{\rm mp} \equiv \phi_{\mathrm{mp}} \rho_{\rm g,mp} \sigma_{\rm g}^2, \end{equation} where $\phi_{\mathrm{mp}}$ is the factor by which the midplane pressure exceeds that due to turbulent plus thermal pressure alone, as a result of magnetic and cosmic ray pressure. Equipartition between magnetic and kinetic degrees of freedom in the directions transverse to the field corresponds to an Alfv\'en Mach number of $2/3$, which is $\phi_{\mathrm{mp}} = 1.4$ assuming that thermal pressure is unimportant compared to turbulent pressure. A cosmic ray pressure comparable to the magnetic pressure would increase this to $\phi_{\mathrm{mp}} \approx 2$. On the other hand, if thermal pressure is non-negligible, for example in modern dwarf galaxies, then kinetic-magnetic equipartition implies $\phi_{\mathrm{mp}}$ closer to unity, since the gas reaches equipartition only between the non-thermal motions and the magnetic field.
The difference between $\phi_{\mathrm{mp}} = 2$ and $\phi_{\mathrm{mp}}=1$ is small enough that we will not worry about it, and we will simply use $\phi_{\mathrm{mp}}=1.4$ as our fiducial value. The term on the right hand side of \autoref{eq:vert_hydrostatic1} depends on the distribution of gas, stars, and dark matter, since each of these components contributes to $g_z$. To parameterise this dependence, note that the potential $\psi$ obeys the Poisson equation, which in cylindrical coordinates (assuming symmetry in the azimuthal direction) reads \begin{equation} \frac{1}{r}\frac{\partial}{\partial r}\left(r \frac{\partial \psi}{\partial r}\right) + \frac{\partial^2 \psi}{\partial z^2} = 4 \pi G \rho, \end{equation} where $\rho$ is the total density including all components. The radial gradient of $\psi$ is related to the rotation curve by \begin{equation} \frac{v_\phi^2}{r} = \frac{\partial \psi}{\partial r}, \end{equation} and using this in the Poisson equation we obtain \begin{equation} \frac{\partial g_z}{\partial z} = 4 \pi G \rho - 2 \beta \Omega^2, \end{equation} where $g_z = \partial \psi/\partial z$ and $\beta = d\ln v_\phi/ d\ln r$ is the rotation curve index. Integrating, we therefore have \begin{equation} g_z \approx \int_0^z \left(4 \pi G \rho - 2\beta \Omega^2\right) \,dz'. \end{equation} Note that, although it is tempting to approximate that $\beta\Omega^2$ is constant for small $z$, this approximation clearly fails for the common case of a flat rotation curve, $\beta = 0$, because $\beta = 0$ at the midplane but not above it -- see Appendix C of \citet{mckee15a} for discussion.
The weight is therefore \begin{equation} \label{eq:wgt_integral} \int_0^\infty \left\langle \rho_{\rm g} g_z\right\rangle \, dz = 2\pi G \int_0^\infty \rho_{\rm g} \left[\Sigma(z) - \frac{1}{\pi G}\int_0^z \beta \Omega^2 \, dz' \right] \, dz \end{equation} where $\Sigma(z) = 2\int_0^z \rho \, dz'$ is the total column density of material at heights between $-z$ and $z$, and we assume symmetry about $z=0$. If we write out the total column as the sum of the gas, stellar, and dark components, $\Sigma(z) = \Sigma_{\rm g}(z) + \Sigma_*(z) + \Sigma_{\rm d}(z)$, then we can integrate the gaseous part by the usual change of variables $d\Sigma_{\rm g} = 2 \rho_{\rm g} \, dz$, yielding \begin{eqnarray} \lefteqn{\int_0^\infty \left\langle \rho_{\rm g} g_z\right\rangle \, dz = \frac{\pi}{2} G \Sigma_{\rm g}^2 + 2\pi G} \nonumber \\ & & {} \cdot \int_0^\infty \rho_{\rm g} \left(\Sigma_*(z) + \Sigma_{\rm d}(z) - \frac{1}{\pi G} \int_0^z \beta\Omega^2\, dz'\right) \, dz. \label{eq:wgt_integral2} \end{eqnarray} The dark matter scale height is much larger than the gas scale height, so we can approximate $\Sigma_{\rm d}(z) = 2 \rho_{\rm d} z$ in \autoref{eq:wgt_integral2}, where $\rho_{\rm d}$ is the dark matter density at the midplane. Similarly, the stellar scale height is at least as large as the gas scale height.
We can therefore use the approximation suggested by \citet{ostriker10a}, \begin{eqnarray} \lefteqn{\int_0^\infty \left\langle \rho_{\rm g} g_z\right\rangle \, dz \approx \frac{\pi}{2} G \Sigma_{\rm g}^2 \cdot {}} \nonumber \\ & & \left[1 + \frac{\zeta_{\rm d} \rho_{\rm d} + \zeta_* \rho_*}{\rho_{\rm g,mp}} - \frac{4}{\pi G \Sigma_{\rm g}^2} \int_0^\infty \rho_{\rm g} \int_0^z \beta\Omega^2\, dz' \, dz \right], \label{eq:wgt_integral3} \end{eqnarray} where $\rho_{\rm g,mp}$ is the midplane gas density, $\rho_*$ is the midplane stellar density, and $\zeta_{\rm d}$ and $\zeta_*$ are numerical factors of order unity that depend on the gas density distribution and the relative scale heights of gas and stars.\footnote{Note that \citet{ostriker11a}'s equation 2 is a special case of \autoref{eq:wgt_integral3}; one can derive their equation by adopting $\beta=1$, and assuming that the angular velocity $\Omega$ arises purely from a spherical matter distribution. Also note that our $\zeta_{\rm d}$ and $\zeta_*$ differ from theirs by a factor of 4. We choose our normalisation so that $\zeta \rightarrow 1$ exactly in the limiting case where the gas and stars have the same vertical distribution.} For the dark matter, which has a scale height much larger than the gas scale height, $\zeta_{\rm d}\approx 1.33$. The stellar scale height can range from much larger than that of the gas, in which case $\zeta_* \approx 1.33$ as for the dark matter, to comparable to the gas, in which case $\zeta_* \approx 1$, with exact equality holding in the case where the gas and stars have identical vertical distributions.
We therefore define \begin{equation} \label{eq:fgP} f_{g,P} \equiv \left[1 + \frac{\zeta_{\rm d} \rho_{\rm d} + \zeta_* \rho_*}{\rho_{\rm g,mp}} - \frac{4}{\pi G \Sigma_{\rm g}^2} \int_0^\infty \rho_{\rm g} \int_0^z \beta\Omega^2\, dz' \, dz \right]^{-1} \end{equation} so that \begin{equation} \label{eq:wgt_fgP} \int_0^\infty \left\langle \rho_{\rm g} g_z\right\rangle \, dz = \frac{\pi}{2} G f_{g,P}^{-1}\Sigma_{\rm g}^2. \end{equation} The physical meaning of $f_{g,P}$ is that it is the fraction of the midplane pressure due to the local self-gravity of the gas (the unity term in \autoref{eq:fgP}), as opposed to local dark matter (as represented by the $\rho_{\rm d}$ term), local stars (as represented by the $\rho_*$ term), or material of any type interior to the radius under consideration (as represented by the $\beta\Omega^2$ term). In the Solar neighbourhood, \citet{mckee15a} obtain estimates $\rho_{\rm g,mp} = 0.041$ $M_\odot$ pc$^{-3}$, $\rho_* = 0.043$ $M_\odot$ pc$^{-3}$, and $\rho_{\rm d} \ll \rho_*$. Using their equation 94, and adopting $\beta=0$ at the midplane, gives $1/(\pi G) \int_0^z \beta \Omega^2 \, dz' = 0.01$ $M_\odot$ pc$^{-2}$ at $z=150$ pc, approximately the gas scale height. Using these values in \autoref{eq:fgP}, and adopting $\zeta_* = 1.33$ since the stellar scale height is much larger than the gas scale height, gives $f_{g,P} \approx 0.4$ for the Solar neighbourhood, similar to $f_{g,Q}$. Finally, inserting \autoref{eq:phimp} and \autoref{eq:wgt_fgP} into \autoref{eq:vert_hydrostatic1} gives \begin{equation} \rho_{\rm g,mp} = \frac{\pi}{2 \phi_{\mathrm{mp}} f_{g,P}} G \left(\frac{\Sigma_{\rm g}}{\sigma_{\rm g}}\right)^2. \end{equation} Rewriting in terms of $Q$, we arrive at our final expression for the midplane density, \begin{equation} \label{eq:rhomp} \rho_{\rm g,mp} = \frac{(1+\beta) f_{g,Q}^2}{\pi Q^2 \phi_{\mathrm{mp}} f_{g,P}} \left(\frac{\Omega^2}{G}\right). 
\end{equation} \subsection{Energy Equilibrium} The third assumption of our model is that gas discs are in energy equilibrium, meaning that the rate at which energy is lost due to dissipation of turbulence (ultimately leading to radiative losses) balances the rate at which it is added due to star formation feedback and input of gravitational energy due to non-axisymmetric torques. We must therefore calculate each of these three rates. \subsubsection{Turbulent Dissipation} Dissipation of supersonic turbulence has been subject to extensive study \citep{stone98a, mac-low98a, mac-low99b, lemaster09a}, and the consensus of this work is that the energy is lost to shocks (and, in weakly-ionised plasmas, ion-neutral friction -- \citealt{burkhart15a}) in roughly a flow crossing time at the outer scale of the turbulence. Thus the dissipation rate per unit area should be the kinetic energy per unit area divided by the crossing time. To determine the crossing time, we approximate that the outer scale of the turbulence is of order the gas scale height, and following \citet{forbes12a} we approximate this as \begin{equation} H_{\rm g} \approx \frac{\sigma_{\rm g}^2}{\pi G [\Sigma_{\rm g} + (\sigma_{\rm g}/\sigma_*)\Sigma_*]}, \end{equation} where the factor $\sigma_{\rm g}/\sigma_*$ in the denominator has been chosen to interpolate between the two extreme cases where $\sigma_{\rm g}/\sigma_* \ll 1$ and $\sigma_{\rm g}/\sigma_* = 1$. In the former case, the gas is so much thinner than the stars that the stellar distribution contributes negligibly to the vertical gravity of the gas, while in the latter case the two components have approximately the same vertical distribution. 
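To give a sense of scale, this approximation for $H_{\rm g}$ can be evaluated for the Solar-neighbourhood values quoted earlier ($\Sigma_{\rm g} \approx 14$ and $\Sigma_* \approx 33$ $M_\odot$ pc$^{-2}$, $\sigma_{\rm g} \approx 7$ and $\sigma_* \approx 16$ km s$^{-1}$); the sketch below is purely illustrative and not part of the model code.

```python
# Sketch: evaluate the scale-height approximation H_g for Solar-neighbourhood
# values (inputs taken from the text; this is illustrative only).
import math

G = 4.301e-3                     # gravitational constant [pc Msun^-1 (km/s)^2]
Sigma_g, sigma_g = 14.0, 7.0     # gas surface density [Msun/pc^2], dispersion [km/s]
Sigma_s, sigma_s = 33.0, 16.0    # stellar surface density and dispersion

# H_g = sigma_g^2 / (pi G [Sigma_g + (sigma_g/sigma_s) Sigma_s])
H_g = sigma_g**2 / (math.pi * G * (Sigma_g + (sigma_g / sigma_s) * Sigma_s))
print(H_g)   # ~130 pc
```

The result, $\approx 130$ pc, is comparable to the $\approx 150$ pc gas scale height quoted in the previous subsection, as expected.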
With this approximation, we can write the loss rate as \begin{eqnarray} \label{eq:ldiss} \mathcal{L} & = & \eta\frac{\Sigma_{\rm g} (\sigma_{\rm g}^2-\sigma_{\rm th}^2)}{H_{\rm g}/\sqrt{\sigma_{\rm g}^2-\sigma_{\rm th}^2}} \\ & = & \frac{2(1+\beta)}{\pi G Q^2} \eta \phi_Q \phi_{\rm nt}^{3/2} f_{g,Q}^2 \Omega^2 \sigma_{\rm g}^3. \label{eq:loss_rate} \end{eqnarray} In \autoref{eq:ldiss}, the numerator is the kinetic energy per unit area, the denominator is the scale-height crossing time, and $\sigma_{\rm th}$ is the purely thermal portion of the gas velocity dispersion, which is not subject to radiative loss because the gas temperature is assumed to be set by radiative equilibrium. The quantity $\eta$ is a factor of order unity that defines the exact loss rate, with $\eta = 3/2$ corresponding to all the energy being radiated in a single scale-height crossing time; we adopt this as our fiducial value. The factors \begin{equation} \label{eq:phiQ} \phi_Q \equiv 1 + \frac{Q_{\rm g}}{Q_*} \end{equation} and \begin{equation} \label{eq:phith} \phi_{\rm nt} \equiv 1 - \frac{\sigma_{\rm th}^2}{\sigma_{\rm g}^2} \end{equation} are both close to unity for most galaxies. We have $\phi_Q = 2$ if $Q_{\rm g} \approx Q_*$, and we adopt this as a fiducial value. Values of $\phi_Q$ significantly greater than 2 are possible only if $Q_* < Q_{\rm g}$. Similarly, the quantity $\phi_{\mathrm{nt}}$ deviates significantly from unity only for gas velocity dispersions so small that they approach the thermal velocity dispersion, which is $\approx 5$ km s$^{-1}$ in H~\textsc{i}-dominated galaxies, and $\approx 0.2 - 0.5$ km s$^{-1}$ in H$_2$-dominated ones. For most purposes we will use $\phi_{\mathrm{nt}} = 1$ as a fiducial value, corresponding to $\sigma_{\rm g} \gg \sigma_{\rm th}$, but where necessary we will evaluate $\phi_{\mathrm{nt}}$ numerically.
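The second equality in \autoref{eq:ldiss}--\autoref{eq:loss_rate} is purely algebraic, following from the definitions of $Q$, $\phi_Q$, and $\phi_{\rm nt}$ together with the scale-height approximation above. The sketch below verifies it numerically for an arbitrary, made-up set of disc parameters (none of the input values are fiducial choices).

```python
# Sketch: numerically confirm that the direct form of the dissipation rate
# (eq. ldiss) equals the closed form (eq. loss_rate). All inputs are
# arbitrary test values, chosen only to exercise the algebra.
import math

G, eta = 4.301e-3, 1.5                        # [pc Msun^-1 (km/s)^2], loss factor
Sigma_g, sigma_g, sigma_th = 20.0, 10.0, 4.0  # gas column, total and thermal dispersion
Sigma_s, sigma_s = 40.0, 25.0                 # stellar column and dispersion
Omega, beta = 0.03, 0.2                       # [km/s/pc], rotation curve index

kappa = math.sqrt(2.0 * (1.0 + beta)) * Omega
Q_g = kappa * sigma_g / (math.pi * G * Sigma_g)
Q_s = kappa * sigma_s / (math.pi * G * Sigma_s)
f_gQ = Sigma_g / (Sigma_g + 2.0 * sigma_g**2 / (sigma_g**2 + sigma_s**2) * Sigma_s)
Q = f_gQ * Q_g
phi_Q = 1.0 + Q_g / Q_s
phi_nt = 1.0 - sigma_th**2 / sigma_g**2

# Direct form: kinetic energy per area over the scale-height crossing time.
H_g = sigma_g**2 / (math.pi * G * (Sigma_g + (sigma_g / sigma_s) * Sigma_s))
L_direct = eta * Sigma_g * (sigma_g**2 - sigma_th**2)**1.5 / H_g

# Closed form in terms of Q, phi_Q, phi_nt, and f_gQ.
L_closed = (2.0 * (1.0 + beta) / (math.pi * G * Q**2)
            * eta * phi_Q * phi_nt**1.5 * f_gQ**2 * Omega**2 * sigma_g**3)

print(L_direct, L_closed)   # the two forms agree to machine precision
```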
\subsubsection{Driving by Star Formation} \label{sssec:driving} Following a number of authors \citep{matzner02a, krumholz06d, krumholz17a, goldbaum11a, faucher-giguere13a}, we approximate that the rate at which star formation adds energy to the gas is determined by the asymptotic momentum of shells of gas driven by supernovae or other forms of stellar feedback. Specifically, if an energetic feedback event (such as a supernova) occurs, it will sweep up a bubble of interstellar gas that will, after all the thermal energy injected by the event has been radiated, contain asymptotic radial momentum $p$. We approximate that this event adds an amount of energy $\approx p \sigma_{\rm g}$ to the gas when the shell breaks up and merges with the turbulence. Thus if the star formation rate per unit area is $\dot{\Sigma}_*$, and the mean momentum injected per unit mass of stars formed is $\langle p_*/m_*\rangle$, the rate of energy gain per unit area from star formation is \begin{equation} \mathcal{G} = \left\langle\frac{p_*}{m_*}\right\rangle \sigma_{\rm g} \dot{\Sigma}_*. \end{equation} As discussed above, for single supernovae $\langle p_*/m_*\rangle \approx 3000$ km s$^{-1}$ \citep{cioffi88a, thornton98a, martizzi15a, kim15a, walch15b}. The momentum injected may be somewhat enhanced by clustering, though probably by at most a factor of $\sim 4$ when averaging over a realistic cluster mass function \citep{sharma14a, gentry17a, gentry18a, kim17a}. For simplicity we will ignore this effect and adopt the single supernova value $\langle p_*/m_*\rangle \approx 3000$ km s$^{-1}$ as our fiducial choice. It is convenient to express the rate of star formation as \begin{equation} \label{eq:sfr} \dot{\Sigma}_* = \epsilon_{\rm ff} f_{\rm sf} \frac{\Sigma_{\rm g}}{t_{\rm ff}}. 
\end{equation} Here $f_{\rm sf}$ is the fraction of the gas that is in a star-forming molecular phase rather than a warm atomic phase, and $t_{\rm ff}$ and $\epsilon_{\rm ff}$ are the free-fall time and star formation rate per free-fall time in this gas. As noted above, there is extensive observational evidence that $\epsilon_{\rm ff} \approx 0.01$ over a very wide range of star-forming environments \citep{krumholz07e, krumholz12a, garcia-burillo12a, evans14a, salim15a, usero15a, heyer16a, vutisalchavakul16a, leroy17a, onus18a}. We adopt $\epsilon_{\rm ff} = 0.015$, the best fit from \citet{krumholz12a}, as our fiducial choice. We pause here to note that, in contrast to the other studies cited, \citet{lee16a}, building on the work of \citet{murray11b}, report the existence of a population of clouds with very high star formation efficiencies, $\epsilon_{\rm ff} \approx 1$. If this result were correct, it would have profound implications for models such as the one we propose. However, it is hard to reconcile this observation with the results of the numerous other studies cited above, which have failed to detect the purported high efficiency cloud population. We argue that the likely explanation for this discrepancy is a methodological bias. \citet{lee16a} compute their efficiencies based on the ratio of ionising luminosity to instantaneous gas mass. The difficulty with this technique is that the ionising luminosity is a measure of stars formed $\sim 3-5$ Myr ago, rather than the instantaneous rate at which the gas that is currently present is forming stars. The high efficiency regions that \citet{lee16a} identify are those associated with the largest and most luminous H~\textsc{ii} regions in the Milky Way, all of which have substantially disrupted their environments. 
\citeauthor{lee16a}'s method assumes that it is possible to map these giant bubbles one-to-one onto still-extant molecular clouds, neglecting the possibility that their present masses are not reflective of the mass of gas that went into making the ionising stars. Such a discrepancy in mass could occur because the parent clouds have been disrupted into multiple pieces by stellar feedback, or because there have been substantial flows of mass in (ongoing accretion) or out (mass loss via feedback -- \citealt{feldmann11a}) of the star-forming region. In contrast, no studies that measure star formation rates using indicators other than ionising luminosity, or that target embedded sources for which the cloud identification is much less uncertain, find a population of high efficiency clouds. Indeed, even using ionising luminosity as a star formation tracer, but in external galaxies where there is no line of sight confusion and thus it is not necessary to try to assign individual H~\textsc{ii} regions to individual molecular clouds, \citet{leroy17a} find $\epsilon_{\rm ff} \lesssim 1\%$, with a much smaller dispersion than \citet{lee16a}. This finding strongly supports the hypothesis that \citeauthor{lee16a}'s cloud matching procedure is the source of the discrepancy between their results and the rest of the literature. For this reason, we use the value of $\epsilon_{\rm ff}$ found by all other techniques. There is some subtlety in choosing $f_{\rm sf}$ and $t_{\rm ff}$. Some authors have simply set $f_{\rm sf} \approx 1$ and evaluated $t_{\rm ff}$ using the midplane density, and this approach is reasonable for starburst galaxies where the entire ISM is a continuous, molecular, star-forming medium.
However, such an approach is clearly not reasonable for galaxies like the Milky Way, where the mean density at the midplane is $\approx 1$ cm$^{-3}$, but star formation occurs exclusively in molecular clouds that constitute only $f_{\rm sf}\approx 30\%$ of the mass but are a factor of $\gtrsim 100$ denser, giving $t_{\rm ff} \approx t_{\rm ff,mp}/10$. Indeed, such an assumption is even problematic for galaxies on the star-forming main sequence at $z\sim 2$, since for some of these galaxies the midplane density implied by \autoref{eq:rhomp} is $\lesssim 10$ cm$^{-3}$. This clearly cannot all be star-forming molecular material. In our model we follow the approach set out in \citet{forbes14a}, who base their model on the observations compiled by \citet{krumholz12a}. In this model, stars are assumed to form in a continuous medium with a free-fall time determined from $\rho_{\rm g,mp}$ as long as the resulting star formation timescale, \begin{equation} \label{eq:tsf_Toomre} t_{\rm sf,T} \equiv \frac{t_{\rm ff}}{\epsilon_{\rm ff}} = \frac{\pi Q}{4 f_{g,Q}\epsilon_{\rm ff}}\sqrt{\frac{3 f_{g,P} \phi_{\mathrm{mp}}}{2(1+\beta)}} \frac{1}{\Omega}, \end{equation} is shorter than $t_{\rm sf,max} \approx 2$ Gyr, the value that appears to apply in galaxies like the Milky Way, where the gas breaks up into individual molecular clouds whose densities are decoupled from the mean midplane density \citep{bigiel08a, leroy08a, leroy13a}. Following the terminology of \citet{krumholz12a}, we refer to the former case as the ``Toomre regime'' and the associated timescale $t_{\rm sf,T}$ defined by \autoref{eq:tsf_Toomre} as the Toomre star formation timescale, since when it applies the density in star-forming regions is set by Toomre stability of the entire disc. We refer to the latter case as the ``GMC regime'', since it applies when star-forming regions have densities determined by local considerations rather than global disc stability.
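To illustrate how the two regimes are selected, the sketch below evaluates \autoref{eq:tsf_Toomre} using the fiducial parameter values from the text ($Q = 1$, $f_{g,Q} = f_{g,P} = 0.5$, $\epsilon_{\rm ff} = 0.015$, $\phi_{\rm mp} = 1.4$, $\beta = 0$) together with an assumed Solar-circle rotation rate, $v_\phi = 220$ km s$^{-1}$ at $r = 8.2$ kpc; the rotation numbers are our own illustrative assumption.

```python
# Sketch: which star formation regime applies at the Solar circle?
# Fiducial parameters follow the text; Omega for the Solar circle
# (v_phi = 220 km/s at r = 8.2 kpc) is an assumed illustrative value.
import math

Q, f_gQ, f_gP = 1.0, 0.5, 0.5
eps_ff, phi_mp, beta = 0.015, 1.4, 0.0
t_sf_max = 2000.0                     # Myr

inv_Omega = (8200.0 / 220.0) * 0.978  # 1/Omega in Myr (1 pc/(km/s) = 0.978 Myr)
t_sf_T = (math.pi * Q / (4.0 * f_gQ * eps_ff)
          * math.sqrt(3.0 * f_gP * phi_mp / (2.0 * (1.0 + beta)))
          * inv_Omega)

print(t_sf_T)   # ~4 Gyr, longer than t_sf_max, i.e. the GMC regime
```

Under these assumptions $t_{\rm sf,T} \approx 4$ Gyr $> t_{\rm sf,max}$, consistent with the statement above that Milky Way-like discs sit in the GMC regime, with gas broken into clouds whose densities decouple from the midplane mean.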
Thus, we take the star formation rate to be \begin{equation} \label{eq:sfr1} \dot{\Sigma}_* = f_{\rm sf} \Sigma_{\rm g} \max\left(t_{\rm sf,T}^{-1}, t_{\rm sf,max}^{-1}\right), \end{equation} where the first case is the Toomre regime and the second is the GMC regime. In terms of the galactic orbital period, the condition for being in the Toomre regime is \begin{eqnarray} \label{eq:ttoomre} t_{\rm orb} < t_{\rm orb,T} & \equiv & \frac{8\epsilon_{\rm ff} f_{g,Q}}{Q} \sqrt{\frac{2(1+\beta)}{3 f_{g,P} \phi_{\mathrm{mp}}}} t_{\rm sf,max} \\ & = & 35 f_{g,Q,0.5} f_{g,P,0.5}^{-1/2}\mbox{ Myr}, \end{eqnarray} where $t_{\rm orb} = 2\pi/\Omega$ is the galactic orbital period, $f_{g,Q,0.5} \equiv f_{g,Q}/0.5$ and similarly for $f_{g,P,0.5}$, and the numerical evaluation uses the fiducial values given in \autoref{tab:quantities}. The value of $f_{\rm sf}$ can be computed from theoretical models \citep[e.g.,][]{krumholz09a, krumholz09b, mckee10a, krumholz13c}. For galaxies in the Toomre regime, one usually has $f_{\rm sf}\approx 1$, but this is not true for galaxies in the GMC regime. For now we choose to leave $f_{\rm sf}$ as a free parameter. Finally, using \autoref{eq:Q}, we have \begin{eqnarray} \mathcal{G} & = & f_{\rm sf} \frac{\sqrt{2(1+\beta)}}{\pi G Q} f_{g,Q} \left\langle\frac{p_*}{m_*}\right\rangle \Omega \sigma_{\rm g}^2 \nonumber \\ & & \qquad {} \times \max\left( \sqrt{\frac{32(1+\beta)}{3\pi^2 f_{g,P}}}\frac{f_{g,Q}}{Q \phi_{\mathrm{mp}}}\epsilon_{\rm ff} \Omega, \frac{1}{t_{\rm sf,max}}\right). \end{eqnarray} Note that we are implicitly neglecting other possible energy injection mechanisms, such as magnetorotational or thermal instability. \subsection{Radial Transport} \subsubsection{The Transport Rate Equation} In a standard ``top down" derivation of the star formation law, the next step would be to equate the rates of loss from turbulent dissipation $\mathcal{L}$ and gain from star formation feedback $\mathcal{G}$. 
Since these have different scalings -- $\mathcal{L} \propto \Omega^2 \sigma_{\rm g}^3$ and $\mathcal{G} \propto \epsilon_{\rm ff} \Omega^2 \sigma_{\rm g}^2$ (in the Toomre regime) or $\mathcal{G} \propto t_{\rm sf,max}^{-1} \Omega \sigma_{\rm g}^2$ (in the GMC regime) -- such equality can hold everywhere within the disc only if $\sigma_{\rm g}$ takes on a particular, fixed value (and hence $Q$ is non-constant), or if $\epsilon_{\rm ff}$ is non-constant. For example, \citet{ostriker11a} make the former choice, while \citet{faucher-giguere13a} and \citet{hayward17a} make the latter. Neither option provides a particularly good match to observations, for the reasons discussed in \autoref{sec:intro}. Our model is based on the realisation that there is an alternative source of energy, radial transport. Such transport injects energy at scales comparable to the gas scale height, which then cascades down to become turbulent on smaller scales. \citet{krumholz10c} show that the time evolution of the gas velocity dispersion obeys \begin{eqnarray} \frac{\partial\sigma_{\rm g}}{\partial t} & = & \frac{\mathcal{G}-\mathcal{L}}{3\sigma_{\rm g}\Sigma_{\rm g}} + \frac{\sigma_{\rm g}}{6\pi r\Sigma_{\rm g}} \frac{\partial \dot{M}}{\partial r} + \frac{5(\partial\sigma_{\rm g}/\partial r)}{6\pi r\Sigma_{\rm g}}\dot{M} \nonumber \\ & & \qquad {} - \frac{1-\beta}{6\pi r^2\Sigma_{\rm g}\sigma_{\rm g}}\Omega \mathcal{T}, \label{eq:encons} \end{eqnarray} where $\mathcal{T}$ is the torque exerted by non-axisymmetric stresses, and \begin{equation} \label{eq:mdot} \dot{M}= -\frac{1}{v_\phi(1+\beta)}\frac{\partial \mathcal{T}}{\partial r} \end{equation} is the rate of inward mass accretion through the disc. Note that $\dot{M}$ here and throughout refers explicitly to mass accretion \textit{through} the disc rather than \textit{onto} the disc from outside, unless explicitly stated otherwise. There is a clear physical interpretation of \autoref{eq:encons}.
The first term on the right hand side is the net effect of star formation driving ($\mathcal{G}$) and dissipation of turbulence ($\mathcal{L}$), the second and third terms represent advection of kinetic energy as gas moves through the disc, and the final term represents transfer of energy from the galactic gravitational potential to the gas. We pause here to comment on the physical assumptions that lie behind \autoref{eq:encons}. This equation is simply the time- and azimuthally-averaged version of the equation of energy conservation for a thin disc with a time-steady rotation curve, and it holds regardless of the nature of the torque $\mathcal{T}$. Thus it can apply equally well to gas transport driven by transient or steady spiral waves (as in a modern galaxy) or transport coming from the mutual torquing of giant clumps (as in a high-$z$ galaxy). However, \autoref{eq:encons} does not include another energy source that is at least in principle possible: transfer of energy from stars to gas without any transport of the gas itself, for example due to stellar spiral arms or bars directly driving turbulent gas motions. That is, it is possible to ``pay'' for an increase in gas kinetic energy by having the stars decrease their energy by either flowing down the potential well or decreasing their velocity dispersion, and such transfer could take place even if gas does not flow down the potential well, or even flows up it. Such direct star-to-gas transfer is probably important in some regions, particularly those with little gas and strong bars, as discussed in \autoref{ssec:limitations}. However, numerous numerical simulations of both local \citep{agertz09b, goldbaum15a, goldbaum16a} and high-$z$ \citep{bournaud07a, bournaud09a, ceverino10a} galaxies offer strong evidence that direct star-to-gas energy transfer cannot be a dominant source of gas kinetic energy.
These simulations show that gas does flow inward at roughly the rate predicted by our model, even when a live stellar disc and its spiral waves are included in the simulations, and, conversely, that turbulence and inflow occur even in simulations that do not include a massive stellar disc. Neither of these findings is consistent with the hypothesis that stellar driving rather than gas transport dominates the energy budget in most galaxies. If we search for solutions where the gas is in energy equilibrium, $\partial\sigma_{\rm g}/\partial t = 0$, then \autoref{eq:encons} implies that \begin{equation} \label{eq:eneq} \frac{\sigma_{\rm g}^2}{2\pi r} \frac{\partial \dot{M}}{\partial r} + \frac{5 \sigma_{\rm g} \dot{M}}{2\pi r} \frac{\partial \sigma_{\rm g}}{\partial r} - \frac{1-\beta}{2\pi r^2} \Omega \mathcal{T} = \mathcal{L} - \mathcal{G}. \end{equation} This is a second-order ordinary differential equation in $\mathcal{T}$ (since $d\dot{M}/dr$ involves the second derivative of $\mathcal{T}$), with $\mathcal{L} - \mathcal{G}$ as a forcing term. Physically valid solutions to this equation are subject to the constraint $\mathcal{T} \rightarrow 0$ as $r\rightarrow 0$, so that no torques are exerted (and thus no energy is added) at $r=0$. \subsubsection{The Critical Velocity Dispersion} Following \citet{forbes14a}, we note that the solutions to this equation are only consistent with thermodynamic constraints when $\mathcal{L} \geq \mathcal{G}$, i.e., when the dissipation of turbulence is stronger than driving, so the forcing term is positive. If this inequality holds, then gravitationally-driven turbulence transports mass inward and converts gravitational potential energy into turbulent motion at the rate required to maintain the gas in a state of marginal stability.
In the opposite case, however, gravitational instability would be required to convert energy from random motions into a net outward transport of mass, which is unphysical on thermodynamic grounds -- the turbulence is assumed to be randomly oriented, so there is no plausible physical mechanism by which it could self-organise to generate a net outward mass transport. If $\mathcal{L} = \mathcal{G}$ exactly, then driving by star formation is by itself sufficient to offset the decay of turbulence, and there is no gravitational instability or radial transport. The condition that $\mathcal{L} = \mathcal{G}$ for a marginally stable disc with $Q = Q_{\rm min}$ is satisfied if the gas velocity dispersion (total thermal plus non-thermal) is \begin{eqnarray} \label{eq:sigma_sf} \sigma_{\rm g} = \sigma_{\rm sf} & \equiv & \frac{4f_{\rm sf} \epsilon_{\rm ff}}{\sqrt{3 f_{g,P}} \pi \eta \phi_{\mathrm{mp}} \phi_Q \phi_{\mathrm{nt}}^{3/2}} \left\langle\frac{p_*}{m_*}\right\rangle \nonumber \\ & & \;{} \times \max\left[1, \sqrt{\frac{3 f_{g,P}}{8(1+\beta)}} \frac{Q_{\rm min} \phi_{\mathrm{mp}}}{4 f_{g,Q} \epsilon_{\rm ff}}\frac{t_{\rm orb}}{t_{\rm sf,max}}\right]. \end{eqnarray} With this definition, we can rewrite the equation for energy equilibrium, \autoref{eq:eneq}, as \begin{equation} \label{eq:eneq1} \frac{\sigma_{\rm g}^2}{2\pi r} \frac{\partial \dot{M}}{\partial r} + \frac{5 \sigma_{\rm g} \dot{M}}{2\pi r} \frac{\partial \sigma_{\rm g}}{\partial r} - \frac{1-\beta}{2\pi r^2} \Omega \mathcal{T} = \mathcal{L} \left(1-\frac{\sigma_{\rm sf}}{\sigma_{\rm g}}\right). \end{equation} With the energy equation written in this way, the physical meaning of $\sigma_{\rm sf}$ becomes clear. It is the velocity dispersion that star formation alone is capable of maintaining, without any additional energy input from mass transport.
As the velocity dispersion of the ISM approaches this limit, the net rate of turbulent dissipation diminishes, and the amount of gravitational transport required to maintain marginal stability does as well. The fraction of the energy supplied by star formation is simply $\sigma_{\rm sf}/\sigma_{\rm g}$, while the fraction supplied by gravity is $1-\sigma_{\rm sf}/\sigma_{\rm g}$. Once the galaxy reaches $\sigma_{\rm g} = \sigma_{\rm sf}$ exactly, the mass inflow rate drops to 0, and the galaxy is no longer constrained to have $Q = Q_{\rm min}$; it can instead take on any value of $Q \geq Q_{\rm min}$. We can also express the condition that $\mathcal{L} = \mathcal{G}$, and thus gravitational power shut off, in terms of the surface density. Combining \autoref{eq:Qdef} and \autoref{eq:sigma_sf}, the critical surface density at which this occurs is \begin{eqnarray} \label{eq:Sigma_sf} \Sigma_{\rm sf} & = & \frac{8\sqrt{2(1+\beta)} f_{\rm sf} \epsilon_{\rm ff}}{\sqrt{3} \pi G Q \eta \phi_{\rm mp} \phi_Q \phi_{\rm nt}^{3/2}} \left\langle\frac{p_*}{m_*}\right\rangle \frac{f_{g,Q}}{f_{g,P}^{1/2}} \frac{1}{t_{\rm orb}} \nonumber \\ & & \quad \cdot {} \max\left[1, \sqrt{\frac{3 f_{g,P}}{2(1+\beta)}} \frac{Q \phi_{\rm mp}}{8 f_{g,Q} \epsilon_{\rm ff}} \frac{t_{\rm orb}}{t_{\rm sf,max}}\right]. \end{eqnarray} Transport shuts off wherever $\Sigma_{\rm g}$ falls below $\Sigma_{\rm sf}$. Note that higher values of $Q$ imply lower values of $\Sigma_{\rm sf}$, i.e., the more gravitationally stable the disc, the lower the total surface density that can be maintained by star formation alone. The maximum surface density that can be sustained by star formation alone in a marginally stable disc is given by $\Sigma_{\rm sf}$ evaluated with $Q = Q_{\rm min}$. Numerical evaluation of \autoref{eq:sigma_sf} and \autoref{eq:Sigma_sf} requires some care due to the $\phi_{\rm nt}$ term in the denominator. 
Our fiducial choice for this term is $\phi_{\rm nt} = 1$, appropriate for highly-supersonic gas ($\sigma_{\rm sf} \gg \sigma_{\rm th}$). In most cases this choice is not problematic. However, one regime of interest for our theory is H~\textsc{i}-dominated regions like the outer Milky Way or the majority of $z=0$ dwarfs, which have $f_{\rm sf} \ll 1$. Examination of \autoref{eq:sigma_sf} would seem to suggest that sufficiently small values of $f_{\rm sf}$ will produce correspondingly small values of $\sigma_{\rm sf}$, in which case the approximation that $\sigma_{\rm sf} \gg \sigma_{\rm th}$, and thus $\phi_{\rm nt} \approx 1$, is no longer valid; indeed, for $\sigma_{\rm sf} \rightarrow \sigma_{\rm th}$ we have $\phi_{\rm nt} \rightarrow 0$. Thus we cannot simply assume $\phi_{\rm nt} = 1$ when evaluating \autoref{eq:sigma_sf} for H~\textsc{i}-dominated regions; a more sophisticated approach is required. If one substitutes the full definition $\phi_{\rm nt} = 1 - (\sigma_{\rm th} / \sigma_{\rm g})^2$ into \autoref{eq:sigma_sf}, the resulting equation is a cubic in $\sigma_{\rm g}^2$. While we can solve this exactly, the solution is extremely cumbersome and unenlightening. It is more useful to obtain the solution in the two limiting cases $\sigma_{\rm g} \gg \sigma_{\rm th}$ and $\sigma_{\rm g} \rightarrow \sigma_{\rm th}$; numerical solution of the full cubic shows that $\sigma_{\rm sf}$ transitions smoothly between the two limits. 
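The smooth transition between the two limits is easy to verify numerically. Substituting $\phi_{\rm nt} = 1 - (\sigma_{\rm th}/\sigma_{\rm g})^2$ into \autoref{eq:sigma_sf} and collecting all the $\sigma_{\rm g}$-independent factors into a single constant $A$, the condition becomes $(\sigma_{\rm g}^2 - \sigma_{\rm th}^2)^{3/2}/\sigma_{\rm g}^2 = A$, whose left-hand side rises monotonically from zero for $\sigma_{\rm g} > \sigma_{\rm th}$. A minimal bisection sketch (Python; the function name and the sample values of $A$ are ours):

```python
import math

def sigma_sf_full(A, sigma_th, tol=1e-10):
    """Solve (s^2 - sigma_th^2)^(3/2) / s^2 = A for s > sigma_th by bisection.

    The left-hand side rises monotonically from 0 at s = sigma_th, so the
    root exists and is unique.
    """
    lo = sigma_th * (1 + 1e-12)
    hi = sigma_th + A + 10.0  # generous upper bound
    f = lambda s: (s * s - sigma_th**2)**1.5 / (s * s) - A
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

sigma_th = 5.0  # km/s, warm neutral medium
# Supersonic limit: the solution approaches A itself
s_super = sigma_sf_full(50.0, sigma_th)
print(s_super)  # close to 50 km/s
# Transonic limit: the Mach number approaches (A / sigma_th)^(1/3)
s_trans = sigma_sf_full(0.01, sigma_th)
mach = math.sqrt((s_trans / sigma_th)**2 - 1)
print(mach, (0.01 / sigma_th)**(1 / 3))  # nearly equal
```

The two printed comparisons recover the limiting behaviours quoted in the text.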
The solution for $\sigma_{\rm g} \gg \sigma_{\rm th}$ is simply what we would have obtained by naively plugging in $\phi_{\rm nt} = 1$, which is \begin{eqnarray} \sigma_{\rm sf} & = & 11\mbox{ km s}^{-1} f_{\rm sf} f_{g,P,0.5}^{-1/2} \nonumber \\ & & \quad {} \cdot \max\left(1, 1.0 f_{g,P,0.5}^{1/2} f_{g,Q,0.5}^{-1} t_{\rm orb,100}\right), \label{eq:sigma_sf_num1} \\ \Sigma_{\rm sf} & = & 36 \, M_\odot\mbox{ pc}^{-2} f_{\rm sf} f_{g,Q,0.5} f_{g,P,0.5}^{-1/2} t_{\rm orb,100}^{-1} \nonumber \\ & & \quad {} \cdot \max\left(1, 1.0 f_{g,P,0.5}^{1/2} f_{g,Q,0.5}^{-1} t_{\rm orb,100}\right), \end{eqnarray} where $t_{\rm orb,100} = t_{\rm orb}/100$ Myr, for $\Sigma_{\rm sf}$ we have used $Q=Q_{\rm min}$, and for all quantities we have used the fiducial parameter choices as given in \autoref{tab:quantities}. We treat the $\sigma_{\rm sf} \approx \sigma_{\rm th}$ limit by defining the Mach number $\mathcal{M}_{\rm sf}$ corresponding to $\sigma_{\rm sf}$ by \begin{equation} \sigma_{\rm sf} = \sigma_{\rm th}\sqrt{1 +\mathcal{M}_{\rm sf}^2} \end{equation} (so that $\sigma_{\rm th} = \sigma_{\rm sf}$ corresponds to $\mathcal{M}_{\rm sf} = 0$), and solve \autoref{eq:sigma_sf} to first order in $\mathcal{M}_{\rm sf}$. This gives \begin{eqnarray} \mathcal{M}_{\rm sf} & = & \left\{\frac{4f_{\rm sf} \epsilon_{\rm ff}}{\sqrt{3 f_{g,P}} \pi \eta \phi_{\mathrm{mp}} \phi_Q} \left\langle\frac{p_*}{m_*}\right\rangle\frac{1}{\sigma_{\rm th}} \right. \nonumber \\ & & \quad \left. 
{} \cdot \max\left[1, \sqrt{\frac{3 f_{g,P}}{8(1+\beta)}} \frac{Q_{\rm min} \phi_{\mathrm{mp}}}{4 f_{g,Q} \epsilon_{\rm ff}}\frac{t_{\rm orb}}{t_{\rm sf,max}}\right]\right\}^{1/3} \\ & = & 0.60 f_{\rm sf,0.1}^{1/3} f_{g,P,0.5}^{1/6} \sigma_{\rm th,5}^{-1/3} \nonumber \\ & & \quad {} \cdot \max\left(1, 1.0 f_{g,P,0.5}^{1/6} f_{g,Q,0.5}^{-1/3} t_{\rm orb,100}^{1/3}\right), \label{eq:sigma_sf_num2} \\ \Sigma_{\rm sf} & = & \frac{\sqrt{8(1+\beta)} f_{g,Q} \sigma_{\rm th}}{G Q_{\rm min} t_{\rm orb}} \\ & = & 16\,M_\odot\mbox{ pc}^{-2} f_{g,Q,0.5} \; \sigma_{\rm th,5} t_{\rm orb,100}^{-1} \end{eqnarray} where $\sigma_{\rm th,5} = \sigma_{\rm th}/5$ km s$^{-1}$. Thus we find that, for the relatively modest star-forming fractions typical of H~\textsc{i}-dominated regions, the maximum Mach number that can be sustained by star-formation is of order $0.5$. Since $\sigma_{\rm th} \approx 5$ km s$^{-1}$ in the warm neutral medium, this in turn implies overall velocity dispersions of $\approx 6-8$ km s$^{-1}$. Thus we find that, regardless of the value of $f_{\rm sf}$ or various other parameters, our model predicts that the maximum velocity dispersion that can be sustained by star formation alone is $\sigma_{\rm sf} \approx 6-10$ km s$^{-1}$. A corollary of this statement is that, if we observe a galaxy's velocity dispersion to be close to $\sigma_{\rm sf}$, we can conclude that the turbulence within it is primarily powered by star formation, whereas if we observe the velocity dispersion to be $\gg \sigma_{\rm sf}$, we can conclude that the turbulence is primarily powered by gravity. We also note that our finding that star formation at a rate consistent with the observed Kennicutt-Schmidt relation is capable of powering a velocity dispersion of $\approx 10$ km s$^{-1}$ and no more is not new; several numerical simulations of supernova-driven turbulence have reached the same conclusion from their numerical experiments \citep[e.g.,][]{joung09a, kim11a, kim15b}. 
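The coefficient in \autoref{eq:sigma_sf_num2} follows directly from the first branch of the $\max[\cdot]$ bracket. A sketch (Python), again using our assumed fiducials ($\eta = 1.5$, $\phi_{\rm mp} = 1.4$, $\phi_Q = 2$, $\langle p_*/m_*\rangle = 3000$ km s$^{-1}$; the adopted values are in \autoref{tab:quantities}):

```python
import math

# Assumed fiducial values (illustrative)
f_sf, eps_ff, f_gP = 0.1, 0.015, 0.5
eta, phi_mp, phi_Q = 1.5, 1.4, 2.0
p_star = 3000.0   # <p_*/m_*> in km/s
sigma_th = 5.0    # km/s, warm neutral medium

# First branch of the max[...] bracket (eq. sigma_sf_num2)
M_sf = (4 * f_sf * eps_ff * p_star
        / (math.sqrt(3 * f_gP) * math.pi * eta * phi_mp * phi_Q * sigma_th))**(1 / 3)
print(f"M_sf = {M_sf:.2f}")  # ~0.6, matching the 0.60 coefficient in the text
sigma_g = sigma_th * math.sqrt(1 + M_sf**2)
print(f"sigma_g = {sigma_g:.1f} km/s")  # just under 6 km/s
```

For $f_{\rm sf} = 0.1$ this gives a total dispersion of $\approx 6$ km s$^{-1}$, consistent with the range quoted above for H~\textsc{i}-dominated regions.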
\subsubsection{The Steady-State Mass Inflow Rate} \label{sssec:mdot_ss} With $\sigma_{\rm sf}$ defined, we are now in a position to calculate the mass inflow rate for galaxies with $\sigma_{\rm g} > \sigma_{\rm sf}$ and $Q = Q_{\rm min}$. \citet{krumholz10c} obtained a transport equation analogous to \autoref{eq:eneq1} in the limit $\sigma_{\rm g} \gg \sigma_{\rm sf}$, and for constant $\beta$ (i.e., fixed rotation curve index) showed that it admits an analytic steady state solution with $\sigma_{\rm g}$ and $\dot{M}$ independent of radius. Numerical solution of the full time-dependent system (\autoref{eq:encons}) shows that galaxies tend to approach this steady state \citep{forbes12a, forbes14a}, so motivated by this result we look for similar solutions ($\beta$, $\sigma$, $\dot{M}$ all independent of $r$) for the more general case given by \autoref{eq:eneq1}.\footnote{An important subtlety: in writing \autoref{eq:sigma_sf_num2} we evaluated $\phi_{\rm nt}$ using $\sigma_{\rm g} = \sigma_{\rm sf}$. This is the correct approach to finding the value of $\sigma_{\rm sf}$ that can be sustained by star formation alone. 
However, in \autoref{eq:eneq1}, $\sigma_{\rm sf}$ must be evaluated using the actual value of $\sigma_{\rm g}$, which may be larger.} A solution of this form must have $\mathcal{T} = -\dot{M} v_\phi r$, and inserting this into \autoref{eq:eneq1} we immediately obtain that the mass inflow rate must be \begin{eqnarray} \label{eq:mdot_steady} \dot{M} = \dot{M}_{\rm ss} & \equiv & \frac{4 (1+\beta) \eta \phi_Q \phi_{\mathrm{nt}}^{3/2}}{(1-\beta) G Q_{\rm min}^2} f_{g,Q}^2 \sigma_{\rm g}^3 \left(1 - \frac{\sigma_{\rm sf}}{\sigma_{\rm g}}\right) \\ & = & 0.71 f_{g,Q,0.5}^2 \sigma_{\rm g,10}^3\;M_\odot\mbox{ yr}^{-1} \nonumber \\ & & {} \cdot \left(1 - \frac{\sigma_{\rm th}^2}{\sigma_{\rm g}^2}\right)^{3/2} \left(1 - \frac{\sigma_{\rm sf}}{\sigma_{\rm g}}\right), \end{eqnarray} where $\sigma_{\rm g,10} = \sigma_{\rm g}/10$ km s$^{-1}$; the numerical evaluation uses the fiducial values in \autoref{tab:quantities}, except that we have retained the explicit dependence on $\phi_{\rm nt}$ because it is important in H~\textsc{i}-dominated regions, as explained above. The quantity $\dot{M}_{\rm ss}$ is the steady-state mass inflow rate that is required to keep a galactic disc in energy equilibrium. Thus we expect galactic discs with $\sigma_{\rm g} \approx 10$ km s$^{-1}$, and thus slightly above $\sigma_{\rm sf}$ and $\sigma_{\rm th}$, to have mass inflow rates of order $1$ $M_\odot$ yr$^{-1}$. As $\sigma_{\rm g}$ decreases and approaches both $\sigma_{\rm sf}$ and $\sigma_{\rm th}$, the inflow rate rapidly falls to zero, while as it increases the inflow rate rises as $\dot{M}_{\rm ss} \propto \sigma_{\rm g}^3$. \begin{figure*} \includegraphics[width=0.8\textwidth]{example_sol} \caption{ \label{fig:example_sol} Example solutions for our fiducial model, using the parameters chosen for local dwarfs, local spirals, ULIRGs, and high-redshift star-forming galaxies given in \autoref{tab:example_sol}. Note that different columns have different $x$ axis ranges.
Rows show, from top to bottom, gas surface density $\Sigma_{\rm g}$, star formation surface density $\dot{\Sigma}_*$, ratio of gas velocity dispersion $\sigma_{\rm g}$ to dispersion provided by star formation $\sigma_{\rm sf}$, and mass inflow rate $\dot{M}$. } \end{figure*} \begin{table} \begin{tabular}{c@{$\quad$}cccc} \hline Parameter & Local dwarf & Local spiral & ULIRG & High-$z$ \\ \hline $\sigma_{\rm g}$ [km s$^{-1}$] & 6 & 10 & 60 & 40 \\ $r_{\rm out}$ [kpc] & 5 & 10 & 1 & 5 \\ $v_{\phi}$ at $r_{\rm out}$ [km s$^{-1}$] & 60 & 200 & 250 & 200 \\ $\beta$ & 0.5 & 0 & 0.5 & 0 \\ $Z'$ & 0.2 & 1 & 1 & 1 \\ \hline \end{tabular} \caption{ \label{tab:example_sol} Parameters for example solutions. Note that $r_{\rm out}$ is the outermost radius at which we compute the solution, and $Z'$ is the metallicity normalised to Solar used in the KMT+ model for $f_{\rm sf}$ (see main text). } \end{table} We show some example equilibrium solutions in \autoref{fig:example_sol}; the examples are representative of the range of galaxies to which we can apply our model, including a local dwarf, a local spiral similar to the Milky Way, a local ULIRG, and a high redshift star-forming disc. The exact parameters for each model are given in \autoref{tab:example_sol}. All models use $f_{g,Q} = f_{g,P} = 0.5$, and an inner radius of $0.1$ kpc. We use a value of $f_{\rm sf}$ computed using the KMT+ model of \citet{krumholz13c} with a clumping factor $f_c = 1$, since the gas surface densities here are the true ones rather than a beam-diluted average. To apply this theory we require a value for the midplane stellar plus dark matter density. If the rotation curve index is independent of radius, and is dominated by stars and dark matter, then the minimum density at the midplane required to produce the rotation curve is \begin{equation} \label{eq:rho_min} \rho_{\mathrm{min}} = \frac{v_\phi^2}{4\pi G r^2} \left(2\beta+1\right). 
\end{equation} The true value is likely to be somewhat higher, since $\rho_{\rm min}$ applies for a spherical mass distribution, which we would expect if dark matter alone were dominating the rotation curve; we therefore adopt a stellar density $\rho_{\rm *} = 2 \rho_{\rm min}$. We compute the thermal velocity dispersion $\sigma_{\rm th}$ as $\sigma_{\rm th} = f_{\rm sf} \sigma_{\rm th,mol} + (1-f_{\rm sf})\sigma_{\rm th,WNM}$, where $\sigma_{\rm th,mol} = 0.2$ km s$^{-1}$ (appropriate for molecular gas) and $\sigma_{\rm th,WNM} = 5.4$ km s$^{-1}$ (appropriate for warm neutral gas). The results illustrate the qualitative behaviour of the model: local spirals and dwarfs with modest velocity dispersions and modest star formation rates have $\sigma_{\rm g}/\sigma_{\rm sf} \approx 1$, and as a result also have low mass inflow rates, $\sim 10^{-2}$ $M_\odot$ yr$^{-1}$ for the dwarf and $\sim 1$ $M_\odot$ yr$^{-1}$ for the spiral. In contrast, rapidly star-forming ULIRGs and high-redshift galaxies have high $\sigma_{\rm g}/\sigma_{\rm sf}$ and high inflow rates. The turbulence in these galaxies is driven almost entirely by inflow. \subsection{Equilibria without Transport or without Feedback} It is worth considering the alternatives to our model that result from omitting either feedback or transport, in order to demonstrate why both are important. First consider omitting feedback, as in \citet{krumholz10c}. This amounts to setting $\langle p_*/m_*\rangle = 0$, and thus all the relations we have derived continue to apply, but with $\sigma_{\rm sf} = 0$ and $\Sigma_{\rm sf} = 0$. The other alternative is models without transport, which require that $\mathcal{G} = \mathcal{L}$. As noted above, this requirement can be satisfied in two ways. One is that we can keep the star formation law (\autoref{eq:sfr}) fixed.
In the GMC regime we have $\mathcal{G} \propto \Sigma_{\rm g} \sigma_{\rm g}$ while $\mathcal{L} \propto \Sigma_{\rm g}^2 \sigma_{\rm g}$, and thus $\mathcal{G} = \mathcal{L}$ is possible only for a single value of $\Sigma_{\rm g}$; since real galaxies clearly do not all have a single surface density, we discount this solution and instead focus on the Toomre regime. In the Toomre regime we have $\mathcal{G} = \mathcal{L}$ whenever $\sigma_{\rm g} = \sigma_{\rm sf}$ (\autoref{eq:sigma_sf}). This implies that \begin{eqnarray} Q & = & f_{g,Q} \frac{\kappa \sigma_{\rm sf}}{\pi G \Sigma_{\rm g}} \nonumber \\ \label{eq:Q_notransport} & = & \frac{8\sqrt{2(1+\beta)}}{\sqrt{3}\pi \eta \phi_{\mathrm{mp}} \phi_Q \phi_{\mathrm{nt}}^{3/2}} f_{\rm sf} \epsilon_{\rm ff} \left\langle\frac{p_*}{m_*}\right\rangle \frac{f_{g,Q}}{f_{g,P}^{1/2} G \Sigma_{\rm g} t_{\rm orb}} \\ & = & 3.6 f_{g,Q,0.5} f_{g,P,0.5}^{-1/2} t_{\rm orb,100}^{-1} \Sigma_{\rm g,10}^{-1}, \end{eqnarray} where $\Sigma_{\rm g,10} = \Sigma_{\rm g}/10$ $M_\odot$ pc$^{-2}$. Thus if we do not include transport and keep the star formation law fixed, the model still predicts that $Q\approx 1$ for Solar Circle conditions ($\Sigma_{\rm g,10} \approx 1$, $t_{\rm orb,100} \approx 2$). However, for conditions like those found in ULIRGs ($\Sigma_{\rm g,10}\sim 100$, $t_{\rm orb,100} \sim 0.3$) or high-$z$ star-forming discs ($\Sigma_{\rm g,10} \sim 10$, $t_{\rm orb,100} \sim 1$), the predicted value of $Q$ is much smaller than unity. Conversely, we can hold $Q$ fixed and treat the quantity $f_{\rm sf} \epsilon_{\rm ff}$ as a free parameter, and use the relation $\mathcal{G} = \mathcal{L}$ to solve for it.
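As a consistency check, the normalisation of \autoref{eq:Q_notransport} above can be evaluated directly. The sketch below (Python) uses our assumed fiducials, with $G$ expressed in pc $M_\odot^{-1}$ km$^2$ s$^{-2}$ so that $G\Sigma_{\rm g} t_{\rm orb}$ converts to km s$^{-1}$ via $1$ km s$^{-1} \approx 1.0227$ pc Myr$^{-1}$:

```python
import math

# Assumed fiducial values (illustrative)
f_sf, eps_ff = 1.0, 0.015
f_gQ, f_gP = 0.5, 0.5
eta, phi_mp, phi_Q, phi_nt = 1.5, 1.4, 2.0, 1.0
p_star = 3000.0          # <p_*/m_*> [km/s]
beta = 0.0               # flat rotation curve
G = 4.301e-3             # gravitational constant [pc Msun^-1 (km/s)^2]
KMS_PER_PC_MYR = 1.0227  # 1 pc/Myr expressed in km/s

Sigma_g = 10.0           # Msun/pc^2
t_orb = 100.0            # Myr

# eq. Q_notransport; G * Sigma_g * t_orb carries units (km/s)^2 Myr / pc,
# so one factor of KMS_PER_PC_MYR reduces it to km/s
Q = (8 * math.sqrt(2 * (1 + beta)) * f_sf * eps_ff * p_star * f_gQ
     / (math.sqrt(3) * math.pi * eta * phi_mp * phi_Q * phi_nt**1.5
        * math.sqrt(f_gP) * G * Sigma_g * t_orb * KMS_PER_PC_MYR))
print(f"Q = {Q:.1f}")  # ~3.6, matching the numerical form above
```

This recovers the coefficient of $3.6$ quoted in the numerical form of the equation.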
In this case only the Toomre regime exists, and it is characterised by a star formation efficiency per free-fall time \begin{eqnarray} \label{eq:epsff_fg} \epsilon_{\rm ff} & = & \frac{\sqrt{3} \pi \eta \phi_{\mathrm{mp}} \phi_Q \phi_{\mathrm{nt}}^{3/2}}{4 f_{\rm sf}} \left\langle\frac{p_*}{m_*}\right\rangle^{-1} f_{g,P}^{-1/2} \sigma_{\rm g} \\ & = & 0.027 f_{\rm sf}^{-1} f_{g,P,0.5}^{-1/2} \sigma_{\rm g,10} \end{eqnarray} Thus $\epsilon_{\rm ff}$ is $\sim 1\%$ for $\sigma_{\rm g} \approx 10$ km s$^{-1}$, but rises to $\gtrsim 10\%$ for the higher velocity dispersions typically seen in ULIRGs or high-redshift star-forming discs. Note that \autoref{eq:epsff_fg} is identical, up to factors of order unity, to equation 37 of \citet{faucher-giguere13a}. \section{Comparison to Observations} \label{sec:observations} We can use our steady state model to calculate a wide range of observables, and in this section we compare the model predictions to observations. We also compare contrasting models without transport and without feedback, in order to highlight how including both mechanisms alters the results. Specifically, throughout this section we will consider four different models, to which we refer as follows: \textbf{Transport+feedback.} This is our fiducial model. It has $\epsilon_{\rm ff} = 0.015$ and two branches: $Q = Q_{\rm min}$ with $\sigma_{\rm g} > \sigma_{\rm sf}$ (or equivalently $\Sigma_{\rm g} > \Sigma_{\rm sf}$), and $Q \geq Q_{\rm min}$ with $\sigma_{\rm g} = \sigma_{\rm sf}$ (or $\Sigma_{\rm g} \leq \Sigma_{\rm sf}$). \textbf{No-feedback.} This is identical to the transport+feedback model, except that $\sigma_{\rm sf} = 0$ and $\Sigma_{\rm sf} = 0$, so $Q = Q_{\rm min}$ under all circumstances. This model is similar to the one proposed by \citet{krumholz10c}. \textbf{No-transport, fixed $\epsilon_{\rm ff}$.} A model without transport, with $\epsilon_{\rm ff} = 0.015$ fixed but $Q$ allowed to vary freely. 
In this model the value of $Q$ is given by \autoref{eq:Q_notransport}. This model is similar to the one proposed by \citet{ostriker11a}. \textbf{No-transport, fixed $Q$.} A model without transport, with $Q = Q_{\rm min}$ fixed, but $\epsilon_{\rm ff}$ allowed to vary freely. In this model, $\epsilon_{\rm ff}$ takes on the value given by \autoref{eq:epsff_fg}. This model is similar to the one proposed by \citet{faucher-giguere13a}. For each of these models we compute the star-forming fraction $f_{\rm sf}$ using the formalism of \citet{krumholz13c}, with a clumping factor $f_c = 5$ (since we are now dealing with beam-diluted kpc-scale observations), Solar metallicity, and a stellar density equal to 4 times the minimum value given in \autoref{eq:rho_min}. \subsection{The Star Formation Law} \label{ssec:sflaw} A first test of any model of star formation is the prediction it makes for the star formation law, the relation between the gas content of galaxies and their star formation behaviour. Observationally, the star formation law can be expressed as a correlation between the surface density of star formation and either the gas surface density alone, or the gas surface density divided by the galactic orbital period. It can be measured averaged over entire galaxies, or measured in spatially-resolved patches of galaxies. A successful model should be able to reproduce all these observed correlations.\footnote{One can also define a local star formation law, which relates the local rate of star formation within a given cloud to its volumetric properties (density, virial ratio, etc.). There are significant observational constraints on this relationship as well, as discussed in \autoref{sssec:driving}, but in this paper we have used these constraints as an input to the model, not an output, and thus our model cannot be said to predict this relation. 
However, the local volumetric star formation relation is distinct from the projected, area-averaged one, and it is perfectly possible to match observations of one without successfully reproducing the other. Indeed, in the following sections we will encounter a number of models that do exactly that. Thus the models we consider do constitute predictions for the areal star formation law.} \subsubsection{Spatially-Resolved Observations} First consider spatially-resolved observations. For both the transport+feedback model and the no-feedback model, the star formation rate at each point in the disc is described by \autoref{eq:sfr1} with $\epsilon_{\rm ff} = 0.015$. If we omit star formation feedback, only the $Q = Q_{\rm min}$ solution branch exists, whereas in our fiducial transport+feedback model we can have $Q > Q_{\rm min}$ for $\Sigma_{\rm g} < \Sigma_{\rm sf}$. (Recall that we are limiting our attention to discs in energy equilibrium without significant external energy input; external stimulation can produce $Q \gg Q_{\rm min}$ -- \citealt{inoue16a}.) In practice, however, this makes relatively little difference in the star formation law unless we adopt $Q \gg Q_{\rm min}$, though we shall see that it makes a considerable difference for other observables. Thus for simplicity we simply adopt $Q = Q_{\rm min}$ everywhere, in which case the transport+feedback and no-feedback models are the same. In the no-transport, fixed $\epsilon_{\rm ff}$ model, the value of $Q$ is given by \autoref{eq:Q_notransport}. Substituting this into \autoref{eq:sfr1} (and recalling that the GMC regime does not exist in this case) gives a star formation law \begin{equation} \label{eq:sflaw_observed_os} \dot{\Sigma}_* = \pi G \eta \phi_{\mathrm{mp}}^{1/2} \phi_Q \phi_{\mathrm{nt}}^{3/2} \left\langle\frac{p_*}{m_*}\right\rangle^{-1} \Sigma_{\rm g}^2. 
\end{equation} This relation is identical up to factors of order unity to equation 10 of \citet{ostriker11a}, which is not surprising since it is based on the same physical assumptions. In the no-transport, fixed $Q$ model, we instead have $Q=Q_{\rm min}$ and a value of $\epsilon_{\rm ff}$ given by \autoref{eq:epsff_fg}. Inserting this into \autoref{eq:sfr1} gives \begin{equation} \label{eq:sflaw_observed_fqh} \dot{\Sigma}_* = \pi^2 G \eta \phi_{\mathrm{mp}}^{1/2} \phi_Q \phi_{\mathrm{nt}}^{3/2} f_{g,P}^{-1} Q_{\rm min} \left\langle\frac{p_*}{m_*}\right\rangle^{-1} \Sigma_{\rm g}^2, \end{equation} with no dependence on the orbital period. This equation is identical up to factors of order unity with equation 18 of \citet{faucher-giguere13a}. It is also nearly identical to \autoref{eq:sflaw_observed_os} -- the scalings are the same, and the leading coefficients differ only by a factor of $\pi Q_{\rm min}/f_{g,P} \sim 1$.\footnote{Despite the fact that \autoref{eq:sflaw_observed_os} and \autoref{eq:sflaw_observed_fqh} make nearly identical predictions for the star formation law, the routes by which they arrive at these predictions are quite different. In deriving \autoref{eq:sflaw_observed_os}, one assumes that the star formation efficiency per free-fall time is constant. The scaling $\dot{\Sigma}_* \propto \Sigma_{\rm g}^2$, implying a star formation timescale that declines as $t_{\rm sf}\propto 1/\Sigma_{\rm g}$, arises because the gas velocity dispersion is constant, and this leads to a midplane density that increases as the square of $\Sigma_{\rm g}$. This in turn leads to a free-fall time that scales as $\Sigma_{\rm g}^{-1}$. In contrast, in deriving \autoref{eq:sflaw_observed_fqh} one assumes that the midplane density is not varying, since it is fixed by the condition $Q=Q_{\rm min}$. Instead, the efficiency of star formation is proportional to $\Sigma_{\rm g}$. 
Thus \autoref{eq:sflaw_observed_os} corresponds to a picture where the star formation process is not sensitive to the gas surface density in a galaxy, but the midplane density is, while \autoref{eq:sflaw_observed_fqh} arises from a picture where the midplane density is independent of gas surface density, but the star formation process is not.} Thus for the purposes of comparing to observation we need only consider one form of the no-transport model. An important point to note is that the factor $f_{\rm sf}$ vanishes in both \autoref{eq:sflaw_observed_os} and \autoref{eq:sflaw_observed_fqh}, as it must, since in these models the star formation rate always self-adjusts to maintain force and energy balance without any help from transport. We therefore have two prospective predictions of the star formation law to consider: our fiducial transport+feedback model (\autoref{eq:sfr1} evaluated with $Q=Q_{\rm min}$), and a no-transport model (\autoref{eq:sflaw_observed_os}). We plot the model predictions together with resolved observations in \autoref{fig:sflaw_resolved}. The fiducial model does a good job of describing the data for plausible input values of $t_{\rm orb}$ -- the range plotted is $50-500$ Myr, which roughly covers the span of the data, which include regions from galactic centres to outskirts. In particular, the fiducial model properly captures the curvature seen in the data, where the slope of $\dot{\Sigma}_*$ versus $\Sigma_{\rm g}$ is clearly steeper in the range $\log(\Sigma_{\rm g}/M_\odot\,\mathrm{pc}^{-2}) \approx 0.5 - 1$ than at either higher or lower surface density. In comparison, the no-transport model produces noticeably too steep a slope compared to the observations. The mismatch is most apparent at surface densities of $\sim 100$ $M_\odot$ pc$^{-2}$, where a model without transport tends to over-predict the star formation rate by more than an order of magnitude. 
Moreover, the no-transport model is unable to reproduce the curvature of the data associated with the atomic- to molecular-dominated transition at $\approx 10$ $M_\odot$ pc$^{-2}$, because the star formation rate is insensitive to the thermal or chemical state of the ISM in this case. \begin{figure} \includegraphics[width=\columnwidth]{sflaw_resolved} \caption{ \label{fig:sflaw_resolved} Comparison between theoretical model predictions of the star formation law and observations of nearby galaxies at $\sim 1$ kpc resolution. Lines represent models; solid green lines are the transport+feedback model (\autoref{eq:sfr1}; T+F in the legend), evaluated for orbital times evenly spaced in logarithm from $t_{\rm orb} = 50 - 500$ Myr, with lighter colours (toward the top) corresponding to shorter orbital times. The dashed black line is the no-transport model (\autoref{eq:sflaw_observed_os}; NT in the legend), which has no dependence on orbital time. All models use the fiducial parameters given in \autoref{tab:quantities}, and we compute the star-forming molecular fraction $f_{\rm sf}$ from the KMT+ model \citep{krumholz13c} as in \autoref{sssec:mdot_ss}, using a Solar-normalised metallicity $Z' = 1/3$, appropriate for dwarfs and outer discs. Coloured histograms show observations; colours indicate the distribution of individual pixels in the $\Sigma_{\rm g} - \dot{\Sigma}_*$ plane for inner galaxies (blue; \citealt{leroy13a}; L13 in the legend) and outer galaxies and dwarfs (red; \citealt{bigiel10a}; B10 in the legend); red circles with error bars show the median and scatter of the outer galaxy data. } \end{figure} \subsubsection{Unresolved Observations} \label{sssec:sflaw_unresolved} \begin{figure*} \includegraphics[width=0.8\textwidth]{sflaw_unresolved} \caption{ \label{fig:sflaw_unresolved} Comparison between theoretical model predictions of the star formation law and observations of marginally-resolved galaxies, with one measurement per galaxy.
In each panel, the $y$ axis shows the star formation rate per unit area $\dot{\Sigma}_*$. In the left column the $x$ axis shows total gas surface density $\Sigma$, while in the right it shows $\Sigma/t_{\rm orb}$. The top row shows our fiducial transport+feedback (T+F) model (\autoref{eq:sfr_averaged}), while the bottom row shows the no-transport (NT) model (\autoref{eq:sflaw_observed_os}). Coloured lines show the model predictions, evaluated using orbital periods from $t_{\rm orb} = 5$ Myr (lighter colours) to $500$ Myr (darker colours), with lines evenly spaced in $\log t_{\rm orb}$; note that only one line appears in the lower left panel, because the relationship between $\Sigma_{\rm g}$ and $\dot{\Sigma}_*$ is independent of $t_{\rm orb}$ in the no-transport model. All plots have star-forming ISM fraction $f_{\rm sf}$ computed from the KMT+ model as in \autoref{ssec:sflaw}, and use the fiducial values given in \autoref{tab:quantities}, except that we use $\beta=0.5$ rather than 0 because a substantial part of the sample consists of circumnuclear starbursts, which are in regions with rising rather than flat rotation curves. Coloured points, which are the same in each panel, show data culled from the following sources: local galaxies from \citet[][K98 in the legend]{kennicutt98a}, $z\sim 2$ sub-mm galaxies from \citet[][B07 in the legend]{bouche07a}, and galaxies on and somewhat above the star-forming main sequence at $z\sim 1 - 3$ from \citet[][D08, D10 in the legend]{daddi08a, daddi10a}, \citet[][G10 in the legend]{genzel10a}, and \citet[][T13 in the legend]{tacconi13a}. The observations have been homogenised to a \citet{chabrier05a} IMF and the convention for $\alpha_{\rm CO}$ suggested by \citet{daddi10b}; see \autoref{app:alpha_CO} and \citet{krumholz12a} for details. } \end{figure*} For unresolved observations, we have access only to the surface densities of gas and star formation averaged over the entire disc, and to the rotation period at the disc edge.
To compare our model to such data, we must take care to average the model predictions in the same way. Doing so precisely requires knowing the radial variation of the gas surface density and all the other factors in \autoref{eq:sfr1}, which is obviously not possible for unresolved observations. However, we can make a rough estimate for the effects of area-averaging by considering a disc with radially-constant values of the gas velocity dispersion $\sigma_{\rm g}$, rotation curve index $\beta$, stability parameter $Q$, the various gas fractions $f_{g,Q}$ and $f_{g,P}$, and the star-forming fraction $f_{\rm sf}$. From \autoref{eq:Qdef}, we can see that such a disc has a surface density that varies with radius as $\Sigma_{\rm g} \propto v_\phi/r \propto r^{\beta-1}$. Thus if the disc extends from inner radius 0 to some finite outer edge, the area-averaged surface density is larger than the surface density at the edge by a factor of $2/(1+\beta)$. The effects of area-averaging on the star formation rate depend on the star formation law. First consider our transport+feedback case or a case with no feedback, both of which follow \autoref{eq:sfr1}. In discs where the majority of the star formation occurs in the GMC regime, where the star formation timescale is constant, the area-averaged star formation rate is larger than the value at the outer edge by the same factor. However, in portions of the disc in the Toomre regime, \autoref{eq:sfr1} gives a star formation rate per unit area that varies as $\dot{\Sigma}_* \propto \Sigma_{\rm g} \Omega \propto r^{2(\beta-1)}$. For $\beta \neq 0$, this gives an area-averaged star formation surface density that is larger than the value at the disc edge by a factor of $1/\beta$.
Thus the area-averaged version of \autoref{eq:sfr1} can be written \begin{equation} \label{eq:sfr_averaged} \langle\dot{\Sigma}_*\rangle \approx f_{\rm sf} \langle \Sigma_{\rm g}\rangle \phi_a \max\left[\frac{4 \epsilon_{\rm ff} f_{g,Q}}{\pi Q} \sqrt{\frac{2(1+\beta)}{3 f_{g,P} \phi_{\mathrm{mp}}}} \Omega_{\rm out}, t_{\rm sf,max}^{-1}\right], \end{equation} where the angle brackets indicate area averages, and $\Omega_{\rm out}$ is the angular velocity at the outer edge of the star-forming disc. The factor $\phi_a$ represents the difference in the factors by which area-averaging enhances the star formation rate compared to the gas surface density. It is unity for discs in the GMC regime; in the Toomre regime it is $(1+\beta)/2\beta$ for $\beta \neq 0$. The case of a flat rotation curve, $\beta = 0$, requires special consideration, since in the Toomre regime such a disc has a total star formation rate that diverges logarithmically near the disc centre. As noted by \citet{krumholz16a}, this divergence is a result of the unphysical assumption that a flat rotation curve can continue all the way to $r=0$; such a rotation curve has a divergent shear, which in turn makes the midplane density required to maintain constant $Q$, and thus the total star formation rate, diverge. If one instead considers the more realistic case of a rotation curve that is flat only to some finite inner radius $r_0$, then the area-averaged star formation rate is larger than the value at the disc edge at radius $r_1$ by a factor of $2\ln (r_1/r_0)$, and thus $\phi_a = \ln (r_1/r_0)$. In practice this factor cannot be that large, because extended discs with flat rotation curves also tend to have much of their star formation in the GMC regime, where this extra enhancement does not occur. For this reason, we will adopt $\phi_a = 2$ as a fiducial value, recognising that it can be somewhat larger or smaller depending on the rotation curve and how much of the disc is in the Toomre regime. 
We can proceed analogously to derive the offsets between the local and disc-averaged star formation laws for the alternative no-transport models. In the no-transport, fixed $Q$ model, the star formation law obeys $\dot{\Sigma}_* \propto \Sigma_{\rm g}^2$ (\autoref{eq:sflaw_observed_fqh}), so we again have $\dot{\Sigma}_* \propto r^{2(\beta-1)}$, and the factor $\phi_a$ is therefore the same as in the transport+feedback case. In the no-transport, fixed $\epsilon_{\rm ff}$ model (\autoref{eq:sflaw_observed_os}), we cannot calculate the run of $\Sigma_{\rm g}$ versus radius from our assumptions, because the values of gas surface density $\Sigma_{\rm g}$ and velocity dispersion $\sigma_{\rm g}$ are independent of one another. Thus we cannot directly calculate $\phi_a$ without making an additional assumption about the radial variation of $\Sigma_{\rm g}$. For simplicity, however, we will assume the same radial variation as in the $Q=Q_{\rm min}$ models, and thus obtain the same $\phi_a$. The area-averaged versions of \autoref{eq:sflaw_observed_os} and \autoref{eq:sflaw_observed_fqh} are therefore identical to the original versions, with an added factor of $\phi_a$ on the right hand side. We compare the model predictions to a sample of unresolved observations culled from the literature in \autoref{fig:sflaw_unresolved}. In plotting the data we use the CO-H$_2$ conversion factor $\alpha_{\rm CO}$ recommended by \citet{daddi10a}, and we discuss this choice further in \autoref{app:alpha_CO}. We see that the transport+feedback model agrees reasonably well with the data, while the no-transport models produce noticeably too steep a slope in both $\dot{\Sigma}_*$ versus $\Sigma_{\rm g}$ and $\dot{\Sigma}_*$ versus $\Sigma_{\rm g}/t_{\rm orb}$. \subsection{Gas Velocity Dispersions} \label{ssec:sfr_vdisp} A second observable that we can predict is the gas velocity dispersion in galaxies, and its correlation with star formation.
Consider a galaxy with a constant gas velocity dispersion $\sigma_{\rm g}$. Using the star formation relation \autoref{eq:sfr1} and our definition of $Q$ (\autoref{eq:Qdef}), we can write the star formation rate per unit area as \begin{eqnarray} \dot{\Sigma}_* & = & f_{\rm sf} \frac{\sqrt{8(1+\beta)} f_{g,Q}}{G Q} \frac{\sigma_{\rm g}}{t_{\rm orb}^2} \nonumber \\ & & \quad {} \cdot \max \left[\frac{8\epsilon_{\rm ff} f_{g,Q}}{Q} \sqrt{\frac{2(1+\beta)}{3f_{g,P} \phi_{\rm mp}}}, \frac{t_{\rm orb}}{t_{\rm sf,max}}\right]. \label{eq:sigma_sfr_obs} \end{eqnarray} As in \autoref{sssec:sflaw_unresolved}, we can derive an unresolved version of this relation under the assumption that $Q$, $f_{\rm sf}$, and $\beta$ are constant with radius. Integrating over radius, we find that the total star formation rate is \begin{eqnarray} \dot{M}_* & = & \sqrt{\frac{2}{1+\beta}} \frac{\phi_a f_{\rm sf}}{\pi G Q} f_{g,Q} v_{\phi,\rm out}^2 \sigma_{\rm g} \nonumber \\ & & \quad {} \cdot \max\left[ \sqrt{\frac{2(1+\beta)}{3 f_{g,P} \phi_{\rm mp}}} \frac{8 \epsilon_{\rm ff} f_{g,Q}}{Q}, \frac{t_{\rm orb,out}}{t_{\rm sf,max}} \right], \label{eq:sigma_sfr_unresolved} \end{eqnarray} where $v_{\phi,\rm out}$ and $t_{\rm orb,out}$ are the circular velocity and orbital period evaluated at the outer edge of the star-forming disc. In our transport+feedback model, \autoref{eq:sigma_sfr_obs} and \autoref{eq:sigma_sfr_unresolved} are to be evaluated with $Q = Q_{\rm min}$ if $\sigma_{\rm g} > \sigma_{\rm sf}$. If $\sigma_{\rm g} = \sigma_{\rm sf}$, then we can have any $Q \geq Q_{\rm min}$. Finally, values of $\sigma_{\rm g} < \sigma_{\rm sf}$ are not possible in equilibrium. Our alternative models have a variety of other behaviours. In the no-feedback model \autoref{eq:sigma_sfr_obs} and \autoref{eq:sigma_sfr_unresolved} are the same, but with $\sigma_{\rm sf} = 0$, and thus $Q = Q_{\rm min}$ for all $\sigma_{\rm g}$, and all values of $\sigma_{\rm g}$ are allowed. 
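For concreteness, the following sketch evaluates \autoref{eq:sigma_sfr_unresolved} numerically. The default values of $Q$, $\epsilon_{\rm ff}$, $\phi_{\rm mp}$, and $t_{\rm sf,max}$ are assumed fiducial choices for illustration (they are not read from \autoref{tab:quantities}), and the unit handling is folded into two constants:

```python
import math

G = 4.301e-3                       # G in pc Msun^-1 (km/s)^2
KMS_PER_PC_TO_PER_YR = 1.0227e-6   # converts (km/s)/pc to yr^-1

def mdot_star(sigma_g, f_sf, v_phi, t_orb_myr, beta, f_gQ, f_gP, phi_a,
              Q=1.0, eps_ff=0.015, phi_mp=1.4, t_sf_max_myr=2000.0):
    """Unresolved star formation rate in Msun/yr from the equation above;
    sigma_g and v_phi in km/s. Keyword defaults are illustrative fiducial
    choices, not values quoted in the text."""
    toomre = (math.sqrt(2.0 * (1.0 + beta) / (3.0 * f_gP * phi_mp))
              * 8.0 * eps_ff * f_gQ / Q)
    gmc = t_orb_myr / t_sf_max_myr  # t_orb,out / t_sf,max
    prefac = (math.sqrt(2.0 / (1.0 + beta)) * phi_a * f_sf
              / (math.pi * G * Q) * f_gQ * v_phi**2 * sigma_g)
    return prefac * max(toomre, gmc) * KMS_PER_PC_TO_PER_YR

# A 'local spiral'-like disc with sigma_g = 10 km/s
print(mdot_star(10.0, f_sf=0.5, v_phi=220.0, t_orb_myr=200.0,
                beta=0.0, f_gQ=0.5, f_gP=0.5, phi_a=1.0))
```

For these parameters the GMC branch of the maximum dominates, and the predicted rate is of order $1$ $M_\odot$ yr$^{-1}$, consistent with the range we adopt for local spirals below.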
Conversely, in the no-transport, fixed $\epsilon_{\rm ff}$ model, $\sigma_{\rm g}$ can only take on the single value $\sigma_{\rm sf}$; no other values are allowed in equilibrium, and $\dot{\Sigma}_*$ and $\dot{M}_*$ are independent of it. Finally, in the no-transport, fixed $Q$ model, we have $Q = Q_{\rm min}$, and we must use \autoref{eq:epsff_fg} for $\epsilon_{\rm ff}$. Substituting this value of $\epsilon_{\rm ff}$ into \autoref{eq:sigma_sfr_obs} and \autoref{eq:sigma_sfr_unresolved} gives the relationships \begin{eqnarray} \dot{\Sigma}_* & = & \frac{8(\beta+1)\pi \eta \sqrt{\phi_{\rm mp} \phi_{\rm nt}^3} \phi_Q}{G Q^2 \langle p_*/m_*\rangle f_{g,P}} \frac{\sigma_{\rm g}^2}{t_{\rm orb}^2} \\ \dot{M}_* & = & \frac{4 \eta \sqrt{\phi_{\rm mp} \phi_{\rm nt}^3} \phi_Q \phi_a}{G Q^2 \langle p_*/m_*\rangle} \frac{f_{g,Q}^2}{f_{g,P}} v_{\phi,\rm out}^2 \sigma_{\rm g}^2. \end{eqnarray} Note that the transport+feedback and no-feedback models both predict $\dot{M}_* \propto \sigma_{\rm g}$ for $\sigma_{\rm g} > \sigma_{\rm sf}$, while the two no-transport models predict very different scalings: no relationship between $\dot{M}_*$ and $\sigma_{\rm g}$ for the no-transport, fixed $\epsilon_{\rm ff}$ model, and a much stronger scaling, $\dot{M}_* \propto \sigma_{\rm g}^2$, for the no-transport, fixed $Q$ model. This difference, first pointed out by \citet{krumholz16a}, provides a very clear observational signature that can be used to distinguish models with and without transport.\footnote{The scaling between $\dot{M}_*$ and gas fraction for the no-transport, fixed $Q$ model that we obtain here is slightly different from that given in \citet{krumholz16a}, because here we have treated this model as having fixed total $Q$. In contrast, the \citet{faucher-giguere13a} model to which \citet{krumholz16a} compare assumed fixed $Q_{\rm g}$.} The physical origin of this difference is easy to understand.
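These contrasting slopes can be verified with a schematic sketch. The functions below encode only the proportionalities just quoted (normalisations are arbitrary and hypothetical), and measure the logarithmic slope of $\dot{M}_*$ with $\sigma_{\rm g}$ under each closure:

```python
import math

def mdot_star_scaling(sigma, model):
    """Mdot_* versus sigma_g up to an arbitrary constant, encoding only
    the proportionalities quoted in the text (schematic, unitless)."""
    if model == "transport+feedback":     # Q and eps_ff both held fixed
        return sigma                      # Mdot_* ~ sigma_g
    if model == "no-transport, fixed Q":  # eps_ff ~ sigma_g
        return sigma ** 2                 # Mdot_* ~ sigma_g**2
    # no-transport, fixed eps_ff: sigma_g is pinned at sigma_sf
    raise ValueError("sigma_g takes a single value; no slope to measure")

def log_slope(model, s1=10.0, s2=100.0):
    """d ln Mdot_* / d ln sigma_g between velocity dispersions s1 and s2."""
    return (math.log(mdot_star_scaling(s2, model)
                     / mdot_star_scaling(s1, model)) / math.log(s2 / s1))

print(log_slope("transport+feedback"))     # slope of 1
print(log_slope("no-transport, fixed Q"))  # slope of 2
```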
The star formation rate is $\dot{\Sigma}_* = \epsilon_{\rm ff} \Sigma_{\rm g}/t_{\rm ff}$. For a fixed rotation curve, orbital time, and gas fraction, the gas surface density scales as $\Sigma_{\rm g} \propto \sigma_{\rm g} / Q$, and the midplane density scales as $\rho_{\rm mp} \propto Q^{-2}$, implying that the free-fall time scales as $t_{\rm ff} \propto Q$, with no explicit dependence on $\sigma_{\rm g}$. The overall scaling is therefore $\dot{\Sigma}_* \propto \epsilon_{\rm ff} \sigma_{\rm g} / Q^2$. The difference between the transport+feedback and no-transport models then follows from their assumed variations in $\epsilon_{\rm ff}$ and $Q$. Our fiducial transport+feedback model has $\epsilon_{\rm ff}$ and $Q$ both constant, so we obtain a linear scaling $\dot{\Sigma}_* \propto \sigma_{\rm g}$. The no-transport, fixed $\epsilon_{\rm ff}$ model has constant $\sigma_{\rm g}$ and varying $Q$, so it predicts no relationship between $\dot{\Sigma}_*$ and $\sigma_{\rm g}$, with all the variations in star formation rate being driven by changes in $Q$. The no-transport, fixed $Q$ model has $\epsilon_{\rm ff} \propto \sigma_{\rm g}$ (\autoref{eq:epsff_fg}), so it predicts $\dot{\Sigma}_* \propto \sigma_{\rm g}^2$.
\begin{table}
\begin{tabular}{c@{$\quad$}cccc}
\hline
Parameter & Local dwarf & Local spiral & ULIRG & High-$z$ \\
\hline
$f_{\rm sf}$ & 0.2 & 0.5 & 1.0 & 1.0 \\
$v_\phi$ [km s$^{-1}$] & 100 & 220 & 300 & 200 \\
$t_{\rm orb}$ [Myr] & 100 & 200 & 5 & 200 \\
$\beta$ & 0.5 & 0.0 & 0.5 & 0.0 \\
$f_{g,Q} = f_{g,P}$ & 0.9 & 0.5 & 1.0 & 0.7 \\
$\phi_a$ & 1 & 1 & 2 & 3 \\
$\dot{M}_{*,\rm min}$ [$M_\odot$ yr$^{-1}$] & - & - & 1 & 1 \\
$\dot{M}_{*,\rm max}$ [$M_\odot$ yr$^{-1}$] & 0.5 & 5 & - & - \\
\hline
\end{tabular}
\caption{
\label{tab:sfrvdisp_param}
Parameter values used for the theoretical models shown in \autoref{fig:sigma_sfr_unresolved}. See main text for details.
} \end{table} \begin{figure*} \includegraphics{sfrvdisp} \caption{ \label{fig:sigma_sfr_unresolved} Comparison between the observed correlation between gas velocity dispersion and star formation rate and theoretical models. Solid lines represent theoretical models, with the model plotted indicated in each panel; clockwise from top left, these are the transport+feedback model, the no-feedback model, the no-transport, fixed $Q$ model, and the no-transport, fixed $\epsilon_{\rm ff}$ model. The lines shown are for four representative sets of parameters, corresponding roughly to those appropriate for local dwarf galaxies, local spiral galaxies, ULIRGs, and high-$z$ star-forming discs; the lines fade outside the range of star formation rates for which they are applicable. See \autoref{tab:sfrvdisp_param} and the main text for details. The coloured points represent observations, and are the same in every panel. Data shown include: H$\alpha$ observations of local galaxies from two surveys (GHASP, \citealt{epinat08a}, and DYNAMO, \citealt{green14a}) as well as smaller studies \citep{moiseev15a, varidel16a}; H~\textsc{i} observations of nearby galaxies from THINGS \citep{leroy08a, walter08a, ianjamasimanana12a} and from the survey of dwarfs by \citet{stilp13a}; a compilation of molecular line observations of nearby ULIRGs \citep{downes98a, sanders03a, veilleux09a, scoville15a, scoville17a}; H$\alpha$ observations of high-redshift galaxies from the samples of \citet{epinat09a}, \citet{law09a}, \citet{lemoine-busserolle10a}, and the WiggleZ \citep{wisnioski11a} and SINS-KMOS-3D \citep{wisnioski15a, wuyts16a} surveys at $z\sim 1-3$; H$\alpha$ observations of lensed galaxies at $z\sim 2-3$ from \citet{jones10a}; a sample of galaxies at $z\sim 1$ from the KMOS survey \citep{wisnioski15a} as analysed by \citet{di-teodoro16a}; and a sample from the KROSS survey \citep{stott16a, johnson17a}. Full details on the data set are given in \autoref{app:sigma_sfr_data}.
} \end{figure*} Since the results depend on $f_{\rm sf}$, $v_\phi$, $t_{\rm orb}$, $\beta$, and the gas fraction, to compare the various theoretical models to observations we must choose values for these parameters; for unresolved observations, the comparison also depends on $\phi_a$. Following our approach in \autoref{sssec:mdot_ss}, we consider four different possibilities that should be broadly representative of the ranges these parameters can take. We label these cases ``local dwarf'', ``local spiral'', ``ULIRG'', and ``high-$z$'', with the final case intended to be typical of observed high-redshift star-forming discs. We summarise the chosen parameters in \autoref{tab:sfrvdisp_param}; all other parameters have their fiducial values as specified in \autoref{tab:quantities}, and we compute the ISM thermal velocity dispersion $\sigma_{\rm th}$ as in \autoref{sssec:mdot_ss}. Broadly speaking, the dwarf is characterised by a high gas fraction, a low star-forming fraction, and a low orbital velocity, and has $\phi_a = 1$ because it is entirely in the GMC regime; the ULIRG has a high orbital velocity, a high gas fraction, and a short orbital period, and has a larger value of $\phi_a$, since it is entirely in the Toomre regime. The local spiral and high-$z$ star-forming disc have properties intermediate between these extremes, with the high-$z$ system having a higher gas fraction, star-forming fraction, and $\phi_a$. Finally, we note that each set of model parameters is found only in some finite range of star formation rates; for example, objects with $f_{\rm sf} = 0.2$, as we adopt for our local dwarf case, do not generally produce star formation rates of $10$ $M_\odot$ yr$^{-1}$. For this reason we use each set of properties only up to some maximum or down to some minimum star formation rate; we give the limiting values $\dot{M}_{*,\rm min}$ and $\dot{M}_{*,\rm max}$ in \autoref{tab:sfrvdisp_param} as well.
We compare the predictions of our transport+feedback model and the three alternatives to observations in \autoref{fig:sigma_sfr_unresolved}. Details on the observations and our processing of them are given in \autoref{app:sigma_sfr_data}. We see that the transport+feedback model is in generally good agreement with the observations at both low and high star formation rates. In particular, it captures the behaviour that the velocity dispersion reaches a floor of $\approx 10$ km s$^{-1}$ at low star formation rates,\footnote{Some of the data, particularly the GHASP and \citet{moiseev15a} samples, have $\sigma_{\rm g} \approx 20$ km s$^{-1}$ at low star formation rates, but this is likely an artefact of using H$\alpha$-based estimates of $\sigma_{\rm g}$, as the other tracers are all systematically lower. See \autoref{app:sigma_sfr_data} for further discussion.} while increasing rapidly at star formation rates above a few $M_\odot$ yr$^{-1}$. (The dwarf case rises to high velocity dispersions at star formation rates that are too low, but this is an artefact of choosing to fix $f_{\rm sf}$, when in fact no galaxy with a star formation rate of $\gtrsim 1$ $M_\odot$ yr$^{-1}$ has $f_{\rm sf} = 0.2$ and $v_\phi = 100$ km s$^{-1}$, as we have adopted in the dwarf case.) In contrast, the alternative models all have an obvious failing. The no-feedback model does well at high star formation rates, but fails to capture the floor imposed by star formation feedback at low star formation rates, instead predicting that the velocity dispersion should fall to very small values. Conversely, the no-transport, fixed $\epsilon_{\rm ff}$ model correctly captures the behaviour at low star formation rates, but fails to reproduce the observed increase in velocity dispersion at higher star formation rates. Finally, the no-transport, fixed $Q$ model has qualitatively correct behaviour, but seriously under-predicts the velocity dispersion at all star formation rates.
This failure is a direct result of having too steep a relationship between star formation and gas surface density, as seen in \autoref{ssec:sflaw}. \subsection{Mass Transport} \label{ssec:obs_transport} A final observable, or at least potential observable, that we can predict is the correlation between the mass inflow rate and the physical properties of the star-forming disc. As discussed in \autoref{sec:intro}, at present we have direct detections of inflow rates for only a handful of nearby galaxies, but we can compare our model to these, and predict the results of future observations and simulations. To make predictions for this correlation using our transport+feedback model, for any choice of $\sigma_{\rm g}$ and ancillary parameters (gas fraction, rotation curve, etc.), we can use \autoref{eq:mdot_steady} to compute the mass inflow rate, and \autoref{eq:sigma_sfr_unresolved} to compute the corresponding star formation rate. When $\sigma_{\rm g} \gg \sigma_{\rm sf}$, this leads to a predicted scaling between inflow and star formation $\dot{M} \propto \dot{M}_*^3/(v_\phi^6 f_{\rm sf}^3)$, with the coefficient depending on the gas fraction, rotation curve index, and whether the galaxy is in the Toomre regime. We can use the same method for the no-feedback model simply by setting $\sigma_{\rm sf} = 0$, but the results are only slightly different, so we refrain from showing them. Models without transport, depending on one's perspective, either predict that the inflow rate should be zero or make no predictions at all regarding its value. \begin{figure} \includegraphics[width=\columnwidth]{inflow_sfr} \caption{ \label{fig:inflow_sfr} Inflow rate through the disc versus star formation rate. Blue points show the observations of \citet{schmidt16a}, while lines show the predictions of our transport+feedback model for the four example cases whose parameters are given in \autoref{tab:sfrvdisp_param}.
} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{inflow_sfr_mod} \caption{ \label{fig:inflow_sfr_mod} Same as \autoref{fig:inflow_sfr}, but with the star formation rate normalised by $f_{\rm sf} v_\phi^2$. Note that, compared to \autoref{fig:inflow_sfr}, the $x$ axis range has been compressed from 4 dex to 2 dex. } \end{figure} We show predicted mass inflow rates for the same four example cases used in \autoref{ssec:sfr_vdisp} in \autoref{fig:inflow_sfr}. Given the extremely strong scaling of the inflow rate with $v_\phi$ and $f_{\rm sf}$, it is not surprising that the example cases cover a very wide range of possible inflow rates for a given star formation rate. Thus the model is consistent with the data, in that the data lie near the ``local spiral'' parameter choices, where we expect them, but this is a relatively weak statement. A more interesting test is to normalise out the dependence on the rotation curve velocity and star-forming fraction. Our model predicts that $\dot{M}_* \propto f_{\rm sf} v_\phi^2$, whereas $\dot{M}$ is independent of these two parameters (except very close to $\sigma_{\rm g} = \sigma_{\rm sf}$), and thus our model makes a much stronger prediction for the correlation of $\dot{M}$ with $\dot{M}_*/(f_{\rm sf} v_\phi^2)$. For the observations, we take $v_{\phi}$ from Table 2 of \citet{leroy13a}; as a proxy for the star-forming fraction $f_{\rm sf}$, we use $f_{\rm sf} = \dot{\Sigma}_* / (\Sigma_{\rm g}/2\mbox{ Gyr})$, where the values of $\Sigma_{\rm g}$ and $\dot{\Sigma}_*$ are taken from the same table. We plot the correlation of $\dot{M}$ with $\dot{M}_*/(f_{\rm sf} v_\phi^2)$ in \autoref{fig:inflow_sfr_mod}. We see that both the data and the models cluster much more tightly than in the plot of $\dot{M}$ versus $\dot{M}_*$ (note the difference in $x$-axis range between \autoref{fig:inflow_sfr} and \autoref{fig:inflow_sfr_mod}), and that the data remain quite close to the model lines.
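The normalisation argument can be illustrated with a short schematic sketch (prefactors suppressed, input values hypothetical): eliminating $\sigma_{\rm g}$ between $\dot{M} \propto \sigma_{\rm g}^3$ and $\dot{M}_* \propto f_{\rm sf} v_\phi^2 \sigma_{\rm g}$ shows that $\dot{M}$ depends only on the combination $\dot{M}_*/(f_{\rm sf} v_\phi^2)$, while the $f_{\rm sf}$ proxy is simple arithmetic on the observed surface densities:

```python
# Schematic scalings for sigma_g >> sigma_sf (constants suppressed):
def mdot_inflow(sigma):
    return sigma ** 3                 # Mdot ~ sigma_g**3

def mdot_star_schematic(sigma, f_sf, v_phi):
    return f_sf * v_phi ** 2 * sigma  # Mdot_* ~ f_sf v_phi^2 sigma_g

# Mdot_* / (f_sf v_phi^2) ~ sigma_g, so Mdot over the cube of that
# combination is the same for any (f_sf, v_phi) pair:
for f_sf, v_phi in ((0.2, 100.0), (0.5, 220.0), (1.0, 300.0)):
    x = mdot_star_schematic(30.0, f_sf, v_phi) / (f_sf * v_phi ** 2)
    print(mdot_inflow(30.0) / x ** 3)  # identical in every case

def f_sf_proxy(sfr_surf, gas_surf, t_dep_gyr=2.0):
    """f_sf = Sigma_dot_* / (Sigma_g / 2 Gyr), with sfr_surf in
    Msun/yr/kpc^2 and gas_surf in Msun/pc^2 (hypothetical inputs)."""
    return sfr_surf / (gas_surf * 1.0e6 / (t_dep_gyr * 1.0e9))

print(f_sf_proxy(2.5e-3, 10.0))  # 0.5
```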
The remaining difference between the theoretical model results for dwarfs and spirals is due to the differences in gas fraction and rotation curve index between the two, and the observations, with one possible exception, are well within the space of models plausibly spanned by the gas fraction and rotation curve index range of nearby galaxies. The outlier at the left side of the plot is NGC 2903. (In \autoref{fig:inflow_sfr} this is the point at the lowest inflow rate and second lowest star formation rate.) For this galaxy, \citet{schmidt16a} state that their fits are likely unreliable due to complex kinematics driven by a strong bar. While this comparison is encouraging, the observations of \citet{schmidt16a} cover only a very narrow range of galaxy properties, and stronger tests are clearly warranted. The most obvious targets for such comparisons are nearby starburst galaxies, for which our transport+feedback model predicts large inflow rates. These galaxies are nearby enough that one can make high resolution CO or HCN maps from which kinematic information can be extracted. While the kinematics are likely to be complex, and thus the analysis will be more difficult than it is for quiescent spirals, the predicted signal is also much larger. \section{Implications for Galaxy Formation} \label{sec:discussion} \subsection{Equilibrium Inflow and Star Formation} \label{ssec:gal_eq} The model we present here has important implications for the formation of disc galaxies. To explore these further, we begin by changing our perspective. Thus far we have been developing a theory that takes as input the gas content and other ancillary properties of a galaxy, and returns as output the inflow rate $\dot{M}$ that is required to maintain pressure and energy equilibrium in a galaxy of given physical parameters, and the star formation rate $\dot{M}_*$ that accompanies this equilibrium configuration.
This framing of the problem is appropriate if we are interested in behaviours on timescales short compared to the gas consumption or flow timescales. Over cosmological timescales, however, it is more natural to think of the inflow rate as given. Gas will fall onto the central galaxy at a rate dictated by cosmological structure formation, recycling of gas ejected at earlier epochs, and processing through the gaseous halo. At least for large galaxies, where the gas consumption timescale is shorter than the Hubble time, the galaxy will adapt its structure to be in equilibrium given this inflow rate, a point previously made by \citet{dekel09a}. We can use this picture to calculate the evolution of galaxies' velocity dispersions. Tidal torque theory suggests that the specific angular momentum of infalling gas will increase with halo mass and cosmic time, and thus gas accreting onto galaxies tends to arrive at their outskirts, where orbital times, and thus star formation timescales, are relatively long. Some of this gas will form stars before gravitational instability moves it inward, but some will be forced to flow inward toward the galactic bulge. This is particularly true at high redshift, when galaxies accrete quickly. We can therefore approximate that the inflow rate must be comparable to the infall rate (including in this rate recycling of gas ejected from the galaxy but not the halo), which in turn dictates the velocity dispersion in the galaxy, as suggested by \citet{genel12b}. In practice, we can calculate this velocity dispersion from \autoref{eq:mdot_steady} by setting $\dot{M} = \dot{M}_{\rm g,acc}$, where $\dot{M}_{\rm g,acc}$ is the gas accretion rate onto the galaxy. From the velocity dispersion and the galaxy rotation curve we can in turn compute the run of gas surface density, and thence the star formation rate.
For a simple case of radially-constant $\beta$, $f_{\rm sf}$, and gas fractions, we can compute this analytically using \autoref{eq:sigma_sfr_unresolved}. A more realistic calculation would consider the time and radial variation of the gas fraction and star-forming ISM fraction, and such models have had significant success \citep[e.g.,][]{forbes12a, forbes14a, tonini16a}, but this level of complexity requires semi-analytic solution, and our goal here is qualitative insight. For this reason, we choose to neglect these complications and ignore radial and time variation. Doing so allows us to compute an equilibrium star formation rate in the disc of the galaxy as a function of the galaxy's inflow rate and rotation curve, which in turn are functions of the halo mass and redshift. We emphasise that this is the star formation rate in the disc; in a simple equilibrium picture, the total star formation rate must equal the accretion rate minus the rate of mass ejection by star formation or black hole feedback. The procedure we have just outlined thus provides a means of computing the disc star formation rate from the inflow rate and the rotation curve, with the remaining mass balance coming from a combination of star formation in a bulge and ejection of gas through galactic winds. \subsection{Cosmological Halo Evolution} \label{ssec:halo_evol} To make use of the simple picture outlined above, we must have methods to calculate the halo accretion rate, circular velocity, and orbital period as a function of redshift. First consider the accretion rate. In the interest of simplicity we neglect the contribution from gas recycling, and simply attempt to calculate the infall rate of pristine gas. To first order this is determined by the dark matter accretion rate, which in the context of a $\Lambda$CDM cosmology\footnote{Throughout this section we assume a cosmology with $\Omega_m = 0.27$, $\Omega_\Lambda = 0.73$, $h=0.71$, and $\sigma_8 = 0.81$.
We use this rather than more recent \textit{Planck} cosmological parameters because we do not have calibrations of the accretion formulae we adopt below for the more recent parameters. In practice this will make little difference, since our goal in this section is to develop a rough intuitive model rather than perform precision calculations.} can be calculated with the extended Press-Schechter (EPS) formalism, with some additional calibration from simulations. Following \citet{krumholz12d} and \citet{forbes14a}, we adopt the approximate dark matter accretion rates found by \citet{neistein08a} and \citet{bouche10a}: \begin{equation} \label{eq:halo_acc} \dot{M}_{h,12} \approx -\alpha M_{h,12}^{1+\beta} \dot{\omega}, \end{equation} where $M_{h,12}$ and $\dot{M}_{h,12}$ are the halo mass and accretion rate normalised to $10^{12}$ $M_\odot$, and $\omega$ is the self-similar time variable of the EPS formalism (i.e., $\omega$ is time measured in units of the linear growth time for structure), whose time derivative is well fit by \begin{equation} \dot{\omega} \approx -0.0476[1 + z + 0.093(1+z)^{-1.22}]^{2.5}\mbox{ Gyr}^{-1}. \end{equation} The functional form of \autoref{eq:halo_acc} follows from the EPS formalism, and the value of $\beta$ (not to be confused with the rotation curve index) follows from the power spectrum of density fluctuations, while $\alpha$ is a free parameter to be calibrated from full dark matter simulations. The \citet{neistein08a} fit, updated to current cosmological parameters, gives $\alpha = 0.628$ and $\beta=0.14$. With these parameters, the accretion rate evaluates numerically to approximately \begin{equation} \dot{M}_{h} \approx 39 \, M_{h,12}^{1.1} \left(1+z\right)^{2.2}\,M_\odot\mbox{ yr}^{-1} \end{equation} at $z < 1$, with a slightly steeper dependence on $1+z$ at higher $z$. In practice, however, rather than use this approximate expression, we generate histories of mass versus redshift by direct numerical integration of \autoref{eq:halo_acc}.
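As a sanity check on this approximation, the sketch below evaluates \autoref{eq:halo_acc} directly (assuming, for the unit conversion, that $\dot{M}_{h,12}$ is expressed in units of $10^{12}$ $M_\odot$ Gyr$^{-1}$) and compares it with the quoted power-law fit:

```python
def omega_dot(z):
    """Time derivative of the EPS self-similar variable, in Gyr^-1
    (negative, since omega decreases as time advances)."""
    return -0.0476 * (1.0 + z + 0.093 * (1.0 + z) ** -1.22) ** 2.5

def mdot_halo(M12, z, alpha=0.628, beta=0.14):
    """Dark matter accretion rate in Msun/yr; M12 = M_h / 1e12 Msun."""
    mdot12_per_gyr = -alpha * M12 ** (1.0 + beta) * omega_dot(z)
    return mdot12_per_gyr * 1.0e12 / 1.0e9  # 1e12 Msun/Gyr -> Msun/yr

# Compare with the approximation ~ 39 M12^1.1 (1+z)^2.2 Msun/yr at z < 1
for z in (0.0, 0.5, 1.0):
    print(z, mdot_halo(1.0, z), 39.0 * (1.0 + z) ** 2.2)
```

The two expressions agree to within several per cent over $0 \leq z \leq 1$.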
The next step in estimating the baryonic accretion rate is to correct for the efficiency with which baryons penetrate the hot halo of the galaxy. Following \citet{forbes14a}, we compute the gas accretion rate using the model of \citet{faucher-giguere11a}, \begin{eqnarray} \label{eq:gas_acc} \dot{M}_{\rm g,acc} & = & \epsilon_{\rm in} f_b \dot{M}_{h} \\ \epsilon_{\rm in} & = & \min\left[\epsilon_0 M_{h,12}^{\beta_{M_h}}(1+z)^{\beta_z}, \epsilon_{\rm max}\right], \end{eqnarray} where $f_{\rm b}\approx 0.17$ is the universal baryon fraction, and the parameters $\left(\epsilon_0, \beta_{M_{\rm h}}, \beta_z, \epsilon_{\rm max}\right) = (0.31, -0.25, 0.38, 1)$ are the results of a fit to a series of SPH simulations run by \citet{faucher-giguere11a}. This, combined with \autoref{eq:halo_acc}, enables us to compute the gas accretion rate for an arbitrary halo. The fit for $\epsilon_{\rm in}$ is calibrated to be most accurate for $z > 2$. Despite this we will continue to use it at lower $z$; examination of \citeauthor{faucher-giguere11a}'s figure 9 suggests that it remains reasonably accurate down to $z \approx 0$ for halos below $10^{12}$ $M_\odot$, but that it overestimates the accretion rates for more massive halos at low redshift by a factor of a few. In addition to the accretion rate, we need the circular velocity and orbital period, or, equivalently, the circular velocity and disc radius.
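The penetration correction amounts to two one-line functions; a minimal sketch (the example input values are hypothetical):

```python
def epsilon_in(M12, z, eps0=0.31, beta_Mh=-0.25, beta_z=0.38, eps_max=1.0):
    """Fraction of the cosmic baryon supply that penetrates the hot halo,
    using the fit parameters quoted in the text; M12 = M_h / 1e12 Msun."""
    return min(eps0 * M12 ** beta_Mh * (1.0 + z) ** beta_z, eps_max)

def mdot_gas_acc(mdot_h, M12, z, f_b=0.17):
    """Gas accretion rate, Mdot_g,acc = eps_in * f_b * Mdot_h, in the same
    units as the dark matter accretion rate supplied as mdot_h."""
    return epsilon_in(M12, z) * f_b * mdot_h

# e.g. a 1e12 Msun halo at z = 2 accreting dark matter at a hypothetical
# 400 Msun/yr: roughly half the arriving baryons get through
print(epsilon_in(1.0, 2.0), mdot_gas_acc(400.0, 1.0, 2.0))
```

Note that the $\min$ caps $\epsilon_{\rm in}$ at unity for low-mass, high-redshift halos, where the raw power law would exceed the available baryon supply.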
We also follow \citet{forbes14a} in computing the characteristic radius of the disc as \begin{equation} r_d \approx 0.035 r_{\rm vir} = 0.035 \cdot 163 \, M_{h,12}^{1/3}(1+z)^{-1}\mbox{ kpc}, \end{equation} where $r_{\rm vir}$ is the virial radius, the numerical value of 163 kpc is for a halo overdensity of 200, and the coefficient of $0.035$ is roughly consistent with the findings by \citet{kravtsov13a} and \citet{somerville18a} that galaxies have ratios of half-mass to virial radius $r_h/r_{\rm vir} \simeq 0.015 - 0.018$.\footnote{To be precise, since our equilibrium models have gas surface density profiles $\Sigma_{\rm g} \propto 1/r$ for flat rotation curves, implying a mass that scales as $r$, the outer radius should be exactly twice the half-mass radius. Thus our coefficient of 0.035 corresponds to $r_h/r_{\rm vir} = 0.0175$.} For the circular velocity, we note that a \citet{navarro97a} profile has a maximum circular velocity \citep{mo10a} \begin{eqnarray} v_{c,\rm max} & \approx & 0.465\sqrt{\frac{c}{\ln(1+c)-c/(1+c)}} v_{\rm vir} \\ v_{\rm vir} & \approx & 117 \, M_{h,12}^{1/3} (1+z)^{1/2} \mbox{ km s}^{-1}, \end{eqnarray} where $c$ is the halo concentration; the numerical coefficients are again for a halo overdensity of 200, and we adopt $c=10$ as a fiducial value. The true circular velocity will be somewhat larger than this because in the star-forming parts of galaxies the baryons contribute non-negligibly to the gravitational potential. We very roughly adopt a relation \begin{equation} v_\phi = \phi_v v_{c,\rm max} \end{equation} with $\phi_v = 1.4$, which gives $v_\phi = 200$ km s$^{-1}$ for a $10^{12}$ $M_\odot$ halo at $z=0$. This is a crude approximation, but, as noted above, our goal here is a qualitative toy model, not a precise calculation. The orbital time follows immediately from $r_d$ and $v_\phi$. 
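Putting these scalings together, a short sketch (the factor of 977.8 Myr per kpc/(km s$^{-1}$) is the only added ingredient) gives the disc radius, rotation speed, and orbital time for a Milky Way-like halo:

```python
import math

def disc_properties(M12, z, c=10.0, phi_v=1.4):
    """Disc radius (kpc), rotation speed (km/s), and orbital period (Myr)
    from the overdensity-200 scalings quoted in the text; M12 = M_h/1e12."""
    r_d = 0.035 * 163.0 * M12 ** (1.0 / 3.0) / (1.0 + z)      # kpc
    v_vir = 117.0 * M12 ** (1.0 / 3.0) * math.sqrt(1.0 + z)   # km/s
    v_cmax = 0.465 * math.sqrt(c / (math.log(1.0 + c) - c / (1.0 + c))) * v_vir
    v_phi = phi_v * v_cmax                                    # km/s
    t_orb = 2.0 * math.pi * r_d / v_phi * 977.8               # Myr
    return r_d, v_phi, t_orb

r_d, v_phi, t_orb = disc_properties(1.0, 0.0)
print(r_d, v_phi, t_orb)   # ~5.7 kpc, ~200 km/s, ~180 Myr
```

This recovers the $v_\phi \approx 200$ km s$^{-1}$ quoted above for a $10^{12}$ $M_\odot$ halo at $z=0$, and an orbital time comparable to the 200 Myr adopted for the local spiral case in \autoref{tab:sfrvdisp_param}.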
\subsection{Model Results and Interpretation} \begin{figure} \includegraphics[width=\columnwidth]{halo_hist} \caption{ \label{fig:halo_hist} Evolution of the ratio of gas accretion rate (\autoref{eq:gas_acc}) to disc star formation rate (\autoref{eq:sigma_sfr_unresolved}; top panel) and of gas velocity dispersion (\autoref{eq:mdot_steady}) to star formation-supported velocity dispersion (\autoref{eq:sigma_sf}; bottom panel) as a function of redshift. Each line represents the evolutionary path of a particular halo, with the lightest colour (bottom lines in the lower panel) corresponding to a halo with a present-day mass of $M_{h,0} = 10^{12}$ $M_\odot$, and the darkest (top lines in the lower panel) to a halo with a present-day mass of $M_{h,0} = 10^{13}$ $M_\odot$. Intermediate lines are uniformly spaced by 0.1 dex in $M_{h,0}$. The inflection points visible at $z \approx 0.5$ correspond to where halos switch from star formation occurring mainly in the Toomre regime (at higher $z$) to occurring mainly in the GMC regime (at lower $z$). Shaded regions in the upper panel indicate regimes of bulge building, disc building, and central quenching, and in the lower panel indicate regions of transport-driven versus feedback-driven turbulence; see main text for details. } \end{figure} We now use the formalism of \autoref{ssec:halo_evol} to compute the mass and accretion histories of a range of halos with present-day masses of $M_{h,0} = 10^{12} - 10^{13}$ $M_\odot$. For each one we compute the velocity dispersion $\sigma_{\rm g}$, the velocity dispersion that can be supported by star formation $\sigma_{\rm sf}$, and the disc star formation rate $\dot{M}_{*, \rm disc}$ as outlined in \autoref{ssec:gal_eq}. We restrict our attention to halos in this mass range for three reasons. First, halos substantially smaller than this are not observable beyond the local Universe, while those substantially larger host clusters rather than single galaxies.
Thus the observational sample beyond $z=0$ is mostly limited to this mass range. Second, it is not clear whether the equilibrium assumption is applicable in halos smaller than $\sim 10^{12}$ $M_\odot$. These host dwarf galaxies, and the gas consumption timescale in modern-day dwarfs is generally comparable to or longer than the Hubble time \citep[e.g.,][]{bolatto11a, hunter12a, jameson16a}. This might also be the case at higher redshift, but it is uncertain because the gas consumption timescale depends on both the star formation rate and the rate at which star formation drives gas out via galactic winds, and the latter is highly uncertain for low-mass galaxies beyond the local universe \citep{forbes14b}. Limiting our attention to halos above $10^{12}$ $M_\odot$ avoids this issue. A related point is that the mass loading factor for these halos is unlikely to be $\gtrsim 1$, so we need not adopt a complex model to treat this phenomenon either. Third and finally, to evaluate our models we require values of $f_{\rm sf}$ and $\sigma_{\rm th}$, and these depend on the molecular fraction in the ISM. The dependence of this fraction on halo mass and redshift is highly complex and substantially uncertain \citep[e.g.,][]{obreschkow09a, fu10a, lagos11a, krumholz12d, forbes14a}. However, we expect that the molecular fraction will be smallest in small halos at low redshift, since these combine low metallicity and low gas surface density. At $z=0$, we observe that halos of mass $10^{12}$ $M_\odot$ (Milky Way-sized) host galaxies with $f_{\rm H_2}\sim 0.5$ within the scale radius. By restricting our attention to halos with present-day masses above this limit, we stay in the part of parameter space where the ISM is at least marginally molecule-dominated, and thus we can adopt $f_{\rm sf} \approx 1$ and $\sigma_{\rm th} \approx 0.2$ km s$^{-1}$ without making too large an error.
In contrast, $10^{11}$ $M_\odot$ halos (LMC-sized) have present-day molecular fractions $\sim 0.1$ \citep[e.g.,][]{jameson16a}. Treating them would require that we adopt a model for the time evolution of the molecular fraction, which is beyond the scope of this paper. \subsubsection{Star formation} We are interested in two diagnostic ratios from these models, which we plot in the upper and lower panels of \autoref{fig:halo_hist}. The first of these is $\dot{M}_{\rm g, acc}/\dot{M}_{*, \rm disc}$, the ratio between the rate of accretion onto and then through the disc and the rate at which that accretion flow should convert into stars as it moves through the disc; \citet{dekel14a} describe this quantity as the ``wetness factor''. If $\dot{M}_{\rm g,acc} \gg \dot{M}_{*,\rm disc}$, then the rate of mass flow onto the disc from the IGM, and through it toward the galactic centre, greatly exceeds the rate at which we expect that flow to convert into stars. Consequently, the majority of the flow will not be converted to stars before it reaches the galactic centre. It may still be ejected in outflows, but unless all of it is lost in this fashion, a substantial mass flux will reach the bulge region. Thus, an era when $\dot{M}_{\rm g,acc} \gg \dot{M}_{*,\rm disc}$ should correspond to an era when galaxies are building up their bulges. Now suppose the reverse holds, $\dot{M}_{\rm g,acc} \ll \dot{M}_{*,\rm disc}$. Taken literally this would mean that the star formation rate exceeds the gas accretion rate. Of course such a configuration cannot represent a steady state in which the rate of gas flow through the galaxy matches the rate of gas accretion onto it, and it therefore violates the assumption we made in deriving $\dot{M}_{*,\rm disc}$. To understand what occurs in this regime, it is helpful to consider what happens as a galaxy evolves, with our intuition guided by the results of more detailed time-dependent models \citep[e.g.,][]{forbes14a}.
From \autoref{fig:halo_hist} we see that at early times halos have $\dot{M}_{\rm g, acc}/\dot{M}_{*, \rm disc} \gg 1$, and that this ratio gradually decreases to $\sim 1$. When this ratio is $\sim 1$, gas only barely reaches the galactic centre before the last of it is consumed and turned into stars; all star formation therefore occurs in the disc. As the gas supply tapers off over cosmic time and the ratio tries to drop even further, accretion becomes insufficient to keep up with the rate of consumption into stars. The equilibrium between supply and consumption is easiest to maintain in the outer parts of galaxies, both because this is where the majority of the gas lands, and because this is where the star formation rate per unit area is smallest. Thus the failure of equilibrium is likely to occur inside out: less and less of the gas that is accreting onto the galaxy will be able to reach the centre before transforming into stars. This reduction in central gas surface density in turn reduces $\dot{M}_{*,\rm disc}$ compared to the value we have computed under the assumption of constant radial mass flux, thereby maintaining $\dot{M}_{\rm g,acc} \sim \dot{M}_{*,\rm disc}$. The price of this balance is that the centre of the galaxy ceases star formation, and thus quenches. Examining the upper panel of \autoref{fig:halo_hist}, we see that halos show a clear progression from bulge building at high redshift, to disc building at intermediate redshift, to central quenching near the present day. We should not put too much weight on the exact redshift range over which this transition occurs, particularly since our fitting formula for gas accretion onto halos likely overestimates the rate of cold gas flow at lower redshift.
Nonetheless, qualitatively this progression is a natural explanation for a commonly-observed phenomenon: galaxies that transition from the blue, star-forming cloud to the quenched, red sequence do so by ceasing their star formation from the inside out, after a central stellar bulge builds up \citep[e.g.,][]{fang12a, fang13a, cheung12a, genzel14a, nelson16a, belfiore17a}. Models that include radial transport of gas via gravitational instability are able to reproduce this qualitative behaviour \citep[e.g.,][]{forbes12a, cacciato12a, forbes14a, tonini16a, stevens16a}, but the analytic model we develop here allows a particularly simple and straightforward explanation for both the inside-out quenching phenomenon and the redshift at which it occurs. However, we end this section by cautioning that this simple steady state, quasi-equilibrium picture almost certainly misses some of the complications that occur in real cosmological galaxy formation. At least for halos at and slightly above the more massive end of the range we consider, which tend to quench at $z\sim 2$, both observations \citep{barro13a, barro16a, barro17a, tacchella15a, tacchella18a} and simulations \citep{zolotov15a, tacchella16a, tacchella16b} suggest that galaxies pass through a phase of ``compaction'', where the central gas surface density is driven to very high values, before finally quenching. Such compaction events are not captured in our simple steady state model, which may be more applicable to galaxies in less massive halos such as those that host the Milky Way. \subsubsection{Turbulence driving} The other diagnostic ratio of interest is $\sigma_{\rm g} / \sigma_{\rm sf}$. Recall that $1-\sigma_{\rm sf}/\sigma_{\rm g}$ is the fraction of the energy required to maintain the turbulence that comes from transport rather than from star formation feedback (\autoref{eq:sigma_sf}). Thus $\sigma_{\rm g} = 2 \sigma_{\rm sf}$ corresponds to star formation feedback and transport (gravity) contributing equally to the turbulence.
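The energy split implied by a given value of $\sigma_{\rm g}/\sigma_{\rm sf}$ can be made concrete with a short numerical sketch; the ratios used below are illustrative inputs, not outputs of our halo models:

```python
import numpy as np

# Fraction of the turbulent energy supplied by transport (gravity) is
# 1 - sigma_sf/sigma_g; the remainder comes from star formation feedback.
# The ratios below are illustrative, not outputs of the halo models.
ratio = np.array([1.0, 2.0, 5.0])          # sigma_g / sigma_sf
transport_frac = 1.0 - 1.0 / ratio         # -> [0. , 0.5, 0.8]
feedback_frac = 1.0 / ratio                # -> [1. , 0.5, 0.2]
print(transport_frac, feedback_frac)
```

At $\sigma_{\rm g}/\sigma_{\rm sf} = 2$ the two contributions are equal, matching the statement above.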
In the lower panel of \autoref{fig:halo_hist}, we show the time evolution of $\sigma_{\rm g} / \sigma_{\rm sf}$, with the regions where transport driving and feedback driving dominate the energy budget highlighted. Note that, under our simplifying assumption that the inflow rate is always non-zero, we cannot ever reach $\sigma_{\rm g} = \sigma_{\rm sf}$, corresponding to the point where transport driving ceases completely. The most obvious trend in \autoref{fig:halo_hist} is that more massive halos are further into the transport-driving regime, and spend more of their evolutionary history there. Massive galaxies, by virtue of their large accretion rates, tend to have high gas surface densities that require high velocity dispersions to maintain. Such velocity dispersions can only be maintained by gravitational power. The second clear trend is that $\sigma_{\rm g}/\sigma_{\rm sf}$ drops as we approach $z=0$. This is driven partly by a drop-off in cosmological accretion rates, which produces less gas-rich discs that require lower values of $\sigma_{\rm g}$ to remain marginally stable. It is also driven partly by galaxies transitioning from the Toomre regime to the GMC regime in their star formation, which puts a ceiling on the depletion time and thus a floor on $\sigma_{\rm sf}$. This effect is visible as the downturn in $\sigma_{\rm g}/\sigma_{\rm sf}$ below $z\approx 0.5$. Thus the qualitative picture to which we come is that the transition from transport-driven turbulence to feedback-driven turbulence depends primarily on galaxy mass, and secondarily on redshift, with transport-driving dominating at high mass and high-$z$. \subsection{Limits of model applicability} \label{ssec:limitations} We conclude this discussion by considering to what types of galaxies, or within which parts of galaxies, our model applies. As noted in \autoref{sec:model}, our model assumes that the gas inflow rate is able to self-adjust to maintain marginal gravitational stability.
The numerous simulations discussed in \autoref{ssec:theory} leave little doubt that this self-adjustment works in one direction: if the gas becomes gravitationally unstable, it will develop non-axisymmetric structures that exert torques and drive a net inflow, thereby raising the velocity dispersion and pushing the system back toward stability. However, our model also assumes the converse, that in galaxies that are gravitationally stable there will not be net radial transport. There are clearly locations where this is not true, such as the inner few kpc of the Milky Way. In this region the gas fraction is so small that the gas effectively acts like a passive tracer moving in the fixed stellar potential. Both the gas and the combined gas-star disc are Toomre-stable, but the stars are arranged in a bar that, depending on the galactocentric radius, either forces gas to shock by preventing it from flowing on non-intersecting orbits \citep[e.g.,][]{binney91a, sormani15a} or drives acoustic instabilities that are unrelated to self-gravity \citep[e.g.,][]{montenegro99a, krumholz15d, krumholz17a}. Moreover, these effects can drive gas outward as well as in. In any region where such effects are dominant, our model is not applicable. However, this is not a major limitation because regions such as the Milky Way centre contribute very little to the total gas mass or star formation budget of the Universe. The Milky Way's centre is gas-depleted to an extent that is unusual even among local spirals \citep{bigiel12a}, and is likely related to it being a ``green valley'' galaxy on the verge of quenching \citep{bland-hawthorn16b}. Even including the central molecular zone, the central few kpc of the Galaxy contain only $\sim 10\%$ of its star formation or ISM mass \citep[e.g.,][]{kruijssen14b}.
Indeed, this gas-poverty is likely the reason that there is a strong bar, since simulations show that bar formation does not take place until the gas fraction drops to well under 10\% \citep{athanassoula13a}. Beyond the Milky Way, the majority of gas in nearby spirals lies in regions where the gas and stellar surface densities and velocity dispersions are such that gas and stars are strongly coupled \citep[their Figure 5]{romeo17a}. Estimates of the fraction of galaxies that contain stellar bars at all vary from $\sim 20\%$ \citep[e.g.,][]{melvin14a, cervantes-sodi17a} to $\sim 60\%$ \citep[e.g.,][]{erwin17a}, with the higher figures largely coming from surveys capable of detecting bars below $\sim 1$ kpc in size. Thus large bars that could conceivably affect the dynamics of the majority of the ISM mass or star formation in a galaxy appear to be rare even at $z=0$, as one might expect since even a strongly-barred galaxy like the Milky Way has little of its gas or star formation in the region where the bar dominates the dynamics. Though there is significant debate in the literature about whether the bar fraction declines with redshift (e.g., \citealt{melvin14a} versus \citealt{erwin17a}), there are strong theoretical reasons to believe that bars were less prominent in the past, both due to the higher gas fractions found at $z>0$ \citep[e.g.,][]{tacconi13a} and the fact that bars take time to grow. Thus if barred regions do not dominate the ISM mass budget at $z=0$, they should be even less important in the early Universe. In summary, our model should apply to most of the ISM and most star-forming regions over most of cosmic time. We do, however, caution that it should not be applied to extremely gas-poor, bar-dominated regions like the Milky Way centre. These are better described by models that treat the stellar potential as decoupled from the gas \citep[e.g.,][]{binney91a, sormani15a, krumholz17a}. 
\section{Summary} \label{sec:summary} We present a new model for the gas in galactic discs, based on a few simple physical premises. We propose that galactic discs maintain a state of vertical hydrostatic equilibrium, marginal gravitational stability, and balance between dissipation of turbulence and injection of turbulent energy by star formation feedback and radial transport. The inclusion of both radial transport and feedback as potential energy sources is the primary new feature of our model, and despite the apparent simplicity of this addition, it yields a dramatic improvement in both predictive power and agreement with observation compared to simpler equilibrium models. We find that star formation alone is able to maintain a velocity dispersion of $\sigma_{\rm sf} \approx 6 - 10$ km s$^{-1}$, with the exact value depending on the gas fraction, the thermal velocity dispersion, and the fraction of the interstellar medium in the star-forming molecular phase. In galaxies where the gas surface density is low, this is sufficient to maintain energy and hydrostatic balance, and there is no net radial flow. However, in many observed galaxies this velocity dispersion is insufficient to keep the gas in a state of marginal gravitational stability. In this case, the instability produces spiral structures and clumps that exert non-axisymmetric torques, leading to a net mass flow inward. The inflow releases gravitational potential energy, which manifests as non-circular, turbulent motions in the transported gas. The fraction of the turbulent power that originates from this process rather than from star formation feedback is $1 - \sigma_{\rm sf}/\sigma_{\rm g}$, where $\sigma_{\rm g}$ is the gas velocity dispersion. This fraction is small in quiescent star-forming galaxies at the present cosmic epoch, but is larger in both starbursts and high-redshift galaxies. 
The model we derive from this simple picture shows excellent agreement with a range of observational diagnostics, including the star formation law for both resolved and unresolved systems and the relationship between galaxies' star formation rates and velocity dispersions. It is also consistent with the limited data available on the observed rates of radial inflow in nearby galaxies. The agreement holds across a wide range of galaxy types and masses, from nearby dwarfs with star formation rates $\lesssim 0.1$ $M_\odot$ yr$^{-1}$, to starbursts and high-redshift discs with star formation rates $\gtrsim 100$ $M_\odot$ yr$^{-1}$. We also predict that high gas inflow rates should be measurable in nearby starburst galaxies, whose kinematics have yet to be analysed for inflow. In contrast, we show that models that neglect either radial transport or star formation feedback fail at either high or low star formation rate, or in some cases both. Our model provides a natural explanation for the cosmic epochs at which galaxies build up bulges and discs, and at which they quench. At high redshift, galaxies' mass transport rates naturally exceed their star formation rates; this is a natural consequence of the high velocity dispersions found in high-redshift galaxies, and the stronger scaling between velocity dispersion and transport rate ($\dot{M} \propto \sigma_{\rm g}^3$) than between velocity dispersion and star formation rate ($\dot{M}_* \propto \sigma_{\rm g}$). As a result, they tend to move mass inward toward a bulge. As the density of the Universe diminishes, accretion rates decline, and transport rates decline with them, faster than star formation rates. This leads to a configuration where most star formation occurs in galaxies' discs. Finally, once the mass transport rate is smaller than the star formation rate, gas does not reach galaxy centres at all, and the centres quench, explaining the common observation that quenching tends to occur inside out.
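The scaling argument above can be sketched numerically; the normalisations are arbitrary (set to unity here) and the dispersion values are purely illustrative:

```python
import numpy as np

# Transport rate scales as sigma_g^3 while the star formation rate scales
# as sigma_g, so their ratio grows as sigma_g^2; normalisations are set
# to unity here and the dispersion values are purely illustrative.
sigma_g = np.array([1.0, 2.0, 4.0])
mdot_transport = sigma_g**3
mdot_star = sigma_g
print(mdot_transport / mdot_star)          # grows as sigma_g^2
```

The quadratic growth of this ratio with $\sigma_{\rm g}$ is what favours inward mass transport over in-situ star formation at high redshift, where dispersions are large.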
In future work we plan to apply this model to radially-dependent models of galaxy formation, such as those of \citet{forbes12a} and \citet{forbes14a}. Such an application promises to yield new insights into the origin of the radial structure of galactic discs, and the evolution of this structure over cosmic time. We also plan to test the model against cosmological simulations, where inflow rates are determined directly from the hydrodynamics (Burkhart et al.~2018, in preparation). \section*{Acknowledgements} MRK acknowledges support from the Australian Research Council's \textit{Discovery Projects} funding scheme (project DP160100695). BB is supported by the NASA Einstein Postdoctoral Fellowship and an Institute for Theory and Computation Fellowship. JCF is supported by an Institute for Theory and Computation Fellowship. We thank S.~Oh for helpful discussions of bar surveys, and the referee, A.~Dekel, for helpful comments. \bibliographystyle{mn2e}
\section{Acknowledgement} This work was partially supported by National Natural Science Foundation of China (No. 61906218), Guangdong Basic and Applied Basic Research Foundation (No. 2020A1515011497) and Science and Technology Program of Guangzhou (No. 202002030371). \section{Introduction} Recent top-performing approaches to solving video understanding tasks are based on supervised learning with a large amount of labeled data for training. Due to the strong data fitting capacity of deep convolutional neural networks, competitive performance can be achieved for recognizing actions in videos~(Carreira et al.~\citeyear{I3D};~\citealt{zhang2021morphmlp}). One key factor behind this success may be the strong correlation between action classes and objects/backgrounds, known as representation bias~(Li et al.~\citeyear{li2018resound};~\citealt{choi2019can}). For example, the action \textit{Riding Bike} could be recognized by the presence of the object \textit{Bike} and the action \textit{Swimming} by the scene \textit{water}. Such representation bias in action datasets may provide shortcuts to solve the data-label fitting problem. Nevertheless, feature representations learned without proper motion modelling may be biased toward static visual cues, which limits the generalization ability to recognize or detect actions requiring temporal reasoning. \input{figures/1_Teaser} To verify this issue, we first pre-trained the R3D network~(Hara et al.~\citeyear{r3d}) for feature extraction in two different ways. The first one is supervised learning by using manual annotations on the UCF101 dataset~(Soomro et al.~\citeyear{ucf101}), while the second one is our self-supervised method trained on the same dataset by mitigating the representation bias. Then, the learned feature representations are evaluated on the HMDB51 dataset~\cite{hmdb51}.
The downstream task is defined as simple temporal-order (natural/shuffled) classification as illustrated in Fig.~\ref{fig:teaser} to assess the generalization ability of the learned features. Fig.~\ref{fig:teaser}(a) shows that motion information is suppressed while static visual cues are maintained by video shuffling. Without dealing with the problem of representation bias, the supervised method performs worse than ours (\textbf{73.5\% vs. 79.1\%} \greenp{5.6$\uparrow$}) in the downstream task of temporal-order classification (different from the recognition task used for pre-training). This verifies our hypothesis that the generalization ability may degrade due to the misleading guidance by static visual cues. In this paper, we propose a novel method to suppress static visual cues (S$^2$VC) for self-supervised video representation learning, such that the representation bias is mitigated. Since the pixel space of each video frame is high-dimensional and highly complicated, directly extracting static cues from it is not robust. To estimate the distribution of the pixel space, each video frame is encoded to obtain a latent vector under multivariate standard normal distribution by using normalizing flows (NF). However, when constrained to a specific video, each latent variable cannot be simply considered as one-dimensional (univariate) standard normal. We model static factors in a video as a random variable such that the conditional distribution of each latent variable becomes standard normal with shifting and scaling. The standard deviation of the conditional distribution, which reflects the correlation between latent variables and static factors, is then empirically estimated to select static cues. Based on probabilistic analysis, static cues are suppressed to generate motion-preserved videos by the invertibility of the NF model. Such generated videos are treated as pseudo positives for contrastive learning to mitigate the representation bias w.r.t.
static visual cues. The contributions of this work are three-fold: \emph{i}. We develop a novel method to suppress static visual cues (S$^2$VC) via normalizing flows for self-supervised video representation learning, such that the problem of representation bias is mitigated with improved generalization ability. \emph{ii}. Based on probabilistic analysis, static cues are recognized and suppressed to generate motion-preserved videos for self-supervised pre-training. \emph{iii}. Extensive experiments with quantitative and qualitative evaluation demonstrate the effectiveness of our method on various downstream tasks. \section{Related Work} \noindent\textbf{Self-supervised Video Representation Learning} aims at learning visual representations without using manually-annotated labels. Existing methods for video representation learning can be divided into two categories. The first one is to design pretext tasks, in which pseudo labels are automatically generated from videos for training. Representative methods along this line include predicting rotation~\cite{jing2018self}, cloze~\cite{st_cloze}, clip order~(Misra et al.~\citeyear{shuffle_and_learn};~\citealt{lee2017unsupervised, VCOP}), playback speed~(\citealt{speednet}; Wang et al.~\citeyear{pace};~\citealt{PRP,RSPNet}) and so on. The second category is based on contrastive learning which has recently achieved great success in the image domain~\cite{moco,moco_v2,SimCLR}. The key idea is to train a feature extractor that makes a training sample similar to its generated positives and dissimilar to its negatives in the embedding space. Existing methods have been proposed to generate positive pairs by video clips sampled from the same video~\cite{CVRL,wang2021enhancing,lin2021learning}, or codes from the same position of adjacent frames~(Han et al.~\citeyear{DPC},~\citeyear{MemDPC}). 
Since additional modalities are available in videos, positive pairs can also be determined by audio~(\citealt{owens2018audio,XDC};~Korbar et al.~\citeyear{korbar2018cooperative}), text~\cite{sun2019videobert}, or optical flow~(Han et al.~\citeyear{han2020self}). Though existing methods show improved performance for downstream tasks, they may still be biased toward static visual cues like background or non-moving objects. To solve this problem, this paper proposes to generate motion-preserved videos by normalizing flows for less-biased representation learning. \input{figures/3_1_PPL} \noindent\textbf{Flow-based Generative Model} is one of the widely used approaches for data generation, developed on a firm probabilistic foundation~(\citealt{INN}; Dinh et al.~\citeyear{nice}; Dinh et al.~\citeyear{realnvp}). It builds on a series of invertible and differentiable functions that transform the highly-complicated raw data distribution into the simple and interpretable standard normal distribution. This transforming sequence is called normalizing flows (NF) and serves as the foundation of invertible neural networks. In recent years, NF has been successfully deployed in many applications including image generation~\cite{glow}, compression~\cite{xiao2020invertible}, colorization~\cite{cINN}, adversarial attack~(Dolatabadi et al.~\citeyear{advflow}), minimally invasive surgery~\cite{medical_app}, etc. To the best of our knowledge, this work is the first to suppress static cues in videos by using NF for self-supervised learning. \section{Methodology} The objective of our proposed method is to mitigate the representation bias brought by the strong correlation between actions and static visual cues, such that the learned features can be better generalized to different kinds of downstream tasks. The rationale is to perform probability-based video transformations that preserve motion information but suppress static visual cues (S$^2$VC) for videos.
In the following subsections, we first introduce the overall architecture of the proposed method. Then, details are given to elaborate the idea of the proposed S$^2$VC for motion-preserved video generation via normalizing flows~(NF). At last, we present the way to integrate the novel S$^2$VC with existing self-supervised methods for video representation learning. \subsection{Overall Architecture}\label{architect} The training pipeline of our method is shown on the left of Fig.~\ref{fig:framework}. For a given unlabelled input video $V$, we start with two random augmentations and get $\hat{V}=\hat{s}(V)$ and $\widetilde{V}=\widetilde{s}(V)$ respectively, where $\hat{s}$ and $\widetilde{s}$ are randomly sampled from the basic data augmentation set $S$, which includes, for example, random cropping, random horizontal flipping, color jittering and Gaussian blur. One of the randomly augmented videos $\widetilde{V}$ is used to generate the motion-preserved video $\widetilde{V}_p$ by suppressing static visual cues via normalizing flows. To mitigate the computational cost, a spatial down/up-sampling process is performed before/after the flow-based generative model. The information loss is compensated by the residual video. After that, $\hat{V}$ and $\widetilde{V}_p$ are fed into the 3D backbone $F$ for feature extraction to obtain $v = F(\hat{V})$ and $v_p = F(\widetilde{V}_p)$. The feature extractor $F$ is learned by minimizing the distance between $v$ and $v_p$, and maximizing the distance between $v$ and other pseudo negatives from the momentum dictionary, whose features are extracted by the momentum encoder $F'$. \subsection{Suppressing Static Visual Cues} \label{sec:adv_attack} The proposed method for suppressing static visual cues is illustrated on the right of Fig.~\ref{fig:framework}. For an input video, each frame is first encoded to obtain latent variables under standard normal distribution by normalizing flows (NF).
Then, the motion-preserved video is generated by suppressing the less-varying latent variables (static cues) along time. Details on these two steps are provided as follows. \noindent \textbf{Encoding Video Frames via Normalizing Flows.} Denote the vectorized frames in the input video as $X_1, \cdots, X_L \in \mathbb{R}^{d}$, where $L$ is the number of frames and $d$ is the product of image height, width and channels. These $d$-dimensional vectors can be considered as a sequence of observations for a random vector $X$ with the probability density $p_X$. Since the dimension of the random vector $X$ is very high, it is intractable to directly estimate the density $p_X$ correctly. Moreover, $p_X$ is highly complicated due to variations like camera motions and illumination changes in videos. Without an accurate estimate of the data distribution, it is not robust to extract static cues directly from the raw observed data. As a result, we propose to estimate the density $p_X$ of the high-dimensional random vector $X$ by normalizing flows (NF). The idea of NF~(Kobyzev et al.~\citeyear{NF_review}) is depicted on the right of Fig.~\ref{fig:framework}. A sequence of simple invertible transformations $f_1, \cdots, f_k$ (e.g., affine coupling and channel-wise permutation/convolution) maps $X$ to the latent random vector $Z$, which has the same dimension as $X$. Denote the composition function of $f_1, \cdots, f_k$ as $f$, i.e., $f = f_k \circ \cdots \circ f_1$. The mapping $f$ from $X \in \mathbb{R}^{d}$ to $Z \in \mathbb{R}^{d}$ is invertible and differentiable. By using $f$, $X$ from the highly complicated distribution can be transformed to $Z$ in a straightforward predefined distribution such as multivariate standard normal.
To determine the parameters $\theta$ in the mapping function $f$, the density $p_X$ is rewritten by the change-of-variables rule as, \begin{equation} \begin{aligned} p_X(X) = p_Z\left( f_\theta(X) \right) \left| \det (D f_\theta) (X) \right| \end{aligned} \end{equation} where $p_Z$ is the probability density of the latent random vector $Z$ and $\det(D f_\theta)(X)$ denotes the determinant of the Jacobian matrix of partial derivatives of $f_\theta$ over $X$. Given an image dataset $\mathcal{D}_{\textrm{NF}}$ (e.g., ImageNet), the model parameters $\theta$ are learned by maximizing the log-likelihood as follows, \begin{equation} \begin{aligned} \max_\theta \mathbf{E}_{X \sim \mathcal{D}_{\textrm{NF}}} \left( \log p_Z\left( f_\theta(X) \right) + \log \left| \det (D f_\theta) (X) \right| \right) \end{aligned} \end{equation} where $\mathbf{E}$ is the mathematical expectation. In our method, the predefined density $p_Z$ is set to multivariate standard normal as in~(Dinh et al.~\citeyear{realnvp}), i.e., $Z \sim \mathcal{N}(\mathbf{0}, I)$, where $\mathbf{0}$ is a $d$-dimensional zero vector and $I$ is the $d \times d$ identity matrix. With the pre-trained flow model, the vectorized frames $X_1, \cdots, X_L$ are mapped into the latent space to obtain $Z_1, \cdots, Z_L$, which are used to detect the temporally-varying patterns and extract static cues. \noindent \textbf{Motion-preserved Video Generation.} Since the latent vector $Z$ follows the $d$-dimensional standard normal distribution $\mathcal{N}(\mathbf{0}, I)$, latent variables in $Z$ are independent of each other. Thus, we propose to analyze each latent variable $Z^i, i \in \{ 1, \cdots, d \}$ in $Z$ to identify static cues separately. If $Z^i$ is completely random without any other information given in advance, the distribution of each latent variable $Z^i$ is one-dimensional (univariate) standard normal, i.e., $Z^i \sim \mathcal{N}(0, 1)$.
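As an illustration of the two equations above, the following minimal sketch implements a one-layer flow with an element-wise affine map; this toy map and its parameters $(a, b)$ are stand-ins for trained coupling layers, not part of our actual model:

```python
import numpy as np

# Toy one-layer flow: f_theta(x) = a*x + b element-wise with a > 0, a
# stand-in for the composition of coupling layers; (a, b) are assumed,
# untrained values rather than parameters of the actual model.
a, b = 2.0, 0.5

def f(x):                                  # forward map x -> z
    return a * x + b

def f_inv(z):                              # exact inverse z -> x
    return (z - b) / a

def log_px(x):
    # Change of variables: log p_X(x) = log p_Z(f(x)) + log|det(Df_theta)(x)|,
    # with p_Z multivariate standard normal and det(Df_theta) = a^d here.
    z = f(x)
    log_pz = -0.5 * np.sum(z**2 + np.log(2.0 * np.pi))
    return log_pz + x.size * np.log(abs(a))

x = np.array([0.3, -1.2, 0.7])
assert np.allclose(f_inv(f(x)), x)         # invertibility of the flow
print(log_px(x))                           # training maximises this over theta
```

A real flow replaces the scalar affine map with stacked coupling and permutation layers, but the likelihood computation has exactly this structure.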
Nevertheless, when the latent variable $Z^i$ is constrained to be in a certain video, the completely random assumption is not valid. We regard the static factors (e.g., background, scene) inherent in the input video, which affect the distribution of each $Z^i$, as a random variable $Y$. For selection of static cues, the objective is to determine the density $p_{Z^i|Y}$ of $Z^i$ conditioning on $Y$. Let the dependence between $Z^i$ and $Y$ be modelled by the correlation coefficient $\rho_i$. To make the marginal density $p_{Z^i}$ standard normal, the joint density $p_{Z^i,Y}$ is assumed to be two-dimensional (bivariate) normal for maximum entropy. Denote $(Z^i,Y) \sim \mathcal{N}(0, \mu, 1, \sigma^2, \rho_i)$, where $\mu, \sigma^2$ are the mean and variance of $Y$ respectively. According to properties of the normal conditional distribution~\cite[pp.~268--269]{probability_ross_textbook}, the conditional density $p_{Z^i|Y=y}$ for a given value of $Y = y$ is still normal and can be written as, \begin{equation}\label{prob_cond} \begin{aligned} (Z^i|Y=y) \sim \mathcal{N}(\frac{1}{\sigma} \rho_i (y - \mu), 1-\rho^2_i) \end{aligned} \end{equation} This equation implies that the latent variable $Z^i$ conditioned on an input video can be considered a standard normal random variable with shifting and scaling. With the condition $Y = y$, the mean and variance are changed to $ \rho_i (y - \mu) / \sigma$ and $1-\rho^2_i$, respectively. For a latent variable $Z^i$ strongly correlated with the static factors represented by $Y$, the correlation $\rho_i$ between $Z^i$ and $Y$ is large in magnitude. According to eq.~\eqref{prob_cond}, this means the variance $1-\rho^2_i$ is small for a latent variable $Z^i$ encoding static cues. Notice that the variance $1-\rho^2_i$ is independent of the value of $Y=y$. The variance or standard deviation (STD) can be estimated empirically from the observations $Z_{1}^i,\dots,Z_{L}^i$ of the latent variable $Z^i$ in a video.
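The conditional law in eq.~\eqref{prob_cond} can be checked empirically by sampling; the values of $\rho_i$, $\mu$, $\sigma$ and the conditioning point $y_0$ below are purely illustrative assumptions:

```python
import numpy as np

# Illustrative parameters (not fitted values): correlation rho_i, and the
# mean/STD of the static-factor variable Y.
rng = np.random.default_rng(0)
rho, mu, sigma = 0.9, 1.0, 2.0

# Sample (Z^i, Y) from a bivariate normal whose Z^i-marginal is standard
# normal: Z^i = rho*(Y - mu)/sigma + sqrt(1 - rho^2) * independent noise.
y = rng.normal(mu, sigma, size=200_000)
z = rho * (y - mu) / sigma + np.sqrt(1.0 - rho**2) * rng.normal(size=y.size)

# Condition on Y ~= y0 and compare with the closed form:
# mean = rho*(y0 - mu)/sigma, std = sqrt(1 - rho^2).
y0 = 1.5
sel = np.abs(y - y0) < 0.02 * sigma
print(z[sel].mean(), z[sel].std())
```

The empirical conditional mean and STD of the selected samples match the shifted-and-scaled form above, and the STD is indeed independent of $y_0$.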
Denote the STD of the conditional density $p_{Z^i|Y=y}$ as $\sigma_{Z^i|Y}$. We propose to select the set $C_s$ of latent variables with small empirical STDs as static cues, i.e., \begin{equation}\label{select_sc} \begin{aligned} C_s = \{i | \sigma_{Z^i|Y} \approx \textrm{STD}(Z_{1}^i, \dots, Z_{L}^i) < \alpha \} \end{aligned} \end{equation} where $\alpha$ is the threshold hyperparameter used to decide whether the $i$-th latent variable is selected or not. Let the latent vector that preserves motion information but suppresses static cues be $Z_p$. For $i \in C_s$, $Z_p^i$ is set to $\rho_i (y - \mu) / \sigma$ with the highest probability density, i.e., the mean of the conditional distribution $Z^i|Y=y$ as derived in eq.~\eqref{prob_cond}. In this way, the variance of $Z_p^i$ is equal to 0 for minimum (zero) information entropy to suppress static cues. Since the marginal density $p_Y$ takes the maximum value at $Y = \mu$, we set $Z^i_p = 0$ for $i \in C_s$ by substituting $y = \mu$ into $\rho_i (y - \mu) / \sigma$. For $i \notin C_s$, motion cues are preserved by setting $Z^i_p = Z^i$. Due to the invertibility of the NF model $f_\theta$, each frame in the motion-preserved video is generated by, \begin{equation} X_p = f_\theta^{-1}(Z_p) \end{equation} The pseudo-code of our method is given in the supplementary material. Other strategies to suppress static cues are also presented for comparison in the ablation study. \noindent \textbf{Discussion on Generative Models.} \emph{i}. The generative adversarial network (GAN) has achieved success in the literature by jointly training a generator and a discriminator in an adversarial manner~\cite{GAN,Bi_cGAN}. In most existing methods based on GAN, there is no encoder to transform the image modality into the latent space. Though the generator in GAN could be used for encoding, the generated results lack an explicit probabilistic interpretation.
Hence, it is difficult, if not impossible, to suppress static cues with the GAN approach. \emph{ii}. Different from GAN, the variational auto-encoder (VAE)~\cite{VAE,cVAE} can encode an input image $X$ into a latent vector $Z$ under a multivariate normal distribution $\mathcal{N}(\bm{m}_X, \textrm{diag}(\bm{\sigma}_X^2))$. The mean vector $\bm{m}_X$ and standard deviation vector $\bm{\sigma}_X$ are determined by learnable parameters and the input image $X$. The latent vector $Z$ is obtained by randomly sampling from $\mathcal{N}(\bm{m}_X, \textrm{diag}(\bm{\sigma}_X^2))$ and can be written as $Z = \bm{m}_X + \bm{\sigma}_X \odot \bm{\epsilon}$, where $\bm{\epsilon} \sim \mathcal{N}(\bm{0},I)$ and $\odot$ is the element-wise product. Due to this sampling randomness, the encoded vectors $Z_1, \cdots, Z_L$ of a video may fail to preserve the temporal continuity of the input frames when using a VAE. Moreover, $\bm{m}_X$ and $\bm{\sigma}_X$ of the latent distribution depend on the input image $X$, so static cues cannot be directly selected by eq.~\eqref{select_sc}. \emph{iii}. With NF, the encoded latent vector $Z$ has an explicit probabilistic interpretation: it follows the multivariate standard normal distribution $\mathcal{N}(\bm{0},I)$, independent of the input image $X$. Thanks to the differentiability of the NF model, the encoded latent vectors $Z_1, \cdots, Z_L$ preserve continuity over time. Moreover, as experimentally shown in~(Dinh et al.~\citeyear{realnvp}; Kingma et al.~\citeyear{glow}), the latent space of NF encodes semantically meaningful concepts (e.g., smile, blond hair, gender on face datasets). Because of these advantages, the flow-based approach, instead of GAN or VAE, is used to suppress static visual cues in our method. 
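The selection-and-suppression step (eq.~\eqref{select_sc} followed by zeroing the selected dimensions) can be sketched in a few lines; the per-frame latent vectors and the threshold are illustrative, and the flow encode/decode calls are omitted.

```python
import statistics

def suppress_static(latents, alpha=0.5):
    """latents: list of L per-frame latent vectors (lists of floats).
    Returns the motion-preserved latents Z_p and the static index set C_s.
    Dimensions whose temporal STD falls below alpha are set to 0 (the
    conditional mean under Y = mu); the rest are kept unchanged."""
    dims = len(latents[0])
    c_s = {i for i in range(dims)
           if statistics.stdev([frame[i] for frame in latents]) < alpha}
    z_p = [[0.0 if i in c_s else frame[i] for i in range(dims)]
           for frame in latents]
    return z_p, c_s
```

In the full pipeline each $Z_p$ would then be mapped back through $f_\theta^{-1}$ to produce the motion-preserved frames.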
\subsection{Integrated with Contrastive Learning}\label{suppress} The proposed S$^2$VC method is integrated into the framework of contrastive learning to obtain video representations that are less biased towards static cues. In this work, positive pairs are formed by the generated motion-preserved videos and their corresponding inputs for self-supervised pre-training. Given a video dataset $\mathcal{D}$ with $N$ samples, $\mathcal{D} = \{V^1, V^2, ... , V^N\}$, the loss function is defined as: \begin{equation}\label{eqn:cl_loss} \begin{footnotesize} \begin{aligned} \mathcal L =-\mathbf{E}\left[\log\frac{\exp(v^{(i)}\cdot v_{p}^{(i)}/\tau)}{ \exp(v^{(i)}\cdot v_{p}^{(i)}/\tau) + \sum_{j \neq i}\exp(v^{(i)}\cdot v_{p}^{(j)}/\tau)}\right] \end{aligned} \end{footnotesize} \end{equation} where $\tau$ denotes the temperature parameter for model learning with hard negatives~\cite{instance_discrimination}. In each positive pair, the motion-preserved video shares the same motion information as the original one but with static cues removed. By minimizing the loss in eq.~\eqref{eqn:cl_loss}, the similarity of features within each positive pair is maximized. Thus, the proposed method learns discriminative video representations that simultaneously preserve motion information and suppress static cues. As an efficient and effective baseline, MoCo~\cite{moco} is employed for contrastive learning. Furthermore, our method can serve as a powerful data augmentation technique and be easily integrated with other self-supervised learning methods, e.g., DPC in~(Han et al.~\citeyear{DPC}). \section{Experiments} \input{tables/4_1_1_sota} \subsection{Datasets and Implementation Details} The datasets used for the experiments include UCF101~(Soomro et al.~\citeyear{ucf101}), HMDB51~\cite{hmdb51}, Kinetics-400~\cite{k400}, and its subset Kinetics-200~\cite{s3d}. We use the flow model described in AdvFlow~(Dolatabadi et al.~\citeyear{advflow}) for video frame encoding and generation. 
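For reference, the objective in eq.~\eqref{eqn:cl_loss} is the standard InfoNCE loss; a minimal per-anchor sketch over plain feature vectors (not the authors' implementation, all names illustrative):

```python
import math

def info_nce_loss(anchor, positive, negatives, tau=0.07):
    """InfoNCE loss for one anchor: -log softmax similarity of the positive
    pair against the positive plus all negatives (dot-product similarity)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    logits = [dot(anchor, positive) / tau] + [dot(anchor, n) / tau
                                              for n in negatives]
    # log-sum-exp for numerical stability
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return -(logits[0] - lse)
```

The loss is near zero when the anchor is far more similar to its positive than to any negative, and large otherwise, which is what drives the features of each positive pair together.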
Two backbone networks, i.e., S3D~\cite{s3d} and R3D-18~(Hara et al.~\citeyear{r3d}), are evaluated for contrastive learning. Unless otherwise specified, we employ MoCo with S3D as the baseline and integrate our S$^2$VC with MoCo (optimized by eq.~\eqref{eqn:cl_loss} with $\tau$ set to 0.07). For a fair comparison, we set the input clip length and resolution to 32 and $128^2$ for S3D, and 16 and $112^2$ for R3D. We apply consistent augmentation to each frame in a video clip. The batch size is set to 128 and the learning rate is initialized to 1e-3. The networks are pre-trained for 500 epochs on UCF101, 200 on K200, and 100 on K400, respectively. Please refer to the supplementary material for more implementation details. \subsection{Action Recognition} We conduct self-supervised pre-training and evaluate under two settings, i.e., linear probe and finetuning. For evaluation, following the common practice in~(Carreira et al.~\citeyear{I3D}; \citealt{wang2021multi}), we sample each video with a half-overlapping sliding window and apply the ten-crop test to each video clip. Then, we average the predicted accuracy as our validation result. Comparisons with the state of the art are reported in Table~\ref{tab:sota_action_recognition_cmp}. \noindent\textbf{Linear Probe.} Following SimCLR~\cite{SimCLR}, we fix the weights of the pre-trained 3D CNNs and train a linear classifier after the last conv layer for 100 epochs. We observe from the last two columns of Table~\ref{tab:sota_action_recognition_cmp} that our method significantly surpasses existing works that use only the RGB modality for pre-training. Compared with MemDPC~(Han et al.~\citeyear{MemDPC}), the improvement by our method is up to 11.9\% on UCF101 and 6.0\% on HMDB51. \noindent\textbf{Finetune.} We finetune the overall model for 500 epochs and show the results in Table \ref{tab:sota_action_recognition_cmp}. 
When the proposed S$^2$VC is introduced into MoCo, with the same backbone S3D and the same pre-training dataset UCF101, it brings 5.2\% and 8.6\% improvements on UCF101 and HMDB51, respectively. Due to limited computational resources, the S3D is pre-trained on K200 for only 200 epochs. The results obtained under this setting are already better than those of CBT~\cite{sun2019learning} pre-trained on the larger-scale K600+, and comparable with SpeedNet~\cite{speednet} pre-trained on K400. With the R3D pre-trained on UCF101, our method also achieves competitive performance and outperforms VCOP~\cite{VCOP} pre-trained on the same dataset. Though CoCLR~(Han et al.~\citeyear{han2020self}) obtains higher accuracy than ours, it requires the additional optical flow modality, complementary to RGB, for pre-training. Compared with IMRNet~\cite{IMRNet}, which uses multiple modalities in compressed videos for pre-training, our method achieves better results. The performance gains of our method over IMRNet are 4.4\% and 5.5\% on the two datasets, respectively, using the same backbone and pre-training dataset K400. \subsection{Video Retrieval} In this section, our method is evaluated on the video retrieval task. Following the setting in~\cite{VCOP}, we use the pre-trained 3D CNN with fixed weights as the feature extractor. The training set is defined as the \textit{gallery} and each 16-frame video clip from the test set is used as a \textit{query}. If the category of the query appears among the retrieved $\mathcal{K}$-nearest neighbors, we record it as a \textit{hit} at test time. Accuracy comparisons with other self-supervised learning methods on UCF101 and HMDB51 are reported in Tables \ref{tab:recallatk_ucf101} and \ref{tab:recallatk_hmdb51}. When using S3D as the backbone, combining S$^2$VC with MoCo brings a 4.1\% improvement in Top1 accuracy and a 7.5\% improvement in Top5 accuracy on the UCF101 dataset. 
For HMDB51, the Top1 and Top5 gains are 1.7\% and 5.7\%, respectively. Additionally, our method outperforms competing state-of-the-art methods, e.g., it is 6.2\% better than BE~\cite{wang2020removing} under the same settings on HMDB51. These results validate that more discriminative and generalizable representations are extracted by our method. \input{tables/4_2_1_UCF101_Retrieval} \input{tables/4_2_2_HMDB51_Retrieval} \subsection{Ablation Study} \input{figures/4_4_2_AlphaAblation} \input{tables/4_3_1_Strategy} \textbf{Motion Threshold $\alpha$.} In our method, $\alpha$ in eq.~\eqref{select_sc} is an important hyperparameter that determines how many static cues are suppressed. Retrieval results for different $\alpha$ are shown on the left of Fig.~\ref{fig:alpha_discussion}. These results show that as $\alpha$ increases, the retrieval accuracy first increases and then decreases after reaching a peak at 0.5 (the default value in this work). Interestingly, when we set $\alpha$ to 0.8, which means only 6.5\%/5.7\% of the motion-related latent variables are preserved on UCF101/HMDB51, the results are still better than those for small $\alpha$. This indicates the importance of suppressing sufficient static cues while keeping conspicuous motion information for action recognition. We also visualize the generation results for different $\alpha$ on the right of Fig.~\ref{fig:alpha_discussion}. If $\alpha$ is too small, static cues are insufficiently suppressed; in contrast, if $\alpha$ is too large, useful action cues may also be suppressed. \noindent\textbf{Strategy for Suppressing Static Cues.} In this experiment, we evaluate different strategies for suppressing static cues. First, we compare with a simple thresholding frame difference (TFD) method based on pixel-level operations. Similar to our method, each frame in a video is reshaped to $\mathbb{R}^{d}$ in the TFD. 
Then, the top $20\%$ of pixels with the largest STD along the time dimension are preserved (approximately equal to the amount of motion cues preserved by the proposed S$^2$VC when $\alpha=0.5$). Besides the TFD, three variants of the proposed S$^2$VC for determining the suppressed latent variables are evaluated: (a) set to random noise: set the latent variables in $C_s$ to normal noise for the first frame and keep them unchanged for the other frames; (b) shuffle - in clip: randomly shuffle each latent variable in $C_s$ between frames of a video; (c) shuffle - in frame: randomly shuffle the latent variables in $C_s$ within a frame. For a fair comparison, we pre-train all the methods with the S3D on UCF101 for 100 epochs. Retrieval results are shown in Table \ref{tab:strategy}. We make the following observations: \emph{i}. The in-clip shuffle brings little gain over the baseline model, since the shuffled channels already have similar values (small standard deviation). \emph{ii}. All methods that strongly disturb the static cues show great improvement over the baseline model. \emph{iii}. TFD surpasses the MoCo baseline on UCF101 but performs worse on HMDB51. This indicates that simply detecting motion from pixel-level differences is not robust and generalizes poorly to a new dataset. \emph{iv}. Setting all latent variables in $C_s$ to zero performs best. Recall that zeroing the suppressed latent variables corresponds to the minimum information entropy at the highest probability density; the other suppression strategies either do not take the most likely value or introduce randomness that may carry static information. As a result, we use S$^2$VC (set to 0) as the default. \subsection{Analysis on Suppressing Effects} \noindent\textbf{Intra-class similarity of different samples.} Given $\alpha=0.5$, we investigate the visual similarity of different samples in the same action category. Specifically, we randomly select ten classes from UCF101/HMDB51 and sample a subset of video clips for each category. 
Then, we measure the cosine similarity of different samples frame-by-frame using the latent vector $Z$ and the motion-preserved vector $Z_p$, respectively. As frames in the same class may share a similar scene but differ considerably in their moving regions, the cosine similarity is smaller if the latent vectors contain fewer static visual cues. The decreased similarity of $Z_p$ compared with $Z$, shown in Fig.~\ref{fig:cos_sim}, is aligned with this analysis. This phenomenon demonstrates that our method significantly reduces the intra-class similarity of static objects/backgrounds, ensuring that the generated motion-preserved videos are less biased towards static cues. We also observe that different categories show widely varying ratios of $Z$ vs. $Z_p$, which means action classes differ in their similarity of static cues. \input{figures/4_4_3_CosSim} \input{figures/4_4_4_Latent_Semantic} \noindent\textbf{Visualization of the suppressing quality.} More intuitively, we compare the generation of motion-preserved videos under minor/intense camera motion. As shown in Fig.~\ref{fig:quality}, the generation is robust to noise such as camera movement and is able to focus on the most salient motion by suppressing static cues in the latent space encoded by the NF. \subsection{Analysis on Performance Improvement} \noindent\textbf{Relative Improvement over Static Classification.} As our method suppresses static visual cues, it may negatively impact classes that are highly correlated with non-moving objects or backgrounds. To study the correlation between the temporal dependency of actions and the performance gain brought by our method, we plot the class-level relative performance improvement in Fig.~\ref{fig:relative}. In this experiment, we first train a randomly initialized S3D baseline using static videos (stacks of copied images). 
Since the stacked duplicate images provide no temporal information, the performance of this baseline model indicates how much a category depends on static visual cues. The plot shows that although there exist some classes for which our method leads to worse results, the overall performance is better. Moreover, there is a clear negative relationship between the relative gain and the baseline performance, which suggests that the superiority of our method mainly comes from precisely identifying actions with high temporal dependency. We also find that our model shows a stronger negative correlation than MoCo. \input{figures/5_1_RelativePerformance} \input{figures/5_2_Heatmap} \noindent\textbf{Salient Regions Compared with Optical Flow.} In this experiment, we visualize the energy of the last convolutional layer with the Class-Activation Map (CAM) technique~\cite{zhou2016learning}. We sample from HMDB51 instead of the UCF101 used for pre-training to show generalizability. We also visualize the optical flow for reference, which indicates the significant motion cues in the video. The results are depicted in Fig.~\ref{fig:S$^2$VC_heatmap}. From these samples, we find a strong correlation between the highly activated regions and the dominant mover in the scene. The network pre-trained with S$^2$VC tends to focus more on the moving objects. For example, in the second row, only our method concentrates on the two boys dribbling the ball, one on each side. \section{Conclusion} In this paper, we present a novel method to suppress static visual cues (S$^2$VC), which mitigates the representation bias towards less-moving objects/backgrounds in videos. Due to the difficulty of estimating the pixel-level distribution, video frames are encoded by normalizing flows into a latent space under a multivariate standard normal distribution. 
Then, latent variables that vary little over time are selected as static cues based on probabilistic analysis and suppressed to generate motion-preserved videos. The proposed S$^2$VC is integrated with the self-supervised learning framework to extract video representations that focus more on \textit{motion cues}. Extensive experiments with visualization validate that the features learned by our method pay more attention to moving objects and generalize better to different downstream tasks. \section{Pseudo Code} The pseudo-code of the proposed method for generating motion-preserved videos by suppressing static visual cues is presented in Algorithm~\ref{algo:bg_suppress_detail}. \input{sections/S_3_pseudo_code} \section{Implementation Details} \noindent\textbf{Network.} The backbones used for the experiments are S3D and 3D ResNet-18 (R3D). During pre-training, we attach a projection head to the last convolutional (conv) layer, i.e., \textit{block5} for S3D and \textit{res4} for 3D ResNet-18. The projection head consists of a global spatio-temporal average pooling layer and a fully connected layer with output dimension 128. During evaluation, the projection head is discarded and replaced by a linear classifier. \noindent\textbf{Pipeline.} Before suppressing static cues, the spatial resolution of each video frame is down-sampled to $64\times 64$ for computational efficiency. Then, the down-sampled frames are fed into the flow-based generative model pre-trained on ImageNet as in~\cite{advflow} to obtain the latent variables used for suppressing static cues. \noindent\textbf{Momentum Dictionary.} The size of the momentum dictionary for pre-training on UCF101 is set to 2048. As Kinetics contains 200K videos, we use a larger memory bank of size 16,384 to save features. 
\noindent\textbf{Basic Data Augmentation.} The hyperparameters for the basic data augmentations are: color jittering (0.4, 0.4, 0.4, 0.1, p=0.8), grayscale (p=0.2), Gaussian blur (p=0.5), and horizontal flip (p=0.5). We apply consistent augmentation to each frame in a video clip. \section{Additional Experiments} \subsection{Integrating with Other SSL Approaches} Besides MoCo, the proposed S$^2$VC can be regarded as a data augmentation technique and integrated with other self-supervised learning (SSL) methods, e.g., DPC~\cite{DPC}. In this experiment, we follow the default settings of DPC except for adding the proposed S$^2$VC for data augmentation. We pre-train two models, w/o and w/ suppressing static visual cues, on the UCF101 dataset for 300 epochs. As shown in Table~\ref{tab:dpc}, our method surpasses DPC on both the video retrieval and action recognition tasks, which demonstrates its effectiveness. \input{tables/S_5_DPC} \subsection{Analysis on Bias Caused by Static Cues} \noindent\textbf{Statistics of Static Information.} In this experiment, we investigate the proportion of human/motion regions in natural videos. The YOLOv5 method~(Jocher et al.~\citeyear{yolo_v5_doi}) is employed to detect humans and the standard Farneb\"ack optical flow algorithm~\cite{farneback} is used to detect motion, as illustrated in Fig.~\ref{fig:staticstic}(a). By averaging the proportion of detected human regions frame-by-frame over the videos, we obtain the mean human proportions on the different datasets, shown in the first row of Fig.~\ref{fig:staticstic}(b). Likewise, the mean motion proportion is calculated and shown in the second row. It can be observed that, in all three publicly available activity datasets, less than 20\% of the regions contain humans, and only about 30\% of the regions are related to motion. Both statistics demonstrate that motion information is \textit{less dominant} than static visual cues in videos. 
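The frame-by-frame averaging behind these statistics amounts to a few lines; a minimal sketch over hypothetical binary detection masks (the masks themselves would come from the detector or optical flow):

```python
def mean_region_proportion(masks):
    """masks: per-frame binary masks (2D lists of 0/1) marking detected
    human or motion pixels. Returns the mean fraction of marked pixels,
    averaged frame-by-frame as in the statistics above."""
    fractions = []
    for mask in masks:
        pixels = [p for row in mask for p in row]
        fractions.append(sum(pixels) / len(pixels))
    return sum(fractions) / len(fractions)
```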
\input{figures/S_1_Statistics} \noindent\textbf{Natural/Shuffled Video Classification.} We extend the natural/shuffled video classification experiment by pre-training on the larger-scale dataset Kinetics-400. We use 3D ResNet-18 as the backbone and compare our self-supervised pre-trained model with the open-source supervised pre-training method~\cite{r3d}. As shown in Table \ref{tab:bin_test}, the proposed S$^2$VC method outperforms the supervised pre-trained model by a large margin on both the small- and large-scale datasets. This result indicates that features learned without addressing the representation bias problem \textit{may still be wrongly guided by static visual cues}, even when pre-training on a larger-scale dataset. \input{tables/S_2_Binary_Classification} \input{figures/S_6_Cos_Sim} \input{figures/S_7_Visualize_Sample} \subsection{Analysis on Suppressing Effects} \noindent\textbf{The Cosine Similarity.} Besides the \textit{intra-class} comparison in the manuscript, the \textit{intra-video} and \textit{inter-class} comparison results are presented in this section. Denote by $V_A$ and $V_B$ two videos from different action categories. \textit{Intra-video}: computing the frame-wise similarity between adjacent frames of the same video $V_A$. \textit{Inter-class}: computing the similarity between one frame from $V_A$ and another frame from $V_B$. Notice that only the spatial (static) visual similarity between individual frames is evaluated, i.e., no temporal information is involved. The results are shown in Fig.~\ref{fig:intra_video_cos_sim} and Fig.~\ref{fig:inter_class_cos_sim}. All these results demonstrate that the proposed S$^2$VC reduces the spatial visual similarity significantly. Since static visual cues contribute to higher spatial similarity, these results validate that static cues are mostly erased in the motion-preserved latent vector $Z_p$. 
Under the framework of contrastive learning, the learned features thus rely more on motion cues to better separate each video from the others. \noindent\textbf{Visualization Results.} More visualization examples of the generated motion-preserved videos are shown in Fig.~\ref{fig:aug_visualize}. We consider different types of actions and show which parts of the frame are preserved after S$^2$VC. We observe that most spatial cues are clearly weakened, like the sports venue in the first row. The preserved regions are highly correlated with human motion and allow recognizing the action categories from the visual relations along the time dimension. \input{figures/S_8_Norm_fit} \subsection{Distribution Visualization} In this section, we evaluate whether the pixel space and the latent space follow normal distributions. For this purpose, we randomly sample 8 pixels from each frame, or latent variables conditioned on the static factors, in a video. Then, we fit a normal distribution to each of the pixels or latent variables. The fitting results, mean square error (MSE), and Kolmogorov-Smirnov test decisions are shown in Fig.~\ref{fig:norm_fit}. The MSE measures the normalized difference between the data and the fitted normal distribution. For the Kolmogorov-Smirnov test, the data for each pixel or latent variable is first standardized by subtracting the mean and dividing by the standard deviation. Then, the standardized data is tested against the null hypothesis ($H_0$) that it follows a standard normal distribution at significance level $\alpha = 0.05$. From Fig.~\ref{fig:norm_fit}, we can see that the MSE of fitting a normal distribution at the pixel level is remarkably larger than that in the latent space. Regarding the statistical test, we cannot reject the null hypothesis that the latent variables conditioned on a video follow a normal distribution in most cases. 
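The one-sample Kolmogorov-Smirnov check just described can be sketched with the standard library alone, using the asymptotic 5\% critical value $1.36/\sqrt{n}$; this is a generic sketch, not the authors' exact test procedure, and the data below is illustrative.

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_rejects_standard_normal(data, crit=1.36):
    """One-sample KS test: standardize the data, compute the KS statistic
    against N(0, 1), and compare with the asymptotic 5% critical value."""
    n = len(data)
    mean = sum(data) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    xs = sorted((x - mean) / std for x in data)
    d_stat = max(max(phi(x) - i / n, (i + 1) / n - phi(x))
                 for i, x in enumerate(xs))
    return d_stat > crit / math.sqrt(n)
```

A standardized Gaussian sample should not be rejected, while a clearly non-normal (e.g., exponential) sample should be.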
Recall that the distribution of each latent variable encoded by normalizing flows would be one-dimensional (univariate) standard normal if it were completely random, without any other information given in advance. However, when constrained to a video, the distribution is affected by the shared static cues, so the standard normal distribution is shifted and scaled. The fitting results in Fig.~\ref{fig:norm_fit} align with our analysis.
\section{Introduction} \label{sec:intro} The estimation problem considered in this paper is the following. Suppose we have independent observations of the (nonnegative) random variable $X$, but we are interested in estimating the distribution of the (nonnegative) random variable $Y$. The crucial element in our setup is that we explicitly know the relation between the Laplace transforms of the random variables $X$ and $Y$, i.e., we have a mapping $\Psi$ which maps Laplace transforms of random variables to complex-valued functions defined on the right-half complex plane, and which maps the Laplace transform of $X$ to the Laplace transform of $Y$. A straightforward estimation procedure could be the following. (i)~Estimate the Laplace transform of $X$ by its evident empirical estimator; denote this estimate by $\tilde{X}_n$; (ii)~estimate the Laplace transform of $Y$ by $\Psi \tilde{X}_n$; (iii)~apply Laplace inversion to $\Psi \tilde{X}_n$, so as to obtain an estimate of the distribution of $Y$. To justify this procedure, several issues need to be addressed. First, $\tilde{X}_n$ may not lie in the domain of the mapping $\Psi$, and second, $\Psi \tilde{X}_n$ may not be a Laplace transform, and thus not amenable to Laplace inversion. The main contribution of this paper is that we specify a procedure in which the above caveats are addressed, leading to the result that the plug-in estimator described above converges, in probability, to the true value as $n$ grows large. In addition, we obtain bounds on its performance: the expected absolute estimation error is $O(n^{-1/2} \log(n+1))$. Perhaps surprisingly, the proofs primarily rely on an appropriate combination of standard techniques. Our result is valid under three mild regularity conditions: two of them are essentially of a technical nature, whereas the third can be seen as a specific continuity property that needs to be imposed on the mapping $\Psi$. 
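Step (i) of the procedure, the empirical Laplace transform, is straightforward to compute; the sketch below checks it against the known transform $\mathbb{E}[e^{-sX}] = 1/(1+s)$ of a unit-rate exponential (the choice of distribution is purely illustrative).

```python
import math
import random

def empirical_laplace(sample, s):
    """Empirical Laplace transform (1/n) * sum_i exp(-s * X_i), real s >= 0."""
    return sum(math.exp(-s * x) for x in sample) / len(sample)

rng = random.Random(42)
sample = [rng.expovariate(1.0) for _ in range(100_000)]
# For X ~ Exp(1): E[exp(-s X)] = 1 / (1 + s), so at s = 1 the value is 0.5.
```

By the law of large numbers the estimator converges pointwise to $\tilde{X}(s)$; the delicate part of the paper is what happens after $\Psi$ and inversion are applied.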
In this paper, two specific examples are treated in greater detail. In the first, an M/G/1 queueing system is considered: jobs of random size arrive according to a Poisson process with rate $\lambda>0$, the job sizes are i.i.d.\ samples from a nonnegative random variable $B$, and the system is drained at unit rate. Suppose that we observe the amount of work arriving in intervals of fixed length, say $\delta>0$; these observations are compound Poisson random variables, distributed as \[X\stackrel{\rm d}{=} \sum_{i=1}^N B_i,\] with $N$ Poisson distributed with mean $\lambda\delta$, independent of the job sizes, and with $B_1, B_2, \ldots$ mutually independent and distributed as $B$. We show how our procedure can be used to estimate the distribution of the workload $Y$ from the compound Poisson observations; the function $\Psi$ follows from the Pollaczek-Khinchine formula. As we demonstrate, the regularity conditions mentioned above are met. In the second example, often referred to as `decompounding', the goal is to determine the job size distribution from compound Poisson observations. \vspace{2mm} {\it Literature.} Related work can be found in various branches of the literature. Without aiming at giving an exhaustive overview, we discuss some of the relevant papers here. The first branch consists of papers on estimating the probability distribution of a non-observed random variable by exploiting a given functional relation between the Laplace transforms of $X$ and $Y$. The main difficulty that these papers circumvent is the issue of `ill-posedness': a sequence of functions $(f_n)_{n \in \mathbb{N} }$ may not converge to a function $f$, as $n$ grows large, even if the corresponding Laplace transforms of $f_n$ do converge to the Laplace transform of $f$. 
Remedies based on `regularized Laplace inversion' have been proposed, in a compound Poisson context, by Shimizu \cite{Shimizu2010} (including Gaussian terms as well) and Mnatsakanov {\it et al.} \cite{MnatsakanovRuymgaartRuymgaart2008}; the rate of convergence is typically just $1/\log n$ in an appropriately chosen $L_2$-norm. Hansen and Pitts \cite{HansenPitts2006} use the Pollaczek-Khinchine formula to construct estimators for the service-time distribution and its stationary excess distribution in an $M/G/1$ queue, and show that the estimated stationary excess distribution is asymptotically normal. Some related papers that use Fourier instead of Laplace inversion are \cite{VanEsEtAl2007}, \cite{ComteDuvalGenonCatalot2014}, \cite{ComteDuvalGenonCatalotKappus2014} and \cite{HallPark2004}. Van Es {\it et al.} \cite{VanEsEtAl2007} estimate the density of $B_i$ by inverting the empirical Fourier transform associated with a sample of $X$, and prove that this estimator is weakly consistent and asymptotically normal. Comte {\it et al.} \cite{ComteDuvalGenonCatalot2014} also estimate the density of $B_i$ using the empirical Fourier transform of $X$, by exploiting an explicit relation, derived by Duval \cite{Duval2013}, between the densities of $X$ and $B_i$. They show that this estimator achieves the optimal convergence rate in the minimax sense over Sobolev balls. Comte {\it et al.} \cite{ComteDuvalGenonCatalotKappus2014} extend this to the case of mixed compound Poisson distributions (where the intensity $\lambda$ of the Poisson process is itself a random variable), and provide bounds on the $L^2$-norm of the density estimator. Finally, Hall and Park \cite{HallPark2004} estimate service-time characteristics from busy-period data in an infinite-server queueing setting, and prove convergence rates in probability. A second branch of research concerns methods that do not involve working with transforms and inversion. 
Buchmann and Gr\"ubel \cite{BuchmannGruebel2003} develop a method for decompounding: in the case the underlying random variables have a discrete distribution by relying on the so-called Panjer recursion, and in the case of continuous random variables by expressing the distribution function of the summands $B_i$ in terms of a series of alternating terms involving convolutions of the distribution of $X$. The main result of this paper concerns the asymptotic Normality of specific plug-in estimators. This method having the inherent difficulty that probabilities are not necessarily estimated by positive numbers, an advanced version (for the discrete case only) has been proposed by the same authors in \cite{BuchmannGruebel2004}. This method has been further extended by B{\o}gsted and Pitts \cite{BogstedPitts2010} to that of a general (but known) distribution for the number of terms $N$. Duval \cite{Duval2013} estimates the probability density of $B_i$ by exploiting an explicit relation between the densities of $X$ and $B_i$, which however is only valid if $\lambda \delta < \log 2$. She shows that minimax optimal convergence rates are achieved in an asymptotic regime where the sampling rate $\delta$ converges to zero. The introduction of \cite{BogstedPitts2010} gives a compact description of the state-of-the-art of this branch of the literature. A third body of work concentrates on the specific domain of queueing models, and develops techniques to efficiently estimate large deviation probabilities. Bearing in mind that estimating small tail probabilities directly from the observations may be inherently slow and inefficient \cite{GlynnTorres1996}, techniques have been developed that exploit some structural understanding of the system. Assuming exponential decay in the exceedance level, the pioneering work of Courcoubetis {\it et al.} \cite{CourcoubetisEtAl1995} provide (experimental) backing for an extrapolation technique. 
The approach proposed by Zeevi and Glynn \cite{ZeeviGlynn2004} has provable convergence properties; importantly, their results are valid in great generality, in that they cover e.g.\ exponentially decaying as well as Pareto-type tail probabilities. Mandjes and van de Meent \cite{MandjesVandeMeent2009} consider queues with Gaussian input; it is shown how to accurately estimate the characteristics of the input stream by just measuring the buffer occupancy; interestingly, and perhaps counter-intuitively, relatively crude periodic measurements suffice to estimate fine time-scale traffic characteristics. As it is increasingly recognized that probing techniques may play a pivotal role in the design of distributed control algorithms, a substantial number of research papers focus on applications in communication networks. A few examples are the procedure of Baccelli {\it et al.}\ \cite{BaccelliKauffmannVeitch2009}, which infers input characteristics from delay measurements, and the technique of Antunes and Pipiras \cite{AntunesPipiras2011}, which estimates the inter-renewal distribution based on probing information. This paper contributes to this line of research by showing how a Laplace-transform based estimator, using samples of the workload obtained by probing, can be used to estimate the workload in an M/G/1 queue; cf.\ Sections \ref{subsec:mg1} and \ref{sec:num}. 
A numerical illustration is provided in Section \ref{sec:num}. \vspace{2mm} {\it Notation.} We finish this introduction by introducing notation that is used throughout this paper. The real and imaginary part of a complex number $z \in \mathbb{C} $ are denoted by $\Re(z)$ and $\Im(z)$; we use the symbol ${\rm i}$ for the imaginary unit. We write $ \mathbb{C} _+ := \{ z \in \mathbb{C} \mid \Re(z) \geq 0 \}$ and $ \mathbb{C} _{++} := \{ z \in \mathbb{C} \mid \Re(z) > 0 \}$. For a function $f: [0, \infty) \rightarrow \mathbb{R} $, let $\bar{f}(s) = \int_0^{\infty} f(x) e^{-s x} {{\rm d}}x$ denote the Laplace transform of $f$, defined for all $s \in \mathbb{C} $ where the integral is well-defined. For any nonnegative random variable $X$, let $\tilde{X}(s) := \mathbb{E}[\exp(-s X)]$ denote the Laplace transform of $X$, defined for all $s \in \mathbb{C} _+$. (Although $\tilde{X}(s)$ may be well-defined for $s$ with $\Re(s) < 0$, we restrict ourselves without loss of generality to $ \mathbb{C} _{+}$, which is contained in the domain of $\tilde{X}(s)$ for each nonnegative random variable $X$.) For $t \in (0, \infty)$, as usual, $\Gamma(t) := \int_0^{\infty} x^{t-1} e^{-x} {\rm d}x$ denotes the Gamma function. The complement of an event $A$ is written as $A^{\mathfrak{c}}$; the indicator function corresponding to $A$ is given by ${\bf 1}_A$. \section{Laplace-transform based estimator} \label{sec:method} In this section we formally define our plug-in estimator. The setting is as sketched in the introduction: we have $n$ i.i.d.\ observations $X_1, \ldots, X_n$ of the random variable $X$ at our disposal, and we wish to estimate the distribution of $Y$, where we know a functional relation between the transforms of $X$ and $Y$. Let $\mathcal{X}$ be a collection of (single-dimensional) nonnegative random variables, and let the collection $\tilde{\mathcal{X}} = \{ \tilde{X}(\cdot) \mid X \in \mathcal{X} \}$ represent their Laplace transforms. 
Let \[ \Psi: \tilde{\mathcal{X}} \rightarrow \{ g: \mathbb{C} _{+} \rightarrow \mathbb{C} \} \] map each Laplace transform in $\tilde{\mathcal{X}}$ to a complex-valued function on $ \mathbb{C} _{+}$. Finally, let $Y$ be a nonnegative random variable such that $\tilde{Y}(s) = (\Psi \tilde{X})(s)$ for some unknown $X \in \mathcal{X}$ and all $s \in \mathbb{C} _{+}$, i.e., $\Psi$ maps the Laplace transform of $X$ onto the Laplace transform of $Y$. We are interested in estimating the cumulative distribution function $F^Y(w)$ of $Y$ at a given value $w > 0$, based on the sample $X_1, \ldots, X_n$. The distributions of both $X$ and $Y$ are assumed to be unknown, but the mapping $\Psi$ {\it is} known (and will be exploited in our estimation procedure). A natural approach to this estimation problem is to (i) estimate the Laplace transform of $Y$ by $\Psi \tilde{X}_n$, where $\tilde{X}_n$ is the `na\"{\i}ve' estimator \[ \tilde{X}_n(s) = \frac{1}{n} \sum_{i=1}^n \exp(-s X_i), \quad (s \in \mathbb{C} _{+}); \] observe that $\tilde{X}_n$ can be interpreted as the Laplace transform of the empirical distribution of the sample $X_1, \ldots, X_n$; then (ii) estimate the Laplace transform corresponding to the distribution function $F^Y$ by $s \mapsto s^{-1} (\Psi \tilde{X}_n)(s)$; and finally (iii) apply an inversion formula for Laplace transforms and evaluate the resulting expression in $w$. Note that in step (ii) we relied on the standard identity \[\int_0^\infty e^{-sw} F^Y(w)\,{\rm d}w = \frac{\mathbb{E}[e^{-sY}]}{s}.\] There are two caveats, however, with this approach: first, the transform $\tilde{X}_n$ is not necessarily an element of $\tilde{\mathcal{X}}$, in which case $\Psi \tilde{X}_n$ is undefined, and second, the function $s^{-1} (\Psi \tilde{X}_n)(s)$ is not necessarily a Laplace transform and thus not amenable for inversion. To overcome the first issue, we let $E_n$ be the event that $\tilde{X}_n \in \tilde{\mathcal{X}}$. 
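As a minimal illustration of step (i), the empirical Laplace transform $\tilde{X}_n$ can be evaluated directly from the sample. The Python sketch below is ours and purely illustrative; the exponential sample merely stands in for data with a known transform, namely $1/(1+s)$.

```python
import numpy as np

def empirical_laplace(sample, s):
    """Empirical Laplace transform (1/n) sum_i exp(-s X_i),
    for a complex argument s with Re(s) >= 0."""
    return np.mean(np.exp(-s * np.asarray(sample, dtype=float)))

# Illustrative data: i.i.d. Exp(1) observations, with true transform 1/(1+s).
rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=1000)

s = 0.5 + 1.0j
print(empirical_laplace(sample, s))  # close to 1/(1.5 + 1j)
```

At $s=0$ the estimator returns exactly $1$, reflecting that $\tilde{X}_n$ is itself the Laplace transform of the empirical distribution of the sample.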
We assume that $E_n$ lies in the natural filtration generated by $X_1, \ldots, X_n$, is not defined in terms of characteristics of the (unknown) $X$, and also that $\prob{E_n^{\mathfrak{c}}} \rightarrow 0$ as $n \rightarrow \infty$. For the main result of this paper, Theorem \ref{thm:convergenceRate} in Section \ref{sec:rates}, it turns out to be irrelevant how $F^Y(w)$ is estimated on $E_n^{\mathfrak{c}}$ (as long as the estimate lies in $[0,1]$); we could, for example, estimate it by zero on this event. In concrete situations, it is typically easy to determine a suitable choice for the sets $E_n$; for both applications considered in Section \ref{sec:appl}, we explicitly identify the $E_n$. On the event $E_n$, we estimate the Laplace transform of $F^Y$ by the plug-in estimator \begin{align} \label{eq:emplap} \bar{F}^Y_n(s) := \frac{1}{s} (\Psi \tilde{X}_n)(s), \quad (s \in \mathbb{C} _{++}). \end{align} To overcome the second issue, of $\bar{F}^Y_n(s)$ not necessarily being a Laplace transform, we estimate $F^Y(w)$ by applying a \emph{truncated} version of Bromwich' Inversion formula \citep{Doetsch1974}: \begin{align} \label{eq:estimator} F^Y_n(w) = \int_{-\sqrt{n}}^{\sqrt{n}} \frac{1}{2 \pi} e^{(c + {\rm i} y) w} \bar{F}^Y_n(c + {\rm i} y) {{\rm d}}y, \end{align} where $c$ is an arbitrary positive real number. In the `untruncated' version of Bromwich' Inversion formula the integration in \eqref{eq:estimator} is over the whole real line. Since that integral may not be well-defined if $\bar{F}^Y_n$ is not a Laplace transform, we integrate over a finite interval (which grows in the sample size $n$). The thus constructed estimator has remedied the two complications that we identified above. The main result of this paper, which describes the performance of this estimator as a function of the sample size $n$, is given in the next section. 
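The estimator \eqref{eq:estimator} is straightforward to prototype numerically. The Python sketch below is illustrative only: the quadrature step, the value of $c$, and the map $\Psi$ (taken here to be the identity, so that $Y \stackrel{\rm d}{=} X$ and the target is simply $F^X(w)$) are our own choices, not prescribed by the method.

```python
import numpy as np

def estimate_cdf(sample, w, Psi, c=0.5, dy=0.02):
    """Truncated Bromwich inversion: integrate, over |y| <= sqrt(n),
    (2 pi)^{-1} e^{(c+iy)w} (Psi applied to X_n~)(c+iy) / (c+iy)."""
    sample = np.asarray(sample, dtype=float)
    T = np.sqrt(len(sample))
    ys = np.arange(-T, T + dy, dy)
    s = c + 1j * ys                                  # contour Re(s) = c
    Xn = np.exp(-np.outer(s, sample)).mean(axis=1)   # empirical transform on the contour
    integrand = np.exp(s * w) * Psi(s, Xn) / s / (2.0 * np.pi)
    return float((integrand.real * dy).sum())        # imaginary parts cancel by symmetry

# Identity map for Psi: then Y has the same law as X, so the estimate
# should approach F^X(w); here X ~ Exp(1) and F^X(1) = 1 - e^{-1}.
rng = np.random.default_rng(1)
sample = rng.exponential(scale=1.0, size=500)
est = estimate_cdf(sample, w=1.0, Psi=lambda s, Xn: Xn)
print(est)  # roughly 1 - exp(-1)
```

Passing `Psi` as a function of both the contour points and the empirical transform values is a design convenience; in the applications of Section \ref{sec:appl} it would encode the corresponding transform relation.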
\section{Main result: convergence rate} \label{sec:rates} In this section we show that the expected absolute estimation error of our estimator $F^Y_n(w)$, as defined in the previous section, is bounded from above by a constant times $n^{-1/2} \log(n+1)$. Our result is proven under the following assumptions. \begin{itemize} \item[(A1)] For each $n \in \mathbb{N} $ there is an event $A_n \subset E_n$, such that $\prob{A_n^{\mathfrak{c}}} \leq \kappa_1 n^{-1/2}$ for some $\kappa_1 > 0$ independent of $n$; \item[(A2)] $F^Y(y)$ is continuously differentiable on $[0, \infty)$, and twice differentiable at $y=w$; \item[(A3)] There are constants $\kappa_2 \geq 0$, $\kappa_3 \geq 0$ and (nonnegative and random) $Z_n$, $n \in \mathbb{N} $, such that $\sup_{p \in (1,2)} \mathbb{E}[|Z_n|^p] \leq \kappa_3 n^{-1/2}$ for all $n \in \mathbb{N} $, and such that, on the event $A_n$, \[ | (\Psi \tilde{X}_n)(s) - (\Psi \tilde{X})(s) | \,\leq \,\kappa_2 \Big|\tilde{X}_n(s) - \tilde{X}(s) \Big| +Z_n \:\:\text{ a.s.},\] for all $s = c + {\rm i} y$, $n \in \mathbb{N} $, and $-\sqrt{n} \leq y \leq \sqrt{n}$. \end{itemize} These assumptions are typically `mild'; we proceed with a short discussion of each of them. Assumption (A1) ensures that the contribution of the complement of $A_n$ (and also that of the complement of $E_n$) to the expected absolute estimation error is sufficiently small. The difference between $A_n$ and $E_n$ is that the definition of $E_n$ does not involve the unknown $X \in \mathcal{X}$ (which enables us to define the estimator $F_n^Y(w)$ \emph{without} knowing the unknown $X$), whereas $A_n$ may actually depend on $X$. In specific applications, this flexibility helps when checking whether the assumptions (A1)--(A3) are satisfied; cf.\ the proofs of Theorems~\ref{thm:mg1rates} and \ref{thm:decompoundingrates}. If $\prob{A_n^{\mathfrak{c}}} \sim n^{-a}$ for some $a \in (0,1/2)$, then (A1) does not hold and Theorem \ref{thm:convergenceRate} is not valid.
Assumption (A2) is a smoothness condition on $F^Y$ that we use to control the error caused by integrating in \eqref{eq:estimator} over a finite interval, instead of integrating over $ \mathbb{R} $. The twice-differentiability assumption is only used to apply Lemma \ref{lem:ntaillemma} with $f = F^Y$ in the proof of Theorem \ref{thm:convergenceRate}. It can be replaced by any other condition that guarantees that, for all $n \in \mathbb{N} $ and some $\kappa_4 > 0$ independent of $n$, \[ \left| \int_{|y|>\sqrt{n}} \frac{1}{2 \pi} e^{(c+{\rm i} y) w} \bar{F}^Y(c+{\rm i} y) {\rm d} y \right| \leq \frac{\kappa_4}{\sqrt{n}}. \] Continuous differentiability of $F^Y$ makes sure that Bromwich' Inversion formula applied to $\bar{F}^Y$ yields $F^Y$ again, i.e., $F^Y(w) = \int_{-\infty}^{\infty} ({2 \pi} )^{-1}e^{(c + {\rm i} y) w} \bar{F}^Y(c + {\rm i} y) {{\rm d}}y$, cf.\ \cite[][Chapter 4]{Schiff1999}. This equality is still true if the derivative of $F^Y(y)$ with respect to $y$ is piecewise continuous with finitely many discontinuity points, and in addition continuous at $y=w$. Assumption (A3) can be seen as a kind of Lipschitz-continuity condition on $\Psi$ that guarantees that $\Psi \tilde{X}_n$ is `close to' $\Psi \tilde{X}$ if $\tilde{X}_n$ is `close to' $\tilde{X}$. This condition is necessary to prove the weak consistency of our estimator. The formulation with the random variables $Z_n$ allows for a more general setting than with $Z_n = 0$, and is used in both applications in Section \ref{sec:appl}.
A straightforward example that satisfies assumptions (A1)--(A3) is the case where $X \stackrel{d}{=} Y + W$, where $W$ is a known nonnegative random variable. If the cdf of $Y$ satisfies the smoothness condition (A2), then, with $\tilde{Y}(s) = (\Psi \tilde{X})(s) := \tilde{X}(s) / \tilde{W}(s)$, $A_n^{\mathfrak{c}} = E_n^{\mathfrak{c}} = \emptyset$, $Z_n = 0$ a.s., $c > 0$ arbitrary, and $\kappa_2 := \sup_{y \in \mathbb{R} } 1 / |\tilde{W}(c + {\rm i} y)|$ (assumed to be finite), it is easily seen that assumptions (A1)--(A3) are satisfied. More involved examples that satisfy the assumptions are presented in Section \ref{sec:appl}. \vspace{\baselineskip} \begin{theorem}\label{thm:convergenceRate} Let $w > 0$, $c > 0$, and assume (A1)--(A3). Then $F^Y_n(w)$ converges to $F^Y(w)$ in probability, as $n \rightarrow \infty$, and there is a constant $C > 0$ such that, for all $n \in \mathbb{N} $, \begin{align} \label{eq:convrates} \mathbb{E}[|F^Y_n(w) - F^Y(w)|] \leq C n^{-1/2} \log(n+1). \end{align} \end{theorem} \begin{proof} It suffices to prove \eqref{eq:convrates}, since this implies weak consistency of $F^Y_n(w)$. Fix $n \in \mathbb{N} $. The proof consists of three steps. In Step 1 we bound the estimation error on the event $A_n$, in Step 2 we consider the complement $A_n^{\mathfrak{c}}$, and in Step 3 we combine Steps 1 and 2 to arrive at the statement of the theorem. Some of the intermediate steps in the proof rely on auxiliary results that are presented in Section \ref{sec:aux}. {\bf Step 1.} We show that there are positive constants $\kappa_4$ and $\kappa_5$, independent of $n$, such that, for all $n\in{\mathbb N}$ and $p \in (1,2)$, \begin{equation} \label{eq:ratesonA} \mathbb{E}[|F_n^Y(w) - F^Y(w) | \cdot {\bf 1}_{A_n}] \leq \kappa_4 n^{-1/2} + \kappa_5 (p-1)^{-1/p} n^{1/2 - 1/p}.
\end{equation} To prove the inequality \eqref{eq:ratesonA}, consider the following elementary upper bound: \begin{eqnarray} \nonumber \lefteqn{ \mathbb{E}\left[\left|F_n^Y(w) - F^Y(w) \right| \cdot {\bf 1}_{A_n}\right]}\\ & =& \mathbb{E}\left[\left| \int_{-\sqrt{n}}^{\sqrt{n}} \frac{1}{2 \pi} e^{(c+{\rm i} y) w} \bar{F}^{{Y}}_{{n}}(c+{\rm i} y) {{\rm d}}y - \int_{-\infty}^{\infty} \frac{1}{2 \pi} e^{(c+{\rm i} y) w} \bar{F}^{Y}(c+{\rm i} y) {{\rm d}}y \:\right| \cdot {\bf 1}_{A_n} \right] \nonumber \\ \nonumber &=& \frac{1}{2\pi}\mathbb{E}\left[\left| \int_{-\sqrt{n}}^{\sqrt{n}} e^{(c+{\rm i} y) w} (\bar{F}^{{Y}}_{{n}}(c+{\rm i} y) - \bar{F}^{Y}(c+{\rm i} y)) {{\rm d}}y \right.\right.\\ &&\nonumber\hspace{15mm}\left.\left.- \int_{|y|>\sqrt{n}} e^{(c+{\rm i} y) w} \bar{F}^{Y}(c+{\rm i} y) {{\rm d}}y \:\right| \cdot {\bf 1}_{A_n} \right] \\ \label{eq:term1inproof} &\leq& \mathbb{E}\left[\, \left| \int_{-\sqrt{n}}^{\sqrt{n}} \frac{1}{2 \pi} e^{(c+{\rm i} y) w} (\bar{F}^{{Y}}_{{n}}(c+{\rm i} y) - \bar{F}^{Y}(c+{\rm i} y)) {{\rm d}}y\, \right| \cdot {\bf 1}_{A_n} \right] \\ \label{eq:term2inproof} &&\hspace{15mm}+\: \left| \,\int_{|y|>\sqrt{n}} \frac{1}{2 \pi} e^{(c+{\rm i} y) w} \bar{F}^{Y}(c+{\rm i} y) {{\rm d}}y \,\right|. \end{eqnarray} We now treat the terms \eqref{eq:term1inproof} and \eqref{eq:term2inproof} separately, starting with the latter. By assumption (A2) and the observation \[\int_w^{\infty} \left(\frac{\rm d}{{\rm d}y}F^Y(y + w)\right) \frac{e^{-c y}}{y} \frac{w}{e^{-c w}} {{\rm d}}y \le \int_w^{\infty} \left(\frac{\rm d}{{\rm d}y}F^Y(y + w)\right){{\rm d}}y = F^Y(\infty)-F^Y(2w)<1,\] we conclude that \[\int_w^{\infty} \left|\frac{\rm d}{{\rm d}y}F^Y(y + w)\right|\, \frac{e^{-c y}}{y} {{\rm d}}y < \frac{e^{-c w}}{w} < \infty,\] and therefore $F^Y$ satisfies the conditions of Lemma \ref{lem:ntaillemma}. 
As a result, \eqref{eq:term2inproof} satisfies \begin{align} \label{eq:term2binproof} \left| \int_{|y|>\sqrt{n}} \frac{1}{2 \pi} e^{(c+{\rm i} y) w} \bar{F}^{Y}(c+{\rm i} y) {{\rm d}}y \right| \leq \frac{\kappa_4}{\sqrt{n}}, \end{align} for some constant $\kappa_4 > 0$ independent of $n$. We now bound the term \eqref{eq:term1inproof}. It is obviously majorized by \[\mathbb{E}\left[ \int_{-\sqrt{n}}^{\sqrt{n}} \frac{1}{2 \pi} e^{c w} \left| \bar{F}^{{Y}}_{{n}}(c+{\rm i} y) - \bar{F}^{Y}(c+{\rm i} y) \right| {{\rm d}}y \cdot {\bf 1}_{A_n} \right].\] Let $p\in (1,2)$ and $q > 1$, with $p^{-1} + q^{-1} = 1$. By subsequent application of H\"older's Inequality, this expression is further bounded by \[\mathbb{E}\left[ \left(\int_{-\sqrt{n}}^{\sqrt{n}} \left(\frac{1}{2 \pi} e^{c w} \right)^q {{\rm d}}y \right)^{1/q} \left(\int_{-\sqrt{n}}^{\sqrt{n}} \left| \bar{F}^{{Y}}_{{n}}(c+{\rm i} y) - \bar{F}^{Y}(c+{\rm i} y) \right|^p {{\rm d}}y \right)^{1/p} \cdot {\bf 1}_{A_n} \right].\] By computing the first integral, and an application of Jensen's inequality, this is not larger than \[e^{c w} \frac{(2 \sqrt{n})^{1/q}}{2 \pi} \left(\mathbb{E}\left[ \int_{-\sqrt{n}}^{\sqrt{n}} \left| \bar{F}^{{Y}}_{{n}}(c+{\rm i} y) - \bar{F}^{Y}(c+{\rm i} y) \right|^p {{\rm d}}y \cdot {\bf 1}_{A_n} \right]\right)^{1/p} .\] Finally applying Fubini's Theorem, we arrive at the upper bound \begin{equation} \label{eq:term3inproof} e^{c w} \frac{(2 \sqrt{n})^{1/q}}{2 \pi} \left( \int_{-\sqrt{n}}^{\sqrt{n}} \mathbb{E}\left[ \left| \bar{F}^{{Y}}_{{n}}(c+{\rm i} y) - \bar{F}^{Y}(c+{\rm i} y) \right|^p \cdot {\bf 1}_{A_n} \right] {{\rm d}}y \right)^{1/p}. \end{equation} We now study the behavior of (\ref{eq:term3inproof}), being an upper bound to (\ref{eq:term1inproof}), as a function of $n$. To this end, we first derive a bound on the integrand. 
Assumption (A3) implies that there exists a sequence of nonnegative random variables $Z_n$, $n\in{\mathbb N}$, such that \begin{eqnarray} \nonumber \left| \bar{F}^Y_n(s) - \bar{F}^{Y}(s) \right| \cdot {\bf 1}_{A_n} &=& \left| s^{-1} (\Psi \tilde{X}_n) (s) - s^{-1} (\Psi \tilde{X})(s) \right| \cdot {\bf 1}_{A_n} \\ \label{eq:eqnwithsinverse} &\leq & \left( \kappa_2 \left| \tilde{X}(s) - \tilde{X}_n(s) \right| + Z_n \right) \cdot |s^{-1}| \cdot {\bf 1}_{A_n} \text{ a.s.,} \end{eqnarray} for all $s = c + {\rm i} y$ with $-\sqrt{n} \leq y \leq \sqrt{n}$. Now recall the so-called $c_r$-inequality \[\mathbb{E}[|X+Y|^p] \leq 2^{p-1} (\mathbb{E}[|X|^p] + \mathbb{E}[|Y|^p]),\] and the obvious inequality ${\bf 1}_{A_n}\leq 1$ a.s. As a consequence of Lemma \ref{lem:sqrtlemma}, we thus obtain \begin{eqnarray*} \lefteqn{ \mathbb{E}\left[ \left| \bar{F}^{{Y}}_{{n}}(c+{\rm i} y) - \bar{F}^{Y}(c+{\rm i} y) \right|^p \cdot {\bf 1}_{A_n} \right] } \\ &\leq& 2^{p-1} \left( \kappa_2^p \mathbb{E}\left[ \left| \tilde{X}(c+{\rm i} y) - \tilde{X}_n(c+{\rm i} y) \right|^p \right] + \mathbb{E}\left[|Z_n|^p\right] \right) |c + {\rm i} y|^{-p} \\ &\leq& 2^{p-1} (2^p \kappa_2^p + \kappa_3) n^{-1/2} |c + {\rm i} y|^{-p}. 
\end{eqnarray*} From \eqref{eq:term3inproof} and the straightforward inequality \begin{eqnarray} \label{eq:integralofsinverse} \int_{-\infty}^{\infty} |c + {\rm i} y|^{-p} {\rm d} y & \leq& \int_{-\infty}^{\infty} \frac{1}{(c^2+y^2)^{p/2}} {\rm d} y = c^{1-p} \int_{0}^{\infty} \frac{z^{-1/2}}{(1+z)^{p/2}} {\rm d} z \\ \nonumber &=& C_0(p ):=c^{1-p} \pi^{1/2} \frac{\Gamma((p-1)/2)}{\Gamma(p/2)}, \end{eqnarray} it follows that \begin{eqnarray} \nonumber \lefteqn{ \hspace{-5mm}\mathbb{E}\left[ \left| \int_{-\sqrt{n}}^{\sqrt{n}} \frac{1}{2 \pi} e^{(c+{\rm i} y) w} (\bar{F}^{Y}_{{n}}(c+{\rm i} y) - \bar{F}^{Y}(c+{\rm i} y)) {{\rm d}}y \right| \cdot {\bf 1}_{A_n} \right]} \\ \nonumber &\leq & e^{c w} \frac{(2 \sqrt{n})^{1/q}}{2 \pi} \left( 2^{p-1} (2^p \kappa_2^p + \kappa_3) n^{-1/2} \int_{-\sqrt{n}}^{\sqrt{n}} |c + {\rm i} y|^{-p} {{\rm d}}y \right)^{1/p} \\ \label{eq:term5inproof} &\leq & C_1(p)\, n^{1/2-1/p}, \end{eqnarray} where \[C_1(p) := e^{c w} \,\frac{2^{2-2/p}}{2 \pi} (2^p \kappa_2^p + \kappa_3)^{1/p} \left( C_0(p ) \right)^{1/p}.\] It follows from $\Gamma((p-1)/2) = 2 \,\Gamma((p+1)/2) / (p-1)$ that \[\lim_{p \downarrow 1} (p-1)^{1/p} C_1(p)< \infty.\] This implies that there is a $\kappa_5 > 0$ such that \begin{eqnarray} \label{eq:c1c3} C_1(p) \leq \kappa_5 (p-1)^{-1/p} \quad \text{ for all $p \in (1,2)$}. \end{eqnarray} Upon combining the results presented in displays \eqref{eq:term1inproof}, \eqref{eq:term2inproof}, \eqref{eq:term2binproof}, \eqref{eq:term5inproof}, and \eqref{eq:c1c3}, we obtain Inequality \eqref{eq:ratesonA}, as desired. {\bf Step 2.} On the complement of the event $A_n$ we have, by assumption (A1), \begin{align} \label{eq:Acresult} &\mathbb{E}\left[|F^{Y}_{n}(w) - F^Y(w) | \cdot {\bf 1}_{A_n^{\mathfrak{c}}}\right] \leq \prob{ A_n^{\mathfrak{c}} } \leq \kappa_1 n^{-1/2}. 
\end{align} {\bf Step 3.} When combining Inequalities \eqref{eq:ratesonA} and \eqref{eq:Acresult}, we obtain that \begin{eqnarray*} \mathbb{E}\left[|F^{Y}_{n}(w) - F^Y(w) |\right] &= &\mathbb{E}\left[|F^{Y}_{n}(w) - F^Y(w) | \cdot {\bf 1}_{A_n}\right] + \mathbb{E}\left[|F^{Y}_{n}(w) - F^Y(w) | \cdot {\bf 1}_{ A_n^{\mathfrak{c}}}\right] \\ &\leq& \kappa_4 n^{-1/2} + \kappa_5 (p-1)^{-1/p} n^{1/2 - 1/p} + \kappa_1 n^{-1/2}. \end{eqnarray*} Now realize that in the above inequality we have the freedom to pick any $p\in(1,2)$. In particular, the choice $p=p_n := 1 + 1/(2 \log(n+1)) \in (1,2)$ yields the bound \begin{eqnarray*} \mathbb{E}[|F^{Y}_{n}(w) - F^Y(w) |] & \leq& \kappa_4 n^{-1/2} + \kappa_5 (2 \log(n+1))^{1/p_n} n^{1/2 - 1/p_n} + \kappa_1 n^{-1/2} \\ &\leq& (\kappa_4 + 2 \kappa_5 e^{1/2} + \kappa_1) n^{-1/2} \log(n+1), \end{eqnarray*} using \[\frac{1}{2}-\frac{1}{p_n}=-\frac{1}{2}+\frac{1}{1+2 \log(n+1)}\] and \[n^{1/(1+2\log(n+1))} = \exp\left(\frac{\log n}{1+ 2 \log(n+1)}\right) \leq e^{1/2}.\] This finishes the proof of Theorem~\ref{thm:convergenceRate}. \end{proof} \begin{remark} \label{rem:cdf} Contrary to some of the literature mentioned in Section \ref{sec:intro} (e.g.\ \cite{MnatsakanovRuymgaartRuymgaart2008} and \cite{Shimizu2010}), we are not estimating a density but a cumulative distribution function. This difference translates into an additional $|s^{-1}|$ term in Equation \eqref{eq:eqnwithsinverse}, which enables us to bound the integral in Equation \eqref{eq:integralofsinverse}. This appears to be a crucial step in the proof of Theorem \ref{thm:convergenceRate}, because it means that the ill-posedness of the inversion problem (the fact that the inverse Laplace transform operator is not continuous) does not play a r\^ole: convergence of $\bar{F}_n^Y(\cdot)$ to $\bar{F}^Y(\cdot)$ implies convergence of $F_n^Y(w)$ to $F^Y(w)$. As a result, we do not need regularization techniques as in \cite{MnatsakanovRuymgaartRuymgaart2008} and \cite{Shimizu2010}.
\end{remark} \section{Applications} \label{sec:appl} In this section we discuss two examples that have attracted a substantial amount of attention in the literature. In both examples, we demonstrate how Assumptions (A1)--(A3) can be verified. \subsection{Workload estimation in an M/G/1 queue} \label{subsec:mg1} In our first example we consider the so-called M/G/1 queue: jobs arrive at a service station according to a Poisson process with rate $\lambda>0$, and the service times are i.i.d.\ copies of a random variable $B$; see e.g.\ \cite{Cohen1982} for an in-depth account of the M/G/1 queue, and \cite{NazarathyPollet2012} for an annotated bibliography on inference in queueing models. Under the stability condition $\rho:= \lambda \mathbb{E}[B] \in (0,1)$ the queue's stationary workload is well defined. Our objective, motivated by the setup described in \cite{denBoerMandjesNunezZuraniewski2014}, is to estimate ${\mathbb P}(Y>w)$, where $Y$ is the stationary workload, and $w>0$ is a given threshold. The idea is that this estimate is based on samples of the queue's input process. In more detail, the procedure works as follows. By the Pollaczek-Khintchine formula \citep{Kendall1951}, the Laplace transform of the stationary workload $Y$ satisfies the relation \begin{eqnarray} \label{eq:PKformula} \tilde{Y}(s) = \frac{s (1 - \rho)}{s - \lambda + \lambda \tilde{B}(s)}, \quad s \in \mathbb{C} _{+}. \end{eqnarray} For subsequent time intervals of (deterministic) length $\delta > 0$, the amount of work arriving to the queue is measured. These observations are i.i.d.\ samples from a compound distribution $X \stackrel{\rm d}{=} \sum_{i=1}^N B_i$, with $N$ Poisson distributed with parameter $\lambda \delta$, and the random variables $B_1, B_2, \ldots$ independent and distributed as $B$ (independent of $N$).
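The sampling scheme just described is easy to mimic in simulation. The Python sketch below is ours and purely illustrative: exponential service times and the parameter values are arbitrary choices, used only to generate compound-Poisson work increments $X_1, \ldots, X_n$.

```python
import numpy as np

def work_increments(lam, delta, n, rng, mean_service=0.3):
    """n i.i.d. increments X_i = sum_{j=1}^{N_i} B_{ij}, with N_i ~ Poisson(lam*delta)
    and (illustratively) exponential service times with mean mean_service."""
    counts = rng.poisson(lam * delta, size=n)
    services = rng.exponential(mean_service, size=counts.sum())
    # split the pooled service times into the n sampling intervals
    return np.array([b.sum() for b in np.split(services, np.cumsum(counts)[:-1])])

rng = np.random.default_rng(2)
lam, delta = 2.0, 1.0            # arrival rate and sampling interval
X = work_increments(lam, delta, n=20000, rng=rng)
print(X.mean())                  # E[X] = lam * delta * E[B] = 0.6 here
```

By Wald's equation the empirical mean of the increments estimates $\lambda \delta \, \mathbb{E}[B] = \delta \rho$, the quantity entering the stability condition.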
By Wald's equation we have $\mathbb{E}[X] = \delta \rho$, and a direct computation yields $\tilde{X}(s) = \exp(-\lambda \delta + \lambda \delta \tilde{B}(s))$. Combining this with \eqref{eq:PKformula}, we obtain the following relation between the Laplace transforms of $X$ and $Y$: \begin{eqnarray} \label{eq:XinY} \tilde{Y}(s) = \frac{s (1 - \delta^{-1} \mathbb{E}[X])}{s + \delta^{-1} {\rm Log}(\tilde{X}(s))}. \end{eqnarray} Here ${\rm Log}$ is the distinguished logarithm of $\tilde{X}(s)$ \citep{Chung2001}, which is convenient to work with in this context \cite{VanEsEtAl2007}. Our goal is to estimate ${\mathbb P}(Y \leq w)=F^Y(w)$, for a given $w > 0$, based on an independent sample $X_1, \ldots, X_n$. We use the estimator $F_n^Y(w)$ defined in Section \ref{sec:method}, for an arbitrary $c > 0$, and with \begin{itemize} \item[(i)] $\mathcal{X}$ the collection of all random variables $X'$ of the form $\sum_{i=1}^{N'} B'_i$ with $N'$ Poisson distributed with strictly positive mean, $\{B'_i\}_{i \in \mathbb{N} }$ i.i.d., independent of $N'$, nonnegative, and with $0 < \mathbb{E}[X'] < \delta$; \item[(ii)] the sets \[E_n := \{ 0 \le \frac{1}{n} \sum_{i=1}^n X_i < \delta \},\] so as to ensure that the `empirical occupation rate' of the queue, $\delta^{-1} n^{-1} \sum_{i=1}^n X_i$, is strictly smaller than one, and that therefore the Pollaczek-Khintchine formula holds; \item[(iii)] $\Psi$ defined through \[(\Psi \tilde{X})(s) = \frac{s (1 + \delta^{-1} \tilde{X}'(0))}{s + \delta^{-1} {\rm Log}(\tilde{X}(s))},\:\:\:s \in \mathbb{C} _{+},\] where $\tilde{X}'(t)$ denotes the derivative of $\tilde{X}(t)$ in $t \in (0,\infty)$ and $\tilde{X}'(0) = \lim_{t \downarrow 0} \tilde{X}'(t) = -\mathbb{E}[X]$. \end{itemize} \vspace{3mm} \begin{theorem} \label{thm:mg1rates} Consider the estimation procedure outlined above. Suppose $F^Y$ is continuously differentiable, twice differentiable in $w$, and $\mathbb{E}[B^2] < \infty$. 
Then there is a constant $C > 0$ such that \[\mathbb{E}\left[|F_n^Y(w) - F^Y(w)|\right] \leq Cn^{-1/2} \log(n+1) \] for all $n \in \mathbb{N} $. \end{theorem} \begin{proof} Let $n \in \mathbb{N} $ be arbitrary, and define the events \[ A_{n,1} := \left\{ \sup_{- \sqrt{n} \leq y \leq \sqrt{n}} \left|\frac{\tilde{X_n}(c + {\rm i} y)}{\tilde{X}(c + {\rm i} y)} - 1\right| \leq \min\left\{ \frac{1}{2}, \frac{c \delta (1 - \delta^{-1} \mathbb{E}[X])}{2 \log 4 } \right\} \right\}, \] \[ A_{n,2} := \left\{ \delta^{-1} \left|\mathbb{E}[X] - \frac{1}{n} \sum_{i=1}^n X_i\right| \leq \delta^{-1} \mathbb{E}[X] (1 - \delta^{-1} \mathbb{E}[X])\right\}, \] and $A_{n} := A_{n,1} \cap A_{n,2}$. We have $A_{n,2} \subset E_n$ (because, using $\rho = \delta^{-1} \mathbb{E}[X]$, the event $A_{n,2}$ implies $\delta^{-1} n^{-1} \sum_{i=1}^n X_i \in [\rho^2, \rho (2 - \rho)] \subset (0,1)$) and thus $A_n \subset E_n$. We show that assumptions (A1)--(A3), as defined in Section \ref{sec:rates}, are satisfied. To this end, we only need to show (A1) and (A3), since (A2) is assumed in the statement of the theorem. $\rhd$ Assumption {(A1).} Let \[\beta := \exp(-2 \lambda \delta) \min\left\{ \frac{1}{2},\frac{c \delta (1 - \delta^{-1} \mathbb{E}[X])}{2 \log 4 } \right\}.\]Then, for $s \in \mathbb{C} _{+}$, \begin{equation}\label{eq:inftildeX} |\tilde{X}(s) | = |\exp(- \lambda \delta + \lambda \delta \tilde{B}(s))| \geq \exp(- \lambda \delta + \lambda \delta \Re( \tilde{B}(s)))\geq \exp(-2 \lambda \delta), \end{equation} which implies that \begin{align*} \prob{A_{n,1}^{\mathfrak{c}}} \leq \prob{ \sup_{- \sqrt{n} \leq y \leq \sqrt{n}} | \tilde{X_n}(c + {\rm i} y) - \tilde{X}(c + {\rm i} y) | > \beta}, \end{align*} so that Lemma \ref{lem:duvroye} then yields \begin{align*} \prob{A_{n,1}^{\mathfrak{c}}} < 4 \left(1 + \frac{8 \sqrt{n} \mathbb{E}[|X|]}{\beta} \right) e^{-n \beta^2 / 18} + \prob{\left| \frac{1}{n} \sum_{i=1}^n X_i \right| \geq \frac{4}{3} \mathbb{E}[|X|]}. 
\end{align*} Since $\sqrt{n} \exp(-n \beta^2 / 18) = O(n^{-1/2})$ and \[\prob{\left| \frac{1}{n} \sum_{i=1}^n X_i \right| \geq \frac{4}{3} \mathbb{E}[|X|]} \leq \prob{\left| \mathbb{E}[X] - \frac{1}{n} \sum_{i=1}^n X_i \right| \geq \frac{1}{3} \mathbb{E}[X] } \leq n^{-1} \frac{9 \mathbb{E}[(X - \mathbb{E}[X])^2]}{\mathbb{E}[X]^2}, \] using the nonnegativity of $X$ and Chebyshev's inequality, it follows that ${\mathbb P}({A_{n,1}^{\mathfrak{c}}} )= O(n^{-1/2})$. Chebyshev's inequality similarly yields $\prob{A_{n,2}^{\mathfrak{c}}} = O(n^{-1/2})$. Hence $\prob{A_{n}^{\mathfrak{c}}} \leq \prob{A_{n,1}^{\mathfrak{c}}} +\prob{A_{n,2}^{\mathfrak{c}}} =O(n^{-1/2})$, which implies that assumption (A1) is satisfied. $\rhd$ {Assumption (A3).} Fix $y \in[-\sqrt{n}, \sqrt{n}]$ and $s = c + {\rm i} y$. Then \begin{eqnarray} \nonumber \lefteqn{ |\,(\Psi \tilde{X}_n)(s) - (\Psi \tilde{X})(s)\,|}\\ &\leq& \left|\frac{s (1 + \delta^{-1} \tilde{X}_n'(0))}{s + \delta^{-1} {\rm Log}(\tilde{X}_n(s))} - \frac{s (1 + \delta^{-1} \tilde{X}_n'(0))}{s + \delta^{-1} {\rm Log}(\tilde{X}(s))} \right| \nonumber \\ \nonumber &&+\, \left|\frac{s (1 + \delta^{-1} \tilde{X}_n'(0))}{s + \delta^{-1} {\rm Log}(\tilde{X}(s))} - \frac{s (1 + \delta^{-1} \tilde{X}'(0))}{s + \delta^{-1} {\rm Log}(\tilde{X}(s))} \right| \\ \nonumber &\leq & \left| \frac{ \delta^{-1} {\rm Log}(\tilde{X}(s)) - \delta^{-1} {\rm Log}(\tilde{X}_n(s))}{s (1 + \delta^{-1} \tilde{X}'(0))} \cdot \frac{s (1 + \delta^{-1} \tilde{X}_n'(0))}{s + \delta^{-1} {\rm Log}(\tilde{X}_n(s))} \cdot \frac{s (1 + \delta^{-1} \tilde{X}'(0))}{s + \delta^{-1} {\rm Log}(\tilde{X}(s))} \right| \\ \nonumber &&+ \, \left| \frac{\delta^{-1} (\tilde{X}_n'(0) - \tilde{X}'(0))}{(1 + \delta^{-1} \tilde{X}'(0))} \cdot \frac{s (1 + \delta^{-1} \tilde{X}'(0))}{s + \delta^{-1} {\rm Log}(\tilde{X}(s))} \right| \\ \nonumber &\leq& \frac{\delta^{-1}}{1 + \delta^{-1} \mathbb{E}[X]} \left| {\rm Log}(\tilde{X}(s)) - {\rm
Log}(\tilde{X}_n(s)) \right| \cdot \left| s^{-1} (\Psi \tilde{X}_n)(s) \right| \cdot \left| (\Psi \tilde{X})(s) \right|\\ \label{eq:mg1logeq1} &&+\, \frac{\delta^{-1}}{1 + \delta^{-1} \mathbb{E}[X]} \left| \mathbb{E}[X] -\frac{1}{n} \sum_{i=1}^n X_i \right| \cdot \left| (\Psi \tilde{X})(s) \right|. \end{eqnarray} If $f: \mathbb{R} \rightarrow \mathbb{C} $ is a continuous function with $f(0) = 1$ and $f(t) \neq 0$ for all $t \in \mathbb{R} $, then for all $t$ such that $|f(t) -1| \leq \frac{1}{2}$ we have ${\rm Log}(f(t)) = L(f(t))$, where, for $z \in \mathbb{C} $ with $|z-1|<1$, \begin{align} \label{eq:Lfunction} L(z) = \sum_{j \geq 1} \frac{(-1)^{j-1}}{j} (z-1)^j; \end{align} this follows from the construction of the distinguished logarithm \citep{Chung2001}. In addition, if $|z-1| \leq \frac{1}{2}$, then \begin{align} \label{eq:Lfunction2} |L(z)| \leq \sum_{j \geq 1} \frac{1}{j} |z-1|^j = \log\left(\frac{1}{1 - |z-1|}\right) \leq |z-1| \log 4. \end{align} This implies that, on $A_n$, we have \begin{eqnarray} \nonumber \lefteqn{\hspace{-1cm}\left|\,{\rm Log}(\tilde{X}_n(c + {\rm i} y)) - {\rm Log}(\tilde{X}(c + {\rm i} y)) \right| = \left|\,{\rm Log}\left( \frac{\tilde{X}_n(c + {\rm i} y)}{\tilde{X}(c + {\rm i} y)}\right) \right| = \left|\,L\left( \frac{\tilde{X}_n(c + {\rm i} y)}{\tilde{X}(c + {\rm i} y)}\right) \right|}\\ &\leq& \left| \frac{\tilde{X}_n(c + {\rm i} y)}{\tilde{X}(c + {\rm i} y)} - 1 \right| \log 4 \leq \left| \tilde{X}_n(c + {\rm i} y) - \tilde{X}(c + {\rm i} y) \right| (\log 4 )\exp(2 \lambda \delta),\label{eq:mg1logeq2} \end{eqnarray} where the last inequality follows from \eqref{eq:inftildeX}.
Furthermore, we have on $A_n$ that \begin{eqnarray} \nonumber\lefteqn{ |s^{-1} (\Psi \tilde{X}_n)(s)| = \left| \frac{1 - \delta^{-1} \frac{1}{n} \sum_{i=1}^n X_i}{s + \delta^{-1} {\rm Log}(\tilde{X}_n(s))} \right|} \\ \nonumber &\leq& \left| \frac{1 - \delta^{-1} \frac{1}{n} \sum_{i=1}^n X_i}{1-\delta^{-1} \mathbb{E}[X]} \right| \cdot \left| \frac{s + \delta^{-1} {\rm Log}(\tilde{X}_n(s))}{s + \delta^{-1} {\rm Log}(\tilde{X}(s))} \right|^{-1} \cdot \left| \frac{1 - \delta^{-1} \mathbb{E}[X]}{s + \delta^{-1} {\rm Log}(\tilde{X}(s))} \right| \\ \nonumber &\leq& \left| 1 + \delta^{-1} \frac{\mathbb{E}[X] - \frac{1}{n} \sum_{i=1}^n X_i}{1-\delta^{-1} \mathbb{E}[X]} \right| \cdot \left| 1 + \frac{\delta^{-1} {\rm Log}(\tilde{X}_n(s) / \tilde{X}(s))}{s + \delta^{-1} {\rm Log}(\tilde{X}(s))} \right|^{-1} \cdot \left| s^{-1} \tilde{Y}(s) \right| \\ \nonumber &\leq& \left(1 + \frac{\delta^{-1} \mathbb{E}[X] (1 - \delta^{-1} \mathbb{E}[X])}{1-\delta^{-1} \mathbb{E}[X]} \right) \cdot \left| 1 + (\Psi \tilde{X})(s) \frac{\delta^{-1} L(\tilde{X}_n(s) / \tilde{X}(s))}{s (1 - \delta^{-1} \mathbb{E}[X])} \right|^{-1} \cdot c^{-1} \\ &\leq& \left(1 + \frac{\delta^{-1} \mathbb{E}[X] (1 - \delta^{-1} \mathbb{E}[X])}{1-\delta^{-1} \mathbb{E}[X]} \right) \cdot 2 \cdot c^{-1}, \label{eq:mg1logeq3} \end{eqnarray} since $|1+z|^{-1} \leq (1 -|z|)^{-1} \leq (1 - 1/2)^{-1}$ for all $z \in \mathbb{C} $ with $|z| \leq \frac{1}{2}$; bear in mind that, in particular, \begin{align*} \left| (\Psi \tilde{X})(s) \frac{\delta^{-1} L(\tilde{X}_n(s) / \tilde{X}(s))}{s (1 - \delta^{-1} \mathbb{E}[X])} \right| \leq \frac{\delta^{-1} c^{-1} |L(\tilde{X}_n(s) / \tilde{X}(s))| }{1 - \delta^{-1} \mathbb{E}[X]} \leq \frac{\delta^{-1} c^{-1} \log 4}{1 - \delta^{-1} \mathbb{E}[X]} \left| \frac{\tilde{X}_n(s)}{\tilde{X}(s)} - 1 \right| \leq \frac{1}{2} \end{align*} on $A_{n,1}$. 
Finally, writing \[ Z_n = \frac{\delta^{-1}}{1 + \delta^{-1} \mathbb{E}[X]} \left| \,\mathbb{E}[X] -\frac{1}{n} \sum_{i=1}^n X_i \,\right| \cdot \left| (\Psi \tilde{X})(s) \right|, \] and noting that $\mathbb{E}[X^2]< \infty$, it follows from Lemma \ref{lem:sqrtlemma} and $|(\Psi \tilde{X})(s)| \leq 1$ that $\mathbb{E}[|Z_n|^p] \leq \kappa_3 n^{-1/2}$ for all $p \in (1,2)$ and some $\kappa_3 > 0$ independent of $n$ and $p$. Combining this with equations \eqref{eq:mg1logeq1}, \eqref{eq:mg1logeq2}, and \eqref{eq:mg1logeq3}, we conclude that assumption (A3) holds. \end{proof} \vspace{\baselineskip} \begin{remark} An important problem in \cite{denBoerMandjesNunezZuraniewski2014} is to develop heuristics for choosing $\delta$, in order to minimize the expected estimation error. In the proof of Theorem \ref{thm:convergenceRate} we showed the upper bound \begin{align} \label{eq:ubmg1} \mathbb{E}\left[ | F_n^Y(w) - F^Y(w)|\right] &\leq \kappa_4 n^{-1/2} + C_1(p) n^{1/2 - 1/p} + \prob{A_n^{\mathfrak{c}}}, \end{align} where $p =p_n= 1 + 1 / (2 \log(n+1))$. A close look at the proof reveals that $\lim_{p \downarrow 1} (p-1)^{1/p} C_1(p) = \exp(cw) \pi^{-1} (2 \kappa_2 + \kappa_3)$, and for the M/G/1 example it is not difficult to show that $\kappa_2 = \kappa_2(\delta) \leq 2 c^{-1} \delta^{-1}$, $\kappa_3 = \kappa_3(\delta) \leq (1 + {\mathbb V}{\rm ar}\,[X]) \delta^{-1} (1 + \rho)^{-1}$, ${\mathbb V}{\rm ar}\,[X] = \delta \lambda \mathbb{E}[B^2]$, and $\prob{A_n^{\mathfrak{c}}} = O(n^{-1})$. This means that, for large $n$, the right-hand side of \eqref{eq:ubmg1} can be approximated by \begin{align} \label{eq:anotherub} (\alpha + \beta \delta^{-1}) e^{1/2} n^{-1/2} \log(n+1) \end{align} where \[ \alpha := \kappa_4 + e^{cw} \pi^{-1} \lambda \mathbb{E}[B^2] (1 + \rho)^{-1},\:\:\:\:\: \beta := e^{cw} \pi^{-1} (4 c^{-1} + (1 + \rho)^{-1}).
\] If we neglect the $\log(n+1)$ term, then, on a fixed time horizon of length $T = \delta n$, the upper bound \eqref{eq:anotherub} equals \begin{align*} (\alpha \delta^{1/2} + \beta \delta^{-1/2}) e^{1/2} T^{-1/2}, \end{align*} which suggests that $\delta$ should be chosen to minimize $\alpha \delta^{1/2} + \beta \delta^{-1/2}$. In the application \cite{denBoerMandjesNunezZuraniewski2014} $\alpha$ and $\beta$ are unknown (because they depend on e.g.\ $\lambda$ and $\mathbb{E}[B^2]$), but if they can be replaced by known upper bounds $\alpha_u$ and $\beta_u$, then a heuristic choice for $\delta$ is to pick a minimizer of $\alpha_u \delta^{1/2} + \beta_u \delta^{-1/2}$ (yielding $\delta = \beta_u/\alpha_u$). \end{remark} \begin{remark} \label{rem:low} Interestingly, the technique described above enables a fast and accurate estimation of rare-event probabilities (i.e., $1-F^Y(w)$ for $w$ large), even in situations in which the estimation is based on input $X_1,\ldots,X_n$ for which the corresponding queue would not have exceeded level $w$. This idea, which resonates with the concepts developed in \cite{MandjesVandeMeent2009}, has been worked out in detail in \cite{denBoerMandjesNunezZuraniewski2014}. A numerical illustration of our estimator in this setting, and a comparison to the empirical estimator, is provided in Section \ref{sec:num}. \end{remark} \subsection{Decompounding} \label{subsec:decompounding} Our second application involves decompounding a compound Poisson distribution, a concept that has been studied in the literature already (see the remarks on this in the introduction). We start by providing a formal definition of the problem. Let $\mathcal{X}$ denote the collection of random variables of the form $\sum_{i=1}^{N'} Y'_i$, with $N'$ Poisson distributed with $\mathbb{E}[N'] > 0$, and $(Y'_i)_{i \in \mathbb{N} }$ i.i.d.\ nonnegative random variables, independent of $N'$, and with $\prob{Y'_1 = 0} = 0$ (which can be assumed without loss of generality).
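As a concrete illustration (with hypothetical parameters: rate $2$ and standard-exponential jumps), data of this form can be generated as follows; note that the empirical fraction of zero observations consistently estimates $e^{-\mathbb{E}[N']}$, which underlies the event $E_n$ and the estimator $\lambda_n$ appearing in the proof below. A minimal sketch:

```python
import math
import random

def compound_poisson_sample(n, lam=2.0, mean_jump=1.0, seed=1):
    """Draw n i.i.d. copies of X = sum_{i=1}^{N'} Y'_i, with N' ~ Poisson(lam)
    and Y'_i ~ Exp(1/mean_jump); the jumps are a.s. positive, so P(Y'_1 = 0) = 0."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        # Poisson variate by CDF inversion (adequate for moderate lam)
        u, k, p = rng.random(), 0, math.exp(-lam)
        cdf = p
        while u > cdf:
            k += 1
            p *= lam / k
            cdf += p
        out.append(sum(rng.expovariate(1.0 / mean_jump) for _ in range(k)))
    return out

samples = compound_poisson_sample(20000, lam=2.0)
zero_fraction = sum(x == 0.0 for x in samples) / len(samples)
lam_hat = -math.log(zero_fraction)  # sample analogue of -log(X~_n(infinity))
```

With these parameters \texttt{zero\_fraction} concentrates around $e^{-2} \approx 0.135$, so \texttt{lam\_hat} is close to the true mean $2$.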
For each $\tilde{X} \in \tilde{\mathcal{X}}$, let, for $s \in \mathbb{C} _{+}$, \[ (\Psi \tilde{X})(s) = 1 + \frac{1}{- \log(\tilde{X}(\infty))} {\rm Log}( \tilde{X}(s)),\] where ${\rm Log}$ denotes the distinguished logarithm of $\tilde{X}$, and \[\tilde{X}(\infty):= \lim_{s \rightarrow \infty, s \in \mathbb{R} } \tilde{X}(s) = \lim_{s \rightarrow \infty, s \in \mathbb{R} } e^{\mathbb{E}[N] (-1 + \tilde{Y}_1(s))} = e^{-\mathbb{E}[N]} \] if $X = \sum_{i=1}^N Y_i$; here the last equality follows from $\prob{Y_1 = 0} = 0$. Let $X = \sum_{i=1}^N Y_i$ be an element of $\mathcal{X}$, for some particular $Y \stackrel{\rm d}{=} Y_1$ and a Poisson distributed random variable $N$ with mean $\lambda > 0$. Since $-\log(\tilde{X}(\infty)) = \lambda$ and $\tilde{X}(s) = \exp(-\lambda + \lambda \mathbb{E}[e^{-s Y}])$, we have $\tilde{Y} = \Psi \tilde{X}$. The idea is to estimate $F^Y(w)$, for $w > 0$, based on a sample $X_1, \ldots, X_n$ of $n \in \mathbb{N} $ independent copies of $X$, using the estimator $F_n^Y(w)$ of Section \ref{sec:method}, with, for $n\in{\mathbb N}$, \[E_n := \left\{ \frac{1}{n} \sum_{i=1}^n {\bf 1}_{\{X_i = 0\}} \in (0,1) \right\}\] and arbitrary $c> 0$. \vspace{\baselineskip} \begin{theorem} \label{thm:decompoundingrates} Consider the estimation procedure outlined above. Suppose $F^{Y}$ is continuously differentiable, twice differentiable in $w$, and suppose $\mathbb{E}[|X|^2] < \infty$. Then there is a constant $C>0$ such that \[\mathbb{E}\left[|F_n^Y(w) - F^Y(w)|\right] \leq C n^{-1/2} \log(n+1) \] for all $n \in \mathbb{N} $.
\end{theorem} \begin{proof} Write \[\lambda_n = -\log(\tilde{X}_n(\infty)) = -\log\left(\frac{1}{n} \sum_{i=1}^n {\bf 1}_{\{X_i = 0\}}\right)\] (being well-defined on $E_n$), and define \[ A_{n,1} := \left\{ \sup_{-\sqrt{n} \leq y \leq \sqrt{n}} | \tilde{X}_n(c + {\rm i} y) - \tilde{X}(c + {\rm i} y) | \leq \exp(-2 \lambda)/2 \right\}, \] \[ A_{n,2} := \left\{ \frac{\lambda }{ 2} \leq \lambda_n \leq 2 \lambda \right\}, \] and $A_n = A_{n,1} \cap A_{n,2}$. Note that $A_{n,2} \subset E_n$ and thus $A_n \subset E_n$. We show that assumptions (A1)--(A3) are valid. Because we explicitly assumed (A2), we are left with verifying (A1) and (A3). These verifications resemble those of the M/G/1 example. $\rhd$ {Assumption (A1).} $\prob{A_{n,1}^{\mathfrak{c}}} = O( \sqrt{n} \exp(-n \beta^2 / 18)) = O(n^{-1/2})$ follows from Lemma \ref{lem:duvroye}, with $\beta = \exp(-2 \lambda)/2$, together with Chebyshev's Inequality and the assumption $\mathbb{E}[|X|^2] < \infty$. $\prob{A_{n,2}^{\mathfrak{c}}} = O(n^{-1/2})$ follows from Hoeffding's Inequality, and thus \[\prob{A_n^{\mathfrak{c}}} \leq \prob{A_{n,1}^{\mathfrak{c}}} + \prob{A_{n,2}^{\mathfrak{c}}} = O(n^{-1/2}).\] $\rhd$ {Assumption (A3).} On $A_n$, for $s = c + {\rm i} y$, $-\sqrt{n} \leq y \leq \sqrt{n}$, we have \[ \left|\,\frac{\tilde{X}_n(s) }{ \tilde{X}(s)} - 1\,\right| \leq \left|\tilde{X}_n(s) - \tilde{X}(s)\right|\, e^{2 \lambda} \leq \frac{1}{2},\] where $|\tilde{X}(s)|^{-1} \leq \exp(2 \lambda)$ follows as in \eqref{eq:inftildeX}, and thus \begin{eqnarray*} \left|{\rm Log}(\tilde{X}_n(s)) - {\rm Log}(\tilde{X}(s)) \right| &=& \left|{\rm Log}(\tilde{X}_n(s) / \tilde{X}(s)) \right| = \left|L(\tilde{X}_n(s) / \tilde{X}(s)) \right| \\ &\leq& \left|\tilde{X}_n(s) - \tilde{X}(s)\right| \,(\log 4) \,e^{2 \lambda}, \end{eqnarray*} using \eqref{eq:Lfunction} and \eqref{eq:Lfunction2}.
This implies \begin{eqnarray*} \lefteqn{\left| (\Psi \tilde{X}_n)(s) - (\Psi \tilde{X})(s) \right| \cdot {\bf 1}_{A_n} }\\ &\leq& \left| \lambda_n^{-1} {\rm Log}(\tilde{X}_n(s)) - \lambda_n^{-1} {\rm Log}(\tilde{X}(s))\right| \cdot {\bf 1}_{A_n} + \left| \lambda_n^{-1} {\rm Log}(\tilde{X}(s)) - \lambda^{-1} {\rm Log}(\tilde{X}(s))\right| \cdot {\bf 1}_{A_n} \\ &\leq& \left|\lambda_n^{-1}\right| \cdot \left|{\rm Log}(\tilde{X}_n(s)) - {\rm Log}(\tilde{X}(s))\right| \cdot {\bf 1}_{A_n} + \left| \lambda_n^{-1} - \lambda^{-1}\right| \cdot \left|{\rm Log}(\tilde{X}(s))\right| \cdot {\bf 1}_{A_n} \\ &\leq& \frac{2 \,\log 4}{\lambda} \,e^{2 \lambda} \cdot\left|\tilde{X}_n(s) - \tilde{X}(s)\right| \cdot {\bf 1}_{A_n} + Z_n \:\text{ a.s.,} \end{eqnarray*} with $Z_n = 2 \lambda^{-2} |\lambda_n - \lambda| \cdot {\bf 1}_{A_n}$. By definition of $A_{n,2}$, $Z_n$ is bounded, and it follows from Hoeffding's inequality that there is a $\kappa_3 > 0$ independent of $n$ such that, for all $1 < p < 2$, $\mathbb{E}[|Z_n|^p] \leq \kappa_3 n^{-1/2}$. This shows that (A3) is valid. \end{proof} \begin{remark} \label{rem:otherdistributions} The decompounding example above can also be carried out with distributions other than Poisson. For example, if $N$ is Bin$(M, p)$ distributed, for known $M \in \mathbb{N}$ and unknown $p \in (0,1)$, then $\tilde{X}(s) = ( p \tilde{Y}(s) + 1-p)^M$, $\tilde{X}(\infty) = (1-p)^M$, and thus \[\tilde{Y}(s) = (\Psi \tilde{X})(s) := \frac{\tilde{X}(s)^{1/M} - \tilde{X}(\infty)^{1/M}}{1 - \tilde{X}(\infty)^{1/M}}.\] Or, if $N$ is negative binomially distributed, i.e. 
\[ \prob{N=n} = \binom{n + M - 1}{n} (1-p)^M p^n, \quad (n = 0,1,2,\ldots), \] for some known $M \in \mathbb{N} $ and unknown $p \in (0,1)$, then $\tilde{X}(s) = (1-p)^M (1 - p \tilde{Y}(s))^{-M}$, $\tilde{X}(\infty) = (1-p)^M$, and thus \[ \tilde{Y}(s) = (\Psi \tilde{X})(s) := \frac{1 - \tilde{X}(\infty)^{1/M} \tilde{X}(s)^{-1/M}}{1 - \tilde{X}(\infty)^{1/M}}. \] For both examples it is not difficult to construct $A_n$ and $Z_n$, in the same spirit as in the proof of Theorem \ref{thm:decompoundingrates}, such that the convergence rates $\mathbb{E}[|F_n^Y(w) - F^Y(w)|] = O(n^{-1/2} \log(n+1))$ hold. The key requirement on $N$ to obtain these rates is that the relation $\tilde{X}(s) = \mathbb{E}[ \tilde{Y}(s)^N]$ can be inverted, such that we can write $\tilde{Y}(s) = (\Psi \tilde{X})(s)$ for some mapping $\Psi$. \end{remark} \section{Auxiliary lemmas} \label{sec:aux} This section contains a number of auxiliary lemmas that are used in the proofs of Theorems \ref{thm:convergenceRate}, \ref{thm:mg1rates}, and~\ref{thm:decompoundingrates}. \begin{lemma} \label{lem:sqrtlemma} Let $c > 0$, $n \in \mathbb{N} $ and let $X_1, \ldots, X_n$ be i.i.d.\ nonnegative random variables distributed as $X$. For all $p \in (1,2)$ and $s \in c + {\rm i} \mathbb{R} $, \begin{eqnarray*} \mathbb{E}\left[ \left| \tilde{X}(s) - \frac{1}{n} \sum_{i=1}^n \exp(-s X_i) \right|^p \right] \leq 2^p n^{-1/2}, \end{eqnarray*} and \begin{eqnarray*} \mathbb{E}\left[ \left| \mathbb{E}[X] - \frac{1}{n} \sum_{i=1}^n X_i \right|^p \right] \leq (1 + {\mathbb V}{\rm ar}\,[X]) n^{-1/2}, \end{eqnarray*} where the last inequality is only informative if \,${\mathbb V}{\rm ar}\,[X]<\infty$. \end{lemma} \begin{proof} Let $s \in c + i \mathbb{R} $.
Since $X_i \geq 0$ a.s.\ for all $i=1, \ldots, n$, we have \begin{align*} \left| \tilde{X}(s) - \frac{1}{n} \sum_{i=1}^n \exp(-s X_i) \right|^{p-1} \leq \left( |\tilde{X}(s)| + \frac{1}{n} \sum_{i=1}^n \left| \exp(-s X_i) \right| \right)^{p-1} \leq 2^{p-1} \text{ a.s.} \end{align*} Jensen's Inequality then implies \begin{align*} \mathbb{E}\left[ \left| \tilde{X}(s) - \frac{1}{n} \sum_{i=1}^n \exp(-s X_i) \right|^p \right] &\leq 2^{p-1} \sqrt{ \mathbb{E}\left[ \left| \frac{1}{n} \sum_{i=1}^n (\exp(-s X_i) - \mathbb{E}[\exp(-s X)]) \right|^2 \right] } \\ &= 2^{p-1} \sqrt{ \frac{1}{n} \mathbb{E}[\left| \exp(-s X) - \mathbb{E}[\exp(-s X)] \right|^2]} \leq 2^p n^{-1/2}. \end{align*} Furthermore, we have, again by Jensen's Inequality, \begin{eqnarray*} \lefteqn{ \mathbb{E}\left[ \left| \mathbb{E}[X] - \frac{1}{n} \sum_{i=1}^n X_i \right|^p \right] \leq n^{-p} \mathbb{E}\left[ \left| \sum_{i=1}^n (X_i - \mathbb{E}[X]) \right|^2 \right]^{p/2}}\\ &\leq& n^{-p} n^{p/2} \mathbb{E}[(X - \mathbb{E}[X])^2]^{p/2} \leq n^{-1/2} {\mathbb V}{\rm ar}\,[X]^{p/2} \leq n^{-1/2} (1 + {\mathbb V}{\rm ar}\,[X]). \end{eqnarray*} This proves the claims. \end{proof} \begin{lemma} \label{lem:duvroye} Let $n \in \mathbb{N} $, and let $X_1, \ldots, X_n$ be i.i.d.\ nonnegative random variables distributed as $X$. Let $\alpha > 0$, $\beta > 0$, $c > 0$, and \[\tilde{X}_n(s) := \frac{1}{n} \sum_{i=1}^n \exp(-s X_i),\] for $s \in \mathbb{C} _{+}$. Then \begin{eqnarray*} \prob{ \sup_{|t| \leq \alpha} | \tilde{X}(c + {\rm i} t) - \tilde{X}_n(c + {\rm i} t)| > \beta} &<& 4 \left(1 + \frac{8 \alpha \mathbb{E}[|X|]}{\beta} \right) \exp(-n \beta^2 / 18) \\ && +\: \prob{ \left| \frac{1}{n} \sum_{i=1}^n X_i \right| \geq \frac{4}{3} \mathbb{E}[|X|]}. 
\end{eqnarray*} \end{lemma} \begin{proof} One can show that, for all $t,s \in [-\alpha, \alpha]$, \[ |\tilde{X}(c + {\rm i} t) - \tilde{X}(c + {\rm i} s)| \leq \mathbb{E}\left[|1 - \exp({\rm i} (t-s) X)|\right],\] and \[ |\tilde{X}_n(c + {\rm i} t) - \tilde{X}_n(c + {\rm i} s)| \leq |t-s| \left| \frac{1}{n} \sum_{i=1}^n X_i \right|,\] whereas, for each $t_i \in [-\alpha, \alpha]$, \begin{eqnarray*} \lefteqn{\prob{ |\tilde{X}(c + {\rm i} t_i) - \tilde{X}_n(c + {\rm i} t_i)| > \frac{1}{3} \beta}} \\ &\leq &\prob{ | \Re\big(\tilde{X}(c + {\rm i} t_i) - \tilde{X}_n(c + {\rm i} t_i)\big)| > \frac{1}{6} \beta} + \prob{ | \Im\big(\tilde{X}(c + {\rm i} t_i) - \tilde{X}_n(c + {\rm i} t_i)\big)| > \frac{1}{6} \beta}\\ &\leq &4 \exp(-2 n \beta^2 / 36), \end{eqnarray*} using Hoeffding's inequality. The claim then follows along precisely the same lines as the proof of \citep[Theorem 1]{Devroye1994}. \end{proof} \begin{lemma} \label{lem:ntaillemma} Let $w > 0$, $c > 0$, and let $f:[0, \infty) \rightarrow [0,1]$ be a continuously differentiable function, twice differentiable in the point $w$, and such that $\int_w^{\infty} |f'(y+w)| e^{-cy} y^{-1} \,{{\rm d}}y < \infty$. There exists a $\kappa_4 > 0$ such that, for all $m > 0$, \[ \left| \int_{|y|>m} \frac{1}{2 \pi} e^{(c+{\rm i} y) w} \bar{f}(c+{\rm i} y) {\rm d} y \right| \leq \frac{\kappa_4}{m}. \] \end{lemma} \begin{proof} Fix $m > 0$.
Observe that \begin{eqnarray} \nonumber \lefteqn{ \int_{|y|\leq m} \frac{1}{2 \pi} e^{(c+{\rm i} y) w} \bar{f}(c+{\rm i} y) {{\rm d}}y = \int_{|y|\leq m} \frac{1}{2 \pi} e^{(c+{\rm i} y) w} \int_0^{\infty} e^{-(c+{\rm i} y) x} f(x) {{\rm d}}x\,{{\rm d}}y } \\ \nonumber &=& \int_0^{\infty} \frac{1}{2 \pi} f(x) \int_{|y|\leq m} e^{(c+{\rm i} y) (w-x)} {{\rm d}}y\,{{\rm d}}x = \int_0^{\infty} \frac{1}{\pi} f(x) e^{c (w-x)} \frac{\sin(m (w-x))}{w-x} {{\rm d}}x \\ &=& \int_{-w}^{\infty} \frac{1}{\pi} f(y + w) e^{-c y} \frac{\sin(m y)}{y} {{\rm d}}y, \label{eq:mstepeq1} \end{eqnarray} using Fubini's Theorem and the variable substitution $y: = x-w$, together with the obvious identity $\sin(-m y) /( -y) = \sin(m y)/y$. We consider the integral \eqref{eq:mstepeq1} separately over the domain $[w, \infty)$ and $[-w,w]$. For the interval $[w, \infty)$, we have \begin{eqnarray} \nonumber \lefteqn{ \left|\int_w^{\infty} \frac{1}{\pi} \frac{f(y + w) e^{-c y}}{y} \sin(m y) {{\rm d}}y \right|}\\ &\leq&\left| \left[ \frac{1}{\pi} \frac{f(y + w) e^{-c y}}{y} \frac{\cos(m y)}{-m} \right]_{y=w}^{\infty} \right| + \left|\int_w^{\infty} \frac{1}{\pi} \pd{}{y} \left[ \frac{f(y + w) e^{-c y}}{y} \right] \frac{\cos(m y)}{m} {{\rm d}}y \right|\nonumber \\ \nonumber &\leq & \left| \frac{1}{\pi} \frac{f(w + w) e^{-c w}}{w} \frac{\cos(m w)}{m} \right|\nonumber \\ &&+\: \int_w^{\infty} \frac{1}{\pi} |f'(y+w)| e^{-cy} y^{-1} \frac{1}{m} {{\rm d}}y + \int_w^{\infty} \frac{1}{\pi} f(y+w) (c e^{-cy} y^{-1} + e^{-c y } y^{-2}) \frac{1}{m} {{\rm d}}y\nonumber \\ \label{eq:prooflemmaeq0} &\leq & \frac{1}{m} \cdot \left(\frac{e^{-c w}}{w \pi} + \frac{1}{\pi} \int_w^{\infty} |f'(y+w)| e^{-cy} y^{-1} {{\rm d}}y + \frac{e^{-c w}}{\pi w} + \frac{e^{-c w}}{\pi c w^2} \right). \end{eqnarray} We now consider the integral \eqref{eq:mstepeq1} on the interval $[-w, w]$. 
Write $\phi(y) := f(y + w) e^{-c y}$ and $g(y) := (\phi(y) - \phi(0) - \phi'(0) y)/y$, and observe that $g$ is continuously differentiable on the interval $[-w, w]$ (which follows from the fact that $f''(w)$ exists). We have \begin{eqnarray} \nonumber \lefteqn{\left|\phi(0) - \int_{-w}^{w} \frac{1}{\pi} f(y + w) e^{-c y} \frac{\sin(m y)}{y} {{\rm d}}y \right|} \\ \nonumber &=& \left|\phi(0) - \int_{-w}^{w} \frac{1}{\pi} \Big(\phi(0) + \phi'(0) y + g(y) y \Big) \frac{\sin(m y)}{y} {{\rm d}}y \right| \\ \label{eq:prooflemmaeq1} &\leq & \phi(0) \left|1 - \int_{-w}^{w} \frac{1}{\pi} \frac{\sin(m y)}{y} {{\rm d}}y \right| + \left|\int_{-w}^{w} \frac{1}{\pi} g(y) \sin(m y) {{\rm d}}y \right|; \end{eqnarray} realize that $\int_{-w}^w \pi^{-1} \phi'(0) \sin(my) {\rm d}y =0$. We first bound the first term of \eqref{eq:prooflemmaeq1}. \begin{align*} \left|1 - \int_{-w}^{w} \frac{\sin(m y)}{\pi y} {{\rm d}}y \right| \leq \left|1 - \int_{-\infty}^{\infty} \frac{\sin(m y)}{\pi y} {{\rm d}}y \right| + \left| \int_{w}^{\infty} \frac{2\sin(m y)}{\pi y} {{\rm d}}y \right| = \left| \int_{w}^{\infty} \frac{2\sin(m y)}{\pi y} {{\rm d}}y \right|. \end{align*} Write $h(a) := \int_w^{\infty} e^{-a y} \,y^{-1}\,{\sin(m y)} {{\rm d}}y$, $a \geq 0$. 
Then $\lim_{a \rightarrow \infty} h(a) = 0$, \begin{eqnarray*} h'(a) &=& \int_w^{\infty} -e^{-a y} \sin(m y) {{\rm d}}y \\&=& -e^{-a w} \int_0^{\infty} e^{-a x} \sin(m(x+w)) {{\rm d}}x = -e^{-a w} \frac{m \cos(w m) + a \sin(w m)}{a^2+m^2}, \end{eqnarray*} and thus \begin{eqnarray*} \left| \int_{w}^{\infty} \frac{\sin(m y)}{y} {{\rm d}}y \right| & = &|h(0) | = \left| \lim_{a \rightarrow \infty} h(a) - \int_0^{\infty} h'(a) {\rm d}a \right|\\ &=& \left| \int_0^{\infty} e^{-a w} \frac{m \cos(w m) + a \sin(w m)}{a^2+m^2}{\rm d}a \right| \\ &\leq& \int_0^{\infty} e^{-a w} \frac{m + a}{a^2+m^2} {\rm d}a \leq \frac{2}{m} \int_0^{\infty} e^{-a w} {\rm d}a = \frac{2}{m w}, \end{eqnarray*} which implies \begin{align} \label{eq:prooflemmaeq2} \left|1 - \int_{-w}^{w} \frac{\sin(m y)}{\pi y} {{\rm d}}y \right| \leq \frac{4}{w \pi m}. \end{align} The second term of \eqref{eq:prooflemmaeq1} is bounded by \begin{eqnarray} \nonumber \lefteqn{\left|\int_{-w}^{w} \frac{1}{\pi} g(y) \sin(m y) {{\rm d}}y \right|} \\ \nonumber &=& \left|\frac{1}{\pi} g(w) \frac{\cos(-m w)}{m} - \frac{1}{\pi} g(-w) \frac{\cos(m w)}{m} - \int_{-w}^{w} \frac{1}{\pi} g'(y) \frac{\cos(-m y)}{m} {{\rm d}}y \right| \\ \label{eq:prooflemmaeq3} &\leq& \frac{|g(w) - g(-w)|}{\pi m} + \frac{1}{\pi m} \int_{-w}^w |g'(y)| {{\rm d}}y. 
\end{eqnarray} Combining \eqref{eq:mstepeq1}, \eqref{eq:prooflemmaeq0}, \eqref{eq:prooflemmaeq1}, \eqref{eq:prooflemmaeq2} and \eqref{eq:prooflemmaeq3}, using $f(w) = \phi(0)$, it follows that \begin{eqnarray*} \lefteqn{\left| \int_{|y| > m} \frac{1}{2 \pi} e^{(c+{\rm i} y) w} \bar{f}(c+{\rm i} y) {{\rm d}}y \right|}\\ &=&\left| f(w) - \int_{|y|\leq m} \frac{1}{2 \pi} e^{(c+{\rm i} y) w} \bar{f}(c+{\rm i} y) {{\rm d}}y \right|\\ &=& \left| f(w) - \int_{-w}^{w} \frac{1}{\pi} f(y + w) e^{-c y} \frac{\sin(m y)}{y} {{\rm d}}y - \int_{w}^{\infty} \frac{1}{\pi} f(y + w) e^{-c y} \frac{\sin(m y)}{y} {{\rm d}}y \right| \\ &\leq & f(w) \frac{4}{\pi m w} + \frac{|g(w) - g(-w)|}{\pi m} + \frac{1}{\pi m} \int_{-w}^w |g'(y)| {{\rm d}}y \\ &&+\,\frac{1}{m} \cdot \left(\frac{e^{-c w}}{w \pi} + \frac{1}{\pi} \int_w^{\infty} |f'(y+w)| e^{-cy} y^{-1} {{\rm d}}y + \frac{e^{-c w}}{\pi w} + \frac{e^{-c w}}{\pi c w^2} \right). \end{eqnarray*} Defining \begin{eqnarray*} \kappa_4 &:=& f(w) \frac{4}{\pi w} + \frac{|g(w) - g(-w)|}{\pi} + \frac{1}{\pi} \int_{-w}^w |g'(y)| {{\rm d}}y \\ & &+\: \frac{e^{-c w}}{w \pi} + \frac{1}{\pi} \int_w^{\infty} |f'(y+w)| e^{-cy} y^{-1} {{\rm d}}y + \frac{e^{-c w}}{\pi w} + \frac{e^{-c w}}{\pi c w^2}, \end{eqnarray*} this implies the statement of the lemma. \end{proof} \section{Numerical illustration} \label{sec:num} We provide a numerical illustration of the performance of our estimator, inspired by an application of estimating high-load probabilities in communication links \cite{denBoerMandjesNunezZuraniewski2014}. In particular, we consider an M/G/1 queue in stationarity that serves jobs at unit speed, and whose (unknown) service time distribution is exponential with mean $1/20$. We choose the (unknown) arrival rate $\lambda$ from $\{10, 18, 19\}$; this corresponds to load factors $\rho$ of 0.50, 0.90, and 0.95. For $n = 10,000$ consecutive time intervals of length $\delta = 0.10$, the amount of work arriving to the queue in each interval is recorded.
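These per-interval samples can be generated exactly by simulating the Poisson arrival stream and binning the incoming work; a minimal sketch (the function name is ours):

```python
import random

def arriving_work_per_interval(lam, n=10000, delta=0.10, mu=20.0, seed=1):
    """Simulate Poisson(lam) arrivals on [0, n*delta] with Exp(mu) job sizes
    (mean 1/mu = 1/20) and record the work arriving in each length-delta bin."""
    rng = random.Random(seed)
    work = [0.0] * n
    t = rng.expovariate(lam)                # first arrival epoch
    while t < n * delta:
        idx = min(int(t / delta), n - 1)    # guard against float rounding
        work[idx] += rng.expovariate(mu)    # job size, mean 1/mu
        t += rng.expovariate(lam)           # next inter-arrival time
    return work

work = arriving_work_per_interval(10.0)     # lam = 10, i.e. rho = 10/20 = 0.50
```

The sample mean of \texttt{work} is close to $\lambda \delta / \mu = 0.05$ for $\lambda = 10$.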
Based on these samples, we estimate the tail probabilities $\prob{Y>w}$ of the workload distribution $Y$ for different values of $w$, using the Laplace-transform based estimator outlined in Section \ref{subsec:mg1}. We test values of $w$ corresponding to the 90th, 99th, and 99.9th percentile of $Y$; the particular values, denoted by $w_{.9}$, $w_{.99}$, and $w_{.999}$, are given in Table \ref{table:w}. \begin{table}[!ht] \begin{center} \caption{90th, 99th, and 99.9th percentiles of $Y$, for different values of $\rho$.} \label{table:w} \begin{tabular}{r|rrr} $\rho$ & $w_{.9}$ & $w_{.99}$ & $w_{.999}$ \\ \hline 0.50 & 0.1609 & 0.3912 & 0.6215 \\ 0.90 & 1.0986 & 2.2499 & 3.4012 \\ 0.95 & 2.2513 & 4.5539 & 6.8565 \end{tabular} \end{center} \end{table} For each $\rho \in \{0.50, 0.90, 0.95\}$ and each of the three corresponding values of $w$, we run 1000 simulations and record the relative estimation error \begin{equation} \label{eq:relativererror} \left| \frac{ (1 - F_n^Y(w)) - \prob{Y>w}}{\prob{Y>w}} \right|, \end{equation} where $F_n^Y(w)$ denotes the outcome of the Laplace-transform based estimator. The simulation average of \eqref{eq:relativererror}, for different values of $\rho$ and $w$, is reported in Table \ref{table:outcomes}, at the lines starting with `Laplace'. We compare the performance of the Laplace-transform based estimator to that of the empirical estimator that samples the workload $Y(i \delta)$ at time points $i \delta$, $i=1,\ldots, n$, and estimates the tail probability $\prob{Y>w}$ by the fraction $n^{-1} \sum_{i=1}^n {\bf 1}_{Y(i \delta) > w}$. The corresponding simulation average of the relative estimation error is reported in Table \ref{table:outcomes}, at the lines starting with `Empirical'. Table \ref{table:outcomes} shows that the Laplace-transform based estimator has a lower relative error than the empirical estimator, for all but one of the tested instances of $\rho$ and $w$.
This is perhaps not surprising, since the `Laplace' estimator is based on i.i.d.\ samples (of the amount of work arriving to the queue in $\delta$ time units), whereas the `Empirical' estimator is based on correlated samples (of the workload in the queue). A third estimator, which is based on the same samples as the `Empirical' estimator, can be constructed as follows: consider the samples of the workload process $Y(i \delta)$, $i=1,\ldots, n$, and let $Q = \{ Y(i \delta) - (Y((i-1) \delta) - \delta) \mid Y((i-1) \delta) \geq \delta, 2 \leq i \leq n\}$. If, for some $i$, $Y((i-1) \delta) \geq \delta$, then the amount of work arrived in the $\delta$ time units prior to time point $i \delta$ is precisely equal to $Y(i \delta) - (Y((i-1) \delta) - \delta)$. (If $Y((i-1) \delta) < \delta$, then the exact amount of work arrived between time points $(i-1)\delta$ and $i \delta$ cannot be inferred from the workload samples.) If we apply the Laplace-transform based estimator to the samples in the set $Q$ (which are independent samples of the amount of work arriving to the queue in $\delta$ time units), then we obtain an estimate of $\prob{Y>w}$ that is based on the same samples as the `Empirical' estimator. The relative estimation error of this third estimator is reported in Table \ref{table:outcomes}, at the lines starting with `Laplace, censored'.
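The construction of $Q$ can be sketched as follows (a minimal sketch; the function name is ours). The returned list can then be fed to the Laplace-transform based estimator:

```python
def censored_arrivals(workload, delta=0.10):
    """Build the set Q from workload samples Y(i*delta): whenever
    Y((i-1)*delta) >= delta, the server worked throughout the interval, so the
    work that arrived equals Y(i*delta) - (Y((i-1)*delta) - delta); intervals
    with Y((i-1)*delta) < delta are censored, i.e. discarded."""
    return [y1 - (y0 - delta)
            for y0, y1 in zip(workload, workload[1:])
            if y0 >= delta]
```

For example, \texttt{censored\_arrivals([0.30, 0.25, 0.15, 0.05, 0.12])} keeps the first three transitions (initial workload at least $\delta = 0.10$) and discards the last.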
\begin{table}[ht] \caption{Average relative estimation error} \label{table:outcomes} \begin{tabular}{lr} \\ $\rho = 0.50$ \, \, \, \, \, & \begin{tabular}{r|rrr} Estimator & $w = w_{.9}$ & $w = w_{.99}$ & $w = w_{.999}$ \\ \hline Laplace & 0.05 & 0.13 & 0.25 \\ Empirical & 0.50 & 0.50 & 0.67 \\ Laplace, censored & 0.15 & 0.39 & 0.67 \end{tabular} \\ & \\ $\rho = 0.90$ \, \, \, \, \, & \begin{tabular}{r|rrr} Estimator & $w = w_{.9}$ & $w = w_{.99}$ & $w = w_{.999}$ \\ \hline Laplace & 0.19 & 0.40 & 0.65 \\ Empirical & 0.29 & 0.96 & 1.82 \\ Laplace, censored & 0.23 & 0.49 & 0.81 \end{tabular} \\ & \\ $\rho = 0.95$ \, \, \, \, \, & \begin{tabular}{r|rrr} Estimator & $w = w_{.9}$ & $w = w_{.99}$ & $w = w_{.999}$ \\ \hline Laplace & 0.39 & 0.96 & 2.09 \\ Empirical & 0.52 & 1.36 & 1.83 \\ Laplace, censored & 0.43 & 1.07 & 2.34 \end{tabular} \end{tabular} \end{table} Table \ref{table:outcomes} shows that the `Laplace, censored' estimator still outperforms the `Empirical' estimator, in all but one instance. Both these estimators are based on the same samples of the workload process. A notable disadvantage of the `Empirical' estimator is that it requires the system to reach high load in order to obtain informative estimates. In practice, particularly in the context of operated communication links, this is not desirable: network operators would certainly intervene if the network load reaches exceedingly high levels. These interventions hamper the estimation of the probability that this high load occurs. In contrast, both the Laplace-transform based estimators produce informative estimates of $\prob{Y>w}$, even if all sampled values of the workload process are below $w$.
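For completeness: since the service times in this illustration are exponential, the workload tail is available in closed form, $\prob{Y>w} = \rho\, e^{-\mu(1-\rho)w}$ with $\mu = 20$, so the percentiles in Table \ref{table:w} can be checked directly. A quick sketch (the function name is ours):

```python
import math

def mm1_percentile(rho, q, mu=20.0):
    """The q-th percentile of the stationary M/M/1 workload:
    solves rho * exp(-mu * (1 - rho) * w) = 1 - q for w."""
    return math.log(rho / (1.0 - q)) / (mu * (1.0 - rho))

# e.g. mm1_percentile(0.50, 0.9) -> 0.16094..., the w_{.9} entry for rho = 0.50
```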
\section{Discussion, concluding remarks} \label{sec:disc} In this paper we have discussed a technique to estimate the distribution of a random variable $Y$, focusing on the specific context in which we have i.i.d.\ observations $X_1,\ldots,X_n$, distributed as a random variable $X$, where the relation between the Laplace transforms of $X$ and $Y$ is known. Our problem was motivated from a practical question of an internet service provider, who wished to develop statistically sound techniques to estimate the packet delay distribution based on various types of probe measurements; specific quantiles of the delay distribution are mutually agreed upon by the service provider and its customers, and posted in the service level agreement. To infer whether these service level agreements are met, the internet provider estimates several tail probabilities of the delay distribution. This explains why we have focused on the setup presented in our paper, concentrating on estimating the distribution function $F^Y(w)$ and bounding the error ${\mathbb E}[|F_n^Y(w)-F^Y(w)|]$ for this $w$. It is noted that various other papers focus on estimating the density, and often use different convergence metrics; some establish asymptotic Normality. A salient feature of our analysis is that the ill-posedness of Laplace inversion, i.e., the fact that the inverse Laplace transform operator is not continuous, does not play a r\^ole. Our estimate $F_n^Y(w)$ is `close' to $F^Y(w)$ if the Laplace transform $\bar{F}_n^Y$ is `close' to the Laplace transform $\bar{F}^Y$, measuring `closeness' of these Laplace transforms by the integral \eqref{eq:term3inproof}. Our assumptions (A1)-(A3) ensure that this integral converges to zero (as $n$ grows large), and Section \ref{sec:appl} shows that these conditions are met in practical applications. 
We therefore do not need regularized inversion techniques as in \cite{MnatsakanovRuymgaartRuymgaart2008} and \cite{Shimizu2010}, with convergence rates of just $1 / \log(n)$. (See further Remark \ref{rem:cdf}). \vspace{2mm} {\sc Acknowledgments ---} {\small This research is partially funded by SURFnet, Radboudkwartier 273, 3511 CK Utrecht, The Netherlands. We thank Rudesindo N\'u\~nez-Queija (University of Amsterdam) and Guido Janssen (Eindhoven University of Technology, the Netherlands) for useful discussions and providing literature references. The constructive comments and suggestions of the anonymous referees have improved the paper, and are kindly acknowledged. Part of this work was done while the first author was affiliated with Eindhoven University of Technology and University of Amsterdam.} {\small
\section{Introduction} In relativistic cosmology, the trichotomy of the Fried\-mann-Robertson-Walker (FRW) models is of prime importance. The spatial geometry determines the evolution of the cosmological model: In the hyper\-bo\-loidal ($k = {-1}$) or the spatially flat ($k = 0$) case, the universe exhibits an initial singularity (`big bang'), and from that `moment' on, the universe is forever expanding. In the case of a closed cosmological model ($k={+1}$) we observe a fundamentally different behavior: ``[\ldots] the dynamical equations of general relativity show that the spatially closed 3-sphere universe will exist for only a finite span of time. [\ldots] at a finite time after the big bang, the universe will achieve a maximum size [\ldots], and then will begin to recontract. [\ldots] a finite time after recontraction begins, a `big crunch' will occur.'' The quotation is taken from~\cite{Wald:1984}. The recollapse of closed FRW cosmologies holds because the \textit{strong energy condition} is imposed on the matter (which is assumed to be a perfect fluid), i.e., $\rho + 3 p \geq 0$, where $\rho$ is the energy density and $p$ the pressure. On this basis it would be tempting to view the recollapse of the spatially closed FRW cosmologies satisfying the strong energy condition as a paradigm for models with less symmetries. That this belief is erroneous has been demonstrated in~\cite{Barrow/Galloway/Tipler:1986}. At least in the case of cosmological models of Bianchi type~IX, into which the closed FRW models are naturally embedded, there exists a rigorous result by Lin and Wald~\cite{Lin/Wald:1989}: Assuming the dominant energy condition, i.e., $|p_i| \leq \rho$ for the anisotropic pressures $p_1$, $p_2$, $p_3$, and non-negative average principal pressure $p$, i.e., $3 p = (p_1+p_2+p_3) \geq 0$, then the `closed-universe-recollapse conjecture'~\cite{Barrow/Galloway/Tipler:1986} holds. 
In the locally rotationally symmetric (LRS) case \textit{with isotropic matter}, it is sufficient to require that $\rho + 3 p \geq \epsilon \rho$ for an arbitrarily small $\epsilon>0$, see~\cite{Heinzle/Rohr/Uggla:2005}. (Note that if $p/\rho \rightarrow -1/3$, there exist models that expand forever approaching the Einstein static universe in the limit.) In this letter, we approach the problem from a different direction by investigating cases where `closed-universe-recollapse' does not hold. We prove that there exist cosmological models (of Bianchi type~IX) satisfying the strong energy condition that do not recollapse but expand forever. Two points are important to emphasize: (i) For these models the strong energy condition is satisfied `by a wide margin'. The assumption we make is that $w = p/\rho$ is a constant, i.e., $w = \mathrm{const} > -1/3$; however, $w < (1-\sqrt{3})/3 \approx -0.244$, hence the average pressure is not positive. (ii) We prove the existence of a \textit{typical} class of models that expand forever (where `typical' is understood in the sense of an open set of initial data of the Einstein equations). An interesting observation is that these cosmological models exhibit partial (i.e., directional) accelerated expansion for late times. As a matter of course, we do not propose the cosmological models we analyze as actual models of the universe. However, we want to emphasize that the matter model we consider is not `exotic'~\cite{Vollick:1997}, i.e., it satisfies all the standard energy conditions (weak, strong and dominant), as opposed to certain `exotic' matter models in cosmology (e.g., phantom fields and dark energy, see for instance ~\cite{Amendola:2004, Scherrer/Sen:2008}; the breaking of the energy conditions stems from the aim to account for the accelerated expansion of the universe). 
The properties of the matter source we consider in this paper resemble those of collisionless matter, elastic matter, and magnetic fields as regards their fundamental aspects~\cite{Calogero/Heinzle:2009, Calogero/Heinzle:2009a}. Classes of models encompassing these important examples (or a subset thereof) have been the basis of previous work, see, e.g.,~\cite{Barrow}. Another explicit example, one that satisfies our concrete assumptions, is the anisotropic fluid model~\cite{Ellis}. For the models we consider, the anisotropic pressures (parallel and perpendicular pressure) are required to satisfy certain bounds (that are compatible with the energy conditions). In particular, the isotropic pressure (average pressure) depends linearly on the energy density, as is usual for perfect fluids, where the proportionality constant is strictly larger than $-\textfrac{1}{3}$. Note, however, that the approach we take does not require us to specify a concrete matter model as long as the basic assumptions of~\cite{Calogero/Heinzle:2009, Calogero/Heinzle:2009a} and the necessary bounds on the anisotropic pressures are satisfied. Finally, note that the restriction to matter sources of the general type of~\cite{Calogero/Heinzle:2009a, Calogero/Heinzle:2009} might yield a special case of the `closed-universe-recollapse conjecture' and, in view of the results of this paper, lead to specific bounds on the matter quantities. Let us briefly comment on the approach we take. We use the dynamical systems approach to spatially homogeneous cosmologies~\cite{Wainwright/Ellis:1997}. However, as we will see, it is essential to avoid the standard Hubble-normalized variables---the results we present here are rather elusive in that approach.
\parskip2ex \paragraph{Equations.} For models of Bianchi type~IX the metric can be written as \begin{equation}\label{metric} d s^2 = -d t^2 + g_{i j}(t) \:\hat{\omega}^i\, \hat{\omega}^j\:, \end{equation} where $\{\hat{\omega}^1,\hat{\omega}^2,\hat{\omega}^3\}$ is a symmetry-adapted coframe satisfying $d\hat{\omega}^1 = - \hat{\omega}^2 \wedge \hat{\omega}^3$ (and cyclic permutations). The Einstein equations comprise evolution equations for the metric, $\partial_t g_{i j} = -2 g_{i l} k^l_{\:\, j}$, and for the extrinsic curvature $k^i_{\:\, j}$, see, e.g.,~\cite{Wainwright/Ellis:1997}. The Gauss constraint reads ${}^3\!R +(\tr k)^2 - k^i_{\:\, j} k^j_{\:\, i} = 2 \rho\,$, where $\rho = -T^0_{\:\, 0}$ is the energy density associated with the energy-momentum tensor $T^\mu_{\:\, \nu}$ and ${}^3\!R$ the scalar three-curvature. In the vacuum case or orthogonal perfect fluid case the metric is diagonal, i.e., $g_{i j}(t) = \diag\big( g_{11}(t), g_{22}(t), g_{33}(t)\big)$; in the locally rotationally symmetric (LRS) case we have $g_{22}(t) \equiv g_{33}(t)$. We restrict ourselves to diagonal metrics even if the matter source is anisotropic. In the diagonal case, an isotropic matter source is characterized by an energy-momentum tensor with $T^i_{\:\, j} = \diag\big( p_1, p_2, p_3\big)$. The `isotropic pressure' $p$ is the average of the anisotropic pressures $p_1$, $p_2$, $p_3$, i.e., $\tr T = 3 p$. Define $w$ and $w_i$, $i=1,2,3$, according to \begin{equation} p = w \rho\:,\qquad p_i = w_i \rho \:; \end{equation} obviously, $w_1 + w_2 + w_3 = 3 w$. Matter that is consistent with LRS symmetry satisfies $w_2 = w_3$. For perfect fluids, $w_1 = w_2 = w_3 = w$, where $w$ is typically assumed to be a constant. 
In this paper we consider anisotropic matter that generalizes perfect fluid matter: We assume that the energy density and the isotropic pressure satisfy a linear equation of state, $w=\mathrm{const}$, where the strong energy condition is supposed to hold, i.e., $w > -1/3$. The rescaled anisotropic pressures, $w_1$, $w_2$, $w_3$, are assumed to be functions of the metric via $(s_1,s_2,s_3)$, where \begin{equation}\label{skdef} s_k = g^{kk} \big(g^{11} + g^{22} + g^{33}\big)^{-1} \quad(\text{no sum over $k$})\:; \end{equation} obviously, $s_1 + s_2 + s_3 =1$. The functions \begin{equation} w_k = w_k(s_1,s_2,s_3) \end{equation} are such that there exists an isotropic state of the matter where $w_1 = w_2 = w_3 = w$, and remain bounded (and take limits) under extreme conditions (when one or more of the $s_i$ are zero). In particular, there exists a constant $v_-$ such that $w_1(0,s_2,s_3) = w_2(s_1,0,s_3) = w_3(s_1,s_2,0) = v_-$. There exist excellent examples for matter models of this type, e.g., collisionless matter, elastic matter, or magnetic fields; for a detailed discussion we refer to~\cite{Calogero/Heinzle:2009}. Let us define the Hubble scalar $H$ as $H= -\tr k/3$ and the shear tensor $(\sigma_1,\sigma_2,\sigma_3)$ as the traceless part of the extrinsic curvature, i.e., $k^i_{\:\, i} = -H - \sigma_i$ (no sum over $i$); $\sigma_1 + \sigma_2 + \sigma_3 = 0$. Furthermore, we introduce the `densitized metric' $(n_1,n_2,n_3)$ by $n_k = g_{k k} (\det g)^{-1/2}$; note that $n_k > 0$. Then the Einstein equations can be expressed as evolution equations for $H$, $(\sigma_1,\sigma_2,\sigma_3)$, and $(n_1,n_2,n_3)$ plus one constraint, which can be used to express $\rho$ in terms of the other variables; this leads to the fact that the matter enters the equations only via $w$ and $w_1$, $w_2$, $w_3$. In the LRS case, which we will focus on henceforth, there exists a plane of rotational symmetry, which we choose to be spanned by the second and the third frame vector. 
Accordingly, $n_2 = n_3$ and $\sigma_2 = \sigma_3$ as well as $s_2 = s_3$; consistently, the matter satisfies $w_2 = w_3$. Let $\sigma := \sigma_2$ and let $s := s_2$; then \begin{equation}\label{sigs} (\sigma_1, \sigma_2, \sigma_3) = ({-2}\sigma, \sigma, \sigma),\; (s_1,s_2,s_3) = (1-2s,s,s). \end{equation} Eq.~\eqref{skdef} implies $s = (2 + n_2/n_1)^{-1}$, so that $s\in(0,1/2)$. Finally we abbreviate the rescaled anisotropic pressure in the plane of rotational symmetry by $u$; more specifically, \begin{equation}\label{uofsdef} u(s) := w_2(1-2s,s,s)\:. \end{equation} The Einstein equations can then be expressed in the variables $H$, $\sigma$, $n_1$, and $n_2$, where $u(s)$ appears in these equations. In the dynamical system approach to cosmology the Einstein equations are expressed in terms of normalized variables. We define the `dominant variable' $D$, see, e.g.,~\cite{Heinzle/Rohr/Uggla:2005}, by \begin{subequations}\label{domvars} \begin{equation}\label{Ddef} D = \sqrt{H^2 + \frac{n_1 n_2}{3}}\:, \end{equation} and we introduce normalized variables according to \begin{equation}\label{normvars} \Sigma_D = \frac{\sigma}{D} \:,\quad r = \frac{n_1}{D}\,\sqrt{\frac{n_1^2}{D^2} + \frac{n_2^2}{9 D^2}}\:; \end{equation} in addition we use the variable \begin{equation}\label{sinn1n2} s = \Big(2 + \frac{n_2}{n_1}\Big)^{-1} \:. \end{equation} \end{subequations} Further, we define a normalized energy density $\Omega_D = \rho/(3 D^2)$, and we replace the cosmological time $t$ by a rescaled time variable $\tau$ through \begin{equation}\label{newtime} \frac{d}{d\tau} = \frac{1}{D} \,\frac{d}{d t}\:. \end{equation} Henceforth, a prime denotes differentiation w.r.t.\ $\tau$. \begin{Remark} The evolution equation for $H$ is \begin{equation}\label{Heq} H^\prime = -\frac{1}{D} \,\big( H^2 + q_D D^2 \big) \:, \end{equation} where $q_D$ is given by $q_D = 2 \Sigma_D^2 + (1/2)( 1 + 3 w) \Omega_D$. 
This leads to an important remark: Equation~\eqref{Heq} implies that $H$ is decreasing if $H = 0$, i.e., $H^\prime |_{H = 0} = -q_D D^{-1}< 0$; therefore, a cosmological model with $H(\tau_0) > 0$ at some time $\tau_0$ satisfies $H(\tau) > 0$ $\forall \tau \leq \tau_0$. Consequently, by proving the existence of models that satisfy $H(\tau) > 0$ for all sufficiently large $\tau$, we prove the existence of models with $H>0$ and thus positive expansion for all times. \end{Remark} It is not difficult to prove that the transformation~\eqref{domvars} between the `metric variables' $H$, $\sigma$, $n_1$, $n_2$ and the dynamical systems variables $D$, $\Sigma_D$, $r$, $s$ is one-to-one on the set $H > 0$. (This is sufficient for our purposes, see the previous remark.) Expressed in the variables $D$, $\Sigma_D$, $r$, $s$, the Einstein evolution equations split into a decoupled equation for $D$, \begin{equation}\label{Ddecoupled} D^\prime = -D \Big( H_D (1+ q_D) + \Sigma_D (1 - H_D^2)\Big)\:, \end{equation} and a system of coupled equations for the normalized variables~\eqref{normvars} and~\eqref{sinn1n2}, \begin{subequations}\label{dynsys} \begin{align} r^\prime & = r \left( 2 H_D (q_D - H_D \Sigma_D) - \frac{54 \Sigma_D s^2}{1-4 s + 13 s^2} \right) \:,\\[0.5ex] \nonumber \Sigma_D^\prime & = -(2- q_D) H_D\Sigma_D - (1-H_D^2) (1-\Sigma_D^2) \: + \\ \label{SigDeq} & \qquad\qquad\qquad + \textfrac{1}{3} N_{1D}^2 + 3\Omega_D \big(u(s) -w\big) \:,\\[0.5ex] s^\prime & = -12 s \big( \textfrac{1}{2} - s \big) \Sigma_D \:, \end{align} \end{subequations} where $q_D = 2 \Sigma_D^2 + \textfrac{1}{2} ( 1 + 3 w) \Omega_D$, and $H_D = H_D(r,s)$ and $N_{1D} = N_{1D}(r,s)$ are functions of $r$ and $s$, see~\eqref{HDN1Dinrs}. The Gauss constraint reads \begin{equation}\label{gcon} \Sigma_D^2 + \textfrac{1}{12} N_{1D}^2(r,s) + \Omega_D = 1\:; \end{equation} it is used to solve for $\Omega_D$. 
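As a numerical sanity check on the change of variables~\eqref{domvars}, the following minimal Python sketch (the sampling ranges are arbitrary choices of ours) verifies that the ratios $H/D$ and $n_1/D$ are indeed functions of $(r,s)$ alone, using the closed-form expressions for $H_D(r,s)$ and $N_{1D}(r,s)$ stated below in~\eqref{HDN1Dinrs}:

```python
import math
import random

def to_normalized(H, n1, n2):
    """Map the metric variables (H, n1, n2) to (D, r, s), following the
    definitions D = sqrt(H^2 + n1*n2/3),
    r = (n1/D)*sqrt(n1^2/D^2 + n2^2/(9 D^2)), and s = (2 + n2/n1)^(-1)."""
    D = math.sqrt(H ** 2 + n1 * n2 / 3.0)
    r = (n1 / D) * math.sqrt(n1 ** 2 / D ** 2 + n2 ** 2 / (9.0 * D ** 2))
    s = 1.0 / (2.0 + n2 / n1)
    return D, r, s

def H_D(r, s):
    # Closed-form expression for H/D in terms of (r, s) alone.
    return math.sqrt(1.0 - r * (1.0 - 2.0 * s)
                     / math.sqrt(1.0 - 4.0 * s + 13.0 * s ** 2))

def N_1D(r, s):
    # Closed-form expression for n1/D in terms of (r, s) alone.
    return math.sqrt(3.0 * r * s / math.sqrt(1.0 - 4.0 * s + 13.0 * s ** 2))

random.seed(0)
for _ in range(1000):
    H = random.uniform(0.1, 5.0)     # expanding phase: H > 0
    n1 = random.uniform(0.1, 5.0)    # n_k > 0 by construction
    n2 = random.uniform(0.1, 5.0)
    D, r, s = to_normalized(H, n1, n2)
    assert abs(H_D(r, s) - H / D) < 1e-10
    assert abs(N_1D(r, s) - n1 / D) < 1e-10
```

Since the transformation is one-to-one on the set $H>0$, these identities confirm that $(D,\Sigma_D,r,s)$ carries the same information as the metric variables $(H,\sigma,n_1,n_2)$.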
The functions $H_D(r,s)$ and $N_{1D}(r,s)$ are \begin{subequations}\label{HDN1Dinrs} \begin{align} \label{HDinrs} H_D & := \frac{H}{D} = H_D(r,s) = \sqrt{1 - r \frac{1-2s}{\sqrt{1-4s+13 s^2}}} \:\,,\\ \label{N1Dinrs} N_{1D} & := \frac{n_1}{D} = N_{1D}(r,s) = \sqrt{\frac{3 r s}{\sqrt{1-4s +13 s^2}}}\:\,; \end{align} \end{subequations} in particular, $H_D(r,s)$ and $N_{1D}(r,s)$ are well-defined and regular on the preimage of the set $\mathbb{R}^+ \times \mathbb{R}^+$. (We refrain from going into details in this paper, since we merely use that~\eqref{HDN1Dinrs} is well-behaved for sufficiently small $r$; we refer to~\cite{Calogero/Heinzle:2009a} for details.) \begin{Remark} More common than~\eqref{domvars} is the Hubble-nor\-ma\-lized approach, see, e.g.,~\cite{Wainwright/Ellis:1997}, where the Hubble scalar $H$ is employed instead of $D$ to construct scale-invariant variables and a simpler set of normalized variables is used instead of~\eqref{normvars} and~\eqref{sinn1n2}. Although the resulting equations are simpler than~\eqref{dynsys}, we do not have a choice; we will see that the Hubble-normalized approach necessarily fails to uncover our results. \end{Remark} The dynamical system~\eqref{dynsys} completely describes the dynamics of locally rotationally symmetric cosmological models of Bianchi type~IX in their expanding phase ($H > 0$). In other words, each solution of~\eqref{dynsys} yields an LRS Bianchi type~IX model in its expanding phase, and conversely, the expanding phase of each model is represented by a solution of~\eqref{dynsys}. The main advantage of the system~\eqref{dynsys} over other representations of the Einstein equations lies in its extendability: The system~\eqref{dynsys} possesses a regular extension to its boundaries $\Sigma_D = \pm 1$, $s=0$, $s=\textfrac{1}{2}$, and $r=0$. \parskip2ex \paragraph{Anisotropic matter.} The function $u(s)$ in~\eqref{SigDeq} encodes the properties of the matter model. 
For isotropic matter we have $u(s) \equiv w$; for anisotropic matter, $u(s)$ represents the rescaled anisotropic pressure in the plane of local rotational symmetry, cf.~\eqref{uofsdef}; recall that $s\in [0,1/2]$. From $w_2(s_1,0,s_3) = v_-$ we obtain $u(0) = v_-$, cf.~\eqref{sigs} and~\eqref{uofsdef}. The value of $u$ at $s=1/2$ is not independent; to see this recall first that $w_1 + 2 w_2 = 3 w$; now, $s = 1/2$ corresponds to $(s_1,s_2,s_3) = (0,1/2,1/2)$, so $w_1(0,1/2,1/2) + 2 w_2(0,1/2,1/2) = 3 w$; hence, $w_1(0,s_2,s_3) = v_-$ results in $v_- + 2 u(1/2) = 3 w$. Summarizing, \begin{equation}\label{uvals} u(0) = v_- \:,\qquad u(1/2) = w + (1/2) \big(w-v_-\big)\:. \end{equation} For reasonable matter models like collisionless matter, elastic matter, or magnetic fields, $u(s)$ is a monotone function on $[0,1/2]$ interpolating between the values~\eqref{uvals}; we refer to~\cite{Calogero/Heinzle:2009} for examples. It is often beneficial to use the anisotropy parameter $\beta$ instead of the constant $v_-$; it is defined as \begin{equation}\label{betadef} \beta = \frac{2 ( w- v_-)}{1-w}\:. \end{equation} Finally, let us state the assumptions on the matter model we consider in this paper. We assume that \begin{equation}\label{wdomain} -\frac{1}{3} \,<\, w\,<\, \frac{1-\sqrt{3}}{3} \approx -0.244 \:, \end{equation} hence the \textit{strong energy condition is satisfied}. Furthermore, $v_-$ is assumed to satisfy \begin{equation}\label{vmdomain} \begin{split} & \textfrac{1}{6} \left( 1 + 6 w - \sqrt{-3+(1-3 w)^2} \right) < v_- \\ & \qquad\qquad v_- < \textfrac{1}{6} \left( 1 + 6 w + \sqrt{-3+(1-3 w)^2} \right). \end{split} \end{equation} The admissible domain of the parameters $w$ and $v_-$ is depicted in Fig.~\ref{vpmminmax}. 
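The admissible domain can also be probed numerically. The following minimal sketch (grid resolutions are arbitrary choices of ours) samples the interior of~\eqref{wdomain} and~\eqref{vmdomain} and checks the dominant energy condition $|v_\pm|<1$ as well as the bounds $-1/2<\beta<0$ on the anisotropy parameter~\eqref{betadef}:

```python
import math

def v_minus_bounds(w):
    """Open interval (vmdomain) of admissible v_- for given w."""
    disc = math.sqrt(-3.0 + (1.0 - 3.0 * w) ** 2)
    return (1.0 + 6.0 * w - disc) / 6.0, (1.0 + 6.0 * w + disc) / 6.0

w_max = (1.0 - math.sqrt(3.0)) / 3.0           # ~ -0.244, upper end of (wdomain)
n = 200
for i in range(1, n):
    w = -1.0 / 3.0 + (w_max + 1.0 / 3.0) * i / n   # interior of (wdomain)
    lo, hi = v_minus_bounds(w)
    for j in range(1, 10):
        v_m = lo + (hi - lo) * j / 10.0            # interior of (vmdomain)
        v_p = 3.0 * w - 2.0 * v_m                  # pressure orthogonal to the plane
        beta = 2.0 * (w - v_m) / (1.0 - w)         # anisotropy parameter (betadef)
        assert abs(v_m) < 1.0 and abs(v_p) < 1.0   # dominant energy condition
        assert -0.5 < beta < 0.0                   # beta negative, 1 + 2*beta > 0
```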
Clearly, the dominant energy condition is satisfied since the rescaled anisotropic pressure $v_-$ (which is the pressure in the plane of symmetry) and its counterpart $v_+ = 3 w- 2 v_-$ (which is the pressure in the orthogonal direction) satisfy $|v_\pm| < 1$. \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{figure1.eps} \caption{The admissible values of the rescaled isotropic pressure $w$ and $v_-$, $v_+ = 3 w -2 v_-$, which represent (the extremes of) the rescaled anisotropic pressures in the plane of symmetry and orthogonal to it, respectively. Both the strong and the dominant energy condition are satisfied. $(1-\sqrt{3})/3 \approx -0.24$.} \label{vpmminmax} \end{center} \end{figure} \parskip2ex \paragraph{Results.} Since the dynamical system~\eqref{dynsys} extends regularly to $r=0$ it is natural to analyze the system induced on that surface. From~\eqref{HDN1Dinrs} we obtain $H_D|_{r=0} = 1$, $N_{1D}^2|_{r=0} =0$, so that~\eqref{gcon} becomes $\Omega_D = 1 -\Sigma_D^2$. This in turn implies $q_D = \textfrac{1}{2} (1 + 3 w) + \textfrac{3}{2} (1 - w) \Sigma_D^2$. Insertion into~\eqref{dynsys} yields \begin{subequations}\label{dynsysonr0} \begin{align} \Sigma_D^\prime & = -3 (1-\Sigma_D^2) \Big( \textfrac{1}{2} (1 -w) \Sigma_D - \big(u(s)-w \big)\Big) \:,\\ s^\prime & = -12 \Sigma_D \,s \left(\textfrac{1}{2} - s \right)\:. \end{align} \end{subequations} The state space for this two-dimensional dynamical system is $[-1,1] \times [0,1/2]$. The dynamical systems analysis is straightforward. We use that $u(s)$ is a function such that $u(0) = v_-$ and $u(1/2) = w + (w - v_-)/2$; here, $w$ and $v_-$ are assumed to satisfy~\eqref{wdomain} and~\eqref{vmdomain}, respectively. We focus our attention on the fixed point $\mathrm{R}$ of~\eqref{dynsysonr0}, which is given by \begin{equation} \mathrm{R}: r=0, \: s = 0, \:\Sigma_D = -\beta \:. 
\end{equation} It is straightforward to prove that $\mathrm{R}$ is a sink for the flow of the system~\eqref{dynsysonr0}, because \begin{subequations} \begin{align} \label{sinkprop} s^{-1} s^\prime \big|_{\mathrm{R}} & = 6 \beta \,< \,0 \:,\\[0.5ex] \nonumber (\Sigma_D + \beta)^{-1} (\Sigma_D + \beta)^\prime \big|_{\mathrm{R}} & = -\textfrac{3}{2} ( 1- \beta^2) (1 - w) \,< \,0\:, \end{align}% \addtocounter{equation}{1}% hence the eigenvalues of the linearization of~\eqref{dynsysonr0} at $\mathrm{R}$ are negative. The crucial property of the fixed point $\mathrm{R}$ is revealed by considering the full system~\eqref{dynsys}: $\mathrm{R}$ is a sink not only on the boundary $r=0$, but also for the full system~\eqref{dynsys}. To see this we simply compute \begin{equation}\label{r-1r'} r^{-1} r^\prime \big|_{\mathrm{R}} = 2 \Big( \textfrac{3}{2} (1- w) \beta^2 + \textfrac{1}{2} ( 1 + 3 w) + \beta \Big) < 0 \end{equation} \end{subequations} and use~\eqref{vmdomain} to establish that the right-hand side is negative. Since the fixed point $\mathrm{R}$ is a sink for the flow of the system~\eqref{dynsys}, i.e., for LRS Bianchi type~IX models with anisotropic matter satisfying~\eqref{wdomain} and~\eqref{vmdomain}, there exists an open subset of LRS type~IX initial data such that the corresponding solutions converge to $\mathrm{R}$. Because $H_D = 1$ at $\mathrm{R}$ we infer from~\eqref{HDinrs} that $H$ is positive for these solutions for late times; by the remark following~\eqref{Heq} we obtain that $H$ is positive, i.e., these models are \textit{expanding for all times}. In the following we analyze the asymptotic behavior of these forever expanding LRS type~IX solutions in terms of the metric variables: For a solution of~\eqref{dynsys} that converges to the point $\mathrm{R}$ we find $r\rightarrow 0$, $s\rightarrow 0$, $\Sigma_D \rightarrow -\beta$, hence $H_D \rightarrow 1$, $N_{1D} \rightarrow 0$ by~\eqref{HDN1Dinrs}. 
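The sink property of $\mathrm{R}$ can be illustrated by integrating the full system~\eqref{dynsys} numerically. The following sketch uses the sample parameters $w=-0.3$, $v_-=-0.1$ (inside the admissible domain~\eqref{wdomain}, \eqref{vmdomain}) and a hypothetical linear interpolation for $u(s)$ that matches the boundary values~\eqref{uvals} but is otherwise our own choice; the solution is driven to $(r,s,\Sigma_D)=(0,0,-\beta)$:

```python
import math

# Sample parameters in the admissible domain; u(s) below is a hypothetical
# linear interpolation between u(0) = v_- and u(1/2) = w + (w - v_-)/2.
w, v_m = -0.3, -0.1
beta = 2.0 * (w - v_m) / (1.0 - w)           # ~ -0.3077

def u(s):
    return v_m + 3.0 * s * (w - v_m)

def rhs(state):
    """Right-hand side of the system (dynsys) for (r, Sigma_D, s)."""
    r, Sig, s = state
    g = math.sqrt(1.0 - 4.0 * s + 13.0 * s ** 2)
    HD = math.sqrt(1.0 - r * (1.0 - 2.0 * s) / g)
    N1D2 = 3.0 * r * s / g
    Om = 1.0 - Sig ** 2 - N1D2 / 12.0        # Gauss constraint (gcon)
    qD = 2.0 * Sig ** 2 + 0.5 * (1.0 + 3.0 * w) * Om
    dr = r * (2.0 * HD * (qD - HD * Sig) - 54.0 * Sig * s ** 2 / g ** 2)
    dSig = (-(2.0 - qD) * HD * Sig - (1.0 - HD ** 2) * (1.0 - Sig ** 2)
            + N1D2 / 3.0 + 3.0 * Om * (u(s) - w))
    ds = -12.0 * s * (0.5 - s) * Sig
    return (dr, dSig, ds)

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs(tuple(x + 0.5 * h * k for x, k in zip(state, k1)))
    k3 = rhs(tuple(x + 0.5 * h * k for x, k in zip(state, k2)))
    k4 = rhs(tuple(x + h * k for x, k in zip(state, k3)))
    return tuple(x + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (0.1, 0.2, 0.1)                      # (r, Sigma_D, s) near the basin of R
h = 0.01
for _ in range(8000):                        # integrate to tau = 80
    state = rk4_step(state, h)
r, Sig, s = state
assert r < 1e-4 and s < 1e-10                # r -> 0 and s -> 0
assert abs(Sig + beta) < 1e-3                # Sigma_D -> -beta
```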
Furthermore, $N_{2D}:= n_2/D \rightarrow \infty$; to see this we first note that $s\rightarrow 0$ implies $s \sim N_{1D} N_{2D}^{-1}$ by~\eqref{sinn1n2}. Then, from $N_{1D} \sim \sqrt{r s}$, see~\eqref{N1Dinrs}, we conclude that $N_{2 D} \sim \sqrt{r/s}$. Using that $s^{-1} s^\prime|_{\mathrm{R}} = 6 \beta$, cf.~\eqref{sinkprop}, and $r^{-1} r^\prime|_{\mathrm{R}} = 2 (q + \beta)$, cf.~\eqref{r-1r'}, where \begin{equation*} q:= q_D\big|_{\mathrm{R}} = \textfrac{3}{2} (1- w) \beta^2 + \textfrac{1}{2} ( 1 + 3 w)\:, \end{equation*} we finally get $N_{2 D}^{-1} N_{2 D}^\prime|_{\mathrm{R}} = q - 2 \beta > 0$, and the claim follows. The fact that $N_{2D} = n_2/D \rightarrow \infty$ implies that $N_2 := n_2/H \rightarrow \infty$ as $\tau\rightarrow \infty$ since $H \simeq D$ in the limit (because $H_D = H/D = 1$). This property is completely unproblematic here, but makes the treatment of the problem extremely difficult in the standard Hubble-normalized approach. For large $\tau$, the shear variables are $\sigma_1 = -2\sigma = -2 D \Sigma_D \simeq 2 D \beta$ and $\sigma_2 = \sigma_3 = \sigma = D \Sigma_D \simeq -D \beta$. To obtain the metric we integrate $\partial_t g_{kk} = 2 g_{kk} ( H + \sigma_k )$ (which is a consequence of $\partial_t g_{i j} = -2 g_{i l} k^l_{\:\, j}$). In the first step we note that $H = H_D D = D$ so that the equation reads $\partial_t g_{kk} = 2 g_{kk} ( D + \sigma_{k} )$, i.e., \begin{equation}\label{partialgs} \partial_\tau g_{11} = 2 g_{11} ( 1 + 2 \beta )\:,\quad \partial_\tau g_{22} = 2 g_{22} ( 1 - \beta )\:. \end{equation} Second, we integrate $D^{-1} D^\prime|_{\mathrm{R}} = -(1+q)$ and~\eqref{newtime} to get $\tau = (1 + q)^{-1} \log(t-t_0) + \tau_0$ with constants $\tau_0$ and $t_0$. 
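The signs of these exponents can be verified numerically over the whole admissible parameter domain. The sketch below (grid resolution is an arbitrary choice of ours) checks the sink condition $q+\beta<0$ and the growth condition $q-2\beta>0$, and evaluates the directional expansion rate $(1-\beta)/(1+q)$, whose supremum over the domain of Fig.~\ref{vpmminmax} is approximately $1.112$:

```python
import math

def beta_q(w, v_m):
    """Anisotropy parameter (betadef) and deceleration parameter q at R."""
    beta = 2.0 * (w - v_m) / (1.0 - w)
    q = 1.5 * (1.0 - w) * beta ** 2 + 0.5 * (1.0 + 3.0 * w)
    return beta, q

w_max = (1.0 - math.sqrt(3.0)) / 3.0             # upper end of (wdomain)
best = 0.0
for i in range(1, 400):
    w = -1.0 / 3.0 + (w_max + 1.0 / 3.0) * i / 400.0
    disc = math.sqrt(-3.0 + (1.0 - 3.0 * w) ** 2)
    lo, hi = (1.0 + 6.0 * w - disc) / 6.0, (1.0 + 6.0 * w + disc) / 6.0
    for j in range(1, 400):
        beta, q = beta_q(w, lo + (hi - lo) * j / 400.0)
        assert q + beta < 0.0                    # sink condition (r-1r')
        assert q - 2.0 * beta > 0.0              # N_2D -> infinity
        rate = (1.0 - beta) / (1.0 + q)
        assert rate > 1.0                        # accelerated expansion in the plane
        best = max(best, rate)
assert 1.10 < best < 1.1125                      # supremum ~ 1.112, near w = -1/3
```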
Therefore,~\eqref{partialgs} leads to an asymptotic behavior of the metric represented by \begin{equation}\label{metricexp} g_{11} \propto (t- t_0)^{\frac{2(1+2\beta)}{1+q}} \:,\quad g_{22} \propto (t- t_0)^{\frac{2(1-\beta)}{1+q}} \end{equation} as $t\rightarrow \infty$. Note that $\beta$ is negative, but $1 + 2\beta$ is positive, which is a consequence of~\eqref{vmdomain}. An interesting observation is the occurrence of partial (directional) accelerated expansion. A straightforward calculation reveals that $(1-\beta)/(1+q) > 1$, hence lengths in the plane of local rotational symmetry expand at an accelerated rate. The maximal rate of acceleration is obtained by maximizing $(1-\beta)/(1+q)$ over the domain depicted in Fig.~\ref{vpmminmax}; the maximal value of $(1-\beta)/(1+q) \approx 1.112$ is attained for $w$ close to $-1/3$. We conclude by restating the main result: The behavior~\eqref{metricexp} is typical, i.e., there exists an open set of LRS type~IX initial data such that the associated solutions of the Einstein equations with anisotropic matter behave as~\eqref{metricexp} as $t\rightarrow \infty$ and thus expand forever. (These solutions do not behave extraordinarily in other respects; for instance, there exist forever expanding solutions that isotropize toward the singularity.) In this sense, eternal expansion is as likely as recollapse in the case of LRS Bianchi type~IX with anisotropic matter that satisfies the conditions~\eqref{wdomain} and~\eqref{vmdomain}, and thus in particular the strong energy condition. \parskip2ex \paragraph{Acknowledgements.} We gratefully acknowledge the hospitality of the Mittag-Leffler Institute, Sweden. SC is supported by Ministerio Ciencia e Innovaci\'on, Spain (Project MTM2008-05271).
\section{Introduction and Preliminaries} The main theme of the paper will be to produce many disparate models of an $\aleph_0$-stable theory, assuming some type of non-structure hypothesis. In all cases, to show the complexity of a model $M$, we concentrate on the regular types $p\in S(M)$ that have {\em finite dimension in $M$} i.e., for some (equivalently for every) finite $A\subseteq M$ on which the regular type is based and stationary, we have $\dim(p|A,M)$ finite. That is, there is no infinite, $A$-independent set of realizations of $p|A$ in $M$. Clearly, this notion is isomorphism invariant. If $f$ is an isomorphism between $M$ and $N$, then $p\in S(M)$ has finite dimension in $M$ if and only if $f(p)$ has finite dimension in $N$. This yields a criterion for two models to be non-isomorphic: If two models are isomorphic, their regular types of finite dimension must correspond. In order to get our non-structure results, we need to identify and analyze both those regular types that are capable of having finite dimension in a model (which we term {\em eni}) as well as those regular types that are capable of `supporting' an eni type. Lumped together, these regular types are called {\em eni-active} (see Definition~\ref{lump}) and we call a regular type {\em dull} if it is not eni-active. With Proposition~\ref{eninotdull}, we see that this eni-active/dull partition of regular types has many equivalent descriptions. It is particularly useful that this dichotomy is preserved under the equivalence relation of non-orthogonality. The paper begins by stating some well-known results about models of $\aleph_0$-stable theories and then identifying various species of regular types. We close Section~1 by proving a structure result for dull types, Proposition~\ref{DULLchain}, that indicates their name is apt. This result lays the foundation for Theorems~\ref{eniactivetheorem} and \ref{maingap}. 
In Section~2, we define the notion of having a `DOP witness' and define many different variants of `eni-DOP'. Fortunately, with Theorem~\ref{equiv}, we see that an $\aleph_0$-stable theory admits one of these variants if and only if it admits them all. Thereafter, we choose the term `eni-DOP' for its brevity. Theorem~\ref{equiv} also asserts that among $\aleph_0$-stable theories, eni-DOP is equivalent to the Omitting Types Order Property (OTOP), as well as to the existence of an independent triple of countable, saturated models over which the prime model is not saturated. Our first major result, Theorem~\ref{eniDOPthm}, proves that among $\aleph_0$-stable theories, those possessing eni-DOP are Borel complete. The existence of a {\em finite approximation\/} to a DOP witness (see Subsection~\ref{twotypes}) gives a procedure for constructing a model $M_G$ to code any bipartite graph $G$. In such a coding, the edge set of $G$ corresponds to the types of finite dimension in $M_G$. However, it is far from obvious how to recover the vertex set of $G$ from $M_G$. A weak attempt at this is given in Proposition~\ref{biggy}, where given an isomorphism $f$ between two models $M_G$ and $M_{G'}$, there is a number $\ell$ (depending largely on ${\rm wt}(f(a)/a)$) so that the image of a complete graph of size $m>\ell$ is almost complete. As the number $\ell$ depends on the isomorphism and cannot be predicted in advance, we obtain our Borel completeness result by first coding an arbitrary tree ${\cal T}$ into a graph $G^*_{{\cal T}}$ in which each node $\eta\in{\cal T}$ corresponds to a sequence of finite, complete subgraphs of arbitrarily large size. Then, by composing this map with the coding of graphs into models described above, we obtain a $\lambda$-Borel embedding of subtrees of $\lambda^{<\omega}$ into models of our theory. 
It is noteworthy that had we been able to add finitely many constant symbols to the language, the proof of Borel completeness in the expanded language would have been much easier. Once Theorem~\ref{eniDOPthm} has been established, for the remainder of the paper we assume that $T$ is $\aleph_0$-stable with eni-NDOP. In Section~5 we introduce and relate several notions of decompositions of a given model $M$. In Definition~\ref{decompdef}, decompositions are named [regular, eni, eni-active] according to the species of ${\rm tp}(a_\nu/M_{\nu^-})$. With Theorems~\ref{regulartheorem}, \ref{eniactivetheorem}, and \ref{enitheorem} we measure the extent to which one can recover a model $M$ from a decomposition of it. Some of these results appear or are implicit in \cite{SHM} and \cite{K}, but are included here to contrast the pros and cons of each species of decomposition. In Section~6 we define an $\aleph_0$-stable theory $T$ to be {\em eni-deep} if it has eni-NDOP and some model $M$ has an eni-active decomposition with an infinite branch. With Theorem~\ref{enideepthm}, we prove that any $\aleph_0$-stable, eni-deep theory is Borel complete. The proof uses a major result from \cite{ShL} as a black box. Finally, in Section~7, we collect our results into Theorem~\ref{maingap}, which characterizes those $\aleph_0$-stable theories that have maximally large families of $L_{\infty,\aleph_0}$-inequivalent models of every cardinality. We are grateful to the anonymous referee for mentioning that the class of eni types need not be closed under non-orthogonality and for insisting that the relationship between eni-active types and chains be described more precisely. \medskip \centerline{{\bf For the whole of this paper, all theories are $\aleph_0$-stable.}} \subsection{Preliminary facts about $\aleph_0$-stable theories} We begin by enumerating several well-known facts about models of $\aleph_0$-stable theories. 
\begin{Definition} {\em A non-algebraic, stationary type $p$ is {\em regular} if $p$ is orthogonal to every forking extension of itself. $p$ is {\em strongly regular via $\varphi$} if $\varphi\in p$ and for every strong type $q$ containing $\varphi$, $q$ is either orthogonal or parallel to $p$. } \end{Definition} It is well known that the binary relation of non-orthogonality is an equivalence relation on the class of stationary, regular types. \begin{Fact} \label{Fact} \begin{enumerate} \item Over any set $A$, prime and atomic (indeed, constructible) models exist and are unique up to isomorphism over $A$; \item If $M$ is a model and $p\not\perp M$, then there is a strongly regular $q\in S(M)$ non-orthogonal to $p$; \item Strongly regular types over models are RK-minimal, i.e., if $M\preceq N$, $q\in S(M)$ is strongly regular, and there is some $a\in N\setminus M$ such that ${\rm tp}(a/M)\not\perp q$, then $q$ is realized in $N$; \item Any pair $M\preceq N$ of models admits a {\em strongly regular resolution\/} i.e., a continuous, elementary chain $\<M_i:i\le\alpha\>$ of elementary substructures of $N$ such that $M_0=M$, $M_\alpha=N$, and $M_{i+1}$ is prime over $M_i\cup\{a_i\}$, where ${\rm tp}(a_i/M_i)$ is strongly regular; \item For any complete type $p\in S(M)$ over a model, there is a finite subset $A\subseteq M$ over which $p$ is based and stationary; \item A model is $a$-saturated (i.e., ${\bf F}^a_{\kappa(T)}$-saturated in the notation of \cite{Shc}) if and only if it is $\aleph_0$-saturated. \end{enumerate} \end{Fact} By combining Fact~\ref{Fact}(2) and (3), we obtain the very useful `3-model Lemma'. \begin{Lemma} \label{threemodel} Suppose $N_0\preceq N_1\preceq M$, $p\in S(N_1)$ is realized in $M$, and is non-orthogonal to $N_0$. Then there is a strongly regular $q\in S(N_0)$ non-orthogonal to $p$ that is realized in $M$ by an element $e$ satisfying $\fg e {N_0} {N_1}$. 
\end{Lemma} {\bf Proof.}\quad By Fact~\ref{Fact}(2), choose a strongly regular $q\in S(N_0)$ non-orthogonal to $p$. Let $q'$ be the non-forking extension of $q$ to $S(N_1)$. As $p$ is realized in $M$, it follows from Fact~\ref{Fact}(3) that $q'$ is realized in $M$ as well. But any $e$ realizing $q'$ satisfies $\fg e {N_0} {N_1}$. \medskip The following notion is implicit in several proofs of atomicity in \cite{SHM}. \begin{Definition} \label{essentiallyfinite} {\em A set $A$ is {\em essentially finite with respect to a strong type $p$\/} if, for all finite sets $D$ on which $p$ is based and stationary, there is a finite $A_0\subseteq A$ such that $p|DA_0\vdash p|DA$. } \end{Definition} \begin{Lemma} \label{basicorth} Fix a strong type $p$. If either of the following conditions hold \begin{enumerate} \item $p\perp A$ and $B$ is a (possibly empty) $A$-independent set of finite sets; or \item if $A$ is essentially finite with respect to $p$, $p\perp B$, and $\fg A {A\cap B} {B}$ \end{enumerate} then $A\cup B$ is essentially finite with respect to $p$. \end{Lemma} {\bf Proof.}\quad To establish (1), suppose $B=\{b_i:i\in I\}$ is $A$-independent. Choose any finite $D$ over which $p$ is based and stationary. Now, choose a finite $B_0\subseteq B$ such that $\fg D {AB_0} B$ and then choose a finite $A_0\subseteq A$ such that $\fg {DB_0} {A_0} A$. We claim that $p|DA_0B_0\vdash p|DAB$. To see this, first note that since $p\perp A$, we have $p\perp A_0$, which coupled with $\fg {DA_0B_0} {A_0} A$ implies $p|DA_0B_0\vdash p|DAB_0$. But then, since $\fg {DB_0} A {(B\setminus B_0)}$ we obtain $p|DAB_0\vdash p|DAB$, proving (1). To prove (2), write $E:=A\cap B$. Choose a finite $D$ on which $p$ is based and stationary. Choose $B_0\subseteq B$ finite such that $\fg D {B_0A} B$. As $\fg A E B$ we have $\fg B {EB_0} {DB_0A}$ and $EB_0\subseteq DB_0A\cap B$. Choose a finite $A_0\subseteq A$ such that $\fg {DB_0} {A_0} A$. 
Finally, as $A$ is essentially finite with respect to $p$, choose $A_1\subseteq A$ finite such that $A_0\subseteq A_1$ and $p|DB_0A_1\vdash p|DB_0A$. Put $D^*:=DB_0A_1$. As $D^*A=DB_0A$, we have $\fg B {EB_0} {D^*A}$ and $EB_0\subseteq D^*A$. To see that $p|D^*\vdash p|DAB$, first note that from above, $p|D^*\vdash p|D^*A$. Also, since $p\perp B$, $p\perp EB_0$ and since $\fg {D^*A} {EB_0} B$ we conclude that $p|D^*A\vdash p|DAB$. \medskip Next, we give a criterion for $\lambda$-saturation of a model of an $\aleph_0$-stable theory. For the moment, call a non-algebraic type $p\in S(M)$ {\em $\lambda$-full\/} if $\dim(p|A,M)\ge\lambda$ for some (every) finite set $A\subseteq M$ on which $p$ is based and stationary. \begin{Lemma} \label{charsat} For $\lambda$ any infinite cardinal, a model $M\models T$ is $\lambda$-saturated if and only if every strongly regular $p\in S(M)$ is $\lambda$-full. \end{Lemma} {\bf Proof.}\quad Left to right is clear, so fix an infinite cardinal $\lambda$ and a model $M$ in which every strongly regular type is $\lambda$-full. If $M$ is not $\lambda$-saturated, then there is a subset $A\subseteq M$, $|A|<\lambda$, and a type $q\in S(A)$ that is omitted in $M$. Among all possible choices, choose $q$ of least Morley rank. Let $q'\in S(M)$ denote the unique non-forking extension of $q$ to $M$, let $a$ be any realization of $q'$, and let $N=M[a]$ be any prime model over $M\cup\{a\}$. By Fact~\ref{Fact}(2) there is an element $b\in N\setminus M$ such that $p={\rm tp}(b/M)$ is strongly regular. Choose $B\subseteq M$, $|B|<\lambda$, such that $A\subseteq B$, $p$ is based and stationary over $B$, and ${\rm tp}(a/Bb)$ forks over $B$. Since $p$ is $\lambda$-full, there is $b^*\in M$ realizing $p|B$. Choose any $a^*\in{\frak C}$ realizing $q|B$ with ${\rm tp}(a^*/Bb^*)$ forking over $B$. Now $a^*\not\in M$, lest $q$ be realized in $M$. 
Thus, $r={\rm tp}(a^*/M)$ is non-algebraic, yet $MR(r)<MR(q)$, hence $r|C$ is realized in $M$ for any $C\supseteq Bb^*$ on which $r$ is based and stationary and $|C|<\lambda$. However, any realization of $r|C$ is a realization of $q$, contradicting $q$ being omitted in $M$. \medskip The following Corollary is immediate. \begin{Corollary} \label{notcountablesat} A countable model $M$ is saturated if and only if every strongly regular $q\in S(M)$ has infinite dimension. \end{Corollary} Given two sets $A,B$, we say that {\em $A$ has the Tarski-Vaught property in $B$,} written $A\subseteq_{TV} B$, if $A\subseteq B$ and every $L(A)$-formula $\varphi(x,a)$ that is realized in $B$ is also realized in $A$. \begin{Lemma} \label{retain} \begin{enumerate} \item If $B\subseteq_{TV} B'$, then for every $a$, if ${\rm tp}(a/B)$ is isolated by the $L(B)$-formula $\varphi(x,b)$, then ${\rm tp}(a/B')$ is also isolated by $\varphi(x,b)$. \item Suppose that $B$ and $C$ are sets with $B$ containing a model $M$ and $\fg B M C$. Then $B\subseteq_{TV} BC$. Furthermore, if $A$ is atomic over $B$, then $\fg {AB} M C$ and $A$ is atomic over $BC$ via the same $L(B)$-formulas. \item Suppose that $\<A_i:i<\alpha\>$ and $\<B_i:i<\alpha\>$ are both continuous, increasing subsets of a model such that each $A_i$ contains and is atomic over $B_i$, and $B_i\subseteq_{TV} B_j$ whenever $i<j<\alpha$. Then: \begin{enumerate} \item $B_i\subseteq_{TV} \bigcup B_i$; \item $\bigcup A_i$ is atomic over $\bigcup B_i$; and \item If, in addition, each $A_i$ was maximal atomic over $B_i$ inside $N$, then each $A_i\preceq\bigcup A_i\preceq N$ and $\bigcup A_i$ is maximal atomic over $\bigcup B_i$. \end{enumerate} \end{enumerate} \end{Lemma} {\bf Proof.}\quad (1) is Lemma~XII~1.12(3) of \cite{Shc}, but we prove it here for convenience. Let $\psi(x,b_1,b')$ be any formula over $B'$ with $b_1$ from $B$ and $b'$ from $B'$. 
Let $$\theta(y,z,w):=\forall x\forall x' ([\varphi(x,y)\wedge\varphi(x',y)]\rightarrow (\psi(x,z,w)\leftrightarrow\psi(x',z,w)))$$ It suffices to show that $\theta(b,b_1,b')$ holds. However, if it failed, then since $b,b_1$ are from $B$ and $B\subseteq_{TV} B'$, we would have $\neg\theta(b,b_1,b_2)$ for some $b_2$ from $B$. But this contradicts $\varphi(x,b)$ isolating ${\rm tp}(a/B)$. (2) That $B\subseteq_{TV} BC$ follows from the finite satisfiability of non-forking over models. That $\fg {AB} M C$ is a restatement of isolated types being dominated over models, and the atomicity of $A$ over $BC$ follows from (1). (3) Let $A^*:=\bigcup_{i<\alpha} A_i$ and $B^*:=\bigcup_{i<\alpha} B_i$. The preservation of the TV-property under continuous chains of sets is identical to the preservation of elementarity under continuous chains of models, so $B_i\subseteq_{TV} B^*$ for each $i$. To see that $A^*$ is atomic over $B^*$, choose a finite subset $D\subseteq A^*$ and choose $i<\alpha$ such that $D\subseteq A_i$. If $\varphi(\overline{x},b_i)$ isolates ${\rm tp}(D/B_i)$, then by iterating (1), the same formula isolates ${\rm tp}(D/B_j)$ for every $i<j<\alpha$, and hence also isolates ${\rm tp}(D/B^*)$. To obtain (c), suppose that each $A_i$ is maximal atomic inside $N$ over $B_i$. As there is a prime model $N_i\preceq N$ containing each $A_i$, the maximality of $A_i$ implies that $A_i\preceq N$, so $A^*\preceq N$ as well. To demonstrate that $A^*$ is maximal, choose any $c\in N$ such that $A^*c$ is atomic over $B^*$. Choose $i<\alpha$ such that both ${\rm tp}(c/A^*)$ does not fork and is stationary over $A_i$ and ${\rm tp}(c/B_i)$ is isolated. We will show that $cA_i$ is atomic over $B_i$, which implies $c\in A_i$ by the maximality of $A_i$. To show this atomicity, first note that since $A_i$ is atomic over $B_i$ and $B_i\subseteq_{TV} B^*$, it follows from (1) that ${\rm tp}(A_i/B_i)\vdash{\rm tp}(A_i/B^*)$, hence $\fg {A_i} {B_i} {B^*}$. 
The transitivity of non-forking implies that $\fg {cA_i} {B_i} {B^*}$. Since $cA_i$ is atomic over $B^*$, it follows from the Open Mapping Theorem that $cA_i$ is atomic over $B_i$. \medskip Here is an example of a pair of sets with the Tarski-Vaught property. It is proved in Lemma~XII~2.3(3) of \cite{Shc}. \begin{Fact} \label{Vconfiguration} Suppose that $M_0,M_1,M_2$ are models with $\fg {M_1} {M_0} {M_2}$, $N_0$ is a-saturated and independent from $M_1M_2$ over $M_0$, $N_1$ is a-prime over $N_0M_1$, and $N_2$ is a-prime over $N_0M_2$. Then $M_1M_2\subseteq_{TV} N_1N_2$. \end{Fact} \subsection{Species of stationary regular types} We begin this section by recalling a definition from \cite{ShL}. \begin{Definition}\label{supporting} {\em A stationary, regular type $q$ {\em lies directly above $p$\/} if there is a non-forking extension $p'\in S(N)$ of $p$ with $N$ $\aleph_0$-saturated, a realization $c$ of $p'$, and an $\aleph_0$-prime model $N[c]$ over $N\cup\{c\}$ such that $q\not\perp N[c]$, but $q\perp N$. A regular type $q$ {\em lies above $p$\/} if there is a sequence $p_0,\dots,p_n$ of types such that $p_0=p$, $p_n=q$, and $p_{i+1}$ lies directly above $p_i$ for each $i<n$. We say that $p$ {\em supports} $q$ if $q$ lies above $p$. } \end{Definition} The following Lemma gives a sufficient condition for supporting that does not mention $\aleph_0$-saturation. \begin{Lemma} \label{notsat} Suppose $p\in S(M)$ is regular, $a$ is any realization of $p$, and $M(a)$ is prime over $M\cup\{a\}$. If a stationary, regular $q\not\perp M(a)$, but $q\perp M$, then $q$ lies directly above $p$, hence $p$ supports $q$. \end{Lemma} {\bf Proof.}\quad Using Fact~\ref{Fact}(2) and the fact that `lying directly above $p$' is closed under non-orthogonality, we may assume that $q\in S(M(a))$. Fix a finite $A\subseteq M(a)$ over which $q$ is based and stationary. As $M(a)$ is atomic over $M\cup\{a\}$, ${\rm tp}(A/Ma)$ is isolated. 
Choose any $\aleph_0$-saturated model $N\succeq M$ with $\fg N M a$. It follows by Finite Satisfiability that $Ma\subseteq_{TV} Na$, so by Lemma~\ref{retain}(1), ${\rm tp}(A/Na)$ is isolated as well (in fact, by the same formula isolating ${\rm tp}(A/Ma)$). As ${\rm tp}(A/Na)$ is $\aleph_0$-isolated, we can choose an $\aleph_0$-prime model $N[a]$ over $N\cup\{a\}$ that contains $A$. Now, $p':={\rm tp}(a/N)$ is a non-forking extension of $p$ and $q\not\perp N[a]$. As $A$ is dominated by $a$ over $M$, $\fg a M N$, and $q\perp M$, we conclude that $q\perp N$. Thus, $q$ lies directly above $p$, so $p$ supports $q$ by definition. \medskip \begin{Definition} \label{eni} {\em A stationary, regular type $p$ is {\em eni} (eventually non-isolated) if there is a finite set $A$ on which $p$ is based and stationary, but $p|A$ is non-isolated. Such a $p$ is ENI if it is both eni and strongly regular. } \end{Definition} \begin{Definition} \label{lump} {\em The {\em ENI-active} types are the smallest class of stationary, regular types that contain the ENI types and are closed under automorphisms of the monster model, non-orthogonality, and supporting. Similarly, the {\em eni-active} types are the smallest class of stationary, regular types that contain the eni types and are closed under automorphisms, non-orthogonality, and supporting. } \end{Definition} With Proposition~\ref{eninotdull} we will see that every eni type is ENI-active, hence the classes of ENI-active and eni-active types coincide. One should note that whereas the class of eni types need not be closed under non-orthogonality, the class of eni-active types is. \begin{Definition} {\em A stationary regular type $p$ is {\em dull\/} if it is not ENI-active. } \end{Definition} Again, it follows from Proposition~\ref{eninotdull} below that a stationary regular type is dull if and only if it is not eni-active.
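\medskip\noindent{\bf Example.}\quad (A toy illustration of ours, not used in the sequel.) Let $L=\{c_n:n<\omega\}$ consist of distinct constant symbols and let $T$ assert that the $c_n$ are pairwise distinct. Then $T$ is $\aleph_0$-stable, and $$p(x):=\{x\neq c_n:n<\omega\}$$ is the unique non-algebraic $1$-type over $\emptyset$, hence stationary and regular, and it is based and stationary on the finite set $\emptyset$. However, $p|\emptyset$ is not isolated, as every consistent formula $\varphi(x)$ is satisfied by some $c_n$; thus $p$ is eni. By contrast, in the theory of a pure infinite set the non-algebraic type over a finite set $A$ is isolated by $\bigwedge_{a\in A}x\neq a$, so it is not eni. \medskip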
Thus, in the notation of Definition~3.7 of \cite{ShL}, if we take {\bf P} to be {\em either} the class of ENI types {\em or} the closure of the class of eni types under non-orthogonality, then ${\bf P^{\rm active}}$ denotes the class of ENI-active types and ${\bf P^{\rm dull}}$ denotes the dull types. The remainder of this subsection is aimed at proving Proposition~\ref{eninotdull}. \begin{Lemma} \label{infinitedimension} Suppose that a model $M$ is prime over a finite set $A$, $c$ is a realization of a regular type $p\in S(M)$, and $M(c)$ is any prime model over $M\cup\{c\}$. If $p$ has infinite dimension in $M$, then $M(c)$ is also prime over $A$. In particular, $M$ and $M(c)$ are isomorphic over $A$. \end{Lemma} {\bf Proof.}\quad First, by increasing $A$ as necessary (while still keeping it finite) we may assume that $p$ is based and stationary on $A$. To prove the Lemma, first note that it suffices to find a pair of models $N\preceq N'\preceq M$ such that $A\subseteq N$ and $N'$ is isomorphic over $A$ to any prime model $N(c)$ over $N\cup\{c\}$. Indeed, once we have such $N$ and $N'$, then since they are both countable and atomic over $A$, both are isomorphic to $M$ over $A$. Thus, $N(c)$ is isomorphic to both $M$ and $M(c)$ over $A$ and the Lemma follows. To produce the submodels $N$ and $N'$, first choose an infinite, $A$-independent set $J\subseteq M$ of realizations of $p|A$. Choose a partition $J=J_0\cup J_1$ into disjoint, infinite sets. Next, choose $B\subseteq M$ to be maximal subject to the conditions that $AJ_0\subseteq B$ and $\fg B A {J_1}$. Let $N\preceq M$ be prime over $B$. \medskip \noindent{\bf Claim:} $N=B$, hence $\fg N A {J_1}$. \medskip {\bf Proof.}\quad Choose any $e\in N$. As $N$ is atomic over $B$, choose a finite set $C$, $A\subseteq C\subseteq B$ such that ${\rm tp}(e/C)\vdash{\rm tp}(e/B)$. As $J_0\subseteq B$, it follows that ${\rm tp}(e/C)\vdash{\rm tp}(e/BJ_1)$ [Why?
For $\bar{a}_1$ any tuple from $J_1$, a formula $\varphi(x,c,b,\bar{a}_1)\in{\rm tp}(e/BJ_1)$ iff there is a cofinite $J_0'\subseteq J_0$ such that $\varphi(x,c,b,\bar{a}_0)\in{\rm tp}(e/B)$ for some $\bar{a}_0$ from $J_0'$.] In particular, $\fg {e} B {J_1}$, which by transitivity implies $\fg {Be} {A} {J_1}$. Thus, the maximality of $B$ implies that $e\in B$, proving the Claim. \medskip Now choose $a\in J_1$ arbitrarily and choose any $N'\preceq M$ to be prime over $N\cup\{a\}$. As ${\rm tp}(a/A)={\rm tp}(c/A)$ is stationary and both $a$ and $c$ are independent from $N$ over $A$, it follows that ${\rm tp}(a/N)={\rm tp}(c/N)$. In particular, if we choose $N(c)$ to be any prime model over $N\cup\{c\}$, it will be isomorphic to $N'$ over $N$. Thus, $N$ and $N'$ are as desired, completing the proof of the Lemma. \medskip \begin{Definition} \label{dullpairdef} {\em A pair of models $M\preceq N$ is a {\em dull pair} if, for every $d\in N\setminus M$, ${\rm tp}(d/M)$ is dull whenever it is regular. } \end{Definition} \begin{Lemma} \label{dullpairone} Suppose $M\preceq N$ is a dull pair, $c\in N\setminus M$ has ${\rm tp}(c/M)$ strongly regular, and $M(c)\preceq N$ is prime over $M\cup\{c\}$. Then $M(c)\preceq N$ is a dull pair. \end{Lemma} {\bf Proof.}\quad Choose any $d\in N\setminus M(c)$ such that $p:={\rm tp}(d/M(c))$ is regular. There are two cases. First, if $p\not\perp M$, then by Lemma~\ref{threemodel} there is $e\in N\setminus M$ such that $q:={\rm tp}(e/M)$ is strongly regular and non-orthogonal to $p$. As $M\preceq N$ is a dull pair, $q$ and hence $p$ must be dull. On the other hand, suppose that $p\perp M$. If $p$ were not dull, it would be ENI-active, which by Lemma~\ref{notsat} would imply that ${\rm tp}(c/M)$ is ENI-active as well, again contradicting $M\preceq N$ being a dull pair. \medskip \begin{Lemma} \label{dullpairlemma} Suppose that $M\preceq N$ is a dull pair. 
Then for any $M'$ satisfying $M\preceq M'\preceq N$, both $M\preceq M'$ and $M'\preceq N$ are dull pairs. \end{Lemma} {\bf Proof.}\quad That $M\preceq M'$ is a dull pair is immediate. For the other pair, we argue by induction on $\alpha$ that \begin{quotation} For any $M'$ satisfying $M\preceq M'\preceq N$, if there is a strongly regular resolution $M=M_0\preceq M_1\preceq\dots\preceq M_\alpha=M'$, then $M'\preceq N$ is a dull pair. \end{quotation} This would suffice by Fact~\ref{Fact}(4), which asserts the existence of a strongly regular resolution of any $M'$. When $\alpha=0$ there is nothing to prove. If $\alpha$ is a non-zero limit ordinal, then for any $d\in N\setminus M'=M_\alpha$ such that $p:={\rm tp}(d/M_\alpha)$ is regular, choose $\beta<\alpha$ such that $q:={\rm tp}(d/M_\beta)$ is parallel to $p$. By induction we have that $q$ is dull, hence $p$ is dull as well. Finally, assume the inductive hypothesis holds for $\beta$ and suppose $M'$ has a strongly regular resolution of length $\alpha=\beta+1$. By the inductive hypothesis, $M_\beta\preceq N$ is a dull pair; moreover, by the definition of the resolution, ${\rm tp}(c_\beta/M_\beta)$ is strongly regular and $M'=M_\alpha$ is prime over $M_\beta\cup\{c_\beta\}$. Thus, Lemma~\ref{dullpairone} implies that $M'\preceq N$ is a dull pair, and our induction is complete. \medskip \begin{Proposition} \label{eninotdull} \begin{enumerate} \item Every eni type is ENI-active; \item A type is eni-active if and only if it is ENI-active; \item A stationary, regular type is dull if and only if it is not eni-active. \end{enumerate} \end{Proposition} {\bf Proof.}\quad Once we have proved (1), Clauses (2) and (3) follow immediately from the definitions. Fix an eni type $p$. Choose a finite set $A$ on which $p$ is based and stationary, yet $p|A$ is not isolated. Let $M$ be prime over $A$. As $M$ is atomic over $A$, it follows that $M$ omits $p|A$. Let $e$ be any realization of $p|M$ and let $M(e)$ be prime over $M\cup\{e\}$.
By way of contradiction, assume that $p$ is not ENI-active, i.e., $p$ is dull. We will obtain our contradiction by showing that $M(e)$ is also prime over $A$, which is impossible since $M(e)$ visibly realizes $p|A$. To obtain this result, we begin with the following Claim. \medskip\noindent{\bf Claim.} There is a strongly regular resolution $M=M_0\preceq\dots\preceq M_n=M(e)$ of finite length $n$. \medskip {\bf Proof.}\quad First choose any maximal sequence $M=M_0'\preceq M_1'\preceq\dots\preceq M_n'\preceq M(e)$ satisfying the conditions: (1) $M_{i+1}'$ is prime over $M_i'\cup\{d_i\}$, where ${\rm tp}(d_i/M_i')$ is strongly regular, and (2) ${\rm tp}(e/M_{i+1}')$ forks over $M_i'$. As the sequence of ordinals $\<RM(e/M_i'):i\le n\>$ is strictly decreasing, such a sequence can have at most finite length. Also, for any such sequence, we must have $e\in M_n'$, because if not, then by Facts~\ref{Fact}(2,3), there would be some strongly regular type $q\in S(M_n')$ realized in $M(e)\setminus M_n'$ with $q\not\perp {\rm tp}(e/M_n')$. However, if $d_n$ were any realization of $q$ in $M(e)$ and $M_{n+1}'\preceq M(e)$ were any prime model over $M_n'\cup\{d_n\}$, we would have $e$ forking with $M_{n+1}'$ over $M_n'$, which would contradict the maximality of the sequence. Thus, any maximal sequence has $e\in M_n'$. It follows that $M_n'$ is prime over $M\cup\{e\}$, hence there is an isomorphism $f:M_n'\rightarrow M(e)$ fixing $M\cup\{e\}$ pointwise. Then the sequence $\<f(M_i'):i\le n\>$ is a strongly regular resolution of $M(e)$, completing the proof of the Claim. \medskip Fix such a strongly regular resolution $M=M_0\preceq M_1\preceq\dots\preceq M_n=M(e)$ where $M_{i+1}$ is prime over $M_i\cup\{c_i\}$ and ${\rm tp}(c_i/M_i)$ is strongly regular. Next, note that $e$ forks over $M$ with any $d\in M(e)\setminus M$, so if ${\rm tp}(d/M)$ is regular it must be non-orthogonal to $p$ and hence dull. That is, $M\preceq M(e)$ is a dull pair.
It follows from Lemma~\ref{dullpairlemma} that $M_i\preceq M(e)$ is also a dull pair whenever $i<n$. Using this, we complete the proof by showing, by induction on $i\le n$, that each $M_i$ is prime over $A$. When $i=0$ this is immediate by hypothesis. So fix $i<n$ and assume that $M_i$ is prime over $A$. Let $q_i:={\rm tp}(c_i/M_i)$. Choose a finite set $B$, $A\subseteq B\subseteq M_i$ on which $q_i$ is based and stationary. Note that $M_i$ is prime over $B$ as well. As $M_i\preceq M(e)$ is a dull pair, $q_i$ is strongly regular and dull. In particular, $q_i$ is not ENI, hence $q_i$ has infinite dimension in $M_i$. Thus, $M_{i+1}$ is prime over $B$ by Lemma~\ref{infinitedimension}. However, as ${\rm tp}(B/A)$ is isolated, it follows that $M_{i+1}$ is prime over $A$ as well. \medskip \subsection{On dull types} We begin by defining a strong notion of substructure. \begin{Definition} {\em $N$ is an {\em $L_{\infty,\aleph_0}$-substructure of $M$\/} if $N\preceq M$ and for all finite $A\subseteq N$, $$(N,a)_{a\in A}\equiv_{\infty,\aleph_0} (M,a)_{a\in A}$$ } \end{Definition} The paradigm of this notion is when $M$ is atomic and $N\preceq M$. In this case, both $N$ and $M$ are atomic, hence back-and-forth equivalent, over every finite $A\subseteq N$. With Proposition~\ref{DULLchain}, we prove that this stronger notion of substructure holds for every dull pair $N\preceq M$. We begin with a Lemma which gets its strength when coupled with Lemma~\ref{basicorth}. \begin{Lemma} \label{atomicextension} Suppose $A\subseteq C$ is essentially finite with respect to a regular, stationary, but not eni type $p\in S(C)$. If $C$ is atomic over $A$, then so is $C\cup\{e\}$ for any realization $e$ of $p$. \end{Lemma} {\bf Proof.}\quad It suffices to show that $De$ is atomic over $A$ for every finite $D\subseteq C$. So fix any finite $D\subseteq C$. Choose a finite $D^*$, $D\subseteq D^*\subseteq C$ with $p$ based and stationary on $D^*$. 
As $A$ is essentially finite with respect to $p$, choose a finite $A_0\subseteq A$ such that $p|D^*A_0\vdash p|D^*A$. Since $p$ is not eni and $D^*A_0$ is finite, ${\rm tp}(e/D^*A_0)$ is isolated. Coupled with the fact that $D^*$ is atomic over $A$, this implies that $D^*e$ and hence $De$ is atomic over $A$, as required. \medskip \begin{Lemma} \label{dulltechnical} Suppose that $N\preceq N(c)$, where $N(c)$ is prime over $N\cup\{c\}$ and $c$ realizes a dull type $p\in S(N)$. Then for every finite set $A$, there is $M\preceq N$ over which $p$ is based and an infinite Morley sequence $J\subseteq N$ in $p|M$ such that \begin{itemize} \item $A\subseteq M$; \item $N$ is atomic over $M\cup J$; and \item $N(c)$ is atomic over $M\cup J\cup\{c\}$. \end{itemize} \end{Lemma} {\bf Proof.}\quad Without loss, we may assume that $p$ is based and stationary on $A$. As $p$ is not eni, $p$ has infinite dimension in $N$, so we can find an infinite Morley sequence $J^*\subseteq N$ in $p|A$. Partition $J^*$ into two disjoint, infinite pieces $J^*=J_0\cup J$. Arguing as in the proof of Lemma~\ref{infinitedimension}, choose $B\subseteq N$ maximal subject to the conditions (1) $AJ_0\subseteq B$ and (2) $\fg B {A} J$. Just as in Lemma~\ref{infinitedimension}, $B$ is the universe of an elementary substructure, which we denote as $M$. Clearly, $A\subseteq M$. \medskip\noindent{\bf Claim 1.} $M\preceq N$ is a dull pair. \medskip {\bf Proof.}\quad Choose any $e\in N$ such that $q={\rm tp}(e/M)$ is regular. We show that any such $q$ must be non-orthogonal to $p$ and hence be dull. If this were not the case, then we would have $\fg e M J$, which would contradict the maximality of $M$. \medskip\noindent{\bf Claim 2.} $N$ is atomic over $M\cup J$. \medskip {\bf Proof.}\quad Choose $N_0\preceq N$ to be maximal atomic over $M\cup J$. We argue that $N_0=N$. If this were not the case, choose $e\in N\setminus N_0$ such that $q:={\rm tp}(e/N_0)$ is regular.
As $M\preceq N$ is a dull pair, it follows from Lemma~\ref{dullpairlemma} that $q$ is dull and hence not eni by Proposition~\ref{eninotdull}. We argue by cases. First, if $q$ were non-orthogonal to $M$, then by Lemma~\ref{threemodel} there would be $d\in N\setminus M$ such that $\fg d M {N_0}$ which, since $J\subseteq N_0$, would contradict the maximality of $M$. On the other hand, if $q\perp M$, then by Lemmas~\ref{basicorth}(1) and \ref{atomicextension} we would have $N_0\cup\{e\}$ atomic over $M\cup J$, which contradicts the maximality of $N_0$. \medskip\noindent{\bf Claim 3.} $M$ is maximal in $N(c)$ such that $\fg M A {Jc}$. \medskip {\bf Proof.}\quad First, it is clear that $\fg M A {Jc}$ by the defining property of $M$ and because $\fg c A N$. The verification of the maximality of $M$ inside $N(c)$ is an exercise in non-forking. Namely, choose any $e\in N(c)$ such that $\fg {eM} A {Jc}$. As $J\cup\{c\}$ is independent over $A$, we have $\fg {eMc} A {J}$, hence $\fg {ec} M {J}$. As $N$ is atomic over $M\cup J$ by Claim~2, we obtain $\fg {ec} M N$. Combining this with the fact that $\fg e M c$ yields $\fg e M {Nc}$, hence $\fg e M {N(c)}$. Thus, $e\in M$ as required. \medskip We finish by using analogues of the proofs of Claims 1 and 2 (using $Jc$ in place of $J$) to prove that $M\preceq N(c)$ is a dull pair and that $N(c)$ is atomic over $MJc$. \medskip \begin{Lemma} \label{iso} Suppose that $N\models T$, ${\rm tp}(c/N)$ is dull, and $N(c)$ is any prime model over $N\cup\{c\}$. Then $N$ is an $L_{\infty,\aleph_0}$-elementary substructure of $N(c)$. \end{Lemma} {\bf Proof.}\quad Given $N\preceq N(c)$ and a finite $A$, by enlarging $A$ slightly we may assume that $p={\rm tp}(c/N)$ is based and stationary on $A$. Apply Lemma~\ref{dulltechnical} to obtain $M\preceq N$ and $J$ such that $A\subseteq M$, $N$ is atomic over $MJ$, and $N(c)$ is atomic over $MJc$. Let $g:MJ\rightarrow MJc$ be any elementary bijection that is the identity on $M$. 
We now show that $(N,a)_{a\in M}\equiv_{\infty,\aleph_0} (N(c),a)_{a\in M}$ by exhibiting the back-and-forth system \begin{quotation} ${\cal F}=\{$all finite partial functions $f:N\rightarrow N(c)$ such that $f\cup g$ is elementary$\}$ \end{quotation} The verification that ${\cal F}$ is a back-and-forth system is akin to the verification that any two atomic models of a complete theory are back-and-forth equivalent. \medskip The following Proposition follows by iterating Lemma~\ref{iso}: \begin{Proposition} \label{DULLchain} Suppose that $N\preceq M$ is a dull pair. Then $N$ is an $L_{\infty,\aleph_0}$-elementary substructure of $M$. \end{Proposition} {\bf Proof.}\quad Choose a strongly regular resolution $N=N_0\preceq N_1\preceq\dots\preceq N_\alpha=M$, where $N_{i+1}$ is prime over $N_i\cup\{c_i\}$ for each $i<\alpha$. As ${\rm tp}(c_i/N_i)$ is dull for each $i<\alpha$, it follows from Lemma~\ref{iso} that $N_i$ is an $L_{\infty,\aleph_0}$-elementary substructure of $N_{i+1}$. \medskip \section{eni-DOP and equivalent notions} \label{eniDOPsection} We begin with a central notion of \cite{ShL} and contrast it with a slight strengthening. \begin{Definition} {\em A stationary, regular type $p$ has a {\em DOP witness\/} if there is a quadruple $(M_0,M_1,M_2,M_3)$ of models, where $(M_0,M_1,M_2)$ form an independent triple of $a$-models, $M_3$ is $a$-prime over $M_1\cup M_2$, $p$ is based on $M_3$, but $p\perp M_1$ and $p\perp M_2$. A {\em prime DOP witness\/} for $p$ is the same, except that we require that $M_3$ be prime over $M_1\cup M_2$ (as opposed to a-prime). } \end{Definition} Visibly, among stationary, regular types, having either a DOP witness or a prime DOP witness is invariant under parallelism and automorphisms of the monster model ${\frak C}$. Recall that by Fact~\ref{Fact}(6), an a-model is simply an $\aleph_0$-saturated model. As in \cite{ShL}, we are free to vary the amount of saturation of the models $(M_0,M_1,M_2)$.
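\medskip\noindent{\bf Example.}\quad (A folklore-style sketch of ours; we do not verify the details here.) Let $T$ be the theory of two sorts $U$ and $V$ with a ternary relation $R\subseteq U\times U\times V$, asserting that every $v\in V$ lies in exactly one fiber $F_{a,b}:=\{v:R(a,b,v)\}$ and that every fiber is infinite. Take an independent triple $(M_0,M_1,M_2)$ of a-models, elements $a\in U(M_1)\setminus M_0$ and $b\in U(M_2)\setminus M_0$, and an a-prime $M_3$ over $M_1\cup M_2$. The generic type $p$ of the fiber $F_{a,b}$ is based on $M_3$, but since the fiber is named only by the pair $(a,b)$, one checks that $p\perp M_1$ and $p\perp M_2$, so $(M_0,M_1,M_2,M_3)$ is a DOP witness for $p$. \medskip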
\begin{Lemma} \label{DOPwitness} The following are equivalent for a stationary regular type $p$. \begin{enumerate} \item $p$ has a prime DOP witness; \item There is a quadruple $(M_0,M_1,M_2,M_3)$ of models such that $(M_0,M_1,M_2)$ form an independent triple, $M_3$ is prime over $M_1\cup M_2$, $p$ is based on $M_3$, but $p\perp M_1$ and $p\perp M_2$; \item Same as (2), but with $\dim(M_1/M_0)$ and $\dim(M_2/M_0)$ both finite; \item Same as (2), but with $\dim(M_1/M_0)=\dim(M_2/M_0)=1$; \item $p$ has a prime DOP witness $(M_0,M_1,M_2,M_3)$ with $\dim(M_1/M_0)=\dim(M_2/M_0)=1$. \end{enumerate} \end{Lemma} {\bf Proof.}\quad $(1)\Rightarrow(2)$ is immediate. $(2)\Rightarrow(3)$: Let $(M_0,M_1,M_2,M_3)$ be any witness to (2). Choose a finite $d\subseteq M_3$ over which $p$ is based and stationary. As $M_3$ is prime over $M_1\cup M_2$, choose finite $b\subseteq M_1$ and $c\subseteq M_2$ such that ${\rm tp}(d/M_1M_2)$ is isolated by a formula $\varphi(x,b,c)$. Let $M_1'\preceq M_1$ be prime over $M_0b$, let $M_2'\preceq M_2$ be prime over $M_0c$, and let $M_3'\preceq M_3$ be prime over $M_1'\cup M_2'$ with $d\subseteq M_3'$. Then $(M_0,M_1',M_2',M_3')$ is as required in (3). $(3)\Rightarrow(4)$: Assume that (3) holds. Among all possible quadruples of models witnessing (3), choose a quadruple $(M_0,M_1,M_2,M_3)$ with $\dim(M_1/M_0)+\dim(M_2/M_0)$ as small as possible. Clearly, we cannot have either $M_0=M_1$ or $M_0=M_2$, so $\dim(M_1/M_0)$ and $\dim(M_2/M_0)$ are each at least one. We argue that the minimum sum occurs when $\dim(M_1/M_0)=\dim(M_2/M_0)=1$. Assume this were not the case. Without loss, assume that $\dim(M_1/M_0)\ge 2$. Choose an element $e\in M_1\setminus M_0$ such that ${\rm tp}(e/M_0)$ is strongly regular and let $M_1'\preceq M_1$ be prime over $M_0\cup\{e\}$. Let $M_3'\preceq M_3$ be prime over $M_1'\cup M_2$. There are two cases.
On one hand, if $p\not\perp M_3'$, then by, e.g., Claim~X.1.4 of \cite{Shc}, we may choose an automorphic copy $p'$ of $p$ that is based on $M_3'$ with $p\not\perp p'$. Then an automorphic copy of the quadruple $(M_0,M_1',M_2,M_3')$ contradicts the minimality of our choice. On the other hand, if $p\perp M_3'$, then the quadruple $(M_1',M_1,M_3',M_3)$ directly contradicts the minimality of our choice. $(4)\Rightarrow(5)$: Let $(M_0,M_1,M_2,M_3)$ be any witness to (4). Choose a finite $d\subseteq M_3$ over which $p$ is based and stationary. Now, choose an a-model $N_0$ satisfying $\fg {N_0} {M_0} {M_3}$, and choose a-prime models $N_1$ and $N_2$ over $N_0\cup M_1$ and $N_0\cup M_2$, respectively. As $\fg {M_3} {M_\ell} {N_\ell}$ for both $\ell=1,2$, it follows that $p\perp N_1$ and $p\perp N_2$. Also, as $M_1M_2\subseteq_{TV} N_1N_2$ by Fact~\ref{Vconfiguration}, it follows from Lemma~\ref{retain}(1) that ${\rm tp}(d/N_1N_2)$ is isolated. Choose a prime model $N_3$ over $N_1\cup N_2$ that contains $d$. Then $(N_0,N_1,N_2,N_3)$ is a prime DOP witness for $p$ with $\dim(N_1/N_0)=\dim(N_2/N_0)=1$. $(5)\Rightarrow(1)$ is immediate. \medskip \begin{Definition} \label{eni-DOP} {\em $T$ has {\em eni-DOP\/} if some eni type $p$ has a DOP witness. Similarly, $T$ has {\em ENI-DOP} (respectively, {\em eni-active DOP\/}) if some ENI-type (respectively, eni-active type) has a DOP witness. } \end{Definition} It is fortunate, at least for the exposition, that the three preceding notions are equivalent for $T$. In fact, this equivalence extends much further.
Recall that a stable theory has the {\em Omitting Types Order Property (OTOP)} if there is a type $p(x,y,z)$ (where $x,y,z$ denote finite tuples of variables) such that for any cardinal $\kappa$ there is a model $M^*$ and a sequence $\<(b_\alpha,c_\alpha):\alpha<\kappa\>$ such that for all $\alpha,\beta<\kappa$, $$ M^*\ \hbox{realizes}\ p(x,b_\alpha,c_\beta) \quad\hbox{if and only if} \quad \alpha<\beta$$ \begin{Theorem} \label{equiv} The following are equivalent for an $\aleph_0$-stable theory $T$: \begin{enumerate} \item $T$ has eni-DOP; \item $T$ has ENI-DOP; \item $T$ has eni-active DOP; \item There is an independent triple $(M_0,M_1,M_2)$ of countable, saturated models such that some (equivalently every) prime model over $M_1\cup M_2$ is not saturated; \item There is an independent triple $(N_0,N_1,N_2)$ of countable saturated models and strongly regular types $p,q\in S(N_0)$ such that $N_1$ is $\aleph_0$-prime over $N_0\cup\{b\}$ for some realization $b$ of $p$, $N_2$ is $\aleph_0$-prime over $N_0\cup\{c\}$ for some realization $c$ of $q$, and if $N_3$ is prime over $N_1N_2$, then there is a finite $d$ satisfying $\{b,c\}\subseteq d\subseteq N_3$ and an ENI type $r(x,d)$ that is omitted in $N_3$ and orthogonal to both $N_1$ and $N_2$; \item $T$ has OTOP. \end{enumerate} \end{Theorem} {\bf Proof.}\quad If we let {\bf P} denote any of eni, ENI, or eni-active, then it follows from Proposition~\ref{eninotdull} that ${\bf P^{\rm active}}$ (which is the closure of {\bf P} under automorphisms, non-orthogonality and `supporting' within the class of stationary, regular types) is the set of eni-active types. Thus, Clauses (1), (2) and (3) are equivalent by way of Corollary~3.9 of \cite{ShL}. $(1)\Rightarrow (4)$: Suppose that $(M_0,M_1,M_2,M_3)$ is a DOP witness for an eni type $p$ with each model countable and saturated. Let $N\preceq M_3$ be prime over $M_1\cup M_2$, and by way of contradiction, assume that $N$ is saturated.
Then as $N$ and $M_3$ are isomorphic over $M_1\cup M_2$, by replacing $p$ by a conjugate type, we may assume that $p\in S(N)$. We will contradict the saturation of $N$ by finding a finite subset $D^*\subseteq N$ on which $p$ is based and stationary, but $p|D^*$ is omitted in $N$. First, since $p$ is eni and $N$ is saturated, choose a finite $D\subseteq N$ on which $p$ is based and stationary, but $p|D$ is not isolated. As $p\perp M_1$ and $p\perp M_2$, it follows from Lemma~\ref{basicorth} that $M_1M_2$ is essentially finite with respect to $p$. Thus, there is a finite $D^*\subseteq DM_1M_2$ containing $D$ such that $p|D^*\vdash p|DM_1M_2$. As $p|D^*$ is a non-forking extension of $p|D$, it cannot be isolated. We argue that $p|D^*$ cannot be realized in $N$. Suppose some $c\in N$ realized $p|D^*$. Then, as $cD^*$ is atomic over $M_1M_2$, we would have ${\rm tp}(c/D^*M_1M_2)$ isolated. However, since ${\rm tp}(c/D^*)\vdash{\rm tp}(c/D^*M_1M_2)$, we have $\fg c {D^*} {M_1M_2}$. Thus, the Open Mapping Theorem would imply that ${\rm tp}(c/D^*)$ is isolated, which is a contradiction. $(4)\Rightarrow (5)$: Let $(M_0,M_1,M_2)$ exemplify (4), and fix a prime model $M_3$ over $M_1\cup M_2$. As $M_3$ is not saturated, by Lemma~\ref{charsat} there is an ENI type $r\in S(M_3)$ of finite dimension in $M_3$. \medskip \noindent{\bf Claim.} $r$ is orthogonal to both $M_1$ and $M_2$. {\bf Proof.}\quad As the cases are symmetric, assume by way of contradiction that $r\not\perp M_1$. By Fact~\ref{Fact}(2) there is a strongly regular $p\in S(M_1)$ nonorthogonal to $r$. Choose a finite $A\subseteq M_3$ such that $r$ is based, stationary and strongly regular over $A$, and $r|A$ is omitted in $M_3$. Choose a finite $B\subseteq M_1$ over which $p$ is based, stationary and strongly regular, and let $r'$ and $p'$ be the unique nonforking extensions of $r|A$ and $p|B$ to $AB$. Since $M_1$ is saturated, $\dim(p|B,M_1)$ is infinite, hence $\dim(p',M_3)$ is infinite as well.
Thus, $\dim(r',M_3)$ is also infinite, contradicting the fact that $r|A$ is omitted in $M_3$. \medskip Thus, $r$ has a prime DOP witness by Lemma~\ref{DOPwitness}(2). But now, Lemma~\ref{DOPwitness}(5) gives us the configuration we need. $(5)\Rightarrow(2)$: Given the triple $(N_0,N_1,N_2)$ and the type $r$ in (5), choose an a-prime model $N_3'$ over $N_1\cup N_2$. Then $(N_0,N_1,N_2,N_3')$ is a DOP witness for the ENI type $r$. $(5)\Rightarrow(6)$: Given the data from (5), let $w(x,u,y,z)$ be the type asserting that $y$ and $z$ are $N_0$-independent realizations of $p$ and $q$, respectively, that $\varphi(u,y,z)$ holds, where $\varphi$ isolates ${\rm tp}(d/N_1N_2)$, and that $r(x,u)$ holds. We argue that the type $\exists u\, w(x,u,y,z)$ witnesses OTOP. To see this, fix any cardinal $\kappa$. Choose $\{b_i:i<\kappa\}\cup\{c_j:j<\kappa\}$ to be $N_0$-independent, where ${\rm tp}(b_i/N_0)=p$ and ${\rm tp}(c_j/N_0)=q$ for all $i,j<\kappa$. For each $i,j$, let $N_1(b_i)$ be prime over $N_0\cup\{b_i\}$ and $N_2(c_j)$ be prime over $N_0\cup\{c_j\}$, and let $\overline{N}$ be prime over the union of these models. Now, for each pair $(i,j)$, choose a witness $d_{i,j}$ to $\varphi(u,b_i,c_j)$ from $\overline{N}$ and let $r_{i,j}$ be shorthand for $r(x,d_{i,j})$. It is easily checked that the types $r_{i,j}$ are pairwise orthogonal. For each pair $(i,j)$ with $i\le j$, choose a realization $e_{i,j}$ of $r_{i,j}$, and let $M^*$ be prime over $\overline{N}\cup\{e_{i,j}:i\le j<\kappa\}$. Then, because of the orthogonality of the $r_{i,j}$, $M^*$ realizes $\exists u\, w(x,u,b_i,c_j)$ if and only if $i\le j$. $(6)\Rightarrow(1)$: This is Corollary~\ref{notop}. (There is no circularity.) \medskip \section{$\lambda$-Borel completeness} Throughout this section, we {\bf fix a cardinal $\lambda\ge\aleph_0$}. We consider only models of size $\lambda$, typically those whose universe is the ordinal $\lambda$, in a language of size $\kappa\le\lambda$.
{\em For notational simplicity, we only consider relational languages.} Although it would be of interest to explore this notion in more generality, here we only study classes ${\bf K}$ of $L$-structures that are closed under $\equiv_{\infty,\aleph_0}$ and study the complexity of ${\bf K}/\equiv_{\infty,\aleph_0}$. \begin{Definition} {\em For any (relational) language $L$ with at most $\lambda$ symbols, let ${L^{\pm}}:=L\cup\{\neg R:R\in L\}$, and let $S^\lambda_L$ denote the set of $L$-structures $M$ with universe $\lambda$. Let $$L(\lambda):=\{R(\overline{\alpha}):R\in{L^{\pm}},\ \overline{\alpha}\in {^{{\rm arity}(R)}\lambda}\}$$ and endow $S^\lambda_L$ with the topology formed by letting $${\cal B}:=\{U_{R(\overline{\alpha})}:R(\overline{\alpha})\in L(\lambda)\}$$ be a subbasis, where $U_{R(\overline{\alpha})}=\{M\in S^\lambda_L:M\models R(\overline{\alpha})\}$. } \end{Definition} \begin{Definition} {\em Given a language $L$ of size at most $\lambda$, a set $K\subseteq S^\lambda_L$ is {\em $\lambda$-Borel} if there is a $\lambda$-Boolean combination $\Psi$ of $L(\lambda)$-sentences (i.e., a propositional $L_{\lambda^+,\aleph_0}$-sentence of $L(\lambda)$) such that $$K=\{M\in S^\lambda_L:M\models\Psi\}$$ Given two languages $L_1$ and $L_2$, a function $f:S^\lambda_{L_1}\rightarrow S^\lambda_{L_2}$ is {\em $\lambda$-Borel} if the inverse image of every (basic) open set is $\lambda$-Borel. } \end{Definition} That is, $f:S^\lambda_{L_1}\rightarrow S^\lambda_{L_2}$ is $\lambda$-Borel if and only if for every $R\in L_2$ and $\overline{\beta}\in {^{{\rm arity}(R)}\lambda}$, there is a $\lambda$-Boolean combination $\Psi_{R(\overline{\beta})}$ of $L_1(\lambda)$-sentences such that for every $M\in S^\lambda_{L_1}$, $f(M)\models R(\overline{\beta})$ if and only if $M\models \Psi_{R(\overline{\beta})}$.
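\medskip\noindent{\bf Example.}\quad (An illustration of ours.) Take $L_1=\{E\}$ with $E$ binary. The class of $M\in S^\lambda_{L_1}$ in which $E^M$ is symmetric is $\lambda$-Borel, as witnessed by the propositional sentence $$\Psi:=\bigwedge_{\alpha,\beta<\lambda}\bigl(\neg E(\alpha,\beta)\vee E(\beta,\alpha)\bigr),$$ a $\lambda$-Boolean combination of $L_1(\lambda)$-sentences. Similarly, the map $f:S^\lambda_{\{E\}}\rightarrow S^\lambda_{\{E'\}}$ interpreting $E'$ as the complement of $E$ is $\lambda$-Borel, since $f(M)\models E'(\alpha,\beta)$ if and only if $M\models\neg E(\alpha,\beta)$, and $\neg E(\alpha,\beta)\in L_1(\lambda)$. \medskip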
As two countable structures are isomorphic if and only if they are $\equiv_{\infty,\aleph_0}$-equivalent, a moment's thought tells us that when $\lambda=\aleph_0$, the notions of $\aleph_0$-Borel sets and functions defined above are equivalent to the usual notions of Borel sets and functions. \begin{Definition} {\em Suppose that $L_1,L_2$ are relational languages with at most $\lambda$ symbols, and for $\ell=1,2$, $K_\ell$ is a $\lambda$-Borel subset of $S^\lambda_{L_\ell}$ that is invariant under $\equiv_{\infty,\aleph_0}$. We say that {\em $(K_1,\equiv_{\infty,\aleph_0})$ is $\lambda$-Borel reducible to $(K_2,\equiv_{\infty,\aleph_0})$}, written $$(K_1,\equiv_{\infty,\aleph_0})\le_\lambda^B (K_2,\equiv_{\infty,\aleph_0})$$ if there is a $\lambda$-Borel function $f:S^\lambda_{L_1}\rightarrow S^\lambda_{L_2}$ such that $f(K_1)\subseteq K_2$ and, for all $M,N\in K_1$, $$M\equiv_{\infty,\aleph_0} N\qquad\hbox{if and only if}\qquad f(M)\equiv_{\infty,\aleph_0} f(N)$$ } \end{Definition} \begin{Definition} {\em A class $K$ is {\em $\lambda$-Borel complete for $\equiv_{\infty,\aleph_0}$\/} if $(K,\equiv_{\infty,\aleph_0})$ is a maximum with respect to $\le^B_\lambda$. We call a theory $T$ $\lambda$-Borel complete for $\equiv_{\infty,\aleph_0}$ if $Mod_\lambda(T)$, the class of models of $T$ with universe $\lambda$, is $\lambda$-Borel complete for $\equiv_{\infty,\aleph_0}$. } \end{Definition} To illustrate this notion, we prove a series of Lemmas, culminating in a generalization of Friedman and Stanley's \cite{FS} result that subtrees of $\omega^{<\omega}$ are Borel complete. We make heavy use of the following characterizations of $\equiv_{\infty,\aleph_0}$-equivalence of structures of size $\lambda$. \begin{Fact} \label{Levy} If $|L|\le\lambda$, the following conditions are equivalent for $L$-structures $M$ and $N$ that are both of size $\lambda$.
\begin{enumerate} \item $M\equiv_{\infty,\aleph_0} N$; \item $M$ and $N$ satisfy the same $L_{\lambda^+,\aleph_0}$-sentences; \item If $G$ is a generic filter of the Levy collapsing poset $Lev(\aleph_0,\lambda)$, then in $V[G]$ there is an isomorphism $h:M\rightarrow N$ of countable structures. \end{enumerate} \end{Fact} For all $\aleph_0\le\kappa\le\lambda$, let $L_\kappa$ be the language consisting of the binary relation $\trianglelefteq$ and $\kappa$ unary predicate symbols $P_i(x)$. Let $\kappa\,CT_\lambda$ denote the class of all $L_\kappa$-trees with universe $\lambda^{<\omega}$, colored by the predicates $P_i$. \begin{Lemma} \label{leftpart} For any (relational) language $L$ satisfying $|L|\le\kappa\le\lambda$, $$(S^\lambda_{L},\equiv_{\infty,\aleph_0})\le^B_\lambda (\kappa\,CT_\lambda, \equiv_{\infty,\aleph_0})$$ \end{Lemma} {\bf Proof.}\quad For each $n\in\omega$, let $\<\varphi_{n,i}(\overline{x}):i<\gamma(n)\le\kappa\>$ be a maximal set of pairwise non-equivalent quantifier-free $L$-formulas with $\lg(\overline{x})=n$. As well, fix a bijection $\Phi:\omega\times\kappa\rightarrow\kappa$. Now, given any $L$-structure $M\in S^\lambda_L$, first note that since the universe of $M$ is $\lambda$, the finite sequences from $M$ naturally form a tree isomorphic to $\lambda^{<\omega}$ under initial segment. So $f(M)$ will consist of this tree, with $\trianglelefteq$ interpreted as the initial segment relation. Furthermore, for each $j\in\kappa$, choose $(n,i)\in\omega\times\kappa$ such that $\Phi(n,i)=j$. If $i<\gamma(n)$, then put $$P_j^{f(M)}:=\{\overline{\alpha}\in\lambda^n:M\models\varphi_{n,i}(\overline{\alpha})\}$$ (if $i\ge\gamma(n)$, then for definiteness, say that $P_j$ always fails on $f(M)$). Choose any $M,N\in S^\lambda_L$. It is apparent from the construction that if $M\equiv_{\infty,\aleph_0} N$, then $f(M)\equiv_{\infty,\aleph_0} f(N)$. The other direction is more interesting. Suppose that $f(M)\equiv_{\infty,\aleph_0} f(N)$. 
Consider the Levy collapsing poset $Lev(\aleph_0,\lambda)$: for any generic filter $G$, $V[G]$ contains a bijection $g:\omega\rightarrow\lambda$. We work in $V[G]$. Note that in $V[G]$, $f(M)$ and $f(N)$ are countable, $\equiv_{\infty,\aleph_0}$-equivalent structures. Thus, in $V[G]$, fix an $L_\kappa$-isomorphism $h:f(M)\rightarrow f(N)$. Using $h$, in $\omega$ steps we construct two branches $\eta,\nu\in \lambda^\omega$, where we think of $\eta$ as a branch through $f(M)$, while $\nu$ is a branch through $f(N)$, satisfying the following three conditions: \begin{itemize} \item For each $n\in\omega$, $h(\eta\restriction n)=\nu\restriction n$; \item $\{g(n):n\in\omega\}\subseteq{\rm range}(\eta)$; and \item $\{g(n):n\in\omega\}\subseteq{\rm range}(\nu)$. \end{itemize} Let $F=\{(\eta(n),\nu(n)):n\in\omega\}$. As $\{g(n):n\in\omega\}$ is all of $\lambda$, it follows that ${\rm dom}(F)=\lambda$ and ${\rm range}(F)=\lambda$. Furthermore, since $h(\eta\restriction n)=\nu\restriction n$, it follows that $P_j(\eta\restriction n)\leftrightarrow P_j(\nu\restriction n)$ for each $j$. Thus, for each $n$, the quantifier-free $L$-types of $\<\eta(i):i<n\>$ and $\<\nu(i):i<n\>$ are the same. In particular, it follows that $F$ is a bijection from $\lambda$ to $\lambda$ that preserves quantifier-free $L$-types. Thus, $F:M\rightarrow N$ is an isomorphism. Of course, the isomorphism $F$ lies in $V[G]$, but it follows easily by absoluteness that $M\equiv_{\infty,\aleph_0} N$ in $V$.
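To make the coloring used in this proof concrete, here is a small worked instance (an added illustration, with one particular, non-canonical choice of enumeration):

```latex
% Illustration (one particular choice, not fixed in the text):
% with L = {R}, R binary, a maximal pairwise non-equivalent list of
% quantifier-free formulas in (x_0, x_1) might begin
\varphi_{2,0} = R(x_0,x_1), \qquad
\varphi_{2,1} = R(x_1,x_0), \qquad
\varphi_{2,2} = R(x_0,x_0), \qquad \ldots
% A node \overline{\alpha} = (\alpha_0,\alpha_1) \in \lambda^2 of f(M)
% then carries the color P_{\Phi(2,i)} exactly when
% M \models \varphi_{2,i}(\alpha_0,\alpha_1), so the colors appearing
% along a branch \eta \in \lambda^\omega record the full quantifier-free
% type of \<\eta(i) : i \in \omega\> in M.
```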
\medskip \begin{Definition} {\em Given any trees $T$ and $\{S_\eta:\eta\in T\}$, we form the tree $T^*(S_\eta:\eta\in T)$ that `attaches $S_\eta$ to $T$ at $\eta$' as follows: The universe of $T^*(S_\eta:\eta\in T)$ (which, for simplicity, we write as $T^*$ below) is the disjoint union $$T\sqcup\bigsqcup_{\eta\in T} (S_\eta\setminus\{\<\>\})$$ and, for $u,v\in T^*$, we say $u\trianglelefteq_{T^*} v$ if and only if one of the following clauses holds: \begin{itemize} \item $u,v\in T$ and $u\trianglelefteq_T v$; or \item for some $\eta\in T$, $u,v\in S_\eta\setminus\{\<\>\}$ and $u\trianglelefteq_{S_\eta} v$; or \item $u\in T$, $v\in S_\eta\setminus\{\<\>\}$ and $u\trianglelefteq_T\eta$. \end{itemize} } \end{Definition} Note that in particular, elements from distinct $S_\eta$'s are incomparable, and that no element of any $S_\eta$ is `below' any element of $T$. It is easily checked that if $T$ and each of the $S_\eta$'s are subtrees of $\lambda^{<\omega}$, then the attached tree $T^*(S_\eta:\eta\in T)$ can also be construed as a subtree of $\lambda^{<\omega}$. \begin{Definition} {\em A {\em subtree of $\lambda^{<\omega}$} is simply a non-empty subset of $\lambda^{<\omega}$ that is closed under initial segments. Given a subtree $T$ of $\lambda^{<\omega}$, an element $\eta\in T$ is {\em contained in a branch} if there is some $\nu\in\lambda^\omega$ extending $\eta$ such that $\nu\restriction n\in T$ for every $n\in\omega$. A subtree $T$ of $\lambda^{<\omega}$ is {\em special} if, for every $\eta\in T$ that is contained in a branch, $\eta$ has no immediate successors that are leaves (i.e., every immediate successor of $\eta$ has a successor in $T$). } \end{Definition} \begin{Lemma} \label{rightpart} $(\aleph_0\, CT_\lambda,\equiv_{\infty,\aleph_0})\le^B_\lambda\, (\hbox{Special subtrees of $\lambda^{<\omega}$}, \equiv_{\infty,\aleph_0})$ \end{Lemma} {\bf Proof.}\quad Fix a bijection $\Phi:\omega\times\omega\rightarrow\omega\setminus\{0,1\}$.
Let $T_0$ be the tree $\lambda^{<\omega}$. Also, given any subset $V\subseteq\omega$, let $S_V$ be the rooted tree consisting of one copy of the tree $\omega^{\le m}$ for each $m\in V$. Other than being joined at the root, the copies of $\omega^{\le m}$ are disjoint. Now, suppose we are given $M\in\aleph_0\, CT_\lambda$, i.e., the tree $(\lambda^{<\omega},\trianglelefteq)$ together with countably many unary predicates $P_j(x)$. We construct a special tree $f(M)$ as follows: First, form the tree $T_0$. For each $\eta\in T_0$, let $$V(\eta):=\{\Phi(n,j):M\models P_j(\eta)\}$$ where $n=\lg(\eta)$. Note that each $V(\eta)\subseteq\omega\setminus\{0,1\}$. Let $f(M)$ be the tree $T_0^*(S_{V(\eta)}:\eta\in T_0)$. By the remark above, as $T_0$ and each $S_V$ are subtrees of $\lambda^{<\omega}$, $f(M)$ is also a subtree of $\lambda^{<\omega}$. Furthermore, note that $T_0$ is recognizable in $f(M)$ as being precisely those elements of $f(M)$ that are contained in an infinite branch. Moreover, for every element $\eta\in f(M)$ that is not contained in an infinite branch, there is a uniform bound on the lengths of $\nu\in f(M)$ extending $\eta$. Combining this with the fact that $1\not\in V(\eta)$ for any $\eta\in T_0$, we conclude that $f(M)$ is special. It is easily verified from the construction that if $M\equiv_{\infty,\aleph_0} N$, then $f(M)\equiv_{\infty,\aleph_0} f(N)$. Conversely, suppose that $M,N\in\aleph_0\, CT_\lambda$ and that $f(M)\equiv_{\infty,\aleph_0} f(N)$. Choose any generic filter $G$ for the Levy collapse $Lev(\aleph_0,\lambda)$. Then, in $V[G]$, there is a tree isomorphism $h:f(M)\rightarrow f(N)$ as both $f(M)$ and $f(N)$ are countable and back-and-forth equivalent. It suffices to prove that $M$ and $N$ are isomorphic in $V[G]$. To see this, first note that since `being part of an infinite branch' is an isomorphism invariant, the restriction of $h$ to $T_0$ is a tree isomorphism between the $T_0$ of $M$ and the $T_0$ of $N$.
To finish, we need only show that for every $\eta\in T_0$ and $j\in\omega$, $M\models P_j(\eta)$ if and only if $N\models P_j(h(\eta))$. To see this, let $n=\lg(\eta)$ and $k=\Phi(n,j)$. Then $M\models P_j(\eta)$ if and only if there is an immediate successor $\nu$ of $\eta$ that is not part of an infinite branch, but has an extension $\mu$ of length $n+k$ that is a leaf. As this condition is also preserved by $h$, we conclude that $h|_{T_0}$ preserves each of the $\aleph_0$ colors as well. \medskip \begin{Corollary} There are $\lambda$ pairwise $\equiv_{\infty,\aleph_0}$-inequivalent special subtrees of $\lambda^{<\omega}$. \end{Corollary} {\bf Proof.}\quad Let $L=\{R\}$ consist of a single, binary relation, and let $DG$ be the class of all directed graphs (i.e., $R$-structures) with universe $\lambda$. It is well known that there are at least $\lambda$ pairwise $\equiv_{\infty,\aleph_0}$-inequivalent directed graphs. But, by composing the maps given in Lemmas \ref{leftpart} and \ref{rightpart}, we get a $\lambda$-Borel embedding of $(DG,\equiv_{\infty,\aleph_0})$ into $(\hbox{Special subtrees of $\lambda^{<\omega}$},\equiv_{\infty,\aleph_0})$ preserving $\equiv_{\infty,\aleph_0}$ in both directions. \medskip \begin{Theorem} \label{lambdaBorelcomplete} For any infinite cardinal $\lambda$, $(\hbox{Subtrees of $\lambda^{<\omega}$},\equiv_{\infty,\aleph_0})$ is $\lambda$-Borel complete. \end{Theorem} {\bf Proof.}\quad By Lemma~\ref{leftpart}, it suffices to show $$(\lambda\, CT_\lambda,\equiv_{\infty,\aleph_0})\le^B_\lambda (\hbox{Subtrees of $\lambda^{<\omega}$},\equiv_{\infty,\aleph_0})$$ From the Corollary above, fix a set $\{A_i:i\in\lambda\}$ of pairwise $\equiv_{\infty,\aleph_0}$-inequivalent special subtrees of $\lambda^{<\omega}$. As notation, let $A_{\<i\>}$ denote the tree $A_i$, and let $A_{\<\>}$ be the two-element tree $\{\<\>,a\}$ satisfying $\<\>\triangleleft a$. 
For each $u\subseteq\lambda$, let $T_u=\{\<\>,a\}\cup\{\<i\>:i\in u\}$ and let $S_u=T_u^*(A_{\<i\>}:i\in u)$. Note that for each $u\subseteq\lambda$, $S_u$ has a unique leaf $a$ attached to $\<\>$, and the trees $S_u$ and $S_v$ are isomorphic if and only if $u=v$. The proof now follows the proof of Lemma~\ref{rightpart}, using the trees $S_u$ to code the color of a node. More formally, let $T_0:=\lambda^{<\omega}$ and fix an enumeration $\<P_j(x):j\in\lambda\>$ of the unary predicates. Given any $M\in\lambda\, CT_\lambda$, for each node $\eta\in T_0$, let $V(\eta):=\{j\in\lambda:M\models P_j(\eta)\}$. Let $f(M)$ be the tree $T_0^*(S_{V(\eta)}:\eta\in T_0)$. Note that as each of the $A_i$'s is special, $T_0$ is detectable in $f(M)$ as being the set of all nodes $\eta$ that are part of an infinite branch {\bf and} have an immediate successor that is a leaf. The rest of the proof follows that of Lemma~\ref{rightpart}. In particular, given an isomorphism $h:f(M)\rightarrow f(N)$ in $V[G]$, the restriction of $h$ to $T_0$ is an isomorphism of $M$ and $N$ as $\lambda\, CT_\lambda$-structures. \medskip \section{The Borel completeness of $\aleph_0$-stable, eni-DOP theories} This section is devoted to the proofs of Theorem~\ref{eniDOPthm} and Corollary~\ref{eniDOPcor}. As the proof of the former is lengthy, the section is split into four subsections. The first describes two distinct types of eni-DOP witnesses. The second shows how one can encode bipartite graphs into models of $T$. However, Proposition~\ref{biggy}, which gives a bit of positive information about the shapes of the bipartite graphs $G$ and $H$ whenever the associated models $M_G$ and $M_H$ are isomorphic, is rather weak. Thus, instead of trying to recover arbitrary bipartite graphs, in the third subsection we describe how to encode subtrees ${\cal T}\subseteq\lambda^{<\omega}$ into bipartite graphs $G^{[m]}_{\cal T}$, where the nodes of ${\cal T}$ correspond to complete, bipartite subgraphs of $G^{[m]}_{\cal T}$.
Finally, in the fourth subsection we prove Theorem~\ref{eniDOPthm}, with Corollary~\ref{eniDOPcor} following easily from it. \subsection{Two types of eni-DOP witnesses} \label{twotypes} Suppose that $T$ has eni-DOP. Call a 5-tuple $(M_0,M_1,M_2,M_3,r)$ an {\em eni-DOP witness} if it satisfies the assumptions of Theorem~\ref{equiv}(5). A {\em finite approximation ${\cal F}$} to an eni-DOP witness is a 5-tuple $(a,b,c,d,r_d)$, where $a,b,c,d$ are finite tuples from $(M_0,M_1,M_2,M_3)$, respectively, ${\rm tp}(b/a)$ and ${\rm tp}(c/a)$ are each stationary, regular types, each of $b,c$ contains $a$, $b$ and $c$ are independent over $a$, $r$ is based and stationary on $d$ with $r_d\in S(d)$ parallel to $r$, and ${\rm tp}(d/bc)\vdash{\rm tp}(d/M_1M_2)$. The last condition, coupled with the fact that $M_0,M_1,M_2$ are each a-models, yields the following {\em Extendability Condition:} $${\rm tp}(d/bc)\vdash{\rm tp}(d/b^*c^*)$$ for all $a^*\supseteq a$, $b^*\supseteq ba^*$, $c^*\supseteq ca^*$ such that $a^*$ is independent from $bc$ over $a$ and $b^*$ is independent from $c^*$ over $a^*$. As well, $r_d$ is ENI, $r_d\perp b$, and $r_d\perp c$. For a fixed choice ${\cal F}=(a,b,c,d,r_d)$ of a finite approximation, the {\em ${\cal F}$-candidates over $a$} consist of all 4-tuples $(b',c',d',r_{d'})$ such that ${\rm tp}(a,b,c,d)={\rm tp}(a,b',c',d')$. There is a natural equivalence relation $\sim_{{\cal F}}$ on the ${\cal F}$-candidates over $a$ defined by $$(b,c,d,r_d)\sim_{{\cal F}}(b',c',d',r_{d'})\qquad\hbox{if and only if} \qquad r_d\not\perp r_{d'}$$ \begin{Lemma} \label{not4} For any eni-DOP witness $(M_0,M_1,M_2,M_3,r)$, for any finite approximation ${\cal F}$, and for any pair $(b,c,d,r_d)$, $(b',c',d',r_{d'})$ of equivalent ${\cal F}$-candidates over $a$, every element of the set $\{b,c,b',c'\}$ depends on the other three over $a$. \end{Lemma} {\bf Proof.}\quad Everything is symmetric, so assume by way of contradiction that $\fg{b}{a}{cb'c'}$.
First, as $\fg{b'c'}{c}{b}$, the Extendability Condition implies that ${\rm tp}(d'/b'c')\vdash{\rm tp}(d'/b'c'bc)$. In particular, $\fg{d'}{b'c'}{bc}$, so $\fg{b}{c}{b'c'd'}$ follows by the symmetry and transitivity of non-forking. Second, it follows from this and the Extendability Condition that ${\rm tp}(d/bc)\vdash{\rm tp}(d/bcb'c'd')$, so $\fg{d}{bc}{b'c'd'}$. Combining these two facts yields $$\fg{d}{c}{b'c'd'}$$ But then, as $r_d\in S(d)$ is orthogonal to $c$, by, e.g., Claim~1.1 of Chapter~X of \cite{Shc}, $r_d$ would be orthogonal to $b'c'd'$, which contradicts $r_d\not\perp r_{d'}$. \medskip It follows from the previous Lemma that there are two types of behavior of a finite approximation ${\cal F}$. The following definition describes this dichotomy. \begin{Definition} {\em Fix an eni-DOP witness $(M_0,M_1,M_2,M_3,r)$. A finite approximation ${\cal F}=(a,b,c,d,r_d)$ of it is {\em flexible\/} if there is an equivalent ${\cal F}$-candidate $(b',c',d',r_{d'})$ over $a$ for which some 3-element subset of $\{b,c,b',c'\}$ is independent over $a$. We say that the eni-DOP witness $(M_0,M_1,M_2,M_3,r)$ is of {\em flexible type} if it has a flexible finite approximation. A witness is {\em inflexible\/} if it is not flexible. } \end{Definition} \begin{Lemma} \label{4.3} Suppose that $(a,b,c,d,r_d)$ and $(a',b',c',d',r_{d'})$ are each finite approximations of an inflexible eni-DOP witness satisfying ${\rm tp}(a)={\rm tp}(a')$ and $r_{d}\not\perp r_{d'}$. Then there is no finite set $A\supseteq aa'$ such that ${\rm tp}(bc/A)$ does not fork over $a$, exactly one element of $\{b',c'\}$ is in $A$, and the other element is independent from $A$ over $a'$. \end{Lemma} {\bf Proof.}\quad By way of contradiction suppose that $A$ were such a set. For definiteness, suppose $b'\in A$ and $\fg{c'}{a'}{A}$. Let ${\cal F}$ denote the finite approximation exemplified by $(A,bA,cA,dA,r_{dA})$.
Fix an automorphism $\sigma\in Aut({\frak C})$ fixing $Ac'$ pointwise such that $\fg{bcd}{Ac'}{\sigma(b)\sigma(c)\sigma(d)}$. Then $(\sigma(b)A,\sigma(c)A,\sigma(d)A,r_{\sigma(d)A})$ is an ${\cal F}$-candidate over $A$. Moreover, since $r_{dA}\not\perp r_{d}\not\perp r_{d'}\not\perp r_{\sigma(d)A}$, the transitivity of non-orthogonality among regular types implies that it is equivalent to $(bA,cA,dA,r_{dA})$. We will obtain a contradiction to the inflexibility of the eni-DOP witness by exhibiting a 3-element subset of $\{b,c,\sigma(b),\sigma(c)\}$ that is independent over $A$. To see this, first note that since $b$ and $c$ are independent over $A$ and ${\rm tp}(c'/A)$ has weight 1, $c'$ cannot fork with both $b$ and $c$ over $A$. For definiteness, suppose that $b$ and $c'$ are independent over $A$. It follows that $\sigma(b)$ is also independent from $c'$ over $A$. These facts, together with the independence of $b$ and $\sigma(b)$ over $Ac'$, imply that the three-element set $\{b,\sigma(b),c'\}$ is independent over $A$. We next claim that ${\rm tp}(bc/Ac')$ forks over $A$. If this were not the case, recalling that $b'\in A$, we would have $\fg{bc}{aa'}{b'c'}$. Then, by two applications of the Extendability Condition, we would have $\fg{bcd}{aa'}{b'c'd'}$, which would contradict $r_d\not\perp r_{d'}$. But now, the results in the previous two paragraphs, together with the fact that ${\rm tp}(c/Ab)$ has weight 1, imply that the set $\{b,\sigma(b),c\}$ is independent over $A$, contradicting the inflexibility of the eni-DOP witness. \medskip \subsection{Coding bipartite graphs into models} \label{4.2} In this subsection, we take a particular eni-DOP witness and show how we can embed an arbitrary bipartite graph $G$ into a model $M_G$. This mapping will be Borel, and isomorphic graphs will give rise to isomorphic models, but the converse is less clear.
Proposition~\ref{biggy} demonstrates that the graphs $G$ and $H$ must be similar in some weak sense whenever $M_G$ and $M_H$ are isomorphic. Fix an eni-DOP witness $(M_0,M_1,M_2,M_3,r)$ and a finite approximation ${\cal F}=(a,b,c,d,r_d)$ of it, choosing ${\cal F}$ to be flexible if the witness is. As notation, let $p={\rm tp}(b/a)$ and $q={\rm tp}(c/a)$. We begin by describing how to code arbitrary bipartite graphs into models of $T$. Given a bipartite graph $G=(L_G,R_G,E_G)$, choose sets ${\cal B}_G:=\{b_g:g\in L_G\}$ and ${\cal C}_G:=\{c_h:h\in R_G\}$ such that ${\cal B}_G\cup{\cal C}_G$ is independent over $a$, ${\rm tp}(b_g/a)=p$ for each $b_g\in{\cal B}_G$, and ${\rm tp}(c_h/a)=q$ for each $c_h\in{\cal C}_G$. As well, for each $(g,h)\in L_G\times R_G$, choose an element $d_{g,h}$ such that ${\rm tp}(d_{g,h}b_gc_h/a)={\rm tp}(dbc/a)$ and let $r_{g,h}\in S(d_{g,h})$ be conjugate to $r_d$. Note that $r_{g,h}\perp r_{g',h'}$ unless $(g,h)=(g',h')$. Let ${\cal D}_G=\{d_{g,h}:(g,h)\in E_G\}$ and let ${\cal R}_G=\{r_{g,h}:(g,h)\in E_G\}$. Inductively construct models $M^n_G$ of $T$ as follows: $M^0_G$ is any prime model over ${\cal B}_G\cup{\cal C}_G\cup{\cal D}_G$. Given $M^n_G$, let ${\cal P}_n=\{p\in S(M^n_G):p\perp {\cal R}_G\}$. By the $\aleph_0$-stability of $T$, ${\cal P}_n$ is countable. Let ${\cal E}_n=\{e_s:s\in {\cal P}_n\}$ be independent over $M^n_G$ with each $e_s$ realizing $s$, and let $M^{n+1}_G$ be prime over $M^n_G\cup{\cal E}_n$. Finally, let $M_G=\bigcup_{n\in\omega} M^n_G$. It is easily verified that if $G$ has universe $\lambda$, then the mapping $G\mapsto M_G$ is $\lambda$-Borel. Moreover, it is easy to see that for regular types $r\in S(M_G)$, $$r \ \hbox{has finite dimension in $M_G$ if and only if $r\not\perp r_{g,h}$ for some $(g,h)\in E_G$}$$ Suppose that $f:M_G\rightarrow M_H$ were an isomorphism. Then $f$ maps the regular types in $S(M_G)$ of finite dimension onto the regular types in $S(M_H)$ of finite dimension. 
Thus, by construction of $M_G$ and $M_H$, this correspondence yields a bijection $$\pi_f:E_G\rightarrow E_H$$ Unfortunately, this identification need not extend to a bipartite graph isomorphism between $G$ and $H$. Specifically, there might be edges $e_1,e_2\in E_G$ that share a vertex of $G$, while the corresponding edges $\pi_f(e_1),\pi_f(e_2)\in E_H$ do not have a common vertex. The bulk of our argument will be to show that images of sufficiently large, complete bipartite subgraphs of $G$ cannot be too wild. To make this precise, for $X\subseteq E_G$, let $v_G(X)$ denote the smallest subset of the vertices of $G$ with $X\subseteq E_{v_G(X)}$. For $\ell$ very large, call a graph $G$ {\em almost $\ell$-complete bipartite} if it is $m_1\times m_2$ bipartite with $0.99\ell\le m_i\le \ell$ for $i=1,2$ and each vertex has valence at least $0.9\ell$. The proof of the following Proposition is substantial, and occupies the remainder of this subsection. \begin{Proposition} \label{biggy} For any bipartite graphs $G$ and $H$ and for any isomorphism $f:M_G\rightarrow M_H$, there is a number $\ell^*$, depending only on $f$, such that for all $\ell\ge\ell^*$, if $G_0\subseteq G$ is any complete $\ell\times\ell$ bipartite subgraph, then the subgraph of $H$ with vertex set $v_H(\pi_f(E_{G_0}))$ contains an almost $\ell$-complete bipartite subgraph. \end{Proposition} {\bf Proof.}\quad Fix bipartite graphs $G$, $H$, and an isomorphism $f:M_G\rightarrow M_H$. As notation, let $a'=f^{-1}(a)$, let ${\cal B}_H'=\{f^{-1}(b):b\in {\cal B}_H\}$, and let ${\cal C}_H'=\{f^{-1}(c):c\in{\cal C}_H\}$. Let $X\subseteq {\cal B}_G\cup{\cal C}_G$ be minimal such that ${\rm tp}(a'/a{\cal B}_G{\cal C}_G)$ does not fork over $Xa$, and let $X'\subseteq{\cal B}_H'\cup{\cal C}_H'$ be minimal such that ${\rm tp}(a/a'{\cal B}_H'{\cal C}_H')$ does not fork over $X'a'$. Note that $|X|\le {\rm wt}(a'/a)$ and $|X'|\le{\rm wt}(a/a')$.
Let $\Lambda^*$ be the set of non-orthogonality classes of regular types in $S(M_G)$ of finite dimension in $M_G$. For each $S\in\Lambda^*$ let $(b_s,c_s)$ be the unique element of ${\cal B}_G\times{\cal C}_G$ such that there is a candidate $(b_s,c_s,d,r_d)$ over $a$ with $r_d\in S$, and let $(b_s',c_s')$ be the unique element of ${\cal B}_H'\times{\cal C}_H'$ such that there is a candidate $(b_s',c_s',d',r_{d'})$ over $a'$ with $r_{d'}\in S$. For $\Lambda$ a finite subset of $\Lambda^*$, let $B(\Lambda)=\{b_s:S\in\Lambda\}$, $C(\Lambda)=\{c_s:S\in \Lambda\}$, and $v(\Lambda)=B(\Lambda)\cup C(\Lambda)$. Dually, define $B'(\Lambda)$, $C'(\Lambda)$, and $v'(\Lambda)$ using $(b'_s,c'_s)$ in place of $(b_s,c_s)$. The proof splits into two cases depending on whether our eni-DOP witness is flexible or inflexible. \medskip\par \noindent{\bf Case 1:} The eni-DOP witness is inflexible. \medskip This case will be substantially easier than the other, and in fact, we prove that there is a number $e$ such that for all sufficiently large $\ell$, the image of any $\ell\times\ell$ complete, bipartite subgraph contains an $(\ell-e)\times(\ell-e)$ complete, bipartite subgraph. The simplicity of this case is primarily due to the following claim. \medskip\par \noindent{\bf Claim 1.} For any finite $\Lambda\subseteq\Lambda^*$ such that $v(\Lambda)$ is disjoint from $X$ and $v'(\Lambda)$ is disjoint from $X'$, we have $|v(\Lambda)|=|v'(\Lambda)|$. {\bf Proof.}\quad To see this, we again split into cases. First, if $p\perp q$, then we handle the two `halves' separately. Note that for each $S\in \Lambda$, ${\rm tp}(b_sc_s/aa')$ does not fork over $a$, ${\rm tp}(b'_sc'_s/aa')$ does not fork over $a'$, and by Lemma~\ref{4.3}, each element of $\{b_s,c_s\}$ forks with $b'_sc'_s$ over $aa'$. Since $p\perp q$, this implies that $b_s$ forks with $b'_s$ over $aa'$. It follows that, working over $aa'$, $$Cl_p(B(\Lambda))=Cl_p(B'(\Lambda))$$ hence $|B(\Lambda)|=|B'(\Lambda)|$.
It follows by a symmetric argument that $Cl_q(C(\Lambda))=Cl_q(C'(\Lambda))$, so $|C(\Lambda)|=|C'(\Lambda)|$. It follows immediately that $|v(\Lambda)|=|v'(\Lambda)|$. On the other hand, if $p\not\perp q$, then $Cl_p$ is a closure relation on $p^*({\frak C})\cup q^*({\frak C})$, where $p^*$ (resp.\ $q^*$) is the non-forking extension of $p$ (resp.\ $q$) to $aa'$. Furthermore, for each $S\in\Lambda$ we have $Cl_p(b_sc_s)=Cl_p(b'_sc'_s)$. It follows that $Cl_p(v(\Lambda))=Cl_p(v'(\Lambda))$. As each set is independent over $aa'$, we conclude that $|v(\Lambda)|=|v'(\Lambda)|$. Let $w={\rm wt}(a'/a)$ and $e=w+{\rm wt}(a/a')^2$. Suppose that $G_0\subseteq G$ is an $\ell\times\ell$ complete, bipartite subgraph. Since $|X|\le w$, there is an $(\ell-w)\times(\ell-w)$ complete subgraph $G_0^*\subseteq G_0$ such that the vertex set of $G_0^*$ is disjoint from $X$. By our choice of $e$ there is an $(\ell-e)\times(\ell-e)$ complete subgraph $G_1\subseteq G_0^*$ such that $\pi_f(b,c)$ is not contained in $X'$ for all pairs $(b,c)\in E_{G_1}$. But then, by Lemma~\ref{4.3}, we have $\pi_f(b,c)$ is disjoint from $X'$ for all $(b,c)\in E_{G_1}$. Now, $G_1$ is an $(\ell-e)\times(\ell-e)$ complete, bipartite subgraph of $G$. In particular, $G_1$ has $2(\ell-e)$ vertices and $(\ell-e)^2$ edges. Let $H_1$ be the subgraph of $H$ whose edges are $E_{H_1}:=\pi_f(E_{G_1})$ and whose vertices are $v(H_1):=v_H(E_{H_1})$. Then $|E_{H_1}|=(\ell-e)^2$ since $\pi_f$ is a bijection and $$|v(H_1)|=|v_H(E_{H_1})|=|v_G(E_{G_1})|=2(\ell-e)$$ by Claim 1. By a classical optimal packing result, this is only possible when $H_1$ is itself a complete, $(\ell-e)\times(\ell-e)$ bipartite subgraph of $H$. \medskip\par \noindent{\bf Case 2:} The eni-DOP witness is flexible. \medskip As we insisted that our finite approximation be flexible, it follows from Lemma~\ref{not4} that $p\not\perp q$, so $p$-closure is a dependence relation on $p({\frak C})\cup q({\frak C})$.
As well, for any candidate $(b,c,d,r_d)$ over $a$ and for any finite $A\supseteq a$, there is an equivalent candidate $(b',c',d',r_{d'})$ over $a$ such that $w_p(b'c'/A)=1$. \begin{Definition} {\em For any finite subgraph $G_0\subseteq G$, let $\Lambda(G_0)$ be the set of non-orthogonality classes $\{[r_{d_{g,h}}]: (g,h)\in E_{G_0}\}.$ Note that $|\Lambda(G_0)|=|E_{G_0}|$ by the pairwise orthogonality of the types $r_{d_{g,h}}$. A {\em manifestation ${\cal M}={\cal M}(\Lambda,a)$ over $a$} is a set of candidates $\{(b_s,c_s,d_s,r_{d_s}):S\in\Lambda\}$ over $a$ with $r_{d_s}\in S$ for each $S\in\Lambda$. Associated to any manifestation ${\cal M}$ is a bipartite graph $G({\cal M})$ with `Left Nodes' $L({\cal M})=\{b_s:S\in\Lambda\}$, `Right Nodes' $R({\cal M})=\{c_s:S\in\Lambda\}$, vertices $v({\cal M})=L({\cal M})\cup R({\cal M})$, and edges $E({\cal M})=\{(b_s,c_s):S\in\Lambda\}$. If $G_0$ is a subgraph of $G$, then the {\em canonical manifestation of $\Lambda(G_0)$ over $a$ inside $M_G$\/} is the set $$\{(b_g,c_h,d_{g,h},r_{g,h}):(g,h)\in E_{G_0}\}$$ A set $A$ {\em represents $\Lambda$ over $a$} if $a\subseteq A$ and $v({\cal M})\subseteq A$ for some manifestation ${\cal M}$ of $\Lambda$ over $a$. A manifestation ${\cal M}'$ is {\em $A$-free} if $w_p(b_s'c_s'/A)=1$ for each $S\in\Lambda$ and $\{(b_s',c_s'):S\in\Lambda\}$ is independent over $A$. } \end{Definition} Now, working in the monster model ${\frak C}$, we define a measure of the complexity of $\Lambda$ over $a$. First, note that for any candidate $(b,c,d,r_d)$ over $a$, there is an equivalent candidate $(b',c',d',r_{d'})$ over $a$ with $w_p(b'c'/abc)=1$. By choosing $b'c'$ to be independent over $abc$ from any given $A\supseteq abc$ we can insist that $w_p(b'c'/A)=1$. It follows that $A$-free manifestations of $\Lambda$ exist over any set $A$ representing a finite $\Lambda$. Thus, the following definition makes sense.
\begin{Definition} {\em The {\em maximal weight,\/} $mw(\Lambda,a)$, is the largest integer $m$ such that for all finite $A$ representing $\Lambda$ over $a$, there is an $A$-free manifestation ${\cal M}'={\cal M}'(\Lambda,a)$ over $a$ with $w_p(v({\cal M}')/a)=m+|\Lambda|$. } \end{Definition} \begin{Lemma} \label{connected} Suppose that $G$ is a bipartite graph and $G_0\subseteq G$ is a connected subgraph of $G$. Let ${\cal M}(\Lambda(G_0),a)$ be the canonical manifestation of $\Lambda(G_0)$ inside $M_G$ and let ${\cal M}'$ be any other manifestation of $\Lambda(G_0)$ over $a$. Then $$Cl_p(v({\cal M}')\cup\{v\})=Cl_p(v({\cal M}')\cup v(G_0))$$ for any $v\in v(G_0)$. \end{Lemma} {\bf Proof.}\quad Arguing by symmetry and induction, it suffices to show that for all nonempty $B\subseteq v(G_0)$ and every $c\in v(G_0)\setminus B$ such that $(b,c)\in E_{G_0}$ for some $b\in B$ we have that $c\in Cl_p(v({\cal M}')\cup B)$. But this is immediate, since $Cl_p(\{b',c',b,c\})=Cl_p(\{b',c',b\})$ for all equivalent candidates $(b,c,d,r_d)$ and $(b',c',d',r_{d'})$ over $a$. \medskip \begin{Lemma} \label{leftright} $k(G_0)\le mw(\Lambda(G_0),a)\le |v(G_0)|$ for any bipartite graph $G$ and any finite $G_0\subseteq G$. \end{Lemma} {\bf Proof.}\quad The upper bound is very soft. Let $A\supseteq a\cup v(G_0)$ be arbitrary and let ${\cal M}'$ be any other manifestation of $\Lambda(G_0)$ over $a$. Then $$w_p(v({\cal M}')/a)\le w_p(v({\cal M}')v(G_0)/a)= w_p(v({\cal M}')/av(G_0)) + w_p(v(G_0)/a)$$ Since $w_p(b'_sc'_s/ab_sc_s)\le 1$ for each $S\in \Lambda(G_0)$, we have $w_p(v({\cal M}')/av(G_0))\le |\Lambda(G_0)|$. Also, by the independence of the nodes in $M_G$, $w_p(v(G_0)/a)=|v(G_0)|$. The upper bound on $mw(\Lambda(G_0),a)$ follows immediately. For the lower bound, again choose any $A\supseteq av(G_0)$ and let $C\subseteq v(G_0)$ consist of one vertex from every connected component of $G_0$. Clearly, $A$ represents $\Lambda(G_0)$ and $|C|=CC(G_0)$.
Let ${\cal M}'$ be any $A$-free manifestation of $\Lambda(G_0)$ over $a$. Then $$w_p(v({\cal M}')/a)\ge w_p(v({\cal M}')C/a)-CC(G_0)= w_p(v({\cal M}')v(G_0)/a)-CC(G_0)$$ with the equality coming from Lemma~\ref{connected}. As before, for each $S\in\Lambda(G_0)$, $w_p(b_s'c_s'/ab_sc_s)\le 1$ so $w_p(v({\cal M}')/av(G_0))\le |\Lambda(G_0)|$. On the other hand, the $A$-freeness of ${\cal M}'$ implies that $w_p(v({\cal M}')/A)=|\Lambda(G_0)|$, hence $w_p(v({\cal M}')/av(G_0))=|\Lambda(G_0)|$. Thus, $$w_p(v({\cal M}')v(G_0)/a)=w_p(v({\cal M}')/v(G_0)a)+w_p(v(G_0)/a) =|\Lambda(G_0)|+|v(G_0)|$$ from which the lower bound follows as well. \medskip Now, returning to our isomorphism $f:M_G\rightarrow M_H$, suppose that $G_0$ is any finite subgraph of $G$ that is disjoint from $X$, i.e., so that ${\rm tp}(G_0/aa')$ does not fork over $a$. We then claim: \medskip\par \noindent{\bf Claim 2:} $mw(\Lambda(G_0),a')\le |v(G_0)| + wt(a/a')$ {\bf Proof.}\quad Choose any finite $A$ containing $a$, $a'$, $v(G_0)$, and $v_H(\pi_f(E_{G_0}))$. So $A$ represents $\Lambda(G_0)$ over $a'$. Let ${\cal M}'$ be any $A$-free manifestation of $\Lambda(G_0)$ over $a'$. Now $$w_p(v({\cal M}')/a')\le w_p(v({\cal M}')aG_0/a')=w_p(v({\cal M}')/aa'G_0)+w_p(aG_0/a')$$ But, as before, $w_p(b'_sc'_s/aa'b_sc_s)\le 1$, so $w_p(v({\cal M}')/aa'G_0)\le |\Lambda(G_0)|$. Also, $$w_p(aG_0/a')=w_p(G_0/aa')+wt(a/a')=|v(G_0)|+wt(a/a')$$ and the Claim follows. \medskip Finally, choose a complete $\ell\times\ell$ bipartite subgraph $G_0\subseteq G$, where $\ell$ is sufficiently large with respect to $W=wt(a/a')$. Let $H_0$ be the bipartite graph with vertices $v_H(\pi_f(E_{G_0}))$ and edges $\pi_f(E_{G_0})$ and let $H_0^*$ be the subgraph of $H$ with the same vertex set as $H_0$. Note that $E_{H_0}\subseteq E_{H_0^*}$, but that equality need not hold. As $G_0$ is $\ell\times\ell$ complete bipartite, $|v(G_0)|=2\ell$ and $|\Lambda(G_0)|=\ell^2$.
It follows immediately that $|E_{H_0}|=\ell^2$ and it follows from Claim~2 and Lemma~\ref{leftright} that $$k(H_0)\le mw(\Lambda(G_0),a')\le 2\ell +W$$ where $W=wt(a/a')$. So, by Corollary~\ref{finitecomb} of the Appendix, $H_0$ contains an almost $\ell$-complete bipartite subgraph $H_1$. But then, $H_1^*$, which is the subgraph of $H$ with the same vertex set as $H_1$, is almost $\ell$-complete as well. \medskip \subsection{Coding trees by complete, bipartite subgraphs} \label{sec4.3} As Proposition~\ref{biggy} is rather weak, we give up on coding arbitrary bipartite graphs into models of $T$. Rather, we seek to code subtrees of $\lambda^{<\omega}$ into bipartite graphs that have large, complete subgraphs. Fix a sufficiently large integer $m$ and a tree ${\cal T}\subseteq \lambda^{<\omega}$. We will construct a bipartite graph $G^{[m]}_{\cal T}$, whose $7m\times 7m$ complete bipartite subgraphs $B^m_{\cal T}(\eta)$ code nodes $\eta\in {\cal T}$. Moreover, additional information about the level of $\eta$ and its set of immediate successors will be coded by the size of the intersection of $B^m_{\cal T}(\eta)$ and $B^m_{\cal T}(\nu)$ for other $\nu\in {\cal T}$. More precisely, fix a tree $({\cal T},\trianglelefteq)$ and a large integer $m$. We first define a bipartite graph $preG^{[m]}_{\cal T}$ to have universe ${\cal T}\times m\times 14$ with the edge relation $$\{((\eta,i_1,n_1),(\eta,i_2,n_2)):\eta\in {\cal T}, i_1,i_2\in m, \hbox{$n_1+n_2$ is odd}\}$$ So the `left hand side' of $pre G^{[m]}_{\cal T}$ is $L={\cal T}\times m\times\{n\in 14:n$ odd$\}$ and the `right hand side' is $R={\cal T}\times m\times\{n\in 14:n$ even$\}$, thereby associating a $7m\times 7m$ complete, bipartite graph to each node $\eta\in {\cal T}$.
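The $preG^{[m]}_{\cal T}$ construction, together with the relation $E_0$ and its equivalence closure $E$ defined in the next paragraph, is concrete enough to machine-check on a tiny tree. The following Python sketch is not part of the paper; the helper names (`subtree_closure`, `build_graph`) and the induced edge relation on the quotient are my own reading of the construction. It builds the quotient and verifies that each block $B^m_{\cal T}(\eta)$ is a $7m\times 7m$ complete bipartite graph:

```python
from itertools import product

def subtree_closure(nodes):
    """Close a set of tuples under initial segments, giving a subtree."""
    T = set()
    for eta in nodes:
        for k in range(len(eta) + 1):
            T.add(eta[:k])
    return T

def build_graph(T, m):
    """Build preG^[m]_T on T x m x 14 and collapse by the equivalence
    relation E generated by E_0 (identify (eta,i,n) with (succ,i,n) for
    n in {0,1} at the root and n in {10,...,13} elsewhere)."""
    points = [(eta, i, n) for eta in T for i in range(m) for n in range(14)]
    parent = {p: p for p in points}          # union-find for E
    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]    # path halving
            p = parent[p]
        return p
    for (eta, i, n) in points:
        for succ in T:
            if len(succ) == len(eta) + 1 and succ[:len(eta)] == eta:
                if (len(eta) == 0 and n in (0, 1)) or \
                   (len(eta) > 0 and n in (10, 11, 12, 13)):
                    parent[find((eta, i, n))] = find((succ, i, n))
    classes = {}
    for p in points:
        classes.setdefault(find(p), set()).add(p)
    # two E-classes are adjacent iff some representatives sit at the
    # same tree node with n_1 + n_2 odd
    edges = set()
    for g, h in product(classes, repeat=2):
        if any(e1 == e2 and (n1 + n2) % 2 == 1
               for (e1, _, n1) in classes[g] for (e2, _, n2) in classes[h]):
            edges.add((g, h))
    blocks = {eta: {find((eta, i, n)) for i in range(m) for n in range(14)}
              for eta in T}
    return classes, edges, blocks
```

Running this on the four-node tree $\{\<\>,\<0\>,\<0,0\>,\<1\>\}$ with $m=2$ confirms that every block has $7m=14$ classes on each side, joined completely and with no edges within a side.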
Next, define a binary relation $E_0$ on $preG^{[m]}_{\cal T}$ by $(\eta_1,i_1,n_1) E_0 (\eta_2,i_2,n_2)$ if and only if \begin{itemize} \item $\eta_2$ is an immediate successor of $\eta_1$, $i_1=i_2$, $n_1=n_2$ and \item either $\lg(\eta_1)=0$ and $n_1\in\{0,1\}$ or $\lg(\eta_1)>0$ and $n_1\in\{10,11,12,13\}$. \end{itemize} Let $E$ be the smallest equivalence relation containing $E_0$, i.e., the reflexive, symmetric and transitive closure of $E_0$. Let $G^{[m]}_{\cal T}:=preG^{[m]}_{\cal T}/E$. As notation, for each $\eta\in{\cal T}$, let $B^m_{\cal T}(\eta)=\{g\in G^{[m]}_{\cal T}: (\eta,i,n)\in g$ for some $i<m,n<14\}$, let ${\cal S}^m_{\cal T}=\{B^m_{\cal T}(\eta):\eta\in{\cal T}\}$, and let $g_{\cal T}:{\cal T}\rightarrow{\cal S}^m_{\cal T}$ be the bijection $\eta\mapsto B^m_{\cal T}(\eta)$. For all of these, when ${\cal T}$ and $m$ are clear, we delete reference to them. Finally, call an element $g\in G^{[m]}_{\cal T}$ a {\em singleton} if $g=\{(\eta,i,n)\}$ for a single element $(\eta,i,n)\in preG^{[m]}_{\cal T}$. All of the following Facts are immediate: \begin{Fact} \label{gg} \begin{enumerate} \item Every $B(\eta)$ is a $7m\times 7m$ complete, bipartite graph; \item If $g\in B(\eta)$ is a singleton and $E(g,h)$, then $h\in B(\eta)$; \item For all $\eta\in{\cal T}$ and $i<m$, the class of $(\eta,i,n)$ is a singleton for all $2\le n\le 9$; \item \label{important} If $lg(\nu)<lg(\eta)$, then $B(\nu)\cap B(\eta)\neq\emptyset$ if and only if $\nu=\eta^-$. Moreover, a nonempty intersection is a complete $m\times m$ bipartite graph if $\eta^-=\<\>_{\cal T}$ and the intersection is $2m\times 2m$ complete, bipartite if $\eta^-\neq\<\>_{\cal T}$. \end{enumerate} \end{Fact} \begin{Lemma} \label{onlycomplete} ${\cal S}$ is precisely the set of all $7m\times 7m$ complete, bipartite subgraphs of $G^{[m]}_{\cal T}$.
\end{Lemma} {\bf Proof.}\quad That each $B(\eta)\in{\cal S}$ is a $7m\times 7m$ complete, bipartite subgraph is clear. Conversely, fix a $7m\times 7m$ complete, bipartite subgraph $X$ of $G^{[m]}_{\cal T}$. First, suppose that $X$ contains a singleton $a$. Without loss, assume $a\in X\cap B(\eta)\cap L$. Then $E_X(a)=\{b\in X:E(a,b)\}$ has cardinality $7m$ and is contained in $B(\eta)\cap R$, hence $E_X(a)=B(\eta)\cap R$. But then $X\cap R$ contains a singleton as well, so arguing similarly, $B(\eta)\cap L=X\cap L$, so $X=B(\eta)$. It remains to show that $X$ contains a singleton. Choose $k$ maximal such that there is $\eta\in{\cal T}$ with $lg(\eta)=k$ and $X\cap B(\eta)\neq\emptyset$. Let $\nu=\eta^-$. If $X$ does not contain a singleton, then the maximality of $k$ implies that $X\cap(B(\eta')\setminus B(\nu))=\emptyset$ for all $\eta'\in Succ(\nu)$. Choose any $a\in X\cap B(\eta)\cap L$. Then $a\in B(\nu)$ and moreover, $E_X(a)\subseteq B(\nu)\cap R$. By counting, $E_X(a)=B(\nu)\cap R$, so $X$ contains a singleton, completing the proof. \medskip For clarity, let $L_0=\{R_1,R_2\}$ denote the language consisting of two binary relation symbols. Form an $L_0$-structure $({\cal S}^m_{\cal T},R_1,R_2)$ by declaring that $R_1(X,Y)$ holds if and only if $X\cap Y$ is an $m\times m$ complete, bipartite graph and $R_2(X,Y)$ holds if and only if $X\cap Y$ is a $2m\times 2m$ complete, bipartite graph. \begin{Lemma} \label{suff} For any sufficiently large $m$ and trees $({\cal T},\trianglelefteq), ({\cal T}',\trianglelefteq)$, if there is an $L_0$-isomorphism $\Phi:({\cal S}^m_{\cal T},R_1,R_2)\rightarrow ({\cal S}^m_{{\cal T}'},R_1,R_2)$ of the associated $L_0$-structures, then the composition $h:({\cal T},\trianglelefteq)\rightarrow ({\cal T}',\trianglelefteq)$ given by $h=g_{{\cal T}'}^{-1}\circ \Phi\circ g_{\cal T}$ is a tree isomorphism.
\end{Lemma} {\bf Proof.}\quad For each $n\in\omega$, let ${\cal T}_n=\{\eta\in{\cal T}:lg(\eta)<n\}$ and define ${\cal T}'_n$ analogously. Using Fact~\ref{gg}(\ref{important}), one proves by induction on $n$ that $h|_{{\cal T}_n}:({\cal T}_n,\trianglelefteq)\rightarrow ({\cal T}'_n,\trianglelefteq)$ is a tree isomorphism. This suffices to prove the Lemma. \medskip \subsection{$\aleph_0$-stable, eni-DOP theories are $\lambda$-Borel complete} \begin{Theorem} \label{eniDOPthm} If $T$ is $\aleph_0$-stable with eni-DOP, then for any infinite cardinal $\lambda$, there is a $\lambda$-Borel embedding ${\cal T}\mapsto M({\cal T})$ from subtrees of $\lambda^{<\omega}$ to $Mod_\lambda(T)$ satisfying $$({\cal T}_1,\trianglelefteq)\cong ({\cal T}_2,\trianglelefteq)\qquad\hbox{if and only if}\qquad M({\cal T}_1)\cong M({\cal T}_2)$$ \end{Theorem} {\bf Proof.}\quad Fix any infinite cardinal $\lambda$. As in Subsection~\ref{4.2}, fix an eni-DOP witness $(M_0,M_1,M_2,M_3,r)$ and a finite approximation ${\cal F}=(a,b,c,d,r_d)$ of it, choosing ${\cal F}$ to be flexible if the witness is. As notation, let $p={\rm tp}(b/a)$ and $q={\rm tp}(c/a)$. As well, for the whole of the proof, fix a recursive, fast growing sequence, $\<m_i:i\in\omega\>$ of integers, e.g., $m_0=10$ and $m_{i+1}=m_i!!$ Given a subtree ${\cal T}\subseteq \lambda^{<\omega}$, let $G^*_{\cal T}$ be the bipartite graph which is the disjoint union $\bigcup_{i\in\omega} G^{[m_i]}_{\cal T}$, where the graphs $G^{[m_i]}_{\cal T}$ are constructed as in Subsection~\ref{sec4.3}. Next, construct a model $M({\cal T}):=M_{G^*_{\cal T}}$ from the bipartite graph $G^*_{\cal T}$ as in Subsection~\ref{4.2}. Clearly, after some reasonable coding, we may assume that $M({\cal T})$ has universe $\lambda$. It is routine to verify that both of the maps ${\cal T}\mapsto G^*_{\cal T}$ and $G^*_{\cal T}\mapsto M_{G^*_{\cal T}}$ (and hence their composition) are $\lambda$-Borel. 
By looking at the constructions in Subsections~\ref{4.2} and \ref{sec4.3}, it is easily checked that isomorphic trees ${\cal T}\cong{\cal T}'$ give rise to isomorphic models $M({\cal T})\cong M({\cal T}')$. To establish the converse, suppose that ${\cal T},{\cal T}'$ are subtrees such that $M({\cal T})\cong M({\cal T}')$. Fix an isomorphism $f:M_{G^*_{{\cal T}}}\rightarrow M_{G^*_{{\cal T}'}}$ and choose $i$ so that $m_i>>\ell^*$, where $\ell^*$ is the constant in the statement of Proposition~\ref{biggy}. For each $\eta\in{\cal T}$, by Fact~\ref{gg}(1), $B^{m_i}_{\cal T}(\eta)$ is a $7m_i\times 7m_i$ complete, bipartite subgraph of $G^{[m_i]}_{\cal T}$. Let $E(\eta)$ denote the edge set of $B^{m_i}_{\cal T}(\eta)$. Now $\pi_f(E(\eta))$ is a set of $(7m_i)^2$ edges in $G^*_{{\cal T}'}$. Let $v_{{\cal T}'}(\eta)$ be the smallest set of vertices in $G^*_{{\cal T}'}$ whose edge set contains $\pi_f(E(\eta))$. By Proposition~\ref{biggy}, the graph $J(\eta):=(v_{{\cal T}'}(\eta),\pi_f(E(\eta)))$ has an almost $7m_i$-complete bipartite subgraph $K(\eta)$. Let $K^*(\eta)$ be the subgraph of $G^*_{{\cal T}'}$ whose vertex set is the same as that of $K(\eta)$. Note that the edge set of $K^*(\eta)$ contains the edge set of $K(\eta)$, so $K^*(\eta)$ is almost $7m_i$-complete as well. As $K^*(\eta)$ is a connected subgraph of $G^*_{{\cal T}'}$, $K^*(\eta)\subseteq G^{[m_j]}_{{\cal T}'}$ for some $j$. As the valence of each vertex of $K^*(\eta)$ is $\sim 7m_i$ and $m_i>>m_k$ for all $k<i$, we must have $j\ge i$. \medskip\par\noindent{\bf Claim:} $j=i$. \medskip\par {\bf Proof.}\quad Choose $\nu\in{\cal T}'$ such that $K^*(\eta)$ and $B^{m_j}(\nu)$ share a connected subgraph $D$ with $e(D)>>N_f$. Arguing as above, there is an almost $7m_j$-complete, bipartite subgraph $H^*(\nu)$ of $G^*_{\cal T}$ whose edge set (almost) contains $\pi_f^{-1}(E(\nu))$, where $E(\nu)$ is the edge set of $B^{m_j}_{{\cal T}'}(\nu)$.
As before, $H^*(\nu)\subseteq G^{[m_k]}_{{\cal T}}$ for some $k$, and as the valence of every vertex is large, $k\ge j$. However, almost all of the edges of $D$ correspond to edges of $H^*(\nu)$. In particular, $H^*(\nu)$ contains edges from $B^{m_i}_{\cal T}(\eta)$. But, as $H^*(\nu)$ is connected, this implies $H^*(\nu)\subseteq G^{[m_i]}_{\cal T}$. Thus $k=j=i$. \medskip Thus, we have shown that for each $\eta\in{\cal T}$, $K^*(\eta)$ is an almost $7m_i$-complete bipartite subgraph of $G^{[m_i]}_{{\cal T}'}$. It follows as in the proof of Lemma~\ref{onlycomplete} that for each $\eta\in{\cal T}$, there is a unique $\nu\in{\cal T}'$ such that the subgraphs $K^*(\eta)$ and $B^{m_i}_{{\cal T}'}(\nu)$ have large intersection in $G^{[m_i]}_{{\cal T}'}$. Define $$\Phi:{\cal S}^{m_i}_{\cal T}\rightarrow {\cal S}^{m_i}_{{\cal T}'}$$ by $\Phi(B^{m_i}(\eta))=B^{m_i}_{{\cal T}'}(\nu)$ for this unique $\nu$. As the argument given above is reversible, $\Phi$ is a bijection. Furthermore, if $D\subseteq B^{m_i}(\eta)$ is either an $m_i\times m_i$ or a $2m_i\times 2m_i$ complete, bipartite subgraph, then applying Proposition~\ref{biggy} to $D$ yields a connected graph $K(D)$ whose number of edges satisfies $$e(D)-N_f\le e(K(D))\le e(D)$$ By taking $D=B^{m_i}(\eta_1)\cap B^{m_i}(\eta_2)$ for various $\eta_1,\eta_2\in{\cal T}$, it follows that $\Phi$ is an $L_0$-isomorphism. Thus, by Lemma~\ref{suff}, $({\cal T},\trianglelefteq)\cong({\cal T}',\trianglelefteq)$ as required. \medskip \begin{Corollary} \label{eniDOPcor} If $T$ is $\aleph_0$-stable with eni-DOP, then $T$ is Borel complete. Moreover, for every infinite cardinal $\lambda$, $T$ is $\lambda$-Borel complete for $\equiv_{\infty,\aleph_0}$. \end{Corollary} {\bf Proof.}\quad For both statements, by Theorem~\ref{lambdaBorelcomplete}, it suffices to show that $$(\hbox{Subtrees of $\lambda^{<\omega}$},\equiv_{\infty,\aleph_0})\ \le^B_\lambda \ (Mod_\lambda(T),\equiv_{\infty,\aleph_0})$$ for every $\lambda\ge\aleph_0$.
So fix an infinite cardinal $\lambda$. The map ${\cal T}\mapsto M({\cal T})$ given in Theorem~\ref{eniDOPthm} is $\lambda$-Borel. Choose any generic filter $G$ for the Levy collapsing poset $Lev(\lambda,\aleph_0)$. By Fact~\ref{Levy}, for any subtrees ${\cal T}_1,{\cal T}_2\subseteq\lambda^{<\omega}$ in $V$, ${\cal T}_1\equiv_{\infty,\aleph_0}{\cal T}_2$ in $V$ if and only if ${\cal T}_1\cong{\cal T}_2$ in $V[G]$. As well, $M({\cal T}_1)\equiv_{\infty,\aleph_0} M({\cal T}_2)$ in $V$ if and only if $M({\cal T}_1)\cong M({\cal T}_2)$ in $V[G]$. Thus, since the mapping ${\cal T}\mapsto M({\cal T})$ is visibly absolute between $V$ and $V[G]$, the result follows immediately from Theorem~\ref{eniDOPthm}. \medskip \section{eni-NDOP and decompositions of models} In this section, we assume throughout that $T$ is $\aleph_0$-stable with eni-NDOP. [In fact, the first few Lemmas require only $\aleph_0$-stability.] We discuss three species of decompositions (regular, eni, and eni-active) of an arbitrary model $M$ and prove a theorem about each one. Theorem~\ref{regulartheorem} asserts that if ${\mathfrak d}=\<M_\eta,a_\eta:\eta\in I\>$ is a regular decomposition of $M$, then $M$ is atomic over $\bigcup_{\eta\in I} M_\eta$. This theorem plays a key role in Corollary~\ref{notop}. Next, we discuss eni-active decompositions of a model $M$ and prove that any $N\preceq M$ that contains $\bigcup_{\eta\in I} M_\eta$ is an $L_{\infty,\aleph_0}$-elementary substructure of $M$. In particular, Corollary~\ref{quotelater} states that an eni-active decomposition determines a model up to $L_{\infty,\aleph_0}$-equivalence. This is extremely important when we compute $I_{\infty,\aleph_0}(T,\kappa)$ in Section~\ref{last}. Finally, we prove Theorem~\ref{enitheorem}, which states that a model $M$ is atomic over $\bigcup_{\eta\in I} M_\eta$ for any eni decomposition of $M$ {\em provided that each of the models is maximal atomic} (see Definition~\ref{maxatomic}).
While the result sounds strong, it is of little use to us, as one has little control over what the maximal atomic submodels of an arbitrary model look like. This theorem was also proved by Koerwien~\cite{K}, but is included here to contrast with Theorems~\ref{regulartheorem} and \ref{eniactivetheorem}. \begin{Definition} {\em An {\em independent tree of models} is a collection $\{M_\eta:\eta\in I\}$ satisfying: \begin{itemize} \item $I$ is a subtree of $Ord^{<\omega}$; \item $\eta\trianglelefteq\nu$ implies $M_\eta\preceq M_\nu$; and \item For each $\eta\in I$ and $\nu\in Succ_I(\eta)$, $\fg {\bigcup_{\nu\trianglelefteq\gamma} M_\gamma} {M_\eta} {\bigcup_{\nu\not\trianglelefteq\delta}M_\delta}$ \end{itemize} } \end{Definition} In the decompositions that follow, our trees of models will have the additional property that ${\rm tp}(M_\nu/M_\eta)\perp M_{\eta^-}$ for every $\eta\neq\<\>$ and every $\nu\in Succ_I(\eta)$, but our early Lemmas do not require this property. \begin{Lemma} \label{finitetree} Suppose $\{M_\eta:\eta\in I\}$ is any independent tree of models indexed by a finite tree $(I,\trianglelefteq)$. Then the set $\bigcup_{\eta\in I} M_\eta$ is essentially finite with respect to any strong type $p$ that is orthogonal to every $M_\eta$. \end{Lemma} {\bf Proof.}\quad We argue by induction on $|I|$. For $|I|=1$, this is immediate by Lemma~\ref{basicorth}(1) (taking $A=M_{\<\>}$ and $B=\emptyset$). So assume that the Lemma holds whenever $|I|=n$ and that $\{M_\eta:\eta\in I\}$ is an independent tree of models with $|I|=n+1$. Fix any strong type $p$ that is orthogonal to every $M_\eta$. Choose any leaf $\eta\in I$ and let $J\subseteq I$ be the subtree with universe $I\setminus\{\eta\}$. By the inductive hypothesis, $\bigcup_{\nu\in J} M_\nu$ is essentially finite with respect to $p$, so the result follows by Lemma~\ref{basicorth}(2), taking $A=\bigcup_{\nu\in J} M_\nu$ and $B=M_\eta$.
\medskip \begin{Lemma} \label{anytree} Suppose $\{M_\eta:\eta\in I\}$ is any independent tree of models indexed by any tree $(I,\trianglelefteq)$ and let $N$ be any model that contains and is atomic over $\bigcup_{\eta\in I} M_\eta$. Let $p\in S(N)$ be any regular type that is not eni, but is orthogonal to every $M_\eta$, and let $c$ be any realization of $p$. Then $N\cup\{c\}$ is atomic over $\bigcup_{\eta\in I} M_\eta$. \end{Lemma} {\bf Proof.}\quad As notation, for $K\subseteq I$, we let $M_K$ denote $\bigcup_{\nu\in K} M_\nu$. It suffices to show that ${\rm tp}(Dc/M_I)$ is isolated for any finite subset $D\subseteq N$ on which $p$ is based and stationary. To see this, fix such a set $D$. As $D$ is atomic over $M_I$, we can find a finite set $E\subseteq M_I$ such that ${\rm tp}(D/E)$ is isolated and ${\rm tp}(D/E)\vdash{\rm tp}(D/M_I)$. Choose a non-empty finite subtree $J\subseteq I$ such that $E\subseteq M_J$ and choose a prime model $N_J\preceq N$ over $M_J$ that contains $D$. By Lemmas~\ref{finitetree} and \ref{atomicextension} we have that $N_J\cup\{c\}$ is atomic over $M_J$. Choose a formula $\delta(x,h)\in{\rm tp}(Dc/M_J)$ that isolates the type. Now, let $${\cal F}=\{K: J\subseteq K\subseteq I,\ K\ \hbox{is a subtree, and}\ {\rm tp}(Dc/M_K) \ \hbox{is isolated by}\ \delta(x,h)\}$$ Clearly, $J\in{\cal F}$ and, by Lemma~\ref{retain}, ${\cal F}$ is closed under unions of increasing chains. So choose a maximal element $K^*\in {\cal F}$ with respect to inclusion. To complete the proof of the Lemma, it suffices to prove that $K^*=I$. If this were not the case, then choose a $\trianglelefteq$-minimal element $\eta\in I\setminus K^*$ and let $K':=K^*\cup\{\eta\}$. As $J$ was non-empty, $\eta\neq\<\>$ and the independence of the tree yields $\fg {M_{K^*}} {M_{\eta^-}} {M_\eta}$. But then, by Lemma~\ref{retain}(2), $\delta(x,h)$ isolates ${\rm tp}(Dc/M_{K'})$, contradicting the maximality of $K^*$. Thus, $K^*=I$ and the proof is complete.
\medskip We define a plethora of decompositions. \begin{Definition} \label{decompdef} {\em Fix a model $M$. A {\em [regular, eni, eni-active] decomposition inside $M$\/} ${\mathfrak d}=\<M_\eta,a_\eta:\eta\in I\>$ consists of an independent tree $\{M_\eta:\eta\in I\}$ of elementary submodels of $M$ indexed by $(I,\trianglelefteq)$ satisfying the following conditions for each $\eta\in I$: \begin{enumerate} \item Each $a_\eta\in M_\eta$ (but $a_{\<\>}$ is meaningless); \item The set $C_\eta:=\{a_\nu:\nu\in Succ_I(\eta)\}$ is independent over $M_\eta$; \item For each $\nu\in Succ_I(\eta)$ we have: \begin{enumerate} \item ${\rm tp}(a_\nu/M_{\nu^-})$ is [regular, eni, eni-active]; \item If $\eta\neq\<\>$, then ${\rm tp}(a_\nu/M_\eta)\perp M_{\eta^-}$; \item $M_\nu$ is atomic over $M_\eta\cup\{a_\nu\}$; \end{enumerate} \end{enumerate} A {\em [regular, eni, eni-active] decomposition of $M$\/} is a [regular, eni, eni-active] decomposition inside $M$ with the additional property that for each $\eta\in I$, the set $C_\eta$ is a {\bf maximal} $M_\eta$-independent set of realizations of [regular, eni, eni-active] types (that are orthogonal to $M_{\eta^-}$ when $\eta\neq\<\>$). We say that a decomposition (of any sort) is {\em prime\/} if $M_{\<\>}$ is a prime submodel of $M$ and, for each $\nu\neq\<\>$, $M_{\nu}$ is prime over $M_{\nu^-}\cup\{a_\nu\}$. } \end{Definition} It is important to note that even though eni-NDOP implies eni-active NDOP, it is not the case that every eni-active decomposition is an eni decomposition. As well, note that if ${\mathfrak d}=\<M_\eta,a_\eta:\eta\in I\>$ is a decomposition of $M$ (in any of the senses) and $N\preceq M$ contains $\bigcup_{\eta\in I} M_\eta$, then ${\mathfrak d}$ is also a decomposition of $N$. The following Lemma requires no assumption beyond $\aleph_0$-stability. \begin{Lemma} For any $M$, prime [regular, eni, eni-active] decompositions of $M$ exist. 
\end{Lemma} {\bf Proof.}\quad Simply start with an arbitrary prime model $M_{\<\>}\preceq M$ and, given $M_\eta$, choose $C_\eta$ to be any maximal $M_\eta$-independent subset of $M$ of realizations of [regular, eni, eni-active] types (that are orthogonal to $M_{\eta^-}$ when $\eta\neq\<\>$) and, for each $a_\nu\in C_\eta$, choose $M_\nu\preceq M$ to be prime over $M_\eta\cup\{a_\nu\}$. Any maximal construction of this sort will produce a prime [regular, eni, eni-active] decomposition of $M$. \medskip Of course, without any additional assumptions, such a decomposition may be of limited utility. \begin{Lemma}[$T$ $\aleph_0$-stable with eni-NDOP] \label{overatomic} Let ${\mathfrak d}=\<M_\eta,a_\eta:\eta\in I\>$ be any regular decomposition inside ${\frak C}$ and let $N$ be atomic over $\bigcup_{\eta\in I} M_\eta$. If an eni-active regular type $p\not\perp N$, then $p\not\perp M_\eta$ for some $\eta\in I$. \end{Lemma} {\bf Proof.}\quad Recall that eni-NDOP implies eni-active NDOP by Theorem~\ref{equiv}. We first prove the Lemma for all finite index trees $(I,\trianglelefteq)$ by induction on $|I|$. To begin, if $|I|=1$, then we must have $N=M_{\<\>}$ and there is nothing to prove. Assume the Lemma holds for all trees of size $n$ and let ${\mathfrak d}=\<M_\eta,a_\eta:\eta\in I\>$ be a regular decomposition inside ${\frak C}$ indexed by $(I,\trianglelefteq)$ of size $n+1$. Let $N$ be atomic over $\bigcup_{\eta\in I} M_\eta$ and let $p$ be an eni-active type non-orthogonal to $N$. Choose a leaf $\eta\in I$ and let $J=I\setminus\{\eta\}$. If $(I,\trianglelefteq)$ were a linear order, then again $N=M_\eta$ and there is nothing to prove. If $(I,\trianglelefteq)$ is not a linear order, then choose any $N_J\preceq N$ to be prime over $\bigcup_{\nu\in J} M_\nu$. Then, by eni-active NDOP and Lemma~\ref{DOPwitness}(2), either $p\not\perp M_\eta$ or $p\not\perp N_J$. In the first case we are done, and in the second we finish by the inductive hypothesis since $|J|=n$.
Thus, we have proved the Lemma whenever the indexing tree $I$ is finite. For the general case, fix a regular decomposition ${\mathfrak d}=\<M_\eta,a_\eta:\eta\in I\>$ inside ${\frak C}$, let $N$ be atomic over $\bigcup_{\eta\in I} M_\eta$ and choose an eni-active $p\not\perp N$. By employing Fact~\ref{Fact}(2) and the fact that eni-active types are preserved under non-orthogonality, we may assume $p\in S(N)$. Choose a finite $D\subseteq N$ over which $p$ is based and stationary. As $D$ is finite and atomic over $\bigcup_{\eta\in I} M_\eta$, we can find a finite subtree $J\subseteq I$ such that $D$ is atomic over $\bigcup_{\eta\in J} M_\eta$. Fix such a $J$ and choose $M_J\preceq N$ such that $D\subseteq M_J$ and $M_J$ is prime over $\bigcup_{\eta\in J} M_\eta$. As $D\subseteq M_J$, $p\not\perp M_J$, so since $J$ is finite, the argument above implies that there is an $\eta\in J$ such that $p\not\perp M_\eta$. \medskip \begin{Theorem}[$T$ $\aleph_0$-stable with eni-NDOP] \label{regulartheorem} Suppose $\<M_\eta,a_\eta:\eta\in I\>$ is a regular decomposition of $M$. Then $M$ is atomic over $\bigcup_{\eta\in I} M_\eta$. \end{Theorem} {\bf Proof.}\quad Choose $N\preceq M$ to be maximal atomic over $\bigcup_{\eta\in I} M_\eta$. We argue that $N=M$. If this were not the case, then choose $e\in M\setminus N$ such that $p:={\rm tp}(e/N)$ is regular. We obtain a contradiction in three steps. \medskip \noindent{{\bf Claim 1:}} $p\perp M_\eta$ for all $\eta\in I$. \medskip {\bf Proof.}\quad Suppose this were not the case. Choose $\eta\in I$ $\trianglelefteq$-minimal such that $p\not\perp M_\eta$. Thus, either $\eta=\<\>$ or $p\perp M_{\eta^-}$. By Lemma~\ref{threemodel}, there is an element $c\in M$ such that ${\rm tp}(c/M_\eta)$ is regular and non-orthogonal to $p$ (hence orthogonal to $M_{\eta^-}$ if $\eta\neq\<\>$), but $\fg c {M_\eta} N$. This element $c$ contradicts the maximality of $C_\eta$ in Definition~\ref{decompdef}.
\medskip\noindent{{\bf Claim 2:}} $p$ is dull. \medskip {\bf Proof.}\quad If $p$ were eni-active, then by Lemma~\ref{overatomic} we would have $p\not\perp M_\eta$ for some $\eta\in I$, contradicting Claim 1. \medskip As $p$ is dull, it is not eni by Proposition~\ref{eninotdull}. But this, coupled with Claim~1 and Lemma~\ref{anytree}, implies that $N\cup\{e\}$ is atomic over $\bigcup_{\eta\in I} M_\eta$, which contradicts the maximality of $N$. Thus, $N=M$ and we finish. \medskip \begin{Theorem}[$T$ $\aleph_0$-stable with eni-NDOP] \label{eniactivetheorem} Suppose ${\mathfrak d}=\<M_\eta,a_\eta:\eta\in I\>$ is an eni-active decomposition of a model $M$. If $N\preceq M$ is atomic over $\bigcup_{\eta\in I} M_\eta$, then $N\preceq M$ is a dull pair. Thus, for every $N'$ satisfying $N\preceq N'\preceq M$, we have that $N$ is an $L_{\infty,\aleph_0}$-elementary substructure of $N'$ and $N'$ is an $L_{\infty,\aleph_0}$-elementary substructure of $M$. \end{Theorem} {\bf Proof.}\quad Given $M$ and ${\mathfrak d}$, choose any $N\preceq M$ atomic over $\bigcup_{\eta\in I} M_\eta$. To show that $N\preceq M$ is a dull pair, it suffices to show that there is no $e\in M\setminus N$ such that ${\rm tp}(e/N)$ is eni-active. So, by way of contradiction, assume that there were such an $e$. Let $p:={\rm tp}(e/N)$. By Lemma~\ref{overatomic}, we can choose a $\trianglelefteq$-minimal $\eta\in I$ such that $p\not\perp M_\eta$. By Lemma~\ref{threemodel}, there is $c\in M\setminus M_\eta$ such that $q:={\rm tp}(c/M_\eta)$ is non-orthogonal to $p$ and $\fg c {M_\eta} N$. As $q$ is eni-active and orthogonal to $M_{\eta^-}$ (when $\eta\neq\<\>$), the element $c$ contradicts the maximality of $C_\eta$ in Definition~\ref{decompdef}. Thus, $N\preceq M$ is a dull pair. The final sentence follows from Lemma~\ref{dullpairlemma} and Proposition~\ref{DULLchain}.
\medskip \begin{Corollary}[$T$ $\aleph_0$-stable with eni-NDOP] \label{quotelater} Suppose $\<M_\eta,a_\eta:\eta\in I\>$ is an eni-active decomposition of both $M_1$ and $M_2$. Then $M_1\equiv_{\infty,\aleph_0} M_2$. \end{Corollary} {\bf Proof.}\quad Choose any $N_1\preceq M_1$ to be prime over $\bigcup_{\eta\in I} M_\eta$. By Theorem~\ref{eniactivetheorem}, $N_1\equiv_{\infty,\aleph_0} M_1$. By the uniqueness of prime models, there is $N_2\preceq M_2$ that is both isomorphic to $N_1$ and prime over $\bigcup_{\eta\in I} M_\eta$. By Theorem~\ref{eniactivetheorem} again, $N_2\equiv_{\infty,\aleph_0} M_2$ and the result follows. \medskip The third theorem of this section involves eni decompositions of a model. Theorem~\ref{enitheorem} is of less interest to us, since when $M$ is uncountable, each of the component submodels $M_\eta$ may be uncountable as well. \begin{Definition} \label{maxatomic} {\em A decomposition $\<M_\eta,a_\eta:\eta\in I\>$ inside $M$ is {\em maximal atomic} if $M_{\<\>}$ is a maximal atomic substructure of $M$ and, for each $\nu\neq\<\>$, $M_\nu$ is maximal atomic over $M_{\nu^-}\cup\{a_\nu\}$. } \end{Definition} \begin{Theorem}[$T$ $\aleph_0$-stable with eni-NDOP] \label{enitheorem} Every model $M$ is atomic over $\bigcup_{\eta\in I} M_\eta$ for every maximal atomic, eni decomposition $\<M_\eta,a_\eta:\eta\in I\>$ of $M$. \end{Theorem} {\bf Proof.}\quad Given a maximal atomic, eni decomposition $\<M_\eta,a_\eta:\eta\in I\>$ of a model $M$, choose an enumeration $\<\eta_i:i<\alpha\>$ of $I$ such that $\eta_i\triangleleft\eta_j$ implies $i<j$. Note that $\eta_0=\<\>$. Next, define a continuous, increasing sequence $\<N_i:i\le\alpha\>$ of elementary substructures of $M$ satisfying: \begin{itemize} \item $N_0=M_{\<\>}$; \item $N_\beta=\bigcup_{i<\beta} N_i$ for every non-zero limit ordinal $\beta\le\alpha$; and \item $N_{\beta+1}\preceq M$ is maximal atomic over $N_\beta\cup M_{\eta_\beta}$ whenever $\beta<\alpha$.
\end{itemize} Using Lemma~\ref{retain}, it follows by induction on $\beta\le\alpha$ that each model $N_\beta$ is atomic over $\bigcup_{i<\beta} M_{\eta_i}$. Thus, it suffices to prove that $N_\alpha=M$. Suppose that this were not the case. Choose $e\in M\setminus N_\alpha$ such that $p:={\rm tp}(e/N_\alpha)$ is regular. Choose $i\le\alpha$ least such that $p\not\perp N_i$. By superstability, either $i=0$ or $i=\beta+1$ for some $\beta<\alpha$. We argue by cases, arriving at a contradiction in each case. \medskip \noindent{\bf Case 1:} $p\not\perp M_\eta$ for some $\eta\in I$. \medskip {\bf Proof.}\quad Fix a $\trianglelefteq$-minimal such $\eta$. By Lemma~\ref{threemodel}, there is $c\in M\setminus M_\eta$ such that $q:={\rm tp}(c/M_\eta)$ is strongly regular, non-orthogonal to $p$, and $\fg c {M_\eta} {N_\alpha}$. If $q$ were eni, then the element $c$ would contradict the maximality of $C_\eta$ in Definition~\ref{decompdef}. So assume $q$ is not eni. There are two subcases: First, if $\eta=\<\>$, then by Lemma~\ref{atomicextension} (with $A=\emptyset$) we would have $M_{\<\>}\cup\{c\}$ atomic, contradicting the maximality of $M_{\<\>}$. On the other hand, if $\eta\neq\<\>$, then $M_\eta$ would be atomic over $M_\nu\cup\{a_\eta\}$, where $\nu=\eta^-$. But then, by Lemma~\ref{basicorth}(1), we would have $M_\nu\cup\{a_\eta\}$ essentially finite with respect to $q$, hence again by Lemma~\ref{atomicextension} we would have $M_\eta\cup\{c\}$ atomic over $M_\nu\cup\{a_\eta\}$, contradicting the maximality of $M_\eta$. \medskip \medskip\noindent{\bf Case 2:} $p\perp M_\eta$ for every $\eta\in I$. \medskip {\bf Proof.}\quad In this case, $p$ cannot be dull, because if it were, then by Lemma~\ref{anytree} $N_\alpha\cup\{e\}$ would be atomic over $\bigcup_{\eta\in I} M_\eta$. So assume $p$ is eni-active. As $N_0=M_{\<\>}$ and $p\not\perp N_i$, the conditions of the Case imply that $i=\beta+1$, so $N_i$ is atomic over $N_\beta\cup M_{\eta_\beta}$. Let $\nu=\eta_\beta^-$.
As $N_\beta$ is atomic over $\bigcup_{j<\beta} M_{\eta_j}$ we have $\fg {N_\beta} {M_\nu} {M_{\eta_\beta}}$. Since $p$ is eni-active, eni-active NDOP implies that $p\not\perp N_\beta$ or $p\not\perp M_{\eta_\beta}$. The first possibility contradicts the minimality of $i$, while the second contradicts the conditions of Case 2. \medskip We close this section with an application of Theorem~\ref{regulartheorem}. The main point of the proof of Corollary~\ref{notop} is that models that are atomic over an independent tree of countable models have a large number of partial automorphisms. \begin{Corollary} \label{notop} If $T$ is $\aleph_0$-stable and eni-NDOP, then $T$ cannot have OTOP. \end{Corollary} {\bf Proof.}\quad By way of contradiction, suppose that there were a sufficiently large cardinal $\kappa$ and a model $M^*$ containing a sequence $\<(b_\alpha,c_\alpha):\alpha<\kappa\>$ and a type $p(x,y,z)$ such that for all $\alpha,\beta<\kappa$, $$ M^*\ \hbox{realizes}\ p(x,b_\alpha,c_\beta) \quad\hbox{if and only if} \quad \alpha<\beta$$ For each pair $\alpha<\beta$, fix a realization $a_{\alpha,\beta}$ of $p(x,b_\alpha,c_\beta)$. Choose a prime, regular decomposition $\<M_\eta,a_\eta:\eta\in I\>$ of $M^*$. Note that each of the models $M_\eta$ is countable. By Theorem~\ref{regulartheorem}, $M^*$ is atomic over $\bigcup_{\eta\in I} M_\eta$, so for each pair $\alpha<\beta$ we can choose a finite $e_{\alpha,\beta}$ from $\bigcup_{\eta\in I} M_\eta$ such that ${\rm tp}(a_{\alpha,\beta}/\{b_\alpha,c_\beta\}\cup\bigcup_{\eta\in I} M_\eta)$ is isolated by a formula $\theta(x,b_\alpha,c_\beta,e_{\alpha,\beta})$.
We will eventually find a pair $\alpha<\beta$ and $e^*$ from $\bigcup_{\eta\in I} M_\eta$ such that $${\rm tp}(b_\beta,c_\alpha,e^*)={\rm tp}(b_\alpha,c_\beta,e_{\alpha,\beta})$$ This immediately leads to a contradiction, as $\theta(x,b_\beta,c_\alpha,e^*)$ would be realized in $M^*$ and any realization of it also realizes $p(x,b_\beta,c_\alpha)$, contrary to our initial assumptions. We will obtain these $\alpha<\beta$ and $e^*$ by successively passing to sufficiently long subsequences, each time adding some amount of homogeneity. First, for each $\alpha$, choose a finite subtree $J_\alpha\subseteq I$ such that ${\rm tp}(b_\alpha c_\alpha/\bigcup_{\eta\in I} M_\eta)$ does not fork and is as stationary as possible over $\bigcup_{\gamma\in J_\alpha} M_\gamma$. By an argument akin to the $\Delta$-system lemma, by passing to a subsequence we may assume that there is an $\eta^*\in I$ such that $J_\alpha\cap J_\beta=\{\nu:\nu\trianglelefteq\eta^*\}$ for all $\alpha\neq\beta$. For each $\alpha$, let $M^J_\alpha$ be the countable set $\bigcup_{\gamma\in J_\alpha} M_\gamma$. As well, let $\nu_\alpha$ be the (unique) immediate successor of $\eta^*$ contained in $J_\alpha$, let $H_\alpha=\{\gamma\in I:\nu_\alpha\trianglelefteq\gamma\}$, and let $M_\alpha=\bigcup_{\gamma\in H_\alpha} M_\gamma$. Note that the sets $H_\alpha$ are pairwise disjoint and the independence of the tree implies that the sets $\{M_\alpha:\alpha\in\kappa\}$ are independent over $M_{\eta^*}$. By trimming further, we may additionally assume that the $J_\alpha$'s are pairwise tree isomorphic over $\eta^*$ and that the sets $M_\alpha$ are isomorphic over $M_{\eta^*}$. Next, for each $\alpha<\beta$, partition each sequence $e_{\alpha,\beta}$ into three subsequences $r_{\alpha,\beta}\subseteq M_\alpha$, $s_{\alpha,\beta}\subseteq M_\beta$, and $t_{\alpha,\beta}$ disjoint from $M_\alpha\cup M_\beta$.
By the Erd\"os-Rado Theorem, we can pass to a subsequence such that for all $\alpha<\beta<\gamma$ we have: \begin{itemize} \item The partitions coincide, i.e., for each $i$, the $i^{th}$ coordinate of $e_{\alpha,\beta}\in r_{\alpha,\beta}$ iff the $i^{th}$ coordinate of $e_{\beta,\gamma}\in r_{\beta,\gamma}$; \item ${\rm tp}(t_{\alpha,\beta}/M_{\eta^*})$ is constant; \item ${\rm tp}(r_{\alpha,\beta}/M^J_\alpha)$ is constant; and \item ${\rm tp}(s_{\alpha,\beta}/M^J_\beta)$ is constant. \end{itemize} Additionally, by trimming the sequence still further, we may insist that for all pairs $\alpha<\beta$, there is $r^*\in H_\beta$ such that ${\rm tp}(r_{\alpha,\beta}M^J_\alpha/M_{\eta^*})={\rm tp}(r^* M^J_\beta/M_{\eta^*})$ and there is $s^*\in H_\alpha$ such that ${\rm tp}(s_{\alpha,\beta} M^J_\beta/M_{\eta^*})={\rm tp}(s^*M^ J_\alpha/M_{\eta^*})$. Finally, fix any such $\alpha<\beta$. By independence, we have $${\rm tp}(M^J_\alpha,M^J_\beta,r_{\alpha,\beta},s_{\alpha,\beta}, t_{\alpha,\beta})= {\rm tp}(M^J_\beta, M^J_\alpha,r^*,s^*,t_{\alpha,\beta})$$ Let $e^*$ be the sequence formed from $r^*s^*t_{\alpha,\beta}$. As each of $b_\alpha$ and $c_\beta$ are dominated by $M^J_\alpha$ and $M^J_\beta$, respectively over $M_{\eta^*}$, it follows that $\fg {b_\alpha c_\beta} {M^J_\alpha M^J_\beta} {\bigcup_{\eta\in I} M_\eta}$, so ${\rm tp}(b_\alpha,c_\beta,e_{\alpha,\beta})={\rm tp}(b_\beta,c_\alpha,e^*)$, completing the proof. \medskip \section{Borel completeness of eni-NDOP, eni-deep theories} Throughout this section, we assume that $T$ is $\aleph_0$-stable with eni-NDOP, hence prime, eni-active decompositions exist for any model $N$ of $T$. We begin with a definition, which should be thought of as describing a potential `branch' of an eni-active decomposition. 
\begin{Definition} \label{CHAIN} {\em An {\em eni-active chain} is a sequence $\<M_i,a_i:i<\alpha\>$, where $2\le\alpha\le\omega$, such that each $a_i\in M_i$ and, for each $i$ with $(i+1)<\alpha$, ${\rm tp}(a_{i+1}/M_i)$ is eni-active, $\perp M_{i-1}$ (when $i>0$), and $M_{i+1}$ is prime over $M_i\cup\{a_{i+1}\}$. An eni-active chain is {\em finite} when $\alpha<\omega$. For $q$ a stationary, regular type, we say a finite chain is {\em $q$-topped\/} if $q\not\perp M_{\alpha-1}$, but $q\perp M_{\alpha-2}$. A finite chain is {\em ENI-topped} if it is $q$-topped for some ENI type $q$. } \end{Definition} \begin{Definition} {\em \label{enideepdef} An $\aleph_0$-stable, eni-NDOP theory is {\em eni-deep} if an eni-active $\omega$-chain exists. } \end{Definition} \begin{Lemma} \label{shift} Given any model $M$ and regular type $p\in S(M)$, if some stationary, regular type $q$ lies directly over $p$, then there is a $q$-topped finite chain $\<M_i,a_i:i<\alpha\>$ such that $M_0=M$ and ${\rm tp}(a_1/M_0)=p$. \end{Lemma} {\bf Proof.}\quad Choose an $\aleph_0$-saturated $N\succeq M$, $a$ realizing $p|N$, and an $\aleph_0$-prime model $N[a]$ over $N\cup\{a\}$ such that $q\not\perp N[a]$, while $q\perp N$. Choose a prime model $M(a)\preceq N[a]$ over $M\cup\{a\}$. As $q\perp N$, $q\perp M$. There are now two cases. First, if $q\not\perp M(a)$, then the two-element chain with $M_0=M$, $M_1=M(a)$, and $a_1=a$ is as desired. Second, assume that $q\perp M(a)$. Choose an eni-active decomposition $\<M_\eta:\eta\in I\>$ of $N[a]$ with $M_{\<\>}=M(a)$ such that $M_\eta$ is prime over $M_{\eta^-}\cup\{a_\eta\}$ for every $\eta\in I\setminus\{\<\>\}$. As $q\not\perp N[a]$ while $q\perp M(a)$, we can choose $\eta\neq\<\>$ minimal such that $q\not\perp M_\eta$.
As $q\perp M_{\eta^-}$, the chain $$M\preceq M_{\<\>}\preceq M_{\eta|1}\preceq\dots\preceq M_\eta$$ with $a_{\<\>}=a$ and $a_{\ell+1}=a_{\eta|\ell}$ is a $q$-topped finite chain, as required. \medskip Under the assumption of eni-NDOP, this leads to another characterization of the eni-active types. \begin{Proposition}[$T$ $\aleph_0$-stable, eni-NDOP] \label{newprop} A stationary, regular type $p$ is eni-active if and only if either $p$ is ENI or for every model $M$ such that $p\not\perp M$, there is a finite, ENI-topped chain $\<M_i,a_i:i<\alpha\>$ such that $M_0=M$ and ${\rm tp}(a_1/M_0)\not\perp p$. \end{Proposition} {\bf Proof.}\quad Let ${\mathbf P}$ denote the class of types satisfying the alleged characterization. It follows immediately from Lemma~\ref{notsat} that every type in ${\mathbf P}$ is eni-active. For the converse, ${\mathbf P}$ visibly contains the ENI types and is closed under non-orthogonality and automorphisms of the monster model. Thus, it suffices to show that if $q\in{\mathbf P}$ and $q$ lies directly over $p$, then $p\in{\mathbf P}$. To see this, choose any model $M$ such that $p\not\perp M$. Choose a regular $p'\in S(M)$ non-orthogonal to $p$. As $q$ lies directly over $p'$ as well, use Lemma~\ref{shift} to find a $q$-topped finite chain $\<M_i,a_i:i<\alpha\>$ with $M_0=M$ and ${\rm tp}(a_1/M_0)\not\perp p$. Now, if $q$ is ENI, then this chain witnesses that $p\in{\mathbf P}$. On the other hand, if $q\in{\mathbf P}$ but is not ENI, then there is a finite ENI-topped chain $\<N_j,b_j:j<\beta\>$ with $N_0=M_{\alpha-1}$ and ${\rm tp}(b_1/N_0)\not\perp q$. The concatenation of these two finite chains is an ENI-topped chain starting with $M_0=M$ and ${\rm tp}(a_1/M_0) \not\perp p$.
\medskip \begin{quotation} {\bf Until the end of the proof of Theorem~\ref{enideepthm}, fix an $\aleph_0$-stable, eni-NDOP theory that is eni-deep as witnessed by a specific eni-active $\omega$-chain $\<M_i,a_i:i\in\omega\>$.} \end{quotation} Under these hypotheses, we aim to prove Theorem~\ref{enideepthm}. By Proposition~\ref{newprop}, for each $i\in\omega$ there is an integer $k=k(i)>i$ and an ENI-topped finite chain ${\cal C}_{k}=\<N_j^{k},b_j^{k}:j\le k\>$ such that for every $j\le i$, $N^k_j=M_j$ and $b^k_j=a_j$. As notation, using Fact~\ref{Fact}(2), choose an ENI $q_k\in S(N^k_k)$ satisfying $q_k\perp N^k_{k-1}$. \medskip We will use this configuration of ENI-topped chains to code arbitrary subtrees ${\cal T}\subseteq\lambda^{<\omega}$ into models $M({\cal T})$, preserving isomorphism in both directions. The `reverse direction', i.e., showing that $M({\cal T}_1)\cong M({\cal T}_2)$ implies $({\cal T}_1,\trianglelefteq)\cong ({\cal T}_2,\trianglelefteq)$, is quite involved and uses a `black box' in the form of Theorem~6.19 of \cite{ShL}. We begin by recalling a number of definitions that appear there. As we are concerned with eni-active decompositions, we take ${\mathbf P}$ to be the class of eni-active types. As eni-active types are regular, a ${\bf P^r}$-decomposition in the notation of \cite{ShL} is precisely an eni-active decomposition. \begin{Definition} {\em Given a tree $I\subseteq Ord^{<\omega}$, a {\em large subtree} of $I$ is a non-empty subtree $J\subseteq I$ such that for each $\eta\in J$, $Succ_I(\eta)\setminus J$ is finite. We say that two trees $I_1$ and $I_2$ are {\em almost isomorphic\/} if there exist large subtrees $J_1\subseteq I_1$ and $J_2\subseteq I_2$ such that $(J_1,\trianglelefteq)\cong (J_2,\trianglelefteq)$. A tree $I$ has {\em infinite branching\/} if, for every $\eta\in I$, $Succ(\eta)$ is either infinite or empty.
If a tree $I$ has infinite branching, then for any integer $k$, we say a node $\eta\in I$ has {\em uniform depth $k$\/} if every maximal branch of $\{\nu\in I:\eta\trianglelefteq\nu\}$ has length exactly $k$. A node $\eta$ {\em often has unbounded depth\/} if, for every large subtree $J\subseteq I$ with $\eta\in J$, there is an infinite branch in $J$ containing $\eta$. Suppose $\eta\in I$ and $E_\eta$ is an equivalence relation on $Succ(\eta)$. Then $\eta$ is an {\em $(m,n)$-cusp} if there are infinite sets $A_m,A_n,B\subseteq Succ(\eta)$ such that \begin{enumerate} \item the elements of $A_m\cup A_n$ are pairwise $E_\eta$-equivalent; \item each $\delta\in A_m$ has uniform depth $m$; \item each $\rho\in A_n$ has uniform depth $n$; and \item each $\gamma\in B$ often has unbounded depth. \end{enumerate} A {\em cusp\/} is an $(m,n)$-cusp for some $m\neq n$. } \end{Definition} \begin{Definition} {\em Suppose $S\subseteq {\mathbf P}$ and ${\mathfrak d}=\<M_\eta,a_\eta:\eta\in I\>$ is a ${\mathbf P}$-decomposition. We say {\em ${\mathfrak d}$ supports $S$\/} if, for every $q\in S$ there is $\eta(q)\in \max(I)\setminus\{\<\>\}$ such that $q\not\perp M_{\eta(q)}$, but $q\perp M_{\eta(q)^-}$. If ${\mathfrak d}$ supports $S$, then we let Field$(S):=\{\eta(q):q\in S\}$ and $I^S:=\{\nu\triangleleft\eta:\eta\in {\rm Field}(S)\}$. } \end{Definition} \begin{Definition} {\em Fix a subset $S\subseteq{\mathbf P}$, a model $M$, and a function $\Phi:\omega\rightarrow\omega$. We say that an eni-active decomposition ${\mathfrak d}=\<M_\eta,a_\eta:\eta\in I\>$ of $M$ is {\em ${\mathbf P}$-finitely saturated\/} if, for every finite $A\subseteq M$ and $p\in S(A)\cap{\mathbf P}$, there is $\eta\in I$ such that ${\rm tp}(a_\eta/M_{\eta^-})\not\perp p$.
The decomposition ${\mathfrak d}$ is {\em $(S,\Phi)$-simple\/} if \begin{enumerate} \item ${\mathfrak d}$ is ${\mathbf P}$-finitely saturated; \item ${\mathfrak d}$ supports $S$ (hence $I^S$ is defined); \item For $\mu\in I$, define $E_\mu$ by $E_\mu(\eta,\nu)\Leftrightarrow {\rm tp}(a_\eta/M_\mu)={\rm tp}(a_\nu/M_\mu)$; \item for all $\eta,\nu\in I^S$ \begin{enumerate} \item if $\eta^-=\nu^-=\mu$, then $E_\mu(\eta,\nu)$; \item $Succ_{I^S}(\eta)$ is empty or infinite (hence $I^S$ has infinite branching); \item $\eta$ is either of some finite uniform depth or is a cusp; \item if $\eta$ is an $(m,n)$-cusp, then $\Phi(m-n)=\lg(\eta)$. \end{enumerate} \end{enumerate} } \end{Definition} Theorem 6.19 from \cite{ShL}, which we take as a black box, states: \begin{Theorem} \label{blackbox} Suppose $S\subseteq{\mathbf P}$, a model $M$, and a function $\Phi:\omega\rightarrow\omega$ are given. If ${\mathfrak d}_1$ and ${\mathfrak d}_2$ are both $(S,\Phi)$-simple decompositions of $M$, then the trees $I^S_1$ and $I^S_2$ are almost isomorphic. \end{Theorem} With our eye on applying Theorem~\ref{blackbox}, we massage the data we were given at the top of this section. Let ${\cal U}=\{k\in\omega:k=k(i)$ for some $i\}$. As ${\cal U}$ is infinite, by passing to an infinite subset, we may additionally assume that if $n<m$ are from ${\cal U}$, then $m>2n$. It follows from this that for all pairs $n<m$, $n'<m'$ from ${\cal U}$, $$m-n=m'-n'\qquad\hbox{if and only if}\qquad m=m'\ \hbox{and}\ n=n'$$ Next, it is routine to partition ${\cal U}$ into infinitely many infinite sets $V_i$ for which $k>i$ for every $k\in V_i$. Fix an integer $i$. An `$i$-tree' is a subtree of $\omega^{<\omega}$ with a unique `stem' $\{\<0^j\>:j<i\}$ of length $i$. 
As an example, for each $k\in V_i$, let $$I_i(k):=\{\eta\in\omega^{\le k}:\ \hbox{for all $j<i$, if $\lg(\eta)>j$, then $\eta(j)=0$}\}$$ If $I$ and $J$ are both $i$-trees (say with disjoint universes) the {\em free join of $I$ and $J$ over $i$, $I\oplus_iJ$,\/} is the $i$-tree with universe $(I\cup J)/\sim$, where for each $j<i$, the (unique) nodes of $I$ and $J$ of length $j$ are identified, and every other $\sim$-class is a singleton. To set notation, for $n<m$ from $V_i$, let $I_i(n,m):=I_i(n)\oplus_i I_i(m)$. We associate an eni-active decomposition $${\mathfrak d}(n,m):=\<N_\eta,b_\eta:\eta\in I_i(n,m)\>$$ satisfying:\begin{itemize} \item for $\lg(\eta)<i$, $N_\eta=M_{\lg(\eta)}$ and $b_\eta=a_{\lg(\eta)}$; \item letting $k(\eta)=n$ when $\eta\in I_i(n)$ and $k(\eta)=m$ when $\eta\in I_i(m)$, we have $N_\eta\cong N_{\lg(\eta)}^{k(\eta)}$ and ${\rm tp}(b_\nu/N_{\nu^-})={\rm tp}(b^{k(\nu)}_{\lg(\nu)}/N^{k(\nu)}_{\lg(\nu^-)})$. \end{itemize} In particular, as ${\mathfrak d}(n,m)$ is a decomposition, $\{N_\eta:\eta\in I_i(n,m)\}$ forms an independent tree of models. Still with $i$ fixed, choose disjoint, 4-element sets $\{n(\delta^+),m(\delta^+),n(\delta^-),m(\delta^-)\}$ from $V_i$ for each $\delta\in\omega^i$ such that $n(\delta^+)<m(\delta^+)$ and $n(\delta^-)<m(\delta^-)$. Now, for each $\delta\in\omega^{<\omega}$ (taking $i=\lg(\delta)$), let ${\rm diff}(\delta^+)=m(\delta^+)-n(\delta^+)$ and ${\rm diff}(\delta^-)=m(\delta^-)-n(\delta^-)$. It follows from our thinness conditions on ${\cal U}$ (and the disjointness of the sets $V_i$) that the set $D=\{{\rm diff}(\delta^+),{\rm diff}(\delta^-):\delta\in \omega^{<\omega}\}$ is enumerated without repetition.
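The arithmetic behind this injectivity of differences is easy to check concretely. The following Python snippet is an illustration only: the set $U$ below is a hypothetical instance of the growth condition (every element more than twice each smaller one), not the set produced by the construction.

```python
# Sanity check (illustration only): if every element of U is more than
# twice each smaller element, then the map (n, m) -> m - n is injective
# on pairs n < m from U, so the differences occur without repetition.
from itertools import combinations

U = [1, 3, 7, 15, 31, 63, 127]   # hypothetical example of such a set

# the growth condition: m > 2n for all n < m from U
assert all(m > 2 * n for n, m in combinations(U, 2))

# distinct pairs give distinct differences
diffs = [m - n for n, m in combinations(U, 2)]
assert len(diffs) == len(set(diffs))

# Why: if n < m, n' < m' and m < m', then n' < m'/2 by the growth
# condition, so m' - n' > m'/2 > m > m - n; hence equal differences
# force m = m' and then n = n'.
```

The comment at the end is exactly the argument hinted at by the thinness conditions on ${\cal U}$.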
Let $\Phi:\omega\rightarrow\omega$ be any function such that for every $\delta\in\omega^{<\omega}$, $$\Phi({\rm diff}(\delta^+))=\Phi({\rm diff}(\delta^-))=\lg(\delta)$$ To ease notation, for each $\delta\in\omega^{<\omega}$, let $I(\delta^+)=I_i(n(\delta^+),m(\delta^+))$ and ${\mathfrak d}(\delta^+)={\mathfrak d}(n(\delta^+),m(\delta^+))$, with analogous definitions for $I(\delta^-)$ and ${\mathfrak d}(\delta^-)$. Next, let $I_0:=(\lambda\times\omega)^{<\omega}$. We denote elements of $I_0$ by pairs $(\eta,\delta)$. Note that $\lg(\eta)=\lg(\delta)$ for all $(\eta,\delta)\in I_0$. Let ${\mathfrak d}_0$ denote the eni-active decomposition $\<M_{(\eta,\delta)},a_{(\eta,\delta)}:(\eta,\delta)\in I_0\>$, where $M_{(\eta,\delta)}\cong M_{\lg(\eta)}$ via a map $f_{(\eta,\delta)}$, and $f_{(\eta,\delta)}(a_{(\eta,\delta)})=a_{\lg(\eta)}$. With all of the above as a preamble, we are now ready to code subtrees of $\lambda^{<\omega}$ into models of our theory. \begin{Theorem}[$T$ $\aleph_0$-stable, eni-NDOP, eni-deep] \label{enideepthm} For any $\lambda\ge\aleph_0$, there is a $\lambda$-Borel embedding ${\cal T}\mapsto M({\cal T})$ of subtrees of $\lambda^{<\omega}$ into models of size $\lambda$ satisfying $$({\cal T}_1,\trianglelefteq)\cong ({\cal T}_2,\trianglelefteq)\qquad\hbox{if and only if}\qquad M({\cal T}_1)\cong M({\cal T}_2)$$ \end{Theorem} {\bf Proof.}\quad Fix a cardinal $\lambda\ge\aleph_0$. We describe the map ${\cal T}\mapsto M({\cal T})$. Fix a subtree ${\cal T}\subseteq\lambda^{<\omega}$. Begin by letting ${\mathfrak d}_0({\cal T})$ be the eni-active decomposition formed by beginning with the decomposition ${\mathfrak d}_0$ and simultaneously adjoining a copy of ${\mathfrak d}(\delta^+)$ to every node $(\eta,\delta)\in I_0$ for which $\eta\in {\cal T}$, as well as adjoining a copy of ${\mathfrak d}(\delta^-)$ to every node $(\eta,\delta)\in I_0$ for which $\eta\not\in {\cal T}$. Let $I_0({\cal T})$ denote the index tree of ${\mathfrak d}_0({\cal T})$.
Let $M_0({\cal T})$ be prime over $\bigcup\{N_\nu:\nu\in I_0({\cal T})\}$. For each $\nu\in\max(I_0({\cal T}))$, let $q_\nu\in S(N_\nu)$ be the ENI-type conjugate to $q_{\lg(\nu)}\in S(N_{\lg(\nu)}^{\lg(\nu)})$ and let $S=\{q_\nu:\nu\in\max(I_0({\cal T}))\}$. Because of the independence of the tree and the fact that $M_0({\cal T})$ is prime over the tree, each $q_\nu$ has finite dimension in $M_0({\cal T})$. Next, we recursively construct an elementary chain $\<M_n({\cal T}):n\in\omega\>$ and a sequence $\<{\mathfrak d}_n({\cal T}):n\in\omega\>$ as follows. We have already defined $M_0({\cal T})$ and ${\mathfrak d}_0({\cal T})$, so assume $M_n({\cal T})$ is defined and ${\mathfrak d}_n({\cal T})$ is an eni-active decomposition of $M_n({\cal T})$ extending ${\mathfrak d}_0({\cal T})$. Let $R_n$ consist of all $p\in S(M_n({\cal T}))\cap{\mathbf P}$ satisfying $p\perp S$. Let $J_n:=\{a_p:p\in R_n\}$ be an $M_n({\cal T})$-independent set of realizations of each $p\in R_n$. For each $p\in R_n$, there is a $\triangleleft$-minimal $\eta(p)\in I_n({\cal T})$ (the index tree of ${\mathfrak d}_n({\cal T})$) such that $p\not\perp N_{\eta(p)}$. Let $N_p$ be prime over $N_{\eta(p)}\cup \{a_p\}$. Let ${\mathfrak d}_{n+1}({\cal T})$ be the natural extension of ${\mathfrak d}_n({\cal T})$ formed by affixing each $N_p$ as an immediate successor of $N_{\eta(p)}$, and let $M_{n+1}({\cal T})$ be prime over the independent tree of models in ${\mathfrak d}_{n+1}({\cal T})$. Finally, let ${\mathfrak d}({\cal T}):=\bigcup_{n\in\omega} {\mathfrak d}_n({\cal T})$ and let $M({\cal T})$ be prime over ${\mathfrak d}({\cal T})$. As notation, let $I({\cal T})$ denote the index tree of ${\mathfrak d}({\cal T})$.
The following facts are easily established: \begin{enumerate} \item A type $p\in S(M({\cal T}))\cap{\mathbf P}$ has finite dimension in $M({\cal T})$ if and only if $p\not\perp S$; \item ${\mathfrak d}({\cal T})$ is ${\mathbf P}$-finitely saturated; \item ${\mathfrak d}({\cal T})$ supports $S$ and $I^S({\cal T})=I_0({\cal T})$; \item $I^S({\cal T})$ is infinitely branching; and \item for $\nu\in I^S({\cal T})$, \begin{itemize} \item $\nu$ is a cusp if and only if $\nu\in I_0$. In particular, if $\nu=(\eta,\delta)$ and $\eta\in{\cal T}$, then $\nu$ is an $(m(\delta^+),n(\delta^+))$-cusp, and if $\eta\not\in{\cal T}$, then $\nu$ is an $(m(\delta^-),n(\delta^-))$-cusp; \item if $\nu\in I_0({\cal T})\setminus I_0$, then $\nu$ is of uniform finite depth. \end{itemize} \end{enumerate} In particular, ${\mathfrak d}({\cal T})$ is an $(S,\Phi)$-simple decomposition of $M({\cal T})$. \medskip \par\noindent{{\bf Main Claim:}} If $M({\cal T}_1)\cong M({\cal T}_2)$, then $({\cal T}_1,\trianglelefteq)\cong ({\cal T}_2,\trianglelefteq)$. \medskip {\bf Proof.}\quad Suppose that $f:M({\cal T}_1)\rightarrow M({\cal T}_2)$ is an isomorphism. Then the image of ${\mathfrak d}({\cal T}_1)$ under $f$ is a decomposition of $M({\cal T}_2)$ with index tree $I({\cal T}_1)$. As well, ${\mathfrak d}({\cal T}_2)$ is also a decomposition of $M({\cal T}_2)$, with index tree $I({\cal T}_2)$. If, for $\ell=1,2$, we let $S_\ell$ denote the non-orthogonality classes of ENI types of finite dimension in $M({\cal T}_\ell)$, then as isomorphisms preserve types of finite dimension, $f(S_1)=S_2$ setwise. It follows that $f({\mathfrak d}({\cal T}_1))$ and ${\mathfrak d}({\cal T}_2)$ are both $(S_2,\Phi)$-simple decompositions of $M({\cal T}_2)$. Thus, by Theorem~\ref{blackbox}, the trees $I_0({\cal T}_1)$ and $I_0({\cal T}_2)$ are almost isomorphic. Fix large subtrees $J_\ell\subseteq I_0({\cal T}_\ell)$ and a tree isomorphism $h:J_1\rightarrow J_2$.
Note that for $\ell=1,2$, a node $\nu\in J_\ell$ has uniform depth $k$ in $J_\ell$ if and only if $\nu$ has uniform depth $k$ in $I_0({\cal T}_\ell)$. It follows that $h$ maps cusps to cusps, and more precisely, $(m,n)$-cusps to $(m,n)$-cusps. Thus, the restriction $h'$ of $h$ to $J_1\cap(\lambda\times\omega)^{<\omega}$ is a tree isomorphism mapping onto $J_2\cap(\lambda\times\omega)^{<\omega}$ that sends $(m,n)$-cusps to $(m,n)$-cusps. However, as the pairs $(m,n)$ uniquely identify $\delta\in\omega^{<\omega}$ and even $\delta^+$ and $\delta^-$, it follows that $h'(\eta,\delta)=(\eta^*,\delta)$ for every $(\eta,\delta)\in {\rm dom}(h')$. As well, if we let $$P_\ell:=\{(\eta,\delta)\in J_\ell\cap(\lambda\times\omega)^{<\omega}: \hbox{$(\eta,\delta)$ is an $(m(\delta^+),n(\delta^+))$-cusp}\}$$ then $h'$ maps $P_1$ onto $P_2$ as well. Recalling from our construction that $(\eta,\delta)\in P_\ell$ if and only if $\eta\in {\cal T}_\ell$, we have that for every $(\eta,\delta)\in{\rm dom}(h')$ $$\hbox{if $h'(\eta,\delta)=(\eta^*,\delta)$, then $\eta\in {\cal T}_1$ if and only if $\eta^*\in {\cal T}_2$.}$$ To finish, we recursively construct maps $h^*:\lambda^{<\omega}\rightarrow \lambda^{<\omega}$ and $\delta^*:\lambda^{<\omega}\rightarrow \omega^{<\omega}$ satisfying: \begin{enumerate} \item $(\eta,\delta^*(\eta))\in J_1$; \item $h^*(\eta)=\eta^*$ if and only if $h'(\eta,\delta^*(\eta))=(\eta^*, \delta^*(\eta))$; \item for all $\eta$ and all $\alpha,\alpha'\in\lambda$, $\delta^*(\eta{\char'136}\<\alpha\>)=\delta^*(\eta{\char'136}\<\alpha'\>)$; and \item for all $\eta\in \lambda^{<\omega}$, $\alpha,\beta\in\lambda$, $(\eta{\char'136}\<\alpha\>,\delta^*(\eta{\char'136}\<\alpha\>))\in J_1$ and \hfill \break $(h^*(\eta){\char'136}\<\beta\>,\delta^*(h^*(\eta){\char'136}\<\beta\>))\in J_2$. \end{enumerate} To accomplish this, first let $\delta^*(\<\>)=\<\>$. Given that $\delta^*(\eta)$ is defined, the definition of $h^*(\eta)$ is given by Clause~(2).
As $(\eta,\delta^*(\eta))\in J_1$ and since the $J_\ell$ are large subtrees of $I_0({\cal T}_\ell)$, it follows that there is $\delta'\in Succ(\delta^*(\eta))$ such that Clauses (3) and (4) hold for all $\alpha,\beta\in\lambda$. Define $\delta^*(\eta{\char'136}\<\alpha\>)=\delta'$ for every $\alpha$ and define $h^*(\eta{\char'136}\<\alpha\>)$ according to Clause~(2). It is easily checked that $h^*:\lambda^{<\omega}\rightarrow\lambda^{<\omega}$ is a tree isomorphism. Additionally, as $h'$ mapped $P_1$ onto $P_2$, it follows that the restriction of $h^*$ to ${\cal T}_1$ is a tree isomorphism between $({\cal T}_1,\trianglelefteq)$ and $({\cal T}_2,\trianglelefteq)$. \medskip \begin{Corollary} \label{enideepcor} If $T$ is $\aleph_0$-stable with eni-NDOP and is eni-deep, then $T$ is Borel complete. Moreover, for every infinite cardinal $\lambda$, $T$ is $\lambda$-Borel complete for $\equiv_{\infty,\aleph_0}$. \end{Corollary} {\bf Proof.}\quad The proof is exactly like that of Corollary~\ref{eniDOPcor}, using Theorem~\ref{enideepthm} in place of Theorem~\ref{eniDOPthm}. \medskip \section{Main gap for models of $\aleph_0$-stable theories modulo $L_{\infty,\aleph_0}$-equivalence} \label{last} In this brief section, we combine our previous results to exhibit a dichotomy among $\aleph_0$-stable theories. \begin{Definition} {\em For $T$ any theory and $\lambda$ an infinite cardinal, let $Mod_\lambda(T)$ denote the set of models of $T$ with universe $\lambda$. \begin{itemize} \item For $T$ any theory and $\lambda$ any cardinal, $I_{\infty,\aleph_0}(T,\lambda)$ denotes the maximum cardinality of any pairwise non-$\equiv_{\infty,\aleph_0}$ collection from $Mod_\lambda(T)$. \item For any $M\models T$ of size $\lambda$, the {\em Scott height of $M$}, $SH(M)$, is the least ordinal $\alpha<\lambda^+$ such that for any model $N$, $N\equiv_\alpha M$ implies $N\equiv_{\alpha+1} M$.
\end{itemize} } \end{Definition} \begin{Theorem} \label{maingap} The following conditions are equivalent for any $\aleph_0$-stable theory $T$: \begin{enumerate} \item For all infinite cardinals $\lambda$, $I_{\infty,\aleph_0}(T,\lambda)=2^\lambda$; \item For all infinite cardinals $\lambda$, $\sup\{SH(M):M\in Mod_\lambda(T)\}=\lambda^+$; \item $T$ either has eni-DOP or is eni-deep. \end{enumerate} \end{Theorem} {\bf Proof.}\quad The equivalence $(1)\Leftrightarrow(2)$ is the content of \cite{Sh11}. $(3)\Rightarrow(1):$ Fix any infinite cardinal $\lambda$. If $T$ has either of these properties, then by Corollary~\ref{eniDOPcor} or Corollary~\ref{enideepcor}, $T$ is $\lambda$-Borel complete. However, it is well known (see e.g., \cite{Sh220}) that there is a family of $2^\lambda$ pairwise non-$\equiv_{\infty,\aleph_0}$ directed graphs with universe $\lambda$. It follows immediately that $I_{\infty,\aleph_0}(T,\lambda)=2^\lambda$ in either case. $(1)\Rightarrow(3):$ Assume that $T$ is $\aleph_0$-stable, with eni-NDOP and eni-shallow (i.e., not eni-deep). Then, by Corollary~\ref{quotelater}, models of $T$ are determined up to $\equiv_{\infty,\aleph_0}$-equivalence by their prime, eni-active decompositions.
Thus, it suffices to count the number of prime, eni-active decompositions up to isomorphism.\footnote{We say that two eni-active decompositions ${\mathfrak d}_1=\<M_\eta^1,a^1_\eta:\eta\in I_1\>$ and ${\mathfrak d}_2=\<M_\eta^2,a^2_\eta:\eta\in I_2\>$ are isomorphic if there is a tree isomorphism $f:(I_1,\trianglelefteq)\cong (I_2,\trianglelefteq)$ and an elementary bijection $f^*:\bigcup_{\eta\in I_1} M^1_\eta\rightarrow\bigcup_{\eta\in I_2} M^2_\eta$ such that, for each $\eta\in I_1$, $f^*|_{M^1_\eta}$ maps $M^1_\eta$ isomorphically onto $M^2_{f(\eta)}$.} To obtain this count, first note that if $T$ is eni-shallow, then as in Theorem~X~4.4 of \cite{Shc} (which builds on VII, Section~5 of \cite{Shc}), the depth of any index tree of an eni-active decomposition is an ordinal $\beta<\omega_1$. In any prime decomposition, each of the models $M_\eta$ is countable, hence there are at most $2^{\aleph_0}$ isomorphism types. So, as a weak upper bound, if $\lambda=\aleph_\alpha$, then the number of prime, eni-active decompositions of depth $\beta$ of a model of size $\lambda$ is bounded by $\beth_{(|\alpha|+|\beta|)^+}$. [Similar counting arguments appear in Theorem~X~4.7 of \cite{Shc}.] From this, we conclude that for some cardinals $\lambda$, $I_{\infty,\aleph_0}(T,\lambda)<2^\lambda$. \medskip
\section{Appendix: overview of notation} \label{s:appendix} Since this paper features a multitude of categories, functors and natural transformations, for the reader's convenience we list these in the tables below. \bigskip \begin{center} \begin{tabular}{c} \begin{tabular}{|ll|}\hline \multicolumn{2}{|c|}{Categories} \\ \hline $\mathsf{BA}$ & section~\ref{ss:basics1} \\ $\mathsf{Boole}$ & Definition~\ref{d:Boole} \\ $\mathsf{Pres}$ & Definition~\ref{d:Prs} \\ $\mathsf{Set},\mathsf{Rel}$ & section~\ref{ss:basics1} \\ $\Boole_{\nabla}$ & Definition~\ref{d:Mossfun} \\ \hline \end{tabular} \\[20mm] \begin{tabular}{|ll|}\hline \multicolumn{2}{|c|}{Natural Transformations} \\ \hline $\mathit{Base}^{\T}: \Tom \mathrel{\dot{\rightarrow}} \Pom$ & Definition~\ref{d:base} \\ $\lambda\!^{\T}: \T\funP \mathrel{\dot{\rightarrow}} \funP\T$ & Definition~\ref{def:elementlift} \\ $\quot{}: \Tba\funU \mathrel{\dot{\rightarrow}} \mathbb{M}$ & Definition~\ref{d:quot} \\ $\delta: \mathbb{M}\funaQ \mathrel{\dot{\rightarrow}} \funaQ\T$ & Definition~\ref{d:ntrde} \\ \hline \end{tabular} \\[12mm] \end{tabular} \begin{tabular}{|ll|}\hline \multicolumn{2}{|c|}{Functors} \\ \hline $\funB: \mathsf{Pres} \to \mathsf{BA}$ & Definition~\ref{d:funB1}, \ref{d:funB2} \\ $\funC: \mathsf{BA} \to \mathsf{Pres}$ & Definition~\ref{d:funC} \\ $\funaF: \mathsf{Set} \to \mathsf{Boole}$ & page~\pageref{d:funaF} \\ $\Id,\mathit{B}_{\omega},D_{\omega}: \mathsf{Set} \to \mathsf{Set}$ & Example~\ref{ex:1} \\ $A_{M}: \mathsf{Set} \to \mathsf{Set}$ & Definition~\ref{d:Mossfun} \\ $M: \mathsf{Pres} \to \mathsf{Pres}$ & Definition~\ref{d:funpM} \\ $\mathbb{M}: \mathsf{BA} \to \mathsf{BA}$ & Definition~\ref{d:funaM} \\ $\Tba: \mathsf{Set} \to \mathsf{Set}$ & Definition~\ref{d:BAsyntax} \&~\eqref{eq:Tba} \\ $\Tnb: \mathsf{Set} \to \mathsf{Boole}/\mathsf{Set}$ & Definition~\ref{d:Tnb} \\ $\funP,\Pom: \mathsf{Set} \to \mathsf{Set}$ & section~\ref{ss:basics1} \\ $\funQ: \mathsf{Set} \to \mathsf{Set}^{\mathit{op}} $ & 
section~\ref{ss:basics1} \\ $\funaQ: \mathsf{Set} \to \mathsf{BA}^{\mathit{op}} $ & Definition~\ref{d:funaQ} \\ $\Tom:\mathsf{Set} \to \mathsf{Set}$ & page~\pageref{page:Tom} \\ $\Tomnb:\mathsf{Set} \to \mathsf{Set}$ & Definition~\ref{d:Tomnb} \\ $\funU:\mathsf{Boole} \to \mathsf{Set}$ & page~\pageref{d:funU} \\ $V:\mathsf{Alg}_{\mathsf{BA}}(\mathbb{M}) \to \mathsf{Alg}_{\mathsf{Set}}(A_{M})$ & Definition~\ref{d:funV} \\ \hline \end{tabular} \end{center} \section{Boolean algebras and their presentations} \label{s:boolean} \subsection{Boolean-type algebras} It will be convenient for us to work with a syntax for Boolean logic and Boolean algebras in which the finitary meet and join symbols, $\bigwedge$ and $\bigvee$, are the \emph{primitive} symbols for the conjunction and disjunction operations, respectively. \begin{definition} \label{d:BAsyntax} Given a set $X$, we let $\Tba(X)$ denote the set of Boolean terms/formulas over $X$, defined by the following grammar: \[ a \mathrel{\;::=\;} x \in X \mathrel{\mid} \neg a\mathrel{\mid} \mbox{$\bigvee$}\phi \mathrel{\mid} \mbox{$\bigwedge$}\phi, \] where $\phi$ is a finite set of Boolean terms. We abbreviate $\bot \mathrel{:=} \bigvee\varnothing$ and $\top \mathrel{:=} \bigwedge\varnothing$, and if no confusion is likely we will write $\Tba \mathrel{:=} \Tba(\varnothing)$. \end{definition} Observe that each $\Tba(X)$ is non-empty, always containing the elements $\top$ and $\bot$. The above definition can be brought into line with the categorical perspective of section~\ref{s:preliminaries}, as follows. \begin{definition} \label{d:Boole} We define the category $\mathsf{Boole}$ of Boolean-type algebras as the category of algebras for the functor $\mathsf{Set}\to\mathsf{Set}$, $X\mapsto X + \Pom X + \Pom X$.
A Boolean-type algebra will usually be introduced as a quadruple $\mathbb{B} = \struc{B,\neg^{\mathbb{B}},\bigwedge^{\mathbb{B}},\bigvee^{\mathbb{B}}}$, where $B$ is the \emph{carrier} of the algebra, and $\neg^{\mathbb{B}}: B \to B$, and $\bigwedge^{\mathbb{B}},\bigvee^{\mathbb{B}}: \Pom(B) \to B$ the Boolean \emph{operations}. \end{definition} Note that this perspective has built in that both conjunction and disjunction are commutative, associative and have a neutral element. We let $\funU: \mathsf{Boole} \to \mathsf{Set}$ \label{d:funU} denote the forgetful functor, and $\funaF: \mathsf{Set} \to \mathsf{Boole}$ \label{d:funaF} its left adjoint; that is, given a set $X$, $\funaF X$ denotes the absolutely free Boolean-type algebra, or Boolean term algebra, over $X$. Note that $\funaF X$ is not a Boolean algebra. Given a set $X$, observe that $\funU\funaF(X)$ consists of the set $\Tba(X)$ of all Boolean terms/formulas using the elements of $X$ as variables. In fact, we may extend $\Tba$ to the set functor $\Tba: \mathsf{Set} \to \mathsf{Set}$ given by \begin{equation} \label{eq:Tba} \Tba \mathrel{:=} \funU\funaF. \end{equation} In this way we obtain the well-known term monad for the Boolean signature with the usual unit $\eta :\Id \to \Tba$ (`variables are terms') and multiplication $\mu: \Tba \Tba \to \Tba$ (`terms built from terms are terms'). \[% \xymatrix{ \mathsf{Set} \ar@(ul,dl)_{\Tba} \ar@/^{.5cm}/[r]^\funaF & \mathsf{Boole} \ar@/^{.5cm}/[l]^U} \] \noindent In particular, for any $f:X \to \Tba Y$ there is $\wh{f}: \Tba X \to \Tba Y$ which extends $f$ and can be defined as the composition $\mu_{Y} \circ \Tba f$. Logicians will recognise $\wh{f}$ as the \emph{substitution} induced by $f$. \begin{definition}\label{d:ind_extension} Given a set $X$ and a Boolean-type algebra $\mathbb{B}$, a map $f: X \to \funU\mathbb{B}$ is called an \emph{assignment}. 
Because of the adjunction $\funaF \dashv \funU$, such an assignment has a unique extension to a $\mathsf{Boole}$-homomorphism, denoted by \[ \ti{f}: \funaF X \to \mathbb{B}. \] This map $\ti{f}$ is the \emph{meaning} function induced by $f$. \end{definition} \begin{definition} \label{d:funaQ} A Boolean-type algebra $\mathbb{B}$ is a \emph{Boolean algebra} if it satisfies the inequalities of Table~\ref{tb:clax}. We let $\funaQ: \mathsf{Set} \to \mathsf{BA}^{\mathit{op}}$ denote the contravariant power set algebra functor. That is, given a set $X$, we let $\funaQ X$ denote the power set algebra of $X$, and for a map $f: X \to Y$, the homomorphism $\funaQ f: \funaQ Y \to \funaQ X$ is provided by the map $f^{-1} = \funQ f$. \end{definition} \subsection{Presentations of Boolean algebras} \label{ss:presentations} It has become a standard tool in mathematics to define an algebraic structure by means of a \emph{presentation} by generators and relations. Usually, these definitions are given in the category-theoretic sense, and in particular do not distinguish isomorphic structures. Our proof-theoretic analysis of the logic requires us to be very precise here, and for this purpose we have developed a small piece of theory on `concrete presentations'. We want to stress the fact that whereas we only talk about Boolean algebras here, the results in this section in fact apply to a wide universal-algebraic setting. \begin{definition} \label{d:funB1} A \emdef{presentation} is a pair $\prs{G}{R}$ consisting of a set $G$ of \emdef{generators} and a set $R \subseteq \Tba(G) \times \Tba(G)$. Given such a relation $R$, let ${\equiv_{R}} \subseteq \Tba(G) \times \Tba(G)$ be the least congruence relation on the term algebra $\funaF G$ extending $R$ such that the quotient $\funaF G/_{\equiv_{R}}$ is a Boolean algebra. We say that this quotient is the Boolean algebra \emph{presented by $\prs{G}{R}$}, and denote it as $\funB\prs{G}{R}$.
Given a presentation $\prs{G}{R}$, we let \begin{equation} \label{eq:defunit} \eta_{\prs{G}{R}}: g \mapsto [g] \end{equation} define a map $\eta_{\prs{G}{R}}: G \to \funU\funB\prs{G}{R}$. \end{definition} It is straightforward to verify that $\ti{\eta}_{\prs{G}{R}}$ is the quotient morphism from $\funaF G$ to $\funB\prs{G}{R}$, with kernel $\ker(\ti{\eta}_{\prs{G}{R}}) = {\equiv_{R}}$. Relating this definition of presentations to the more usual one, first observe that a `relation' is nothing but an equation over the set of generators (but note that generators should not be seen as \emph{variables}). Accordingly, given a presentation $\prs{G}{R}$, a Boolean algebra $\mathbb{B}$, and an assignment $f: G \to \funU\mathbb{B}$, we say that a relation $(s,t) \in R$ is \emph{true} in $\mathbb{B}$ under $f$, notation: $\mathbb{B},f \models s \approx t$, if $\ti{f}(s) = \ti{f}(t)$. $\mathbb{B}$ is a \emph{model} for $R$ under $f$ if $\mathbb{B},f \models s \approx t$ for all $(s,t) \in R$. It is straightforward to verify that $\funB\prs{G}{R}$ is a model for $R$ under $\eta_{\prs{G}{R}}$. We can now formulate the following proposition, of which we omit the (straightforward) proof. \begin{prop} Let $\prs{G}{R}$ be a presentation, and let $\mathbb{B}$ be a model for $R$ under the assignment $f: G \to \funU\mathbb{B}$. Then there is a unique homomorphism $f': \funB\prs{G}{R} \to \mathbb{B}$ that extends $f$ in the sense that $f'([g]) = f(g)$. In a diagram: \begin{equation*} \xymatrix{ G \ar[drr]_{f} \ar[rr]^{\eta_{\prs{G}{R}}} && \funU\funB\prs{G}{R} \ar[d]^{f'} \\ && \funU\mathbb{B} } \end{equation*} \end{prop} The universal property of $\funB\prs{G}{R}$ expressed by the above proposition is usually taken as the definition of the Boolean algebra presented by a presentation. In order to turn the class of presentations into a category we need to define a notion of morphism between two presentations.
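To make the term syntax of Definition~\ref{d:BAsyntax} and the substitution $\wh{f}$ concrete, here is a small Python sketch. It is purely illustrative: the tuple encoding and all names (\verb|var|, \verb|subst|, \verb|compose|, the maps \verb|f| and \verb|g|) are our own, not part of the paper's formalism. Joins and meets carry a \emph{frozenset} of subterms, mirroring the functor $X\mapsto X+\Pom X+\Pom X$.

```python
# Illustrative sketch (our own encoding, not the paper's formalism) of the
# Boolean term syntax T_BA and the substitution f-hat.  A term is a nested
# tuple; joins and meets carry a frozenset of subterms, so the finite-set
# syntax is built in.

def var(x):    return ('var', x)
def neg(t):    return ('neg', t)
def join(*ts): return ('join', frozenset(ts))
def meet(*ts): return ('meet', frozenset(ts))

BOT, TOP = join(), meet()        # bot := join of the empty set, top := meet of it

def subst(f, t):
    """The substitution f-hat: extend f (generators -> terms) to all terms."""
    tag = t[0]
    if tag == 'var':
        return f(t[1])
    if tag == 'neg':
        return neg(subst(f, t[1]))
    return (tag, frozenset(subst(f, s) for s in t[1]))

def compose(g, f):
    """Kleisli-style composition of substitutions: x -> g-hat(f(x))."""
    return lambda x: subst(g, f(x))

ident = var                      # generators go to themselves as terms

# two hypothetical maps from generators to terms
f = lambda x: neg(var(x))
g = lambda x: join(var(x), BOT)

t = meet(var('a'), neg(var('b')))
assert subst(ident, t) == t                            # unit law
assert compose(g, f)('a') == neg(join(var('a'), BOT))  # (g . f)(a) = g-hat(f(a))
```

Because joins and meets carry sets rather than lists, commutativity of the syntax comes for free, and the neutral elements arise as $\bigvee\varnothing$ and $\bigwedge\varnothing$, which is the point of the functor-based definition.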
\begin{definition} A \emph{presentation morphism} from one presentation $\prs{G}{R}$ to another $\prs{G'}{R'}$ is a map $f: G \to \Tba(G')$ satisfying $\wh{f}(s) \equiv_{R'} \wh{f}(t)$ for all $s,t \in \Tba(G)$ such that $(s,t) \in R$. Given two presentation morphisms $f: \prs{G}{R} \to \prs{G'}{R'}$ and $g: \prs{G'}{R'} \to \prs{G''}{R''}$, we define their \emph{composition} $g\circ f: G \to \Tba(G'')$ as the map given by \[ g \circ f (x) := \wh{g}(f(x)), \] and the \emph{identity presentation} on $\prs{G}{R}$ as the function $\mathit{id}_{\prs{G}{R}}: G \to \Tba(G)$ mapping a generator $x \in G$ to the term $x \in \Tba{G}$. \end{definition} The verification that the above defines a category is routine. Category theorists will note that identity and composition are those of the \emph{Kleisli category} associated with the monad $\Tba$. \begin{definition} \label{d:Prs} We will let $\mathsf{Pres}$ denote the category with presentations as objects and presentation morphisms as arrows. \end{definition} We will now extend the construction $\funB$ of a Boolean algebra out of a presentation to a functor $\funB: \mathsf{Pres} \to \mathsf{BA}$, and define a functor $\funC: \mathsf{BA} \to \mathsf{Pres}$ in the opposite direction. \begin{definition} \label{d:funB2}\label{d:funC} Given a presentation morphism $f: \prs{G}{R} \to \prs{G'}{R'}$, it is easy to see that the map $\funB f: \funaF G/_{\equiv_{R}} \to \funaF G'/_{\equiv_{R'}}$ given by \[ \funB f:\; [s]_{\prs{G}{R}} \;\mapsto\; [\wh{f}(s)]_{\prs{G'}{R'}} \] is well-defined. Conversely, given a Boolean algebra $\mathbb{B}$, define its \emdef{canonical presentation} as the pair $\funC\mathbb{B} := \prs{\funU\mathbb{B}}{\Delta_{\mathbb{B}}}$. 
Here $\funU\mathbb{B}$ is the underlying set of $\mathbb{B}$, and $\Delta_{\mathbb{B}}$ is the \emdef{diagram} of $\mathbb{B}$, defined as follows: \[\begin{array}{llll} \Delta_{\mathbb{B}} &:=&& \{ (a,\neg b) \mid a,b \in \funU\mathbb{B} \mbox{ with } a = \neg^{\mathbb{B}}b \} \\ && \cup & \{ (a, \textstyle{\bigwedge} \phi) \mid \{ a \} \cup \phi \subseteq_{\omega} \funU\mathbb{B} \mbox{ with } a = \textstyle{\bigwedge}^{\mathbb{B}}\phi \} \\ && \cup & \{ (a, \textstyle{\bigvee} \phi) \mid \{ a \} \cup \phi \subseteq_{\omega} \funU\mathbb{B} \mbox{ with } a = \textstyle{\bigvee}^{\mathbb{B}}\phi \}. \end{array}\] Given a homomorphism $f: \mathbb{B} \to \mathbb{B}'$ between two Boolean algebras, we let \[ \funC f: b \mapsto f(b) \] define a map $\funC f: \funU\mathbb{B} \to \Tba(\funU\mathbb{B}')$. \end{definition} \begin{prop} \label{p:BCfun} $\funB: \mathsf{Pres} \to \mathsf{BA}$ and $\funC: \mathsf{BA} \to \mathsf{Pres}$ are functors. \end{prop} Further on we will make good use of the following definition. \begin{definition} A presentation morphism $f: \prs{G}{R} \to \prs{G'}{R'}$ is a \emph{pre-isomorphism} if there is a morphism $g: \prs{G'}{R'} \to \prs{G}{R}$ such that $\wh{g}\wh{f}(s) \equiv_{R} s$ and $\wh{f}\wh{g}(s') \equiv_{R'} s'$, for all terms $s \in \Tba G$ and $s' \in \Tba G'$. This $g$ is called a \emph{pre-inverse} of $f$. \end{definition} \begin{prop} \label{p:piBi} Let $f: \prs{G}{R} \to \prs{G'}{R'}$ be a presentation morphism. Then $f$ is a pre-isomorphism iff $\funB f$ is an isomorphism. \end{prop} \begin{proof} For the direction from left to right, let $f$ be a pre-isomorphism. We confine ourselves to proving that $\funB f$ is injective. For this purpose assume that $\funB f([s]_{\prs{G}{R}}) = \funB f([t]_{\prs{G}{R}})$. Then by definition we have $[\wh{f} s]_{\prs{G'}{R'}} = [\wh{f} t]_{\prs{G'}{R'}}$, or equivalently, $\wh{f}s \equiv_{R'} \wh{f}t$. 
From this it follows, since $g$ is a pre-inverse of $f$, that $s \equiv_{R} \wh{g}\wh{f}s \equiv_{R} \wh{g}\wh{f}t \equiv_{R} t$, and so it is immediate that $[s]_{\prs{G}{R}} = [t]_{\prs{G}{R}}$. Conversely, assume that $\funB f$ is an isomorphism between $\funB \prs{G}{R}$ and $\funB \prs{G'}{R'}$. Let $g: G' \to \Tba G$ be such that $g(x') \in (\funB f)^{-1}[x']$ for every generator $x' \in G'$. We claim that $\funB g = (\funB f)^{-1}$. To see this, note that it is straightforward to check that $\wh{g}(s') \in (\funB f)^{-1}[s']$ for every term $s' \in \Tba G'$; from this it follows that $(\funB f)^{-1} ([s']_{\prs{G'}{R'}} )= [\wh{g} s']_{\prs{G}{R}}$. In order to see that $g$ is a pre-inverse of $f$, consider an arbitrary term $s \in \Tba G$. Clearly we have $[s]_{\prs{G}{R}} = (\funB f)^{-1}(\funB f) [s]_{\prs{G}{R}}$, and so by definition and the above observation, we find $[s]_{\prs{G}{R}} = (\funB f)^{-1}[\wh{f} s]_{\prs{G'}{R'}} = [\wh{g}\wh{f} s]_{\prs{G}{R}}$. This means that $s \equiv_{R} \wh{g}\wh{f} s$, as required. Conversely, let $s'$ be an arbitrary term in $\Tba G'$. Then we have $[s']_{\prs{G'}{R'}} = (\funB f)(\funB f)^{-1}[s']_{\prs{G'}{R'}} = \funB f [\wh{g}s']_{\prs{G}{R}} = [\wh{f}\wh{g}s']_{\prs{G'}{R'}}$, or equivalently, $s' \equiv_{R'} \wh{f}\wh{g}s'$. \end{proof} The functors $\funB$ and $\funC$ are very close to forming an equivalence between the categories $\mathsf{Pres}$ and $\mathsf{BA}$. More precisely, we can formulate the following connections. Given a presentation $\prs{G}{R}$, it is not hard to verify that the insertion of generators $\eta_{\prs{G}{R}}: G \to \funU\funB\prs{G}{R}$ defined in \eqref{eq:defunit} is in fact a presentation morphism \[ \eta_{\prs{G}{R}}: \prs{G}{R} \to \funC\funB\prs{G}{R}.
\] Conversely, given a Boolean algebra $\mathbb{B}$, let $\mathit{id}_{B}$ denote the identity map on $B := \funU\mathbb{B}$, and recall that $\ti{\mathit{id}}_{B}$ denotes the unique homomorphism $\ti{\mathit{id}}_{B}: \funaF\funU\mathbb{B} \to \mathbb{B}$ extending $\mathit{id}_{B}$. It is not difficult to show that $\ti{\mathit{id}}_{B}(t(b_{1},\ldots,b_{n})) = t^{\mathbb{B}}(b_{1},\ldots,b_{n})$, and so we may think of $\ti{\mathit{id}}_{B}$ as an \emph{evaluation map}. We leave it for the reader to verify that for all $s,t \in \funaF\funU\mathbb{B}$, we have \begin{equation} s \equiv_{\funC\mathbb{B}} t \mbox{ iff } \ti{\mathit{id}}_{B}(s) = \ti{\mathit{id}}_{B}(t). \end{equation} From this it follows that the map $\epsilon_{\mathbb{B}}: \funB\funC\mathbb{B} \to \mathbb{B}$ given by putting, for any $t(b_{1},\ldots,b_{n}) \in \Tba(\funU\mathbb{B})$: \begin{equation} \label{eq:defcounit} \epsilon_{\mathbb{B}}: [t(b_{1},\ldots,b_{n})] \mapsto t^{\mathbb{B}}(b_{1},\ldots,b_{n}) \end{equation} is a well-defined homomorphism from $\funB\funC\mathbb{B}$ to $\mathbb{B}$. \begin{thm} \label{t:BCadj} The functors $\funB$ and $\funC$ form an adjoint pair $\funB \dashv \funC$, with unit $\eta: \Id_{\mathsf{Pres}} \mathrel{\dot{\rightarrow}} \funC\funB$ and counit $\epsilon: \funB\funC \mathrel{\dot{\rightarrow}} \Id_{\mathsf{BA}}$ given by \eqref{eq:defunit} and \eqref{eq:defcounit}, respectively. Furthermore, each arrow $\eta_{\prs{G}{R}}: \prs{G}{R} \to \funC\funB\prs{G}{R}$ is a pre-isomorphism, and each arrow $\epsilon_{\mathbb{B}}: \funB\funC\mathbb{B} \to \mathbb{B}$ is an isomorphism. \end{thm} \begin{proof} Let us start with showing that $\eta: \Id_{\mathsf{Pres}} \mathrel{\dot{\rightarrow}} \funC\funB$ is indeed a natural transformation. That is, given a presentation morphism $f: \prs{G}{R} \to \prs{G'}{R'}$, we have to show that the following diagram commutes.
\begin{equation*} \xymatrix{ \prs{G}{R} \ar[d]_{f} \ar[r]^{\eta_{\prs{G}{R}}} & \funC\funB\prs{G}{R} \ar[d]_{\funC\funB f} \\ \prs{G'}{R'} \ar[r]^{\eta_{\prs{G'}{R'}}} & \funC\funB\prs{G'}{R'} } \end{equation*} For this purpose it suffices to check that the two compositions, $\funC\funB f \circ \eta_{\prs{G}{R}}(x)$ and $\eta_{\prs{G'}{R'}}\circ f (x)$ agree on an arbitrary generator $x \in G$. But this is immediate: \[ \funC\funB f \circ \eta_{\prs{G}{R}}(x) = \funC\funB f [x] = [\wh{f} x] = \wh{\eta}_{\prs{G'}{R'}}(\wh{f}x) = \wh{\eta}_{\prs{G'}{R'}}(f x) = (\eta_{\prs{G'}{R'}}\circ f) (x). \] In order to prove that $\eta_{\prs{G}{R}}$ is a pre-isomorphism, let $g: \funU\funB\prs{G}{R} \to \Tba G$ be any map such that $g([s]) \in [s]$ for any element $[s] \in \funU\funB\prs{G}{R}$. It is easy to check that $g$ is a presentation morphism and that $\eta_{\prs{G}{R}}$ and $g$ are pre-inverses of each other. From this it is immediate that $\eta_{\prs{G}{R}}$ is a pre-isomorphism. Turning to the counit of the adjunction, let $f: \mathbb{B} \to \mathbb{B}'$ be a homomorphism between Boolean algebras. Let $[t(b_{1},\ldots,b_{n})]$, with each $b_{i}$ in $\mathbb{B}$, be an arbitrary element of $\funB\funC\mathbb{B}$. 
Then we compute \begin{align*} f\circ\epsilon_{\mathbb{B}}[t(b_{1},\ldots,b_{n})] &= f(t^{\mathbb{B}}(b_{1},\ldots,b_{n})) \tag*{(definition of $\epsilon$)} \\ &= t^{\mathbb{B}'}(fb_{1},\ldots,fb_{n}) \tag*{($f$ is a homomorphism)} \\ &= \epsilon_{\mathbb{B}'}[t(fb_{1},\ldots,fb_{n})] \tag*{(definition of $\epsilon$)} \\ &= \epsilon_{\mathbb{B}'}[\wh{f}(t(b_{1},\ldots,b_{n}))] \tag*{(definition of $\wh{f}$)} \\ &= \epsilon_{\mathbb{B}'}(\funB\funC f)[t(b_{1},\ldots,b_{n})] \tag*{(definition of $\funB$ and $\funC$)} \end{align*} This shows that the following diagram commutes: \begin{equation*} \xymatrix{ \funB\funC\mathbb{B} \ar[d]_{\funB\funC f} \ar[r]^{\epsilon_{\mathbb{B}}} & \mathbb{B} \ar[d]_{f} \\ \funB\funC\mathbb{B}' \ar[r]^{\epsilon_{\mathbb{B}'}} & \mathbb{B}' } \end{equation*} and thus proves that $\epsilon$ is a natural transformation. To show that $\epsilon_{\mathbb{B}}$ is an isomorphism, it suffices to check injectivity, since surjectivity is immediate: $\epsilon_{\mathbb{B}}[b] = b$ for every $b \in \funU\mathbb{B}$. By a straightforward term induction it is easy to prove that every term $t(b_{1},\ldots,b_{n})$ in $\Tba\funU\mathbb{B}$ satisfies \[ t(b_{1},\ldots,b_{n}) \equiv_{\funC\mathbb{B}} t^{\mathbb{B}}(b_{1},\ldots,b_{n}). \] Hence if $\epsilon_{\mathbb{B}}[s(a_{1},\ldots,a_{k})] = \epsilon_{\mathbb{B}}[t(b_{1},\ldots,b_{n})]$, then, since $s(a_{1},\ldots,a_{k}) \equiv_{\funC\mathbb{B}} s^{\mathbb{B}}(a_{1},\ldots,a_{k}) = t^{\mathbb{B}}(b_{1},\ldots,b_{n}) \equiv_{\funC\mathbb{B}} t(b_{1},\ldots,b_{n})$, we immediately find that $[s(a_{1},\ldots,a_{k})] = [t(b_{1},\ldots,b_{n})]$, as required.
Finally, in order to prove that $\funB \dashv \funC$, by~\cite[Theorem IV.1.2]{macl:cate98} it suffices to prove that (i) for any Boolean algebra $\mathbb{A}$, the composition \[ \funC\mathbb{A} \stackrel{\eta_{\funC\mathbb{A}}}{\longrightarrow} \funC\funB\funC\mathbb{A} \stackrel{\funC\epsilon_{\mathbb{A}}}{\longrightarrow} \funC\mathbb{A} \] is the identity on $\funC\mathbb{A}$, and that (ii) for any presentation $\prs{G}{R}$, the composition \[ \funB\prs{G}{R} \stackrel{\funB\eta_{\prs{G}{R}}}{\longrightarrow} \funB\funC\funB \prs{G}{R} \stackrel{\epsilon_{\funB\prs{G}{R}}}{\longrightarrow} \funB\prs{G}{R} \] is the identity on $\funB\prs{G}{R}$. Both of these facts can be checked by a straightforward unravelling of the definitions, which we will leave as an exercise for the reader. \end{proof} \begin{rem} What keeps $\funB$ and $\funC$ from forming an \emph{equivalence} of categories is that the unit $\eta$ is a `natural pre-isomorphism' rather than a natural isomorphism. We could remedy this by changing the notion of arrow in the category of presentations, but this would be disadvantageous in our completeness proof, when we construct a stratification of our logic. \end{rem} \begin{rem} We indicate how the present section generalises beyond Boolean algebras, as suggested by a referee. We have been working with three categories, $\mathsf{BA}$, $\mathsf{Boole}$, and $\mathsf{Pres}$. Instead of $\mathsf{Boole}$ consider a category $\cal B$ with forgetful functor $U:\cal B\to\mathsf{Set}$ and left-adjoint $F$ of $U$. Instead of $\mathsf{BA}$ consider a category $\cal A$ and a full inclusion $I:\cal A \to \cal B$ with a left-adjoint $L$ of $I$. Now, we can define a category $\mathsf{Pres}$. $\mathsf{Pres}$ has as objects pairs $\langle G, R\rangle$ where $G$ is a set and $R$ is a relation given by a pair of arrows $R\rightrightarrows UFG$, or equivalently, by $FR\rightrightarrows FG$.
A presentation morphism $f:\langle G, R\rangle \to \langle G', R'\rangle$ is then an algebra morphism $f:FG\to FG'$ such that for all $A\in\cal A$ and all $v:FG'\to IA$, if $v$ equalises $FR'\rightrightarrows FG'$ then $v\circ f$ equalises $FR\rightrightarrows FG$. The functors $B:\mathsf{Pres}\to\cal A$ and $C:\cal A\to \mathsf{Pres}$ can then be defined as above. Indeed, for $A\in\cal A$ we let the canonical presentation $CA$ be the kernel pair of the map $UFUIA\to UIA$, given by the counit of $F\dashv U$ at $IA$; and $B\langle G, R\rangle$ is given by the coequaliser of $LFR\rightrightarrows LFG$. As in Theorem~\ref{t:BCadj}, one can now show that $B\dashv C$ and that the counit $BC\to\Id$ is an iso. Moreover, the proofs do not depend on the base category $\mathsf{Set}$ and only require rather general assumptions about kernel pairs and coequalisers (which are certainly fulfilled whenever $\cal A$ and $\cal B$ are varieties, that is, classes of algebras given by operations of finite arity and equations). \end{rem} \section{Soundness and completeness} \label{s:completeness} \newcommand{\sprs}[1]{\prs{G_{#1}}{\equiv_{#1}}} \newcommand{\Lstr}[1]{\mathbb{L}_{#1}} \newcommand{\eqn}[1]{[#1]_{n}} In this section we will apply Pattinson's \emph{stratification method}~\cite{patt:coal03} in order to prove the soundness and completeness of our axiom system $\mathbf{M}$ with respect to the coalgebraic semantics. This stratification method consists in showing that not only the \emph{language} of our system, but also its \emph{semantics} and our \emph{logic} can be stratified into $\omega$ many layers. As we will see further on, the results in the previous section will then serve to glue these layers nicely together. In order to understand the idea of the proof, first assume that a final $\T$-coalgebra $\mathbb{Z} = \struc{Z, \zeta:Z\to \T Z}$ exists.
Then we could prove that the unique Moss morphism $\mathit{mng}_{\mathbb{Z}}$ from the initial Moss algebra $\mathcal{L}$ to the algebra $\mathbb{Z}^{+}$ actually factors as $\mathit{mng}_{\mathbb{Z}} = V \mathit{mng}^*_\mathbb{Z} \cof q$, where $q: \mathcal{L} \to \mathcal{M}$ is the quotient map modulo derivability (in the sense that $\ker(q)$ is the relation $\equiv_{\mathbf{M}}$ of interderivability in $\mathbf{M}$), and $\mathit{mng}^*_\mathbb{Z}$ is an \emph{injective} $\mathbb{M}$-algebra morphism from $\mathcal{M}$ to $\mathbb{Z}^{*}$: \begin{equation*} \xymatrix{ & \mathcal{L} \ar[dr]_{\mathit{mng}_{\mathbb{Z}}} \ar[r]^{q} & V\mathcal{M} \ar@{>->}[d]^{V \mathit{mng}^*_\mathbb{Z}} \\ & & V\mathbb{Z}^{*} = \mathbb{Z}^{+} } \end{equation*} On the basis of this we would prove that $a \not\sqsubseteq_{\mathbf{M}}b$ implies that $q(a) \not\leq_{\mathcal{M}} q(b)$, and so by injectivity of $\mathit{mng}^*_\mathbb{Z}$ we would conclude that $\mathit{mng}_{\mathbb{Z}}(a)\not\subseteq\mathit{mng}_{\mathbb{Z}}(b)$, providing a state $z\in Z$ such that $z\Vdash_{\mathbb{Z}} a$ and $z\not\Vdash_{\mathbb{Z}} b$. Since our set functor $\T$ generally does not admit a final coalgebra, we replace the final coalgebra with the \emph{final sequence}. \begin{definition} \label{d:finseq} The \emph{final $\T$-sequence} is defined as follows. \begin{equation} \label{dg:finseq} \xymatrix{ 1 & \T 1 \ar@{->}[l]_{h_{0}} & \T ^{2}1 \ar@{->}[l]_{h_{1}} & \ldots & \T ^{n}1 \ar@{->}[l]_{h_{n}} & \T ^{n+1}1 \ar@{->}[l]_{h_{n+1}} & \ldots } \end{equation} We denote by $1=\T^01$ the final object in $\mathsf{Set}$. The map $h_0:\T{1}\to{1}$ is given by finality and, inductively, $h_{n+1}: \T(\T^n{1})\to \T^n{1}$ is defined to be the map $\T^{n}h_{0} = \T h_n$. \end{definition} The reader may think of the $\T^n{1}$ as approximating the final coalgebra. Indeed, if we let the final sequence run through all ordinals, we obtain the final coalgebra as a limit if it exists~\cite{adam:grea95}.
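For a concrete illustration (ours, with $\T$ instantiated to the finitary powerset functor, and not needed in what follows), the first few stages of the final sequence are
\[
\T^{0}1 = 1 = \{\ast\}, \qquad \T 1 = \{\varnothing, \{\ast\}\}, \qquad \T^{2}1 = \funP_{\omega}(\T 1),
\]
so that $\T 1$ has two elements and $\T^{2}1$ has four. An element of $\T 1$ records no more than whether a state has any successor at all, while an element of $\T^{2}1$ records which of these two one-step behaviours occur among the successors of a state. The connecting map $\T h_{0}$ from $\T^{2}1$ to $\T 1$ sends a set $S \subseteq \T 1$ to its image under $h_{0}$, that is, to $\varnothing$ if $S = \varnothing$ and to $\{\ast\}$ otherwise, thus forgetting all but the one-step information.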
Intuitively, where the states of the final coalgebra provide all possible $\T$-behaviors, the elements of $\T^{n}1$ represent all `$n$-step behaviors'. Given a $\T$-coalgebra $\mathbb{X} = \struc{X,\xi}$, for each $n\in\omega$ we may canonically define a map $\xi_{n}: X\to \T^{n}1$ providing the $n$-step behavior of the states of $\mathbb{X}$. \begin{definition} Given a $\T$-coalgebra $\mathbb{X} = \struc{X,\xi}$, we define the arrows $\xi_{n}: X\to \T^{n}1$, for $n \in \omega$, as approximants of the final coalgebra, by the following induction: $\xi_0:X\to 1$ is given by finality of $1$ in $\mathsf{Set}$, and $\xi_{n+1} \mathrel{:=} \T\xi_{n}\cof\xi$. \end{definition} Interestingly, every object $\T^{n}1$ in the final sequence can be equipped with coalgebra structure. \begin{definition} \label{d:ncoalg} Let, for each $n\in\omega$, $\mathbb{Z}_{n}$ be the coalgebra \[ \mathbb{Z}_{n} \mathrel{:=} (\T^{n}1,\T^{n} g), \] where $g$ is an arbitrary but fixed map $g: 1 \to \T 1$. \end{definition} As we will see in a moment, these `$n$-final coalgebras' display all possible $n$-step behaviours, and thus act as canonical witnesses for all non-provable inequalities between formulas of depth $n$. \subsection{A stratification of the semantics} \label{ss:strat1} We first show how to slice the semantics of nabla formulas into layers. For that purpose we define the $n$-step meaning of depth-$n$ modal formulas as a subset of the set $\T^{n}1$. \begin{definition} \label{d:nstepsem} By induction on $n$ we define maps $\mathit{mng}_{n}: \mathcal{L}_{n} \to \funP\T^{n}1$.
For $n=0$, we define $\mathit{mng}_{0}$ by initiality of $\mathcal{L}_{0}$, or equivalently: \[ \mathit{mng}_{0}(a) \mathrel{:=} \left\{\begin{array}{ll} 1 & \mbox{if $a$ is a tautology,} \\\varnothing & \mbox{otherwise.} \end{array}\right.\] Inductively, assuming that $\mathit{mng}_{n}: \mathcal{L}_{n} \to \funP\T^{n}1$ has been defined, we may compose $\T\mathit{mng}_{n}: \T\mathcal{L}_{n} \to \T\funP\T^{n}1$ with $\lambda\!^{\T}_{\funP\T^{n}1}: \T\funP\T^{n}1 \to \funP\T^{n+1}1$ to obtain \begin{equation*} \lambda\!^{\T}_{\funP\T^{n}1}\cof\T\mathit{mng}_{n}: \Tom\mathcal{L}_{n} \to \funP\T^{n+1}1. \end{equation*} Then we let $\mathit{mng}_{n+1}: \mathcal{L}_{n+1} \to \funP\T^{n+1}1$ be the unique $\mathsf{Boole}$-homomorphism from $\funaF(\Tomnb\mathcal{L}_{n})$ to $\funaP\T^{n+1}1$ that extends the mapping given by \[ \nabla \alpha \mapsto \left( \lambda\!^{\T}_{\funP\T^{n}1}\cof\T\mathit{mng}_{n}(\alpha) \right) \hspace{1cm} \mbox{for $\nabla \alpha \in \Tomnb \mathcal{L}_n$.} \vspace{-18 pt}\] \end{definition}\medskip \noindent The following proposition provides a clear link between the $n$-step meaning of formulas and the $n$-step behaviour map of a coalgebra. \begin{prop} \label{p:nsem} Let $\mathbb{X}$ be a coalgebra, and $a \in \mathcal{L}_{n}$ a formula of rank $n$. Then \[ \mathit{mng}_{\mathbb{X}}(a) = (\funQ\xi_{n})(\mathit{mng}_{n}(a)). \] \end{prop} \begin{proof} The proof of the proposition is by induction on the modal depth and on the structure of the formula $a$. We only provide the induction case for $a = \nabla \alpha \in \mathcal{L}_{n+1}$ for some $n \in \omega$. 
In this case we have \begin{align*} \mathit{mng}_{\mathbb{X}} ( \nabla \alpha) & = \funQ \xi (\lambda_X (\T \mathit{mng}_{\mathbb{X}} (\alpha))) & \mbox{(definition of $\mathit{mng}_{\mathbb{X}}$)} \\&= \funQ \xi (\lambda_X (\T \funQ \xi_n (\T \mathit{mng}_n (\alpha)))) & \mbox{(induction hypothesis)} \\&= \funQ \xi \left( \funQ \T \xi_n (\lambda_{\T^n 1}(\T \mathit{mng}_n (\alpha)))\right) & \mbox{(naturality of $\lambda$)} \\&= \funQ \xi_{n+1}(\mathit{mng}_{n+1}(\nabla \alpha)) & \mbox{(definition of $\mathit{mng}_{n+1}$ and $\xi_{n+1}$)} \end{align*} \end{proof} The $n$-final coalgebra of Definition~\ref{d:ncoalg} has the interesting property that its $n$-step behaviour map is the \emph{identity} map on $\T^{n}1$. As a corollary, the $n$-step meaning of any depth-$n$ formula $a$ coincides with its meaning in the $n$-step coalgebra. \begin{prop} \label{p:ncoalg} Let $a$ be a formula of depth $n$. Then \[ \mathit{mng}_{\mathbb{Z}_{n}}(a) = \mathit{mng}_{n}(a). \] \end{prop} \begin{proof} It is not difficult to see that for the coalgebra $\mathbb{Z}_{n}$ (and for this $n$), we have \begin{equation} \label{eq:gh} (\T^{n}g)_{n} = \mathit{id}_{\T^{n}1}. \end{equation} We confine ourselves to a proof sketch. The basic idea of the proof is to prove inductively that $(\T^{n}g)_{k} = h_{nk}$ for all $k\leq n$, where $h_{nk}: \T^{n}1 \to \T^{k}1$ is the map $h_{nk}\mathrel{:=} h_{k} \cof h_{k+1} \cof \cdots \cof h_{n}$. Further details can be found in~\cite[Section 4]{patt:coal03}. The Proposition itself is immediate by Proposition~\ref{p:nsem} and \eqref{eq:gh}. \end{proof} As a fairly direct corollary to the previous two propositions we can formulate our semantic stratification theorem. Basically it states that the meaning of depth-$n$ formulas is determined at level $n$ of the final sequence, and in the $n$-step final coalgebra $\mathbb{Z}_{n}$. \begin{thm}[Semantic Stratification Theorem] \label{t:strs} Let $a,b \in \mathcal{L}_{n}$ be formulas.
Then the following are equivalent: \begin{enumerate}[\em(1)] \item $a \models_{\T} b$; \item $\mathit{mng}_{n}(a) \subseteq \mathit{mng}_{n}(b)$; \item $\mathit{mng}_{\mathbb{Z}_{n}}(a) \subseteq \mathit{mng}_{\mathbb{Z}_{n}}(b)$. \end{enumerate} \end{thm} \begin{proof} The implication $1 \Rightarrow 3$ is immediate by the definitions, while the implication $2 \Rightarrow 1$ follows by Proposition~\ref{p:nsem}: given a coalgebra $\mathbb{X} = \struc{X,\xi}$, we conclude from $\mathit{mng}_{n}(a) \subseteq \mathit{mng}_{n}(b)$ that $\mathit{mng}_{\mathbb{X}}(a) = (\funaQ\xi_{n})(\mathit{mng}_{n}(a)) \subseteq (\funaQ\xi_{n})(\mathit{mng}_{n}(b)) = \mathit{mng}_{\mathbb{X}}(b)$. The remaining implication $3 \Rightarrow 2$ follows directly by Proposition~\ref{p:ncoalg}. \end{proof} \subsection{A stratification of the logic} \label{ss:stratification} To see in detail how our \emph{logic} can be stratified, let us first introduce some terminology concerning the stratification of the language. \begin{definition} Let $G_{0} := \varnothing$, and define inductively $G_{n+1} := \Tomnb\Tba G_{n} = \{ \nabla\alpha \mid \alpha\in\Tom\Tba(G_{n}) \}$. In addition, let $e_{0}: G_{0} \to \Tba G_{1}$ be the empty map, and define $e_{n+1}: G_{n+1} \to \Tba G_{n+2}$ by putting $e_{n+1} := M e_{n}$. Finally, we let $d_{n}$ denote the inclusion $d_{n}: \mathcal{L}_{n} \hookrightarrow\mathcal{L}$. \end{definition} Recall that $\mathcal{L}_{n}$ denotes the set of formulas of rank $n$ (see Definition~\ref{d:syntax}), and observe that $\mathcal{L}_{n} = \Tba G_{n}$, for all $n$, and that each $\mathcal{L}_{n}$ is also the carrier of an algebra in $\mathsf{Boole}$; this algebra will also be denoted as $\mathcal{L}_{n}$. 
Consequently, $\mathcal{L}_{n+1}=\mathcal{L}_1(G_n)$, which is different from $\mathcal{L}_1(\mathcal{L}_n)=\mathcal{L}_1(\mathcal{L}_0(G_n))$ since in $\mathsf{Boole}$ we do not identify terms which are equivalent in the theory of Boolean algebras. Also observe that the map $\wh{e}_{n}: \Tba G_{n} \to \Tba G_{n+1}$ is in fact the embedding of $\mathcal{L}_{n}$ into $\mathcal{L}_{n+1}$: \[ \wh{e}_{n}: \mathcal{L}_{n} \hookrightarrow\mathcal{L}_{n+1}, \] and that the embedding $d_{n}: \mathcal{L}_{n} \hookrightarrow\mathcal{L}$ commutes with the one-step embeddings, in the sense that $d_{n} = d_{n+1} \cof \wh{e}_{n}$. We can now formulate our stratification theorem as follows. Recall that $\mathcal{L}$ is the initial algebra in the category $\Boole_{\nabla}$. \begin{thm}[Axiomatic Stratification Theorem] \label{t:strat} Let $m \mathrel{:=} \mathit{mng}_{V\mathcal{M}}$ be the unique homomorphism $m: \mathcal{L} \to V\mathcal{M}$ in the category of Moss algebras. \begin{enumerate}[\em(1)] \item There are maps $q_{n}: \mathcal{L}_{n} \to \mathbb{M}^{n}\mathbbm{2}$, with each $q_{n}$ a $\mathsf{Boole}$-homomorphism, such that the following diagram (in the category $\mathsf{Boole}$) commutes: \begin{equation} \label{dg:strat1} \xymatrix{ \mathcal{L}_{0} \ar[d]_{q_{0}} \ar@{^{(}->}[r]^{\wh{e}_{0}} \ar@/^{12mm}/[rrrrrrr]|{d_{0}} & \mathcal{L}_{1} \ar[d]_{q_{1}} \ar@{^{(}->}[r]^{\wh{e}_{1}} \ar@/^{10mm}/[rrrrrr]|{d_{1}} & \mathcal{L}_{2} \ar[d]_{q_{2}} \ar@/^{8mm}/[rrrrr]|{d_{2}} & \ldots & \mathcal{L}_{n} \ar[d]_{q_{n}} \ar@{^{(}->}[r]^{\wh{e}_{n}} \ar@/^{5mm}/[rrr]|{d_{n}} & \mathcal{L}_{n+1} \ar[d]_{q_{n+1}} \ar@/^{3mm}/[rr]|{d_{n+1}} & \ldots & \mathcal{L} \ar[d]_{m} \\ \mathbbm{2} \ar@{>->}[r]^{j_0} \ar@/_{12mm}/[rrrrrrr]|{i_{0}} & \mathbb{M}\mathbbm{2} \ar@{>->}[r]^{j_1} \ar@/_{10mm}/[rrrrrr]|{i_{1}} & \mathbb{M}^{2}\mathbbm{2} \ar@/_{8mm}/[rrrrr]|{i_{2}} & \ldots & \mathbb{M}^{n}\mathbbm{2} \ar@{>->}[r]^{j_n} \ar@/_{5mm}/[rrr]|{i_{n}} &
\mathbb{M}^{n+1}\mathbbm{2} \ar@/_{3mm}/[rr]|{i_{n+1}} & \ldots & \mathbb{M}^{\omega}\mathbbm{2} } \end{equation} \item In addition, $\ker(m) = {\equiv_{\mathbf{M}}}$; that is, $m(a) = m(b)$ iff $a$ and $b$ are provably equivalent in $\mathbf{M}$. \end{enumerate} \end{thm} Before turning to the proof of this result, let us briefly summarize its meaning. Most importantly, Theorem~\ref{t:strat} states that for each $n<\omega$, the Boolean algebra $\mathbb{M}^{n}\mathbbm{2}$ coincides with the quotient of the $\mathsf{Boole}$-algebra $\mathcal{L}_{n}$ under the relation $\equiv_{\mathbf{M}}$ of provable equivalence in our derivation system $\mathbf{M}$. In addition, the quotient maps $q_{n}$ commute with the inclusions $\wh{e}_{n}$ of $\mathcal{L}_{n}$ into $\mathcal{L}_{n+1}$, and $j_{n}$ from $\mathbb{M}^{n}\mathbbm{2}$ into $\mathbb{M}^{n+1}\mathbbm{2}$. In order to prove Theorem~\ref{t:strat}, we will inductively define a relation $\equiv_{n}$ of ``$n$-inter\-derivability'' between $\mathcal{L}_{n}$-formulas. We will see that for every $n$, the Boolean algebra $\Lstr{n} = \mathcal{L}_{n}/_{\equiv_{n}}$ is isomorphic to $\mathbb{M}^{n}\mathbbm{2}$, but also, that for formulas $a,b \in \mathcal{L}_{n}$, we have $a \equiv_{n} b$ iff $a \equiv_{\mathbf{M}} b$. The definition of $\equiv_{n}$ will be such that \[ \sprs{n+1} = M\sprs{n}. \] \begin{definition} Let ${\equiv_{0}} \subseteq \mathcal{L}_{0} \times \mathcal{L}_{0}$ be the relation of provable equivalence between closed Boolean terms. Inductively, define the relation ${\equiv_{n+1}} \subseteq \mathcal{L}_{n+1} \times \mathcal{L}_{n+1}$ as the congruence relation of the presentation $M \sprs{n}$, and let $\Lstr{n}$ denote the Boolean algebra $\funB \sprs{n}$, or equivalently, $\Lstr{n} = \mathcal{L}_{n}/_{\equiv_{n}}$. Given a formula $a \in \mathcal{L}_{n}$, we let $\eqn{a}$ denote the equivalence class of $a$ under the relation $\equiv_{n}$.
\end{definition} \noindent As we will see, the algebras $\Lstr{n}$ form an intermediate row in the stratification diagram (\ref{dg:strat1}) (in the category $\mathsf{Boole}$): \begin{equation} \label{dg:strat2} \xymatrix{ \mathcal{L}_{0} \ar@{>>}[d]_{\ti{\eta}_{0}} \ar@{^{(}->}[r]^{\wh{e}_{0}} & \mathcal{L}_{1} \ar@{>>}[d]_{\ti{\eta}_{1}} \ar@{^{(}->}[r]^{\wh{e}_{1}} & \mathcal{L}_{2} \ar@{>>}[d]_{\ti{\eta}_{2}} & \ldots & \mathcal{L}_{n} \ar@{>>}[d]_{\ti{\eta}_{n}} \ar@{^{(}->}[r]^{\wh{e}_{n}} & \mathcal{L}_{n+1} \ar@{>>}[d]_{\ti{\eta}_{n+1}} & \ldots \\ \Lstr{0} \ar@{>->>}[d]_{f_{0}} \ar@{>->}[r]^{\funB e_{0}} & \Lstr{1} \ar@{>->>}[d]_{f_{1}} \ar@{>->}[r]^{\funB e_{1}} & \Lstr{2} \ar@{>->>}[d]_{f_{2}} & \ldots & \Lstr{n} \ar@{>->>}[d]_{f_{n}} \ar@{>->}[r]^{\funB e_{n}} & \Lstr{n+1} \ar@{>->>}[d]_{f_{n+1}} & \ldots \\ \mathbbm{2} \ar@{>->}[r]^{j_0} & \mathbb{M}\mathbbm{2} \ar@{>->}[r]^{j_1} & \mathbb{M}^{2}\mathbbm{2} & \ldots & \mathbb{M}^{n}\mathbbm{2} \ar@{>->}[r]^{j_n} & \mathbb{M}^{n+1}\mathbbm{2} & \ldots } \end{equation} \noindent We now turn to the details of the proof of Theorem~\ref{t:strat}, step by step filling in diagram~(\ref{dg:strat2}). Since we already discussed the embeddings $\wh{e}_{n}$, $n\in\omega$, we start with the map $\ti{\eta}_{n}$, which will denote the quotient map associated with the congruence $\equiv_{n}$. \begin{definition} \label{d:nu} Let $\eta_{n}: G_{n} \to \mathcal{L}_{n}/_{\equiv_{n}}$ be the map given by $\eta_{n}: g \mapsto \eqn{g}$. \end{definition} We may see the map $\eta_{n}$ as a presentation morphism from $\sprs{n}$ to $\funC(\Lstr{n})$ --- as such it is the unit $\eta_{\sprs{n}}$ of the adjunction $\funB \dashv \funC$, and hence, a pre-isomorphism (cf.~Theorem~\ref{t:BCadj}). This function extends to a homomorphism in $\mathsf{Boole}$: \[ \ti{\eta}_{n}: \mathcal{L}_{n} \to \Lstr{n} \] which maps a formula $a \in \mathcal{L}_{n}$ to its $n$-equivalence class: \[ \ti{\eta}_{n}: a \mapsto \eqn{a}. 
\] Concerning the maps $\funB e_{n}: \Lstr{n} \to \Lstr{n+1}$, it is easy to see that they are indeed well-typed, but in order to prove that each $\funB e_{n}$ is an embedding, some work will be needed. The embeddings $j_{n}: \mathbb{M}^{n}\mathbbm{2} \to \mathbb{M}^{n+1}\mathbbm{2}$ have been defined in Definition~\ref{d:initial_sequence}. Finally, the isomorphisms $f_{n}$ of diagram~(\ref{dg:strat2}) will be defined inductively. \begin{definition} By induction on $n$ we define Boolean homomorphisms $f_{n}: \Lstr{n} \to \mathbb{M}^{n}\mathbbm{2}$. For $n=0$, we let $f_{0}$ be the (unique) isomorphism from $\Lstr{0}$ to $\mathbbm{2}$. For the inductive step from $n$ to $n+1$, we first define $p_{n+1}: \Lstr{n+1} \to \mathbb{M}\Lstr{n}$ by putting $p_{n+1} := \funBM\eta_{n}$. Then we compose the maps \[ \Lstr{n+1} \stackrel{p_{n+1}}{\longrightarrow} \mathbb{M}\Lstr{n} \stackrel{\mathbb{M} f_{n}}{\longrightarrow} \mathbb{M}^{n+1}\mathbbm{2}, \] and define $f_{n+1} := (\mathbb{M} f_{n})\cof p_{n+1}$. \end{definition} The following proposition gathers all the facts about the maps defined until now that are needed to prove that diagram~(\ref{dg:strat2}) commutes: \begin{prop} \label{l:str1}\hfill \begin{enumerate}[\em(1)] \item In the category $\mathsf{Pres}$ of presentations, each map $e_{n}$ is a morphism $e_{n}: \sprs{n} \to \sprs{n+1}$, each map $\eta_{n}: \sprs{n} \to \funC\Lstr{n}$ is a pre-isomorphism, and each of the following diagrams commutes: \begin{equation} \label{dg:tec0} \xymatrix{ \sprs{n} \ar[d]_{\eta_{n}} \ar[r]^{e_{n}} & \sprs{n+1} \ar[d]_{\eta_{n+1}} \\ \funC(\Lstr{n}) \ar[r]^{\funC\funB e_{n}} & \funC(\Lstr{n+1}) } \end{equation} \item In the category $\mathsf{Boole}$, each of the following diagrams commutes: \begin{equation} \label{dg:tec1a} \xymatrix{ \mathcal{L}_{n} \ar[d]_{\ti{\eta}_{n}} \ar[r]^{\wh{e}_{n}} & \mathcal{L}_{n+1} \ar[d]_{\ti{\eta}_{n+1}} \\ \Lstr{n} \ar[r]^{\funB e_{n}} & \Lstr{n+1} } \end{equation} \item In the category $\mathsf{BA}$ of Boolean algebras,
each map $p_{n+1}$ is an isomorphism, and each of the following diagrams commutes: \begin{equation} \label{dg:tec2} \xymatrix{ \Lstr{n+1} \ar@{>->>}[d]_{p_{n+1}} \ar[r]^{\funB e_{n+1}} & \Lstr{n+2} \ar@{>->>}[d]_{p_{n+2}} \\ \mathbb{M}(\Lstr{n}) \ar[r]^{\mathbb{M}\funB e_{n}} & \mathbb{M}(\Lstr{n+1}) } \end{equation} \item In the category $\mathsf{Boole}$, each of the following diagrams commutes: \begin{equation} \label{dg:tec3} \xymatrix{ \mathcal{L}_{n+1} \ar[d]_{\Tnb\eta_{n}} \ar[r]^{\ti{\eta}_{n+1}} & \Lstr{n+1} \ar[d]_{p_{n+1}} \\ \Tnb\funU(\Lstr{n}) \ar[r]^{\quot{\Lstr{n}}} & \mathbb{M}(\Lstr{n}) } \end{equation} with $\quot{\Lstr{n}}$ as in Definition~\ref{d:quot}. \item In the category $\mathsf{BA}$ of Boolean algebras, each map $f_{n}$ is an isomorphism; each map $\funB e_{n}: \Lstr{n} \to \Lstr{n+1}$ is an embedding; and each of the following diagrams commutes: \begin{equation} \label{dg:strat-f} \xymatrix{ \Lstr{n} \ar@{>->>}[d]_{f_{n}} \ar@{>->}[r]^{Be_{n}} & \Lstr{n+1} \ar@{>->>}[d]_{f_{n+1}} \\ \mathbb{M}^{n}\mathbbm{2} \ar@{>->}[r]^{j_{n}} & \mathbb{M}^{n+1}\mathbbm{2} } \end{equation} \end{enumerate} \end{prop} \begin{proof}\hfill \begin{enumerate}[(1)] \item It follows by a straightforward induction that every $e_{n}$ is a presentation morphism. The other statements of this item follow from the fact that $\eta_{n} = \eta_{\prs{G_{n}}{\equiv_{n}}}$, together with our earlier observation (cf.~Theorem~\ref{t:BCadj}) that $\eta: \Id_{\mathsf{Pres}} \to \funC\funB$ is a natural transformation of which each $\eta_{\prs{G}{R}}$ is a pre-isomorphism. \item We claim that if $f: \prs{G}{R} \to \prs{G'}{R'}$ is the presentation morphism represented by one of the four arrows of the diagram (\ref{dg:tec0}), then the corresponding arrow $\hat{f}$ in (\ref{dg:tec1a}) is the \emph{unique} $\mathsf{Boole}$-morphism extending $f$ (seen as a map between sets). 
For instance, if $f$ is the presentation morphism $\eta_{n}: \sprs{n} \to \funC\Lstr{n}$, then using the fact that $\mathcal{L}_{n} = \Tba G_{n}$ is the free $\mathsf{Boole}$-algebra over $G_{n}$, it follows that $\hat{f} =\ti{\eta}_{n}$ is the unique homomorphism in $\mathsf{Boole}$ from $\mathcal{L}_{n}$ to $\Lstr{n}$. Or, to give a second example, $\funB e_{n}$ is clearly the only homomorphism from $\Lstr{n}$ to $\Lstr{n+1}$ which ``extends'' $\funC\funB e_{n}: \funC\Lstr{n} \to \funC\Lstr{n+1}$. From this it follows that both $\ti{\eta}_{n+1}\cof\wh{e}_{n}$ and $\funB e_{n}\cof\ti{\eta}_{n}$ are morphisms in $\mathsf{Boole}$ that extend the map $\eta_{n+1}\cof e_{n} = \funC\funB e_{n} \cof \eta_{n}$ (with the identity holding because diagram~(\ref{dg:tec0}) commutes). But then, again by the freeness of $\mathcal{L}_{n}$ over $G_{n}$ in $\mathsf{Boole}$, these two extensions must be equal, which is the same as to say that (\ref{dg:tec1a}) commutes. \item It is easy to see that our definition of the map $p_{n+1}$ indeed provides an isomorphism, because \[ M\eta_{n}: \sprs{n+1} = M\sprs{n} \to M\funC\Lstr{n}, \] is a pre-isomorphism in $\mathsf{Pres}$, by Theorem~\ref{t:Mfun} inheriting this property from $\eta_{n}: \sprs{n} \to \funC\Lstr{n}$, and $\funB$ maps pre-isomorphisms to isomorphisms, see Proposition~\ref{p:piBi}. To prove that diagram (\ref{dg:tec2}) commutes it suffices to see that we may obtain it from diagram (\ref{dg:tec0}) by applying the functor $\funBM$. \item Recall that the family of presentation morphisms $\eta_{\prs{G}{R}}: \prs{G}{R} \to \funC\funB\prs{G}{R}$, defined by (\ref{eq:defunit}), constitutes a natural transformation $\eta: \Id_{\mathsf{Pres}} \mathrel{\dot{\rightarrow}} \funC\funB$. 
Instantiating the diagram which expresses this fact for the arrow $M\eta_{n}: M\sprs{n} \to M\funC\Lstr{n}$, we obtain the following commuting diagram: \begin{equation} \label{dg:tec4} \xymatrix{ M\sprs{n} \ar[d]_{M\eta_{n}} \ar[rr]^{\eta_{M\sprs{n}}} && \funC\funBM\sprs{n} = \funC\Lstr{n+1} \ar[d]^{\funC\funBM\eta_{n}} \\ M\funC\Lstr{n} \ar[rr]^{\eta_{M\funC\Lstr{n}}} && \funC\funBM\funC\Lstr{n} = \funC\mathbb{M}\Lstr{n} } \end{equation} Now we can, similarly as in the proof of item~2, show that each of the arrows in (\ref{dg:tec3}) is the unique morphism in $\mathsf{Boole}$ that extends the corresponding map in (\ref{dg:tec4}). For example, consider the map $\Tnb\eta_{n}: \mathcal{L}_{n+1} \to \Tnb\funU \Lstr{n}$. It follows from a straightforward unravelling of the definitions that $\Tnb\eta_{n}$ extends $M\eta_{n}$ (see Proposition~\ref{p:funpM-Tnb}). The latter, as a function between sets, is just a map from $\Tomnb\Tba G_{n} = G_{n+1}$ to the set of generators of the presentation $M\funC\Lstr{n}$, which is nothing but the set $\Tomnb\Tba\funU \Lstr{n}$. But then, again similar to the proof of item~2, we can prove that the maps $p_{n+1}\cof \ti{\eta}_{n+1}$ and $\quot{\Lstr{n}}\cof\Tnb\eta_{n}$ are identical, by noting that both are morphisms in $\mathsf{Boole}$ that extend the presentation morphism $\funC\funBM\eta_{n} \cof \eta_{M\sprs{n}} = \eta_{M\funC\Lstr{n}} \cof M\eta_{n}$ of diagram~\eqref{dg:tec4}. \item This part of the Proposition is proved by induction on $n$. For $n=0$, the map $f_{0}$ is an isomorphism by definition, and the map $\funB e_{0}$ is an embedding by initiality of $\mathbbm{2}$ in $\mathsf{BA}$. 
Finally, the following diagram commutes simply by the initiality of the algebra $\Lstr{0}$ in the category $\mathsf{BA}$: \begin{equation} \xymatrix{ \Lstr{0} \ar@{>->>}[d]_{f_{0}} \ar@{>->}[r]^{Be_{0}} & \Lstr{1} \ar[d]_{f_{1}} \\ \mathbbm{2} \ar@{>->}[r]^{j_{0}} & \mathbb{M}\mathbbm{2} } \end{equation} In the inductive case for $n+1$, by hypothesis the map $f_{n}$ is an isomorphism, and the map $\funB e_{n}$ an embedding. From this it is immediate that $\mathbb{M} f_{n}$ is an isomorphism as well, and since $p_{n+1}$ is an isomorphism by Proposition~\ref{l:str1}(3), it follows that the map $f_{n+1}$, being the composition of two isomorphisms, is itself an isomorphism. Now consider the following diagram: \begin{equation} \xymatrix{ \Lstr{n+1} \ar@/_{20mm}/[dd]_{f_{n+1}} \ar@{>->>}[d]_{p_{n+1}} \ar[r]^{Be_{n+1}} & \Lstr{n+2} \ar@/^{20mm}/[dd]^{f_{n+2}} \ar@{>->>}[d]_{p_{n+2}} \\ \mathbb{M}(\Lstr{n}) \ar@{>->>}[d]_{\mathbb{M} f_{n}} \ar@{>->}[r]^{\mathbb{M}\funB e_{n}} & \mathbb{M}(\Lstr{n+1}) \ar@{>->>}[d]_{\mathbb{M} f_{n+1}} \\ \mathbb{M}^{n+1}\mathbbm{2} \ar@{>->}[r]^{j_{n+1}} & \mathbb{M}^{n+2}\mathbbm{2} } \end{equation} The upper rectangle of this diagram commutes by Proposition~\ref{l:str1}(3), and the lower rectangle commutes because it is obtained by applying the functor $\mathbb{M}$ to the diagram (\ref{dg:strat-f}), which commutes by the inductive hypothesis. As a consequence, the outer rectangle, which exactly corresponds to the diagram (\ref{dg:strat-f}) for the case $n+1$, commutes as well. Finally, then, the injectivity of $\funB e_{n+1}$ is immediate by that of $j_{n+1}$, which was established in Lemma~\ref{p:minit}(1). \end{enumerate} \end{proof} By Proposition~\ref{l:str1} it follows that the diagram~(\ref{dg:strat2}) commutes. \medskip For future reference we state the following technical fact, which links the quotient maps $q_{n}$ and $q_{n+1}$ to the natural transformation $\quot{}$ of Definition~\ref{d:quot}, instantiated at the Boolean algebra $\mathbb{M}^{n}\mathbbm{2}$.
\begin{prop} \label{p:qqq} For any element $\alpha \in \Tom\mathcal{L}_{n}$, we have \begin{equation} \label{eq:qqq} q_{n+1}(\nabla\alpha) = \quot{\mathbb{M}^{n}\mathbbm{2}}\nabla(\T q_{n}(\alpha)). \end{equation} \end{prop} \begin{proof} To see why this proposition holds, recall that $q_{k} = f_{k}\cof \ti{\eta}_{k}$ for each $k\in\omega$, and consider the diagram below \begin{equation} \label{dg:nbhom3} \xymatrix{ \Tom\mathcal{L}_{n} \ar[d]_{\T\ti{\eta}_{n}} \ar[r]^{\nabla_{G_{n}}} & \Tnb(G_{n})=\mathcal{L}_{n+1} \ar[d]_{\Tnb\eta_{n}} \ar[r]^{\ti{\eta}_{n+1}} & \Lstr{n+1} \ar[d]_{p_{n+1}} \ar@/^{20mm}/[dd]^{f_{n+1}} \\ \Tom\funU(\Lstr{n}) \ar[d]_{\T f_{n}} \ar[r]^{\nabla_{\funU(\Lstr{n})}} & \Tnb\funU(\Lstr{n}) \ar[d]_{\Tnb f_{n}} \ar[r]^{\quot{\Lstr{n}}} & \mathbb{M}(\Lstr{n}) \ar[d]_{\mathbb{M} f_{n}} \\ \Tom\funU(\mathbb{M}^{n}\mathbbm{2}) \ar[r]^{\nabla_{\funU\mathbb{M}^{n}\mathbbm{2}}} & \Tnb\funU(\mathbb{M}^{n}\mathbbm{2}) \ar[r]^{\quot{\mathbb{M}^{n}\mathbbm{2}}} & \mathbb{M}^{n+1}\mathbbm{2} } \end{equation} \akk{where, in order to simplify the diagram, we omit the forgetful functors to $\mathsf{Set}$ on the right-hand side of the diagram and exploit our ambiguous notation allowing $\mathcal{L}_1$ to be considered as $\mathsf{Set}$-valued or $\mathsf{Boole}$-valued.} Here an arrow labelled $\nabla_{G}$ represents the function mapping an object $\alpha \in \Tom \Tba G$ to the corresponding formula $\nabla\alpha \in \Tnb(G)$. Note that in the case that $G = \funU(\Lstr{n})$ and $G =\funU\mathbb{M}^{n}\mathbbm{2}$ we use the fact that $\Tom G \subseteq \Tom\Tba G$. We claim that all squares of (\ref{dg:nbhom3}) commute. To check this for the left squares this is simply a matter of unravelling the definitions, and the upper right square has been shown to commute in Proposition~\ref{l:str1}(4). 
Finally, that the lower right square commutes is a consequence of the fact that $\quot{}$ is a natural transformation $\quot{}: \Tnb\funU \mathrel{\dot{\rightarrow}} \mathbb{M}$, cf.~Proposition~\ref{p:q}. But if indeed all squares of (\ref{dg:nbhom3}) commute, then the identity (\ref{eq:qqq}) can simply be read off from the outer sides of the diagram. \end{proof} Continuing the proof of the Stratification Theorem, what is left to do is link the algebras $\mathcal{L}$ and $\mathcal{M}$ to diagram~(\ref{dg:strat2}). We first need a proof-theoretical result stating that on formulas in $\mathcal{L}_{n}$, the notions of $n$-derivability and derivability coincide. \begin{prop} \label{p:str3} Let $a$ and $b$ be two formulas in $\mathcal{L}_{n}$. \begin{enumerate}[\em(1)] \item $a \equiv_{n} b$ iff $a \equiv_{m} b$ for some $m\in\omega$; \item $a \equiv_{n} b$ iff $a {\equiv_{\mathbf{M}}} b$. \end{enumerate} \end{prop} \begin{proof} Part~1 of the proposition is a direct consequence of diagram~(\ref{dg:strat2}) commuting. Concerning the second part, the left-to-right direction can be proved by a straightforward induction on $n$. For the opposite direction `$\Leftarrow$', it suffices to establish that for two formulas $a,b \in \mathcal{L}_{n}$ we have \begin{equation} \label{eq:str} \D: \;\; \vdash_{\nax} a \precsim b \mbox{ implies } a \sqsubseteq_{n} b, \end{equation} where we use $a \sqsubseteq_{n} b$ to denote that $a \equiv_{n} a\land b$. The proof of \eqref{eq:str} is by induction on the complexity of the derivation $\D$. We confine ourselves to the most difficult case of the inductive step, namely where the last applied rule in $\D$ is the cut rule; that is, we assume $\D$ to be of the form \[ \D:\hspace{10mm} \AXC{$\D_{1}$} \UIC{$a \precsim c$} \AXC{$\D_{2}$} \UIC{$c \precsim b$} \LL{cut} \BIC{$a \precsim b$} \DisplayProof \] (This case is the most difficult one since here we may not assume $c$ to be in $\mathcal{L}_{n}$.)
Let $m$ be such that $c \in \mathcal{L}_{m}$, and put $k := \max(m,n)$. Then inductively, we have $a \sqsubseteq_{k} c$ and $c \sqsubseteq_{k} b$, from which we easily obtain that $a \sqsubseteq_{k} b$. But then by the first part of the Proposition, we see that $a \sqsubseteq_{n} b$, as required. \end{proof} \begin{prop} \label{p:str4} The relation ${\equiv_{\mathbf{M}}} \subseteq \mathcal{L} \times \mathcal{L}$ is the kernel of the unique $\Boole_{\nabla}$-quotient map from $\mathcal{L}$ to $V\mathcal{M}$. \end{prop} \begin{proof} Define the map $q: \mathcal{L} \to \mathbb{M}^{\omega}\mathbbm{2}$ as follows. Given a formula $a \in \mathcal{L}$, there is some $n\in\omega$ such that $a \in \mathcal{L}_{n}$. Now define \[ q(a) := i_{n}q_{n}(a) \] This is well-defined by the fact that diagram~(\ref{dg:strat2}) commutes and we have $\ker(q) = {\equiv_{\mathbf{M}}}$ by Proposition~\ref{p:str3}. Then by initiality of $\mathcal{L}$ in $\Boole_{\nabla}$ it suffices to prove that $q$ is an algebraic homomorphism. For the Boolean connectives/operators this is straightforward, and so we leave this as an exercise for the reader. For the $\nabla$ modality we need to prove that the following diagram commutes: \begin{equation} \label{dg:nbhom} \xymatrix{ \Tom\mathcal{L} \ar[d]_{\T q} \ar[r]^{\nabla^{\mathcal{L}}} & \mathcal{L} \ar[d]_{q} \\ \Tom\funU \mathbb{M}^{\omega}\mathbbm{2} \ar[r]^{\nabla^{V\mathcal{M}}} & \funU\mathbb{M}^{\omega}\mathbbm{2} } \end{equation} In order to prove this, take an arbitrary element $\alpha\in\Tom(\mathcal{L})$. Without loss of generality, assume that $\alpha\in\Tom(\mathcal{L}_{n})$, so that $\nabla\alpha \in \mathcal{L}_{n+1}$. Then by definition of $q$, we have \begin{equation} \label{eq:nbhom1} (q \cof \nabla^{\mathcal{L}})(\alpha) = q(\nabla\alpha) = i_{n+1}q_{n+1}(\nabla\alpha). 
\end{equation} Computing $(\nabla^{V\mathcal{M}}\cof \T q)(\alpha)$, we first calculate \begin{eqnarray*} (\T q)(\alpha) &=& \T i_{n} \big( (\T q_{n})(\alpha) \big), \end{eqnarray*} where $(\T q_{n})(\alpha)$ belongs to the set $\Tom\funU\mathbb{M}^{n}\mathbbm{2}$. Now we claim that for all $\beta \in \Tom\funU\mathbb{M}^{n}\mathbbm{2}$: \begin{equation} \label{eq:yz1} \nabla^{V\mathcal{M}} (\T i_{n}) \beta = i_{n+1} \quot{\mathbb{M}^{n}\mathbbm{2}} (\nabla\beta), \end{equation} with $\quot{\mathbb{M}^{n}\mathbbm{2}}$ as in Definition~\ref{d:quot}. To see this, consider the following calculation: \begin{align*} \nabla^{V\mathcal{M}} (\T i_{n}) \beta &=j_{\omega}^{-1}\left(\quot{\mathbb{M}^{\omega}\mathbbm{2}}\left( \nabla\left(\T i_{n}\right) (\beta) \right) \right) &\text{(Remark~\ref{r:VMinit})} \\&=j_{\omega}^{-1}\left(\quot{\mathbb{M}^{\omega}\mathbbm{2}}\left( \left(\Tomnb i_{n}\right) (\nabla\beta) \right) \right) &\text{(definition of $\Tomnb$)} \\&=j_{\omega}^{-1}\left(\quot{\mathbb{M}^{\omega}\mathbbm{2}}\left( \left(\Tnb \funU i_{n}\right) (\nabla\beta) \right) \right) &\text{($\Tnb \funU i_{n} \rst{\Tomnb\funU\mathbb{M}^{n}\mathbbm{2}} = \Tomnb i_{n}$)} \\&=j_{\omega}^{-1} \left(\left(\mathbb{M} i_{n} \circ \quot{\mathbb{M}^{n}\mathbbm{2}}\right) (\nabla\beta) \right) &\text{(naturality of $\quot{}$)} \\&= i_{n+1}\quot{\mathbb{M}^{n}\mathbbm{2}} (\nabla\beta) &\text{(\dag)} \end{align*} where the last equality (\dag) follows by Proposition~\ref{p:minit}(5). And so we obtain that \begin{equation} \label{eq:nbhom2} (\nabla^{V\mathcal{M}}\cof\T q)(\alpha) = i_{n+1}\quot{\mathbb{M}^{n}\mathbbm{2}} (\nabla(\T q_{n})(\alpha)) \end{equation} Thus in order to prove the commutativity of (\ref{dg:nbhom}), by (\ref{eq:nbhom1}) and (\ref{eq:nbhom2}) it suffices to prove that \begin{equation} \label{eq:nbhom3} q_{n+1}(\nabla\alpha) = \quot{\mathbb{M}^{n}\mathbbm{2}} (\nabla(\T q_{n})(\alpha)). 
\end{equation} But this is precisely the content of Proposition~\ref{p:qqq}. \end{proof} We can now prove the Stratification Theorem. \begin{proofof}{Theorem~\ref{t:strat}} Given the Propositions~\ref{l:str1}, \ref{p:str3} and~\ref{p:str4}, all that is left to do is prove that the following diagram commutes for each $n \in \omega$: \begin{equation} \label{dg:stratfin} \xymatrix{ & \mathcal{L}_{n} \ar@{>>}[d]_{q_{n}} \ar@{^{(}->}[r]^{d_{n}} & \mathcal{L} \ar@{>>}[d]^{m} \\ & \mathbb{M}^{n}\mathbbm{2} \ar@{>->}[r]_{i_n} & V\mathcal{M} } \end{equation} We already saw in the proof of Proposition~\ref{p:str4} that the map $q: \mathcal{L} \to \mathbb{M}^{\omega}\mathbbm{2}$, defined by putting, for $a \in \mathcal{L}_{n}$, \[ q(a) \mathrel{:=} i_{n}(q_{n}(a)), \] is the unique Moss homomorphism from $\mathcal{L}$ to $V\mathcal{M}$; in other words, this map $q$ coincides with $m$. Reformulating this in terms that make explicit the role of the inclusion map $d_{n}: \mathcal{L}_{n} \hookrightarrow \mathcal{L}$, we obtain that $m(d_{n}(a)) = q(d_{n}(a)) = i_{n}(q_{n}(a))$. That is, the diagram \eqref{dg:stratfin} indeed commutes. \end{proofof} As a corollary we obtain that the algebra $V\mathcal{M}$ is the initial algebra in the class of Moss algebras that satisfy the nabla-equations. This means that we may see $\mathcal{M}$ as the \emph{Lindenbaum-Tarski} algebra of our logic. \begin{cor} \label{c:Minit} Let $\mathbb{B} = \struc{B,\neg^{\mathbb{B}},\bigwedge^{\mathbb{B}},\bigvee^{\mathbb{B}},\nabla^{\mathbb{B}}}$ be a Moss algebra such that $\mathbb{B}$ validates every instance of the axioms $(\nb1)$ -- $(\nb3)$.
Then there is a unique morphism $\mathit{mng}^{*}_{\mathbb{B}}: V\mathcal{M} \to \mathbb{B}$ through which the meaning function $\mathit{mng}_{\mathbb{B}}$ factors: \begin{equation*} \xymatrix{ & \mathcal{L} \ar[dr]_{\mathit{mng}_{\mathbb{B}}} \ar[r]^{m} & V\mathcal{M} \ar[d]^{\mathit{mng}^{*}_{\mathbb{B}}} \\ & & \mathbb{B} } \end{equation*} \end{cor} \begin{proof} An arbitrary element of (the carrier of) $V\mathcal{M}$ is of the form $m(a)$ for some formula $a \in \mathcal{L}$. We leave it as an exercise for the reader to verify that the following map \[ \mathit{mng}^{*}_{\mathbb{B}}(m(a)) \mathrel{:=} \mathit{mng}_{\mathbb{B}}(a) \] is well-defined and has the right properties. \end{proof} \begin{rem} In fact, we can show that the functor $V$ constitutes an \emph{isomorphism} between the category $\mathsf{Coalg}_{\mathsf{BA}}(\mathbb{M})$ and the variety of Moss algebras validating the nabla axioms. We omit the details of this proof. \end{rem} \subsection{Proof of soundness and completeness} We are almost ready to prove our main result. What is left to do is link the final $\T$-sequence to the initial $\mathbb{M}$-sequence. Recall that the elements of $\T^{n}1$ intuitively correspond to the $n$-behaviors associated with $\T$, and that $\mathcal{M}$, the initial $\mathbb{M}$-algebra, is the colimit of the initial sequence $\struc{\mathbb{M}^n\mathbbm{2},j_{n}}_{n<\omega}$, where elements of $\mathbb{M}^n\mathbbm{2}$ correspond to (equivalence classes of) formulas of depth $n$. \begin{definition} We define the sequence of maps $s_n:\mathbb{M}^n\mathbbm{2}\to \funaQ\T^n 1$ as follows. The map $s_0: \mathbbm{2} \to \funaQ 1$ is given by initiality (and is actually the identity). 
For the definition of $s_{n+1}$, recall from Definition~\ref{d:ntrde} that $\delta_{\T^{n}1}: \mathbb{M}\funaQ \T^{n}1 \to \funaQ \T^{n+1} 1$, and assume inductively that $s_{n}: \mathbb{M}^{n}\mathbbm{2} \to \funaQ \T^{n}1$ has been defined, so that $\mathbb{M} s_{n} : \mathbb{M}^{n+1}\mathbbm{2} \to \mathbb{M}\funaQ \T^{n}1$. Composing these two maps, we obtain $s_{n+1} := \delta_{\T^n 1} \cof\mathbb{M}(s_n)$. \end{definition} Intuitively, the reader may think of the map $s_{n}$ as providing the semantics of elements of $\mathbb{M}^{n}\mathbbm{2}$. This can be made more precise by proving that the following diagram commutes: \begin{equation*} \xymatrix{ \mathcal{L}_{n} \ar[dr]_{\mathit{mng}_{n}} \ar[r]^{q_{n}} & \mathbb{M}^{n}\mathbbm{2} \ar[d]^{s_{n}} \\ & \funaQ\T^{n}1 } \end{equation*} Here $q_{n}$ is the quotient map under $n$-step derivability of Theorem~\ref{t:strat} and $\mathit{mng}_{n}$ is the $n$-step meaning function of Definition~\ref{d:nstepsem}. From this perspective, the following proposition states that the semantics of a formula with respect to the final sequence is independent of the particular approximant we choose. \begin{prop} \label{p:fin1} The following diagram commutes: \begin{equation}\label{finalsequence} \xymatrix{ \funaQ 1\ar[r]^{\funaQ h_0} & \ldots & \funaQ\T^n 1\ar[r]^{\funaQ h_n} & \funaQ\T^{n+1} 1 & \ldots\\ \mathbbm{2}\ar[u]_{s_0}\ar[r]_{j_0} & \ldots & \mathbb{M}^n\mathbbm{2}\ar[u]_{s_n}\ar[r]_{j_n} & \mathbb{M}^{n+1}\mathbbm{2}\ar[u]_{s_{n+1}} & \ldots } \end{equation} In addition, each map $s_{n}$ is injective. \end{prop} \begin{proof} In order to show that diagram (\ref{finalsequence}) commutes, we will prove that \[ s_{n+1} \cof j_n = \funaQ h_n \cof s_n \] for all $n \in \omega$. The proof is by induction on $n$. The base case $s_1 \cof j_0 = \funaQ h_0 \cof s_0$ is a consequence of the fact that $\mathbbm{2}$ is the initial object in $\mathsf{BA}$.
For the inductive case, where $n=k+1$ for some $k \in \omega$, we reason as follows: \begin{align*} s_{k+2} \cof j_{k+1} &= \delta_{\T^{k+1} 1} \cof \mathbb{M} (s_{k+1}) \cof \mathbb{M}(j_k) & \text{(unfolding definitions)} \\&= \delta_{\T^{k+1} 1} \cof \mathbb{M} (s_{k+1} \cof j_k) & \text{(functoriality of $\mathbb{M}$)} \\&= \delta_{\T^{k+1}1} \cof \mathbb{M}( \funaQ h_k \cof s_k) & \text{(inductive hypothesis)} \\&= \funaQ \T h_k \cof \delta_{\T^k 1} \cof \mathbb{M}(s_k) & \text{(naturality of $\delta$)} \\&= \funaQ h_{k+1} \cof s_{k+1} & \text{(definition $s_{k+1}$)} \end{align*} Since $\delta$ is injective (Proposition~\ref{p:3}) and $\mathbb{M}$ preserves embeddings (Proposition~\ref{p:Mfinemb}), a straightforward inductive proof shows that all $s_n$, $n\in\omega$, are injective. \end{proof} We are now going to demonstrate that the coalgebraic semantics and the semantics via the final sequence coincide. \begin{prop} \label{p:fin2} For a given coalgebra $\mathbb{X} = \struc{X,\xi}$ and any formula $a \in \mathcal{L}_{n}$, the following holds: \begin{equation}\label{equ:semantics_stratify} \mathit{mng}_{\mathbb{X}} (a)=\xi_n^{-1}(s_n(q_n(a))), \qquad \mbox{ for all } a \in \mathcal{L}_n \text{ and }n \in \omega. \end{equation} \end{prop} \begin{proof} First note that $\funaQ X$ together with the maps $\funaQ \xi_n \cof s_n =\xi_n^{-1} \cof s_n$ form a cocone over the initial sequence of $\mathbb{M}$. Therefore there is a mediating arrow \[ \mathit{mng}^*_\mathbb{X}: \mathbb{M}^\omega \mathbbm{2} \to \funaQ X \] from the carrier of the initial $\mathbb{M}$-algebra $\mathcal{M}$ to $\funaQ X$ with the property that $\mathit{mng}^*_\mathbb{X} \cof i_n = \xi_n^{-1} \cof s_n$. 
We claim that \begin{equation} \label{eq:minit} \text{the map $\mathit{mng}^*_\mathbb{X}$ is an $\mathbb{M}$-algebra morphism from $\mathcal{M}$ to $\mathbb{X}^{*}$.} \end{equation} In order to prove \eqref{eq:minit}, observe that by Proposition~\ref{p:minit}, for all $n \in \omega$ we have \begin{equation}\label{equ:Minitinvmap} j_\omega \cof i_{n+1} = \mathbb{M} (i_n), \end{equation} where $j_\omega: \mathbb{M}^\omega \mathbbm{2} \to \mathbb{M} \mathbb{M}^\omega \mathbbm{2} $ is the inverse of the algebra structure map $\heartsuit^\mathcal{M}$ of the initial $\mathbb{M}$-algebra. In order to prove the claim it suffices to show that the following diagram commutes \[ \xymatrix{ \mathbb{M} \mathbb{M}^\omega \mathbbm{2} \ar[rr]^{\mathbb{M} \mathit{mng}^*_\mathbb{X}} & & \mathbb{M} \funaQ X \ar[d]^{\delta_X} \\& & \funaQ T X \ar[d]^{\funaQ \xi} \\ \mathbb{M}^\omega \mathbbm{2} \ar[uu]^{j_\omega} \ar[rr]_{\mathit{mng}^*_\mathbb{X}} & & \funaQ X } \] We prove that the diagram commutes by showing that $f \mathrel{:=} \funaQ \xi \cof \delta_X \cof \mathbb{M} (\mathit{mng}^*_\mathbb{X}) \cof j_\omega$ is a mediating arrow from $(\mathbb{M}^\omega \mathbbm{2} , \{i_n\}_{n \in \omega})$ to $(\funaQ X, \{\funaQ \xi_n \cof s_n \}_{n \in \omega})$. Therefore $f$ has to be equal to $\mathit{mng}^*_\mathbb{X}$ by the universal property of the colimit $(\mathbb{M}^\omega \mathbbm{2} , \{i_n\}_{n \in \omega})$. We show that $f$ has the claimed property by proving that for all $n \in \omega$ we have \begin{equation}\label{equ:malgmorph} \funaQ (\xi_{n}) \cof s_{n} = f \cof i_n \end{equation} For $n=0$ the equation holds by initiality of $\mathbbm{2}$. 
Furthermore for an arbitrary $n \geq 0$ we have \begin{align*} \funaQ (\xi_{n+1}) \cof s_{n+1} & = \funaQ (\T \xi_n \cof \xi) \cof \delta_{\T^n 1} \cof \mathbb{M} s_n & \text{(definition of $\xi_{n+1}$ and of $s_{n+1}$)} \\& = \funaQ (\xi) \cof \funaQ(\T \xi_n) \cof \delta_{\T^n 1} \cof \mathbb{M} s_n & \text{(functoriality of $\funaQ$)} \\& =\funaQ(\xi) \cof \delta_X \cof \mathbb{M}\funaQ \xi_n \cof \mathbb{M} s_n &\text{(naturality of $\delta$)} \\& = \funaQ(\xi) \cof \delta_X \cof \mathbb{M}(\funaQ \xi_n \cof s_n) &\text{(functoriality of $\mathbb{M}$)} \\& = \funaQ(\xi) \cof \delta_X \cof \mathbb{M}(\mathit{mng}^*_\mathbb{X} \cof i_n) &\text{($\mathit{mng}^*_\mathbb{X}$ mediating arrow)} \\& = \funaQ (\xi) \cof \delta_X \cof \mathbb{M} \mathit{mng}^*_\mathbb{X} \cof \mathbb{M} i_n &\text{(functoriality of $\mathbb{M}$)} \\& = \funaQ (\xi) \cof \delta_X \cof \mathbb{M} \mathit{mng}^*_\mathbb{X} \cof j_\omega \cof i_{n+1} . &\text{(equation \eqref{equ:Minitinvmap})} \end{align*} Therefore equation (\ref{equ:malgmorph}) holds for all $n$, which finishes the proof of \eqref{eq:minit}. From this it follows that $V\mathit{mng}^{*}: V\mathcal{M} \to V\mathbb{X}^{*}$ is a Moss algebra homomorphism. Recalling from Proposition~\ref{p:plusstar} that $V\mathbb{X}^{*} = \mathbb{X}^{+}$, we obtain by initiality of $\mathcal{L}$ as a Moss algebra, that $V \mathit{mng}^*_\mathbb{X} \cof m = \mathit{mng}_{\mathbb{X}}$. Here $\mathit{mng}_{\mathbb{X}}: \mathcal{L} \to \mathbb{X}^{+}$ is the unique Moss algebra homomorphism that maps an element of $\mathcal{L}$ to its semantics in $\mathbb{X}^{+}$, and $m \mathrel{:=} \mathit{mng}_{V\mathcal{M}}$ is the unique homomorphism $m: \mathcal{L} \to V\mathcal{M}$ in the category of Moss algebras. 
But then by the Axiomatic Stratification Theorem~\ref{t:strat}, for all $n \in \omega$ and all formulas $a \in \mathcal{L}_n$ we have $\mathit{mng}_{\mathbb{X}}(a) = \mathit{mng}^*_\mathbb{X}(m(a)) = \mathit{mng}^*_\mathbb{X}(i_n(q_n(a))) = \funaQ \xi_n \cof s_n(q_n(a))$, where the last identity holds by the definition of $\mathit{mng}^{*}_\mathbb{X}$ as a mediating arrow. This shows that (\ref{equ:semantics_stratify}) holds, and finishes the proof of the claim. \end{proof} On the basis of the results obtained so far, the proof of our soundness and completeness results is now more or less immediate. \begin{proofof}{Theorem~\ref{t:main}} Let $a$ and $b$ be two formulas in $\mathcal{L}$. Fix a natural number $n$ such that $a,b \in \mathcal{L}_{n}$. Recall that $\mathbb{F}_{n} = \struc{\T^{n}1,\T^{n}g}$ denotes the `$n$-step coalgebra' defined in Definition~\ref{d:ncoalg}. Now consider the following sequence of equivalences: \begin{align*} a \sqsubseteq_{\mathbf{M}} b & \iff q_{n}(a) \subseteq q_{n}(b) & \text{(Axiomatic Stratification Theorem~\ref{t:strat})} \\& \iff s_{n}q_{n}(a) \subseteq s_{n}q_{n}(b) & \text{(injectivity of $s_{n}$)} \\& \iff (\funaQ(\T^{n}g)_{n})(s_{n}q_{n}(a)) \subseteq (\funaQ(\T^{n}g)_{n})\rlap{$(s_{n}q_{n}(b))$} & \text{(equation~\eqref{eq:gh})} \\& \iff \mathit{mng}_{\mathbb{F}_{n}}(a) \subseteq \mathit{mng}_{\mathbb{F}_{n}}(b) & \text{(Proposition~\ref{p:fin2})} \\& \iff a \models_{\T} b & \text{(Semantic Stratification Theorem~\ref{t:strs})} \end{align*} From this the Theorem is immediate. 
\end{proofof} \section{Conclusions} \label{s:conclusion} \paragraph{Summary of results} As the main contributions of this paper we see the definition of the \emph{derivation system $\mathbf{M}$} for the finitary version of Moss' coalgebraic logic, the result stating that $\mathbf{M}$ provides a \emph{sound and complete} axiomatization for the collection of coalgebraically valid inequalities, and the fact that all of our definitions, results and proofs are completely \emph{uniform} in the coalgebraic type functor $\T$. Our proof of the soundness and completeness theorem is rather elaborate and technical, but we believe that the effort has been worthwhile, and that on the way we have identified some new concepts and obtained some auxiliary results that may be of independent interest. Of these we list the following: \begin{enumerate}[(1)] \item a survey of the properties of the notion $\rl{\T}$ of relation lifting, induced by an arbitrary but fixed set functor $\T$ (section~\ref{s:relationlifting}); \item the introduction in Definition~\ref{d:Prs} of the category $\mathsf{Pres}$ of Boolean algebra presentations, and the establishment in Theorem~\ref{t:BCadj} of an adjunction between $\mathsf{Pres}$ and the category $\mathsf{BA}$ of Boolean algebras; \item the introduction in section~\ref{ss:functorM} of the functor $\mathbb{M}: \mathsf{BA} \to \mathsf{BA}$, and the results in Proposition~\ref{p:Mfinemb} that $\mathbb{M}$ is finitary and preserves embeddings, and in Theorem~\ref{l:1st2} that it preserves atomicity of Boolean algebras; \item the stratification of our logic, both semantically (Theorem~\ref{t:strs}) and syntactically (Theorem~\ref{t:strat}); \item the identification, in Corollary~\ref{c:Minit}, of the initial $\mathbb{M}$-algebra $\mathcal{M}$, through the functor $V$, as the Lindenbaum-Tarski algebra of our logic.
\end{enumerate} \paragraph{Related and ongoing work} As mentioned in the introduction, this paper replaces, extends and partly corrects an earlier version~\cite{kupk:comp08}. Since the publication of the latter paper and the preparation of the current manuscript, there have been a number of developments in the area of Moss' logic that we would like to mention here. First of all, based on our one-step soundness and completeness results, Bergfeld gave a more direct version of our completeness proof in his MSc thesis~\cite{berg:moss09}; as a corollary he established a strong completeness theorem for Moss' logic (modulo some restrictions on the functor $\T$). Second, B\'{\i}lkov\'a, Palmigiano \& Venema generalized their earlier result on the power set nabla~\cite{bilk:proo08} to the general case of a standard, weak pullback preserving functor $\T$: in~\cite{bilk:proo10} they provide a sound, complete, and cut-free proof system for (the finitary version of) Moss' coalgebraic logic. Systematically using Stone duality, Kurz \& Leal~\cite{kurz:modaxx} make a detailed comparison between Moss' approach towards coalgebraic logic, and the one based on associating standard modalities with predicate liftings; their main contribution is a new coalgebraic logic combining features of both approaches. Venema, Vickers \& Vosmaer~\cite{vene:powe10} study a variant of the derivation system $\mathbf{M}$ in the setting of geometric logic; their main contribution is to generalize Johnstone's power construction on locales, to a functor $V_{\T}$, parametrically defined in a set functor $\T$, on the category of locales. Finally, B\'{\i}lkov\'a, Velebil \& Venema~\cite{bilk:monoxx} prove that on the (semantically defined) Lindenbaum-Tarski algebra of our logic, the nabla modality has the interesting order-theoretic property of being a so-called $\mathcal{O}$-adjoint. \paragraph{Future research} We finish by mentioning some directions for future research.
To start with, in this paper we have studied the nabla operator in the setting of the diagram~\eqref{diag:duality}, which is a particular instantiation of the general Stone duality diagram \begin{equation} \label{diag:Stonegen} \xymatrix{ \mathsf{Alg} \ar@(dl,ul)[]^{L} \ar@/_/[r]_{S} & {\mathsf{Sp}^{\mathrm{op}}} \ar@/_/[l]_{P} \ar@(dr,ur)[]_{T} } \end{equation} where $\mathsf{Alg}$ denotes a category of algebras representing the base logic, $\mathsf{Sp}$ is a category of spaces representing the semantics of the logic, $\T$ is the coalgebra functor representing all one-step behaviours, and $L$ represents the one-step version of the coalgebraic modal logic. Given the flexibility of the Stone duality approach we believe it to be of interest to consider more instances of the diagram~\eqref{diag:Stonegen} where $L$ is some version of our nabla logic. Of particular interest are the cases where for $\mathsf{Alg}$ we take the variety of distributive lattices, because this could clarify the role of the negation in our setting. Second, a clear drawback of the current nabla-based approach towards coalgebraic logic is the restriction to functors that preserve weak pullbacks. It would therefore be interesting to see whether this restriction can be removed. A first step in this direction has been made by Santocanale \& Venema~\cite{sant:unif10}, who introduce a nabla-based version of monotone modal logic, a variant of basic modal logic that is naturally interpreted in coalgebras for the monotone neighborhood functor of Example~\ref{ex:1} --- a functor that does not preserve weak pullbacks. Finally, in the introduction we mentioned that the work of Janin \& Walukiewicz~\cite{jani:auto95} on automata theory and modal fixpoint logics is an independent source for the introduction of the cover modality $\nabla_{\!\funP}$ as a primitive modality. 
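For orientation, we recall the well-known interdefinability of the power-set cover modality with the standard modal operators (writing $\top := \bigwedge\varnothing$); this translation goes back to the automata-theoretic tradition mentioned above and is not specific to our axiomatization: \[ \Diamond a \;\equiv\; \nabla\{a,\top\}, \qquad \Box a \;\equiv\; \nabla\{a\} \vee \nabla\varnothing, \qquad \nabla\alpha \;\equiv\; \Box\bigvee\alpha \,\wedge\, \bigwedge\{\Diamond a \mid a \in \alpha\}. \] It is this mutual definability that makes $\nabla_{\!\funP}$ an adequate primitive for basic modal logic, and hence a natural candidate for the fixpoint setting as well.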
Since $\nabla_{\!\funP}$ also plays a fundamental role in Walukiewicz' completeness result for the modal $\mu$-calculus~\cite{walu:comp00}, this naturally raises the question whether we can extend our completeness result to the setting with fixpoint operators. \section{The derivation system} \label{s:derivation} \subsection{Introduction} In this section we introduce our derivation system $\mathbf{M}$ for the finitary version of Moss' logic, as given in the previous section. First we fix some general notation and terminology concerning derivations. \begin{definition} \label{d:dy} Given a derivation system $\mathbf{D}$, we let each of $\vdash_{\mathbf{D}} a \precsim b$, $a \sqsubseteq_{\mathbf{D}} b$ and $b \sqsupseteq_{\mathbf{D}} a$ denote the fact that the inequality $a \precsim b$ is derivable in $\mathbf{D}$, and we write $a \equiv_{\mathbf{D}} b$ if both $a \sqsubseteq_{\mathbf{D}} b$ and $b \sqsubseteq_{\mathbf{D}} a$. \end{definition} In other words, where $a \precsim b$ and $a \approx b$ are syntactic expressions in an object language, the expressions $a \sqsubseteq_{\mathbf{D}} b$ and $a \equiv_{\mathbf{D}} b$ denote statements, in the metalanguage, \emph{about} the derivability of such expressions $a \precsim b$ and $b \precsim a$. In case no confusion is likely concerning the derivation system at hand, we will drop subscripts, simply writing $a \equiv b$ and $a \sqsubseteq b$. \bigskip In principle, the derivation system that we are looking for, should have axioms and rules of three kinds. First of all, it will have a propositional core taking care of the Boolean basis of our setting. For this purpose, any sound and complete set of axioms and derivation rules would do; for concreteness, we propose the set given in Table~\ref{tb:clax}. Recall that our language has $\bigvee$ and $\bigwedge$ as primitive connectives. 
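By way of illustration, here is a small (routine) sample derivation in this propositional core, using only the reflexivity axiom and the single-premiss rules for $\bigwedge$ and $\bigvee$ of Table~\ref{tb:clax}, which derives $\bigwedge\{a,b\} \precsim \bigvee\{b,c\}$: \[ \AXC{} \UIC{$b \precsim b$} \RL{$b \in \{a,b\}$} \UIC{$\bigwedge\{a,b\} \precsim b$} \RL{$b \in \{b,c\}$} \UIC{$\bigwedge\{a,b\} \precsim \bigvee\{b,c\}$} \DisplayProof \]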
\begin{table}[bht] \begin{center} \begin{tabular}{|cc|} \hline & \\ \AXC{} \UIC{$a \precsim a$} \DisplayProof & \AXC{$a \precsim b$} \AXC{$b \precsim c$} \BIC{$a \precsim c$} \DisplayProof \\ & \\ \AXC{$\{ a \precsim b \mid a \in \phi \}$} \UIC{$\bigvee\phi \precsim b$} \DisplayProof & \AXC{$a \precsim b$} \RL{$b \in \psi$} \UIC{$a \precsim\bigvee\psi$} \DisplayProof \\ & \\ \AXC{$\{ a \precsim b \mid b \in \psi \}$} \UIC{$a \precsim \bigwedge\psi$} \DisplayProof & \AXC{$a \precsim b$} \RL{$a \in \phi$} \UIC{$\bigwedge\phi \precsim b$} \DisplayProof \\ & \\ \multicolumn{2}{|c|}{%
\AXC{} \UIC{$\bigwedge \{ \bigvee\phi \mid \phi \in X\} \precsim \bigvee \{ \bigwedge\gamma[X] \mid \gamma \in \mathit{Choice}(X) \}$} \DisplayProof } \\ & \\ \AXC{$\bigwedge (X \cup \{ \neg a\}) \precsim \bigvee Y$} \UIC{$\bigwedge X \precsim \bigvee (Y \cup \{ a\})$} \DisplayProof & \AXC{$\bigwedge (X \cup \{ a\}) \precsim \bigvee Y$} \UIC{$\bigwedge X \precsim \bigvee (Y \cup \{ \neg a\})$} \DisplayProof \\ & \\ \hline \end{tabular} \end{center} \caption{Axioms and rules for classical propositional logic} \label{tb:clax} \end{table} Second, our system will need some kind of \emph{congruence rule} for the nabla modality. Since $\nabla$ has a rather unusual form, perhaps it is not a priori clear what such a rule would look like. The naive way to formulate a congruence rule for $\nabla$ would be as \begin{equation} \label{eq:cgr} \mbox{from } \alpha \rel{\rl{\T}{\equiv}} \beta \mbox{ infer } \nabla\alpha \equiv \nabla\beta \end{equation} The problem is that the premiss of \eqref{eq:cgr} is not itself an equation, or a set of equations. It can be remedied by invoking some properties of relation lifting. More precisely, note that from Proposition~\ref{p:st-rl} we may derive the equivalence $\alpha \rel{\rl{\T}{\equiv}} \beta \iff \alpha \rel{\rl{\T} Z} \beta$, for some $Z \subseteq \mathit{Base}(\alpha) \times \mathit{Base}(\beta)$.
This would lead to the following formulation of a congruence rule: \[ \AXC{$\{ a \approx b \mid (a,b) \in Z \}$} \RL{$(\alpha,\beta) \in \rl{\T} Z$} \UIC{$\nabla\alpha \approx \nabla\beta$} \DisplayProof \\ \\ \] The above rule is supposed to have a set of premisses: $\{ a \approx b \mid (a,b) \in Z \}$, where $Z \subseteq \mathit{Base}(\alpha) \times \mathit{Base}(\beta)$ is a relation such that $(\alpha, \beta) \in \rl{\T} Z$ --- the latter condition is formulated as a \emph{side condition} of the rule. As it turns out, however, we also want $\nabla$ to be \emph{order-preserving}, and the most straightforward way to formulate that would be by strengthening \eqref{eq:cgr} to \begin{equation} \label{eq:cgrmon} \mbox{from } \alpha \rel{\rl{\T} {\sqsubseteq}} \beta \mbox{ infer } \nabla\alpha \sqsubseteq \nabla\beta. \end{equation} If we want to turn this into a syntactically well-formed derivation rule again, we obtain our first derivation rule ($\nb1$): \[ \tag{$\nabla 1$} \AXC{$\{ a \precsim b \mid (a,b) \in Z \}$} \RL{$(\alpha,\beta) \in \rl{\T} Z $} \UIC{$\nabla\alpha \precsim \nabla\beta$} \DisplayProof \\ \\ \] which can be read as a congruence and monotonicity rule in one. It has the additional advantage of being formulated in terms of our primitive symbol, $\precsim$. \begin{exa} First, consider the $C$-labelled binary tree functor $\mathit{B}_{C} = C \times \Id \times \Id$ of Example~\ref{ex:2}. Here, an application of rule ($\nabla 1$) looks as follows: \[ \AXC{$\{ a_1 \precsim b_1, a_2 \precsim b_2\}$} \UIC{$\nabla(c,a_1,a_2) \precsim \nabla(c,b_1,b_2)$} \DisplayProof \\ \\ \] where $c$ is an arbitrary element of $C$. Note that no inequality of the form $\nabla(c,a_1,a_2) \precsim \nabla(d,b_1,b_2)$ with $c \not= d$ can be derived using ($\nabla 1$) because $(\nabla(c,a_1,a_2),\nabla(d,b_1,b_2)) \not\in \rl{\T} (Z)$ for any relation $Z$. 
In the case of the power set functor $\funP$, an application of the rule ($\nabla 1$) looks as follows: \[ \AXC{$\{ a \precsim b \mid (a,b) \in Z\}$} \RL{$(\alpha,\beta) \in \rl{\Pow}Z$} \UIC{$\nabla \alpha \precsim \nabla \beta$} \DisplayProof \\ \\ \] where $\alpha,\beta \in \Pom \mathcal{L}$ are finite sets of formulas. It can easily be seen that the premiss of the rule can be satisfied iff for all $a \in \alpha$ there is a $b \in \beta$ such that $a \precsim b$, and vice versa. \end{exa} In addition, any complete derivation system for Moss' language will need some \emph{interaction principles} describing the interaction between the nabla modality and the Boolean connectives. As we will see, the interaction principles between $\nabla$ and the Boolean connectives $\bigvee$ and $\bigwedge$ will take the form of two \emph{distributive laws} (in the logical meaning of the word). We postpone discussing the role of negation in our system until subsection~\ref{ss:neg}, and before giving the general formulation of the laws for $\bigwedge$ and $\bigvee$, we first discuss a simple, special case. \subsection{Functors restricting to finite sets} For a gentle introduction to our derivation system we first consider the special case where the functor restricts to finite sets. Turning to the interaction principles, we begin with the interaction between the coalgebraic modality and \emph{conjunctions}. More specifically, the purpose of axiom ($\nb2$) will be to rewrite a conjunction of nabla formulas as an equivalent `disjunction of nablas of conjunctions', and we think of this axiom as a distributive law (in the logical sense). Formally, recall from Definition~\ref{d:srd} that given a finite set $A \in \Pom\Tom\mathcal{L}$, the set $\mathit{SRD}(A) \subseteq \Tom\Pom\mathcal{L}$ denotes the set of \emph{slim redistributions} of $A$.
Also recall that given an object $\Phi \in \Tom\Pom\mathcal{L}$, we find $(\T\bigwedge)\Phi \in \Tom\mathcal{L}$, which means that $\nabla(\T\bigwedge)\Phi$ is a well-formed formula. We can now formulate the axiom ($\nabla 2$) as the following inequality: \begin{equation} \tag{$\nabla 2_{f}$} \bigwedge \Big\{ \nabla\alpha \mid \alpha \in A \Big\} \precsim \bigvee \Big\{ \nabla (\T\mbox{$\bigwedge$})\Phi \mid \Phi \in \mathit{SRD}(A) \Big\} \end{equation} \begin{exa} First consider the case of the $C$-labelled binary tree functor $\mathit{B}_{C}$ of Example~\ref{ex:2}. In Example~\ref{ex:srd} we discussed the shape of the collection of slim redistributions of a collection $A \subseteq_{\omega} \Tom\mathcal{L}$. From this it should be clear that we obtain the following three instances of ($\nb2_{f}$). \begin{enumerate}[(1)] \item If $A=\varnothing$, we obtain \[ \top \precsim \bigvee \{ \nabla (c,\top,\top) \mid c \in C \} \] \item If $A$ contains two elements $(c,a_{1},a_{2})$ and $(c',a'_{1},a'_{2})$ with $c \neq c'$, then we obtain \[ \bigwedge \{\nabla\alpha \mid \alpha \in A \} \precsim \bot. \] \item If $\pi_{C}[A]$ contains a unique element $c_{A}$, then we obtain \[ \bigwedge \{\nabla\alpha \mid \alpha \in A \} \precsim \nabla(c_{A},\pi_1[A],\pi_2[A]) \] where $\pi_{C}$, $\pi_{1}$ and $\pi_{2}$ are the projection functions, as in Example~\ref{ex:srd} and where we used the optimization outlined in Remark~\ref{rem:superslim}. 
\end{enumerate} \noindent Second, in the case of the power set functor in Example~\ref{ex:srd2}, $\T =\Pow$, an instance of ($\nabla 2_f$) looks as follows \begin{equation} \label{eq:dl1} \bigwedge_{\alpha\in A} \nabla \alpha \precsim \bigvee \Big\{ \nabla \{ \mbox{$\bigwedge$}\beta \mid \beta \in \Phi \} \mid \mbox{$\bigcup$} A = \mbox{$\bigcup$} \Phi \text{ and } \alpha\cap\beta\neq\varnothing \text{ for all } \alpha\in A, \beta \in \Phi \Big\} \end{equation} \end{exa} \begin{rem} In fact, we could have formulated this principle as an \emph{equation} rather than as an inequality, since the opposite inequality of ($\nabla 2_{f}$) can be derived on the basis of $(\nabla 1)$. To see this, observe that for any formula $a \in \mathcal{L}$ and any set $\phi \in \Pom\mathcal{L}$ it holds that $a \in \phi$ implies that $a \sqsupseteq \bigwedge\phi$. Reformulating this as $({\in};\bigwedge)\; \subseteq \; {\sqsupseteq}$, and using the properties of relation lifting we find that $\rl{\T}{\in};\T{\mbox{$\bigwedge$}} \subseteq \rl{\T}{\sqsupseteq}$. From this it follows that, whenever $\alpha \in \Tom\mathcal{L}$ is a lifted member of $\Phi \in \Tom\Pom\mathcal{L}$, we find that $(\T\mbox{$\bigwedge$})\Phi \rl{T}(\sqsubseteq)\alpha$. From this, one application of ($\nabla 1$) yields the existence of a derivation for the inequality $\nabla(\T\mbox{$\bigwedge$})\Phi \precsim \nabla\alpha$. Since this holds for any $\alpha$ and $\Phi$ with $\alpha \rl{T}{\in} \Phi$, we may conclude that \[ \bigvee \Big\{ \nabla (\T\mbox{$\bigwedge$})\Phi \mid \Phi \in \mathit{SRD}(A) \Big\} \sqsubseteq \bigwedge \Big\{ \nabla\alpha \mid \alpha \in A \Big\}. \] That is, the opposite inequality of ($\nabla 2_{f}$) is indeed derivable. \end{rem} Our second interaction principle, ($\nb3$), involves the interaction between $\nabla$ and the \emph{disjunction} operation. 
And again, we think of this axiom as a distributive law (in the logical sense), stating that the coalgebraic modality distributes over disjunctions. More precisely, the rule reads as follows: \[ \tag{$\nabla 3_{f}$} \nabla(\T\mbox{$\bigvee$})\Phi \precsim \bigvee \Big\{ \nabla \beta \mid \beta \rl{T}(\in) \Phi \Big\} \] \begin{exa} In the case of the functor $\mathit{B}_{C}=C \times \Id \times \Id$, axiom ($\nabla 3_f$) is of the following shape: \[ \nabla (c,\mbox{$\bigvee$} A,\mbox{$\bigvee$} B) \precsim \bigvee \{ \nabla (c,a,b) \mid a \in A, b \in B \}. \] For the power set functor $\funP$, an instance of axiom ($\nabla 3_f$) looks as follows \[ \nabla \{ \mbox{$\bigvee$}\beta \mid \beta\in \Phi\} \precsim \bigvee \{ \nabla \alpha \mid \alpha \subseteq \mbox{$\bigcup$} \Phi \; \mbox{and} \; \alpha \cap \beta \not= \emptyset \; \mbox{for all} \; \beta \in \Phi \; \} .\] \end{exa} \begin{rem} In this case the opposite inequality can be derived on the basis of ($\nb1$) as well. Here we use the fact that $a \in \phi$ implies $a \sqsubseteq \bigvee\phi$, or in other words, that ${\in};{\bigvee} \subseteq {\sqsubseteq}$. This implies that $\rl{\T}{\in};\T{\bigvee} \subseteq \rl{\T}{\sqsubseteq}$, and hence, whenever $\beta$ is a lifted member of $\Phi$, we find that $\beta \rl{\T}{\sqsubseteq} (\T\bigvee)\Phi$. Thus an application of ($\nabla 1$) shows the derivability of the inequality $\nabla\beta \precsim \nabla(\T\bigvee)\Phi$. And since this applies to every lifted member of $\Phi$, we may conclude that \[ \bigvee \Big\{ \nabla \beta \mid \beta \rl{T}(\in) \Phi \Big\} \sqsubseteq \nabla(\T\mbox{$\bigvee$})\Phi, \] meaning that, indeed, the opposite inequality of ($\nabla 3_{f}$) is derivable. 
\end{rem} Summarizing, in the case of a set functor $\T$ that preserves finite sets, our derivation system $\mathbf{M}_{f}$ extends that of classical propositional logic (Table~\ref{tb:clax}) with one congruence/monotonicity rule, and two axioms that take the form of distributive laws, see Table~\ref{tb:naxfin}. The point of restricting to this case is to ensure that the axioms ($\nb2_{f}$) and ($\nb3_{f}$) are well-formed pieces of syntax, in the sense that the disjunctions on the right hand side are \emph{finite}. \begin{rem} The requirement on the given set functor $\T$ to preserve finite sets is obviously sufficient in order to ensure that the axioms ($\nb2_{f}$) and ($\nb3_{f}$) are well-formed. Note, however, that there are set functors that do not restrict to finite sets and for which the axioms ($\nb2_f$) and ($\nb3_f$) are nevertheless syntactically well-formed. Consider for example the bag functor $\mathit{B}_\omega$ from Example~\ref{ex:1}. In order to show that ($\nb2_f$) and ($\nb3_f$) are well-formed we have to prove that the sets \begin{align} \{ \Phi \in \mathit{B}_\omega \Pom X \mid \Phi \in \mathit{SRD}(A) \} &\quad \mbox{for } \; A \in \Pom \mathit{B}_\omega X \; \mbox{and} \label{finite_number1}\\ \{ \beta \in \mathit{B}_\omega X \mid \beta (\overline{\mathit{B}_\omega} \in ) \Phi \} &\quad \mbox{for } \; \Phi \in \mathit{B}_\omega \Pom X \label{finite_number2} \end{align} are finite. Using the characterisation of the relation lifting for $\mathit{B}_\omega$ in Example~\ref{ex:rellift} this is not difficult to see. Let us first consider the set in (\ref{finite_number1}), i.e., we consider some $A \in \Pom \mathit{B}_\omega X$ and we want to prove that the set $\{ \Phi \in \mathit{B}_\omega \Pom X \mid \Phi \in \mathit{SRD}(A) \}$ is finite.
If $\Phi \in \mathit{SRD}(A)$ then by the definition of slim redistributions we have $(\alpha,\Phi) \in (\overline{\mathit{B}_\omega} \in)$ for all $\alpha \in A$ and $\Phi \in \mathit{B}_\omega \Pom (\bigcup_{\alpha' \in A} \mathit{Base}(\alpha'))$. Therefore, using Proposition~\ref{p:st-rl}, we get that \[ (\alpha,\Phi) \in \overline{\mathit{B}_\omega} \left( \in \rst{\mathit{Base}(\alpha) \times \Pom (\bigcup_{\alpha' \in A} \mathit{Base}(\alpha'))} \right) \quad \mbox{for all } \alpha \in A.\] This implies, by the definition of $\overline{\mathit{B}_\omega}$ from Example~\ref{ex:rellift}, that there exists a function \[ \rho: \in \rst{\mathit{Base}(\alpha) \times \Pom (\bigcup_{\alpha' \in A} \mathit{Base}(\alpha'))} \to \mathbb{N} \] such that for all $\alpha \in A$, all $x \in \mathit{Base}(\alpha)$ and all $U \in \Pom (\bigcup_{\alpha' \in A} \mathit{Base}(\alpha'))$ we have \[ \Phi(U) = \sum_{x' \in \mathit{Base}(\alpha), x' \in U} \rho(x',U) \quad \mbox{and} \quad \rho(x,U) \leq \alpha(x) \] Therefore we have $\Phi(U) \leq \sum_{x \in U} \alpha(x)$. This shows that the range of $\Phi$ has an upper bound and thus, as $\Phi$ is determined by its values on the finite set $ \Pom (\bigcup_{\alpha' \in A} \mathit{Base}(\alpha'))$, there can be only finitely many $\Phi$'s that satisfy the requirement of a slim redistribution for the set $A$. In a similar way one can show that the set $\{\beta \in \mathit{B}_\omega X \mid \beta (\overline{\mathit{B}_\omega} \in ) \Phi \}$ in (\ref{finite_number2}) is finite for all $\Phi \in \mathit{B}_\omega \Pom X$. We leave the details of the argument as an exercise for the reader. One example of a set functor for which the finitary axioms ($\nb2_f$) and ($\nb3_f$) are not well-formed is provided by the finitary probability functor $D_\omega$ in Example~\ref{ex:1}.
\end{rem} \begin{table} \begin{center} \begin{tabular}{|lc|} \hline & \\ ($\nabla 1$) & \AXC{$\Big\{ a \precsim b \mid (a,b) \in Z \Big\}$} \RL{$(\alpha,\beta) \in \rl{\T} Z $} \UIC{$\nabla\alpha \precsim \nabla\beta$} \DisplayProof \\ & \\ ($\nabla 2_{f}$) & $\displaystyle \bigwedge \Big\{ \nabla\alpha \mid \alpha \in A \Big\} \precsim \displaystyle \bigvee \Big\{ \nabla (\T\mbox{$\bigwedge$})\Phi \mid \Phi \in \mathit{SRD}(A) \Big\}$ \\ & \\ ($\nabla 3_{f}$) & $\nabla(\T\mbox{$\bigvee$})\Phi \precsim \displaystyle \bigvee \Big\{ \nabla \beta \mid \beta \rl{T}(\in) \Phi \Big\}$ \\ & \\ \hline \end{tabular} \end{center} \caption{Rules and axioms of the system $\mathbf{M}$ (in case $\T$ preserves finite sets)} \label{tb:naxfin} \end{table} \subsection{The derivation system $\mathbf{M}$} In the case that we are dealing with an arbitrary set functor $\T$ (not necessarily preserving finite sets), we would like to use the same derivation system as given in Table~\ref{tb:naxfin}. Unfortunately however, in this case the axioms ($\nabla 2_{f}$) and ($\nabla 3_{f}$) are no longer well-formed syntactic expressions, since we cannot guarantee that the disjunctions on the right hand sides are taken over a \emph{finite} set. In order to deal with this problem, we use the following trick: we replace an axiom of the form \[ a \precsim \bigvee \{ a_{i} \mid i \in I \} \] with the derivation rule \[ \AXC{$\{ a_{i} \precsim b \mid i \in I \}$} \UIC{$a \precsim b$} \DisplayProof \] The price that we have to pay for this transformation is that our derivation system will be \emph{infinitary}. \begin{definition}\label{def:nax} The derivation system $\mathbf{M}$ is given by the axioms and derivation rules of Table~\ref{tb:nax}, together with the complete set of axioms and rules for classical propositional logic given in Table~\ref{tb:clax}. 
\begin{table} \begin{center} \begin{tabular}{|lc|} \hline & \\ ($\nabla 1$) & \AXC{$\{ a \precsim b \mid (a,b) \in Z \}$} \RL{$(\alpha,\beta) \in \rl{\T} Z$} \UIC{$\nabla\alpha \precsim \nabla\beta$} \DisplayProof \\ & \\ ($\nabla 2$) & \AXC{$\{ \nabla (\T\bigwedge)(\Phi) \precsim b \mid \Phi\in \mathit{SRD}(A)\}$} \UIC{$\bigwedge\{\nabla\alpha \mid \alpha\in A\} \precsim b$} \DisplayProof \\ & \\ ($\nabla 3$) & \AXC{$ \{ \nabla\alpha \precsim b \mid \alpha \rel{\Tb{\in}} \Phi \}$} \UIC{$\nabla(\T\bigvee)(\Phi) \precsim b$} \DisplayProof \\ & \\ \hline \end{tabular} \end{center} \caption{Rules of the system $\mathbf{M}$} \label{tb:nax} \end{table} \end{definition} Our notions of derivation and derivability are completely standard. \begin{definition}\label{def:deriv} A \emph{derivation} is a well-founded tree, labelled with inequalities, such that the leaves of the tree are labelled with axioms of $\mathbf{M}$, whereas with each parent node we may associate a derivation rule of which the conclusion labels the parent node itself, and the premisses label its children. If $\D$ is a derivation of the inequality $a\precsim b$, we write \AXC{$\D$} \UIC{$a \precsim b$} \DisplayProof or $\D: a \sqsubseteq b$. If we want to suppress the actual derivation, we write $\vdash_{\mathbf{M}} a \precsim b$ or (in accordance with Definition~\ref{d:dy}) $a \sqsubseteq_{\mathbf{M}} b$. \end{definition} Note that $\mathbf{M}$ is not a Gentzen-style derivation system; in particular, we do not have left- and right introduction- and elimination rules for $\nabla$. Readers who are interested to see a detailed development of the \emph{proof theory} of nabla-style coalgebraic logic, are referred to B\'{\i}lkov\'a, Palmigiano \& Venema~\cite{bilk:proo08} (for the power set case). 
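The side conditions of the rules in Table~\ref{tb:nax} are concrete combinatorial conditions on finite objects. As an illustration (a hypothetical Python sketch, not part of the formal development; formulas are represented as plain strings and $\T = \funP$), the premisses of ($\nabla 2$) are indexed by the slim redistributions of $A$, which for the power set functor can be enumerated directly from the concrete description used in \eqref{eq:dl1}: $\bigcup A = \bigcup \Phi$ and $\alpha \cap \beta \neq \varnothing$ for all $\alpha \in A$ and $\beta \in \Phi$.

```python
from itertools import combinations

def nonempty_subsets(s):
    """All nonempty subsets of a finite set, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(s, r)]

def srd_pow(A):
    """Slim redistributions of a nonempty finite A for the power set
    functor: sets Phi of subsets of union(A) such that union(Phi) equals
    union(A) and every alpha in A meets every beta in Phi."""
    U = frozenset().union(*A)
    out = []
    for Phi in nonempty_subsets(nonempty_subsets(U)):
        if (frozenset().union(*Phi) == U
                and all(a & b for a in A for b in Phi)):
            out.append(Phi)
    return out
```

For $A = \{\{p\},\{q\}\}$ the only slim redistribution is $\{\{p,q\}\}$, matching the derivable inequality $\nabla\{p\} \land \nabla\{q\} \precsim \nabla\{p \land q\}$.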
\subsection{Soundness and completeness} \label{ss:main} We can now very concisely formulate the main result of this paper as the following soundness and completeness result: \begin{thm} \label{t:main} Let $\T$ be a standard set functor that preserves weak pullbacks. For all formulas $a,b \in \mathcal{L}$ we have \begin{equation} \label{eq:main} \vdash_{\mathbf{M}} a \precsim b \qquad \mbox{iff} \qquad a \models_{\T} b. \end{equation} \end{thm} In words, Theorem~\ref{t:main} states that for any two $\mathcal{L}$-formulas $a$ and $b$, the inequality $a\precsim b$ is derivable in our derivation system $\mathbf{M}$ iff it is valid in all $\T$-coalgebras. Our proof of this result will be based on many auxiliary results, which we will discuss in the next two sections. The final proof will be given at the end of section~\ref{s:completeness}. \subsection{The role of negation} \label{ss:neg} At this point, the reader may be surprised or even worried that we have formulated our derivation system for a Boolean-based coalgebraic modal logic, without mentioning the negation connective (or the implication, for that matter) in relation to the nabla modality at all. Surely there must be some validities involving both $\nabla$ and $\neg$? The point is that indeed there are such interaction principles, but we do not need to formulate them explicitly as axioms or derivation rules since they are already \emph{derivable} in the system $\mathbf{M}$. The intuition underlying this fact is that in a bounded distributive lattice, all existing complementations are completely determined by the lattice operations: the complement $\neg a$ of an element $a$, if existing, is the unique element $b$ such that $a \land b = \bot$ and $a \lor b = \top$. Nevertheless, the key principle relating $\nabla$ to $\neg$ will be needed in our proofs below, and so we discuss it in some detail. For a smooth formulation we need the following definition. 
\begin{definition} Given an element $\alpha \in \Tom\mathcal{L}$, let $Q(\alpha) \subseteq \Tom\mathcal{L}$ be the set defined by \[ Q(\alpha) := \Big\{ T (\mbox{$\bigwedge$} \circ \funP\neg) \Psi \mid \Psi \in \Tom\Pom \mathit{Base}(\alpha) \mbox{ and } (\alpha,\Psi) \not\in \rl{\T}{\not\in} \Big\}.\vspace{-18 pt} \] \end{definition} \noindent To unravel this definition, observe that $\funP\neg: \Pom\mathcal{L} \to \Pom\mathcal{L}$, and so we have $\bigwedge\circ\funP\neg: \Pom\mathcal{L} \to \mathcal{L}$. Thus we find that for $\Psi \in \Tom\Pom \mathit{Base}(\alpha) \subseteq \Tom\Pom\mathcal{L}$ we have $(\T(\bigwedge\circ\funP\neg))\Psi \in \Tom\mathcal{L}$ indeed. In case $\T$ preserves finite sets, $Q(\alpha)$ is a finite set, and we can express the principle relating $\nabla$ and $\neg$ as follows: \[ \tag{$\nabla 4_{f}$} \neg\nabla\alpha \approx \bigvee \Big\{ \nabla \beta \mid \beta \in Q(\alpha) \Big\}. \] In other words: the negation of a nabla is equivalent to a disjunction of nablas of conjunctions of negations of the base formulas. Putting it yet differently, in the case of $\T$ preserving finite sets, we can define the \emph{Boolean dual} $\Delta$ of $\nabla$, just in terms of $\nabla$ and $\bigvee$. For more information on this dual modality $\Delta$ the reader is referred to Kissig \& Venema~\cite{kiss:comp09}. In the general case, that is, if the functor $\T$ does not necessarily take finite sets to finite sets, we can express the interaction between $\nabla$ and $\neg$ in the form of a derivation rule, \[ \tag{$\nabla 4_{L}$} \AXC{$\{ \nabla \beta \precsim b \mid \beta \in Q(\alpha) \}$} \UIC{$\neg\nabla\alpha \precsim b$} \DisplayProof \] and a collection of axioms: \[ \tag{$\nabla 4_{R}$} \{ \nabla\beta \precsim \neg\nabla\alpha \mid \beta \in Q(\alpha) \}, \] corresponding to the directions $\precsim$ and $\succcurlyeq$ of ($\nabla 4_{f}$), respectively. 
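For the power set functor, the set $Q(\alpha)$ can be computed by brute force. The following small Python sketch (hypothetical helper names, formulas represented as strings) enumerates $Q(\alpha)$ directly from the definition, with the lifted non-membership relation spelled out in its familiar back-and-forth form:

```python
from itertools import combinations

def subsets(s):
    """All subsets of a finite set, as frozensets (including the empty one)."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def lifted_not_in(alpha, Psi):
    """Power-set lifting of non-membership: every a in alpha avoids
    some psi in Psi, and every psi in Psi is avoided by some a."""
    return (all(any(a not in psi for psi in Psi) for a in alpha)
            and all(any(a not in psi for a in alpha) for psi in Psi))

def neg_conj(psi):
    """The formula /\ { ~b | b in psi }, with the empty conjunction as 'T'."""
    return 'T' if not psi else ' & '.join('~' + b for b in sorted(psi))

def Q(alpha):
    """Q(alpha) for T = P: apply /\ o P(~) inside each Psi over P(Base(alpha))
    that is NOT related to alpha by the lifted non-membership relation."""
    return [frozenset(neg_conj(psi) for psi in Psi)
            for Psi in subsets(subsets(alpha))
            if not lifted_not_in(alpha, Psi)]
```

For $\alpha = \{p\}$ this yields $\varnothing$, $\{\neg p\}$ and $\{\top, \neg p\}$, so that ($\nabla 4_{f}$) instantiates to $\neg\nabla\{p\} \approx \nabla\varnothing \lor \nabla\{\neg p\} \lor \nabla\{\top,\neg p\}$: either the current state has no successors at all, or it has a successor where $p$ fails.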
The point to make is that \emph{both} ($\nabla 4_{L}$) and ($\nabla 4_{R}$) are \emph{derivable} in $\mathbf{M}$. We will prove this in detail for ($\nabla 4_{L}$). Given our completeness result, the derivability of $(\nabla 4_{R})$ is an immediate consequence of its validity~\cite{kiss:comp09}. The actual derivation of $\nabla\beta \precsim \neg\nabla\alpha$ for $\beta \in Q(\alpha)$ is rather involved, so we refrain from giving the details here. In any case, the key instruments in the derivability of both ($\nabla 4_{L}$) and ($\nabla 4_{R}$) are the following two rules. \begin{prop} \label{p:negder} For any finite set $\phi$ of formulas, the following rules are $\mathbf{M}$-derivable: \noindent \begin{tabular}{ll} \\ $(\nabla 4a)$ & \AXC{$\top \precsim \bigvee\phi$} \AXC{$\Big\{\nabla\alpha \precsim b \mid \alpha \in \T\phi \Big\}$} \BIC{$\top \precsim b$} \DisplayProof \\ \\ $(\nabla 4b)$ & \AXC{$\Big\{ a \land a' \precsim \bot \mid a \neq a' \in \phi \Big\}$} \RL{\hspace{1mm} $\alpha\neq\alpha' \in \T\phi$} \UIC{$\nabla\alpha \land \nabla\alpha' \precsim \bot$} \DisplayProof \end{tabular} \end{prop} \begin{proof} In the proof below, the following principle will be used a few times: \begin{equation} \label{eq:pr1} \mbox{Given $f: S \to S'$, for $s \in S$, $\T f$ restricts to a bijection $\T f: \T\{s\} \to \T\{f(s)\}$} \end{equation} We first show the derivability of ($\nabla 4a$). Assume that we have a derivation $\D_{\top}$ of $\top \precsim \bigvee\phi$, and a derivation $\D_{\alpha}$ of $\nabla\alpha \precsim b$, for each $\alpha\in\T\phi$. Consider an arbitrary element $\Phi \in \T\{\phi\}$. By Proposition~\ref{p:nbsem}(\ref{item:memberofdistri}), each lifted member $\alpha$ of $\Phi$ belongs to $\T\phi$. 
If we apply ($\nabla 3$) to the set $\{ \D_{\alpha} \mid \alpha \rel{\Tb{\in}} \Phi \}$, we obtain a derivation \[ \D_{\Phi}: \AXC{$\{ \D_{\alpha}: \nabla\alpha \precsim b \mid \alpha \rel{\Tb{\in}} \Phi \}$} \UIC{$\nabla(\T\mbox{$\bigvee$})(\Phi) \precsim b$} \DisplayProof \] for each $\Phi\in\T(\{\phi\})$. Applying our principle \eqref{eq:pr1} to the map $\bigvee: \Pom\mathcal{L} \to \mathcal{L}$, we find that each $\beta\in \T(\{ \bigvee\phi \})$ is of the form $\beta = (\T\bigvee)(\Phi_{\beta})$ for some $\Phi_{\beta} \in \T(\{\phi\})$. Thus in fact for each such $\beta$ we have a derivation \[ \D_{\beta}: \nabla\beta \precsim b \] On the other hand, we may continue the derivation $\D_{\top}$ as follows. Consider the bijection $f: \{ \top \} \to \{ \bigvee\phi \}$, which induces a bijection $\T f: \T\{ \top \} \to \T\{ \bigvee\phi \}$. Clearly we find that $f \subseteq {\sqsubseteq}$, so that $\T f \subseteq \rl{\T}{\sqsubseteq}$. From this it follows that we may apply the rule ($\nabla 1$) to the inequality $\top \precsim \bigvee\phi$ and obtain, for each $\gamma\in\T\{\top\}$, the derivation \[ \AXC{$\D_{\top}$} \UIC{$\top \precsim \bigvee\phi$} \LL{$\nabla 1$} \UIC{$\nabla\gamma\precsim\nabla (\T f)\gamma$} \DisplayProof \] Combining the observations so far, we obtain the following derivation $\D_{\gamma}$ for each $\gamma \in \T\{\top\}$: \[ \D_{\gamma}:\hspace{10mm} \AXC{$\D_{\top}$} \UIC{$\top \precsim \bigvee\phi$} \LL{$\nabla 1$}\UIC{$\nabla\gamma\precsim\nabla (\T f)\gamma$} \AXC{$\D_{(\T f)\gamma}$} \UIC{$\nabla (\T f)\gamma \precsim b$} \LL{cut} \BIC{$\nabla\gamma \precsim b$} \DisplayProof \] Since $(\T\bigwedge)(\Psi) \in \T\{\top\}$ for each $\Psi \in \T \{ \varnothing \}$, this means that above we have obtained a derivation \[ \D_{\Psi}: \nabla(\T\mbox{$\bigwedge$})(\Psi) \precsim b \] for each $\Psi\in \T\{\varnothing\}$. Finally, consider the instantiation of ($\nabla 2$) with $A = \varnothing$.
By Proposition~\ref{item:redistriofempty} we have $\mathit{SRD}(\varnothing) = \T\{\varnothing\}$, so that the set $\left\{ \nabla(\T\bigwedge)(\Psi) \precsim b \mid \Psi \in \T\{\varnothing\} \right\}$ is exactly the set of premisses of this instantiation of ($\nabla 2$). Hence we may simply take the set of all derivations $\D_{\Psi}$, with $\Psi \in \T\{\varnothing\}$, and continue as follows: \[ \AXC{$\Big\{ \D_{\Psi} \mid \Psi \in \T\{\varnothing\} \Big\}$} \LL{$\nb2$} \UIC{$\top \precsim b$} \DisplayProof \] This finishes the proof of the derivability of ($\nabla 4a$). \medskip In the case of ($\nabla 4b$) we will proceed a bit faster, leaving the details as to why our argumentation yields derivability rather than admissibility, as an exercise for the reader. Let $\phi$ be a finite set of formulas such that $a \land a' \equiv \bot$ for all distinct $a,a' \in \phi$, and let $\alpha$ and $\alpha'$ be two distinct elements of $\T\phi$. We will derive the inequality $\nabla\alpha \land \nabla\alpha' \precsim \bot$. By ($\nabla 2$) it suffices to show that \[ \vdash_{\nax} \nabla (\T\mbox{$\bigwedge$})(\Phi) \precsim \bot, \] where $\Phi$ is an arbitrary slim redistribution of the set $\{ \alpha, \alpha'\}$. But if $\Phi \in \mathit{SRD}(\{\alpha,\alpha'\})$, and both $\alpha$ and $\alpha'$ belong to $\T\phi$, then first of all we have $\mathit{Base}(\Phi) \subseteq \funP \phi$, because $\Phi \in \Tom \Pom (\mathit{Base}(\alpha) \cup \mathit{Base}(\alpha'))$ by the definition of a slim redistribution and thus $\mathit{Base}(\Phi) \subseteq \funP (\mathit{Base}(\alpha) \cup \mathit{Base}(\alpha')) \subseteq \funP \phi$. In addition, it follows by Proposition~\ref{p:nbsem}(1) that $\varnothing \not\in\mathit{Base}(\Phi)$, and then by Proposition~\ref{p:nbsem}(\ref{item:singletonredistri}) that $\mathit{Base}(\Phi)$ contains some set $\psi \subseteq \phi$ with $\size{\psi} > 1$.
Define the following function $d: \mathit{Base}(\Phi) \to \funP(\phi) \cup \Big\{\{ \top \} \Big\}$: \[ d(\chi) \mathrel{:=} \left\{\begin{array}{lcl} \emptyset & \mbox{if} & \size{\chi} > 1 \\ \chi & \mbox{if} & \size{\chi} = 1 \\ \{ \top \} & \mbox{if} & \size{\chi} = 0 \end{array} \right. \] On the basis of our set of premisses $\{ a \land a' \precsim \bot \mid a \neq a' \in \phi \}$, for each $\chi \in \mathit{Base}(\Phi) \subseteq \funP\phi$ we can find a derivation for the inequality $\bigwedge\chi\precsim \bigvee d(\chi)$. Putting these derivations together, and applying ($\nabla 1$) with $Z= \{ (\bigwedge \chi, \bigvee d(\chi)) \mid \chi \in \mathit{Base}(\Phi) \}$, we obtain a derivation $\D_{\Phi}$ for the inequality $\nabla (\T \bigwedge)(\Phi) \precsim \nabla (\T \bigvee)(\T d (\Phi))$. We also claim that we can derive the inequality $\nabla (\T\bigvee)(\T d (\Phi)) \precsim \bot$. Since $\mathit{Base}: \Tom \to \Pom$ is a natural transformation, we have that $\mathit{Base}(\T d(\Phi)) = (\funP d) (\mathit{Base}(\Phi)) = d[\mathit{Base}(\Phi)]$. Now recall that above we found a $\psi \in \mathit{Base}(\Phi)$ with $\size{\psi} > 1$; it follows that $\varnothing = d(\psi) \in \mathit{Base}(\T d(\Phi))$, so that on the basis of Proposition~\ref{p:nbsem}(1) we may conclude that $\T d(\Phi)$ has \emph{no} lifted members. But then one single application of ($\nabla 3$), with the \emph{empty} set of premisses, provides the desired derivation for $\nabla (\T\bigvee)(\T d (\Phi)) \precsim \bot$. Finally then, an application of the cut rule gives $\nabla (\T\bigwedge)(\Phi) \precsim \bot$, as required. \end{proof} As a corollary to this we can now prove the derivability of $(\nabla 4_{L})$. \begin{prop} The rule $(\nabla 4_{L})$ is derivable in $\mathbf{M}$. \end{prop} \begin{proof} Let $\alpha \in \Tom\mathcal{L}$ and $b \in \mathcal{L}$ be arbitrary, and assume that for all $\beta \in Q(\alpha)$ we have $\nabla\beta \sqsubseteq b$.
We will show that $\neg\nabla\alpha \sqsubseteq b$. Consider the map $t: \Pom\mathit{Base}(\alpha) \to \mathcal{L}$ given by \[ t:\psi \mapsto \bigwedge\{ a \in \mathit{Base}(\alpha) \mid a \not\in\psi \} \land \bigwedge \{\neg c \mid c \in \psi \}. \] Then for all $\psi \subseteq \mathit{Base}(\alpha)$ it is straightforward to verify that (i) $t(\psi) \sqsubseteq (\bigwedge\circ\funP\neg)\psi$, and (ii) if $a \in \mathit{Base}(\alpha)$ and $a \not\in \psi$ then $t(\psi) \sqsubseteq a$. Define $\phi$ to be the \emph{range} of $t$. Intuitively, think of $\phi$ as the set of atoms of a Boolean algebra; then it is not hard to see that \begin{equation} \label{eq:nb4a1} \top \sqsubseteq \bigvee\phi. \end{equation} We claim that \begin{equation} \label{eq:nb4a2} \mbox{ for all } \gamma\in\T\phi: \nabla\gamma \sqsubseteq b \lor \nabla\alpha. \end{equation} For the proof of \eqref{eq:nb4a2}, take an arbitrary $\gamma \in \T\phi$. By definition of $\phi$, the map $\T t$ is surjective when seen as $\T t: \Tom\Pom\mathit{Base}(\alpha) \to \Tom\phi$, and so we may fix an element $\Psi \in \Tom\Pom\mathit{Base}(\alpha)$ such that $\gamma = (\T t)\Psi$. Now distinguish cases. First assume that $(\alpha,\Psi) \not\in \rl{\T}{\not\in}$. It follows from (i) that $\gamma = (\T t)\Psi \,\rel{\rl{\T}{\sqsubseteq}}\, (\T(\bigwedge\circ\funP\neg))\Psi$, and so an application of ($\nabla 1$) shows that $\nabla\gamma \sqsubseteq \nabla (\T(\bigwedge\circ\funP\neg))\Psi$. Now by assumption we have $(\T(\bigwedge\circ\funP\neg))\Psi \in Q(\alpha)$, and so there is a derivation of the inequality $\nabla(\T(\bigwedge\circ\funP\neg))\Psi \precsim b$. Then an application of the cut rule shows that $\nabla\gamma \sqsubseteq b$. If, on the other hand, the pair $(\alpha,\Psi)$ \emph{does} belong to the relation $\rl{\T}{\not\in}$, then by (ii) we obtain that $\gamma = (\T t)\Psi \,\rel{\rl{\T}{\sqsubseteq}}\, \alpha$. Now an application of ($\nabla 1$) yields a derivation for $\nabla\gamma \precsim \nabla\alpha$.
In either case, a simple propositional continuation of the derivation shows that $\nabla\gamma \sqsubseteq b \lor \nabla\alpha$, which proves \eqref{eq:nb4a2}. Finally, applying the derived rule ($\nabla 4a$) to the premisses given by \eqref{eq:nb4a1} and \eqref{eq:nb4a2}, we obtain a derivation of the inequality $\top \precsim b \lor \nabla\alpha$. But from this it follows by some straightforward classical propositional manipulations that $\neg\nabla\alpha \sqsubseteq b$, as required. \end{proof} \section{Introduction} \label{s:introduction} Coalgebra, introduced to computer science by Aczel in the late 1980s~\cite{acze:nonw88,acze:fina89}, is rapidly gaining ground as a general mathematical framework for many kinds of state-based evolving systems. Examples of coalgebras include data streams, (infinite) labelled trees, Kripke structures, finite automata, (probabilistic/weighted) transition systems, neighborhood models, and many other familiar structures. As emphasized by Rutten~\cite{rutt:univ00}, who developed, in analogy with Universal Algebra, the theory of Universal Coalgebra as a general theory of such transition systems, the coalgebraic viewpoint combines wide applicability with mathematical simplicity. In particular, one of the main advantages of the coalgebraic approach is that a substantial part of the theory of systems can be developed \emph{uniformly} in a functor $\T$ which represents the \emph{type} of the coalgebras we are dealing with. Here we restrict attention to \emph{systems}, where $\T$ is an endofunctor on the category $\mathsf{Set}$ of sets with functions, so that a $\T$-coalgebra is a pair of the form \[ \mathbb{X} = \struc{X,\xi: X \to \T X} \] with the set $X$ being the carrier or state space of the coalgebra, and the map $\xi$ its unfolding or transition map. Many important notions, properties, and results of systems can be explained just in terms of properties of their type functors.
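As a toy illustration of this format (a hypothetical Python sketch, not taken from the text): for the stream functor $\T X = A \times X$, a coalgebra is simply a function that returns an observation together with a next state, and iterating the transition map unfolds the behaviour of a state.

```python
def squares(n):
    """A coalgebra xi : X -> T X for T X = int x X, on carrier the natural
    numbers: observe n*n, then move on to the next state n+1."""
    return (n * n, n + 1)

def observe(xi, x, k):
    """Unfold a coalgebra k steps from state x, collecting the outputs."""
    out = []
    for _ in range(k):
        o, x = xi(x)
        out.append(o)
    return out
```

State $0$ unfolds to the stream $0, 1, 4, 9, \ldots$; two states are behaviourally equivalent precisely if they unfold to the same stream.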
As a key example, any set functor $\T$ canonically induces a notion of observational or \emph{behavioural equivalence} between $\T$-coalgebras; this notion generalizes the natural notions of bisimilarity that were independently developed for each specific type of system. In order to describe and reason about the kind of behaviour modelled by coalgebras, there is a clear need for the design of coalgebraic specification languages and derivation systems, respectively. The resulting research programme of \emph{Coalgebraic Logic} naturally supplements that of Coalgebra by searching for logical formalisms that, next to meeting the usual desiderata such as striking a good balance between expressive power and computational feasibility, can be defined and studied uniformly in the functor $\T$. Given the fact that Kripke models and frames are prime examples of coalgebras, it should come as no surprise that in their search for suitable coalgebraic logics, researchers looked to \emph{modal logic}~\cite{blac:moda01} for inspiration. This research direction was initiated by Moss~\cite{moss:coal99}; roughly speaking, his idea was to take the functor $\T$ \emph{itself} as supplying a modality $\nabla_{\T}$, in the sense that for every element $\alpha \in \T\mathcal{L}$ (where $\mathcal{L}$ is the collection of formulas), the object $\nabla_{\T}\alpha$ is a formula in $\mathcal{L}$. While Moss' work was recognized to be of seminal conceptual importance in advocating modal logic as a specification language for coalgebra, his particular formalism did not find much acclaim, for at least two reasons. First of all, the semantics of his modality is defined in terms of relation lifting, and for this to work smoothly, Moss needed to impose a restriction on the functor (the coalgebra type functor $\T$ is required to preserve weak pullbacks). Thus the scope of his work excluded some interesting and important coalgebras such as neighborhood models and frames.
And second, for practical purposes, the syntax of Moss' language was considered to be rather unwieldy, with the nonstandard operator $\nabla_{\T}$ looking strikingly different from the usual $\Box$ and $\Diamond$ modalities. Following on from Moss' work, attention turned to the question of how to obtain modal languages for $\T$-coalgebras which use more standard modalities \cite{kurz:cmcs98-j,roes:coal00,jaco:many01}, and how to find derivation systems for these formalisms. This approach is now usually described in terms of predicate liftings~\cite{patt:coal03,schr:expr05} or, equivalently, Stone duality~\cite{bons:dual05,kurz:coal06}. Other approaches towards coalgebraic logic, such as the one using co-equations~\cite{adam:logi05}, have until now received somewhat less attention. For a while, this development directed interest away from Moss' logic, and the relationship between various approaches towards coalgebraic logic was not completely clear. In the meantime, however, it had become obvious that even in standard modal logic, a nabla-based approach has some advantages. In this setting the coalgebra type $\T$ is instantiated by the power set functor $\funP$, so that (the finitary version of) the nabla operator $\nabla_{\!\funP}$ takes a (finite) \emph{set} $\alpha$ of formulas and returns a single formula $\nabla_{\!\funP}\alpha$. The semantics of this so-called \emph{cover modality} can be explicitly formulated as follows, for an arbitrary Kripke structure $\mathbb{X}$ with accessibility relation $R$: \begin{equation} \label{eq:1a} \begin{array}{lll} \mathbb{X},x \Vdash \nabla_{\!\funP}\alpha & \mbox{ if } & \mbox{ for all $a \in \alpha$ there is a $t \in R[x]$ with $\mathbb{X},t \Vdash a$, and} \\&& \mbox{ for all $t \in R[x]$ there is an $a \in \alpha$ with $\mathbb{X},t \Vdash a$}. \end{array} \end{equation} In short: $\nabla_{\!\funP}\alpha$ holds at a state $x$ iff the formulas in $\alpha$ and the set $R[x]$ of successors of $x$ `cover' one another.
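For a finite Kripke structure, clause \eqref{eq:1a} can be evaluated directly. The following Python sketch is our own illustration, not code from the paper; the names \texttt{egli\_milner} and \texttt{nabla\_holds} and the encoding of the satisfaction relation as a set of pairs are assumptions made for the example.

```python
# Evaluating the cover modality, clause (1a), on a finite Kripke frame.
# All names and the data encoding here are illustrative assumptions.

def egli_milner(sat, U, V):
    """(U, V) is in the lifted relation iff every u in U is sat-related to
    some v in V, and every v in V is sat-related to some u in U."""
    return (all(any((u, v) in sat for v in V) for u in U) and
            all(any((u, v) in sat for u in U) for v in V))

def nabla_holds(succ, sat, x, alpha):
    """x satisfies nabla(alpha) iff R[x] and alpha cover one another."""
    return egli_milner(sat, succ[x], alpha)

# Frame: x -> {y, z}; the relation sat records which formulas hold where.
succ = {'x': {'y', 'z'}, 'y': set(), 'z': set()}
sat = {('y', 'p'), ('z', 'q')}
print(nabla_holds(succ, sat, 'x', {'p', 'q'}))  # True: successors and {p, q} cover
print(nabla_holds(succ, sat, 'x', {'p'}))       # False: z satisfies nothing in {p}
```

Note that the same function also witnesses the boundary cases: $\nabla\varnothing$ holds exactly at deadlocked states, since both `cover' conditions are then vacuous.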
Readers familiar with classical first-order logic will recognize the quantification pattern underlying (\ref{eq:1a}) from the theory of Ehrenfeucht-Fra\"{\i}ss\'e games, Scott sentences, and the like, see for instance~\cite{hodg:mode93}. In modal logic, related ideas made an early appearance in Fine's work on normal forms~\cite{fine:norm75}. Using the standard modal language, $\nabla_{\!\funP}$ can be seen as a defined operator: \begin{equation} \label{eq:1} \nabla_{\!\funP}\alpha = \Box\mbox{$\bigvee$}\alpha \land \mbox{$\bigwedge$} \Diamond \alpha, \end{equation} where $\Diamond\alpha$ denotes the set $\{ \Diamond a \mid a \in \alpha \}$. But it is in fact an easy exercise to prove that with $\nabla_{\!\funP}$ defined by (\ref{eq:1a}), we have the following semantic equivalences: \begin{equation} \label{eq:2} \begin{array}{lll} \Diamond a &\equiv& \nabla_{\!\funP} \{ a, \top \} \\ \Box a &\equiv& \nabla_{\!\funP}\varnothing \lor \nabla_{\!\funP} \{ a \} \end{array} \end{equation} In other words, the standard modalities $\Box$ and $\Diamond$ can be defined in terms of the nabla operator (together with $\lor$ and $\top$). When combined, (\ref{eq:1}) and (\ref{eq:2}) show that the language based on the nabla operator offers an alternative formulation of standard modal logic. In fact, independently of Moss' work, Janin \& Walukiewicz~\cite{jani:auto95} had already made the much stronger observation that the set of connectives $\{ \Box,\Diamond,\land,\lor \}$ may in some sense be replaced by the connectives $\nabla_{\!\funP}$ and $\lor$, that is, without the conjunction operation. This fact, which is closely linked to fundamental automata-theoretic constructions, lies at the heart of the theory of the modal $\mu$-calculus, and has many applications, see for instance~\cite{dago:logi00,sant:comp10}.
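The equivalences in (\ref{eq:2}) can be confirmed by brute force over all Kripke models on a small state set. The following Python sketch is our own illustration (the tuple encoding of formulas is an assumption made for the example); it enumerates every accessibility relation and every valuation of a single proposition letter on two states:

```python
from itertools import product

# Formulas: 'p', 'T' (truth), ('dia', a), ('box', a), ('or', a, b),
# ('nab', frozenset_of_formulas) for the cover modality of clause (1a).

def holds(R, val, x, f):
    if f == 'T':
        return True
    if f == 'p':
        return x in val
    succs = {y for (u, y) in R if u == x}
    if f[0] == 'dia':
        return any(holds(R, val, y, f[1]) for y in succs)
    if f[0] == 'box':
        return all(holds(R, val, y, f[1]) for y in succs)
    if f[0] == 'or':
        return holds(R, val, x, f[1]) or holds(R, val, x, f[2])
    if f[0] == 'nab':  # both 'cover' conditions of clause (1a)
        return (all(any(holds(R, val, y, a) for a in f[1]) for y in succs) and
                all(any(holds(R, val, y, a) for y in succs) for a in f[1]))

dia_p = ('dia', 'p')
dia_nab = ('nab', frozenset({'p', 'T'}))                       # nabla{p, T}
box_p = ('box', 'p')
box_nab = ('or', ('nab', frozenset()), ('nab', frozenset({'p'})))

states = (0, 1)
edges = list(product(states, repeat=2))
for bits in product((0, 1), repeat=len(edges)):      # every relation R
    R = {e for e, b in zip(edges, bits) if b}
    for val in (set(), {0}, {1}, {0, 1}):            # every valuation of p
        for x in states:
            assert holds(R, val, x, dia_p) == holds(R, val, x, dia_nab)
            assert holds(R, val, x, box_p) == holds(R, val, x, box_nab)
print("equivalences (2) verified on all 2-state models")
```

Of course this check on two states is no substitute for the easy semantic proof, but it makes the role of the deadlock formula $\nabla\varnothing$ in the translation of $\Box$ quite tangible.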
These observations naturally led Venema~\cite{vene:auto06} to introduce, parametric in the coalgebraic type functor $\T$, a finitary version of Moss' logic, extended with fixpoint operators, and to generalize the link between fixpoint logics and automata theory to the coalgebraic level of generality. Subsequently, Kupke \& Venema~\cite{kuve08:coal} showed that many fundamental results in automata theory and fixpoint logics are really theorems of universal coalgebra. The key role of the nabla modality in these results revived interest in Moss' logic. Our paper addresses the main problem left open in the literature on $\nabla$-based coalgebraic logic, namely that of providing a \emph{sound and complete derivation system} for the logic. Moss' approach is entirely semantic, and does not provide any kind of syntactic calculus. As a first result in the direction of a derivation system for nabla modalities, Palmigiano \& Venema~\cite{palm:nabl07} gave a complete axiomatization for the cover modality $\nabla_{\!\funP}$. This calculus was streamlined into a formulation that admits a straightforward generalization to an arbitrary set functor $\T$, by B{\'\i}lkov\'a, Palmigiano \& Venema~\cite{bilk:proo08}, who also provided suitable Gentzen systems for the logic based on $\nabla_{\!\funP}$. In this paper we will prove the soundness and completeness of this axiomatization in the general case. \medskip In the remaining part of the introduction we briefly survey the paper, its main contributions, and its proof method. Throughout the paper we let $\T$ denote the coalgebraic type functor; usually we make the proviso that $\T$ preserves weak pullbacks and inclusions (all of this will be discussed further on in detail). Our key instrument in making Moss' language more standard is to base its syntax on the \emph{finitary} version $\Tom$ of the functor $\T$ which is defined on objects as follows: for a set $X$, $\Tom X \mathrel{:=} \bigcup\{ \T Y \mid Y \subseteq_{\omega} X \}$. 
As we will discuss in detail, for each object $\alpha \in \Tom X$ there is a \emph{minimal} finite set $\mathit{Base}_{X}(\alpha) \subseteq_{\omega} X$ such that $\alpha \in \T \mathit{Base}(\alpha)$, and the maps $\mathit{Base}_{X}$ provide a natural transformation \[ \mathit{Base}: \Tom \mathrel{\dot{\rightarrow}} \Pom. \] The formulas of our coalgebraic language $\mathcal{L}$ can now be defined by the following grammar: \[ a \mathrel{\;::=\;} \neg a \mathrel{\mid} \mbox{$\bigwedge$}\phi \mathrel{\mid} \mbox{$\bigvee$}\phi \mathrel{\mid} \nabla_{\T} \alpha \] where $\phi \in \Pom \mathcal{L}$ and $\alpha \in \Tom\mathcal{L}$. That is, the propositional basis of our coalgebraic language $\mathcal{L}$ takes the finitary conjunction ($\bigwedge$) and disjunction ($\bigvee$) connectives as primitives, and to this we add the coalgebraic modality $\nabla_{\T}$, which returns a formula $\nabla_{\T}\alpha$ for every object $\alpha \in \Tom\mathcal{L}$. The point of restricting Moss' modality to the set $\Tom\mathcal{L}$ is that the formula $\nabla_{\T}\alpha$ has a finite, clearly defined set of immediate subformulas, namely the set $\mathit{Base}(\alpha)$; thus every formula has a finite set of subformulas. The key observation of Moss~\cite{moss:coal99} was that the \emph{semantics} \eqref{eq:1a} of $\nabla$ can be expressed in terms of the so-called Egli-Milner \emph{lifting} of the satisfaction relation ${\Vdash} \subseteq X \times \mathcal{L}$. Generalizing this observation from the Kripke functor $\funP$ to the arbitrary type $\T$, he uniformly defined the semantics of $\nabla_{\T}$ in a $\T$-coalgebra $\mathbb{X} = \struc{X,\xi}$ as follows: \[ \mathbb{X},x \Vdash \nabla_{\T}\alpha \mbox{ iff } \xi(x) \rel{\rl{\T}{\Vdash}}\alpha.
\] Here $\rl{\T}{\Vdash}$ denotes a categorically defined \emph{lifting} of the satisfaction relation ${\Vdash} \subseteq X \times \mathcal{L}$ between states and formulas to a relation $\rl{\T}{\Vdash} \subseteq \T X \times \T\mathcal{L}$. Given the importance of the \emph{relation lifting} operation $\rl{\T}$ in Moss' logic, we include in this paper a fairly detailed survey of its properties and related concepts. The coalgebraic \emph{validities}, that is, the formulas that are true at every state of every $\T$-coalgebra, thus constitute a semantically defined coalgebraic \emph{logic}, and it is this logic that we will axiomatize in this paper. Our approach will be \emph{algebraic} in nature, and so it will be convenient to work with equations, or rather, inequalities (expressions of the form $a \precsim b$, where $a$ and $b$ are terms/formulas of the language). We obtain our derivation system for Moss' logic by extending a sound and complete derivation system for propositional logic with three rules for the $\nabla$-operator. The first rule, denoted by $(\nabla 1)$, can be seen as a combined monotonicity and congruence rule. Rule $(\nabla 2)$ is a distributive law that expresses that any conjunction of $\nabla$-formulas is equivalent to a (possibly infinite) disjunction of $\nabla$-formulas built from conjunctions. Finally, rule $(\nabla 3)$ expresses that $\nabla$ distributes over disjunctions. In the case that the functor $\T$ under consideration maps finite sets to finite sets, the rules $(\nabla 2)$ and $(\nabla 3)$ take the form of axioms. The proof of our soundness and completeness theorem is based on the \emph{stratification method} of Pattinson~\cite{patt:coal03}. We will show that not only the \emph{language} of our system, but also its \emph{semantics} and our \emph{derivation system} can be stratified in $\omega$ many layers corresponding to the modal depth of the formulas involved.
(This means for instance that if two formulas of depth $n$ are provably equivalent, this can be demonstrated by a derivation involving only formulas of depth at most $n$.) What glues these layers nicely together can be formulated in terms of properties of a one-step version of the derivation system $\mathbf{M}$. In our algebraic approach, this one-step version of $\mathbf{M}$ is incarnated as a \emph{functor} on the category of Boolean algebras: \[ \mathbb{M}: \mathsf{BA} \to \mathsf{BA}. \] To mention a few interesting properties of this functor, whose definition is uniformly parametrized by the functor $\T$: $\mathbb{M}$ is finitary, preserves atomicity of Boolean algebras, and preserves injectivity of homomorphisms. We will be interested in algebras for the functor $\mathbb{M}$, and in particular, we will see that the \emph{initial} $\mathbb{M}$-algebra can be seen as the Lindenbaum-Tarski algebra of our derivation system $\mathbf{M}$. For the definition of $\mathbb{M}$, we need to go into quite a bit of detail concerning the theory of \emph{presentations} of (Boolean) algebras. In particular, we define a \emph{category} $\mathsf{Pres}$ of presentations by introducing a suitable notion of presentation morphism, and establish an adjunction between the categories $\mathsf{Pres}$ and $\mathsf{BA}$: \begin{equation} \xymatrix{ \mathsf{BA} \ar@/_/[r]_{C} \ar@{}[r] |{\bot} & \mathsf{Pres} \ar@/_/[l]_{B} } \end{equation} This adjunction (which is almost an equivalence) is the instrument that allows us to turn the modal rule and axioms of $\mathbf{M}$ into the functor $\mathbb{M}$; the key property that makes this work is that all modal rules and axioms of $\mathbf{M}$ are formulated in terms of \emph{depth-one} formulas.
What is left to do, in order to prove the soundness and completeness of our logic, is to connect the algebra functor $\mathbb{M}: \mathsf{BA} \to \mathsf{BA}$ (that is, the `logic') to the coalgebra functor $\T: \mathsf{Set} \to \mathsf{Set}$ (the `semantics'). Here we will apply a well-known method in coalgebraic logic~\cite{bons:dual05,kurz:coal06} which is often described in terms of \emph{Stone duality} because its aim is to link functors on two different base categories that are themselves connected by a Stone-type duality or adjunction. In our case, to make the connection between $\mathbb{M}$ and $\T$ we invoke the already existing link on the level of the base logic, provided by the (contravariant) power set functor $\funaQ$ from $\mathsf{Set}$ to $\mathsf{BA}$ (we do not need its adjoint functor sending a Boolean algebra to its set of ultrafilters): \begin{equation} \label{diag:duality} \xymatrix{ \mathsf{BA} \ar@(dl,ul)[]^{\mathbb{M}} & \mathsf{Set} \ar@/_/[l]_{\funaQ} \ar@(dr,ur)[]_{\T} } \end{equation} The key remaining step in the completeness proof involves the definition of a natural transformation \begin{equation*} \label{eq:delta1} \delta: \mathbb{M}\funaQ \mathrel{\dot{\rightarrow}} \funaQ\T. \end{equation*} As usual in the Stone duality approach towards coalgebraic logic, the \emph{existence} of $\delta$ corresponds to the \emph{soundness} of the logic. To get an idea of why this is the case, observe that the existence of $\delta$ enables us to see a $\T$-coalgebra $\mathbb{X} = \struc{X,\xi}$ as an $\mathbb{M}$-algebra, namely its \emph{complex algebra} $\mathbb{X}^{*} \mathrel{:=} \struc{\funaQ X, \funaQ\xi\cof\delta_{X}}$.
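For orientation, it may help to see what $\delta$ amounts to in the familiar Kripke case; the following instance is our own reading of the cover semantics \eqref{eq:1a}, not a definition given at this point in the paper. For $\T = \funP$, the component $\delta_{X}$ sends a generator $\nabla\phi$, with $\phi$ a finite set of predicates on $X$, to the predicate on $\funP X$ consisting of all successor sets that $\phi$ covers: \[ \delta_{X}(\nabla\phi) = \{ U \in \funP X \mid \mbox{every $u \in U$ belongs to some $V \in \phi$, and every $V \in \phi$ meets $U$} \}. \] Reading $U$ as the set $R[x]$ of successors of a state $x$, this is precisely the truth condition \eqref{eq:1a}.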
Finally, as we will see in the final part of our stratification-based proof, the \emph{completeness} of $\mathbf{M}$ is based on the observation that \begin{equation} \label{eq:delta2} \delta \text{ is injective}, \end{equation} that is, for each set $X$, the $\mathsf{BA}$-homomorphism $\delta_{X}: \mathbb{M}\funaQ X \to \funaQ\T X$ is an \emph{embedding}. The proof of \eqref{eq:delta2}, which technically forms the heart of our proof, is based on the fact that the nabla-axioms allow us to bring depth-one formulas into a certain normal form, and on the earlier mentioned properties of the functor $\mathbb{M}$. \medskip This paper replaces, extends and partly corrects (c.q.\ clarifies, see Remark~\ref{r:aiml}) an earlier version~\cite{kupk:comp08}. The main differences with respect to~\cite{kupk:comp08} are the following. First of all, we provide a detailed, self-contained overview of the notion of relation lifting and its properties (which was only covered as Fact~3 in the mentioned paper). Second, our categorical treatment of presentations and the algebras they present (which is novel to the best of our knowledge) clarifies and substantially extends the treatment in~\cite{kupk:comp08}. Third, our axiomatization simplifies the earlier one; in particular, we show here in detail that we do not need axioms or rules specifically dealing with negation (more specifically, we prove that an earlier rule ($\nb4$) is derivable in the system presented here). Fourth, we provide a more precise definition and a more detailed discussion of the functor $\mathbb{M}$; for instance, the result that $\mathbb{M}$ preserves atomicity is new. Fifth and finally, we show here in much more detail and precision how the soundness and completeness of our axiomatization follows from the one-step soundness and completeness.
\paragraph{Overview} In the next section we fix our notation, introduce the necessary basic (co-)alge\-braic terminology and discuss properties of functors on the category of sets that will play an important role in our paper. After that, in Section~\ref{s:relationlifting}, we recall the notion of a relation lifting $\rl{\T}$ induced by a set functor $\T$ and give an overview of its properties. Section~\ref{s:boolean} and Section~\ref{s:moss} introduce the terminology that we need concerning Boolean algebras and their presentations, and concerning Moss' coalgebraic logic, respectively. After that we move to the main results of our paper. First, in Section~\ref{s:derivation} we introduce the derivation system for Moss' coalgebraic logic and we define the algebra functor $\mathbb{M}:\mathsf{BA} \to \mathsf{BA}$. In Section~\ref{s:onestep} we prove that our derivation system is one-step sound and complete. Within the above described categorical framework this is equivalent to establishing the existence of a natural transformation $\delta: \mathbb{M} \funaQ \mathrel{\dot{\rightarrow}} \funaQ \T$ (one-step soundness) and proving that this transformation $\delta$ is injective (one-step completeness). Finally, in Section~\ref{s:completeness} we prove our main result, namely soundness and completeness of our derivation system with respect to the coalgebraic semantics. We conclude with an overview of related work and open questions. Finally, since this paper features a multitude of categories, functors and natural transformations, for the reader's convenience we list these in an appendix. \paragraph{\bf Acknowledgement} We thank the anonymous referee for many useful comments. \section{Moss' coalgebraic logic} \label{s:moss} In this section we will recall the definitions of Moss' coalgebraic logic and its semantics~\cite{moss:coal99}, or rather, the finitary version thereof developed by Venema~\cite{vene:auto06}. 
\subsection{Syntax} As mentioned in the introduction, the key idea underlying the syntax of Moss' language for reasoning about $\T$-coalgebras is to include a modal operator $\nabla$ in the language whose `arity' is given by the functor $\T$ itself, in the same way that $\Pom$ is the `arity' of our conjunctions and disjunctions. In the \emph{finitary} version of the language, the arity of $\nabla$ is given by the finitary version $\Tom$ of $\T$. In brief, the language $\mathcal{L}$ will be defined by the following grammar: \[ a \mathrel{\;::=\;} \neg a \mathrel{\mid} \mbox{$\bigwedge$}\phi \mathrel{\mid} \mbox{$\bigvee$}\phi \mathrel{\mid} \nabla \alpha \] where $\phi \in \Pom \mathcal{L}$ and $\alpha \in \Tom\mathcal{L}$. For the purpose of this paper we need some further syntactic definitions. \begin{definition} \label{d:syntax} Let $\T:\mathsf{Set} \to \mathsf{Set}$ be a standard, weak pullback preserving set functor and let $\Tom$ be the finitary version of $\T$. The finitary Moss language $\mathcal{L}$ for $\T$ is defined inductively. We first define $\mathcal{L}_{0}$ as the set $\Tba(\varnothing)$ of closed Boolean formulas (see Definition~\ref{d:BAsyntax}). For the inductive step, we start with introducing the set functor $\Tomnb$ defined by, for a given set $X$ and function $f: X \to Y$, \begin{eqnarray*} \Tomnb X & \mathrel{:=} & \{ \nabla \alpha \mid \alpha \in \Tom X \}, \\ \Tomnb f (\nabla\alpha) & \mathrel{:=} & \nabla \T f(\alpha). \end{eqnarray*} We continue the inductive definition by putting \[ \mathcal{L}_{i+1} \mathrel{:=} \Tba \Tomnb \mathcal{L}_{i}. \] Finally, we define $\mathcal{L}$ as the union $\mathcal{L} \mathrel{:=}\bigcup_{i \in \omega} \mathcal{L}_{i}$, and define the \emph{rank} or \emph{depth} of a formula $a \in \mathcal{L}$ to be the smallest natural number $n$ such that $a \in \mathcal{L}_{n}$.
\end{definition} Using BNF notation, we can recast the above definition as \begin{align*} \mathcal{L}_0 \ni a \mathrel{\;::=\;} & \neg a \mathrel{\mid} \mbox{$\bigwedge$}\phi \mathrel{\mid} \mbox{$\bigvee$}\phi \intertext{where $\phi \subseteq_{\omega} \mathcal{L}_{0}$, and} \mathcal{L}_{i+1} \ni a \mathrel{\;::=\;} & \nabla \alpha \mathrel{\mid} \neg a \mathrel{\mid} \mbox{$\bigwedge$}\phi \mathrel{\mid} \mbox{$\bigvee$}\phi \end{align*} where $\alpha \in \Tom\mathcal{L}_i$ and $\phi \in \Pom \mathcal{L}_{i+1}$. Despite its unconventional appearance, the language $\mathcal{L}$ admits fairly standard definitions of most syntactical notions. As an example we mention the notion of a subformula. \begin{definition} \label{d:sfor} We define the set $\mathit{Sfor}(a)$ of \emph{subformulas} of $a$ by the following induction: \begin{eqnarray*} \mathit{Sfor}(\neg a) & \mathrel{:=} & \{ \neg a \} \cup \mathit{Sfor}(a) \\ \mathit{Sfor}(\mbox{$\bigwedge$}\phi) & \mathrel{:=} & \{ \mbox{$\bigwedge$}\phi \} \cup \mbox{$\bigcup$}_{a\in\phi}\mathit{Sfor}(a) \\ \mathit{Sfor}(\mbox{$\bigvee$}\phi) & \mathrel{:=} & \{ \mbox{$\bigvee$}\phi \} \cup \mbox{$\bigcup$}_{a\in\phi}\mathit{Sfor}(a) \\ \mathit{Sfor}(\nabla\alpha) & \mathrel{:=} & \{ \nabla\alpha \} \cup \mbox{$\bigcup$}_{a\in\mathit{Base}(\alpha)}\mathit{Sfor}(a). \end{eqnarray*} The elements of $\mathit{Base}(\alpha)\subseteq\mathit{Sfor}(\nabla\alpha)$ will be called the \emph{immediate} subformulas of $\nabla\alpha$. \end{definition} On the basis of this definition it is not difficult to prove that every formula in $\mathcal{L}$ has only \emph{finitely} many subformulas. This is in fact the reason why we call our language the \emph{finitary} version of Moss'. \begin{rem} \label{r:Tsynt} In order to formulate and understand the interaction principles between nabla and the Boolean operations, we need to think of the propositional connectives as \emph{functions} on formulas. 
Taking disjunction as an example, observe that we may think of it as a map $\bigvee: \Pom\mathcal{L} \to \mathcal{L}$. Thus we may apply the functor $\Tom$ to this map, obtaining $\T\bigvee: \Tom\Pom\mathcal{L} \to \Tom\mathcal{L}$. (Recall from our discussion on the finitary version of a functor that to simplify notation we will write $\T\bigvee$ rather than $\Tom\bigvee$ .) Hence, for $\Phi \in \Tom\Pom\mathcal{L}$, we find $(\T\bigvee)\Phi \in \Tom\mathcal{L}$, which means that $\nabla(\T\bigvee)\Phi$ is a well-formed formula. The same applies to the formula $\nabla(\T\bigwedge)\Phi$, and similarly, we may think of negation as a map $\neg: \mathcal{L} \to \mathcal{L}$, and obtain $\T\neg: \T\mathcal{L} \to \T\mathcal{L}$; thus for any formula $\nabla\alpha$, we may also consider the formula $\nabla(\T\neg)\alpha$. \end{rem} \begin{rem} \label{r:prop} The reader may be surprised that we did not include propositional variables in our language. The reason for this is that we may \emph{encode} these into the functor. More precisely, given a functor $\T$ and a set $\mathsf{Prop}$ of proposition letters, recall from Example~\ref{ex:1}(5) that the $\T$-models over $\mathsf{Prop}$ can be identified with the coalgebras for the functor $\T_{\mathsf{Prop}} = \funP(\mathsf{Prop}) \times T$. Hence we may use the language $\mathcal{L}$ associated with $\T_{\mathsf{Prop}}$ to describe the $\mathsf{Prop}$-models based on $\T$-coalgebras, see Example~\ref{ex:sem}(3). \end{rem} \begin{convention} Since in this paper we will not only be dealing with formulas and sets of formulas, but also with elements of the sets $\Tom\mathcal{L}$, $\Pom\Tom\mathcal{L}$ and $\Tom\Pom\mathcal{L}$, it will be convenient to use some kind of \emph{naming convention}, see Table~\ref{tb:naming} below. 
\begin{table}[thb] \begin{center} \begin{tabular}{|r|l|} \hline Set & Elements\\ \hline $\mathcal{L}$& $a,b,\dotsc$\\ $\Tom \mathcal{L}$ & $\alpha,\beta, \dotsc$\\ $\Pom \mathcal{L}$ & $\phi,\psi, \dotsc$\\ $\Pom \Tom \mathcal{L}$& $A, B, \dotsc$\\ $\Tom \Pom \mathcal{L}$ & $\Phi, \Psi,\dotsc$\\ \hline \end{tabular} \end{center} \caption{Naming convention} \label{tb:naming} \end{table} \end{convention} It will be useful later on to have a more categorical description of the finitary Moss language for a functor $\T$. For this purpose we need the following definition. \begin{definition} \label{d:Mossfun} We define the category $\Boole_{\nabla}$ of \emph{Moss algebras} as the category of algebras for the Moss functor $A_{M}: \mathsf{Set}\to\mathsf{Set}$, given as: \[ A_{M} \mathrel{:=} \Id + \Pom + \Pom + \Tom. \] That is, for a set $S$, $A_{M} S$ is the disjoint union of $S$, two (disjoint) copies of $\Pom S$, and $\Tom S$; for a map $f$, $A_{M} f$ is defined accordingly. A Moss algebra will usually be introduced as a quintuple $\mathbb{B} = \struc{B,\neg^{\mathbb{B}},\bigwedge^{\mathbb{B}},\bigvee^{\mathbb{B}},\nabla^{\mathbb{B}}}$, where $\struc{B,\neg^{\mathbb{B}},\bigwedge^{\mathbb{B}},\bigvee^{\mathbb{B}}}$ is a $\mathsf{Boole}$-type algebra, called the \emph{Boolean reduct} of $\mathbb{B}$, and $\nabla^{\mathbb{B}}: \Tom B \to B$ is the \emph{nabla operator} of $\mathbb{B}$. \end{definition} Given a Moss algebra $\mathbb{B}$, there is a unique, natural way to interpret $\mathcal{L}$-terms as elements of the carrier $B$ of $\mathbb{B}$. This \emph{meaning function} $\mathit{mng}_{\mathbb{B}}: \mathcal{L} \to \funU\mathbb{B}$ can be defined by a straightforward induction on the complexity of formulas.
For instance, the clauses for $\bigwedge$ and $\nabla$ are \begin{eqnarray*} \mathit{mng}_{\mathbb{B}}(\mbox{$\bigwedge$}\phi) &\mathrel{:=}& \mbox{$\bigwedge$}^{\mathbb{B}}(\funP\mathit{mng}_{\mathbb{B}})(\phi) \\ \mathit{mng}_{\mathbb{B}}(\nabla\alpha) &\mathrel{:=}& \nabla^{\mathbb{B}}(\T\mathit{mng}_{\mathbb{B}})(\alpha) \end{eqnarray*} Categorically speaking, this means the following. We may view Moss' language itself as a Moss algebra, by interpreting the function symbols as the corresponding syntactic operation, as usual in universal algebra. Note that in order to prove that $\nabla^{\mathcal{L}}\alpha$ belongs to $\mathcal{L}$, it is crucial that $\nabla$ is a finitary operation: from $\alpha \in \Tom\mathcal{L}$ it follows that $\alpha \in \Tom\mathcal{L}_{n}$ for some finite $n$, and then we may proceed with $\nabla\alpha \in \mathcal{L}_{n+1}\subseteq\mathcal{L}$. The arising algebra, that we will also denote as $\mathcal{L}$, is a rather special Moss algebra, namely, the \emph{initial} one. Apart from the fact that the syntax of $\mathcal{L}$ is slightly unusual, the proof of the proposition below is standard universal algebra, and so we omit it. \begin{prop} \label{p:Lmoss-init} $\mathcal{L}$ is the initial Moss algebra: given an arbitrary Moss algebra $\mathbb{B}$, the meaning function $\mathit{mng}_{\mathbb{B}}$ is the unique homomorphism from $\mathcal{L}$ to $\mathbb{B}$. \end{prop} Before moving on to the coalgebraic semantics of $\mathcal{L}$, we finish our discussion of its syntax with the following definition, for future reference. \begin{definition} \label{d:Tomnb}\label{d:Tnb} Let $\T:\mathsf{Set} \to \mathsf{Set}$ be a set functor and let $\Tom$ be the finitary version of $\T$. We define the functor $\Tnb: \mathsf{Set} \to \mathsf{Boole}$ by putting \[ \Tnb \mathrel{:=} \akk{ \mathcal{L}_0}\cof \Tomnb \cof \mathcal{L}_0. 
\] \end{definition} \akk{On occasion, we will consider $\Tnb$ also as a $\mathsf{Boole}$ valued functor allowing us to write $\Tnb=\funaF \Tomnb \Tba$. The notation $\Tnb$} is in accordance with the definition of $\mathcal{L}_{1}$ as the fragment of rank one formulas in $\mathcal{L}$, by the observation that $\mathcal{L}_{1} = (\Tba\cof\Tomnb)(\mathcal{L}_{0}) = \Tba\Tomnb\Tba(\varnothing)$. \subsection{Semantics} Given all the preparations we have made in the previous sections, the definition of the semantics of the language is completely straightforward. \begin{definition}\label{def:moss_sem} Let $\T:\mathsf{Set} \to \mathsf{Set}$ be a standard, weak pullback preserving functor, and let $\mathbb{X} = \struc{X,\xi}$ be a $\T$-coalgebra. The satisfaction relation ${\Vdash_{\mathbb{X}}} \subseteq X \times \mathcal{L}$ is defined by the following induction on the complexity of formulas: \[\begin{array}{lcl} x \Vdash_{\mathbb{X}} \neg a & \mbox{if} & x \not\Vdash_{\mathbb{X}} a, \\ x \Vdash_{\mathbb{X}} \bigwedge\phi & \mbox{if} & x \Vdash_{\mathbb{X}} a \text{ for all } a \in \phi, \\ x \Vdash_{\mathbb{X}} \bigvee\phi & \mbox{if} & x \Vdash_{\mathbb{X}} a \text{ for some } a \in \phi, \\ x \Vdash_{\mathbb{X}} \nabla \alpha & \mbox{if} & \xi(x) \rel{\rl{\T}{\Vdash_{\mathbb{X}}}} \alpha. \end{array} \] If $x \Vdash_{\mathbb{X}} a$ we say that $a$ is \emph{true}, or \emph{holds} at $x$ in $\mathbb{X}$. We may omit the superscript when no confusion is likely, writing $\Vdash$ instead of $\Vdash_{\mathbb{X}}$. In case $a$ \emph{holds throughout $\mathbb{X}$}, that is, at every state of $\mathbb{X}$, we write $\mathbb{X} \Vdash a$. \end{definition} \noindent Before we turn to look at some examples, we should argue for the \emph{well-definedness} of the relation $\Vdash$. 
In particular, when looking at the clause for the nabla modality, the reader might be worried whether this is an inductive definition at all, since the defining clause, `$\xi(x) \rel{\rl{\T}{\Vdash}} \alpha$', refers to the \emph{full} forcing relation. The point is that because of our assumptions, $\rl{\T}$ commutes with restrictions, and so we have \begin{equation} \label{eq:sem} (\xi(x),\alpha) \in \rl{\T}({\Vdash}) \iff (\xi(x),\alpha) \in \rl{\T}({\Vdash\rst{X\times\mathit{Base}(\alpha) }}). \end{equation} Thus, in order to determine whether $\nabla\alpha$ holds at $x$ or not, we only have to know the interpretation of the \emph{immediate subformulas of $\alpha$} (that is, the elements of $\mathit{Base}(\alpha)$). In other words, using the right hand side of \eqref{eq:sem} rather than the left hand side, we would obtain an equivalent, inductive, definition of the semantics. \begin{exa} \label{ex:sem}\hfill \begin{enumerate}[(1)] \item Let $\T$ be the $C$-stream functor given by $\T X = C \times X$ for some set $C$. Then $\nabla_{\T}$ takes as its argument a pair $(c,a)$ where $c \in C$ and $a$ is a formula in $\mathcal{L}$. The formula $\nabla(c,a)$ is true in a $\T$-coalgebra $(X,\xi)$ at a state $x$ if $\xi(x) = (c',y)$ with $c=c'$ and $y \Vdash a$. \item The nabla operator $\nabla_{\funP}$ associated with the power set functor $\funP$ is the \emph{cover modality} discussed in the introduction. \item If $\T_{\mathsf{Prop}}$ is the $\T$-model functor of Example~\ref{ex:2}(5), associated with a functor $\T$ and a set $\mathsf{Prop}$ of proposition letters, then $\nabla_{\T_{\mathsf{Prop}}}$ takes as its argument a pair $(\pi,\alpha)$ consisting of a set $\pi \subseteq \mathsf{Prop}$ and a set $\alpha \subseteq_{\omega} \mathcal{L}^{\T}$. The meaning of the formula $\nabla_{\T_{\mathsf{Prop}}}(\pi,\alpha)$ can be expressed as \[ \nabla_{\T_{\mathsf{Prop}}}(\pi,\alpha) \equiv (\bigwedge_{p\in\pi}p \land \bigwedge_{p\not\in\pi}\neg p) \land \nabla_{\T}\alpha.
\] \item Finally, let $\T=D_{\omega}$ be the finitary distribution functor. In this case, $\nabla_{D_{\omega}}$ takes as argument a distribution $\mu: \mathcal{L} \to [0,1]$ of finite support. Given a $\T$-coalgebra $\mathbb{X} = (X,\xi)$ and some $x \in X$ we have $x \Vdash_{\mathbb{X}} \nabla_{D_{\omega}} \mu$ if for all $y \in X$ and all $a \in \mathcal{L}$ there are real numbers $\rho_{y,a} \in [0,1]$ such that \begin{align*} \rho_{y,a} \not= 0 \quad \mbox{implies} \quad & y \Vdash a, \xi(x)(y) \not= 0, \mu(a) \not= 0 \quad &\mbox{and}\\ & \sum_{a' \in \mathcal{L}} \rho_{y,a'} = \xi(x)(y) \quad \mbox{for all } y \in X \quad &\mbox{and} \\ & \sum_{y \in X} \rho_{y,a} = \mu (a) \quad \mbox{for all } a \in \mathcal{L}. \quad \end{align*} \end{enumerate} \end{exa} \noindent The state-based semantics of the logics as presented in Definition~\ref{def:moss_sem} can be brought in accordance with the earlier algebraic perspective by the observation that every $\T$-coalgebra naturally induces a Moss algebra, namely its \emph{complex algebra}. \begin{definition} \label{d:cplxalg1} Let $\T:\mathsf{Set} \to \mathsf{Set}$ be a standard, weak pullback preserving functor, and let $\mathbb{X} = \struc{X,\xi}$ be a $\T$-coalgebra. The \emph{complex algebra} $\mathbb{X}^{+}$ of $\mathbb{X}$ is defined as the Moss algebra $\mathbb{B}$ which has the power set algebra $\funaQ(X)$ as its Boolean reduct, while \[ \nabla^{\mathbb{X}^{+}} \mathrel{:=} \funQ\xi \cof \lambda\!^{\T}_{X} \] defines the nabla operation of $\mathbb{X}^{+}$. \end{definition} In words: the Boolean function symbols $\neg,\bigvee$ and $\bigwedge$ are interpreted as the complementation, union and intersection operations on the power set of $X$.
To understand the definition of the nabla operation, observe that applying the contravariant power set functor to the coalgebra map $\xi$, we obtain a function $\funQ\xi: \funQ\T X \to \funQ X$, so if we compose this map with the $\T$-transformation $\lambda\!^{\T}_{X}: \T\funQ X \to \funQ\T X$, we obtain a map $\funQ\xi \cof \lambda\!^{\T}_{X}: \T\funQ X \to \funQ X$ of the right shape. It follows by Proposition~\ref{p:Lmoss-init} that every $\mathcal{L}$-formula $a$ can uniquely be assigned a meaning $\mathit{mng}_{\mathbb{X}^{+}}(a) \in \funP X$ in the complex algebra of a $\T$-coalgebra $\mathbb{X}$ --- in the sequel we will write $\mathit{mng}_{\mathbb{X}}$ rather than $\mathit{mng}_{\mathbb{X}^{+}}$. The Proposition below states that the two approaches to the coalgebraic semantics of $\mathcal{L}$ coincide, so that we can speak without hesitation of `the' meaning of a formula in a $\T$-coalgebra. \begin{prop} \label{p:onesemantics} Let $\T:\mathsf{Set} \to \mathsf{Set}$ be a standard, weak pullback preserving functor, and let $\mathbb{X} = \struc{X,\xi}$ be a $\T$-coalgebra. Then we have \begin{equation*} \mathit{mng}_{\mathbb{X}}(a) = \{ x \in X \mid x \Vdash a \}, \end{equation*} for every formula $a \in \mathcal{L}$. \end{prop} \begin{proof} The proof of this proposition proceeds by a routine formula induction. \end{proof} \subsection{First observations} In this subsection we gather first observations on $\mathcal{L}$. First we show that Moss' logic is \emph{adequate}; that is, it cannot distinguish behaviorally equivalent states. \begin{thm}[Adequacy] \label{t:adequacy} Let $\T:\mathsf{Set} \to \mathsf{Set}$ be a standard, weak pullback preserving functor, and let $f: X \to Z$ be a coalgebra morphism between the $\T$-coalgebras $(X,\xi)$ and $(Z,\zeta)$. For all formulas $a \in \mathcal{L}$ and all states $x \in X$ we have \begin{equation} \label{eq:adeq1} x \Vdash_{\mathbb{X}} a \text{ iff } f(x) \Vdash_{\mathbb{Z}} a. 
\end{equation} \end{thm} We leave it as an exercise for the reader to give a \emph{direct} proof of Theorem~\ref{t:adequacy} --- a straightforward induction will suffice, using the fact that $\rl{\T}$ distributes over relation composition in the case of a formula $a = \nabla\alpha$. We will give a proof based on the algebraic approach, involving the initiality of $\mathcal{L}$ (Proposition~\ref{p:Lmoss-init}), and the following result. \begin{prop} \label{p:nbhom} Let $\T:\mathsf{Set} \to \mathsf{Set}$ be a standard, weak pullback preserving functor, and let $f: X \to Z$ be a coalgebra morphism between the $\T$-coalgebras $\mathbb{X} = (X,\xi)$ and $\mathbb{Z} = (Z,\zeta)$. Then $\funQ f$ is an algebraic homomorphism from $\mathbb{Z}^{+}$ to $\mathbb{X}^{+}$. \end{prop} \begin{proof} It is well-known that $\funQ f$ is a homomorphism from the power set algebra $\funaQ(Z)$ to $\funaQ(X)$. Thus it is left to show that $\funQ f$ is also a homomorphism with respect to the nabla operators. For that purpose, consider the following diagram: \[\xymatrix{ \T\funQ Z \ar[d]_{\T\funQ f} \ar[r]^{\lambda\!^{\T}_Z} & \funQ\T Z \ar[d]^{\funQ\T f} \ar[r]^{\funQ\zeta} & \funQ Z \ar[d]^{\funQ f} \\ \T\funQ X \ar[r]_{\lambda\!^{\T}_X} & \funQ\T X \ar[r]_{\funQ\xi} & \funQ X }\] The left rectangle commutes since $\lambda\!^{\T}$ is a distributive law of $\T$ over $\funQ$ (see Proposition~\ref{p:nbdlfunQ}), and the right rectangle commutes by functoriality of $\funQ$ and the assumption that $f$ is a coalgebra morphism. As a corollary, the outer diagram commutes, but by definition of $\nabla^{\mathbb{X}^{+}}$ and $\nabla^{\mathbb{Z}^{+}}$ this just means that $\funQ f$ is a homomorphism for $\nabla$. \end{proof} On the basis of the previous proposition, the proof of the Theorem is almost immediate.
\begin{proofof}{Theorem~\ref{t:adequacy}} By initiality of $\mathcal{L}$ as a Moss algebra, $\mathit{mng}_{\mathbb{X}}$ is the unique homomorphism $\mathit{mng}_{\mathbb{X}}: \mathcal{L} \to \mathbb{X}^{+}$. But it follows from Proposition~\ref{p:nbhom} that $\funQ f \cof \mathit{mng}_{\mathbb{Z}}$ is also a homomorphism from $\mathcal{L}$ to $\mathbb{X}^{+}$, so that we may conclude that \begin{equation} \label{eq:adeq2} \mathit{mng}_{\mathbb{X}} = \funQ f \cof \mathit{mng}_{\mathbb{Z}}. \end{equation} Now let $x$ and $a$ be as in the statement of the theorem, then we have \begin{align*} x \Vdash_{\mathbb{X}} a & \text{ iff } x \in \mathit{mng}_{\mathbb{X}}(a) & \text{(Proposition~\ref{p:onesemantics})} \\ & \text{ iff } x \in \funQ f (\mathit{mng}_{\mathbb{Z}}(a)) & \text{(\ref{eq:adeq2})} \\ & \text{ iff } fx \in \mathit{mng}_{\mathbb{Z}}(a) & \text{(definition of $\funQ f$)} \\ & \text{ iff } fx \Vdash_{\mathbb{Z}} a & \text{(Proposition~\ref{p:onesemantics})} \end{align*} From this the theorem is immediate. \end{proofof} \subsection{Logic} The purpose of this paper is to provide a sound and complete axiomatization of the set of \emph{coalgebraically valid} formulas in this language, that is, the set of $\mathcal{L}$-formulas that are true in every state of every coalgebra. Since our completeness proof will be algebraic in nature, for our purposes it will be convenient to formulate our results in terms of \emph{equations}, or rather, \emph{inequalities}. \begin{definition} An \emph{inequality} is an expression of the form $a \precsim b$, where $a$ and $b$ are formulas in $\mathcal{L}$. Similarly, an \emph{equation} is an expression of the form $a \approx b$. \end{definition} One may think of the inequality $a \precsim b$ as abbreviating the equation $a \land b \approx a$, and we will see the equation $a \approx b$ as representing the set $\{ a \precsim b, b \precsim a \}$ of inequations. 
(In fact, in our Boolean setting, we could even represent the equation $a \approx b$ by the single inequality $(a \land \neg b) \lor (\neg a\land b) \precsim \bot$.) Thus it does not really matter whether we base our logic on equations or on inequalities, and in the sequel we will move from one perspective to the other if we deem it useful. \begin{definition} An inequality $a \precsim b$ \emph{holds} in a Moss algebra $\mathbb{A}$, notation: $\mathbb{A} \models a \precsim b$, if $\mathit{mng}_{\mathbb{A}} (a) \leq_{\mathbb{A}} \mathit{mng}_{\mathbb{A}} (b) $. \end{definition} Given the Boolean basis of our logics, we can express coalgebraic validity in terms of equational validity, and vice versa. More precisely, given a $\T$-coalgebra $\mathbb{X} = \struc{X,\xi}$, it is easy to see that \begin{align*} \mathbb{X} \Vdash a &\iff \mathbb{X}^{+} \models \top \precsim a \intertext{and, conversely,} \mathbb{X}^{+} \models a \precsim b &\iff \mathbb{X} \Vdash \neg a \lor b. \end{align*} As a consequence, in order to axiomatize the coalgebraically valid formulas, we may just as well find a derivation system for the inequalities that are valid in all complex algebras. \begin{definition} An inequality $a \precsim b$ is \emph{($\T$-coalgebraically) valid}, notation: $a \models_{\T} b$, if it holds in every complex algebra $\mathbb{X}^{+}$. \end{definition} As an example of a validity, we mention the following, for an arbitrary $\Phi \in \Tom\Pom\mathcal{L}$: \[ \tag{$\nabla 3_{f}$} \nabla(\T\mbox{$\bigvee$})\Phi \precsim \bigvee \Big\{ \nabla \beta \mid \beta \rel{\Tb{\in}} \Phi \Big\} \] (see Remark~\ref{r:Tsynt} for an explanation of the syntax). Note that the right hand side of ($\nb3_{f}$) is a well-defined formula only if the disjunction is finite; we can guarantee this by requiring $\T$ to map finite sets to finite sets. (We will come back to this issue in the next section.) 
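Before turning to the general soundness proof, the validity ($\nb3_{f}$) can be tested by brute force for the power set functor. The sketch below (an illustrative encoding, with formulas identified with their extensions and all identifiers ours) enumerates all $\Phi$ built from a small pool of formulas over a three-state Kripke frame, and checks that whenever $\nabla(\T\bigvee)\Phi$ holds at a state, so does $\nabla\beta$ for some lifted member $\beta$ of $\Phi$.

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# A small Kripke frame, i.e. a coalgebra for the powerset functor:
# succ[x] is the set of successors of x.
succ = {0: {1, 2}, 1: {2}, 2: set()}

# Identify a formula with its extension: x |- a  iff  x in a (illustrative).
def nabla(x, alpha):
    """Cover modality: every successor satisfies some a in alpha,
    and every a in alpha holds at some successor."""
    return (all(any(y in a for a in alpha) for y in succ[x])
            and all(any(y in a for y in succ[x]) for a in alpha))

def big_or(phi):
    """Disjunctions act on extensions as unions."""
    return frozenset(chain.from_iterable(phi))

def lifted_members(Phi):
    """For T = powerset, beta is a lifted member of Phi iff beta consists of
    formulas occurring in Phi and meets every phi in Phi."""
    base = frozenset(chain.from_iterable(Phi))
    return [beta for beta in subsets(base) if all(beta & phi for phi in Phi)]

# Brute-force check of (nabla 3_f) over all Phi built from three formulas:
formulas = [frozenset({1}), frozenset({2}), frozenset({1, 2})]
for Phi in subsets(subsets(formulas)):
    for x in succ:
        lhs = nabla(x, {big_or(phi) for phi in Phi})
        rhs = any(nabla(x, beta) for beta in lifted_members(Phi))
        assert lhs <= rhs, (x, Phi)
```

The check passes on this coalgebra, in line with Proposition~\ref{p:nb3s}; note that the empty $\beta$ is the unique lifted member of the empty $\Phi$, which is why the deadlock state $2$ causes no failure.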
\begin{prop} \label{p:nb3s} If $\T$ is a weak pullback preserving, standard set functor that maps finite sets to finite sets, then the inequality $(\nb3_{f})$ is valid for every $\Phi \in \Tom\Pom\mathcal{L}$. \end{prop} \begin{proof} In order to understand the validity of ($\nb3_{f}$), fix some $\T$-coalgebra $\mathbb{X} = \struc{X,\xi}$. First observe that for any $\phi \subseteq_{\omega} \mathcal{L}$ we have $\mathbb{X},x \Vdash \bigvee\phi$ iff $\mathbb{X},x \Vdash a$, for some $a \in \phi$. Putting it differently, the relations ${\Vdash} \corel {\in}$ and ${\Vdash} \corel \cv{\bigvee}$ coincide. From this it follows that \begin{equation} \label{eq:soundbv} \rl{\T}({\Vdash} \corel {\in}) \;=\; \rl{\T}({\Vdash}\corel \cv{\mbox{$\bigvee$}}). \end{equation} \noindent Now fix some object $\Phi \in \Tom\Pom\mathcal{L}$, and suppose that $x$ is a state in $\mathbb{X}$ such that $x \Vdash \nabla(\T\bigvee)\Phi$. From this it follows that the pair $(\xi(x),(\T\bigvee)(\Phi))$ belongs to the relation $\rl{\T}\Vdash$, and so $(\xi(x),\Phi)$ belongs to $(\rl{\T}{\Vdash}) \corel \cv{(\T\bigvee)} = \rl{\T}({\Vdash}\corel \cv{\bigvee})$. But then by (\ref{eq:soundbv}), we find $(\xi(x),\Phi) \in \rl{\T}({\Vdash} \corel {\in}) = \rl{\T}{\Vdash} \corel \rl{\T}{\in}$. In other words, there is some object $\beta$ such that $\xi(x) \rel{\rl{\T}{\Vdash}} \beta$ and $\beta \rel{\Tb{\in}} \Phi$. Clearly then $x \Vdash \nabla\beta$, and so we have $x \Vdash \bigvee \{ \nabla \beta \mid \beta \rel{\Tb{\in}} \Phi \}$, as required. \end{proof} \section{One-step soundness and completeness} \label{s:onestep} As mentioned in the introduction, our completeness proof is based on Pattinson's stratification method~\cite{patt:coal03}, which consists of stratifying the logic into $\omega$ many layers which are nicely glued together by means of a so-called one-step version of the derivation system.
The main technical hurdle in this method consists of showing that this one-step derivation system is sound and complete with respect to a natural one-step semantics. In this section we will first properly introduce our version of these notions, and then prove the one-step soundness and completeness result. \subsection{One-step semantics and one-step axiomatics} Starting with the one-step semantics, fix a set $X$ and think of $\funQ X$ as a set of formal objects or \emph{propositions}. Recall from Section~\ref{s:moss} that $\Tba \funQ X$ and $\Tnb \funQ X$ are the sets of formulas of depth zero and depth one over this language, respectively. The point underlying the one-step semantics is that there is a natural interpretation of the formulas in $\Tnb\funQ X$ as sets of elements of $\T X$, or, expressed more accurately, as elements of the Boolean algebra $\funaQ\T X$. To explain this, first note that we may see the identity map \[ \iota: \funQ X \to \funQ X \] as a natural \emph{valuation} interpreting variables of $\funQ X$ as subsets of $X$, and then extend this valuation to a unique homomorphism \[ \semzero{\cdot}^{\akk{X}} \mathrel{:=} \ti{\iota}: \akk{\funaF\funQ X \to \funaQ X}. \] \akk{We find it convenient to denote $U\ti{\iota}:\Tba\funQ X \to \funQ X$ by the same symbol $\semzero{\cdot}^X$ and also to occasionally drop the superscript $^{\akk{X}}$.} We may associate a relation $\Vdash_{X}^{0} \subseteq X \times \Tba \funQ X$ with this map, which we define inductively by putting \[\begin{array}{lll} x \Vdash_{X}^{0} p & \mbox{if} & x \in p, \; \mbox{ where } \; p \in \funQ X, \\ x \Vdash_{X}^{0} \bigvee\phi & \mbox{if} & x \Vdash_{X}^{0} a \mbox{ for some } a \in \phi, \\ x \Vdash_{X}^{0} \bigwedge\phi & \mbox{if} & x \Vdash_{X}^{0} a \mbox{ for all } a \in \phi. 
\end{array}\] Clearly the relation between $\semzero{\cdot}$ and $\Vdash_{X}^{0}$ is given by \[ x \in \semzero{a} \ \mbox{ iff } \ x \Vdash_{X}^{0} a, \] for all $x \in X$ and all $a \in \Tba \funQ X$. We note for future reference that $\semzero{\cdot}$ gives rise to a natural transformation. \begin{prop} \label{p:semzeronat} The family of homomorphisms $\{ \semzero{\cdot}^X\}_{X\in\mathsf{Set}}$ \akk{is a natural transformation $\funaF\funQ \mathrel{\dot{\rightarrow}} \funaQ $ and, therefore, also a natural transformation $\semzero{\cdot}: \Tba\funQ \mathrel{\dot{\rightarrow}} \funQ$.} \end{prop} \begin{proof} Naturality of $\semzero{\cdot}$ is a matter of routine checking. The key for the proof is that for any function $f:X \to Y$, $\funQ f: \funQ Y \to \funQ X$ is a Boolean homomorphism. \end{proof} Turning our attention to depth-one formulas, perhaps the easiest way to explain their one-step semantics is to introduce a similar relation ${\Vdash_{X}^{1}} \subseteq \T X \times \Tnb \funQ X$: \[\begin{array}{lll} \T X, \xi \Vdash_{X}^{1} \nabla\alpha & \mbox{if} & (\xi,\alpha) \in \rl{\T}(\Vdash_{X}^{0}), \\ \T X, \xi \Vdash_{X}^{1} \bigvee\phi & \mbox{if} & \T X, \xi \Vdash_{X}^{1} c \mbox{ for some } c \in \phi, \\ \T X, \xi \Vdash_{X}^{1} \bigwedge\phi & \mbox{if} & \T X, \xi \Vdash_{X}^{1} c \mbox{ for all } c \in \phi. \end{array}\] \begin{rem} \label{r:semsem} It is instructive to have a look at the relationship between the one-step semantics of depth-one formulas and the coalgebraic semantics for arbitrary formulas from Definition~\ref{def:moss_sem}. Roughly, the definition of the one-step semantics of a formula captures precisely what is needed to inductively define the semantics of the logic. 
More precisely, let $(X,\xi)$ be some $\T$-coalgebra and let, for $i<\omega$, $\mathit{mng}_{i}: \mathcal{L}_{i} \to \funQ X$ be the map that maps any formula $a \in \mathcal{L}_{i}$ of modal rank $i$ to its coalgebraic meaning, that is, for all $a \in \mathcal{L}_{i}$ and all $x \in X$ we let $x \in \mathit{mng}_i(a)$ iff $x \Vdash a$. Now we claim that for any $k<\omega$ and any $\nabla \alpha \in \mathcal{L}_{k+1}$ we have \begin{equation} \label{eq:semsem0} x \Vdash_{\mathbb{X}} \nabla \alpha \ \mbox{ iff } \ \T X, \xi(x) \Vdash_X^1 \nabla (\T\mathit{mng}_{k})\alpha. \end{equation} To see this, first observe that by induction on the Boolean structure of $\mathcal{L}_{k}$-formulas, we may show that for any $a \in \mathcal{L}_{k}$ and any $x \in X$, we have $x \Vdash_{\mathbb{X}} a$ iff $x \Vdash_{X}^{0} \mathit{mng}_{k}(a)$. In other words, we have \begin{equation} \label{eq:semsem1} ({\Vdash_{\mathbb{X}}})\rst{X \times \mathcal{L}_{k}} = {\Vdash_{X}^{0}} \corel \cv{\mathit{mng}_{k}}. \end{equation} Based on this, we may reason as follows: \begin{align*} x \Vdash_{\mathbb{X}} \nabla \alpha &\iff \xi(x) \rel{\rl{\T}{\Vdash_{\mathbb{X}}}} \alpha &\text{(definition of $\Vdash$)} \\&\iff \xi(x) \rel{\rl{\T} ({\Vdash_{\mathbb{X}}\rst{X\times\mathcal{L}_{k}}})} \alpha &\text{($\rl{\T}$ commutes with restrictions)} \\&\iff \xi(x) \rel{\rl{\T} {{\Vdash_{X}^{0}} \corel \cv{\mathit{mng}_{k}}}} \alpha &\text{(equation \eqref{eq:semsem1})} \\&\iff \xi(x) \rel{\rl{\T} {{\Vdash_{X}^{0}}}} (\T\mathit{mng}_{k}) \alpha &\text{(properties of relation lifting)} \\&\iff \T X, \xi(x) \Vdash_X^1 \nabla (\T\mathit{mng}_{k})\alpha. &\text{(definition of $\Vdash^{1}$)} \end{align*} \noindent In words: if we assume that we have already defined the interpretation of all formulas of modal rank $k$ then the one-step semantics allows us to extend this interpretation to formulas of rank $k+1$.
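For the power set functor, this rank-by-rank evaluation can be sketched in a few lines of Python (an illustrative encoding, not part of the formal development: formulas are tagged tuples, and all identifiers are ours). The point is that the `nabla` case consults only the extensions of the immediate subformulas, exactly as in \eqref{eq:semsem0}.

```python
# Moss formulas for the powerset functor, without proposition letters:
# tagged tuples ('and', fs), ('or', fs), ('nabla', fs), where fs is a
# frozenset of formulas.
succ = {0: {1, 2}, 1: {2}, 2: set()}
states = set(succ)

def one_step(s, exts):
    """One-step cover clause: s is a set of successors, exts a set of
    extensions (subsets of states) of the immediate subformulas."""
    return (all(any(y in e for e in exts) for y in s)
            and all(any(y in e for y in s) for e in exts))

def mng(a):
    """Meaning of a formula, by structural recursion; the nabla case only
    uses the extensions of the immediate subformulas."""
    tag, parts = a
    if tag == 'and':
        return frozenset(x for x in states if all(x in mng(b) for b in parts))
    if tag == 'or':
        return frozenset(x for x in states if any(x in mng(b) for b in parts))
    exts = {mng(b) for b in parts}      # tag == 'nabla'
    return frozenset(x for x in states if one_step(succ[x], exts))

TOP = ('and', frozenset())              # empty conjunction
END = ('nabla', frozenset())            # holds exactly at deadlock states

assert mng(END) == frozenset({2})
assert mng(('nabla', frozenset({TOP}))) == frozenset({0, 1})
assert mng(('nabla', frozenset({END}))) == frozenset({1})
```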
\end{rem} The relation $\Vdash_{X}^{1}$ provides a natural semantics for terms of depth one, and induces a natural semantic equivalence relation. \begin{definition} Given a set $X$, we define the one-step semantics $\semone{a'}$ of a formula $a' \in \Tnb(\funQ X)$ as \[ \semone{a'} := \{ \xi \in \T X \mid \T X, \xi \Vdash_{X}^{1} a' \}. \] We say that two formulas $a',b' \in \Tnb(\funQ X)$ are \emph{semantically one-step equivalent}, notation: $a' \equiv_{sem} b'$, if $\semone{a'} = \semone{b'}$. \end{definition} \begin{rem} \label{r:onestep2} Alternatively but equivalently, we can define $\semone{\cdot}$ as follows. Apply $\T$ to the map $\semzero{\cdot}$, and compose with the function $\lambda\!^{\T}_{X}$ to obtain \[ \lambda\!^{\T}_{X}\circ\T\semzero{\cdot}: \Tom\Tba\funQ X \to \funQ\T X. \] This map then provides us with an interpretation of the basic formulas in $\Tnb\funQ X = \Tba\Tomnb\Tba\funQ X$, namely the ones of the form $\nabla\alpha \in \Tomnb\Tba\funQ X$: \[ \mu_{X}(\nabla\alpha) := (\lambda\!^{\T}_{X}\circ\T\semzero{\cdot})(\alpha). \] Now $\semone{\cdot}$ may be identified with $\akk{U}\ti{\mu}_{X}: \Tba\Tomnb\Tba\funQ X \to \akk{\funQ}\T X$. \akk{Occasionally, we will write $\semone{\cdot}$ also for the $\mathsf{Boole}$-morphism $\ti{\mu}_{X}: \funaF\Tomnb\Tba\funQ X \to \funaQ\T X$.} \end{rem} To match the semantic notions of equivalence between $\Tnb\funQ X$-formulas, we introduce a one-step version of the derivation system $\mathbf{M}$, associated with the presentation $\funC\funaQ X$ of the power set Boolean algebra $\funaQ X$. Formal definitions will be given below, but the basic idea is straightforward: modify $\mathbf{M}$ by (i) restricting attention to the depth-0 and depth-1 formulas over the set $\funQ X$ of (formal) variables, and (ii) adding the `true facts about $\funaQ X$' as additional axioms.
The resulting derivation system naturally induces an interderivability relation on $\Tnb\funQ X$-formulas that we shall denote as $\equiv_{\mathbf{M} \funC\funaQ X}$ for reasons that we will clarify in Remark~\ref{r:sc1} further on. This then raises the question whether the two equivalence relations are the same or not, and the main aim of this section is to provide an affirmative answer to this question. \begin{thm}[1-step soundness and completeness] \label{t:sc1} For any set $X$, and for any pair of formulas $c,d \in \Tnb\funQ X$ we have \begin{equation} \label{eq:1stsc} c \equiv_{sem} d \ \mbox{ iff } \ c \equiv_{\mathbf{M}\funC\funaQ X} d. \end{equation} \end{thm} Our proof of this result will be algebraic, and before we can move to the details of the proof, we need to set up the appropriate framework for this. We now define the one-step derivation system $\mathbf{M}\prs{G}{R}$ associated with a presentation $\prs{G}{R}$. Recall that $\Tba G$ and $\Tnb G = \Tba\Tomnb\Tba(G)$ are the set of depth zero and depth one formulas in $G$, respectively. In this section, if we want to stress the difference between the two kinds of formulas, we shall use $a,b,\ldots$ for formulas in $\Tba(G)$, and $c,d,\ldots$ for formulas in $\Tnb(G)$. An $\Tba G$-inequality is an inequality of the form $a \precsim b$, with $a,b \in \Tba G$; and likewise for $\Tnb G$. Intuitively, we obtain $\mathbf{M}{\prs{G}{R}}$ from $\mathbf{M}$ by restricting attention to $\Tba G$- and $\Tnb G$-inequalities, and adding the (in)equalities of $R$ as additional axioms. \begin{definition} \label{d:ds1} Given a presentation $\prs{G}{R}$, we let $\mathbf{M}\prs{G}{R}$ denote the \emph{one-step derivation system} associated with $\prs{G}{R}$. The language of $\mathbf{M}{\prs{G}{R}}$ consists of $\Tba G$-inequalities, and $\Tnb G$-inequalities, and its axioms and rules are those of $\mathbf{M}$, together with the set \[ R^{\precsim} := \{ a \precsim b, b \precsim a \mid (a,b) \in R \}.
\] A \emph{$\mathbf{M}\prs{G}{R}$-derivation} is a well-founded tree, labelled with $\Tba G$- and $\Tnb G$-inequalities, such that (i) the leaves of the tree are labelled with axioms of $\mathbf{M}$ or with inequalities in $R^{\precsim}$, (ii) with each parent node we may associate a derivation rule of which the conclusion labels the parent node itself, and the premisses label its children. \end{definition} We will leave it for the reader to verify that in $\mathbf{M}{\prs{G}{R}}$-derivations, a parent node is generally labelled with the same type of inequality (i.e. $\Tba G$ versus $\Tnb G$) as its children; the single exception is the rule ($\nabla 1$) which links $\Tba$-inequalities of the premises to an $\Tnb$-inequality in the conclusion. As a corollary, $\mathbf{M}{\prs{G}{R}}$-derivation trees can be divided into a (possibly empty) upper $\Tba G$-part and a (possibly empty) lower $\Tnb G$-part. \begin{rem} \label{r:sc1} We can now clarify the syntactic interderivability notion of our one-step soundness and completeness theorem. Given a set $X$, recall that $\funC \funaQ X$ is the canonical presentation of the Boolean algebra $\funaQ X$, and observe that $\equiv_{\mathbf{M}\funC\funaQ X}$ is the associated relation of derivable equivalence of $\Tnb\funQ X$-terms in the one-step derivation system $\mathbf{M}\funC\funaQ X$. It is \emph{this} derivation system that Theorem~\ref{t:sc1}, stating that the semantic equivalence relation is the same as the relation $\equiv_{\mathbf{M}\funC \funaQ X}$, is concerned with. \end{rem} \begin{rem} \label{r:aiml} Definition~\ref{d:ds1} corrects and clarifies the corresponding definition in this paper's earlier incarnation, where the one-step proof system $\mathbf{M}\prs{G}{R}$ was not properly specified. 
In particular, the sentence in~\cite[\akk{Definition~22}]{kupk:comp08}, `in which \emph{only} elements of $X$ and $\mathfrak{L}(X)$ may be used' (where $X$ denotes the set of generators) was not only rather vague, but in fact \emph{mistaken}: it would not permit nontrivial applications of the derivation rules ($\nb2$) and ($\nb3$), since these require the use of more terms in $\Tba(X)$ than only the generators in $X$ themselves. \end{rem} \subsection{The functor $M$ on presentations} As we will see now, the notion of a one-step derivation system induces a functor on the category of presentations. \begin{definition} \label{d:funpM} Given a presentation $\prs{G}{R}$, we let $M\prs{G}{R}$ denote the presentation given as \[ M\prs{G}{R} := \prs{\Tomnb\Tba(G)}{\equiv_{\mathbf{M}\prs{G}{R}}}. \] For a presentation morphism $f: \prs{G}{R} \to \prs{G'}{R'}$, the definition \[ M f: \nabla\alpha \mapsto \nabla (\Tom \wh{f})\alpha \] provides us with a map $M f: \Tomnb\Tba(G) \to \Tomnb\Tba(G')$. \end{definition} In other words, $M f$ maps generators of the presentation $M\prs{G}{R}$ to generators of the presentation $M\prs{G'}{R'}$. We will now show that $M f$ is in fact a presentation morphism from $M\prs{G}{R}$ to $M\prs{G'}{R'}$. \begin{rem} To be more precise, we need to compose $M f$ with the unit $\eta_{\Tomnb\Tba(G')}$ of the monad $\Tba$, instantiated at $\Tomnb\Tba(G')$, in order to obtain a map with the right codomain, $\Tba\Tomnb\Tba(G')$. In the sequel we will suppress this subtlety. \end{rem} Our key tool in the proof that $M f$ is a presentation morphism consists of a natural way to transform $\mathbf{M}\prs{G}{R}$-derivations into $\mathbf{M}\prs{G'}{R'}$-derivations. \begin{prop} \label{p:prm} If $f: \prs{G}{R} \to \prs{G'}{R'}$ is a presentation morphism, then there is a map $(\cdot)^{f}$ transforming $\mathbf{M}\prs{G}{R}$-derivations into $\mathbf{M}\prs{G'}{R'}$-derivations such that \[ \D: c \precsim d \ \Rightarrow\ \D^{f}: \wh{M f} c \precsim \wh{M f} d.
\] for every $\Tnb G$-inequality $c \precsim d$. \end{prop} \begin{proof} As an easy auxiliary result we need that for any two terms $a,b \in \Tba G$, \begin{equation} \label{eq:x1} a \sqsubseteq_{\mathbf{M}\prs{G}{R}} b \iff a \sqsubseteq_{R} b, \end{equation} where $a \sqsubseteq_{R} b$ means that $a \equiv_{R} a \land b$. From \eqref{eq:x1} and the fact that $f$ is a presentation morphism it is easy to derive that \begin{equation} \label{eq:x2} a \sqsubseteq_{\mathbf{M}\prs{G}{R}} b \ \mbox{ only if } \ \wh{f}a \sqsubseteq_{\mathbf{M}\prs{G'}{R'}} \wh{f}b. \end{equation} We now turn to the proof of the Proposition proper, which will be based on a straightforward induction on the complexity of $\D: c \precsim d$, where $c$ and $d$ are $\Tnb$-formulas. We make a case distinction as to the last rule applied in $\D$. First assume that the last applied rule in $\D$ was $(\nb1)$. That is, the formulas $c$ and $d$ in $\D: c\precsim d$ are of the form $c = \nabla\alpha$ and $d = \nabla\beta$, for some $\alpha$ and $\beta$ in $\Tom\Tba G$, respectively, and we may assume that $\D$ is of the following form: \[ \D: \AXC{$\{ \D_{ab}: a \precsim b \mid (a,b) \in Z \}$} \UIC{$\nabla\alpha \precsim \nabla\beta$} \DisplayProof \] Here $Z \subseteq \mathit{Base}(\alpha) \times \mathit{Base}(\beta)$ is some set with $(\alpha,\beta) \in \rl{\T} Z$, and such that for every pair $(a,b) \in Z$, there is a depth zero derivation $\D_{ab}: a \precsim b$. Define $Z' := \{ (\wh{f}a,\wh{f}b) \mid (a,b) \in Z \}$, or, equivalently, $Z' := \converse{(\wh{f\,})};Z;\wh{f}$. Then it follows from \eqref{eq:x2} that for each $(a',b') \in Z'$, there is a derivation $\D^{f}_{a'b'}: a' \precsim b'$. Using the properties of relation lifting we find that $\rl{\T} Z' = \converse{(\T \wh{f}\,)}; \rl{\T} Z; \T \wh{f}$, and from this it is immediate that $(\T \wh{f} \alpha,\T \wh{f} \beta) \in \rl{\T} Z'$. 
Combining these observations, we may transform the derivation $\D$ into \[ \D^{f}: \AXC{$\{ \D^{f}_{a'b'}: a' \precsim b' \mid (a',b') \in Z' \}$} \UIC{$\nabla\T \wh{f}\alpha \precsim \nabla\T \wh{f}\beta$} \DisplayProof \] But then we are done, since $M f (\nabla\alpha) = \nabla\T \wh{f}\alpha$, and likewise for $\beta$. \medskip Second, suppose that the last applied rule in $\D$ was $(\nb2)$. That is, $\D$ ends with \[ \D: \AXC{$\{ \D_{\Phi}: \nabla (\T\bigwedge)\Phi \precsim d \mid \Phi \in \mathit{SRD}(A) \}$} \UIC{$\bigwedge \{ \nabla\alpha \mid \alpha \in A \} \precsim d$} \DisplayProof \] We are to transform $\D$ into a derivation $\D^{f}$ of the inequality $\bigwedge \{ \nabla\alpha' \mid \alpha' \in A'\} \precsim \wh{M f} d$, where $A' := \{ \T\wh{f} \alpha \mid \alpha \in A \}$. Working towards an application of ($\nb2$), we claim that \begin{equation} \label{eq:trsrd} \mathit{SRD}(A') \subseteq \Big\{ \Phi' \in \Tom \left( \bigcup_{\alpha\in A} \mathit{Base} ( \T\wh{f} \alpha) \right)\mid \exists \Phi \in \mathit{SRD}(A) \; \mbox{such that } \T\funP\wh{f}(\Phi) = \Phi' \Big \}. \end{equation} To see why this is so, consider an arbitrary slim redistribution $\Phi'$ of $A'$. First observe that \begin{equation} \label{eq:trsrd2} \wh{f}[\mathit{Base}[A]] = \bigcup_{\alpha\in A} (\funP \wh{f}) (\mathit{Base}(\alpha)) = \bigcup_{\alpha\in A} \mathit{Base}((\T\wh{f})\alpha) = \mathit{Base}[A'], \end{equation} where the second identity is by the fact that $\mathit{Base}: \Tom \mathrel{\dot{\rightarrow}} \Pom$ is a natural transformation (cf.~Fact~\ref{fact:basenatural}). If we restrict $\wh{f}$ to the set $\mathit{Base}[A]$, by \eqref{eq:trsrd2} we obtain a \emph{surjective} map \[ g: \mathit{Base}[A] \to \mathit{Base}[A']. \] From the surjectivity of $g$ it follows that $(\funP g) \circ (\funQ g) = \mathit{id}_{\funP\mathit{Base}[A']}$, and so we also find that $(\T\funP g) \circ (\T\funQ g) = \mathit{id}_{\Tom\funP\mathit{Base}[A']}$.
Hence if we define \[ \Phi := (\T\funQ g)\Phi', \] we see that $\Phi' = \T\funP g(\Phi) = \T\funP\wh{f}(\Phi)$. Therefore, using $\in;\funP \wh{f} \subseteq \wh{f};\in$, it is easy to see that $\alpha (\rl{\T} \in) \Phi$ implies $\T \wh{f} \alpha (\rl{\T} \in) \Phi'$ for all $\alpha \in \Tom\Tba G$. Thus, in order to prove \eqref{eq:trsrd} it suffices to prove that $\Phi$ is a slim redistribution of $A$. To see why this is the case, first observe that by definition of $g$ we have that $\T\funQ g: \Tom\funP\mathit{Base}[A'] \to \Tom\funP\mathit{Base}[A]$, and so we find that $\Phi \in \Tom\funP\mathit{Base}[A]$. It is left to prove that every element of $A$ is a lifted member of $\Phi$. Take an arbitrary element $\alpha\in A$; then $\T g (\alpha) \in A'$ by definition of $A'$ and $g$, and so $\T g (\alpha)$ is a lifted member of $\Phi'$ by the assumption that $\Phi' \in \mathit{SRD}(A')$. This means that $(\alpha,\Phi) \in (\T g); (\rl{\T}{\in});(\T\funQ g)$. The key observation now is that $(\T g); (\rl{\T}{\in});(\T\funQ g) \,\subseteq\, \rl{\T} {\in}$, which is immediate from $g;{\in};(\funQ g) \subseteq {\in}$ by the properties of relation lifting. Applying this key observation we find that $(\alpha,\Phi) \in \rl{\T} {\in}$ as required. This finishes the proof of \eqref{eq:trsrd}. Returning to the construction of our derivation $\D^{f}$, consider an arbitrary slim redistribution $\Phi'$ of $A'$, which by \eqref{eq:trsrd} we may assume to be of the form $(\T\funP\wh{f})\Phi$ with $\Phi \in \mathit{SRD}(A)$. Applying the inductive hypothesis to the derivation $\D_{\Phi}$ we obtain a proof $\D_{\Phi}^{f}: \wh{M f}\nabla(\T\bigwedge)(\Phi) \precsim \wh{M f}d$. However, from $\wh{f}\circ\bigwedge = \bigwedge\circ(\funP \wh{f})$ we obtain that \[ \wh{M f}\nabla(\T\mbox{$\bigwedge$})(\Phi) = \nabla(\T \wh{f})(\T\mbox{$\bigwedge$})(\Phi) = \nabla(\T\mbox{$\bigwedge$})(\T\funP \wh{f})(\Phi) = \nabla(\T\mbox{$\bigwedge$})\Phi'.
\] In other words, for any $\Phi' \in \mathit{SRD}(A')$ there is a derivation of the inequality $\nabla(\T\bigwedge)\Phi' \precsim \wh{M f}d$. Putting all these derivations together, one application of ($\nabla 2$) gives the desired derivation \[ \D^{f}: \wh{M f} \Big( \bigwedge \{ \nabla \alpha \mid \alpha \in A \} \Big) \precsim \wh{M f}d. \] \medskip Now suppose that the last applied rule in $\D$ was $(\nb3)$. In this case $\D$ has the following shape: \[ \D: \AXC{$\{ \D_{\alpha}: \nabla\alpha \precsim d \mid \alpha \rl{\T} (\in) \Phi \}$} \UIC{$\nabla(\T\bigvee)\Phi \precsim d$} \DisplayProof \] In order to see which inequality we need to derive, we first compute \[ \wh{M f}(\nabla(\T\mbox{$\bigvee$})\Phi) = \nabla (\T\wh{f})(\T\mbox{$\bigvee$})\Phi = \nabla (\T\mbox{$\bigvee$}) (\T\funP\wh{f})\Phi, \] where the latter identity follows from the fact that $\wh{f}\circ\bigvee = \bigvee\circ\funP\wh{f}$. We are looking for a derivation of the inequality $\nabla (\T\mbox{$\bigvee$})(\T\funP\wh{f})\Phi\precsim \wh{M f}d$. Since we want to apply the rule ($\nb3$), we first compute the set of lifted members of $(\T\funP\wh{f})\Phi$. But since ${\in};\converse{(\funP \wh{f\,})} = \converse{\wh{f\,}};{\in}$, applying relation lifting we obtain ${\rl{\T} \in};\converse{(\T\funP \wh{f}\,)} = \converse{(\T \wh{f}\,)}; {\rl{\T} {\in}}$. This immediately shows that \[ (\alpha',(\T\funP\wh{f})\Phi) \in \rl{\T} {\in} \mbox{ iff } \alpha' = \T\wh{f} \alpha \mbox{ for some } \alpha \rl{\T} {\in} \Phi. \] By the induction hypothesis, for each $\alpha \rl{\T} {\in}\Phi$ we have a derivation $\D_{\alpha}^{f}: \nabla\T\wh{f}\alpha \precsim \wh{M f}d$. In other words, for every lifted member $\alpha'$ of $(\T\funP\wh{f})\Phi$, there is a derivation $\D_{\alpha'}: \nabla\alpha' \precsim \wh{M f}d$.
But then by one application of ($\nb3$) we are done: \[ \D^{f}: \AXC{$\{ \D_{\alpha'}: \nabla\alpha' \precsim \wh{M f}d \mid \alpha' \rl{\T} (\in) (\T\funP\wh{f})\Phi \}$} \UIC{$\wh{M f}(\nabla(\T\mbox{$\bigvee$})\Phi) \precsim \wh{M f} d$} \DisplayProof \] Finally, the cases where the last applied rule in $\D$ was a propositional one are left as exercises to the reader. \end{proof} Given Proposition~\ref{p:prm} it is not difficult to prove that $M$ is a functor. \begin{thm} \label{t:Mfun} $M: \mathsf{Pres} \to \mathsf{Pres}$ is a functor. In addition, $M$ maps pre-isomorphisms to pre-isomorphisms. \end{thm} \begin{proof} Since it is not difficult to verify that $M$ preserves identity arrows and composition, we confine our attention to the proof that $M$ maps presentation morphisms to presentation morphisms. Let $f: \prs{G}{R} \to \prs{G'}{R'}$ be a presentation morphism, and let $c,d \in \Tba(\Tomnb\Tba G) = \Tnb G$ be such that $c \equiv_{\mathbf{M}\prs{G}{R}} d$, that is, there are $\mathbf{M}\prs{G}{R}$-derivations $\D_{1}: c \precsim d$ and $\D_{2}: d \precsim c$. Proposition~\ref{p:prm} provides us with $\mathbf{M}\prs{G'}{R'}$-derivations $\D_{1}^{f}: \wh{M f} c \precsim \wh{M f}d$ and $\D_{2}^{f}: \wh{M f}d \precsim \wh{M f}c$. This means that we have $\wh{M f}c \equiv_{\mathbf{M}\prs{G'}{R'}} \wh{M f}d$, as is required for $M f$ to be a presentation morphism. In order to prove that $M$ maps pre-isomorphisms to pre-isomorphisms, a routine proof will show that $M$ preserves pre-inverses. \end{proof} \subsection{The functor $\mathbb{M}$ and its algebras} \label{ss:functorM} Given the intimate relation between Boolean algebras and their presentations, it should come as no surprise that the presentation functor $M$ naturally induces a functor on the category of Boolean algebras. \begin{definition} \label{d:funaM} The functor $\mathbb{M}: \mathsf{BA} \to \mathsf{BA}$ is defined as $\mathbb{M} := \funB \circ M \circ \funC$.
\end{definition} To explain this functor in words, first consider the objects. Given a Boolean algebra $\mathbb{A}$ with carrier $A := \funU\mathbb{A}$, the elements of $\mathbb{M}\mathbb{A}$ are the equivalence classes of the form $[a]_{\mathbf{M}\funC\mathbb{A}}$, where $a\in\Tnb A$ is a depth-one term over the carrier of $\mathbb{A}$, and the equivalence relation $\equiv_{\mathbf{M}\funC\mathbb{A}}$ is the interderivability relation in the one-step derivation system $\mathbf{M}\funC\mathbb{A}$ which takes, as its additional axioms, the \emph{diagram} $\Delta_{\mathbb{A}}$ of $\mathbb{A}$ (listing the `true facts' about $\mathbb{A}$). Summarizing, we find that \[ \funU\mathbb{M}\mathbb{A} = \Tnb A /{\equiv_{\mathbf{M}\funC\mathbb{A}}}. \] In order to explain the action of $\mathbb{M}$ on a homomorphism $f: \mathbb{A} \to \mathbb{A}'$, the upshot of Theorem~\ref{t:Mfun} is that the map \begin{equation} \label{eq:Mfun} \mathbb{M} f: [a]_{\mathbf{M}\funC\mathbb{A}} \mapsto [\Tnb \funU f (a)]_{\mathbf{M}\funC\mathbb{A}'}, \end{equation} correctly defines a homomorphism $\mathbb{M} f: \mathbb{M}\mathbb{A} \to \mathbb{M}\mathbb{A}'$. Here $\Tnb$ is given in Definition~\ref{d:Tnb}, and the observation \eqref{eq:Mfun} is a direct consequence of the definitions and of the following proposition. \begin{prop} \label{p:funpM-Tnb} Let $f: \prs{G}{R} \to \prs{G'}{R'}$ be a presentation morphism. If $f$ maps generators to generators (in the sense that $f[G] \subseteq G'$), then \[ \wh{M f} = \Tnb f. \] \end{prop} \begin{proof} Suppose that $f: \prs{G}{R} \to \prs{G'}{R'}$ maps generators to generators; then it is immediate that $\wh{f} = \Tba f$. From this it follows that $M f = \Tomnb \wh{f} = \Tomnb \Tba f$, and since $M f$ also maps generators to generators, we find that $\wh{M f} = \Tba M f = \Tba\Tomnb \Tba f = \Tnb f$. \end{proof} For future reference we mention the following.
\begin{definition} \label{d:quot} Given an algebra $\mathbb{B}$, we shall denote with $\quot{\mathbb{B}}: \Tnb(\funU\mathbb{B}) \to \mathbb{M}\mathbb{B}$ the map \[ \quot{\mathbb{B}}: b \mapsto [b]_{\mathbf{M}\funC\mathbb{B}}, \] that is, $\quot{\mathbb{B}} = \ti{\eta}_{M\funC\mathbb{B}}$ is the quotient map sending a formula $b$ to its equivalence class under $\equiv_{\mathbf{M}\funC\mathbb{B}}$. \end{definition} \begin{prop} \label{p:q} The family of homomorphisms $\quot{\mathbb{B}}$, with $\mathbb{B}$ ranging over the class of Boolean algebras, provides a natural transformation $\quot{}: \Tnb\funU \mathrel{\dot{\rightarrow}} \mathbb{M}$. \end{prop} \begin{proof} Let $f: \mathbb{B} \to \mathbb{B}'$ be some Boolean homomorphism. In order to prove that $\quot{}$ is a natural transformation, we need to show that the diagram below commutes: \begin{equation} \label{dg:qntr} \xymatrix{ \mathbb{B} \ar[d]_{f} & \Tnb\funU\mathbb{B} \ar[d]_{\Tnb\funU f} \ar[r]^{\quot{\mathbb{B}}} & \mathbb{M}\mathbb{B} \ar[d]_{\mathbb{M} f} \\ \mathbb{B}'& \Tnb\funU\mathbb{B}' \ar[r]^{\quot{\mathbb{B}'}} & \mathbb{M}\mathbb{B}' } \end{equation} This follows from a straightforward unfolding of the definitions: for any $b \in \Tnb\funU\mathbb{B}$ we have \[ (\mathbb{M} f \circ \quot{\mathbb{B}})(b) = \mathbb{M} f ([b]_{\mathbf{M}\funC\mathbb{B}}) = [\Tnb\funU f(b)]_{\mathbf{M}\funC\mathbb{B}'} = \quot{\mathbb{B}'}(\Tnb\funU f(b)) = (\quot{\mathbb{B}'}\circ\Tnb\funU f)(b). \] Here the second step is by \eqref{eq:Mfun} above. \end{proof} It turns out that $\mathbb{M}$ has some nice properties that will be of use later on. In particular, we may show that $\mathbb{M}$ is \emph{finitary} and \emph{preserves embeddings}. Intuitively, being finitary means, proof-theoretically, that for any Boolean algebra $\mathbb{A}$, a derivation of $\vdash_{\mathbf{M}\funC\mathbb{A}} a_{1}\precsim a_{2}$ can be carried out in a \emph{finite} subalgebra of $\mathbb{A}$.
(Note that this is not obvious since we may be dealing with an infinitary proof system.) Formally, we need the following definition, referring to~\cite{adam94:loca} for more details. Recall that a partial order is \emph{directed} if any finite set of elements has an upper bound. \begin{definition} Given a category $\class{C}$, a \emph{directed diagram} over $\class{C}$ is a diagram which is indexed by a directed partial order. An endofunctor on $\class{C}$ is \emph{finitary} if it preserves colimits of directed diagrams. \end{definition} \noindent In the case of an endofunctor on $\mathsf{Set}$, this definition is equivalent to the one of Section~\ref{s:preliminaries}. \begin{exa} \label{ex:subalg-colimit} Given a Boolean algebra $\mathbb{B}$, let $\struc{\mathit{Sub}(\mathbb{B}),\subseteq}$ be the set of finite subalgebras of $\mathbb{B}$, ordered by inclusion. We can turn this poset into a diagram $S_{\mathbb{B}}$ by supplying, for each pair of finite subalgebras $\mathbb{B}'$ and $\mathbb{B}''$ such that $\mathbb{B}' \subseteq \mathbb{B}''$, the (unique) inclusion $\iota_{\mathbb{B}'\mathbb{B}''}$. Since the variety $\mathsf{BA}$ is \emph{locally finite}, which means that every finitely generated Boolean algebra is finite, one may easily see that every Boolean algebra $\mathbb{B}$ is the directed colimit of its associated diagram $S_{\mathbb{B}}$. \end{exa} In fact, it is a routine exercise to verify that for an endofunctor on the category of Boolean algebras to be finitary, it suffices to preserve the directed colimits of the subalgebra diagrams described in Example~\ref{ex:subalg-colimit}. \begin{prop} \label{p:Mfinemb} $\mathbb{M}$ is a finitary functor that preserves embeddings. \end{prop} \begin{proof} Fix a Boolean algebra $\mathbb{A}$ with carrier set $A := \funU\mathbb{A}$. Given two elements $a_{1},a_{2} \in \Tnb A$, consider the collection of elements of $A$ that occur as \emph{subformulas} of $a_{1}$ and $a_{2}$.
It follows from our earlier remarks on subformulas that this is a \emph{finite} set, which then generates a finite subalgebra $\mathbb{A}'$ of $\mathbb{A}$. By definition we have $a_{1},a_{2} \in \Tnb A'$, where we define $A' := \funU\mathbb{A}'$. We claim that \begin{equation} \label{eq:2-4} \vdash_{\mathbf{M}\funC\mathbb{A}} a_{1} \precsim a_{2} \ \mbox{ iff } \ \vdash_{\mathbf{M}\funC\mathbb{A}'} a_{1} \precsim a_{2}. \end{equation} The interesting direction of (\ref{eq:2-4}) is from left to right. The key observation here is that from the fact that $\mathbb{A}'$ is a finite subalgebra of $\mathbb{A}$, we may infer the existence of a \emph{surjective} homomorphism $f: \mathbb{A} \to \mathbb{A}'$ such that $f(a') = a'$ for all $a'\in A'$. (In other words, $\mathbb{A}'$ is a \emph{retract} of $\mathbb{A}$.) There are various ways to prove this statement; here we refer to Sikorski's theorem that complete Boolean algebras are injective~\cite{siko:theo48}. But if $f$ is a homomorphism, by Proposition~\ref{p:prm} it follows from $\vdash_{\mathbf{M}\funC\mathbb{A}} a_{1} \precsim a_{2}$ that $\vdash_{\mathbf{M}\funC\mathbb{A}'} \wh{M f}(a_{1}) \precsim \wh{M f}(a_{2})$. Since $f$ restricts to the identity on $A'$, so does $\wh{M f} = \Tnb f$ on $\Tnb A'$. As a direct consequence we find that $\wh{M f}(a_{i}) = a_{i}$, for both $i = 1,2$. Thus, indeed, $\vdash_{\mathbf{M}\funC\mathbb{A}'} a_{1} \precsim a_{2}$, which proves \eqref{eq:2-4}. It is now easy to see that $\mathbb{M}$ is a finitary functor. As mentioned above, it suffices to show that $\mathbb{M}\mathbb{A}$ is a directed colimit of the image $\mathbb{M} S_{\mathbb{A}}$ under $\mathbb{M}$ of the subalgebra diagram $S_{\mathbb{A}}$ of $\mathbb{A}$ (see Example~\ref{ex:subalg-colimit}). Given a finite subalgebra $\mathbb{B}$ of $\mathbb{A}$, let $e_{\mathbb{B}}$ denote the inclusion homomorphism, $e_{\mathbb{B}}: \mathbb{B} \hookrightarrow \mathbb{A}$. 
We claim that \begin{equation} \label{eq:Mfin} \struc{\mathbb{M}\mathbb{A},\mathbb{M} e_{\mathbb{B}}}_{\mathbb{B}\in S_{\mathbb{A}}} \text{ is a colimit of } \mathbb{M} S_{\mathbb{A}}. \end{equation} Since for every pair $\mathbb{B},\mathbb{B}'$ such that $\mathbb{B} \hookrightarrow \mathbb{B}' \hookrightarrow \mathbb{A}$, we have $e_{\mathbb{B}} = e_{\mathbb{B}'} \cof \iota_{\mathbb{B}\mathbb{B}'}$, it is obvious from the functoriality of $\mathbb{M}$ that $\struc{\mathbb{M}\mathbb{A},\mathbb{M} e_{\mathbb{B}}}_{\mathbb{B}\in S_{\mathbb{A}}}$ is a cocone over $\mathbb{M} S_{\mathbb{A}}$. To see why it is in fact a colimit, suppose that $\struc{\mathbb{D},d_{\mathbb{B}}}_{\mathbb{B}\in S_{\mathbb{A}}}$ is another cocone over $\mathbb{M} S_{\mathbb{A}}$, and take an arbitrary element of $\mathbb{M}\mathbb{A}$. By definition, this element is of the form $[a]_{\mathbf{M}\funC\mathbb{A}}$ for some formula $a \in \Tnb A$. Let, as above, $\mathbb{A}'$ be a finite subalgebra of $\mathbb{A}$ such that $a \in \Tnb A'$; then it follows from \eqref{eq:2-4} that the following provides a well-defined homomorphism $d: \mathbb{M}\mathbb{A} \to \mathbb{D}$: \[ d([a]_{\mathbf{M}\funC\mathbb{A}}) \mathrel{:=} d_{\mathbb{A}'}([a]_{\mathbf{M}\funC\mathbb{A}'}). \] We leave it as an exercise for the reader to verify that $d$ is the unique homomorphism $d: \mathbb{M}\mathbb{A} \to \mathbb{D}$ such that for all $\mathbb{B} \hookrightarrow \mathbb{A}$, the following diagram commutes: \[ \xymatrix{ \mathbb{M}\mathbb{B} \ar[r]^{\mathbb{M} e_{\mathbb{B}}} \ar[rd]_{d_{\mathbb{B}}} & \mathbb{M}\mathbb{A} \ar[d]^{d} \\& \mathbb{D} }\] This proves \eqref{eq:Mfin}, and as mentioned this suffices to establish that $\mathbb{M}$ is finitary. For the second part of the Proposition, let $e: \mathbb{A} \to \mathbb{B}$ be an embedding. Without loss of generality we will assume that $e$ is actually an inclusion (that is, $\mathbb{A}$ is a subalgebra of $\mathbb{B}$).
In order to prove that $\mathbb{M} e: \mathbb{M}\mathbb{A} \to \mathbb{M}\mathbb{B}$ is also injective, it suffices to prove the following, for all $a_{1},a_{2} \in \Tnb A$: \begin{equation} \label{eq:2-5} \vdash_{\mathbf{M}\funC\mathbb{B}} a_{1} \precsim a_{2} \ \mbox{ implies } \ \vdash_{\mathbf{M}\funC\mathbb{A}} a_{1} \precsim a_{2}. \end{equation} But the proof of (\ref{eq:2-5}) simply follows from two applications of (\ref{eq:2-4}). \end{proof} In the sequel we will be interested in algebras for the functor $\mathbb{M}$. Recall that these are pairs of the form $\struc{\mathbb{A},f}$, where $\mathbb{A}$ is some Boolean algebra, and $f$ is a homomorphism from $\mathbb{M}\mathbb{A}$ to $\mathbb{A}$. First of all, we will see that such $\mathbb{M}$-algebras are Moss algebras in disguise. \begin{definition} \label{d:funV} Given an $\mathbb{M}$-algebra $\struc{\mathbb{A},f}$, we let $V\struc{\mathbb{A},f}$ denote the Moss algebra \[ V\struc{\mathbb{A},f} \mathrel{:=} \struc{\funU\mathbb{A},\neg^{\mathbb{A}},\mbox{$\bigvee$}^{\mathbb{A}},\mbox{$\bigwedge$}^{\mathbb{A}},\nabla^{V\struc{\mathbb{A},f}}}. \] Here we define $\nabla^{V\struc{\mathbb{A},f}}: \Tom\funU\mathbb{A} \to \funU\mathbb{A}$ by recalling that $\Tom\funU\mathbb{A}$ is a subset of $\Tnb\funU\mathbb{A}$, and putting \[ \nabla^{V\struc{\mathbb{A},f}}\alpha \mathrel{:=} f(\quot{\mathbb{A}}(\nabla\alpha)), \] where $\quot{\mathbb{A}}$ is as in Definition~\ref{d:quot}. In addition, given an $\mathbb{M}$-morphism $g: \struc{\mathbb{A},f} \to \struc{\mathbb{A}',f'}$, we define $V g$ to be the morphism $V g: V\struc{\mathbb{A},f} \to V\struc{\mathbb{A}',f'}$ given by \[ V g \mathrel{:=} \funU g. \] That is, as a map, $V g$ is simply the same as $g$. \end{definition} We leave it for the reader to verify that with this definition, $V$ actually defines a functor transforming $\mathbb{M}$-algebras into Moss algebras.
\begin{prop} The operation $V$ defines a functor \[ V: \mathsf{Alg}_{\mathsf{BA}}(\mathbb{M}) \to \mathsf{Alg}_{\mathsf{Set}}(A_{M}). \] \end{prop} Because $\mathbb{M}$ is a finitary functor we can define the initial $\mathbb{M}$-algebra to be the colimit of the first $\omega$ steps of the initial sequence of $\mathbb{M}$. \begin{definition} \label{d:initial_sequence} The \emph{initial sequence} \begin{equation} \label{dg:initseq} \xymatrix{ \mathbbm{2} \ar@{->}[r]^{j_{0}} & \mathbb{M}\mathbbm{2} \ar@{->}[r]^{j_{1}} & \mathbb{M}^{2}\mathbbm{2} \ar@{->}[r]^{j_{2}} & \ldots & \mathbb{M}^{k}\mathbbm{2} \ar@{->}[r]^{j_{k}} & \mathbb{M}^{k+1}\mathbbm{2} \ar@{->}[r]^{j_{k+1}} & \ldots } \end{equation} results from starting with $j_0$ as the unique homomorphism from $\mathbbm{2}$ to $\mathbb{M} \mathbbm{2}$, and defining $j_{k+1} \mathrel{:=} \mathbb{M} j_k$, for all $k \in \mathbb{N}$. We let $\mathbb{M}^{\omega}\mathbbm{2}$, with the collection of maps $(i_{k}: \mathbb{M}^{k}\mathbbm{2} \to \mathbb{M}^{\omega}\mathbbm{2})_{k\in\omega}$, denote the colimit of this sequence. \end{definition} In the following Proposition we gather some facts about these structures. \begin{prop} \label{p:minit}\hfill \begin{enumerate}[\em(1)] \item For each $k \in \omega$, the map $j_{k}:\mathbb{M}^{k}\mathbbm{2} \to \mathbb{M}^{k+1}\mathbbm{2}$ is an embedding, and so is the map $i_{k}: \mathbb{M}^{k}\mathbbm{2} \to \mathbb{M}^{\omega}\mathbbm{2}$. \item There is a map $j_{\omega}: \mathbb{M}^{\omega}\mathbbm{2} \to \mathbb{M}^{\omega+1}\mathbbm{2}$ such that the following diagram commutes, for every $k\in\omega$: \[ \xymatrix{ \mathbb{M}^{k}\mathbbm{2} \ar[d]_{i_{k}} \ar[r]^{j_{k}} & \mathbb{M}^{k+1}\mathbbm{2} \ar[d]^{\mathbb{M} i_{k}} \\ \mathbb{M}^{\omega}\mathbbm{2} \ar[r]_{j_{\omega}} & \mathbb{M}^{\omega+1}\mathbbm{2} } \] \item The map $j_{\omega}$ has an inverse $\heartsuit^{\mathcal{M}}: \mathbb{M}^{\omega+1}\mathbbm{2} \to \mathbb{M}^{\omega} \mathbbm{2}$. 
\item The structure $\struc{\mathbb{M}^{\omega} \mathbbm{2}, \heartsuit^{\mathcal{M}}}$ is an initial $\mathbb{M}$-algebra. \item For all $k\in\omega$ we have that $i_{k+1} = \heartsuit^{\mathcal{M}}\cof\mathbb{M} i_{k}$. \end{enumerate} \end{prop} \begin{proof} Part~1 is immediate by Proposition~\ref{p:Mfinemb} and basic category theory. Part~2 follows from $\mathbb{M}^{\omega} \mathbbm{2}$ being a colimit of the initial sequence~\eqref{dg:initseq}. The inverse of $j_{\omega}$, mentioned in part~3, exists by the facts that the initial sequence is a chain, and hence directed, and that $\mathbb{M}$ preserves directed colimits. For part~4, consider an arbitrary $\mathbb{M}$-algebra $\xymatrix{\mathbb{A} & \mathbb{M}\mathbb{A} \ar[l]_{\alpha}}$, and define the co-cone $\struc{\mathbb{A},\alpha_{k}: \mathbb{M}^{k}\mathbbm{2} \to \mathbb{A}}$ as follows: $\alpha_{0}: \mathbbm{2} \to \mathbb{A}$ is given by initiality, and for $k\in\omega$ we put $\alpha_{k+1} \mathrel{:=} \alpha\cof\mathbb{M}\alpha_{k}$. Then by $\mathbb{M}^{\omega}\mathbbm{2}$ being the colimit of the initial sequence, there is a unique map $\alpha_{\omega}: \mathbb{M}^{\omega}\mathbbm{2} \to \mathbb{A}$ such that $\alpha_{k} = \alpha_{\omega}\cof i_{k}$, for all $k\in\omega$. Now consider the following diagram: \begin{equation} \label{dg:7-1} \xymatrix{ \mathbb{M}^{\omega}\mathbbm{2} \ar[d]_{\alpha_{\omega}} \ar[r]^{j_{\omega}} & \mathbb{M}^{\omega+1}\mathbbm{2} \ar[d]^{\mathbb{M}\alpha_{\omega}} \\ \mathbb{A} & \mathbb{M}\mathbb{A} \ar[l]_{\alpha} } \end{equation} This diagram commutes by $\mathbb{M}^{\omega}\mathbbm{2}$ being the colimit of the initial sequence. Finally, consider the map $\heartsuit^{\mathcal{M}}$ of part~3.
Then \begin{align*} \alpha_{\omega} \cof \heartsuit^{\mathcal{M}} &= (\alpha\cof\mathbb{M}\alpha_{\omega}\cof j_{\omega}) \cof\heartsuit^{\mathcal{M}} & \text{(diagram \eqref{dg:7-1} commutes)} \\ &= \alpha\cof \mathbb{M}\alpha_{\omega} & \text{($j_{\omega}$ and $\heartsuit^{\mathcal{M}}$ are mutually inverse)} \end{align*} which shows that $\alpha_{\omega}$ is a morphism of $\mathbb{M}$-algebras; from this, part~4 is immediate. Finally, for part~5, fix $k \in \omega$. By definition, $\struc{\mathbb{M}^{\omega}\mathbbm{2},i_{n}}_{n\in\omega}$ is a co-cone of the initial sequence, and so we have $i_{k} = i_{k+1} \cof j_{k}$. From this it follows by (the diagram of) part~2 of this Proposition that $j_{\omega}\cof i_{k+1} = \mathbb{M} i_{k}$, and from this we easily derive by part~3 that $i_{k+1} = j_{\omega}^{-1}\cof \mathbb{M} i_{k} = \heartsuit^{\mathcal{M}}\cof \mathbb{M} i_{k}$. \end{proof} The above Proposition justifies the following Definition. \begin{definition} We let $\mathcal{M}$ denote the $\mathbb{M}$-algebra $\struc{\mathbb{M}^{\omega} \mathbbm{2}, \heartsuit^{\mathcal{M}}}$, and we will refer to this structure as the \emph{initial $\mathbb{M}$-algebra}. \end{definition} \begin{rem} \label{r:VMinit} In the sequel, we will be interested in the Moss algebra $V\mathcal{M}$. Observe that the nabla operation $\nabla^{V\mathcal{M}}$ of this structure is defined as $\nabla^{V\mathcal{M}}(\alpha) = \heartsuit^{\mathcal{M}}(\quot{\mathbb{M}^{\omega}\mathbbm{2}}(\nabla\alpha))$, and so by definition of $\heartsuit^{\mathcal{M}}$ we find that \[ \nabla^{V\mathcal{M}}(\alpha) = j_{\omega}^{-1}(\quot{\mathbb{M}^{\omega}\mathbbm{2}}(\nabla\alpha)). \] \end{rem} \subsection{Proof of One-Step Soundness} In this subsection we will establish one-step soundness of the one-step derivation system; that is, we prove the direction from right to left of Theorem~\ref{t:sc1}. \begin{prop} \label{p:1sts} For any set $X$, and for any pair of formulas $c,d \in \Tnb\funP X$ we have \begin{equation} \label{eq:1sts} c \equiv_{sem} d \ \mbox{ if } \ c \equiv_{\mathbf{M}\funC\funaP X} d.
\end{equation} \end{prop} \begin{proof} We argue by induction on derivations, so that clearly it suffices to show that each of the rules $(\nabla 1)$--$(\nabla 3)$ is sound. Fix a set $X$. \medskip\noindent \emph{Case $(\nabla 1)$}. Let ${\sqsubseteq_{0}} \subseteq \Tba\funQ X \times \Tba\funQ X$ be the relation of `provable inequality': $a \sqsubseteq_{0} b$ iff the inequality $a \precsim b$ is derivable. It is straightforward to see that for all $a,b \in \Tba\funQ X$, it follows from $a \sqsubseteq_{0} b$ that $\semzero{a} \subseteq \semzero{b}$. (This boils down to showing that our Boolean axioms of Table~\ref{tb:clax} are sound.) Hence it remains to show that for all $\alpha,\beta \in \Tom\Tba\funQ X$, we have \begin{equation} \label{eq:s-nb1} \text{ if } \alpha \rel{\rl{\T} Z} \beta \text{ for some } Z \subseteq {\sqsubseteq_{0}}, \text{ then } \semone{\nabla\alpha} \subseteq \semone{\nabla\beta}. \end{equation} For this purpose, assume that $\alpha \rel{\rl{\T} Z} \beta$ for some $Z \subseteq {\sqsubseteq_{0}}$, and take an arbitrary element $\xi \in \T X$ such that $\T X,\xi \Vdash_{1} \nabla\alpha$. Then by definition of $\Vdash_{1}$, we have $\xi \rel{\rl{\T}{\Vdash_{0}}}\alpha$, so that by the properties of relation lifting we obtain that $\xi \rel{\rl{\T}({\Vdash_{0}}\corel Z)} \beta$. However, it is straightforward to verify that ${\Vdash_{0}}\corel Z \subseteq {\Vdash_{0}}\corel {\sqsubseteq_{0}} \subseteq {\Vdash_{0}}$, and so we obtain that $\xi \rel{\rl{\T}\Vdash_{0}} \beta$. From this it is immediate that $\T X, \xi \Vdash_{1} \nabla\beta$. \medskip\noindent \emph{Case $(\nabla 2)$. } Given a set $A\subseteq\Tom\Tba(\funP X)$ and an element $\xi \in \T X$, assume that $\T X, \xi \Vdash_{1} \nabla\alpha$ for each $\alpha \in A$. We need to prove that $\T X, \xi \Vdash_{1} \nabla (\T\bigwedge)\Phi$ for some $\Phi \in \mathit{SRD}(A)$.
To come up with a suitable $\Phi$, let $B \mathrel{:=} \bigcup \mathit{Base}[A]$ and consider the map $\phi: X \to \Pom B$ given by \[ \phi: x \mapsto \{ b \in B \mid X,x \Vdash_{0} b \}. \] We claim that the set \[ \Phi \mathrel{:=} (\T\phi)(\xi) \] fulfills our requirements. First of all, in order to prove that $\T X,\xi \Vdash_{1} \nabla(\T\bigwedge)(\Phi)$, observe that by definition of $\phi$, we have $\phi\corel\bigwedge \subseteq {\Vdash_{0}}$. Hence by the properties of relation lifting, it follows that ${\T\phi}\corel {\T{\bigwedge}} \subseteq \rl{\T}{\Vdash_{0}}$. In particular, we find that $(\xi,(\T{\bigwedge})(\Phi)) \in \rl{\T}{\Vdash_{0}}$; but then it is immediate from the definitions that $\T X,\xi \Vdash_{1} \nabla(\T\bigwedge)(\Phi)$. Second, by definition we have $\Phi \in \Tom \Pom B$ and so, in order to show that $\Phi\in \mathit{SRD}(A)$, it suffices to prove that $\alpha\in \lambda\!^{\T}_{\Pow X}(\Phi)$ for all $\alpha \in A$. For this purpose, observe that $\phi\corel{\cv{{\in}}} = {\Vdash_{0}}\rst{X\times B}$. Then by the properties of relation lifting we obtain $\T\phi\corel{\cv{(\rl{\T}{\in}})} = \rl{\T}{\Vdash_{0}}\rst{\T X\times \T B}$. In particular, since $\xi \rel{\rl{\T}{\Vdash_{0}}\rst{\T X\times \T B}} \alpha$ by assumption, it follows that $\alpha \rel{\Tb{\in}} \T\phi(\xi) = \Phi$, as required. \medskip \noindent \emph{Case $(\nabla 3)$. } We could prove the soundness of $(\nb3)$ analogously to our proof of Proposition~\ref{p:nb3s}, but we prefer to give a different proof here, stressing the fact that $\lambda\!^{\T}$ is a distributive law over the power set \emph{monad}, cf.~Fact~\ref{fact:distriblaw}. Fix an element $\Phi\in\Tom\Pom\Tba\funP X$. Given Remark~\ref{r:onestep2}, it suffices to show that \begin{equation}\label{equ:almostnabla3} \semone{\nabla(\T\mbox{$\bigvee$})(\Phi)} = \bigcup\{\lambda\!^{\T}_X (\T \semzero{\cdot}(\alpha)) \mid \alpha \rel{\Tb{\in}} \Phi \}.
\end{equation} The point is now that \eqref{equ:almostnabla3} can be read off the following diagram, where we tacitly use the fact that $\lambda\!^{\T}$ restricts to a natural transformation $\lambda\!^{\T}_{X}: \Tom\Pom \to \funP\Tom$ (see Proposition~\ref{p:nbsem}). \begin{equation} \label{dg:nb3} \xymatrix{ \Tom\Pom\Tba\funP X \ar[rr]^{\lambda\!^{\T}_{\Tba\funP X}} \ar[d]_{\Tom\bigvee} \ar[dr]^{\Tom\Pom(\semzero{\cdot})} & & \funP\Tom\Tba\funP X \ar[d]^{\funP\T\semzero{\cdot}} \\ \Tom\Tba\funP X \ar[d]_{\Tom\semzero{\cdot}} & \Tom\Pom\funP X \ar[dl]^{\Tom\bigcup} \ar[r]^{\lambda\!^{\T}_{\funP X}} & \funP\Tom\funP X \ar[dd]^{\funP\lambda\!^{\T}_X} \\ \Tom\funP X \ar[d]_{\lambda\!^{\T}_X} && \\ \funP\Tom X && \funP\funP \Tom X \ar[ll]_{\bigcup_{TX}} } \end{equation} To see this, first observe that the left hand side of \eqref{equ:almostnabla3} corresponds to the left edge of the diagram, where an arbitrary element $\Phi \in \Tom\Pom\Tba\funP X$ is mapped to $$ \lambda\!^{\T}_X\left( \Tom \semzero{\cdot} (\Tom \mbox{$\bigvee$} (\Phi))\right) = \semone{\nabla(T\mbox{$\bigvee$})(\Phi)}. $$ Similarly, the right hand side of \eqref{equ:almostnabla3} corresponds to clockwise following $\Phi \in \Tom\Pom\Tba\funP X$ along the outer edges of the diagram, from the upper left to the lower left corner, arriving at the object $\bigcup\{\lambda\!^{\T}_X (\T \semzero{\cdot} (\alpha)) \mid \alpha \rel{\Tb{\in}} \Phi \}$. Therefore in order to show (\ref{equ:almostnabla3}) it suffices to show that the diagram commutes. But this is fairly straightforward. First observe that \begin{equation} \label{equ:d} \semzero{\cdot} \cof \bigvee = \bigcup \circ \Pom \semzero{\cdot}, \end{equation} as a straightforward verification will reveal. After applying the functor $\Tom$ to \eqref{equ:d}, we immediately obtain that the left quadrangle of \eqref{dg:nb3} commutes. The right-hand quadrangle commutes since $\lambda\!^{\T}$ is natural. 
And finally, the pentagon commutes since $\lambda\!^{\T}$ is a distributive law over the power set monad, see Fact~\ref{fact:distriblaw}. As a consequence, the diagram \eqref{dg:nb3} itself commutes. \end{proof} \subsection{Proof of One-Step Completeness} We now turn to the one-step completeness of our derivation system. Our proof is based on properties of algebras of the form $\mathbb{M}\mathbb{B}$, with $\mathbb{B}$ an arbitrary finite Boolean algebra. With $\mathit{At}\mathbb{B}$ denoting the set of atoms of $\mathbb{B}$, we can formulate our key insight by stating that the Boolean algebra $\mathbb{M}\mathbb{B}$ is join-generated by its `lifted atoms', that is, its elements of the form $[\nabla\alpha]$ with $\alpha \in \Tom (\mathit{At}\mathbb{B})$. That is to say, we can prove that every element $x$ of $\mathbb{M}\mathbb{B}$ is the join of the lifted atoms below it: \begin{equation*} x = \bigvee \Big\{ [\nabla\alpha] \mid \alpha\in\Tom(\mathit{At}\mathbb{B}), [\nabla\alpha] \leq x \Big\}. \end{equation*} Here, as elsewhere in this subsection, the join is taken in the algebra $\mathbb{M}\mathbb{B}$, and may happen to be taken over an infinite set; in that case, the statement should be read as saying that `the join on the right-hand side exists, and it is equal to the left-hand side'. As we will see, in the case that the functor does not preserve finite sets, this is a convenient way of treating infinitary rules as identities. Turning to the details of the proof: in order to establish the one-step completeness of $\mathbf{M}$, we need to prove the direction from left to right of \eqref{eq:1stsc}. We will reason by contraposition, showing that for arbitrary $a',b' \in \Tnb\funQ X$: \[ a' \not\equiv_{\mathbf{M}\funC\funaP X} b' \mbox{ implies } \semone{a'} \neq \semone{b'}. \] Given the fact that our logic extends classical propositional logic, we may confine ourselves to the case where $b'= \bot$.
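Before going into the details, it may be helpful to see what the join-generation claim amounts to in the simplest possible case, namely where $\T$ is the identity functor (this is merely an illustration; nothing in the sequel depends on it). In that case relation lifting is the identity, so for a subset $\phi \subseteq \mathit{At}\mathbb{B}$ the lifted members of $\phi$ are just its elements. The rule ($\nb1$) then yields $\nabla a \sqsubseteq \nabla\bigvee\phi$ for each $a \in \phi$, while ($\nb3$) makes $[\nabla\bigvee\phi]$ the least such upper bound, so that in $\mathbb{M}\mathbb{B}$ we find \[ [\nabla\mbox{$\bigvee$}\phi] = \bigvee \big\{ [\nabla a] \mid a \in \phi \big\}. \] Since every element $b$ of the finite algebra $\mathbb{B}$ is the join of the atoms below it, this is an instance of the join-generation claim.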
Fix an element $a' \in \Tnb\funQ X$, and assume that $a'$ is one-step consistent: $a' \not\equiv_{\mathbf{M}\funC\funaP X} \bot$, or, equivalently, $[a'] > \bot$. We will prove that $a'$ is one-step satisfiable: $\semone{a'} \neq \varnothing$. Let $\{ \alpha_{1},\ldots, \alpha_{n} \}$ be the (finite!) set of elements $\alpha \in \Tom\funQ X$ such that $\nabla\alpha$ occurs in $a'$, and define \[ \mathit{Base}(a') := \bigcup_{1 \leq i \leq n} \mathit{Base}(\alpha_{i}). \] This is a finite subset of $\Tba\funQ X$, that is, a finite set of Boolean formulas in which the subsets of $X$ are the formal generators. Let $D \subseteq_{\omega} \funP X$ be the collection of those subsets of $X$ that actually occur (as a formal object) in one of the formulas in $\mathit{Base}(a')$, and let $\mathbb{B}$ be the subalgebra of $\funaQ X$ that is generated by $D$. Then both $D$ and $\mathbb{B}$ are finite (whereas their elements may themselves be infinite subsets of $X$). The point is that $\mathbb{B}$ is a finite subalgebra of $\funaQ X$ such that $a' \in \Tnb(\funU\mathbb{B})$. It follows by the key lemma in the one-step completeness proof, Theorem~\ref{l:1st2} below, that \begin{equation} \label{eq:1stc1} [a'] = \bigvee{}^{\mathbb{M}\mathbb{B}} \{ [\nabla\alpha] \mid \alpha \in \Tom (\mathit{At}\mathbb{B}), \nabla\alpha \sqsubseteq a' \}. \end{equation} But since $a'$ is consistent, we have that $[a'] > \bot$, and so we may conclude that there actually exists an $\alpha \in \Tom \mathit{At}\mathbb{B}$ such that $\nabla\alpha \sqsubseteq a'$ --- if there were no such $\alpha$, then the right-hand side of (\ref{eq:1stc1}) would evaluate to $\bot$. By Proposition~\ref{p:1st1} we obtain for this $\alpha$ that $\semone{\nabla\alpha} \neq \varnothing$, and so by soundness we may conclude that $\semone{a'} \supseteq \semone{\nabla\alpha} \neq \varnothing$. In other words, we find that $\semone{a'}$ is one-step satisfiable, as required.
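To make the construction of $\mathbb{B}$ concrete, consider, purely by way of illustration, the case of the powerset functor $\T = \funP$ and the one-step formula $a' = \nabla\{U,V\}$, where $U,V \subseteq X$ may well be infinite. Here $\mathit{Base}(a') = D = \{U,V\}$, and the atoms of the subalgebra $\mathbb{B}$ of $\funaQ X$ generated by $D$ are found among the nonempty sets \[ U\cap V, \qquad U\setminus V, \qquad V\setminus U, \qquad X\setminus(U\cup V), \] so that $\mathbb{B}$ has at most $2^{4} = 16$ elements, even though each of its members may be an infinite subset of $X$.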
\begin{prop} \label{p:1st1} Fix a set $X$ and let $\alpha\in \Tom(\mathit{At}\mathbb{B})$ for some finite subalgebra $\mathbb{B}$ of $\funaP X$. Then $\semone{\nabla\alpha} \neq \varnothing$. \end{prop} \begin{proof} Clearly the set $\mathit{At}\mathbb{B} \subseteq \funP X$ forms a partition of $X$. Let $h: \mathit{At}\mathbb{B} \to X$ be a choice function, that is, $h(a)\in a$ for each $a \in \mathit{At}\mathbb{B}$. Using the properties of relation lifting, it is not hard to derive from this that $(\T h)(\alpha) \rl{\T} (\in_{X}) \alpha$ for each lifted atom $\alpha$. It follows immediately that $(\T h)(\alpha) \in \semone{\nabla\alpha}$. \end{proof} The following is the key lemma in the one-step completeness proof. \begin{thm} \label{l:1st2} Let $\mathbb{B}$ be a finite Boolean algebra. \begin{enumerate}[\em(1)] \item For any two elements $\alpha,\beta \in \Tom(\mathit{At}\mathbb{B})$, we have \begin{equation} \label{eq:atoms1} [\nabla\alpha] \land [\nabla\beta] > \bot \mbox{ iff } \alpha = \beta. \end{equation} \item The top element of $\mathbb{M}\mathbb{B}$ satisfies \begin{equation} \label{eq:atoms2} \top^{\mathbb{M}\mathbb{B}} = \bigvee \{ [\nabla\alpha] \mid \alpha\in\Tom(\mathit{At}\mathbb{B}) \}. \end{equation} \item The set $\{ [\nabla\alpha] \mid \alpha \in \Tom(\mathit{At}\mathbb{B}) \}$ join-generates $\mathbb{M}\mathbb{B}$; that is, for all $a' \in \Tnb\funU\mathbb{B}$: \begin{equation} \label{eq:5:4:0} [a'] = \bigvee \{ [\nabla\alpha] \mid \alpha\in\Tom(\mathit{At}\mathbb{B}), [\nabla\alpha] \leq [a'] \}. \end{equation} \end{enumerate} Summarizing, the algebra $\mathbb{M}\mathbb{B}$ is atomic, with $\mathit{At}(\mathbb{M}\mathbb{B}) = \{ [\nabla\alpha] \mid \alpha \in \Tom(\mathit{At}\mathbb{B}) \}$. \end{thm} \begin{proof} Throughout the proof we will abbreviate $A := \mathit{At}\mathbb{B}$ and $B := \funU\mathbb{B}$. The proof of the first two statements is immediate by Proposition~\ref{p:negder} (take for $\phi$ the set $A$).
Concerning the third statement of the Theorem, observe that the inequality `$\geq$' of (\ref{eq:5:4:0}) always holds, so it will be the opposite inequality that we need to establish. Our proof will be by induction on the complexity of $a'$ (as a Boolean formula over the set $\Tomnb\Tba B$). \smallskip In the base case of the induction, $a'$ is of the form $\nabla\beta$, with $\beta\in \Tom\Tba B$. Our first claim is that without loss of generality, we may assume that $\beta$ actually belongs to $\Tom B$. The justification for this claim is that for any $b \in \Tba B$ there is a $b_{0}\in B$ such that the equation $b_{0} \approx b$ is derivable in the proof system $\funC \mathbb{B}$ associated with the canonical presentation of $\mathbb{B}$: simply let $b_{0} := \ti{\mathit{id}}_{B}(b)$ be the element of $B$ to which the term $b$ evaluates. (For the definition of $\ti{\mathit{id}}_{B}$ we refer to Definition~\ref{d:ind_extension}.) Thus an application of ($\nb1$) shows that for any $\beta \in \Tom\Tba B$ there is a $\beta_{0} \in \Tom B$ such that $\vdash_{\mathbf{M}\funC\mathbb{B}} \nabla\beta \approx \nabla\beta_{0}$: simply take $\beta_{0} := \T\ti{\mathit{id}}_{B}(\beta)$. Hence, assume that indeed $\beta \in \Tom B$. Think of the finitary join as a map $\bigvee: \Pom A \to B$. As such it is a bijection, and this property is inherited by the map $\T\bigvee: \Tom\Pom A \to \Tom B$. Furthermore, it is easy to verify that for any $\phi \in \Pom A$ and any $a \in A$, we have that \begin{equation} \label{eq:5:4:1} a \in_{X} \phi \ \mbox{ iff } \ a \leq \bigvee\phi, \end{equation} which can be succinctly formulated as $\ni_{X} = \bigvee;{\geq}$ (where $\bigvee$ now denotes the graph of the disjunction function).
By the properties of relation lifting, this implies $\rl{\T} (\ni_{X}) = \T\bigvee;\rl{\T} {\geq}$, which can again be reformulated as stating that for any $\Phi \in \Tom\funP A$ and any $\alpha \in \Tom A$ it holds that \begin{equation} \label{eq:5:4:2} \alpha \rl{\T} (\in_{X}) \Phi \mbox{ iff } \alpha \rl{\T} {\leq} (\T\mbox{$\bigvee$})\Phi. \end{equation} Now consider an arbitrary element $\beta\in \Tom B$, and let $\Phi$ be the (unique) element of $\Tom\Pom A$ such that $\beta = (\T\bigvee)(\Phi)$. Then (\ref{eq:5:4:2}) reads that $\alpha \rl{\T} (\in_{X}) \Phi \mbox{ iff } \alpha \rl{\T}(\leq) \beta$, for all $\alpha\in \Tom A$, and so the rule ($\nb3$) instantiates to \begin{equation} \label{eq:5:4:3} [\nabla\beta] = \bigvee \{ [\nabla\alpha] \mid \alpha \in \Tom A \mbox{ and } \alpha \rl{\T} (\leq) \beta \}. \end{equation} But since by the nature of the one-step derivation system we have ${\leq} = {\sqsubseteq}$ on elements of $\funP X$, we also have $\rl{\T} (\leq) = \rl{\T} (\sqsubseteq)$. So if $\alpha \rl{\T} (\leq) \beta$ then one application of ($\nb1$) gives that $\nabla\alpha \sqsubseteq \nabla\beta$, which implies that $[\nabla\alpha] \leq [\nabla\beta]$. From this and (\ref{eq:5:4:3}) it is immediate that \[ [\nabla\beta] \leq \bigvee \{ [\nabla\alpha] \mid \alpha \in \Tom(\mathit{At}\mathbb{B}), [\nabla \alpha] \leq [\nabla \beta] \}. \] This finishes the base case of the inductive proof of (\ref{eq:5:4:0}). \smallskip For the inductive step of the proof there are three cases to consider. First, assume that $a'$ is of the form $\bigvee_{i\in I} a_{i}'$ for some finite index set $I$.
Then we may compute \begin{align*} [a'] &= \bigvee \{ [a_{i}'] \mid i\in I \} \tag*{(assumption)} \\ &= \bigvee \Big\{ \mbox{$\bigvee$} \{ [\nabla\alpha] \mid \alpha \in \Tom A, [\nabla\alpha] \leq [a_{i}'] \} \mid i \in I \Big\} \tag*{(induction hypothesis)} \\ &= \bigvee \Big\{ [\nabla\alpha] \mid \alpha \in \Tom A, [\nabla\alpha] \leq [a_{i}'] \mbox{ for some } i\in I \Big\} \tag*{(associativity of $\bigvee$)} \\ &\leq \bigvee \Big\{ [\nabla\alpha] \mid \alpha \in \Tom A, [\nabla\alpha] \leq \mbox{$\bigvee_{i\in I}$} [a_{i}'] = [a'] \Big\} \tag*{(properties of $\bigvee$)} \end{align*} Second, consider the case that $a'$ is a conjunction $\bigwedge_{i\in I} a_{i}'$ for some finite $I$. Now we have \begin{align*} [a'] &= \bigwedge \{ [a_{i}'] \mid i\in I \} \tag*{(assumption)} \\ &= \bigwedge \Big\{ \mbox{$\bigvee$} \{ [\nabla\alpha] \mid \alpha \in \Tom A, [\nabla\alpha] \leq [a_{i}'] \} \mid i \in I \Big\} \tag*{(induction hypothesis)} \\ &= \bigvee \Big\{ \mbox{$\bigwedge$}_{i\in I} [\nabla\gamma(i)] \mid \gamma: I \to \Tom A \mbox{ such that } [\nabla\gamma(i)] \leq [a_{i}'] \mbox{ for all } i \Big\} \tag*{(distributivity)} \\ &= \bigvee \Big\{ [\nabla\gamma] \mid \gamma \in \Tom A \mbox{ such that } [\nabla\gamma] \leq [a_{i}'] \mbox{ for all } i \Big\} \tag*{(part 1)} \\ &= \bigvee \Big\{ [\nabla\gamma] \mid \gamma \in \Tom A, [\nabla\gamma] \leq \mbox{$\bigwedge_{i\in I}$} [a_{i}'] = [a'] \Big\} \tag*{(properties of $\bigvee$)} \end{align*} Here `distributivity' refers to the fact that in any Boolean algebra, finite meets distribute over arbitrary joins, and `part~1' refers to the first statement of this Theorem. The point here is that we only need to consider those meets $\bigwedge_{i\in I} [\nabla\gamma(i)]$ for which $\gamma(i) = \gamma(j)$ for all $i,j \in I$, since the other meets will reduce to $\bot$. Finally, suppose that $a'$ is a negation, say $a' = \neg b'$.
We first claim that \begin{equation} \label{eq:neg1} \mbox{for all } \alpha \in \Tom A \mbox{ either } \nabla\alpha \sqsubseteq b' \mbox{ or } \nabla\alpha \sqsubseteq \neg b'. \end{equation} To see this, assume that $\nabla\alpha \not\sqsubseteq \neg b'$; then by propositional logic, \[ [\nabla\alpha] \land [b'] > \bot. \] By the inductive hypothesis, we have $[b'] = \bigvee \{ [\nabla\beta] \mid \beta\in\Tom A, [\nabla\beta] \leq [b']\}$, and so by distributivity we obtain \[ \bigvee \{ [\nabla\alpha] \land [\nabla\beta] \mid \beta\in\Tom A, [\nabla\beta] \leq [b']\} \;>\; \bot. \] But then there must be at least one $\beta\in\Tom A$ with $[\nabla\alpha] \land [\nabla\beta] > \bot$ and $[\nabla\beta] \leq [b']$. By the first statement of this Theorem, we can only have $[\nabla\alpha] \land [\nabla\beta] > \bot$ if $\alpha$ is \emph{identical} to $\beta$, and so indeed we find that $[\nabla\alpha] \leq [b']$. This proves (\ref{eq:neg1}). Because of this we can rewrite $[\neg b']$ as follows: \begin{align*} [\neg b'] &= [\neg b'] \land \bigvee \{ [\nabla\alpha] \mid \alpha \in \Tom A \} \tag*{(part 2)} \\ &= \bigvee \{ [\neg b'] \land [\nabla\alpha] \mid \alpha \in \Tom A \} \tag*{(distributivity)} \\ &= \bigvee \Big( \{ [\neg b' \land \nabla\alpha] \mid b' \sqsupseteq \nabla\alpha, \alpha \in \Tom A \} \cup \{ [\neg b' \land \nabla\alpha] \mid \neg b' \sqsupseteq \nabla\alpha, \alpha \in \Tom A \} \Big) \tag*{(\ref{eq:neg1})} \\ &= \bigvee \Big( \{ [\bot] \mid b' \sqsupseteq \nabla\alpha, \alpha \in \Tom A \} \cup \{ [\nabla\alpha] \mid \neg b' \sqsupseteq \nabla\alpha, \alpha \in \Tom A \} \Big) \tag*{(immediate)} \\ &= \bigvee \{ [\nabla\alpha] \mid [\neg b'] \geq [\nabla\alpha], \alpha \in \Tom A \} \tag*{(immediate)} \end{align*} This settles the remaining inductive case, and thus finishes the proof of the third part of the Theorem. 
\end{proof} \subsection{Connecting algebra and coalgebra} Now that we have proved the one-step soundness and completeness of our logic, we will show how to connect the algebraic functor $\mathbb{M}$ to the coalgebraic functor $\T$ by defining a natural transformation \[ \delta: \mathbb{M} \funaQ \mathrel{\dot{\rightarrow}} \funaQ \T \] which in fact provides an embedding $\delta_{X}$ for each set $X$. For the definition of $\delta$, note that given a set $X$, it follows from one-step soundness that $\semone{a} = \semone{b}$ for all $a,b \in \Tnb\funQ X$ such that $[a]_{\mathbf{M}\funC\funaQ X} = [b]_{\mathbf{M}\funC\funaQ X}$. This ensures that the following is well-defined. \begin{definition} \label{d:ntrde} Given a set $X$, let \[ \delta_{X}([a]_{\mathbf{M}\funC\funaQ X}) := \semone{a} \] define a map $\delta_{X}: \mathbb{M}\funaQ X \to \funaQ\T X$. \end{definition} \begin{prop}\label{p:3} The family of maps $\delta_{X}$, with $X$ ranging over the category $\mathsf{Set}$, provides a natural transformation $\delta: \mathbb{M}\funaQ \mathrel{\dot{\rightarrow}} \funaQ\T$. Furthermore, each $\delta_{X}: \mathbb{M}\funaQ X \to \funaQ\T X$ is an embedding. \end{prop} \begin{proof} In order to demonstrate that $\delta$ is a natural transformation, we have to prove that for any function $f: X \to Y$ the following diagram commutes: \[ \xymatrix{ \mathbb{M}\funaQ X \ar[r]^{\delta_X} & \funaQ\T X \\\mathbb{M}\funaQ Y \ar[u]^{\mathbb{M}\funaQ f} \ar[r]_{\delta_Y} & \funaQ\T Y \ar[u]_{\funaQ \T f} } \] In order to see that the above diagram commutes it suffices to show that it commutes on the generators of $\mathbb{M} \funaQ Y$. Consider such a generator $\nabla \alpha \in\Tomnb\Tba \funQ Y$. 
Then \[\begin{array}{lclll} \delta_X(\mathbb{M} \funaQ(f) (\nabla \alpha)) &=& \delta_X ([\Tomnb\Tba \funQ (f)(\nabla \alpha)]) &=& \semone{\Tomnb\Tba \funQ (f)(\nabla \alpha)} \\& \stackrel{\mbox{\tiny Remark~\ref{r:onestep2}}}{=} & \lambda\!^{\T}_X( \T \semzero{\cdot} ( \T\Tba \funQ (f) (\alpha)) ) &=& \lambda\!^{\T}_X(\T (\semzero{\cdot} \circ \Tba \funQ(f))(\alpha)) \\& \stackrel{\mbox{\tiny $\semzero{\cdot}$ natural, Lem.~\ref{p:semzeronat}}}{=} &\lambda\!^{\T}_X(\T(\funQ f \circ \semzero{\cdot})(\alpha)) &=& \lambda\!^{\T}_X (\T\funQ f \circ \T\semzero{\cdot} (\alpha)) \\& \stackrel{\mbox{\tiny $\lambda$ natural}}{=} & \funQ \T f (\lambda\!^{\T}_Y (\T \semzero{\cdot} (\alpha))) &=& \funQ \T f (\semone{\nabla \alpha}) \\ &=& \funQ \T f (\delta_Y([\nabla \alpha])) \end{array}\] Let us finally show that $\delta_X$ is injective for an arbitrary set $X$. Suppose that $\delta_X([a]) = \delta_X([b])$ for some $a,b \in \Tba\Tomnb\Tba \funQ X$. By definition of $\delta_X$ that means that $\semone{a} = \semone{b}$, which by one-step completeness of the logic entails that $[a] = [b]$ in $\mathbb{M} \funaQ X$. \end{proof} On the basis of this natural transformation we can define a second notion of complex algebra of a coalgebra, next to the Moss complex algebra of Definition~\ref{d:cplxalg1}. \begin{definition} \label{d:cplxalg2} Let $\T:\mathsf{Set} \to \mathsf{Set}$ be a standard, weak pullback preserving functor, and let $\mathbb{X} = \struc{X,\xi}$ be a $\T$-coalgebra. We define the \emph{complex $\mathbb{M}$-algebra} of $\mathbb{X}$ as the pair $\mathbb{X}^{*} \mathrel{:=} \struc{\funaQ X, \delta_{X}\cof\funaQ\xi}$. \end{definition} The link between the two kinds of complex algebras is given by the functor $V$ from Definition~\ref{d:funV}, which allows us to see $\mathbb{M}$-algebras as Moss algebras. \begin{prop} \label{p:plusstar} Let $\T:\mathsf{Set} \to \mathsf{Set}$ be a standard, weak pullback preserving functor. Then \[ \mathbb{X}^{+} = V\mathbb{X}^{*}
\] for any $\T$-coalgebra $\mathbb{X}$. Therefore, for any $\T$-coalgebra $\mathbb{X}$ and any formula $a \in \mathcal{L}$ we have \[ \mathit{mng}_{V\mathbb{X}^*}(a) = \mathit{mng}_{\mathbb{X}^+} (a) = \{ x \in X \mid x \Vdash a \} .\] \end{prop} \section{Preliminaries} \label{s:preliminaries} The purpose of this section is to fix our notation and terminology, and to introduce some concepts that underlie our work in all other parts of the paper. \subsection{Basic mathematics and category theory} \label{ss:basics1} First we fix some basic mathematical issues. Given a set $X$, we let $\funP X$ and $\Pom X$ denote the power set and the finite power set of $X$, respectively. We write $Y \subseteq_{\omega} X$ to indicate that $Y$ is a finite subset of $X$. Given a relation $R \subseteq X \times X'$, we denote the \emph{domain} and \emph{range} of $R$ by $\mathsf{dom}(R)$ and $\mathsf{rng}(R)$, respectively, and we denote by $\pi^R_1:R \to X$ its first projection and by $\pi^R_2:R \to X'$ its second projection map. Given subsets $Y \subseteq X$, $Y' \subseteq X'$, the \emph{restriction} of $R$ to $Y$ and $Y'$ is given as \[ R\rst{Y \times Y'} \mathrel{:=} R \cap (Y \times Y'). \] The \emph{converse} of a relation $R \subseteq X \times X'$ is denoted as $\converse{R} \subseteq X' \times X$. The \emph{composition} of two relations $R \subseteq X \times X'$ and $R' \subseteq X' \times X''$ is denoted by $R\corel R'$, while the composition of two functions $f: X \to X'$ and $f': X' \to X''$ is denoted by $f'\cof f$. That is, we denote function composition by $\cof$ and write it from right to left, and we denote relation composition by $\corel$ and write it from left to right. It is often convenient to identify a function $f:X \to X'$ with its \emph{graph}, that is, the relation $\mathit{Gr}(f)=\{(x,f(x))\mid x\in X\} \subseteq X\times X'$.
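These conventions can be checked on small finite examples. The following sketch, with illustrative helper names that are not part of the formal development, models relations as sets of pairs:

```python
# A concrete sanity check of the relational conventions above: graphs of
# functions, converse, and left-to-right relation composition.
# All names here are illustrative, not from the paper.

def graph(f, domain):
    """Gr(f) = {(x, f(x)) | x in domain}: a function seen as a relation."""
    return {(x, f(x)) for x in domain}

def converse(R):
    """The converse relation, obtained by swapping the two coordinates."""
    return {(y, x) for (x, y) in R}

def compose(R, Q):
    """Relation composition R ; Q, written from left to right."""
    return {(x, z) for (x, y) in R for (y2, z) in Q if y == y2}

# R relates each number to its successor modulo 3; f doubles a number.
R = {(0, 1), (1, 2), (2, 0)}
f = lambda y: 2 * y
print(compose(R, graph(f, {0, 1, 2})))  # R ; f, i.e. R ; Gr(f)
print(converse(converse(R)) == R)
```

In particular, composing a relation with the graph of a function, as in $R\corel f$, is just ordinary relation composition applied to $\mathit{Gr}(f)$.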
For example, given a relation $R \subseteq X \times X'$ and a function $f: X' \to X''$, we write $R\corel f$ to denote the composition of relations $R\corel\mathit{Gr}(f)$. We will assume familiarity with basic notions from category theory, including those of categories, functors, natural transformations, (co-)monads and (co-)limits; see for instance~\cite{macl:cate98}. We denote by $\mathsf{Set}$ the category of sets and functions, and by $\mathsf{Rel}$ the category of sets and binary relations. $\mathsf{BA}$ is the category with Boolean algebras as objects and homomorphisms as arrows. Endofunctors on $\mathsf{Set}$ will simply be called \emph{set functors}. We denote by $\Pow $ the \emph{power set functor}, which maps a set $X$ to its power set $\Pow X$ and a function $f:X\to X'$ to its \emph{direct image} $\Pow f: \Pow X \to \Pow X'$, given by $\Pow(X) \ni Y \mapsto \{f(y)\mid y\in Y\}$. Similarly, $\Pom$ denotes the finite power set functor. $\funP$ is in fact (part of) a monad $(\funP,\mu,\eta)$, with $\eta_{X}: X \to \funP(X)$ denoting the singleton map $\eta_{X}: x \mapsto \{ x\}$, and $\mu_{X}: \funP\funP X \to \funP X$ denoting union, $\mu_{X}(\mathcal{A}) \mathrel{:=} \bigcup \mathcal{A}$. The contravariant power set functor will be denoted as $\funQ$; this functor maps a set $X$ to its power set $\funQ X = \funP X$, and a function $f: X \to X'$ to its \emph{inverse image} $\funQ f: \funQ X' \to \funQ X$ given by $\funQ X' \ni Y' \mapsto \{ x \in X \mid fx \in Y' \}$. \subsection{(Co-)algebras} \label{ss:coalgebras} We provide some details concerning the notions of an algebra and a coalgebra for a functor. We start with coalgebras since these provide the semantic structures of the logics considered in this paper.
\begin{definition} Given a functor $\T$ on a category $\class{C}$, a $\T$-coalgebra $(X,\xi)$ is an arrow $\xi:X\to \T X$ in $\class{C}$; a $\T$-coalgebra morphism $f:(X,\xi)\to(X',\xi')$ is an arrow $f:X\to X'$ such that $\T f\cof\xi =\xi'\cof f$, in a diagram: \begin{equation*} \xymatrix{ X \ar[d]_{\xi} \ar[r]^{f} & X' \ar[d]^{\xi'} \\ \T X \ar[r]^{\T f} & \T X' } \end{equation*} The functor $\T$ is called the \emph{type} of the coalgebra $(X,\xi)$. The category of $\T$-coalgebras is denoted by $\mathsf{Coalg}(\T)$, and we denote coalgebras by capital letters $\mathbb{X},\mathbb{Y},\dots$ in blackboard bold. In the case of a set coalgebra (that is, a coalgebra for a set functor), elements of the (carrier of the) coalgebra will be called \emph{states} of the coalgebra, and a \emph{pointed coalgebra} is a pair $(\mathbb{X},x)$ consisting of a coalgebra $\mathbb{X}=(X,\xi)$ and a state $x$ of $\mathbb{X}$. \end{definition} Here are some simple, standard examples of coalgebras for set functors. \begin{exa} \label{ex:1}\hfill \begin{enumerate}[(1)] \item We let $\Id$ denote the \emph{identity} functor on $\mathsf{Set}$. Given a set $C$, we let $C$ itself also denote the \emph{constant} functor, mapping every set $X$ to $C$, and every function $f$ to the identity map $\mathit{id}_{C}$ on $C$. Coalgebras for this functor are called \emph{$C$-colorings}; in case $C$ is of the form $\Pow(\mathsf{Prop})$ for some set $\mathsf{Prop}$ of proposition letters, we may think of a coloring $\xi: X \to C$ as a \emph{$\mathsf{Prop}$-valuation} (in the sense that $\xi$ says of every proposition letter $p$ and every state $x$ whether $p$ is true of $x$ or not). \item A \emph{Kripke frame} $\struc{S,R}$ can be represented as a coalgebra $\struc{S,\sigma_{R}}$ for the power set functor $\funP$, with $\sigma_{R}: S \to \funP S$ mapping a point $s$ to its collection of successors.
It is left as an exercise for the reader to verify that the coalgebra morphisms for this functor precisely coincide with the \emph{bounded morphisms} of modal logic. \item Coalgebras for the functor $\funQ\cof\funQ$ (that is, the contravariant power set functor composed with itself) can be identified with the \emph{neighborhood frames} known from the theory of modal logic as structures that generalize Kripke frames. As a special case of this, but also generalizing Kripke frames, the \emph{monotone neighborhood functor} $N$ maps a set $X$ to the collection $N(X) \mathrel{:=} \{ \alpha \in \funQ\funQ X \mid \alpha \text{ is upward closed }\}$, and a function $f$ to the map $\funQ\funQ f$. \item For a slightly more involved example, consider the finitary \emph{multiset} or \emph{bag} functor $\mathit{B}_{\omega}$. This functor takes a set $X$ to the collection $B_{\omega}X$ of maps $\mu: X \to \mathbb{N}$ of finite support (that is, for which the set $\mathit{Supp}(\mu) := \{ x \in X \mid \mu(x) > 0 \}$ is finite), while its action on arrows is defined as follows. Given an arrow $f: X \to X'$ and a map $\mu \in \mathit{B}_{\omega}X$, we define $(\mathit{B}_{\omega}f)(\mu): X' \to \mathbb{N}$ by putting \[ (\mathit{B}_{\omega}f)(\mu)(x') := \sum \{ \mu(x) \mid f(x) = x' \}. \] \item As a variant of $\mathit{B}_{\omega}$, consider the finitary probability functor $D_{\omega}$, where $D_{\omega} X = \{ \delta: X \to [0,1] \mid \mathit{Supp}(\delta) \text{ is finite and } \sum_{x\in X}\delta(x) = 1 \}$, while the action of $D_{\omega}$ on arrows is just like that of $\mathit{B}_{\omega}$. \end{enumerate} \end{exa} \begin{exa} \label{ex:2} Many examples of coalgebraically interesting set functors are obtained by \emph{composition} of simpler functors. 
Inductively define the following class $\mathit{EKPF}$ of \emph{extended Kripke polynomial functors}: \[ \T \mathrel{:=} \Id \mid C \mid \funP \mid \mathit{B}_{\omega} \mid D_{\omega} \mid \T_{0} \cof \T_{1} \mid \T_{0} + \T_{1} \mid \T_{0} \times \T_{1} \mid \T^{D}, \] where $\cof$, $+$ and $\times$ denote functor composition, coproduct (or disjoint union) and product, respectively, and $(-)^{D}$ denotes exponentiation with respect to some set $D$. Examples of such functors include: \begin{enumerate}[(1)] \item Given a set $C$ of alphabet symbols or colors, the \emph{$C$-streams} are simple specimens of coalgebras for the functor $C \times \Id$; similarly, $C$-labelled binary trees are coalgebras for the functor $\mathit{B}_{C} = C \times \Id \times \Id$. \item \emph{Labelled transition systems} over a set $A$ of atomic actions can be seen as coalgebras for the functor $\funP(-)^{A}$. \item \emph{Deterministic automata} are coalgebras for the functor $(-)^{\Sigma} \times 2$, where $\Sigma$ is the finite alphabet. \item \emph{Kripke models} over a set $\mathsf{Prop}$ of proposition letters can be identified with coalgebras for the functor $\funP(\mathsf{Prop}) \times \funP(-) = \funP \cof C_{\mathsf{Prop}} \times \funP\cof\Id$. \item Generalizing the previous example, viewing $\T$-coalgebras as \emph{frames}, we can define \emph{$\T$-models} over a set $\mathsf{Prop}$ of proposition letters as coalgebras for the functor $\T_{\mathsf{Prop}} = \funP(\mathsf{Prop}) \times \T(-)$. \end{enumerate} \end{exa} \noindent As running examples through this paper we will often take the binary tree functor over a set $C$ of colors, and the power set functor. The key notion of equivalence in coalgebra is that of two states in two coalgebras being \emph{behaviorally equivalent}.
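To see what this means for one of the running examples: for the stream functor $C \times \Id$, a coalgebra assigns to each state a color together with a next state, and the behavior of a state is the stream of colors it unfolds to; two states are behaviorally equivalent precisely if they unfold to the same stream. A minimal sketch (illustrative names, not from the text) that compares finite prefixes of these streams:

```python
# Stream coalgebras for the functor C x Id: xi maps each state to a
# (color, next state) pair. Repeatedly unfolding xi from a state yields
# the color stream that constitutes its behavior.

def behavior_prefix(xi, x, n):
    """The first n colors of the stream unfolded from state x via xi."""
    out = []
    for _ in range(n):
        c, x = xi[x]
        out.append(c)
    return out

# Two coalgebras with different carriers realizing the stream a,b,a,b,...
xi1 = {"p": ("a", "q"), "q": ("b", "p")}
xi2 = {"u0": ("a", "u1"), "u1": ("b", "u2"),
       "u2": ("a", "u3"), "u3": ("b", "u0")}

# States "p" and "u0" are behaviorally equivalent: every finite
# observation on the two streams agrees.
print(behavior_prefix(xi1, "p", 6))
print(behavior_prefix(xi2, "u0", 6))
```

Note that no map between the two carriers is needed to compare the states; this matches the definition below, where equivalence is witnessed by morphisms into a common codomain rather than by an isomorphism of coalgebras.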
In case the functor $\T$ admits a final coalgebra $\mathbb{Z} = \struc{Z,\zeta}$, the elements of $Z$ often provide an intuitive encoding of the notion of behavior, and the unique coalgebra homomorphism $!_{\mathbb{X}}$ can be seen as a map that assigns to a state $x$ in $\mathbb{X}$ its \emph{behavior}. In this case we call two states, $x$ in $\mathbb{X}$ and $x'$ in $\mathbb{X}'$, \emph{behaviorally equivalent} if $!_{\mathbb{X}}(x) = !_{\mathbb{X}'}(x')$. In the general case, when we may not assume the existence of a final coalgebra, we define the notion as follows. \begin{definition} \label{d:beheq} Two elements (often called states) $x,x'$ in two coalgebras $\mathbb{X}$ and $\mathbb{X}'$, respectively, are \emph{behaviorally equivalent} iff there are coalgebra morphisms $f,f'$ with a common codomain such that $f(x)=f'(x')$. \end{definition} \medskip Turning to the dual notion of algebra, we shall use algebras mainly to describe logics for coalgebras, and the notion of an algebra `for a functor' will provide us with an elegant way to exploit the duality with coalgebras. \begin{definition} Given a functor $L$ on a category $\class{A}$, an $L$-algebra $(A,\alpha)$ is an arrow $\alpha:LA\to A$ in $\class{A}$, and an $L$-algebra morphism $f:(A,\alpha)\to(A',\alpha')$ is an arrow $f:A\to A'$ such that $f\cof\alpha=\alpha'\cof Lf$. The category of $L$-algebras is denoted by $\mathsf{Alg}(L)$. \end{definition} \begin{exa}\hfill \begin{enumerate}[(1)] \item If $\class{A}=\mathsf{Set}$, then every signature (or similarity type) induces a functor $LX=\coprod_{n<\omega} \mathit{Op_n}\times X^n$, where $\mathit{Op_n}$ is the set of operation symbols of arity $n$. Then $\mathsf{Alg}(L)$ is (isomorphic to) the category of algebras for the signature.
\item If $\class{A}=\mathsf{BA}$, then we can define a functor $L:\mathsf{BA}\to\mathsf{BA}$ to map an algebra $A$ to the algebra $LA$ generated by $\Box a$, $a \in A$, and quotiented by the relation stipulating that $\Box$ preserves finite meets. Then $\mathsf{Alg}(L)$ is isomorphic to the category of modal algebras \cite{kupk:ston04}. \end{enumerate} \end{exa} \noindent As the second example above shows, functors on $\mathsf{BA}$ give rise to modal logics extending Boolean algebras with operators. \subsection{Properties of set functors} \label{ss:setfunctors} As mentioned in the introduction, in this paper we will restrict our attention to set functors satisfying certain properties. The first one of these is crucial. \paragraph*{Weak pullback preservation} Recall that a set $P$ together with functions $p_1:P \to X_1$ and $p_2: P \to X_2$ is a \emph{pullback} of two functions $f_1:X_1 \to X$ and $f_2:X_2 \to X$ if $f_1 \cof p_1= f_2 \cof p_2$ and for all sets $P'$ and all functions $p_1':P' \to X_1$, $p_2': P' \to X_2$ such that $f_1 \cof p_1' = f_2 \cof p_2'$ there exists a {\em unique} function $e:P' \to P$ such that $p_i \cof e = p_i'$ for $i=1,2$. \\ \centerline{ \xymatrix{% P' \ar@/_/[ddr]_{p'_1} \ar@/^/[drr]^{p'_2} \ar@{-->}[dr]^e & & \\ & P \ar[r]^{p_2} \ar[d]_{p_1} & X_2 \ar[d]^{f_2} \\ & X_1 \ar[r]_{f_1} & X }} \\ If we do not require the function $e$ to be unique, we call $(P,p_1,p_2)$ a \emph{weak pullback}. Furthermore, we call a relation $R \subseteq X_1 \times X_2$ a (weak) pullback of $f_1$ and $f_2$ if $R$ together with the projection maps $\pi_1^R$ and $\pi_2^R$ is a (weak) pullback of $f_1$ and $f_2$. In the category of sets, (weak) pullbacks have a straightforward characterization. \begin{fact}\label{f:wp}% \cite{gumm01:func}. Given two functions $f_1:X_1 \to X_3$ and $f_2:X_2 \to X_3$, let \[ \mathit{pb}(f_1,f_2) \mathrel{:=} \{ (x_1,x_2) \mid f_1(x_1) = f_2(x_2) \}.
\] Furthermore, given a set $P$ with functions $p_1:P \to X_1$ and $p_2:P \to X_2$, let \[ e: y \mapsto (p_1(y),p_2(y)) \] define a function $e: P \to \mathit{pb}(f_1,f_2)$. Then \begin{enumerate}[(1)] \item $(P,p_1,p_2)$ is a pullback of $f_1$ and $f_2$ iff $f_1 \cof p_1= f_2 \cof p_2$ and $e$ is an isomorphism. \item $(P,p_1,p_2)$ is a weak pullback of $f_1$ and $f_2$ iff $f_1 \cof p_1= f_2 \cof p_2$ and $e$ is surjective. \end{enumerate} \end{fact} A functor $\T$ \emph{preserves weak pullbacks} if it transforms every weak pullback $(P,p_{1},p_{2})$ for $f_{1}$ and $f_{2}$ into a weak pullback $(\T P,\T p_{1}, \T p_{2})$ for $\T f_{1}$ and $\T f_{2}$. An equivalent characterization is to require $\T$ to \emph{weakly preserve pullbacks}, that is, to turn pullbacks into weak pullbacks. Further on in Corollary~\ref{cor:extT}, we will see yet another, and probably more motivating, characterization of this property. \begin{exa} All the functors of Example~\ref{ex:1} preserve weak pullbacks, except for the neighborhood functor and its monotone variant. It can be shown that the property of preserving weak pullbacks is preserved under the operations $\cof,+,\times$ and $(-)^{D}$, so that all extended Kripke polynomial functors (Example~\ref{ex:2}) preserve weak pullbacks. \end{exa} \paragraph{Standard functors} The second property that we will impose on our set functors is that of standardness. Given two sets $X$ and $X'$ such that $X \subseteq X'$, let $\iota_{X,X'}$ denote the inclusion map from $X$ into $X'$. A weak pullback-preserving set functor $\T$ is \emph{standard} if it \emph{preserves inclusions}, that is, if $\T\iota_{X,X'} = \iota_{\T X,\T X'}$ for every inclusion map $\iota_{X,X'}$. \begin{rem} Unfortunately the definition of standardness is not uniform throughout the literature.
Our definition of standardness is taken from Moss~\cite{moss:coal99}, while for instance Ad\'{a}mek \& Trnkov\'{a}~\cite{adam:auto90} have an additional condition involving so-called distinguished points. Fortunately, the two definitions are equivalent in case the functor preserves weak pullbacks; see Kupke~\cite[Lemma A.2.12]{kupk:fini06}. Since we almost exclusively consider standard functors that also preserve weak pullbacks, we have opted for the simpler definition. For readers who are interested in some more details, fix sets 0, 1 and 2 of the corresponding sizes, respectively, and let $e,o: 1\to 2$ denote the two distinct maps from 1 to 2. Then the second condition of standardness in the sense of~\cite{adam:auto90} can be phrased as the requirement that $\T 0 = \{x\in \T 1 \mid \T e(x)= \T o(x)\}$, in words: all distinguished points are standard. \end{rem} In any case the restriction to standard functors is for convenience only, since every set functor is `almost standard'~\cite[Theorem~III.4.5]{adam:auto90}. That is, given an arbitrary set functor $\T$, we may find a standard set functor $\T'$ such that the restrictions of $\T$ and $\T'$ to all non-empty sets and non-empty functions are naturally isomorphic. The important observation about $\T'$ is that $\mathsf{Alg}(\T)\cong\mathsf{Alg}(\T')$ and $\mathsf{Coalg}(\T)\cong\mathsf{Coalg}(\T')$. Consequently, in our work we can assume without loss of generality that our functors are standard, and we will do so whenever convenient. \begin{exa} The finitary bag functor $\mathit{B}_{\omega}$ of Example~\ref{ex:1} is not standard, but we may `standardize' it by representing any map $\mu: X \to \mathbb{N}$ of finite support by its `positive graph' $\{ (x,\mu x) \mid \mu x > 0 \}$. Similarly, the finite distribution functor $D_\omega$ can be standardized by identifying a probability distribution $\mu: X \to [0,1] \in D_\omega X$ with the (finite) set $\{(x,\mu x) \mid \mu x > 0 \}$.
\end{exa} \paragraph{Finitary functors} Let $\T$ be a set functor that preserves inclusions. Then $\T$ is \emph{finitary} or \emph{$\omega$-accessible} if, for all sets $X$, \[ \T X = \bigcup \{\T Y \mid Y\subseteq_{\omega} X\}. \] Generalizing the construction of $\Pom$ from $\funP$, we can define, for any set functor $\T$ that preserves inclusions, its \emph{finitary version} \label{page:Tom} $\Tom: \mathsf{Set} \to \mathsf{Set}$ by putting \begin{eqnarray*} \Tom(X) &\mathrel{:=}& \bigcup \{\T Y \mid Y\subseteq_{\omega} X \}, \\ \Tom(f) &\mathrel{:=}& \T f. \end{eqnarray*} It is easy to verify that $\Tom$ preserves inclusions, is finitary, and is a subfunctor of $\T$, as witnessed by the natural transformation with components $\tau_{X}: \Tom X \hookrightarrow \T X$. Given the definition of the action of $\Tom$ on arrows, we shall often write $\T f$ instead of $\Tom f$. In order to avoid confusion, we already mention the following fact, but we postpone its proof until subsection~\ref{ss:standard}. \begin{prop} \label{p:Tomwp} Let $\T$ be a standard set functor that preserves weak pullbacks. Then $\Tom$ is also a standard functor that preserves weak pullbacks. \end{prop} The reason that we are interested in finitary functors is that we want our language to be \emph{finitary}, in the sense that a formula has only finitely many subformulas. The key property of finitary functors that will make this possible is that every $\alpha \in \T X$ is supported by a finite subset of $X$, and in fact, there will always be a \emph{minimal} such set. \begin{definition} \label{d:base} Given a finitary functor $\T$ and an element $\alpha \in \T X$, we define \[ \mathit{Base}^{\T}_{X}(\alpha) \mathrel{:=} \bigcap \{ Y \subseteq_{\omega} X \mid \alpha \in \T Y \}. \] \end{definition} We write $\mathit{Base}^{\T}$ rather than $\mathit{Base}^{\Tom}$, and in fact omit the superscript whenever possible.
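For functors whose elements have a finite set-based representation, the defining intersection of Definition~\ref{d:base} can be computed directly by brute force over a finite ambient set. The following sketch (illustrative names; a small finite set stands in for $X$) recovers the base of an element of $\Pom X$ and of the standardized bag functor:

```python
from itertools import combinations

def subsets(X):
    """All subsets of a finite set X."""
    xs = list(X)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def base(alpha, X, lives_in):
    """Base(alpha): the intersection of all Y <= X with alpha in T(Y),
    where membership of alpha in T(Y) is tested by the predicate lives_in."""
    out = set(X)
    for Y in subsets(X):
        if lives_in(alpha, Y):
            out &= Y
    return out

# Finite power set functor: alpha in P(Y) iff alpha <= Y,
# so the base of a finite set is the set itself.
print(base({1, 2}, {1, 2, 3, 4}, lambda a, Y: a <= Y))

# Standardized bag functor: alpha in B(Y) iff the support of alpha
# is contained in Y, so the base of a bag is its support.
bag = {("a", 2), ("b", 1)}
print(base(bag, {"a", "b", "c"}, lambda a, Y: {x for x, _ in a} <= Y))
```

The computation also illustrates why the intersection yields a smallest support: in each case the family of supporting sets is closed under intersection.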
\begin{exa} The following examples are easy to check: $\mathit{Base}^{\Id}_{X}: X \to \Pom X$ is the singleton map, $\mathit{Base}^{\funP}_{X}: \Pom X \to \Pom X$ is the identity map on $\Pom X$, $\mathit{Base}^{\mathit{B}_{C}}_{X}: C \times X \times X \to \Pom X$ maps the triple $(c,x_{1},x_{2})$ to the set $\{ x_{1}, x_{2} \}$, and $\mathit{Base}^{D_{\omega}}$ maps a finitary distribution to its support. \end{exa} \begin{prop} \label{fact:basenatural} Let $\T: \mathsf{Set} \to \mathsf{Set}$ be a standard functor that preserves weak pullbacks. \begin{enumerate}[\em(1)] \item For any $\alpha \in \Tom X$, $\mathit{Base}^{\T}_{X}(\alpha)$ is the smallest set $Y$ such that $\alpha \in \T Y$. \item $\mathit{Base}^{\T}$ provides a natural transformation $\mathit{Base}: \Tom \to \Pom$. \end{enumerate} \end{prop} \begin{proof} Part~(1) is proved in~\cite{vene:auto06}. For the second part, consider a map $f:X\to X'$. We have to show $\Pom f\cof \mathit{Base}_X=\mathit{Base}_{X'}\cof T_\omega f$. Fix $\alpha\in T_\omega X$ and write $B=\mathit{Base}_X(\alpha)$ and $B'= \mathit{Base}_{X'}(T_\omega f(\alpha))$. We need to prove $B'=f[B]$. For the inclusion ``$\subseteq$'', from $$\xymatrix{ T_\omega B \ar@{^{(}->}[d]\ar[r] & T_\omega(f[B]) \ar@{^{(}->}[d] \\ T_\omega X \ar[r]^{T_\omega f} & T_\omega X'} $$ we see that $f[B]$ supports $T_\omega f(\alpha)$ and, as $B'$ is the smallest such, $B'\subseteq f[B]$ follows. For the opposite inclusion ``$\supseteq$'', since $T_\omega$ preserves weak pullbacks, the dotted arrow in $$\xymatrix{ 1 \ar@/_/[ddr]_{\alpha} \ar@/^/[drr]^{T_\omega f(\alpha)} \ar@{.>}[dr]|-{} \\ & T_\omega(f^{-1}(B')) \ar@{^{(}->}[d]\ar[r] & T_\omega(B') \ar@{^{(}->}[d] \\ & T_\omega X \ar[r]^{T_\omega f} & T_\omega X'} $$ exists and shows that $\alpha\in T_\omega(f^{-1}(B'))$. By minimality of the base, it follows $B\subseteq f^{-1}(B')$, that is, $B'\supseteq f[B]$. 
\end{proof} \begin{rem} A stronger version of the previous proposition follows from results in~\cite{gumm05:from}. Let us briefly sketch the details using the terminology of~\cite{gumm05:from}. First of all note that it is not difficult to see that all finitary set functors preserve intersections. Therefore \cite[Theorem 7.4]{gumm05:from} implies that $\mathit{Base}$ is sub-cartesian (though not necessarily natural), and this, together with \cite[Theorem 8.1]{gumm05:from}, implies that $\T$ preserves preimages iff $\mathit{Base}$ is natural. Any weak pullback preserving functor preserves preimages, and thus this statement implies Proposition~\ref{fact:basenatural}. \end{rem} \section{Relation Lifting} \label{s:relationlifting} Given the key role that the lifting of binary relations plays in the semantics of Moss' logic, we need to discuss the notion in some detail. After giving the formal definition, we mention some of the basic properties of relation lifting: first the ones that hold for any functor, then the ones for which we require the functor to preserve weak pullbacks, and finally, the ones that rest on the fact that the set functor under consideration is standard. We also discuss the connection of relation lifting with categorical distributive laws: as we will see later on, this connection plays an important role in the axiomatization of $\nabla$. Finally, we introduce the notion of a slim redistribution, which is needed to formulate one of our axioms. \subsection{Basics} \label{ss:basics2} First we give the formal definition of relation lifting. \begin{definition} \label{d:rellift} Let $\T$ be a set functor. Given a binary relation $R$ between two sets $X_1$ and $X_2$, we define the relation $\rl{\T} R \subseteq \T X_1 \times \T X_2$ as follows: \[ \rl{\T} R := \{ ((\T\pi^R_{1}) \rho, (\T\pi^R_{2})\rho) \mid \rho \in \T R \}. \] The relation $\rl{\T} R$ will be called the \emph{$\T$-lifting} of $R$.
\end{definition} \noindent In other words, we apply the functor $\T$ to the relation $R$, seen as a \emph{span} $\xymatrix{X_{1} & R \ar[l]_{\pi_{1}} \ar[r]^{\pi_{2}} & X_{2}}$, and define $\rl{\T} R$ as the image of $\T R$ under the product map $\langle \T\pi_{1},\T\pi_{2}\rangle$ obtained from the lifted projection maps $\T\pi_{1}$ and $\T\pi_{2}$. In a diagram: \\ \centerline{\xymatrix{% X_{1} & R \ar[l]_{\pi_{1}} \ar[r]^{\pi_{2}} & X_{2} \\ \T X_{1} & \T R \ar[l]_{\T \pi_{1}} \ar@{>>}[d] \ar@/_5mm/[dd]_{\langle\T\pi_{1},\T\pi_{2}\rangle} \ar[r]^{\T\pi_{2}} & \T X_{2} \\ & \rl{\T} R \ar@{^{(}->}[d] \\ & \T X_{1} \times \T X_{2} \ar[ruu] \ar[luu] }} Let us first see some concrete examples. \begin{exa}\label{ex:rellift} Fix two sets $X$ and $X'$, and a relation $R \subseteq X \times X'$. For the identity and constant functors, we find, respectively: \begin{eqnarray*} \rl{\Id} R &=& R \\ \rl{C} R &=& \Id_{C}. \end{eqnarray*} The relation lifting associated with the power set functor $\funP$ can be defined concretely as follows: \[ \rl{\funP} R = \{ (A,A') \in \funP X \times \funP X' \mid \forall a \in A\, \exists a' \in A'. aRa' \text{ and } \forall a' \in A'\, \exists a \in A. aRa' \}. \] This relation is known under many names, of which we mention that of the \emph{Egli-Milner} lifting of $R$. Relation lifting for the finitary multiset functor is slightly more involved: given two maps $\mu\in \mathit{B}_{\omega}X, \mu'\in \mathit{B}_{\omega}X'$, we put \begin{align*} \mu \rel{\rl{\mathit{B}_{\omega}}R} \mu' \text{ iff there is some map } \rho: R\to\mathbb{N} \text{ such that } & \forall x \in X.\, \textstyle{\sum} \{ \rho(x,x') \mid x' \in X' \} = \mu(x) \\ \text{and } & \forall x' \in X'.\, \textstyle{\sum} \{ \rho(x,x') \mid x \in X \} = \mu'(x'). \end{align*} The definition of $\rl{D_{\omega}}$ is similar. Finally, relation lifting interacts well with various operations on functors~\cite{herm98:stru}.
In particular, we have \begin{eqnarray*} \rl{T_{0}\cof T_{1}}R &=& \rl{T_0}(\rl{T_1} R) \\ \rl{T_{0}+T_{1}}R &=& \rl{T_0}R \cup \rl{T_1}R \\ \rl{T_{0}\times T_{1}}R &=& \left\{ \left((\xi_{0},\xi_{1}),(\xi'_{0},\xi'_{1})\right) \mid (\xi_{i},\xi'_{i}) \in \rl{T_i}R, \text{ for } i \in \{0,1\} \right\} \\ \rl{T^{D}}R &=& \{ (\phi,\phi') \mid (\phi(d),\phi'(d)) \in \rl{T}R \text{ for all } d \in D \}. \end{eqnarray*} From this one may easily calculate the relation lifting of all extended Kripke polynomial functors of Example~\ref{ex:2}. \end{exa} \begin{rem} \label{r:rl-wd} Strictly speaking, when defining the $\T$-lifting of a relation $R \subseteq X_{1} \times X_{2}$, we should explicitly mention the type of $R$, that is, the pair of sets $X_{1}$ and $X_{2}$. To see this, let $X_{1},X_{2},Y_{1}$ and $Y_{2}$ be sets such that $Y_{i} \subseteq X_{i}$, for $i \in \{ 1,2 \}$. Now any relation $R \subseteq Y_{1} \times Y_{2}$ can also be seen as a relation between $X_{1}$ and $X_{2}$. But in general we do not have $\T Y_{i} \subseteq \T X_{i}$, and so the relation $\rl{\T} R \subseteq \T Y_{1}\times \T Y_{2}$ is not necessarily a relation between $\T X_{1}$ and $\T X_{2}$. It is easy to see that if $\T$ preserves inclusions, then this problem evaporates. Since we will assume $\T$ to be standard almost throughout the paper, we ignore this subtlety for the time being. Readers who are worried about this may add the condition that $\T$ preserves inclusions throughout the subsections~\ref{ss:basics2} and~\ref{ss:wpp}. \end{rem} \begin{rem} \label{r:bisi} Relation lifting can be used to define the notion of a \emph{bisimulation} between two coalgebras. Recall that, given two coalgebras $\mathbb{X}_{1} = \struc{X_{1},\xi_{1}}$ and $\mathbb{X}_{2} = \struc{X_{2},\xi_{2}}$, a relation $Z \subseteq X_{1} \times X_{2}$ is a bisimulation if there is a coalgebra map $\zeta: Z \to \T Z$ making the two projection functions $\pi_{1}: Z \to X_{1}$ and $\pi_{2}: Z \to X_{2}$ into coalgebra morphisms.
It can be shown that this is equivalent to requiring that $\xi_{1}(x_{1}) \rel{\rl{\T} Z} \xi_{2}(x_{2})$ whenever $x_{1} \rel{Z} x_{2}$. \end{rem} As mentioned, in this section we will discuss some important properties of relation lifting. We start with listing a number of properties that $\T$-lifting has for {\em any} given set functor $\T$. The proof of the fact below is elementary. \begin{fact} \label{fact:basiclift} Let $\T$ be an arbitrary set functor. Then the relation lifting $\rl{\T}$ \begin{enumerate}[(1)] \item extends $\T$: $\rl{\T} f = \T f$ for all functions $f:X_{1} \to X_{2}$; \item preserves the diagonal: $\rl{\T} \Id_{X} = \Id_{\T X}$ for any set $X$; \item is monotone: $R \subseteq Q$ implies $\rl{\T} {R} \subseteq \rl{\T} {Q}$ for all relations $R,Q \subseteq X_{1} \times X_{2}$; \item commutes with taking converse: $\rl{\T} \converse{R}=\converse{(\rl{\T} R)}$ for all relations $R \subseteq X_{1} \times X_{2}$. \end{enumerate} \end{fact} \subsection{Weak pullback preserving functors} \label{ss:wpp} Fact~\ref{fact:basiclift} states a number of operations on relations that interact well with relation lifting. Conspicuously absent from that list is \emph{relational composition}: observe that $\rl{\T}$ would be a \emph{functor} on the category $\mathsf{Rel}$ if it satisfied $\rl{\T}(R\corel Q) = \rl{\T} R \corel \rl{\T} Q$. Here we arrive at the main reason why we are interested in functors that preserve weak pullbacks: as we will see now, that property is a necessary and sufficient condition on $\T$ for $\rl{\T}$ to be functorial. In fact, given the characterisation of (weak) pullbacks in the category $\mathsf{Set}$, in terms of the relation $\mathit{pb}$ (see Fact~\ref{f:wp}), it is easy to formulate the composition $R\corel Q$ of two relations $R$ and $Q$ as a pullback of the projection maps $\pi_2^R$ and $\pi_1^Q$.
Therefore it is not surprising that the question whether the $\T$-lifting of a relation commutes with the composition of relations is tightly connected with the preservation of weak pullbacks by $\T$. The following fact was first proved in~\cite{trnk80:gene}. \begin{fact}\label{fact:char_wpp} A functor $\T:\mathsf{Set} \to \mathsf{Set}$ weakly preserves pullbacks iff for all relations $R \subseteq X_1\times X_2$ and $Q \subseteq X_2 \times X_3$ we have \begin{equation} \label{eq:char-wpp} \rl{\T} (R \corel Q) = \rl{\T} R \corel \rl{\T} Q. \end{equation} \end{fact} \begin{proof} First, assume that $\T$ preserves weak pullbacks and let $R\subseteq X_1\times X_2$ and $Q \subseteq X_2 \times X_3$ be two binary relations. The pullback of $\pi_2^R$ and $\pi_1^Q$ is given by the following set: \[ \mathit{pb} \mathrel{:=} \{ \left\langle(x_1,x_2),(x_3,x_4)\right\rangle \mid (x_1,x_2) \in R, (x_3,x_4) \in Q \; \mbox{and} \; x_2 = x_3\} , \] and there is a surjective map $e: \mathit{pb}( \pi_2^R,\pi_1^Q) \twoheadrightarrow R \corel Q$ given by $e(\left\langle(x_1,x_2),(x_3,x_4) \right\rangle) = (x_1,x_4)$ with the property that \begin{equation}\label{equ:e} \pi_1^{R\corel Q} \cof e = \pi_1^R \cof \pi_1^{\mathit{pb}} \quad \mbox{and} \quad \pi_2^{R\corel Q} \cof e = \pi_2^Q \cof \pi_2^{\mathit{pb}}. \end{equation} The situation is depicted in Figure~\ref{fig:compo}. \begin{figure} \centerline{\xymatrix{ & & \mathit{pb} \ar@/_{.7cm}/[ldd]_{\pi_1^\mathit{pb}} \ar@/^{.7cm}/[rdd]^{\pi_2^\mathit{pb}} \ar@{-->>}[d]^e & & \\ & & R\corel Q \ar@/_{1cm}/[lldd]_{\pi_1^{R\corel Q}} \ar@/^{1cm}/[rrdd]^{\pi_2^{R\corel Q}} & & \\ & R \ar[ld]_{\pi^R_1} \ar[rd]^{\pi^R_2} & & Q \ar[ld]_{\pi_1^Q} \ar[rd]^{\pi_2^Q} & \\ X_1 & & X_2 & & X_3 }} \caption{Composition of relations \& pullback} \label{fig:compo} \end{figure} We now prove \eqref{eq:char-wpp}. For the inclusion ``$\subseteq$'', let $(x,y) \in \rl{\T}(R\corel Q)$. 
By definition there exists some $z \in \T(R\corel Q)$ such that $\T \pi^{R\corel Q}_1(z) = x$ and $\T \pi^{R\corel Q}_2(z) = y$. We know that $e$ and thus also $\T e$ is surjective. Therefore there exists some $z' \in \T(\mathit{pb})$ such that $\T e(z')= z$, and using (\ref{equ:e}) we obtain $\T\pi_1^R(\T \pi_1^\mathit{pb}(z')) = \T\pi_1^{R\corel Q}(\T e(z')) = \T \pi_1^{R\corel Q}(z)= x$ and similarly $\T\pi_2^Q(\T \pi_2^\mathit{pb}(z')) = y$. On the other hand, by the definition of $\mathit{pb}$, we have $\T \pi_2^R(\T\pi_1^\mathit{pb}(z'))=\T \pi_1^Q(\T\pi_2^\mathit{pb}(z'))$; call this common value $u$. This implies that $(x,u) \in \rl{\T}(R)$ and $(u,y) \in \rl{\T}(Q)$, and we have proved $(x,y)\in \rl{\T}(R)\corel \rl{\T}(Q)$ as required. For the converse inclusion suppose that $(x,y) \in \rl{\T}(R)\corel \rl{\T}(Q)$. We want to prove that this implies $(x,y)\in \rl{\T}(R\corel Q)$. It follows from $(x,y) \in \rl{\T}(R)\corel \rl{\T}(Q)$ that there is some $u \in \T X_{2}$ such that $(x,u) \in \rl{\T}(R)$ and $(u,y) \in \rl{\T}(Q)$; spelling out the definitions we find a $u_x \in \T R$ and a $u_y \in \T Q$ such that $\T \pi_1^R(u_x)=x$, $\T \pi_2^Q(u_y) =y$ and $\T \pi_2^R(u_x) =\T\pi_1^Q(u_y) = u$. By our assumption that $\T$ is weak pullback preserving we have that $\T(\mathit{pb})$, together with the maps $\T \pi_1^{\mathit{pb}}$ and $\T \pi_2^{\mathit{pb}}$, is a weak pullback of $\T \pi^R_2$ and $\T \pi^Q_1$. Therefore there must be some $z \in \T (\mathit{pb})$ such that $\T \pi_1^\mathit{pb} (z) = u_x$ and $\T \pi_2^\mathit{pb}(z) = u_y$. This implies $$ \T \pi^{R\corel Q}_1 (\T e (z)) = \T \pi_1^R( \T \pi_1^\mathit{pb} (z)) = \T \pi_1^R (u_x) = x$$ and likewise $\T \pi^{R\corel Q}_2 (\T e(z)) = y$. By definition this means that $(x,y)\in \rl{\T}(R\corel Q)$ as required.
\medskip For the converse implication of the statement of the proposition, suppose that $\T$ does not preserve weak pullbacks and let the following be a pullback that is not weakly preserved by $\T$:\\ \centerline{ \xymatrix{ P\ar[d]_{p_1} \ar[r]^{p_2} & X_2 \ar[d]_g\\ X_1 \ar[r]_f & X_3} } Then it is not difficult to see that the following isomorphic diagram is also a pullback diagram that is not weakly preserved by $\T$:\\ \centerline{ \xymatrix{ R\ar[d]_{\pi^R_1} \ar[r]^{\pi^R_2} & \converse{\mathit{Gr}(g)} \ar[d]^{\pi_1^{\converse{g}}}\\ \mathit{Gr}(f) \ar[r]_{\pi_2^f} & X_3}} where $\mathit{Gr}(f)$ and $\converse{\mathit{Gr}(g)}$ denote the graph of $f$ and the converse of the graph of $g$, respectively, and $R \subseteq \mathit{Gr}(f) \times \converse{\mathit{Gr}(g)}$ is the pullback of $\pi_2^f$ and $\pi_1^{\converse{g}}$. We will show the existence of a pair $(x,y) \in (\rl{\T} f \corel \rl{\T} \converse{g}) \setminus \rl{\T}(f\corel \converse{g})$, which is a clear counterexample to \eqref{eq:char-wpp}. As before there is a surjection $e': R \twoheadrightarrow f\corel \converse{g}$ satisfying \begin{equation} \label{eq:qq1} \pi_{1}^{f\corel \converse{g}} \cof e' = \pi_{1}^{f} \cof \pi_{1}^{R} \text{ and } \pi_{2}^{f\corel \converse{g}} \cof e' = \pi_{2}^{\converse{g}} \cof \pi_{2}^{R}. \end{equation} By assumption, $(\T R, \T \pi_{1}^{R}, \T \pi_{2}^{R})$ is not a weak pullback of $\T \pi_2^f$ and $\T \pi_1^{\converse{g}}$. Hence by Fact~\ref{f:wp}(2), there must be a $z_1 \in \T \mathit{Gr}(f)$ and a $z_2 \in \T \converse{\mathit{Gr}(g)}$ such that $\T \pi_2^f (z_1) = \T \pi_1^{\converse{g}}(z_2) = u$, while \begin{equation}\label{equ:noexist} \mbox{there is no } z \in \T R \mbox{ such that } \T\pi_1^R(z) = z_1 \mbox{ and } \T \pi_2^R(z) = z_2 . \end{equation} Define $x \mathrel{:=} \T \pi_1^f(z_1)$ and $y\mathrel{:=}\T \pi_2^{\converse{g}}(z_2)$.
Since $\pi_{2}^{f} = f \cof \pi_{1}^{f}$, we have $\T\pi_{2}^{f} = \T f \cof \T\pi_{1}^{f}$, and so we find $u = (\T f) x$; likewise, we obtain $u = (\T g) y$. From this it is clear that $(x,y) \in \rl{\T} f\corel \rl{\T} \converse{g}$. Now suppose for a contradiction that $(x,y) \in \rl{\T} (f\corel \converse{g})$. By definition this entails the existence of some $z' \in \T(f\corel \converse{g})$ such that $\T \pi_1^{f\corel \converse{g}}(z')=x$ and $\T \pi_2^{f\corel \converse{g}}(z') =y$. By surjectivity of $e'$, and hence of $\T e'$, there must be some $z'' \in \T R$ such that $\T e'(z'')=z'$. Furthermore it follows from \eqref{eq:qq1} that $$x = \T \pi_1^{f\corel \converse{g}} (z') = \T \pi_1^{f\corel \converse{g}} (\T e'(z'')) = \T\pi_1^f( \T\pi_1^R(z'')) $$ and, similarly, $y = \T\pi_2^{\converse{g}}( \T\pi_2^R(z''))$. Both $\pi_1^f$ and $\pi_2^{\converse{g}}$ are bijections, hence $\T \pi_1^f$ and $\T \pi_2^{\converse{g}}$ are isomorphisms, and thus we obtain $\T\pi_1^R(z'')=z_1$ and $\T \pi_2^R(z'') = z_2$, contradicting (\ref{equ:noexist}) above. \end{proof} Putting this together with Fact~\ref{fact:basiclift}(2,3) we immediately obtain the following. \begin{cor}\label{cor:extT} Let $\T$ be a set functor and let $\rl{\T}$ be the operation that maps a set $X$ to $\rl{\T} X \mathrel{:=} \T X$ and a relation $R$ to the $\T$-lifting $\rl{\T} R$ of $R$. Then the following are equivalent: \begin{enumerate}[\em(1)] \item $\T$ preserves weak pullbacks; \item $\rl{\T}$ is a functor on the category $\mathsf{Rel}$ of sets and relations; \item $\rl{\T}$ is a \emph{relator}, that is, a monotone functor on the category $\mathsf{Rel}$. \end{enumerate} \end{cor} Closely related to this is an important consequence of the functor preserving weak pullbacks, namely that the notions of bisimilarity and behavioral equivalence coincide.
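As a concrete sanity check of Fact~\ref{fact:char_wpp}, the Egli--Milner lifting from Example~\ref{ex:rellift} can be computed by brute force for small finite relations. The following Python sketch (all identifiers are ours, not from the paper) verifies the composition law $\rl{\funP}(R\corel Q) = \rl{\funP}R \corel \rl{\funP}Q$ on one small instance; since $\funP$ preserves weak pullbacks, the two sides must agree.

```python
from itertools import combinations

def egli_milner(R):
    """Egli-Milner lifting of a relation R (a set of pairs): it relates
    finite sets A, A' iff every a in A has an R-successor in A' and
    every a' in A' has an R-predecessor in A."""
    def lifted(A, Ap):
        return (all(any((a, ap) in R for ap in Ap) for a in A) and
                all(any((a, ap) in R for a in A) for ap in Ap))
    return lifted

def compose(R, Q):
    """Relational composition R ; Q."""
    return {(x, z) for (x, y) in R for (yp, z) in Q if y == yp}

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

# A small instance: R between X1 and X2, Q between X2 and X3.
X1, X2, X3 = {0, 1}, {'a', 'b'}, {10, 11}
R = {(0, 'a'), (1, 'a'), (1, 'b')}
Q = {('a', 10), ('b', 10), ('b', 11)}

lift_RQ = egli_milner(compose(R, Q))
lift_R, lift_Q = egli_milner(R), egli_milner(Q)

# P preserves weak pullbacks, so both sides of the composition law agree.
for A in powerset(X1):
    for C in powerset(X3):
        lhs = lift_RQ(A, C)
        rhs = any(lift_R(A, B) and lift_Q(B, C) for B in powerset(X2))
        assert lhs == rhs
```

The nested loops simply enumerate $\funP X_1 \times \funP X_3$ and an intermediate witness in $\funP X_2$, which is only feasible for the tiny carrier sets used here.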
\begin{rem} In \cite{rutt:univ00} it is proved that if $\T$ preserves weak pullbacks then for any pair of coalgebras $\mathbb{X} = \struc{X,\xi}$ and $\mathbb{X}' = \struc{X',\xi'}$, two states $x$ and $x'$ are behaviorally equivalent iff there is a bisimulation (see Remark~\ref{r:bisi}) linking $x$ to $x'$. \end{rem} \subsection{Standard functors} \label{ss:standard} As mentioned earlier, we will almost exclusively work with $\mathsf{Set}$-functors that are standard. In Remark~\ref{r:rl-wd} we saw that this will ensure that the definition of the lifting of a relation $R$ is independent of the type of $R$. Now we will see some further nice consequences of standardness for the notion of relation lifting. To start with, in case $\T$ is standard, $\rl{\T}$ commutes with the domain and range of a relation; and if $\T$ preserves weak pullbacks in addition, then $\rl{\T}$ also commutes with restrictions. \begin{prop} \label{p:st-rl} Let $\T$ be a standard set functor. Then \begin{enumerate}[\em(1)] \item $\rl{\T}$ commutes with taking domains: $\mathsf{dom}(\rl{\T} R) = \T(\mathsf{dom} R)$ for all relations $R \subseteq X_{1} \times X_{2}$. \item $\rl{\T}$ commutes with taking range: $\mathsf{rng}(\rl{\T} R) = \T(\mathsf{rng} R)$ for all relations $R \subseteq X_{1} \times X_{2}$. \item If $\T$ preserves weak pullbacks, then $\rl{\T}$ commutes with taking restrictions: \[ \rl{\T} (R\rst{Y_{1}\times Y_{2}}) = (\rl{\T} R)\rst{\T Y_{1} \times \T Y_{2}} \] for all sets $X_{1},X_{2},Y_{1}$ and $Y_{2}$, with $Y_{1}\subseteq X_{1}$ and $Y_{2}\subseteq X_{2}$, and for all relations $R \subseteq X_{1} \times X_{2}$. \end{enumerate} \end{prop} \begin{proof} For part~1, we first consider the inclusion $\mathsf{dom}(\rl{\T} R) \subseteq \T(\mathsf{dom} R)$. Let $R \subseteq X_1 \times X_2$ be a relation and take an element $\alpha \in \mathsf{dom}(\rl{\T} R)$. Then $(\alpha,\beta) \in \rl{\T} R$, for some $\beta\in \T X_{2}$.
We denote by $\iota: \mathsf{dom}(R) \to X_1$ the inclusion of $\mathsf{dom}(R)$ into $X_1$ and by $\pi_1': R \to \mathsf{dom}(R)$ the restriction of the projection map $\pi_1: R \to X_1$; then we have $\pi_1 = \iota \cof \pi_1'$. By definition of $\rl{\T}$ there exists some $\rho \in \T R$ such that $\T \pi_1 (\rho) = \alpha$ and hence $\T \iota (\T \pi_1' (\rho))=\alpha$. As $\T$ is standard this shows that $\alpha = \T \pi_1'(\rho) \in \T \mathsf{dom}(R)$ as required. For the opposite inclusion, let $f: \mathsf{dom}(R) \to \mathsf{rng}(R)$ be any map such that $f \subseteq R$; then it follows that $\T f \subseteq \rl{\T} R$. In other words, for all $\alpha \in \T(\mathsf{dom} R)$ we have $\alpha \rel{\rl{\T} R} \T f(\alpha)$. From this it is immediate that $\T(\mathsf{dom} R) \subseteq \mathsf{dom}(\rl{\T} R)$. The proof of part~2 is completely analogous. For part~3, we refer to~\cite[Prop.~6.4]{kuve08:coal}. \end{proof} \noindent Proposition~\ref{p:st-rl} is particularly useful for linking the relation lifting of $\T$ to that of its finitary version $\Tom$. \begin{prop} \label{lem:tomb} \label{p:tomb} Let $\T$ be a standard and weak pullback preserving set functor, let $\Tom$ be its finitary version and let $R \subseteq X_1 \times X_2$ be a relation. Then \[ \rl{\T_\omega} R = \rl{\T} R \cap (\Tom X_{1} \times \Tom X_{2}). \] \end{prop} \begin{proof} Let $R \subseteq X_1 \times X_2$ be a relation and take a pair $(\alpha,\beta) \in \Tom X_1 \times \Tom X_2$. By definition of $\Tom$ there must be finite sets $X_1' \subseteq_\omega X_1$ and $X_2 ' \subseteq_\omega X_2$ such that $\alpha \in \Tom X_1' = \T X_1'$ and $\beta \in \Tom X_2' = \T X_2'$. In order to prove the inclusion $\supseteq$, assume that $(\alpha,\beta) \in \rl{\T} R$.
By Proposition~\ref{p:st-rl} we have \begin{equation}\label{equ:resricttofiniterel} (\alpha,\beta) \in \rl{\T} R \quad \mbox{iff} \quad (\alpha,\beta) \in \rl{\T} (R\rst{X_1' \times X_2'}) \end{equation} and because $\rl{\T_\omega}(R\rst{X_1' \times X_2'}) \subseteq \rl{\T_\omega}(R)$ the inclusion holds if we can prove that $(\alpha,\beta) \in \rl{\T_\omega} R'$ with $R' \mathrel{:=} R\rst{X_1' \times X_2'}$. The following diagram commutes: \\ \centerline{\xymatrix{ \Tom X_1' \ar@{=}[d] & \Tom R' \ar@{=}[d] \ar[l] \ar[r] & \Tom X_2' \ar@{=}[d]\\ \T X_1' & \ar[l] \T R' \ar[r] & \T X_2' }} Therefore we have that $(\alpha,\beta) \in \rl{\T} R'$ iff $(\alpha,\beta) \in \rl{\T_\omega} R'$. By (\ref{equ:resricttofiniterel}) we have $(\alpha,\beta) \in \rl{\T} R'$ and hence $(\alpha,\beta) \in \rl{\T_\omega} R'$ as required. The proof of the opposite inclusion is similar. \end{proof} On the basis of Proposition~\ref{p:tomb} we will often be sloppy and write $(\alpha,\beta) \in \rl{\T} R$ instead of $(\alpha,\beta) \in \rl{\T_\omega} R$, for elements $\alpha \in \Tom X_1$ and $\beta \in \Tom X_2$. More importantly, Proposition~\ref{p:tomb} allows us to prove our earlier claim, that $\Tom$ inherits the properties of standardness and weak pullback preservation from $\T$. \begin{proofof}{Proposition~\ref{p:Tomwp}} Let $\T$ be a standard, weak pullback preserving set functor. In order to see that $\Tom$ is standard consider two sets $X,X'$ with $X'\subseteq X$ and let $\iota:X' \to X$ be the inclusion of $X'$ into $X$. By the definition of $\Tom$, for every set $X$ we have that $\Tom X$ is a subset of $\T X$ and that the inclusion $\tau_X: \Tom X \to \T X$ is natural.
It follows by naturality that $\Tom\iota$ is also an inclusion: $$\xymatrix{ T_\omega X' \ar[d]_{\Tom \iota} \ar@{^{(}->}[r]^{\tau_{X'}} & T X' \ar@{^{(}->}^{\T\iota}[d] \\ T_\omega X \ar@{^{(}->}[r]_{\tau_{X}} & T X} $$ More precisely, for all $\alpha \in \Tom X'$ we have \begin{eqnarray*} \Tom \iota (\alpha) & = & \tau_X (\Tom \iota (\alpha)) \stackrel{\mbox{\tiny (nat. of $\tau$)}}{=} \T \iota (\tau_{X'}(\alpha)) = \T \iota (\alpha) \stackrel{\mbox{\tiny{$\T$ standard}}}{=} \alpha \end{eqnarray*} which demonstrates that $\Tom \iota$ is the inclusion map from $\Tom X'$ into $\Tom X$, and shows that $\Tom$ is standard indeed. We now prove that $\Tom$ preserves weak pullbacks. By Fact~\ref{fact:char_wpp} it suffices to prove that for arbitrary relations $R \subseteq X_1 \times X_2$ and $Q \subseteq X_2 \times X_3$ we have $\rl{\T_\omega}(R\corel Q) = \rl{\T_\omega}(R) \corel \rl{\T_\omega}(Q)$. In order to see this we use Proposition~\ref{lem:tomb}. We have \begin{eqnarray*} (\alpha,\beta) \in \rl{\T_\omega}(R\corel Q) & \mbox{iff} & (\alpha,\beta) \in \rl{\T}(R\corel Q)\rst{\Tom X_1 \times \Tom X_3} \\ & \mbox{iff} & (\alpha,\beta) \in \rl{\T}(R\corel Q)\rst{\T X_1' \times \T X_3'} \mbox{ for some } X_1' \subseteq_\omega X_1, X_3' \subseteq_\omega X_3 \\ & \mbox{iff} & (\alpha,\beta) \in \rl{\T}((R\corel Q)\rst{X_1' \times X_3'}) \mbox{ for some } X_1' \subseteq_\omega X_1, X_3' \subseteq_\omega X_3 \\ & \mbox{iff} & (\alpha,\beta) \in \rl{\T}(R\rst{X_1' \times X_2'}\corel Q\rst{X_2' \times X_3'}) \\ & & \mbox{for some} \; X_1' \subseteq_\omega X_1, X_2' \subseteq_\omega X_2, X_3' \subseteq_\omega X_3 \\ & \mbox{iff} & (\alpha,\beta) \in \rl{\T}(R\rst{X_1' \times X_2'})\corel \rl{\T}(Q\rst{X_2' \times X_3'}) \\ & & \; \mbox{for some} \; X_1' \subseteq_\omega X_1, X_2' \subseteq_\omega X_2, X_3' \subseteq_\omega X_3 \\ & \mbox{iff} & (\alpha,\beta) \in \rl{\T_\omega}(R)\corel \rl{\T_\omega}(Q) \rlap{\hbox to 197 pt{\hfill\qEd}}
\end{eqnarray*}\let\hbox{\textsc{qed}}=\relax \end{proofof}\let\hbox{\textsc{qed}}=\qed \noindent We finish this subsection by noting that relation lifting interacts well with the natural transformation $\mathit{Base}: \Tom \to \Pom$. \begin{prop} \label{prop:base-hom} Let $\T$ be a standard functor that preserves weak pullbacks. Given a relation $R \subseteq X_{1} \times X_{2}$ and elements $\alpha_{i} \in \T X_{i}$, $i \in \{ 1,2 \}$, it follows from $\alpha_{1} \mathrel{\rl{\T} R} \alpha_{2}$ that $\mathit{Base}(\alpha_{1}) \mathrel{\rl{\funP} R} \mathit{Base}(\alpha_{2})$. In particular, we have that $\mathit{Base}(\alpha_{1}) \subseteq \mathsf{dom}(R)$ and $\mathit{Base}(\alpha_{2}) \subseteq \mathsf{rng}(R)$. \end{prop} \begin{proof} Let $\pi^{R}_{i}$ be the projection of $R$ to $X_{i}$; then it follows from $\alpha_{1} \mathrel{\rl{\T} R} \alpha_{2}$ that $\alpha_{i} = \T\pi^{R}_{i}(\rho)$ for some $\rho\in \T R$. But then by naturality of $\mathit{Base}$ we find that $\mathit{Base}(\alpha_{i}) = \mathit{Base}(\T\pi^{R}_{i}(\rho)) = (\funP \pi^{R}_{i})(\mathit{Base}(\rho))$, and so $\mathit{Base}(\rho) \in \funP R$ is a witness to the fact that $\mathit{Base}(\alpha_{1}) \mathrel{\rl{\funP} R} \mathit{Base}(\alpha_{2})$. \end{proof} \subsection{Relation Lifting \& distributive laws} \label{ss:distributive-laws} A relation that plays an important role in our paper is the $\T$-lifting of the membership relation $\in$. If needed, we will denote the element relation, restricted to a given set $X$, as the relation ${\in_{X}} \subseteq X \times \funP X$. \begin{definition}\label{def:elementlift} Given a standard functor $\T$ that preserves weak pullbacks, we define, for every set $X$, a function $\lambda\!^{\T}_X:\T \Pow X \to \Pow \T X$ by putting \[ \lambda\!^{\T}_X(\Phi) \mathrel{:=} \{ \alpha \in \T X \mid \alpha \mathrel{\rl{\T} {\in_{X}}} \Phi \}. \] Elements of $\lambda\!^{\T}_X(\Phi)$ will be referred to as \emph{lifted members} of $\Phi$.
The family $\lambda\!^{\T}=\{\lambda\!^{\T}_X\}_{X \in \mathsf{Set}}$ will be called the {\em $\T$-transformation}. \end{definition} Properties of $\rl{\T}$ are intimately related to those of $\lambda\!^{\T}$. In order to express the connection, we need to introduce the concept of a distributive law. \begin{definition}\label{d:distlaw} Let $\T$ be a covariant set functor. A \emph{distributive law} of $\T$ over a (co- or contravariant) set functor $M$ is a natural transformation $\theta: \T M \to M \T$; that is, the following diagram commutes, for every map $f: X \to Y$: \[ \xymatrix{ \T M X \ar[d]_{\T M f} \ar[r]^{\theta_X} & M \T X \ar[d]^{M \T f} \\ \T M Y \ar[r]^{\theta_Y} & M \T Y } \] (Clearly, in case $M$ is a contravariant functor the downward arrows have to be reversed.) For $\theta$ to be a \emph{distributive law} of $\T$ over a set monad $(M,\eta, \mu)$, we require in addition that $\theta$ is compatible with the monad structure, in the sense that the following diagrams commute, for every set $X$: \begin{equation} \label{eq:dl-diag} \xymatrix{ \T X \ar[rd]_{\eta_{\T X}} \ar[r]^{\T \eta_X} & \T M X \ar[d]^{\theta_X} \\ & M \T X } \hspace*{20mm} \xymatrix{\T M M X \ar[d]^{\T \mu_X} \ar[r]^{\theta_{ M X}} & M \T M X \ar[r]^{ M \theta_X}& M M \T X \ar[d]^{\mu_{\T X}} \\ \T M X \ar[rr]_{\theta_X}& & M \T X } \end{equation} \end{definition} If the functor $\T$ preserves weak pullbacks, the $\T$-transformation $\lambda\!^{\T}$ provides a distributive law of $\T$ over the power set monad $\PowMo = (\Pow,\{ \cdot\},\bigcup)$. A detailed proof of this fact can be found in~\cite[Sec.~4]{bart04:trac}. \begin{fact}\label{fact:distriblaw} If $\T$ preserves weak pullbacks, $\lambda\!^{\T}=\{\lambda\!^{\T}_X\}_{X \in \mathsf{Set}}$ is a distributive law of $\T$ over the power set monad $\PowMo$. \end{fact} What it means, set-theoretically, for $\lambda\!^{\T}$ to be a distributive law of $\T$ over $\PowMo$ is the following.
The fact that $\lambda\!^{\T}$ is a natural transformation from $\T\funP$ to $\funP\T$ is another way of saying that for every map $f: X \to Y$, and every object $\Phi \in \T\funP X$, we obtain the lifted members of $(\T\funP f)(\Phi)$ by applying the operation $\T f$ to the lifted members of $\Phi$. The diagram on the left of \eqref{eq:dl-diag}, relating the singleton map $\eta_{X}: X \to \funP X$ to the $\T$-transformation, states that an object $\alpha \in \T X$ is always the \emph{unique} lifted member of the lifted set $\T\eta_{X}(\alpha)$. To understand the diagram on the right, recall that the multiplication $\mu$ of $\PowMo$ is the union map $\bigcup_{X}: \funP\funP X \to \funP X$. Applying the functor $\T$ to this we obtain a map $\T\bigcup_{X}: \T\funP\funP X \to \T\funP X$. Observe that given an object $\Phi \in \T\funP\funP X$, we may thus take lifted members of $(\T\bigcup_{X})(\Phi)$; however, we may also take lifted members of $\Phi$ itself, and since each of these will belong to the set $\T\funP X$, we may repeat the operation of taking lifted members. Now the right diagram in \eqref{eq:dl-diag} states that the lifted members of $(\T\bigcup_{X})(\Phi)$ coincide with the objects we may obtain as lifted members of lifted members of $\Phi$. \begin{rem} The existence of a distributive law of a set functor $\T$ over the power set monad $\PowMo$ corresponds to an extension of the functor $\T$ to the Kleisli category $\mathsf{Kl}(\PowMo)$ of $\PowMo$. Furthermore it is easy to see that $\mathsf{Kl}(\PowMo)$ is isomorphic to the category $\mathsf{Rel}$ of sets with relations. Putting these facts together it is clear that any distributive law of a set functor $\T$ over $\PowMo$ corresponds to an extension of $\T$ to a functor on the category $\mathsf{Rel}$. We saw in Corollary~\ref{cor:extT} that the $\T$-lifting of a relation can be used to extend $\T$ to a functor $\rl{\T}:\mathsf{Rel} \to \mathsf{Rel}$ iff $\T$ preserves weak pullbacks.
In this case $\lambda\!^{\T}$ is the corresponding distributive law. Further remarks and references can be found in Section~\ref{s:rellift:notes}. \end{rem} Perhaps somewhat surprisingly, the $\T$-transformation can be also seen as a distributive law over the \emph{contravariant} power set functor. \begin{prop} \label{p:nbdlfunQ} Let $\T:\mathsf{Set} \to \mathsf{Set}$ be a functor that preserves weak pullbacks. Then $\lambda\!^{\T}$ is a distributive law of $\T$ over the contravariant power set functor. \end{prop} \begin{proof} Let $f: X \to Y$ be a function. We have to show that the following diagram commutes: \centerline{ \xymatrix{ \T \funQ Y \ar[r]^{\lambda\!^{\T}_Y} \ar[d]_{\T\funQ f} & \funQ\T Y \ar[d]^{\funQ \T f} \\ \T \funQ X \ar[r]_{\lambda\!^{\T}_X} & \funQ\T X}} This can be verified by a straightforward calculation: \[ \begin{array}{rclcl} \alpha \in \lambda\!^{\T}_X((\T \funQ f)(\Phi)) & \mbox{iff} & \Phi (\T \funQ f\corel \rl{\T} {\ni_X}) \alpha & \mbox{iff} & \Phi (\rl{\T}(\funQ f\corel {\ni_X})) \alpha \\& \mbox{iff} & \Phi (\rl{\T}({\ni_Y}\corel \converse{f})) \alpha & \mbox{iff} & \Phi (\rl{\T} {\ni_Y} \corel \converse{\T f}) \alpha \\& \mbox{iff} & \T f( \alpha) \in \lambda_Y(\Phi) & \mbox{iff} & \alpha \in (\funQ \T f) ( \lambda_Y(\Phi)) \end{array} \] Here we freely apply properties of relation lifting, and in the third equivalence we use the easily verified fact that $\funQ f\corel {\ni_X} = {\ni_Y}\corel \converse{f}$. \end{proof} In our paper both distributive laws play an important role. The fact that $\lambda\!^{\T}$ is a distributive law over $\funQ$ is essential for proving that the semantics of Moss' logic is bisimulation invariant, and the distributivity of $\T$ over the monad $\PowMo$ is crucial for the soundness of our axiomatization. \medskip To finish this subsection, we gather some elementary facts on the $\T$-transformation. 
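Before turning to these facts, note that for the power set functor the $\T$-transformation can be made fully concrete: $\lambda^{\funP}_X: \funP\funP X \to \funP\funP X$ sends $\Phi$ to the set of its Egli--Milner lifted members. The following brute-force Python sketch (all identifiers are ours, not from the paper) computes $\lambda^{\funP}_X$ on finite inputs and checks the unit law of \eqref{eq:dl-diag} in this instance, together with the behaviour on a $\Phi$ containing the empty set.

```python
from itertools import combinations

def subsets(xs):
    """All subsets of a finite set, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def lam_P(Phi):
    """lambda^P_X(Phi) for Phi in PPX: the lifted members of Phi, i.e.
    the sets alpha contained in Union(Phi) that meet every member of
    Phi (the Egli-Milner lifting of the membership relation)."""
    union = frozenset().union(*Phi) if Phi else frozenset()
    return {alpha for alpha in subsets(union)
            if all(alpha & beta for beta in Phi)}

# Unit law: alpha is the unique lifted member of (P eta_X)(alpha),
# the set of singletons {{x} | x in alpha}.
alpha = frozenset({1, 2, 3})
assert lam_P({frozenset({x}) for x in alpha}) == {alpha}

# If the empty set occurs in Phi, there are no lifted members.
assert lam_P({frozenset(), frozenset({1})}) == set()
```

The sketch enumerates all subsets of $\bigcup\Phi$, so it is feasible only for tiny carriers; the membership condition it tests is the same one that reappears in the description of redistributions for $\funP$ later in this section.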
\begin{prop} \label{p:nbsem} Let $\T$ be a standard, weak pullback-preserving functor, let $X$ be some set and let $\Phi \in \Tom\funP X$. \begin{enumerate}[\em(1)] \item If $\varnothing \in \mathit{Base}(\Phi)$ then $\lambda\!^{\T}(\Phi) = \varnothing$. \item \label{item:memberofdistri} If $\mathit{Base}(\Phi) \subseteq \{ Y \}$ for some $Y \subseteq X$, then $\lambda\!^{\T}(\Phi) \subseteq \T Y$. \item \label{item:singletonredistri} If $\mathit{Base}(\Phi)$ consists of singletons only, then $\size{\lambda\!^{\T}(\Phi)} =1$. \item If $\T$ maps finite sets to finite sets, then for all $\Phi \in \Tom \Pom X$, $|\lambda\!^{\T}(\Phi)| < \omega$. \item If $\Phi \in \Tom\Pom X$, then $\lambda\!^{\T}(\Phi) \in \funP\Tom X$. \end{enumerate} \end{prop} \begin{proof} For part~1, assume that $\varnothing \in \mathit{Base}(\Phi)$ and suppose for contradiction that $\alpha$ is a lifted member of $\Phi$. It follows by Proposition~\ref{prop:base-hom} that $\mathit{Base}(\alpha) \rel{\Pb{\in}} \mathit{Base}(\Phi)$. But then $\mathit{Base}(\alpha)$ would contain a member of $\varnothing$, which is clearly impossible. Consequently, the set $\lambda\!^{\T}(\Phi)$ must be empty. In order to prove part~\ref{item:memberofdistri}, assume that $\Phi \in \T \{ Y \}$, for some subset $Y$ of $X$, and suppose that $\alpha \rel{\Tb{\in}} \Phi$. Then by Proposition~\ref{p:st-rl}(3) we have $\alpha \mathrel{\rl{\T} {\in_{\rst{X \times \{ Y \}}}}} \Phi$ and so by part~1 of the same Proposition we find $\alpha \in \T \mathsf{dom} (\in_{\rst{X \times \{ Y\}}}) = \T Y$. For part~3, observe that another way of saying that $\mathit{Base}(\Phi)$ consists of singletons only is that $\Phi\in \Tom S_{X}$, where $S_{X} \subseteq \funP X$ is the collection of singletons from $X$. Let $\theta_{X}: S_{X} \to X$ be the inverse of $\eta_{X}$, that is, $\theta_{X}$ is the bijection mapping a singleton $\{ x \}$ to $x$.
Clearly then, the map $\Tom\theta_{X}: \Tom S_{X} \to \Tom X$ is a bijection as well. In addition, we have $\cv{\theta_{X}} = {\in_{X}}$, from which it follows by elementary properties of relation lifting that $\cv{(\T\theta_{X})} = \rl{\T}{\in_{X}}$. From this it is immediate that if $\Phi \in \Tom S_{X}$, then $(\T\theta_{X}) (\Phi)$ is the unique lifted member of $\Phi$. Concerning part~4, assume that $\Phi \in \Tom \Pom X$. Then by definition, $\Phi \in \T \mathcal{Y}$ for some $\mathcal{Y}\subseteq_{\omega} \Pom X$. From this it follows that $\mathcal{Y} \subseteq \funP Y$ for some finite $Y \subseteq X$, and this implies that $\mathit{Base}(\Phi) \subseteq \funP Y$. If $\alpha$ is a lifted member of $\Phi$, then by Proposition~\ref{prop:base-hom} we obtain $\mathit{Base}(\alpha) \rel{\Pb{\in}} \mathit{Base}(\Phi)$, and so in particular we find $\mathit{Base}(\alpha) \subseteq \bigcup \mathit{Base}(\Phi) \subseteq Y$. From this it follows that $\lambda\!^{\T}(\Phi) \subseteq \T Y$, and so by the assumption on $\T$, the set $\lambda\!^{\T}(\Phi)$ must be finite. Finally, we consider part~5. Take an object $\Phi \in \Tom\Pom X$ and let $\alpha \in \T X$ be an arbitrary lifted member of $\Phi$. Reasoning just as for part~4, we obtain that $\alpha \in \T Y$ for some finite $Y \subseteq X$, and so by definition of $\Tom$ we find that $\alpha \in \Tom X$. \end{proof} \subsection{Slim redistributions} \label{ss:slim} The syntax of Moss' logic is built using negations, conjunctions, disjunctions and the $\nabla$-operator. An axiomatisation of the logic has to specify the interaction of these operations. As we will see, so-called \emph{slim redistributions} are the key to understanding how conjunction interacts with the $\nabla$-operator. \begin{definition} \label{d:srd} Let $\T$ be a set functor.
A set $\Phi \in \T \Pow X$ is a {\em redistribution} of a set $A \in \Pow \T X$ if $A \subseteq \lambda\!^{\T}_X(\Phi)$, that is, every element of $A$ is a lifted member of $\Phi$. In case $A \in \Pom\Tom X$, we call a redistribution $\Phi$ {\em slim} if $\Phi \in \Tom \Pom (\bigcup_{\alpha \in A} \mathit{Base}(\alpha))$. The set of slim redistributions of $A$ is denoted as $\mathit{SRD}(A)$. \end{definition} Intuitively, redistributions of $A$ are ways to reorganize the material of $A$. The slimness condition $\Phi \in \Tom\Pom (\bigcup_{\alpha \in A} \mathit{Base}(\alpha))$ should be seen as a minimality requirement, ensuring that $\Phi$ is `built from the ingredients of $A$'. \begin{exa} \label{ex:srd} First we consider the binary $C$-labelled tree functor $\mathit{B}_{C}$ of Example~\ref{ex:2}. Let $\pi_{C},\pi_{1}$ and $\pi_{2}$ denote the projections from $\mathit{B}_{C} X$ to $C$, $X$ and $X$, respectively. An object $\Phi \in \mathit{B}_{C}\funP X$ is of the form $(c,Y,Z)$ with $c\in C$ and $Y,Z \in \funP X$. Such a $\Phi$ is a redistribution of a set $A = \{ (c_{i},y_{i},z_{i}) \mid i \in I \} \subseteq_{\omega} \mathit{B}_{C} X$ iff for all $i \in I$ we have $c_{i} = c$, $y_{i} \in Y$ and $z_{i} \in Z$, and such a redistribution is slim if, in addition, $Y \cup Z \subseteq \{ y_{i} \mid i \in I \} \cup \{ z_{i} \mid i \in I \}$. On this basis it is not hard to derive that \[ \mathit{SRD}(A) = \left\{\begin{array}{ll} \{(c,\varnothing,\varnothing) \mid c \in C \} & \text{if } A = \varnothing \\ \varnothing & \text{if}\ |\pi_{C}[A]|\ge 2 \\ \{ (c_{A},S_1,S_2) \mid \pi_j[A] \subseteq S_j \subseteq \pi_1[A] \cup \pi_2[A] \mbox{ for } j=1,2\} & \text{if}\ \pi_{C}[A]=\{c_A\} \end{array}\right.
\] \end{exa} \begin{rem}\label{rem:superslim} For our purpose it would suffice to consider instead of $\mathit{SRD}(A)$ a smaller set $\mathit{SRD}'(A)$ as long as it order-generates $\mathit{SRD}(A)$ in the sense that for all $\Phi\in\mathit{SRD}(A)$ there is $\Phi'\in\mathit{SRD}'(A)$ such that $\Phi' \,\rl{\T}(\subseteq)\, \Phi$. Such an $\mathit{SRD}'(A)$ can replace $\mathit{SRD}(A)$ in the rule ($\nabla 2$) that will form a crucial part of our derivation system. In the example above, $\mathit{SRD}'(A)$ can be given by simplifying the third clause to \[ \begin{array}{ll} \{ (c_{A},\pi_1[A],\pi_2[A]) \} & \text{if}\ \pi_{C}[A]=\{c_A\} \end{array} \] We thank Fredrik Dahlqvist for pointing out that this clause does not give $\mathit{SRD}(A)$. \end{rem} \begin{exa}\label{ex:srd2} In case we are dealing with the power set functor $\funP$, first observe that given a set $X$, the relation $\rl{\funP}{\in_{X}} \subseteq \funP X \times \funP\funP X$ is given by \[ \alpha \rel{\Pb{\in}} \Phi \mbox{ iff } \alpha \subseteq \bigcup\Phi \text{ and } \alpha\cap\beta\neq\varnothing \text{ for all } \beta \in \Phi. \] On the basis of this observation it is easy to check that $\Phi \in \funP\funP X$ is a redistribution of $A \in \funP\funP X$ iff $\bigcup A \subseteq \bigcup\Phi$ and $\alpha\cap\beta\neq\varnothing$ for all $\alpha\in A$ and $\beta\in \Phi$. Furthermore, we obtain \[ \Phi \in \mathit{SRD}(A) \text{ iff } \bigcup A = \bigcup\Phi \text{ and } \alpha\cap\beta\neq\varnothing \text{ for all $\alpha\in A$, $\beta\in \Phi$}. \] Hence, in the case of the power set functor we are dealing with a \emph{symmetric} relation: $\Phi \in \mathit{SRD}(A)$ iff $A \in \mathit{SRD}(\Phi)$. \end{exa} The following observation, which is due to M.~B\'ilkov\'a, shows that slim redistributions naturally occur in the context of distributive lattices. \begin{exa} \label{ex:bilk} Let $\mathbb{D}$ be a distributive lattice. The distributive law for $\mathbb{D}$ can be formulated as follows.
For any set $A \in \Pom\Pom D$, we have \begin{equation*} \label{eq:dl0} \bigwedge_{\alpha\in A} \bigvee \alpha = \bigvee_{\gamma\in\mathit{CF}(A)} \bigwedge \mathsf{rng}(\gamma), \end{equation*} where $\mathit{CF}(A)$ is the set of \emph{choice functions} on $A$, that is, $\mathit{CF}(A)$ is the set of maps $\gamma: A \to D$ such that $\gamma(\alpha) \in \alpha$, for all $\alpha \in A$. Then it is straightforward to verify that the set $\{ \mathsf{rng}(\gamma) \mid \gamma \in \mathit{CF}(A) \}$ is in fact a slim redistribution of $A$. In fact, we may prove that \begin{equation} \label{eq:rdl1} \bigwedge_{\alpha\in A} \bigvee \alpha = \bigvee_{\Phi\in\mathit{SRD}(A)}\bigvee_{\phi\in\Phi} \bigwedge \phi. \end{equation} Later on we will see that our axiom governing the interaction of $\nabla$ with conjunctions, generalizes \eqref{eq:rdl1}. \end{exa} We finish the section with a proposition for future reference. \begin{prop} \label{item:redistriofempty} $\mathit{SRD}(\varnothing) = \T\{\varnothing\}$. \end{prop} \begin{proof} If $\Phi$ is a slim redistribution of the empty set, then by definition $\Phi \in \T\Pom(\varnothing) = \T \{ \varnothing \}$. Conversely, any $\Phi \in \T \{ \varnothing \}$ satisfies the condition that $\varnothing \subseteq \lambda\!^{\T}(\Phi)$, and so $\Phi \in \mathit{SRD}(\varnothing)$. \end{proof} \subsection{Notes}\label{s:rellift:notes} The relation lifting via spans as in Definition~\ref{d:rellift} was defined by Barr in \cite[Section 2]{barr:rela70}. Without stating it explicitly, he also proves that the relation lifting $\rl{\T}$ is a functor on $\mathsf{Rel}$ iff $T$ preserves weak pullbacks; see also Trnkov\'a~\cite{trnk80:gene} and, for a generalisation beyond set functors, Carboni, Kelly and Wood~\cite[4.3]{carb:2cat91} and Hermida~\cite[Theorem~2.3]{hermida:relational-modalities}. \cite{carb:2cat91} also studies the question which functors $\mathsf{Rel}\to\mathsf{Rel}$ arise from functors $\mathsf{Set}\to\mathsf{Set}$. 
Closely related notions of relator, also accounting for simulation as opposed to only bisimulation, are studied by Thijs~\cite{thijs:diss} and in the context of coalgebraic logic by \cite{baltag:cmcs00,cirstea:cmcs04,hugh-jaco:simulations}. The connection between coalgebraic logic and relation lifting goes back to the original paper by Moss~\cite{moss:coal99} which introduced $\nabla$ and defined its semantics by using relation liftings, albeit without making this notion explicit. Independently, essentially the same notion of relation lifting was studied in a fibrational setting by Hermida and Jacobs~\cite{herm98:stru}. For a comparison of the notions of bisimulation arising from relation lifting and related definitions see Staton~\cite{staton:calco09}. The relation lifting can also be obtained via a distributive law between a functor and a monad as in Definition~\ref{d:distlaw}, which is a slight, commonly used variant of the notion of a distributive law between monads \cite{beck:dist69}. As shown in \cite{beck:dist69}, there is a 1-1 correspondence between distributive laws and liftings of functors to the category of algebras. Similarly, distributive laws $\lambda:TM\to MT$ between a functor $T$ and a monad $M$, or monad op-functors $(T,\lambda):(\mathsf{Set},M)\to(\mathsf{Set},M)$ in the terminology of Street~\cite{Street:Monads}, are in 1-1 correspondence with liftings $\rl{\T}$ of $T$ to the Kleisli category of $M$. We thank Dirk Hofmann, Ji{\v r}{\'\i} Velebil and Steve Vickers for pointing out various references and their significance.
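The power set characterisation of slim redistributions given in Example~\ref{ex:srd2} lends itself to direct experimentation. The following Python sketch is our own illustration (the helper names and the concrete family of sets are not from the text): it checks the two conditions $\bigcup A = \bigcup\Phi$ and $\alpha\cap\beta\neq\varnothing$, and confirms the symmetry of the relation for the power set functor.

```python
from itertools import chain

def union(family):
    return set(chain.from_iterable(family))

def is_srd(Phi, A):
    # For the power set functor: Phi is a slim redistribution of A iff
    # the unions agree and every alpha in A meets every beta in Phi.
    return union(A) == union(Phi) and all(a & b for a in A for b in Phi)

A = [{1, 2}, {2, 3}]
Phi = [{1, 2}, {1, 3}, {2}, {2, 3}]  # ranges of all choice functions on A

print(is_srd(Phi, A))                    # True
print(is_srd(Phi, A) == is_srd(A, Phi))  # symmetry of the relation: True
```

Here $\Phi$ is exactly the set of ranges of choice functions on $A$, in line with Example~\ref{ex:bilk}.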
\section{Introduction} \label{intro} Single molecule electronic devices are being extensively studied because they offer perspectives for further miniaturization of electronic circuits with important potential applications \cite{nitzan,venk,galp,molen,cuevas}, and also because of their intrinsic interest in basic research, for example as realizations of the Kondo effect, which manifests itself experimentally as an increased conductance at low temperatures \cite{park,lian}. In addition, quantum phase transitions involving partially Kondo screened spin-1 molecular states were induced by changing externally controlled parameters \cite{roch,parks,serge}. These experiments could be explained semiquantitatively using extensions of the impurity Anderson model treated with either the numerical renormalization group (NRG) \cite{parks,serge,epl} or the non-crossing approximation (NCA) \cite{serge,epl,st1,st2}. The latter approximation allows calculations out of equilibrium, in particular at finite bias voltage. The conductance through benzene-1,4-dithiol has been measured using a mechanically controllable break junction \cite{reed}. More recently, other molecules containing benzene rings or related phenyl groups were studied \cite{dani,dado}. The states nearest to the Fermi level in benzene are built from the 2p states of the C atoms which lie perpendicular to the plane of the molecule (the so-called $\pi $ states). Transport through aromatic molecules, and in particular through benzene in different geometries, has been calculated by several groups \cite{hett,carda,ke,bege,mole}. Hettler \textit{et al.} \cite{hett} started from an exact calculation of an extended Hubbard model for the $\pi $ states, and included the coupling to the leads perturbatively (for couplings smaller than the temperature), to calculate the current through neutral benzene under an applied bias voltage $V_b$ for zero gate voltage $V_{g}$, including radiative relaxation.
Cardamone \textit{et al.} \cite{carda} described the molecule by a Hubbard model supplemented by repulsions at larger distances (the so-called Pariser-Parr-Pople (PPP) Hamiltonian \cite{ppp}) using the self-consistent Hartree-Fock approximation. The conductance of the effective one-body problem for small $V_{g}$ has been calculated using the Landauer-B\"{u}ttiker formalism \cite{but}. The authors propose to exploit the destructive interference and to control the conductance of the device by introducing symmetry-breaking perturbations. Similar results were obtained using \textit{ab initio} calculations \cite{ke}, which also have the drawback of neglecting the effect of correlations. This effect, together with interference, was included in more recent works which started from the exact solution of the PPP model \cite{bege,mole}, as well as in previous works which discussed the conductance through molecules or rings of quantum dots (QDs) threaded by a magnetic flux \cite{jagla,hall,ihm,frie,soc,rinc}. The conductance $G$ depends on which sites of the molecule are coupled to the leads and on interference phenomena related to the symmetry of the system \cite{bege,mole,rinc}. For certain conditions $G$ can be totally suppressed and restored again by symmetry-breaking perturbations \cite{mole}. While the main conclusions are safe in general, the coupling to the leads was included in some perturbative approach, either by a Liouville equation method \cite{bege}, or using the Jagla-Balseiro formula (JBF) for the conductance \cite{jagla,ihm}, which assumes a non-degenerate ground state. These approaches miss the Kondo effect, which is non-perturbative in the coupling to the leads \cite{soc}. This is particularly important when the gate voltage is such that the benzene molecule becomes charged, because the conductance is larger \cite{mole}.
The JBF has also been used to predict that the transmittance integrated over a finite energy window \cite{jagla,frie,hall} and the equilibrium conductance \cite{hall,rinc} through a ring of strongly correlated one-dimensional systems display dips as a function of the applied magnetic flux at \emph{fractional} values of the flux quantum, due to destructive interference at crossings of levels with different charge and spin quantum numbers \cite{rinc} (as a consequence of spin-charge separation). Recently, we have confirmed the presence of dips in the current under a finite applied bias voltage at low temperatures, using the low-energy effective Hamiltonian, consisting of two doublets and a singlet, for the case of perfect destructive interference \cite{desint}. At equilibrium (zero bias voltage), the conductance as a function of temperature for this particular case has been calculated with NRG \cite{izum2}. The interplay between interference and interactions was also studied for spinless electrons in multilevel systems \cite{meden}, and for benzene attached to two leads \cite{bohr}. Although at first sight they might seem artificial, spinless models describe effective Hamiltonians for realistic systems under applied magnetic fields \cite{nils,pss}. Interference phenomena were also observed in systems with QDs. In a multilevel QD, the crossing of levels and the ensuing destructive interference have been induced by applying a magnetic field to a system with large, level-dependent $g$ factors \cite{nils}. Aharonov-Bohm oscillations were observed in systems involving two QDs \cite{hata}.
In this work, we construct the low-energy effective Hamiltonian for a ring of $n$ sites with one orbital per site and symmetry that includes the point group $C_{nv}$ (or $C_{n}$), weakly coupled to two conducting leads, retaining, for $n$ even, the lowest singlet with $n$ particles and the two lowest doublets with $n+1$ particles (electrons or holes depending on the sign of the applied gate voltage $V_{g}$). For $n$ odd, the charges of the doublets and singlets are interchanged. Using a gauge transformation, three of the four hopping matrix elements between the doublets and the leads can be made real. The phase $\phi $ of the fourth one is in general different from zero and depends on the position of the leads and the wave vectors of the states involved. While the coupling to the leads should be small in such a way that the neglected states do not affect the calculations \cite{lobos}, we treat this coupling in a self-consistent, non-perturbative way, using the Keldysh formalism of the NCA for problems out of equilibrium \cite{st2,nca,nca2}, appropriately extended for this problem. We calculate the non-equilibrium conductance for the case of benzene ($n=6$) in a regime of $V_{g}$ for which the doublets are favored, leading to the Kondo effect. For the case of one doublet, comparison of NCA with NRG results \cite{compa} shows that the NCA describes the Kondo physics accurately. The leading behavior of the differential conductance for small voltage and temperature \cite{roura} agrees with alternative Fermi-liquid approaches \cite{ogu,scali}, and the temperature dependence of the conductance practically coincides with the NRG result over several decades of temperature \cite{roura}.
A shortcoming of the NCA is that, at very low temperatures, it introduces an artificial spike at the Fermi energy in the spectral density when the ground state of the system without coupling to the leads (zero hybridization) is non-degenerate, although the thermodynamic properties continue to be well described \cite{nca}. This is not important in the present work, where the doublets are below the singlet and there is no applied magnetic field. Another limitation of the method is that it is restricted to temperatures above $\sim T_{K}/20$, where $T_{K}$ is the Kondo temperature. In our case, this limitation is not important either because, as we shall see, the conductance has already saturated at temperatures above this limit. A virtue of the NCA that becomes important in our case is its ability to capture features at high energies, such as peaks in the spectral density away from the Fermi level, which might be lost in NRG calculations \cite{vau}. An example is the plateau at intermediate temperatures observed in transport through C$_{60}$ molecules for gate voltages for which triplet states are important \cite{roch,serge}, which was missed in early NRG studies, but captured by the NCA \cite{st1,st2}. More recent NRG calculations using tricks to improve the resolution \cite{freyn} have confirmed this plateau \cite{serge}. For the particular case of perfect destructive interference $\phi =\pi $ (which does not correspond to benzene), the spectral densities \cite{fcm} and the non-equilibrium current \cite{desint} were calculated before for symmetric coupling to the leads. In this case, the model is equivalent to an SU(4) impurity Anderson model, used by several authors \cite{lim,ander,lipi,buss} to interpret transport experiments in C nanotubes \cite{jari,maka}, and more recently in Si fin-type field effect transistors \cite{tetta}. If in addition a finite splitting $\delta $ between the doublets is allowed, the symmetry is reduced to SU(2).
The transition between SU(4) and SU(2) has been studied by several authors using the NCA \cite{desint,fcm,lipi,tetta}. In particular, we found that the reduction of the Kondo temperature with $\delta $ agrees very well with a simple formula obtained from a variational calculation \cite{desint,fcm}. An important difference between our model and those used for other systems is the connectivity with the leads, which imposes that in our case the equilibrium conductance $G(V_b=0)$ is zero in the SU(4) limit $\delta =0$, and therefore it is not proportional to the total spectral density of states \cite{desint}. For $\phi =\pi $ and any $\delta $, it is possible to calculate $G(0)$ in terms of static occupations, using a generalized Friedel-Langreth sum rule \cite{desint}, due to the simplicity of the ``orbital'' field $\delta $, which acts in the same way as a magnetic field and allows one to relate the phase shift for each channel and spin with the respective occupancies. In turn, these phase shifts enter a general Fermi-liquid expression for the conductance obtained by Pustilnik and Glazman \cite{pus}. This is not possible for general $\phi$. Therefore, a novel treatment is needed to calculate the conductance even at equilibrium. Recently, it has been shown that in carbon nanotube QDs with disorder-induced valley mixing, the SU(4) symmetry is broken and the tunnel couplings to metallic leads become complex and depend on the applied magnetic field \cite{grove}. The present formalism is also suitable to study this case. \begin{figure}[tbp] \includegraphics[width=8.0cm ]{fig1.eps} \caption{(Color online) Scheme of the relevant matrix elements at low energies.} \label{scheme} \end{figure} The paper is organized as follows. In Section \ref{model} we describe the construction of the effective Hamiltonian. The approximations and the equation for the current are presented in Section \ref{forma}.
Section \ref{res} contains the numerical results for transport through the benzene molecule and their interpretation. Section \ref{sum} contains a summary and a short discussion. In appendix \ref{cc} we show that the NCA conserves the current, extending the existing demonstration for the case of one doublet \cite{nca}. Appendix \ref{deta} contains some numerical details. \section{Model} \label{model} The effective model, which is represented at the right of Fig. \ref{scheme}, contains a singlet with total wave vector $K_{0}$ (usually 0 or $\pi$) and two doublets with wave vectors $K_{1}$ and $K_{2}$, which represent the lowest-energy states of two neighboring configurations of a ring of $n$ sites with symmetry $C_{nv}$. In the case of benzene, they correspond to the singlet ground state, invariant under rotations ($K_{0}=0$), and two degenerate doublets with total wave vectors $\pm K$, which are the ground state of the molecule for one added electron or hole, depending on the sign of the applied gate voltage \cite{bege,mole,rinc}. For one added hole $K=\pi /3$, while for one added electron $K=2\pi /3$. The effective Hamiltonian is \begin{eqnarray} H &=&E_{s}|s\rangle \langle s|+\sum_{i\sigma }E_{i}|i\sigma \rangle \langle i\sigma |+\sum_{\nu k\sigma }\epsilon _{\nu k}c_{\nu k\sigma }^{\dagger }c_{\nu k\sigma } \nonumber \\ &&+\sum_{i\nu k\sigma }(V_{i}^{\nu }|i\sigma \rangle \langle s|c_{\nu k\sigma }+\mathrm{H.c}.), \label{ham} \end{eqnarray} where the singlet $|s\rangle $ and the two doublets $|i\sigma \rangle $ ($i=1,2$; $\sigma =\uparrow $ or $\downarrow $) denote the localized states, $c_{\nu k\sigma }^{\dagger }$ create conduction states in the left ($\nu =L$) or right ($\nu =R$) lead, and $V_{i}^{\nu }$ are the hopping elements between the leads and both doublets, assumed independent of $k$.
This hopping element is calculated as \cite{rinc} \begin{equation} V_{i}^{\nu }=t_{\nu }\langle i\sigma |c_{j_{\nu }\sigma }^{\dagger }|s\rangle , \label{hop} \end{equation} where $c_{j\sigma }^{\dagger }$ creates an electron (or hole) with spin $\sigma $ at the $\pi $ orbital at site $j$ (1 to $n$) of the molecule, $j_{\nu }$ denotes the site connected to the lead $\nu $, and $t_{\nu }$ is the hopping between this site and the lead $\nu $. Choosing the phases of the gauge transformation $|i\sigma \rangle \rightarrow e^{i\varphi _{i}}|i\sigma \rangle $ appropriately, both $V_{i}^{L}$ can be made real and positive, and by reflection symmetry $V_{1}^{L}=V_{2}^{L}$. Using rotational symmetry, it is easy to see that \cite{rinc} \begin{equation} V_{i}^{R }=(t_{R}/t_{L})V_{i}^{L }\exp \left[ -i(j_{R}-j_{L})(K_{i}-K_{0})\right] , \label{hopr} \end{equation} where $K_{i}$ is the wave vector of $|i\sigma \rangle $. The phase of $V_{1}^{R}$ can be absorbed by a gauge transformation in the $c_{Rk\sigma }$, rendering it real and positive. The remaining matrix element is $V_{2}^{R}=V_{1}^{R}e^{-i\phi }$, where $\phi =(j_{R}-j_{L})(K_{2}-K_{1})$. This result is general for a ring geometry. For benzene $K_{2}-K_{1}\equiv \pm 2\pi /3$. Then, if the leads are connected in the \emph{para} position ($j_{R}-j_{L}=3$), $\phi \equiv 0$, while in the \emph{ortho} ($j_{R}-j_{L}=1$) or \emph{meta} ($j_{R}-j_{L}=2$) positions, $\phi \equiv \pm 2\pi /3$. Note that the sign of $\phi $ does not affect our results. For simplicity we shall assume $|t_{R}/t_{L}|=1$ in the calculations presented here. Then, the hoppings of the leads to the relevant states of the benzene molecule are $V_{1}^{L}=V_{2}^{L}=V_{1}^{R}=V$, $V_{2}^{R}=Ve^{-i\phi }$, with $\phi =0$ for the \emph{para} position, and $\phi =\pm 2\pi /3$ for the other two possibilities of connecting the leads.
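As a concrete check of $\phi =(j_{R}-j_{L})(K_{2}-K_{1})$, the following short Python sketch (our own illustration, not part of the formalism) evaluates the phase for the three ways of connecting the leads to benzene, using $K_{1,2}=\mp \pi /3$ for one added hole.

```python
import math

def lead_phase(jR, jL, K1, K2):
    # phi = (jR - jL)(K2 - K1), reduced to the interval (-pi, pi]
    phi = (jR - jL) * (K2 - K1)
    return math.remainder(phi, 2 * math.pi)

K1, K2 = -math.pi / 3, math.pi / 3        # one added hole: K = ±pi/3
for name, d in [("para", 3), ("ortho", 1), ("meta", 2)]:
    print(name, lead_phase(jR=d, jL=0, K1=K1, K2=K2) / math.pi)
```

This reproduces (up to rounding) $\phi =0$ for the \emph{para} position and $\phi =\pm 2\pi /3$ for \emph{ortho} and \emph{meta}.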
For $\phi =0$, the state $|B\sigma \rangle =(|1\sigma \rangle -|2\sigma \rangle )/\sqrt{2}$ decouples from the leads, and the transport properties of the system are the same as those for a single-level QD [$|A\sigma \rangle =(|1\sigma \rangle +|2\sigma \rangle )/\sqrt{2}$] connected to the leads with hopping $\sqrt{2}V$, which are well known \cite{nca,izum,costi,proetto,vel}. Another known limit of the model, which cannot be realized in benzene molecules but can in rings with a number of sites that is a multiple of four \cite{mole}, is $\phi =\pi $. In this case, $|A\sigma \rangle $ ($|B\sigma \rangle $) is coupled only to the left (right) lead and the model is equivalent to an SU(4) Anderson model \cite{desint,fcm}. However, in contrast to the case of C nanotubes \cite{lim,ander,lipi,buss} and Si fin-type field effect transistors \cite{tetta}, the conductance vanishes identically as a consequence of perfect destructive interference \cite{desint,izum2}. An important technical difference is that in our model, the hybridization matrices of the states $|i\sigma \rangle $ with the left and right leads are not proportional for $\phi \neq 0 $. If the degeneracy between the levels can be lifted (for example by applying an external flux \cite{rinc}), the symmetry is reduced to SU(2) and a finite conductance is restored \cite{desint,izum2,fcm}. In C nanotubes with disorder-induced valley mixing \cite{grove}, the effective model becomes very similar to ours, including a finite level splitting and a phase $\phi$, which depends on the applied magnetic field. We shall use our previous results for the case $\phi =\pi $ and a finite level splitting $\delta =E_{2}-E_{1}$ to help in the analysis of the results presented here for benzene. \section{The formalism} \label{forma} In this section we describe the extension of the non-crossing approximation (NCA), applied before to the Anderson model with infinite on-site repulsion out of equilibrium \cite{nca,nca2}, to our effective Hamiltonian.
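The decoupling of $|B\sigma \rangle$ at $\phi =0$ and the left/right channel separation at $\phi =\pi$ discussed above can be verified directly by rotating the hopping vectors to the $A/B$ basis. The sketch below is our own numerical illustration (with $V=1$), assuming the couplings $V_{1}^{L}=V_{2}^{L}=V_{1}^{R}=V$ and $V_{2}^{R}=Ve^{-i\phi }$ derived in the previous section.

```python
import numpy as np

def couplings_AB(phi, V=1.0):
    # rows of U: |A> = (|1>+|2>)/sqrt(2), |B> = (|1>-|2>)/sqrt(2)
    U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    vL = V * np.array([1, 1], dtype=complex)     # (V1L, V2L)
    vR = V * np.array([1, np.exp(-1j * phi)])    # (V1R, V2R)
    return U @ vL, U @ vR

vL, vR = couplings_AB(0.0)    # para position
# |B> decouples from both leads; |A> couples with sqrt(2) V
print(np.round(vL, 10), np.round(vR, 10))

vL, vR = couplings_AB(np.pi)  # phi = pi (not realizable in benzene)
# |A> couples only to the left lead, |B> only to the right one
print(np.round(vL, 10), np.round(vR, 10))
```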
\subsection{Representation of the Hamiltonian with slave particles} \label{repre} An auxiliary boson $b$ and four auxiliary fermions $f_{i\sigma }$ are introduced, so that the localized states are represented as \begin{equation} |s\rangle =b^{\dag }|0\rangle ,\quad |i\sigma \rangle =f_{i\sigma }^{\dag }|0\rangle , \label{rep} \end{equation} where $|0\rangle $ is the vacuum. These pseudoparticles should satisfy the constraint \begin{equation} b^{\dag }b+\sum_{i\sigma }f_{i\sigma }^{\dag }f_{i\sigma }=1. \label{cons} \end{equation} Introducing it by a Lagrange multiplier, the effective Hamiltonian takes the form \begin{eqnarray} H^{\prime } &=&(E_{s}+\lambda )b^{\dag }b+\sum_{i\sigma }(E_{i}+\lambda )f_{i\sigma }^{\dag }f_{i\sigma }+\sum_{\nu k\sigma }\epsilon _{\nu k}c_{\nu k\sigma }^{\dagger }c_{\nu k\sigma } \nonumber \\ &&+\sum_{k\nu i\sigma }\left( V_{i}^{\nu }f_{i\sigma }^{\dagger }bc_{k\nu \sigma }+{\rm H.c.}\right) . \label{hp} \end{eqnarray} The NCA solves a system of self-consistent equations to obtain the Green functions of the pseudoparticles (described below), which is equivalent to summing an infinite series of diagrams (all those without crossings) in the corresponding perturbation series in the last term of $H^{\prime }$, and afterwards projecting onto the physical subspace defined by the constraint. The main new difficulty compared to the case of one doublet arises as a consequence of the matrix structure of the pseudofermion Green functions and self-energies, which is absent in the SU(4) case or in this case with simple SU(2) symmetry-breaking perturbations, such as a magnetic \cite{tetta} or ``orbital'' \cite{desint,izum2,fcm} field.
\subsection{Green functions} \label{green} The lesser and greater Keldysh Green functions of the pseudoparticles for stationary non-equilibrium processes are defined as \cite{lif,mahan} \begin{eqnarray} G_{ij,\sigma }^{<}(t-t^{\prime }) &=&+i\langle f_{j\sigma }^{\dag }(t^{\prime })f_{i\sigma }(t)\rangle , \nonumber \\ D^{<}(t-t^{\prime }) &=&-i\langle b^{\dag }(t^{\prime })b(t)\rangle , \nonumber \\ G_{ij,\sigma }^{>}(t-t^{\prime }) &=&-i\langle f_{i\sigma }(t)f_{j\sigma }^{\dag }(t^{\prime })\rangle , \nonumber \\ D^{>}(t-t^{\prime }) &=&-i\langle b(t)b^{\dag }(t^{\prime })\rangle . \label{glg} \end{eqnarray} The retarded and advanced fermion Green functions are $G_{ij,\sigma }^{r}(t)=\theta (t)[G_{ij,\sigma }^{>}(t)+G_{ij,\sigma }^{<}(t)]$ and $G_{ij,\sigma }^{a}=G_{ij,\sigma }^{r}+G_{ij,\sigma }^{<}-G_{ij,\sigma }^{>}$, and similarly for the boson Green functions, replacing $G_{ij,\sigma }$ by $D$. These Green functions correspond to the interacting (dressed) propagators and are determined self-consistently within the NCA. Evaluating the corresponding diagrams in second order in the $V_{i}^{\nu }$, and replacing the bare propagators by the dressed ones, the expressions for the lesser self-energies take the form \begin{eqnarray} \Pi ^{<}(\omega ) &=&-\sum_{\nu lm\sigma }\Gamma _{lm}^{\nu }\int \frac{d\omega ^{\prime }}{2\pi }(1-f_{\nu }(\omega ^{\prime }-\omega ))G_{ml,\sigma }^{<}(\omega ^{\prime }), \nonumber \\ \Sigma _{lm,\sigma }^{<}(\omega ) &=&-\sum_{\nu }\Gamma _{lm}^{\nu }\int \frac{d\omega ^{\prime }}{2\pi }f_{\nu }(\omega -\omega ^{\prime })D^{<}(\omega ^{\prime }), \label{sigl} \end{eqnarray} where \begin{equation} \Gamma _{ij}^{\nu }(\omega )=2\pi \sum_{k}V_{i}^{\nu }\bar{V}_{j}^{\nu}\delta (\omega -\epsilon _{\nu k}), \label{gam} \end{equation} assumed independent of $\omega $.
Similarly, the greater self-energies become \begin{eqnarray} \Sigma _{lm,\sigma }^{>}(\omega ) &=&\sum_{\nu }\Gamma _{lm}^{\nu }\int \frac{d\omega ^{\prime }}{2\pi }(1-f_{\nu }(\omega -\omega ^{\prime }))D^{>}(\omega ^{\prime }), \nonumber \\ \Pi ^{>}(\omega ) &=&\sum_{\nu lm\sigma }\Gamma _{lm}^{\nu }\int \frac{d\omega ^{\prime }}{2\pi }f_{\nu }(\omega ^{\prime }-\omega )G_{ml,\sigma }^{>}(\omega ^{\prime }). \label{sigg} \end{eqnarray} As in the case of only one doublet \cite{nca}, in the expressions for the retarded self-energies, $\Sigma ^{r}(t)=-\theta (t)(\Sigma ^{<}(t)-\Sigma ^{>}(t))$, $\Sigma ^{<}$ can be neglected in comparison with $\Sigma ^{>}$ due to the effect of the constraint. Then, after Fourier transforming, one obtains \begin{eqnarray} \Sigma _{ij,\sigma }^{r}(\omega ) &=&i\int \frac{d\omega ^{\prime }}{2\pi }\frac{\Sigma _{ij,\sigma }^{>}(\omega ^{\prime })}{\omega -\omega ^{\prime }+i\eta }, \nonumber \\ \Pi ^{r}(\omega ) &=&i\int \frac{d\omega ^{\prime }}{2\pi }\frac{\Pi ^{>}(\omega ^{\prime })}{\omega -\omega ^{\prime }+i\eta }, \label{sigr} \end{eqnarray} where $\eta $ is a positive infinitesimal. The advanced self-energies $\Sigma _{ij,\sigma }^{a}$ and $\Pi ^{a}$ are obtained by changing the sign of $\eta $ in the expressions above. For the case of SU(4) symmetry, or when this symmetry is broken by simple symmetry-breaking fields \cite{desint,izum2,fcm,lim,ander,lipi,buss,tetta} ($\phi =\pi $ and symmetric coupling to the leads in our effective model), the fermion Green functions and self-energies are diagonal in an appropriate basis and the self-consistency problem simplifies considerably. In the general case (including $\phi \equiv \pm 2\pi /3$ for benzene) one has to solve a matrix Dyson equation \cite{lif,mahan} which includes not only the Keldysh index (or +, - branch index on the Keldysh contour), but also the doublet index $i=1,2$ in the fermion case.
For the retarded fermion Green functions and self-energies, combining the $G_{ij,\sigma }^{r}$ into a $2\times 2$ matrix $\mathbf{G}^{r}$, and similarly for the self-energies and the unperturbed Green functions $g_{ij,\sigma }^{r}=\delta _{ij}(\omega -\tilde{E}_{i})^{-1}$ with $\tilde{E}_{i}=E_{i}+\lambda$, the Dyson equations take the simple form $\mathbf{G}^{r}=\mathbf{g}^{r} +\mathbf{g}^{r}\mathbf{\Sigma }^{r}\mathbf{G}^{r}$. Solving the system for the $G_{ij,\sigma }^{r}$ we obtain \begin{eqnarray} G_{11,\sigma }^{r} &=&\frac{1}{\mathbb{D}}(\omega -\tilde{E}_{2}-\Sigma _{22,\sigma }^{r}), \nonumber \\ G_{12,\sigma }^{r} &=&\frac{1}{\mathbb{D}}(\Sigma _{12,\sigma }^{r}), \nonumber \\ G_{21,\sigma }^{r} &=&\frac{1}{\mathbb{D}}(\Sigma _{21,\sigma }^{r}), \nonumber \\ G_{22,\sigma }^{r} &=&\frac{1}{\mathbb{D}}(\omega -\tilde{E}_{1}-\Sigma _{11,\sigma }^{r}), \label{dysonf} \end{eqnarray} where \begin{equation*} \mathbb{D}=(\omega -\tilde{E}_{1}-\Sigma _{11,\sigma }^{r})(\omega -\tilde{E}_{2}-\Sigma _{22,\sigma }^{r})-\Sigma _{12,\sigma }^{r}\Sigma _{21,\sigma }^{r}. \end{equation*} For the boson, which has no subscripts, one has \begin{equation} D^{r}(\omega )=\frac{1}{\omega -\tilde{E}_{s}-\Pi ^{r}}. \label{dysonb} \end{equation} The advanced Green functions can be obtained from the retarded ones by the replacement $\eta \rightarrow -\eta $. The remaining equations that relate the lesser and greater pseudoparticle Green functions with the corresponding self-energies are (in compact matrix form for the fermions) \cite{note} \begin{eqnarray} \mathbf{G}^{\lessgtr } &=&\mathbf{G}^{r}\mathbf{\Sigma }^{\lessgtr }\mathbf{G}^{a}, \nonumber \\ D^{\lessgtr } &=&D^{r}\Pi ^{\lessgtr }D^{a}. \label{dysonne} \end{eqnarray} We solve numerically the system of integral equations (\ref{sigl}) to (\ref{dysonne}) for the pseudoparticle Green functions and self-energies.
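The explicit $2\times 2$ inversion in Eqs. (\ref{dysonf}) can be checked against a direct matrix inverse of $\omega -\tilde{E}-\mathbf{\Sigma }^{r}$. The Python sketch below uses arbitrary illustrative numbers for $\tilde{E}_{i}$ and $\Sigma ^{r}$ (they are not parameters from the paper).

```python
import numpy as np

omega = 0.3 + 1e-8j                     # retarded: omega + i*eta
Et = np.diag([-4.0 + 0j, -3.9])         # tilde E_1, tilde E_2 (illustrative)
Sr = np.array([[0.10 - 0.05j, 0.02 - 0.01j],
               [0.03 - 0.02j, 0.08 - 0.04j]])   # illustrative retarded self-energy

# G^r as the inverse of (omega - tilde E - Sigma^r)
Gr = np.linalg.inv(omega * np.eye(2) - Et - Sr)

# determinant D and the closed-form matrix elements of Eq. (dysonf)
D = ((omega - Et[0, 0] - Sr[0, 0]) * (omega - Et[1, 1] - Sr[1, 1])
     - Sr[0, 1] * Sr[1, 0])
assert np.isclose(Gr[0, 0], (omega - Et[1, 1] - Sr[1, 1]) / D)
assert np.isclose(Gr[0, 1], Sr[0, 1] / D)
assert np.isclose(Gr[1, 0], Sr[1, 0] / D)
assert np.isclose(Gr[1, 1], (omega - Et[0, 0] - Sr[0, 0]) / D)
```

The signs of the off-diagonal elements follow from the adjugate of the $2\times 2$ matrix: the minus signs of $-\Sigma _{12}^{r}$ and $-\Sigma _{21}^{r}$ in $\omega -\tilde{E}-\mathbf{\Sigma }^{r}$ cancel upon inversion.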
The details are given in appendix \ref{deta}. For the calculation of the current, one needs the Green functions of the physical fermions $d_{i\sigma }^{\dagger }=|i\sigma \rangle \langle s|=f_{i\sigma }^{\dagger }b$. These functions, which we identify with the subscript $\mathbf{d}$, are defined in the same way as the pseudofermion ones [see Eqs. (\ref{glg})], replacing $f_{i\sigma }$ by $d_{i\sigma }$. In terms of the auxiliary-particle Green functions, the lesser and greater physical Green functions take the form \cite{nca} \begin{eqnarray} G_{\mathbf{d}ij,\sigma }^{<}(\omega ) &=&i\int \frac{d\omega ^{\prime }}{2\pi Q}G_{ij,\sigma }^{<}(\omega ^{\prime }+\omega )D^{>}(\omega ^{\prime }), \nonumber \\ G_{\mathbf{d}ij,\sigma }^{>}(\omega ) &=&i\int \frac{d\omega ^{\prime }}{2\pi Q}G_{ij,\sigma }^{>}(\omega ^{\prime }+\omega )D^{<}(\omega ^{\prime }), \label{gd} \end{eqnarray} where $Q=\langle b^{\dag }b+\sum_{i\sigma }f_{i\sigma }^{\dag }f_{i\sigma }\rangle $ for a given Lagrange multiplier $\lambda $ (see appendix \ref{deta}).
\subsection{Equation for the current} \label{curre} Using general formulas for the current through a region with interacting electrons \cite{meir,jau} and the relation $G^{r}-G^{a}=G^{>}-G^{<}$ between Green functions, the current of our effective model for a spin-degenerate system (without applied magnetic field) can be written as \begin{eqnarray} I &=&\pm \frac{ie}{h}\int d\omega {\rm Tr}[(\mathbf{\Gamma ^{L}}f_{L}(\omega )-\mathbf{\Gamma ^{R}}f_{R}(\omega ))\mathbf{G}_{\mathbf{d}}^{>}(\omega ) \nonumber \\ &+&((1-f_{L}(\omega ))\mathbf{\Gamma ^{L}}-(1-f_{R}(\omega ))\mathbf{\Gamma ^{R}})\mathbf{G}_{\mathbf{d}}^{<}(\omega )], \label{ia} \end{eqnarray} where the + (-) sign corresponds to the case in which the doublets have one more electron (hole) than the singlet, $f_{\nu }(\omega )=[\exp [(\omega -\mu _{\nu })/kT]+1]^{-1}$, where $\mu _{\nu }$ is the chemical potential of the lead $\nu $, and $\mathbf{\Gamma ^{\nu }}$, $\mathbf{G}_{\mathbf{d}}^{<}$ and $\mathbf{G}_{\mathbf{d}}^{>}$ are $2\times 2$ matrices with matrix elements given by Eqs. (\ref{gam}) and (\ref{gd}). In particular, taking the unperturbed density of conduction states per spin $\rho =1/(2D)$, where $2D$ is the band width, and symmetric coupling to the leads, we have \begin{eqnarray} \mathbf{\Gamma ^{L}} &=&\frac{\pi V^{2}}{D}\left( \begin{array}{cc} 1 & 1 \\ 1 & 1 \end{array} \right) , \nonumber \\ \mathbf{\Gamma ^{R}} &=&\frac{\pi V^{2}}{D}\left( \begin{array}{cc} 1 & e^{i\phi } \\ e^{-i\phi } & 1 \end{array} \right) . \label{gam2} \end{eqnarray} Note that unless $\phi =0$, $\mathbf{\Gamma ^{L}}$ is not proportional to $\mathbf{\Gamma ^{R}}$. As a consequence, the trick used to relate the conductance at $V_b=0$ to the spectral density of states \cite{meir,jau} cannot be used. We calculate the conductance $G=dI/dV_b$ by numerical differentiation of the current, even for $V_b \rightarrow 0$. The traces appearing in Eq.
(\ref{ia}) have the form \begin{eqnarray} {\rm Tr}(\mathbf{\Gamma ^{R}G}_{\mathbf{d}}^{\lessgtr }) &=&\frac{\pi V^{2}}{D}[G_{11}^{\lessgtr }+G_{22}^{\lessgtr }+\cos (\phi )(G_{21}^{\lessgtr }+G_{12}^{\lessgtr }) \nonumber \\ &&+i\sin (\phi )(G_{21}^{\lessgtr }-G_{12}^{\lessgtr })], \label{tr1} \end{eqnarray} and similarly for ${\rm Tr}(\mathbf{\Gamma ^{L}G}_{\mathbf{d}}^{\lessgtr })$, replacing $\phi $ by 0. Note that from the definition of the Green functions [see Eq. (\ref{glg})], one realizes that the complex conjugates of the lesser and greater Green functions satisfy $\bar{G}_{ij}^{\lessgtr }(t)=-G_{ji}^{\lessgtr }(-t)$ and, after Fourier transforming, $\bar{G}_{ij}^{\lessgtr }(\omega )=-G_{ji}^{\lessgtr }(\omega )$. Then, $G_{ii}^{\lessgtr }(\omega )$ and $G_{21}^{\lessgtr }(\omega )+G_{12}^{\lessgtr }(\omega )$ are pure imaginary, while $G_{21}^{\lessgtr }(\omega )-G_{12}^{\lessgtr }(\omega )$ is real. In appendix \ref{cc} we show that the current from the left lead to the molecule equals that from the molecule to the right lead, so that the current is conserved within the NCA. Other approximations, like perturbation theory in the Coulomb repulsion out of the electron-hole symmetric point, do not conserve the current \cite{scali}, unless some tricks are used \cite{levy,none,mon}. \section{Numerical results} \label{res} For the numerical calculations, we assume a constant density of states per spin of the leads $\rho $ between $-D$ and $D$. We take the unit of energy as the total level width of both doublets: $\Gamma =\Gamma _{ii}^{L}+\Gamma _{ii}^{R} =4\pi \rho V^{2}$, $i=1,2$. Also $\Gamma =2\Delta $, where $\Delta $, called the resonance level width, is half the width at half maximum of the spectral density of states of particles for each level in the non-interacting case. We restrict our present study to gate voltages such that $E_{s}-E_{i}\gg \Delta $, for which correlations play a more important role, the Kondo effect develops, and the conductance is higher.
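The failure of proportionate coupling for $\phi \neq 0$, which is the reason the zero-bias conductance cannot be read off from the spectral density, can be made explicit with a two-line numerical check (our own illustration; the prefactor $\pi V^{2}/D$ is set to 1).

```python
import numpy as np

def gamma_matrices(phi, g=1.0):
    # hybridisation matrices of Eq. (gam2), with pi V^2 / D set to g
    GL = g * np.array([[1, 1], [1, 1]], dtype=complex)
    GR = g * np.array([[1, np.exp(1j * phi)], [np.exp(-1j * phi), 1]])
    return GL, GR

def proportional(GL, GR):
    return np.allclose(GR, (GR[0, 0] / GL[0, 0]) * GL)

GL, GR = gamma_matrices(0.0)
print(proportional(GL, GR))            # True: para position

GL, GR = gamma_matrices(2 * np.pi / 3)
print(proportional(GL, GR))            # False: ortho/meta positions
```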
Without loss of generality, we take $\epsilon _{F}=E_{s}=0$, where $\epsilon _{F}$ is the Fermi level of the leads without applied bias voltage. We define $E_d=E_1$, $\delta=E_2-E_1$. For a bias voltage $V_b$, the chemical potentials are $\mu _{L}=eV_b/2$, $\mu _{R}=-eV_b/2$. In the numerical results an important low-energy scale is the Kondo temperature $T_K$. It is known that for impurity Anderson models like ours with $\delta=0$ in the Kondo regime, the spectral density shows a charge-transfer peak at energy $E_d$ below the Fermi energy, with a total width of the order of $N \Gamma$ for SU(N) models \cite{fcm,lim,logan}. For finite on-site Coulomb repulsion $U$ there is another charge-transfer peak at energy $E_d+U$, but this is shifted to infinite energies in our case. In addition, there is a narrow peak near the Fermi energy of width of the order of $T_K$. In this work we define $T_K$ as the half width at half maximum of this peak. For finite $\delta$ additional peaks appear, as shown in Section \ref{spec}. \subsection{Conductance out of equilibrium} \label{condu} \begin{figure}[tbp] \includegraphics[width=7.0cm]{fig2a.eps} \\ \includegraphics[width=7.0cm]{fig2b.eps} \\ \includegraphics[width=7.0cm]{fig2c.eps} \\ \caption{(Color online) Differential conductance at low temperature through a benzene molecule as a function of bias voltage for several values of the energy of the doublets, with the leads connected in the \emph{para} (red dashed line) and \emph{ortho} or \emph{meta} (full black line) positions.} \label{didv} \end{figure} In Fig. \ref{didv} we show the differential conductance $G=dI/dV_b$ as a function of bias voltage $V_b$, for the leads connected at 180 degrees (in the \emph{para} position), for which $\phi =0$, and in the other two positions ($\phi = \pm 2\pi /3$), for several values of the energy of the doublets $E_1=E_2=E_d$.
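Conductance curves such as those in Fig.~\ref{didv} are obtained by numerically differentiating the current, as noted above. A minimal central-difference sketch is shown below; the toy current $I(V_b)$ with a narrow zero-bias peak in $dI/dV_b$ is chosen only for illustration and is not the NCA result.

```python
import math

def conductance(I, Vb, dV=1e-4):
    # central-difference estimate of G = dI/dV_b, usable down to Vb -> 0
    return (I(Vb + dV) - I(Vb - dV)) / (2 * dV)

Tk = 0.05                          # toy Kondo scale (illustrative)
I = lambda V: math.atan(V / Tk)    # toy current: dI/dV = (1/Tk)/(1 + (V/Tk)^2)

G0 = conductance(I, 0.0)           # ~ 1/Tk at the top of the zero-bias peak
Ghalf = conductance(I, Tk)         # falls to half maximum at |Vb| ~ Tk
print(G0 * Tk, Ghalf / G0)
```

For this toy $I(V_b)$ the half width at half maximum of $G(V_b)$ equals $T_k$, mimicking the scaling of the central peak with the Kondo temperature.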
The charge-transfer energy $\epsilon_F-E_d$ is the energy necessary to take a particle (electron or hole) from the Fermi energy and add it to the localized singlet, forming a doublet, in the absence of the hopping to the leads. It is tuned by modifying the gate voltage $V_g$: $E_d$ decreases with increasing $V_g$. The temperature in the curves was fixed at $T=0.05 T_K$. Since we have assumed symmetric coupling to both leads, $G(-V_b)=G(V_b)$ and only positive $V_b$ are displayed in the figure. For the \emph{para} position, as explained in Section \ref{model}, the problem is equivalent to transport through a single-level quantum dot, and $G$ shows a single peak centered near zero voltage, of width of the order of $2 k T_K/e$ \cite{fcm2}. We remind the reader that half the width of $G(V_b)$ for $T=0$ times the electric charge $e$, half the width of the spectral density, and the temperature (times the Boltzmann constant) $T$ at which $G(T)$ for $V_b=0$ falls to half of the maximum value are all of the same order \cite{fcm2}. This is natural from the expected universal behavior of the (single-level) Anderson model in the Kondo regime, in which only one energy scale $T_K$ survives. This Kondo temperature varies exponentially with the charge-transfer energy $E_d$, and as a consequence, the width of the central peak observed in the figure also decreases exponentially with increasing $|E_d|$, as observed in Fig. \ref{didv}. When the molecule is attached to the leads at 60 or 120 degrees, three main changes occur in $G(V_b)$ in comparison to the previous case: i) the maximum of $G(0)$ is lower, ii) the width of the central peak is narrower and scales in a different way with $E_d$, and iii) two new peaks appear at finite $V_b$. Change i) is due to the effect of partial destructive interference between the currents transmitted by the two doublets.
In the \emph{para} position, for which all hybridizations to the leads have the same sign ($\phi =0$), $G(0)$ equals the quantum of conductance $2 e^2/h$ for symmetric coupling to the leads (as we have assumed in the present calculations). In the opposite case (hypothetical for benzene) of $\phi = \pi$, there is perfect destructive interference and $G$ vanishes \cite{desint,izum2}. Transport through the benzene molecule connected in the \emph{ortho} or \emph{meta} positions corresponds to an intermediate situation ($\phi = \pm 2\pi /3$), and therefore an intermediate value of $G(0)$ is expected. For asymmetric coupling to the leads, as is usually the case in transport through molecules in devices built by electromigration \cite{roch,serge}, $G(0)$ decreases strongly, and it is not possible to distinguish between different positions of the leads connected to the benzene molecule from the maximum value of $G$. Note that this value decreases slightly with decreasing $T_K$. The feature that is probably most useful for distinguishing experimentally among the different positions of the conducting leads is the presence of the side peaks in $G(V_b)$, which seem to depend weakly on the charge-transfer energy $\epsilon_F-E_d$, in contrast to the width of the central peak. \begin{figure}[tbp] \includegraphics[width=7.0cm ]{fig3.eps} \caption{(Color online) Differential conductance for $\phi=\pi$ and different values of the level splitting. Other parameters are $E_d=-4$, $T=0.05 T_K$.} \label{didvpi} \end{figure} In order to help in the interpretation of the results, we calculate $G(V_b)$ for $\phi=\pi$, but allowing for a finite splitting $\delta=E_2-E_1$ between both doublets. $\delta$ acts as a symmetry-breaking field on the SU(4) Kondo effect \cite{desint,fcm}. The result is shown in Fig. \ref{didvpi}. Clearly, $G(V_b)$ is qualitatively similar to the corresponding result for benzene with leads connected in the \emph{ortho} or \emph{meta} positions.
The conductance, which vanishes in the SU(4) limit $\delta=0$, is restored by a finite $\delta$, and two peaks at $eV_b= \pm \delta$ appear. This similarity suggests interpreting the results for benzene starting from those for $\phi=\pi$, $\delta=0$, and treating the difference between the coupling $V^R_2$ for $\phi=\pm 2 \pi/3$ and $\phi=\pi$ as a perturbation. This perturbation, in second order, introduces, among other effects, an effective mixing between the levels, proportional to $V^2/|E_d|$, which leads to a splitting of the doublets. In fact, the position of the satellite peaks in Fig. \ref{didv} (ranging from 0.3 for $E_d=-3$ to 0.22 for $E_d=-5$) is roughly consistent with a $1/|E_d|$ dependence. Note that in the case $\phi=0$, in which this perturbation is expected to be the largest, only the symmetric combinations of left- and right-lead states hybridize with the impurity, while the antisymmetric ones remain decoupled. The spectral density of the latter is a delta function $\delta(\omega-E_d)$, without weight near the Fermi energy. This makes it clear that the effective hopping of the two resulting split doublets is actually different, and that the proposed interpretation is rather qualitative. \subsection{Spectral densities} \label{spec} \begin{figure}[tbp] \includegraphics[width=7.0cm ]{fig4.eps} \caption{(Color online) Equilibrium spectral density for the benzene doublets as a function of frequency in the \emph{ortho} and \emph{meta} positions for different temperatures.} \label{rho} \end{figure} \begin{figure}[tbp] \includegraphics[width=7.0cm ]{fig5.eps} \caption{(Color online) Total spectral density at equilibrium as a function of frequency for the effective model with $\phi=\pi$, $T=0.05 T_K$ and different values of the level splitting.} \label{rhopi} \end{figure} As a further test of the interpretation outlined above, we compare the spectral density of states of both models.
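The rough $1/|E_d|$ trend of the satellite positions can be made explicit with the numbers read off Fig. \ref{didv}; in the sketch below the proportionality constant is simply fixed by the $E_d=-3$ point, purely for illustration:

```python
# Satellite-peak positions (units of Gamma) read off Fig. 2 of the text.
observed = {-3: 0.30, -5: 0.22}

# An effective splitting proportional to V^2/|E_d| implies peak
# positions proportional to 1/|E_d|; fix the constant at E_d = -3.
c = observed[-3] * 3
predicted = {e_d: c / abs(e_d) for e_d in observed}
assert abs(predicted[-5] - 0.18) < 1e-9

# The ~20% deviation from the observed 0.22 is in line with the
# "rather qualitative" character of the interpretation.
rel_dev = abs(predicted[-5] - observed[-5]) / observed[-5]
assert rel_dev < 0.25
```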
The spectral density \begin{equation} \rho_{i \sigma} (\omega )=(G_{{\bf d}ii \sigma}(\omega + i\eta) - G_{{\bf d}ii \sigma}(\omega - i\eta))/(2 \pi i) \label{rhos} \end{equation} of both doublets for benzene connected to the leads in the \emph{ortho} or \emph{meta} positions is represented in Fig. \ref{rho} for different temperatures and energies near the Fermi energy (the charge-transfer peak \cite{fcm} is not shown). As $T$ decreases below $T_K$, not only does a peak develop near the Fermi energy, but two side peaks (the most prominent at positive frequencies) are also clearly present. In Fig. \ref{rhopi} we show the total spectral density for $\phi=\pi$, very low temperatures, and different values of $\delta$. In this case, the peak near the Fermi energy corresponds to the lowest doublet and the peak at energy near $\delta$ corresponds to the highest doublet \cite{fcm}. In contrast, each peak in Fig. \ref{rho} is expected to come from some linear combination of both doublets which is not easy to identify. Another difference apparent in the figure is that the side peak at positive frequencies for benzene is sharper and more asymmetric than the corresponding one for $\phi=\pi$ and finite $\delta$. This might be due in part to a smaller effective hybridization of the excited doublet in the case of benzene, leading to a narrower peak, as in the limit $\phi \rightarrow 0$ discussed above. In spite of these differences, the spectral density for $\phi=\pi$ and finite $\delta$ shows the development of a satellite peak departing from the central one as $\delta$ increases. The spectral density for benzene shows the same qualitative features (a peak near the Fermi energy and another one at positive frequencies), which can be interpreted, in a qualitative first approximation, as coming from an effective $\delta$.
This peak at finite energy in turn gives rise to the side peaks in $G(V_b)$ (although the spectral densities are modified by the applied bias voltage and the peaks are blurred). \subsection{Temperature dependence of the conductance} \label{temper} \begin{figure}[tbp] \includegraphics[width=7.0cm ]{fig6.eps} \caption{(Color online) Conductance as a function of temperature for leads connected in the \emph{ortho} or \emph{meta} position and several values of the energy of the doublets.} \label{temp} \end{figure} In Fig. \ref{temp}, we show the evolution with temperature of the conductance $G(T)$ for $V_b=0$, several values of $E_d$, and conducting leads in the \emph{ortho} or \emph{meta} position. $G(T)$ was obtained by differentiating the current, as explained in Section \ref{curre}. In order to display all curves on the same scale, a logarithmic scale of relative temperatures $T/T_K$ is used, where the characteristic energy scale $T_K$ is given by the half width at half maximum of the equilibrium spectral density $\rho_{i \sigma} (\omega )$. The resulting value of $T_K$ is $3 \times 10^{-2}$, $3.6 \times 10^{-3}$, and $4.8 \times 10^{-4}$ for $E_d=$ -3, -4, and -5, respectively. The peak at finite energy in the spectral density at low temperatures (which lies between 0.3 for $E_d=-3$ and 0.22 for $E_d=-5$) results in a structure in $G(T)$ at $T/T_K$ near 10, 70, and 400 for $E_d=$ -3, -4, and -5, respectively. While for $E_d=-3$ this structure is hidden in the main peak, a kind of plateau is evident for the other values of $E_d$ at the corresponding positions, as shown in the inset of Fig. \ref{temp}.
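The extraction of an energy scale as the half width at half maximum of a spectral peak can be illustrated on a noninteracting resonant level, for which Eq. (\ref{rhos}) with $G(\omega+i\eta)=1/(\omega-E_d+i\Delta)$ reduces to a Lorentzian of half-width $\Delta$ normalized to one (sign convention chosen such that $\rho\geq 0$; the parameters below are illustrative):

```python
import numpy as np

E_d, Delta = -4.0, 0.5
omega = np.linspace(-60.0, 60.0, 400001)
d_omega = omega[1] - omega[0]

# Retarded Green function of a noninteracting resonant level.
G_ret = 1.0 / (omega - E_d + 1j * Delta)
rho = -G_ret.imag / np.pi   # (Delta/pi) / ((omega - E_d)^2 + Delta^2)

# Normalization: one state per spin per level.
assert abs(np.sum(rho) * d_omega - 1.0) < 1e-2

# The half width at half maximum of the peak equals Delta.
peak = rho.max()
above_half = omega[rho >= 0.5 * peak]
hwhm = 0.5 * (above_half[-1] - above_half[0])
assert abs(hwhm - Delta) < 1e-2
```

In the interacting problem the same half-width construction is applied to the narrow many-body peak near the Fermi energy rather than to a simple Lorentzian.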
This structure is reminiscent of the plateau observed in transport experiments on the ``triplet'' side of the quantum phase transition in C$_{60}$ \cite{roch,serge}, and explained by NCA \cite{st1,st2,serge} and NRG calculations with improved resolution \cite{serge}. In our case, to represent accurately the high-energy features, it was necessary to use a special mesh on the frequency axis with more points near the high-energy peak in the pseudofermion spectral densities (see appendix \ref{deta}). In previous work \cite{desint,fcm}, we have found that for $\phi=\pi$ and any level splitting $\delta=E_2-E_1$, $T_K$ (except for a constant of order unity) is very accurately given by the expression \begin{equation} T_{K}=\left\{ (D+\delta )D\exp \left[ \pi E_{1}/(2\Delta )\right] +\delta ^{2}/4\right\} ^{1/2}-\delta /2, \label{tk} \end{equation} obtained from a simple variational wave function that generalizes the proposal of Varma and Yafet \cite{varma} for the simplest impurity Anderson model. This equation interpolates between the SU(4) limit $\delta=0$, for which $T_K$ has the largest value $T_K^4=D\exp \left[ \pi E_{1}/(4\Delta )\right]$, and the one-level SU(2) limit $\delta \rightarrow +\infty$. The effect of $\delta$ on $T_K$ is small when $\delta < T_K^4$, while for larger $\delta$, $T_K$ decreases strongly. Note that $T_K^4$ coincides with the Kondo temperature of the equivalent one-level effective Anderson model for benzene connected in the \emph{para} position (because the effective one-level resonant level width contains a factor 2, see Section \ref{model}). Therefore, we can interpret the fact that the effective energy scale in the \emph{ortho} and \emph{meta} positions is smaller than that in the \emph{para} position (compare the widths of the central peaks in Fig. \ref{didv}) as an effect of the effective level splitting caused by a phase $\phi$ different from $\pi$.
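Eq. (\ref{tk}) is easy to evaluate directly; the short sketch below (with an illustrative half-bandwidth $D=10$ in units of $\Gamma$, and $E_1=-4$, $\Delta=0.5$) confirms the limits quoted above: the SU(4) value $T_K^4$ at $\delta=0$, the weak sensitivity for $\delta<T_K^4$, and the crossover to the one-level SU(2) scale $D\exp[\pi E_1/(2\Delta)]$ at large $\delta$:

```python
import numpy as np

D, E1, Delta = 10.0, -4.0, 0.5   # illustrative parameters, Gamma = 1

def t_k(delta):
    """Variational Kondo scale of Eq. (tk) as a function of the splitting."""
    x = (D + delta) * D * np.exp(np.pi * E1 / (2.0 * Delta))
    return np.sqrt(x + delta**2 / 4.0) - delta / 2.0

# SU(4) limit: T_K(0) = T_K^4 = D exp(pi E1 / (4 Delta)).
tk4 = D * np.exp(np.pi * E1 / (4.0 * Delta))
assert abs(t_k(0.0) - tk4) < 1e-12

# delta < T_K^4 barely changes T_K; larger delta suppresses it strongly.
assert t_k(0.1 * tk4) > 0.9 * tk4
assert t_k(100.0 * tk4) < 0.1 * tk4

# One-level SU(2) limit for delta -> infinity: D exp(pi E1 / (2 Delta)).
tk2 = D * np.exp(np.pi * E1 / (2.0 * Delta))
assert abs(t_k(1e6) - tk2) / tk2 < 1e-3
```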
In other words, it seems that the position of the side peaks in $G(V_b)$ (or spectral densities) and the plateau in $G(T)$ indicate an effective $\delta$, which decreases the characteristic energy scale $T_K$. While we do not expect Eq. (\ref{tk}) to describe $T_K$ accurately using this effective $\delta$, the fact that the difference between the energy scales for the \emph{para} and the other positions is more noticeable for $\delta$ larger than $T_K^4$ ($T_K$ for the \emph{para} case) suggests that this equation might be useful for a qualitative understanding, and supports the interpretation of our results. \section{Summary and discussion} \label{sum} We have constructed the effective Hamiltonian for transport through a symmetric ring (point group including $C_{nv}$ as a subgroup) with one orbital per site, including a singlet and two degenerate doublets of a neighboring configuration of the isolated ring. This includes, for example, the ground state of the isolated ring for one electron per site, and another with one added electron or hole. The resulting effective generalized Anderson Hamiltonian describes, however, more general situations. An extension to the case in which a magnetic flux is added (reducing the symmetry to $C_{n}$ and breaking the doublet degeneracy) is trivial. Partial destructive interference occurs in general when two levels with $N$ particles are close in energy to another one with $N\pm 1$ particles \cite{rinc}. Therefore, the situation of one singlet and two doublets appears frequently. From the wave vectors of these states, it is easy to calculate the phase $\phi$ of the effective model following the lines of Section \ref{model}. Therefore, we expect that our formalism can be used in a variety of physically relevant systems.
We have used the resulting effective Hamiltonian to calculate the non-equilibrium transport through a benzene molecule with conducting leads connected in different positions, using an appropriate generalization of the NCA to include the Kondo effect as well as effects of partial destructive interference between the transport channels through both doublets. While for leads connected in the \emph{para} position the conductance $G$ as a function of voltage and temperature is similar to that well known for the case of only one doublet, in the other positions the peak in $G(V_b)$ near zero bias voltage $V_b$ is narrower and of smaller intensity, which serves as an indication of the relative position of the conducting leads. In addition, $G(V_b)$ displays two additional side peaks at finite $\pm V_b$, and a characteristic plateau is present in the conductance as a function of temperature $G(T)$. These finite-energy features are probably easier to detect experimentally as indications of the relative position of the conducting leads. These results for the leads connected in the \emph{ortho} and \emph{meta} positions, which are due to partial destructive interference, can be interpreted starting from those with total destructive interference (corresponding to a phase $\phi=\pi$), for which the effective model has SU(4) symmetry, and adding a symmetry-breaking splitting $\delta$ between effective doublets caused by the remaining term treated as a perturbation. For our results to be valid, the coupling to the leads should be sufficiently small, so that excited states not included in the effective Hamiltonian do not play an important role. The effect of these states can be estimated for each particular case; see for example Ref. \cite{lobos}. The magnitude of the coupling can be controlled, for example, in break junctions \cite{park,parks,reed}.
Our effective model, including a finite level splitting $\delta$ from the beginning and a general phase $\phi$, describes carbon nanotube QDs with disorder-induced valley mixing \cite{grove}. Therefore, our formalism, supplemented by realistic calculations of $\delta$ and $\phi$, might be used to calculate the conductance of these systems. In this work, we have neglected the effect of phonons, which can modify the effective parameters of the model \cite{har1,pablo} and also affect the interference phenomena \cite{har2}. We have also neglected electrostatic interactions between the leads and the molecules. In the weak-coupling regime, it has been shown that the interaction with image charges for the doped molecules leads to a symmetry breaking and a splitting of the degenerate levels that also affects the interference phenomena \cite{kaa}. As we have shown, in the specific case of benzene, the coupling to the leads already gives rise to a splitting of this kind. Therefore, we expect these electrostatic effects to be more relevant in the case of perfect destructive interference. \section*{Acknowledgments} We thank CONICET from Argentina for financial support. This work was partially supported by PIP 11220080101821 of CONICET and PICT R1776 of the ANPCyT, Argentina.
\section{Introduction\label{sec:intro}} Two-dimensional (2D) materials have provided an exciting new frontier for experimental and theoretical nanoscience in the fifteen years since the first isolation of atomically thin layers of graphene by mechanical exfoliation from graphite \cite{Novoselov_2004,Geim_2007}. In addition to graphene and its derivatives, the last few years have witnessed growing interest in semiconducting 2D materials such as transition-metal dichalcogenides \cite{Wang_2012,Mak_2010,Splendiani_2010} and phosphorene \cite{Li_2014,Koenig_2014,Liu_2014}. A recent trend has been the study of stacked heterostructures of 2D materials \cite{Geim_2013}. Heterostructures involving graphene and hexagonal boron nitride (hBN) have received particular attention, because monolayer hBN is an insulating, atomically thin 2D material with a similar lattice constant to graphene and is therefore the ideal substrate for graphene-based electronics \cite{Dean_2010,Xue_2011,Bresnehan_2012,Lee_2012,Liu_2013}. Monolayer or few-layer hBN is potentially an important component in novel electronic devices based on 2D materials, such as vertical tunneling diodes \cite{Britnell_2012,Britnell_2013} and supercapacitors \cite{Shi_2014}. In addition, due to the slight lattice mismatch, graphene placed on hBN exhibits a moir\'{e} pattern with a period of up to 14 nm \cite{Yankowitz_2012}, and the resulting superlattices allow the experimental observation of exotic phenomena such as the formation of Hofstadter's butterfly \cite{Hofstadter_1976} features in the band structure in the presence of a magnetic field \cite{Ponomarenko_2013,Dean_2013}. Despite the importance of 2D hBN in current or proposed graphene-based electronics research, the properties of monolayers of hBN are not currently well characterized due to the experimental challenge of isolating and studying monolayers. 
In this paper, we use advanced theoretical electronic-structure methods to provide basic information about the size and nature of the electronic band gap of monolayers of hBN\@. We find that the gap of hBN monolayers is in principle indirect (so that optical transitions involve the absorption or emission of phonons), and that the quasiparticle gap is considerably enhanced relative to the bulk. However, the conduction band around its minimum at the $\Gamma$ point is a free-electron-gas-like state that is only weakly bound to the hBN monolayer and has a relatively small spatial overlap with the valence states \cite{Blase_1995}; hence the dipole matrix element for an optical transition from the valence-band maximum to the conduction-band minimum is inevitably small. Furthermore, the precise energy of a state that extends outside the layer will be strongly affected by the environment in which the layer finds itself (substrate, encapsulation, etc.). Hence we expect inverse photoemission measurements to show the energy of the conduction band at $\Gamma$ to depend strongly on the environment. Likewise, the effective height of the energy barrier presented by an hBN monolayer in a vertical-tunneling experiment will depend sensitively on the environment of the layer. Bulk hBN (also known as white graphite) consists of layers of boron and nitrogen atoms occupying the $A$ and $B$ hexagonal sublattice sites of a 2D honeycomb lattice. These layers are weakly bound together by van der Waals interactions, resulting in both the lubricating properties of hBN and the possibility of isolating monolayers by mechanical exfoliation. Bulk hBN adopts an $AA^\prime$ stacking arrangement in which each boron atom (with a positive partial charge) has a nitrogen atom (with a negative partial charge) vertically above it and \textit{vice versa}. Whereas pristine graphene is a gapless semiconductor, monolayer hBN is an insulator due to the lack of sublattice symmetry. 
Bulk hBN is semiconducting, with experimental estimates of the band gap ranging from 5.2(2) to 7.1(1) eV \cite{Hoffmann_1984,Carpenter_1982,Watanabe_2004,Shi_2010,Cassabois_2016}. Watanabe \textit{et al.}\ find the quasiparticle band gap to be direct, with a value of 5.971 eV in a single-crystal sample \cite{Watanabe_2004}. More recent experimental work by Cassabois \textit{et al.}\ has indicated that bulk hBN is in fact an indirect semiconductor with a quasiparticle band gap of 6.08 eV \cite{Cassabois_2016}. The experimental work of Cassabois \textit{et al.}, together with subsequent theoretical works \cite{Cannuccia_2019,Paleari_2019}, has elucidated the role of vibrational effects in phonon-assisted indirect optical transitions in bulk hBN\@. Many-body $GW$ calculations also indicate that bulk hBN is an indirect-gap semiconductor, with a fundamental gap of 5.95--6.04 eV between the valence-band maximum (which is near the $K$ point, on the $\Gamma \rightarrow K$ line) and the conduction-band minimum at $M$ \cite{Cappellini_1996,Arnaud_2006,Wirtz_2006}. One of the many reasons for the high levels of interest in 2D materials is that the electronic properties of monolayers often differ significantly from those of the bulk layered material. Density functional theory (DFT) within the local density approximation (LDA) predicts the indirect band gap of monolayer hBN to be 4.6 eV \cite{Blase_1995}, and the $GW_0$ shift in the quasiparticle band gap is about 3.6 eV \cite{Wirtz_2006}, giving a gap of 8.2 eV for the monolayer. Clearly, the gap is considerably enhanced on going from bulk hBN to a monolayer. Bulk hBN is believed to exhibit a large exciton binding energy, with values of 0.7--1.2 eV \cite{Wirtz_2006,Arnaud_2006,Cunningham_2018} predicted by $GW$-Bethe-Salpeter-equation ($GW$-BSE) calculations.
On the other hand, experimental measurements find the exciton binding energy to be only 0.13--0.15 eV \cite{Watanabe_2004,Cassabois_2016}, although there are questions over the interpretation of these experimental results \cite{Paleari_2019}. Exciton binding is further enhanced in a free-standing monolayer due to the reduction in screening. Indeed, $GW$-BSE calculations find that the exciton binding energy increases to 2.1 eV in the monolayer \cite{Wirtz_2006}. Isolating monolayer hBN by exfoliation from bulk hBN has proved challenging, although Elias \textit{et al.}\ have recently succeeded in growing atomically thin samples of hBN on graphite substrates \cite{Elias_2019}. Their reflectance and photoluminescence measurements indicate a direct gap of 6.1 eV for hBN on graphite. However, the electronic properties of an isolated hBN monolayer (i.e., a freely suspended sample) are at present only accessible through theoretical calculations. Unfortunately, DFT systematically underestimates electronic band gaps and even many-body $GW$ methods \cite{Hedin_1965} suffer from limitations, as exemplified by the disagreement between the self-consistent and non-self-consistent variants of the method when applied to hBN \cite{Wirtz_2006}. We have therefore made use of quantum Monte Carlo (QMC) methods \cite{Ceperley_1980,Foulkes_2001} to study many-body effects in the band gap. We have calculated the electronic band gaps for excitations from the valence band at the $K$ point of the hexagonal Brillouin zone ($K_{\rm v}$) to the conduction band at the $\Gamma$ and $K$ points ($\Gamma_{\rm c}$ and $K_{\rm c}$) of monolayer hBN\@. In our DFT and $GW$ calculations, and our QMC calculations for bulk hBN, we have also considered the conduction band at the $M$ point ($M_{\rm c}$). Furthermore, we have investigated the effects of the vibrational renormalization of the electronic structure at the DFT level. 
We have made use of two QMC methods: variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC) \cite{Foulkes_2001}. In VMC, Monte Carlo integration is used to evaluate quantum mechanical expectation values with respect to trial wave-function forms of arbitrary complexity. Free parameters in the trial wave functions are optimized by a variational approach. DMC involves simulating drifting, diffusion, and birth/death processes governed by the Schr\"{o}dinger equation in imaginary time to project out the ground-state component of a trial wave function \cite{Ceperley_1980}. The fixed-node approximation \cite{Anderson_1976} is used to maintain fermionic antisymmetry. All our QMC calculations were performed using the \textsc{casino} code \cite{casino}. QMC methods have only recently been applied to calculate the energy gaps of 2D materials \cite{Hunt_2018,Frank_2018}. A major challenge is the need to extrapolate the QMC band gaps to the thermodynamic limit of large system size, because the computational expense of the method necessitates the use of relatively small simulation supercells subject to periodic boundary conditions \cite{Hunt_2018}. In this article we investigate finite-size effects in the band gap of hBN\@. The rest of this article is arranged as follows. In Sec.\ \ref{sec:methodology} we describe our DFT, $GW$, and QMC methodologies. We present our results in Sec.\ \ref{sec:results}. Finally we draw our conclusions in Sec.\ \ref{sec:conclusions}. We use Hartree atomic units (a.u.)\ throughout, in which $\hbar=m_{\rm e}=|e|=4\pi\epsilon_0=1$, except where other units are given explicitly. 
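The extrapolation to the thermodynamic limit mentioned above can be sketched as a fit of supercell gaps against inverse system size. The numbers below are made up and serve only to show the bookkeeping, under the assumption of a leading finite-size correction that decays as the inverse linear supercell size:

```python
import numpy as np

# Hypothetical gaps (eV) for n x n supercells; constructed to follow
# gap(n) = gap_inf + b/n exactly, for illustration only.
n = np.array([3.0, 4.0, 5.0, 6.0])
gap = np.array([9.5, 9.0, 8.7, 8.5])

# Fit gap = gap_inf + b/n and read off the thermodynamic limit.
b, gap_inf = np.polyfit(1.0 / n, gap, 1)

assert abs(gap_inf - 7.5) < 1e-6   # extrapolated infinite-size gap
assert abs(b - 6.0) < 1e-6         # strength of the finite-size correction
```

In practice the functional form of the correction must itself be justified; this is part of the finite-size analysis discussed later in the paper.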
\section{Computational methodology\label{sec:methodology}} \subsection{DFT} \subsubsection{Geometry optimization, lattice dynamics, and band-structure calculations \label{sec:dft_methodology}} We performed our DFT calculations using the LDA, the Perdew-Burke-Ernzerhof (PBE) generalized-gradient-approximation exchange-correlation functional \cite{Perdew_1996}, and the Heyd-Scuseria-Ernzerhof (HSE06) hybrid functional \cite{Heyd_2003,Krukau_2006}. We used the \textsc{castep} \cite{castep} and \textsc{vasp} \cite{vasp} plane-wave-basis DFT codes. Our DFT-LDA and DFT-PBE relaxations of the lattice parameter used an artificial periodicity of 21.17 {\AA} in the out-of-plane direction, a $53 \times 53$ Monkhorst-Pack ${\bf k}$-point grid, ultrasoft pseudopotentials, and a plane-wave cutoff of 680 eV\@. The same parameters were used in our calculations of the electronic band structure. Our phonon calculations used density functional perturbation theory \cite{castep_dfpt}, norm-conserving DFT pseudopotentials, a plane-wave cutoff energy of 1361 eV, an artificial periodicity of 26.46 {\AA}, and a $53 \times 53$ Monkhorst-Pack ${\bf k}$-point grid for both the electronic orbitals and the vibrational normal modes. In our DFT-HSE06 calculations of the lattice parameter and band structure we used an artificial periodicity of 15.875 {\AA} in the out-of-plane direction, an $11 \times 11$ Monkhorst-Pack ${\bf k}$-point grid, norm-conserving DFT pseudopotentials, and a plane-wave cutoff of 816 eV\@. \subsubsection{QMC trial wave function generation\label{sec:dft_wf_gen}} The DFT calculations performed to generate trial wave functions for our QMC calculations used Dirac-Fock pseudopotentials \cite{Trail_2005a,Trail_2005b}, a plane-wave cutoff energy of 2721 eV, and, in the monolayer case, an artificial periodicity of 18.52 {\AA} (apart from the $3\times 3$ supercell, where the plane-wave cutoff energy and artificial periodicity were 2177 eV and 13.35 {\AA}, respectively). 
We found that replacing PBE ultrasoft pseudopotentials with Trail-Needs Dirac-Fock pseudopotentials changes the monolayer $K_{\rm v} \rightarrow \Gamma_{\rm c}$ and $K_{\rm v} \rightarrow K_{\rm c}$ DFT-PBE gaps from $4.69$ to $4.71$ eV and from $4.67$ to $4.79$ eV, respectively. In these calculations the lattice parameter is fixed at the DFT-PBE value, $a=2.512$ {\AA}, obtained with the PBE ultrasoft pseudopotentials. This suggests that the choice of pseudopotential introduces an uncertainty of around $0.1$ eV into our QMC gap estimates. \subsection{$GW$(-BSE) calculations} In the $GW$ approximation, many-body interactions are taken into account in a quasiparticle picture in which the screened Coulomb interaction $W$ between particles is included in the self-energy to first order. Varying levels of approximation are possible: the so-called single-shot $G_0 W_0$ approach calculates the Green's function $G$ and the dielectric screening in the Coulomb interaction $W$ from DFT wave functions, while the partially and fully self-consistent $GW_0$ and $GW$ methods iterate one or both of these quantities until self-consistency is achieved. Excitonic effects in the optical absorption can be taken into account by solving the Bethe-Salpeter equation (BSE) following the $GW$ calculations. We performed $G_0 W_0$(-BSE) and $GW_0$(-BSE) calculations for monolayer hBN, and for test purposes also $G_0 W_0$ and $GW_0$ calculations for bulk hBN\@. In our $GW$ calculations we used the \textsc{vasp} \cite{vasp} plane-wave-basis code for bulk hBN\@. The HSE06 functional \cite{Heyd_2003,Krukau_2006} was used to calculate the orbitals and their derivatives as input for the single-shot $G_0W_0$ calculations \cite{Shishkin_2006}. Convergence of the $G_0W_0$ calculation with respect to its principal convergence parameters was achieved using a $12 \times 12 \times 12$ Monkhorst-Pack ${\bf k}$-point grid, with 24 electronic bands taken into account, and a plane-wave cutoff energy of 400 eV\@. 
These parameters converge the band gap of bulk hBN to within 0.1 eV\@. We used the same parameter set to compute the partially self-consistent $GW_0$ band gap. The results of the bulk calculations are discussed in Sec.\ \ref{sec:dmc_gap_results}. For the monolayer $GW$ calculations we used the \textsc{BerkeleyGW} code \cite{berkeleygw} in order to be able to treat a much larger number of empty bands. In the bulk, the $G_0W_0$ gap changes by less than 50 meV when the number of electronic bands is increased from 24 to 48. In contrast, the monolayer requires 1200 bands to be taken into account for the same level of convergence; otherwise the dielectric function is too inaccurate to predict reliable self-energy corrections. The ${\bf k}$-point grid for the monolayer calculations was set to $24 \times 24 \times 1$, while the plane-wave cutoff during the many-body calculations was set to 408.17 eV (30 Ry). In these calculations, the DFT wave functions were calculated using the PBE functional. For the monolayer, the optical absorption coefficient was also calculated at both the single-shot and the $GW_0$ level by solving the Bethe-Salpeter equation. In both cases we took 6 empty and 4 occupied bands into account. Truncation of the Coulomb interaction was applied in the monolayer calculations. \subsection{QMC calculations} \subsubsection{Evaluating quasiparticle and excitonic gaps\label{sec:eval_gaps}} To calculate an excitation energy using DMC we exploit the fixed-node approximation \cite{Anderson_1976} and evaluate the difference of the total energies obtained using trial wave functions corresponding to the ground state and the particular excited state of interest \cite{Mitas_1994,Williamson_1998,Towler_2000}. For each excited state an appropriate wave function can be constructed by choosing the occupancies of the orbitals in the Slater determinants (see Sec.\ \ref{sec:trial_wfs}). 
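The bookkeeping of these total-energy differences is simple; the following sketch uses placeholder energies (arbitrary numbers, not results of this work) for a fixed supercell:

```python
# Placeholder DMC total energies (a.u.) for one supercell.
E_gs = -100.00         # ground state
E_plus = -99.70        # electron added at the conduction-band minimum
E_minus = -99.95       # electron removed from the valence-band maximum
E_promoted = -99.72    # electron promoted from the VBM to the CBM

# Quasiparticle gap: add and remove an electron.
delta_qp = E_plus + E_minus - 2.0 * E_gs
assert abs(delta_qp - 0.35) < 1e-9

# Excitonic gap: promote an electron, keeping the particle number fixed.
delta_ex = E_promoted - E_gs
assert abs(delta_ex - 0.28) < 1e-9

# In the thermodynamic limit delta_ex < delta_qp, the difference being
# the exciton binding energy (finite-size errors can reverse this order).
assert delta_qp - delta_ex > 0
```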
The DMC energy of an excited state is exact if the nodal surface of the trial wave function is exact, as is the case for the ground state, although the DMC energy is only guaranteed to be an upper bound on the energy for certain excited states \cite{Foulkes_1999}. The quasiparticle bands at a particular point may be evaluated as ${\cal E}_i({\bf k})=E^+({\bf k},i)-E^{\rm GS}$ for unoccupied states, where $E^+({\bf k},i)$ is the total energy when an electron is added to the system and occupies band $i$ at wavevector ${\bf k}$ and $E^{\rm GS}$ is the ground-state total energy. For occupied states we evaluate ${\cal E}_i({\bf k})=E^{\rm GS}-E^-({\bf k},i)$, where $E^-({\bf k},i)$ is the total energy when an electron is removed from band $i$ at wavevector ${\bf k}$. The quasiparticle band gap $\Delta_{\rm qp}$ is the difference of the energy bands at the conduction-band minimum (CBM) and valence-band maximum (VBM): \begin{equation} \Delta_{\rm qp}={\cal E}_{\rm CBM}-{\cal E}_{\rm VBM}=E^+_{\rm CBM}+E^-_{\rm VBM}-2E^{\rm GS}. \end{equation} The excitonic gap $\Delta_{\rm ex}$ is defined as the difference in energy when an electron is promoted from the VBM to the CBM\@: \begin{equation} \Delta_{\rm ex}=E^{\rm pr}_{{\rm VBM}\rightarrow {\rm CBM}}-E^{\rm GS}, \end{equation} where $E^{\rm pr}_{{\rm VBM}\rightarrow {\rm CBM}}$ is the total energy evaluated with a trial wave function in which an electron has been promoted from the VBM to the CBM\@. In Sec.\ \ref{sec:spin_states} we investigate whether it is important to construct appropriate wave functions for excitonic spin singlets or triplets when calculating gaps. \subsubsection{Trial wave functions \label{sec:trial_wfs}} We used Slater-Jastrow (SJ) trial wave functions $\Psi=\exp(J) S^\uparrow S^\downarrow$ in our QMC calculations. 
The Slater determinants for up- and down-spin electrons $S^\uparrow$ and $S^\downarrow$ contained Kohn-Sham orbitals generated using the \textsc{castep} plane-wave-basis code \cite{castep} in a three-dimensionally periodic cell, as described in Sec.\ \ref{sec:dft_wf_gen}. However, the orbitals were re-represented in a localized ``blip'' B-spline basis \cite{Alfe_2004} to improve the scaling of the QMC calculations with system size and to allow us, in the monolayer case, to discard the artificial periodicity in the out-of-plane direction. The Jastrow factor $\exp(J)$ is a positive, symmetric, explicit function of interparticle distances. We used the Jastrow form described in Ref.\ \onlinecite{Drummond_2004}, in which the Jastrow exponent $J$ consists of short-range, isotropic electron-electron, electron-nucleus, and electron-electron-nucleus terms, which are polynomials in the interparticle distances, as well as long-range electron-electron terms expanded in plane waves. The free parameters in our Jastrow factors were optimized by unreweighted variance minimization \cite{Umrigar_1988a,Drummond_2005}. Within Hartree-Fock theory, band gaps are significantly overestimated because the theory includes no correlation to keep electrons apart and therefore tends to over-localize electronic states. DMC retrieves a large but finite fraction of the correlation energy. Assuming the fraction of correlation energy retrieved in the ground state is similar to or greater than the fraction retrieved in an excited state, we expect the DMC gaps to be upper bounds on the true gaps. If we increase the fraction of correlation energy retrieved, e.g., by including a backflow transformation \cite{Kwon_1993,Lopez_2006}, then (if anything) we expect to see a decrease in the band gap. We performed some test calculations with Slater-Jastrow-backflow (SJB) trial wave functions \cite{Kwon_1993,Lopez_2006}.
In a backflow wave function the orbitals in the Slater determinant are evaluated not at the actual electron positions, but at quasiparticle positions that are functions of all the particle coordinates. The backflow function, which describes the offset of the quasiparticle coordinates relative to the actual coordinates, contains free parameters to be determined by an optimization method. The Jastrow factor and backflow functions were optimized by VMC energy minimization \cite{Umrigar_2007}. As shown in Table \ref{table:dmc_energies}, backflow lowers the DMC total energies significantly. However, the amount by which backflow reduces the quasiparticle and excitonic gaps is small: about $0.10(3)$ eV on average. We investigated the reoptimization of backflow functions in the supercells in which an electron has been added or removed, finding that reoptimization raises the gap slightly. This is perhaps indicative of static correlation (multireference character) effects in the nodal surface that are not addressed by the use of backflow. Since QMC simulations with backflow are significantly more expensive, and finite-size effects are a potentially dominant source of error in our work, we did not use backflow in our production calculations. \begin{table*}[!htbp] \begin{center} \caption{DMC ground-state (GS) energy per primitive cell, quasiparticle band gap, and excitonic band gap of monolayer hBN in a supercell consisting of $3 \times 3$ primitive cells, as obtained using different trial wave functions and time steps. The ${\bf k}$-vector grid includes both $\Gamma$ and $K$. Where the time step is 0, the reported results have been obtained by linear extrapolation to zero time step. The fact that the excitonic gap is higher than the quasiparticle gap is a manifestation of finite-size error in the uncorrected gaps, as explained in Sec.\ \ref{sec:finite_size}.
\label{table:dmc_energies}} \begin{tabular}{lr@{}lr@{}lr@{}lr@{}lr@{}lr@{}l} \hline \hline & \multicolumn{2}{c}{Time step} & \multicolumn{2}{c}{GS energy $E^{\rm GS}$} & \multicolumn{4}{c}{Quasiparticle gap $\Delta_{\rm qp}$ (eV)} & \multicolumn{4}{c}{Excitonic gap $\Delta_{\rm ex}$ (eV)} \\ \raisebox{1.5ex}[0pt]{Wave fn.} & \multicolumn{2}{c}{(a.u.)} & \multicolumn{2}{c}{(eV/p.\ cell)} & \multicolumn{2}{c}{~~~~$K_{\rm v}\rightarrow \Gamma_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow K_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow \Gamma_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow K_{\rm c}$} \\ \hline SJ &~~~$0$&$.04$ & $-350$&$.716(1)$ &~~~~~~$1$&$.18(3)$ &~~~~~~~$4$&$.21(3)$ &~$6$&$.12(1)$ &~$6$&$.25(2)$ \\ SJ & $0$&$.01$ & $-350$&$.739(3)$ & $1$&$.06(6)$ & $4$&$.22(6)$ & $6$&$.09(3)$ & $6$&$.28(3)$ \\ SJ & $0$& & $-350$&$.747(4)$ & $1$&$.02(9)$ & $4$&$.22(8)$ & $6$&$.08(4)$ & $6$&$.29(4)$ \\ SJB & $0$&$.04$ & $-350$&$.835(3)$ & $1$&$.28(5)$ & $4$&$.28(4)$ & $6$&$.17(3)$ & $6$&$.28(3)$ \\ SJB & $0$&$.01$ & $-350$&$.852(1)$ & $0$&$.97(3)$ & $4$&$.14(2)$ & $6$&$.07(2)$ & $6$&$.24(2)$ \\ SJB & $0$& & $-350$&$.857(2)$ & $0$&$.86(4)$ & $4$&$.09(4)$ & $6$&$.04(2)$ & $6$&$.22(2)$ \\ \hline \hline \end{tabular} \end{center} \end{table*} Apart from these tests we have used the ground-state-optimized Jastrow factor (and backflow function, where applicable) in all our excited-state calculations. The fixed-node SJ-DMC energy does not depend on the Jastrow factor, except via the pseudopotential locality approximation \cite{Mitas_1991}, and so reoptimizing the Jastrow factor in each excited state would be pointless in any case. The single-particle bands at $K$ and $K^\prime$ are degenerate, and hence we can construct multideterminant excited-state wave functions from the degenerate orbitals. We discuss this in Sec.\ \ref{sec:multidets}. 
\begin{table*}[!htbp] \begin{center} \caption{``Hartree-Fock'' VMC (HFVMC), SJ-VMC, SJB-VMC, SJ-DMC, and SJB-DMC ground-state (GS) total energies, energy variances, quasiparticle (QP) gaps, and excitonic gaps for a $3\times 3$ supercell of monolayer hBN\@. The fact that the excitonic gap is higher than the quasiparticle gap is a manifestation of finite-size error in the uncorrected gaps, as explained in Sec.\ \ref{sec:finite_size}. \label{table:hfvmc_vmc_dmc_gaps}} \begin{tabular}{lr@{}lr@{}lr@{}lr@{}lr@{}lr@{}l} \hline \hline & \multicolumn{2}{c}{GS energy $E^{\rm GS}$} & \multicolumn{2}{c}{Var.\ $\sigma^2$} & \multicolumn{4}{c}{QP gap $\Delta_{\rm qp}$ (eV)} & \multicolumn{4}{c}{Ex.\ gap $\Delta_{\rm ex}$ (eV)} \\ \raisebox{1.5ex}[0pt]{Method} & \multicolumn{2}{c}{(eV/p.\ cell)} & \multicolumn{2}{c}{(a.u.)} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow \Gamma_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow K_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow \Gamma_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow K_{\rm c}$} \\ \hline HFVMC & $-341$&$.961(4)$ & $21$&$.39$ &~$2$&$.63(8)$ &~$5$&$.95(8)$ &~$7$&$.13(5)$ &~$7$&$.65(5)$ \\ SJ-VMC & $-349$&$.8780(4)$ & $3$&$.18$ & $2$&$.559(9)$ & $4$&$.593(9)$ & $7$&$.118(6)$ & $6$&$.378(5)$ \\ SJB-VMC & $-350$&$.229(2)$ & $2$&$.11$ & $2$&$.55(4)$ & $4$&$.46(4)$ &$7$&$.18(2)$ & $6$&$.30(2)$ \\ SJ-DMC & $-350$&$.747(4)$ & & & $1$&$.02(9)$ & $4$&$.22(8)$ & $6$&$.08(4)$ & $6$&$.29(4)$ \\ SJB-DMC & $-350$&$.857(2)$ & & & $0$&$.86(4)$ & $4$&$.09(4)$ & $6$&$.04(2)$ & $6$&$.22(2)$ \\ \hline \hline \end{tabular} \end{center} \end{table*} \subsubsection{DMC time step, etc.} The time-step error in the total energy per primitive cell is clearly significant, as shown in Table \ref{table:dmc_energies}; however, there is a partial cancellation of time-step errors when we take differences of total energies to obtain gaps. For the SJ-DMC gaps the time-step errors are of marginal significance. 
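The residual time-step bias can be removed by linearly extrapolating the energies obtained at two time steps to zero time step; a minimal sketch, using the SJ ground-state energies of Table \ref{table:dmc_energies} (the code is illustrative, not part of our workflow):

```python
def extrapolate_to_zero(t1, e1, t2, e2):
    """Linear extrapolation of E(t) = E0 + a*t to t = 0 from two time steps."""
    slope = (e2 - e1) / (t2 - t1)
    return e1 - slope * t1

# SJ-DMC ground-state energies (eV per primitive cell) at time steps
# 0.04 and 0.01 a.u. from the table above:
e0 = extrapolate_to_zero(0.04, -350.716, 0.01, -350.739)
print(round(e0, 3))  # -350.747, the tabulated zero-time-step value
```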
Nevertheless, since we would like to achieve very high accuracy, we have used DMC time steps of 0.01 a.u.\ and 0.04 a.u.\ and extrapolated our results linearly to zero time step. The time-step errors in our SJB-DMC gaps are considerably larger. All our DMC calculations used populations of at least 1024 walkers, making population-control bias negligible. We used Dirac-Fock pseudopotentials \cite{Trail_2005a,Trail_2005b} to represent the boron and nitrogen atoms, including core-polarization corrections \cite{Shirley_1993}. \subsubsection{Comparison of VMC and DMC gap results} VMC is considerably cheaper than DMC, typically by a factor of at least ten. VMC can therefore be used to study larger systems than DMC\@. However, whereas fixed-node DMC total energies and band gaps are independent of the Jastrow factor in the limit of zero time step and large population, VMC energies are determined by the Jastrow factor. The use of a stochastically optimized Jastrow factor is therefore an additional source of noise in the VMC gaps. VMC and DMC results obtained with different levels of trial wave function for a $3 \times 3$ supercell are presented in Table \ref{table:hfvmc_vmc_dmc_gaps}. The trial wave function in the ``Hartree-Fock'' VMC calculations was simply a Slater determinant of DFT-PBE orbitals, with no description of correlation. The fractions of correlation energy retrieved at the SJ-VMC and SJB-VMC levels are clearly different in the ground state and excited states. However, we find the VMC gaps to be larger than the DMC gaps, and the SJ gaps to be larger than the SJB ones, as expected. We do not believe our VMC results can be used to aid the extrapolation of our DMC gaps to the thermodynamic limit of infinite system size. \subsubsection{Singlet and triplet excitations\label{sec:spin_states}} We have calculated the SJ-DMC energy difference between the singlet and triplet excitonic states in a $3 \times 3$ supercell of hBN\@. 
We used single-determinant trial wave functions in which an electron was promoted with and without a spin-flip to describe the triplet and singlet states, respectively \cite{Marsusi_2011}. The orbitals were generated in a non-spin-polarized ground-state DFT-PBE calculation. The singlet excitonic state for a promotion from $K_{\rm v} \rightarrow K_{\rm c}$ is 0.12(2) eV lower in energy than the triplet state. For a promotion from $K_{\rm v} \rightarrow \Gamma_{\rm c}$, the energy difference between the singlet and triplet excitonic states is statistically insignificant (smaller than the error bar of 0.02 eV)\@. Because they were obtained in a small ($3\times 3$) supercell, these estimates of the singlet-triplet splitting should be regarded as only qualitatively accurate. In summary, the energy difference between the singlet and triplet excitonic states in hBN appears to be small, especially when the electron and hole have different wave vectors. Apart from these tests, all the excited-state calculations reported in this paper used singlet excitations. \subsubsection{Multideterminant wave functions\label{sec:multidets}} We have considered three different ways of describing the wave function of a singlet excitonic state: (i) simply promoting a single electron from one state to another without changing its spin in a single-determinant wave function; (ii) constructing a two-determinant wave function in which spin-up and spin-down electrons are promoted in the first and second determinants, respectively; and (iii) constructing a multideterminant wave function consisting of a linear combination of all the degenerate excited-state determinants (i.e., accounting for the degeneracy of $K$ and $K^\prime$ as well as spin-degeneracy). In case (iii), we have 4- and 8-determinant wave functions for excitations from $K_{\rm v} \rightarrow \Gamma_{\rm c}$ and from $K_{\rm v} \rightarrow K_{\rm c}$, respectively.
We optimized the determinant expansion coefficients for these multideterminant wave functions in the presence of a fixed Jastrow factor that was optimized in the ground state, but we did not find a statistically significant reduction in the VMC energy. Any reduction in the DMC energy would be even smaller and hence we conclude that, to the level of precision at which we are working, there is no advantage to using such a multideterminant wave function. This does not imply that larger multideterminant wave functions would not significantly reduce fixed-node errors. \subsubsection{Finite-size effects\label{sec:finite_size}} We have performed QMC calculations for monolayer hBN in a range of hexagonal supercells, from $2\times 2$ to $9\times 9$ primitive cells. Choosing the monolayer supercell to be hexagonal maximizes the distance between nearest periodic images of particles, and is therefore expected to minimize finite-size effects. For bulk hBN the choice of supercell is more complicated. In general a supercell is defined by an integer ``supercell matrix'' $S$ such that \begin{equation} {\bf a}^{\rm S}_{i} = \sum_{k}S_{ik}{\bf a}^{\rm P}_{k}, \end{equation} where ${\bf a}^{\rm P}_{k}$ is the $k^{\text{th}}$ primitive lattice vector and ${\bf a}^{\rm S}_{i}$ is the $i^{\text{th}}$ supercell lattice vector. The supercell defined by $S$ contains $N_{\rm P}=|\det(S)|$ primitive cells. For a given number of primitive cells $N_{\rm P}$ we may therefore search over integer supercell matrices $S$ such that $|\det(S)|=N_{\rm P}$ to find the supercell matrix for which the nearest-image distance is maximized \cite{LloydWilliams_2015}. In general the optimal supercell matrix is nondiagonal. Our bulk hBN supercells contained $N_{\rm P}=9$, $18$, $27$, and $36$ primitive cells. Unlike the monolayer, in bulk hBN we are unable to choose a large set of geometrically similar supercells that both maximize the distance between periodic images and have a tractable number of particles. 
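The search over supercell matrices described above is easy to prototype in two dimensions; a brute-force illustration for the monolayer's hexagonal lattice (Ref.\ \onlinecite{LloydWilliams_2015} describes the method actually used):

```python
import itertools

import numpy as np

def nearest_image_distance(sup_vecs):
    """Shortest nonzero supercell lattice vector (the nearest-image distance)."""
    best = np.inf
    for i, j in itertools.product(range(-3, 4), repeat=2):
        if (i, j) == (0, 0):
            continue
        best = min(best, float(np.linalg.norm(i * sup_vecs[0] + j * sup_vecs[1])))
    return best

def best_supercell(prim_vecs, n_p, s_max=4):
    """Brute-force search over integer 2x2 supercell matrices S with
    |det S| = n_p for the one that maximizes the nearest-image distance."""
    best_s, best_d = None, -1.0
    for entries in itertools.product(range(-s_max, s_max + 1), repeat=4):
        s = np.array(entries).reshape(2, 2)
        if abs(round(float(np.linalg.det(s)))) != n_p:
            continue
        d = nearest_image_distance(s @ prim_vecs)
        if d > best_d:
            best_s, best_d = s, d
    return best_s, best_d

# Hexagonal primitive lattice vectors in units of the lattice constant:
prim = np.array([[1.0, 0.0], [-0.5, np.sqrt(3.0) / 2.0]])
s_opt, d_opt = best_supercell(prim, 9)
print(round(d_opt, 6))  # 3.0: a 3x3 (or equivalent) supercell is optimal
```

For $N_{\rm P}=9$ on a hexagonal lattice the optimal supercell is itself hexagonal, consistent with the $3\times 3$ cells used in the monolayer calculations.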
Different choices of supercell Bloch vector \cite{Rajagopal_1994,Rajagopal_1995} allow one to obtain different points on the electronic band structure in a finite supercell \cite{Mitas_1994,Williamson_1998,Towler_2000}. In the monolayer, if one uses a $3m \times 3n$ supercell, where $m$ and $n$ are natural numbers, with the supercell Bloch vector being ${\bf k}_{\rm s}={\bf 0}$, then the set of orbitals in the trial wave function includes the bands at both the $\Gamma$ and the $K$ points of the primitive-cell Brillouin zone. In this case one can make additions or subtractions at $\Gamma$ or $K$ and promote electrons either from $K_{\rm v} \rightarrow \Gamma_{\rm c}$ or from $K_{\rm v} \rightarrow K_{\rm c}$. In a general supercell, however, one can choose the supercell Bloch vector ${\bf k}_{\rm s}$ so that the orbitals at $\Gamma$ are present in the Slater wave function, or so that the orbitals at $K$ are present, but not both at the same time. The quasiparticle gap from $K_{\rm v} \rightarrow \Gamma_{\rm c}$ can always be calculated for a given supercell by determining the CBM and VBM using two different values of ${\bf k}_{\rm s}$. Similar comments apply in the bulk case. With the optimal (nondiagonal) supercell matrices for $N_{\rm P}=\det{(S)}=18$ or $36$ primitive cells we are unable to include $\Gamma$ and $K$ simultaneously in the grid of ${\bf k}$ vectors. This prevents calculation of the $\Gamma_{\rm v} \rightarrow K_{\rm c}$ excitonic gap in these supercells. We have instead calculated the bulk $\Gamma_{\rm v} \rightarrow K_{\rm c}$ excitonic gaps in supercells defined by diagonal supercell matrices $S(N_{\rm P}=18)=\text{diag}(3,3,2)$ and $S(N_{\rm P}=36)=\text{diag}(3,3,4)$. Now let us consider ``long-range'' finite-size errors in the energy gaps in periodic supercells. 
Adding a single electron to or removing a single electron from a periodic simulation supercell results in the creation of an unwanted lattice of quasiparticles at the set of supercell lattice points \cite{Hunt_2018}. The leading-order systematic finite-size error in the quasiparticle bands is therefore $v_{\rm M}/2$, where $v_{\rm M}$ is the screened Madelung constant \cite{Madelung_1918} of the supercell. The situation is qualitatively similar to that encountered in \textit{ab initio} simulations of charged defects \cite{Hine_2009}. Following the notation of Sec.\ \ref{sec:eval_gaps}, a finite-size-corrected expression for an unoccupied energy band is ${\cal E}_i'({\bf k})=[E^+({\bf k},i)-v_{\rm M}/2]-E^{\rm GS}$. In a similar fashion, when one creates a lattice of holes by removing an electron from a periodic supercell, a finite-size-corrected expression for an occupied energy band is ${\cal E}_i'({\bf k})=E^{\rm GS}-[E^-({\bf k},i)-v_{\rm M}/2]$. The finite-size-corrected quasiparticle band gap is therefore \begin{equation} \Delta_{\rm qp}'={\cal E}_{\rm CBM}'-{\cal E}_{\rm VBM}'=E^+_{\rm CBM}+E^-_{\rm VBM}-2E^{\rm GS}-v_{\rm M}. \label{eq:qp_gap_corr} \end{equation} In an hBN monolayer, in-plane screening modifies the form of the Coulomb interaction between charges. The screened interaction is approximately of Keldysh form \cite{Keldysh_1979}. Including the relative permittivity $\epsilon$ of the surrounding medium, the Keldysh interaction in reciprocal space is $v(k)=2\pi/[\epsilon k(1+r_*k)]$, where $r_*$ is the ratio of the in-plane susceptibility of the layer to the permittivity of the surrounding medium. For the monolayer the leading-order finite-size error in the quasiparticle gap is the Madelung constant $v_{\rm M}$ of the supercell \textit{evaluated using the Keldysh interaction} \cite{Hunt_2018}. 
In small supercells, the Keldysh interaction between nearby periodic images varies logarithmically with $r$ and the Madelung constant is almost independent of system size; however, once the supercell size significantly exceeds $r_*$, the Keldysh interaction reduces to the Coulomb $1/r$ form and the Madelung constant falls off as the reciprocal of the linear size of the supercell. The parameter $r_*$ may be estimated using Eq.\ (S.7) in the Supplemental Material of Ref.\ \onlinecite{Ganchev_2015}. For free-standing monolayer hBN, $r_* \approx c(\epsilon_\|-1)/4$, where $c$ is the out-of-plane lattice parameter of bulk hBN, $\epsilon_\|$ is the in-plane component of the high-frequency permittivity tensor, and we have included an extra factor of $1/2$ due to the fact that there are two layers per bulk hBN primitive cell. The lattice parameter is measured to be $c=6.6612$ {\AA} \cite{Lynch_1966}, while DFT-PBE calculations predict that $\epsilon_\| \approx 4.69$, giving $r_* \approx 6.14$ {\AA}. Unfortunately the supercell sizes used in this work are comparable to $r_*$. For bulk hBN the $r_*$ value is smaller by a factor of $\sqrt{\epsilon_\| \epsilon_z}$, where $\epsilon_z$ is the out-of-plane component of the permittivity tensor \cite{Ganchev_2015}. Using the DFT-PBE high-frequency out-of-plane permittivity $\epsilon_z=2.65$ gives $r_*^{\rm bulk}=1.74$ {\AA} for bulk hBN\@. Our bulk hBN supercells are sufficiently large that the interaction between periodic images can be assumed to be of Coulomb $1/r$ form; nevertheless, the strong anisotropy of the dielectric screening must be taken into account in the evaluation of the screened Madelung constant \cite{Murphy_2013} $v_{\rm M}$.
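The numerical estimates of $r_*$ quoted above follow directly from the measured $c$ and the computed permittivities:

```python
import math

c = 6.6612      # bulk hBN out-of-plane lattice parameter (Angstrom), measured
eps_par = 4.69  # DFT-PBE in-plane high-frequency permittivity
eps_z = 2.65    # DFT-PBE out-of-plane high-frequency permittivity

# Free-standing monolayer: r* = c*(eps_par - 1)/4; the factor of 1/2 for the
# two layers per bulk primitive cell is already included in the /4.
r_star = c * (eps_par - 1.0) / 4.0
print(round(r_star, 2))  # 6.14 (Angstrom)

# Bulk: r* is reduced by the factor sqrt(eps_par * eps_z).
r_star_bulk = r_star / math.sqrt(eps_par * eps_z)
print(round(r_star_bulk, 2))  # 1.74 (Angstrom)
```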
The remaining systematic finite-size effects in the quasiparticle gap are primarily due to charge-quadrupole image interactions and fall off as $a_{\rm s}^{-2}$ when $a_{\rm s} \leq r_*$ and as $a_{\rm s}^{-3}$ when $a_{\rm s} \gg r_*$, where $a_{\rm s}$ is the in-plane linear size of the supercell \cite{Hunt_2018}. There are also oscillatory, quasirandom errors with a slowly decaying envelope as a function of system size due to long-range oscillations in the pair-correlation function being forced to be commensurate with the supercell (see Figs.\ \ref{fig:gap_v_NP} and \ref{fig:bulk_fs}). We remove the remaining finite-size errors by extrapolating the Madelung-corrected quasiparticle gaps in supercells of 9 or more primitive cells to infinite system size, assuming the finite-size error decays as $a_{\rm s}^{-2}$ in monolayer hBN and as $a_{\rm s}^{-3}$ in bulk hBN (i.e., as $N_{\rm P}^{-1}$ in both cases). Since the quasirandom finite-size errors dominate the QMC statistical error bars, we do not weight our data by the QMC error bars \cite{Hunt_2018}. Note that the uncorrected quasiparticle gap calculated in a finite supercell may be smaller than the excitonic gap in that supercell due to a negative Madelung constant, as can be seen in Tables \ref{table:dmc_energies} and \ref{table:hfvmc_vmc_dmc_gaps}. This is simply an artifact of the use of periodic boundary conditions and the Ewald interaction, and the effect disappears in the thermodynamic limit of infinite system size, where the excitonic gap must always be less than or equal to the quasiparticle gap due to the attractive interaction between electrons and holes. Finite-size effects in DMC excitonic gaps may arise from the confinement of a neutral exciton in a periodic simulation supercell. 
Once the supercell size significantly exceeds the size of the exciton, the exciton wave function is exponentially localized within the supercell; however, power-law finite-size effects in the exciton binding energy remain due to the difference between the screened Coulomb interaction in a finite, periodic supercell and in an infinite system. The length scale of an exciton under the Keldysh interaction is $r_0=\sqrt{r_*/(2\mu)}$, where $\mu$ is the electron-hole reduced mass \cite{Ganchev_2015,Mostaani_2017}. Using the DFT-HSE06 effective masses in Table \ref{table:dft_eff_masses} together with the $r_*$ value estimated above, we find the sizes of both the $K_{\rm v}\rightarrow\Gamma_{\rm c}$ and $K_{\rm v}\rightarrow K_{\rm c}$ excitons in monolayer hBN to be $r_0 \approx 3$ {\AA}, which is only slightly larger than the lattice constant. Hence all our simulation supercells are large enough to contain the excitons, and so the remaining finite-size effects are due to instantaneous dipole-dipole interactions between identical images, evaluated with the Keldysh interaction in the case of the monolayer. On the other hand, the fact that the exciton radius is comparable with the lattice constant implies that we are at the limit of the validity of the effective-mass model of excitons and that the model may not fully account for finite-size effects in gaps; nevertheless, this model provides us with the best available framework for understanding systematic finite-size effects. In supercells with $a_{\rm s} \leq r_*$, the leading-order systematic finite-size effects in the excitonic gap go as $a_{\rm s}^{-2}$; for supercells with $a_{\rm s} \gg r_*$, the finite-size effects go as $a_{\rm s}^{-3}$. We therefore extrapolate our uncorrected excitonic gaps in the same way that we extrapolate our Madelung-corrected quasiparticle gaps to infinite system size, i.e., assuming the errors go as $N_{\rm P}^{-1}$ for both bulk and monolayer.
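The extrapolation itself is an unweighted least-squares fit that is linear in $N_{\rm P}^{-1}$; a sketch (the gap values below are placeholders, not our data):

```python
import numpy as np

def extrapolate_gap(n_p, gaps):
    """Unweighted least-squares fit of gap = gap_inf + b/N_P; returns gap_inf."""
    x = 1.0 / np.asarray(n_p, dtype=float)
    slope, intercept = np.polyfit(x, np.asarray(gaps, dtype=float), 1)
    return intercept

# Placeholder Madelung-corrected gaps (eV) against supercell size:
n_p = [9, 16, 25, 36, 49, 64, 81]
gaps = [7.90, 8.20, 8.40, 8.50, 8.60, 8.65, 8.70]
print(extrapolate_gap(n_p, gaps))
```

The fit is deliberately unweighted because the quasirandom finite-size noise, not the QMC statistical error, dominates the scatter of the data.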
Again we do not weight our data by the QMC error bars, since the quasirandom finite-size effects dominate the QMC error bars. Our DMC gaps against system size are presented and discussed in Sec.\ \ref{sec:finite_size_results}. \subsection{Vibrational contribution} \label{subsec:vib_method} We calculated the vibrational contribution to the quasiparticle band gap arising from the electron-phonon interaction at temperature $T$ within the Born-Oppenheimer approximation as: \begin{equation} \Delta_{\rm qp}(T)=\frac{1}{\mathcal{Z}}\sum_{\mathbf{s}}\langle\Phi_{\mathbf{s}}(\mathbf{u})|\Delta_{\rm qp}(\mathbf{u})|\Phi_{\mathbf{s}}(\mathbf{u})\rangle e^{-E_{\mathbf{s}}/(k_{\mathrm{B}}T)} \label{eq:gap_temperature} \end{equation} where the harmonic vibrational wave function $|\Phi_{\mathbf{s}}(\mathbf{u})\rangle$ in state $\mathbf{s}$ has energy $E_{\mathbf{s}}$, $\mathbf{u}=\{u_{\nu\mathbf{q}}\}$ is a collective coordinate for all the nuclei written in terms of normal modes of vibration $(\nu,\mathbf{q})$, $\mathcal{Z}=\sum_{\mathbf{s}}e^{-E_{\mathbf{s}}/(k_{\mathrm{B}}T)}$ is the partition function, and $k_{\mathrm{B}}$ is Boltzmann's constant. We evaluated Eq.\ (\ref{eq:gap_temperature}) using two complementary methods recently reviewed in Ref.\ \onlinecite{Monserrat_2018}. The first relies on a stochastic Monte Carlo sampling of the vibrational density over $M$ points: \begin{equation} \Delta_{\rm qp}^{\rm MC}(T)=\frac{1}{M}\sum_{i=1}^M\Delta_{\rm qp}(\mathbf{u}_i), \label{eq:temperature_mc} \end{equation} where configurations $\mathbf{u}_i$ are distributed according to the nuclear density. This approach enables the inclusion of the electron-phonon interaction at all orders at the expense of using large diagonal supercell matrices, and in practice we use thermal lines to accelerate the sampling \cite{Monserrat_2016b}. 
The second approach relies on a second-order expansion of the dependence of $\Delta_{\rm qp}(\mathbf{u})$ on the mode amplitudes $\mathbf{u}$, which leads to a particularly simple \textit{quadratic} approximation: \begin{equation} \Delta_{\rm qp}^{\rm quad}(T)=\Delta_{\rm qp}+\frac{1}{N_{\mathbf{q}}}\sum_{\mathbf{q},\nu}\frac{1}{\omega_{\mathbf{q}\nu}}\frac{\partial^2\Delta_{\rm qp}}{\partial u^2_{\mathbf{q}\nu}}\left[\frac{1}{2}+n_{\mathrm{B}}(\omega_{\mathbf{q}\nu},T)\right], \label{eq:temperature_quadratic} \end{equation} where $n_{\mathrm{B}}(\omega_{\mathbf{q}\nu},T)$ is a Bose-Einstein factor. This expression can be efficiently evaluated using nondiagonal supercell matrices \cite{LloydWilliams_2015} at the expense of neglecting higher-order terms in the electron-phonon interaction. Overall, Eq.\ (\ref{eq:temperature_quadratic}) enables the convergence of the calculations with respect to supercell size (or equivalently $\mathbf{q}$-point grid density), whereas Eq.\ (\ref{eq:temperature_mc}) enables the inclusion of higher-order terms, which have been found to provide important contributions in a range of materials \cite{Monserrat_2015,Saidi_2016}. All our vibrational calculations were performed using the PBE functional, an energy cutoff of $700$ eV, and a $\mathbf{k}$-point spacing of $2\pi\times0.025$~\AA$^{-1}$ to sample the electronic Brillouin zone. The results show slow convergence with respect to the $\mathbf{q}$-point grid size: the vibrational correction to the quasiparticle gap at $300$ K using the expression in Eq.\ (\ref{eq:temperature_quadratic}) is converged to better than $0.05$ eV using a grid of $32\times32$ $\mathbf{q}$-points for the monolayer, and a grid of $16\times16\times16$ $\mathbf{q}$-points for the bulk.
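Equation (\ref{eq:temperature_quadratic}) is straightforward to evaluate once the frequencies $\omega_{\mathbf{q}\nu}$ and gap curvatures $\partial^2\Delta_{\rm qp}/\partial u^2_{\mathbf{q}\nu}$ are tabulated; a schematic implementation with placeholder inputs (not data from our calculations):

```python
import numpy as np

def bose_einstein(omega, t, kb=3.1668e-6):
    """Bose-Einstein occupation n_B(omega, T); kb defaults to Hartree/K."""
    if t == 0.0:
        return 0.0
    return 1.0 / np.expm1(omega / (kb * t))

def quadratic_gap_correction(omegas, d2gap_du2, n_q, t):
    """Vibrational correction to the gap in the quadratic approximation:
    (1/N_q) * sum over modes of (1/omega) * d2gap/du2 * (1/2 + n_B)."""
    total = 0.0
    for omega, d2 in zip(omegas, d2gap_du2):
        total += (d2 / omega) * (0.5 + bose_einstein(omega, t))
    return total / n_q

# Placeholder mode frequencies (Ha) and gap curvatures for a single q-point:
omegas = [0.010, 0.020]
d2 = [-0.002, -0.004]
print(round(quadratic_gap_correction(omegas, d2, 1, 0.0), 6))  # -0.2 (zero-point term)
```

At $T=0$ only the zero-point $1/2$ survives; the Bose-Einstein factor then makes the correction grow in magnitude with temperature.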
We also tested the inclusion of van der Waals dispersion corrections in the bulk calculations using the Tkatchenko-Scheffler scheme \cite{Tkatchenko_2009} but found differences smaller than $0.01$ eV compared to the calculations without dispersion corrections. Using Eq.\ (\ref{eq:temperature_mc}) instead of Eq.\ (\ref{eq:temperature_quadratic}) significantly enhances the vibrational correction to the quasiparticle gap. However, calculations using Eq.\ (\ref{eq:temperature_mc}) are restricted to smaller $\mathbf{q}$-point grid sizes, and therefore our final results were estimated by using the $\mathbf{q}$-point-converged results obtained with Eq.\ (\ref{eq:temperature_quadratic}) and adding a correction equal to $\Delta_{\rm qp}^{\rm MC}(T)-\Delta_{\rm qp}^{\rm quad}(T)$ evaluated at the largest $\mathbf{q}$-point grid size feasible within the Monte Carlo method, which is $8\times8$ for the monolayer and $4\times4\times4$ for the bulk. \section{Results\label{sec:results}} \subsection{Lattice parameter and dynamical stability\label{sec:geom_and_phonons}} The lattice parameters obtained in DFT-LDA, DFT-PBE, and DFT-HSE06 calculations are $a=2.491$, $2.512$, and $2.45$ {\AA}, respectively, which may be compared with the bulk lattice parameter $a=2.5040$ {\AA} \cite{Lynch_1966} and the lattice parameter $2.5$ {\AA} measured in a thin film of hBN \cite{Shi_2010}. Our DFT-PBE lattice parameter is in good agreement with a previously published result, $a=2.51$ {\AA} \cite{Ribeiro_2011}. We have used the DFT-PBE lattice parameter $a=2.512$ {\AA} in all our QMC calculations. The partial charge of each boron atom is 0.83 according to Mulliken population analysis of the DFT orbitals \cite{Mulliken_1955} and 0.21 a.u.\ according to Hirshfeld analysis of the charge density \cite{Hirshfeld_1977}. The partial charges predicted by the LDA and PBE functionals agree. The DFT-LDA and DFT-PBE phonon dispersion curves of hBN are shown in Fig.\ \ref{fig:bn_dft_phonons}.
The calculations appear to predict a small region of dynamical instability in the flexural acoustic branch about the $\Gamma$ point. Such regions of instability around $\Gamma$ are a common feature in first-principles lattice-dynamics calculations for 2D materials, including graphene, molybdenum disulfide, and indium and gallium chalcogenides \cite{Zolyomi_2014}. We observe that (i) the region of instability occurs in both finite-displacement (supercell) calculations and in density functional perturbation theory calculations; (ii) the region of instability depends sensitively on every simulation parameter (basis set, ${\bf k}$-point sampling, supercell size in finite-displacement calculations, exchange-correlation functional, pseudopotential, and artificial periodicity); (iii) the size of the instability is the same as the amount by which the acoustic branches miss zero at $\Gamma$ if Newton's third law is not imposed on the force constants; and (iv) the region of instability remains even if the layer is put under tension by increasing the lattice parameter slightly. To minimize the effects of longitudinal/transverse optic-mode splitting in our three-dimensionally periodic calculations, we choose the $z$-component of the wave vector to be $\pi/L$, where $L$ is the artificial periodicity \cite{Wirtz_2003}. Our results are in good agreement with the phonon dispersion curves obtained by Wirtz \textit{et al.}\ \cite{Wirtz_2003}. An analysis of the Raman activity of phonon modes is given in Ref.\ \onlinecite{Wirtz_2005}. \begin{figure}[!htbp] \begin{center} \includegraphics[clip,width=0.45\textwidth]{bn_dfpt_phonons.eps} \caption{(Color online) DFT-LDA and DFT-PBE phonon dispersion curves for monolayer hBN\@. 
\label{fig:bn_dft_phonons}} \end{center} \end{figure} \subsection{DFT electronic band structure and effective masses\label{sec:dft_bands_masses}} The DFT-LDA, DFT-PBE, and DFT-HSE06 band structures of monolayer and bulk hBN are shown in Figs.\ \ref{fig:bn_dft_bs} and \ref{fig:bulk_bn_dft_bs}, respectively. In the case of the monolayer, we fitted \begin{eqnarray} {\cal E}_{\rm c,v}({\bf q}) & = & {\cal E}_{K_{\rm c,v}} \pm \frac{q^2}{2m_{K_{\rm c,v}}^\ast}+A_{\rm c,v}q^4+B_{\rm c,v}q^6 \nonumber \\ & & {}+C_{\rm c,v}q^6\cos(6\theta) +D_{\rm c,v}q^3\cos(3\theta) \nonumber \\ & & {}+E_{\rm c,v}q^5\cos(3\theta), \label{eq:band_K} \end{eqnarray} where ${\cal E}_{K_{\rm c,v}}$, $m_{K_{\rm c,v}}^\ast$, $A_{\rm c,v}$, $B_{\rm c,v}$, $C_{\rm c,v}$, $D_{\rm c,v}$, and $E_{\rm c,v}$ are fitting parameters, to the conduction and valence bands within a circle of radius 6\% of the $\Gamma$--$M$ distance around the $K$ point. ${\bf q}$ is the wavevector relative to the $K$ point, and $\theta$ is the polar angle of ${\bf q}$. The second term is positive for the conduction band and negative for the valence band, so that $m_{K_{\rm c}}^\ast$ and $m_{K_{\rm v}}^\ast$ are the electron and hole effective masses. The root-mean-square (RMS) residual over the fitting area is less than 0.2 meV in each case. We fitted \begin{equation} {\cal E}_{\rm c}({\bf k}) = {\cal E}_{\Gamma_{\rm c}} + \frac{k^2}{2m_{\Gamma_{\rm c}}^\ast}+A^\prime k^4+B^\prime k^6+C^\prime k^6\cos(6\theta), \label{eq:band_Gamma} \end{equation} where $k=|{\bf k}|$, $\theta$ is the polar angle of ${\bf k}$, and ${\cal E}_{\Gamma_{\rm c}}$, $m_{\Gamma_{\rm c}}^\ast$, $A^\prime$, $B^\prime$, and $C^\prime$ are fitting parameters, to the conduction band within a circle of radius 40\% of the $\Gamma$--$M$ distance about $\Gamma$. The RMS residual over this area is less than 0.2 meV in each case. It is clearly much easier to represent the band over a large area around $\Gamma$ than around $K$. 
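The fits of Eqs.\ (\ref{eq:band_K}) and (\ref{eq:band_Gamma}) are linear least-squares problems in the expansion coefficients, with $1/m^\ast$ among them; a sketch of the $K$-point fit applied to a synthetic conduction band (all inputs are illustrative):

```python
import numpy as np

def fit_band_K(qx, qy, energies):
    """Linear least-squares fit of the K-point band model; the coefficient
    multiplying q^2/2 is 1/m* (sign chosen for the conduction band)."""
    q = np.hypot(qx, qy)
    theta = np.arctan2(qy, qx)
    basis = np.column_stack([
        np.ones_like(q),            # band-edge energy
        q**2 / 2.0,                 # 1/m*
        q**4,                       # A
        q**6,                       # B
        q**6 * np.cos(6 * theta),   # C
        q**3 * np.cos(3 * theta),   # D
        q**5 * np.cos(3 * theta),   # E
    ])
    coeffs, *_ = np.linalg.lstsq(basis, energies, rcond=None)
    return coeffs

# Synthetic band with m* = 0.9 a.u. and a small trigonal-warping term:
rng = np.random.default_rng(0)
qx = rng.uniform(-0.05, 0.05, 200)
qy = rng.uniform(-0.05, 0.05, 200)
e = (1.0 + (qx**2 + qy**2) / (2 * 0.9)
     + 0.3 * np.hypot(qx, qy)**3 * np.cos(3 * np.arctan2(qy, qx)))
m_star = 1.0 / fit_band_K(qx, qy, e)[1]
print(round(m_star, 3))  # 0.9
```

With noiseless synthetic data the fit recovers the input effective mass exactly; for the DFT bands the small RMS residual quoted above plays the role of the quality check.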
The fitted effective masses in Eqs.\ (\ref{eq:band_K}) and (\ref{eq:band_Gamma}) are reported in Table \ref{table:dft_eff_masses}. It was verified that the effective masses are unchanged to the reported precision when the radius of the circle used for the fit is reduced. \begin{figure}[!htbp] \begin{center} \includegraphics[clip,width=0.45\textwidth]{bn_dft_bs.eps} \caption{(Color online) DFT-LDA, DFT-PBE, and DFT-HSE06 electronic band-structure plots for monolayer hBN\@. The zero of energy is set to the Fermi energy. The inset shows the energy range around the CBM in greater detail. \label{fig:bn_dft_bs}} \end{center} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[clip,width=0.45\textwidth]{lda_pbe.eps} \caption{(Color online) DFT-LDA and DFT-PBE electronic band-structure plots for bulk hBN\@. The zero of energy is set to the Fermi energy. \label{fig:bulk_bn_dft_bs}} \end{center} \end{figure} \begin{table}[!htbp] \begin{center} \caption{Effective masses $m^\ast$ for the $\Gamma_{\rm c}$, $K_{\rm c}$, and $K_{\rm v}$ bands from DFT-LDA, DFT-PBE, and DFT-HSE06 calculations.\label{table:dft_eff_masses}} \begin{tabular}{lccc} \hline \hline & \multicolumn{3}{c}{$m^\ast$ (a.u.)} \\ \raisebox{1.5ex}[0pt]{Functional} & $\Gamma_{\rm c}$ & $K_{\rm c}$ & $K_{\rm v}$ \\ \hline LDA & $0.96$ & $0.89$ & $0.61$ \\ PBE & $0.95$ & $0.90$ & $0.63$ \\ HSE06 & $0.98$ & $1.07$ & $0.63$ \\ \hline \hline \end{tabular} \end{center} \end{table} The DFT charge density of the conduction-band minimum at $\Gamma_{\rm c}$ consists of two delocalized, free-electron-like regions on either side of the hBN layer, whereas the charge density for the conduction-band minimum at $K_{\rm c}$ is localized on the boron atoms: see Fig.\ \ref{fig:hse_cden}. This is consistent with the observation that the conduction band at $\Gamma$ is nearly parabolic with an effective mass close to the bare electron mass. 
The orbital charge densities are qualitatively similar in the monolayer and in the bulk. \begin{figure}[!htbp] \begin{center} \includegraphics[clip,width=0.45\textwidth]{hse_cden_hole_K.pdf} \\[1em] \includegraphics[clip,width=0.45\textwidth]{hse_cden_elec_Gamma.pdf} \\[1em] \includegraphics[clip,width=0.45\textwidth]{hse_cden_elec_K.pdf} \caption{(Color online) DFT-HSE06 charge densities of (a) the valence-band maximum at $K_{\rm v}$, (b) the conduction-band minimum at $\Gamma_{\rm c}$, and (c) the conduction-band minimum at $K_{\rm c}$ for monolayer hBN\@. The green spheres show the boron atoms, while the white spheres are nitrogen atoms. The charge densities were obtained using an artificial periodicity of 21.2 {\AA} in the out-of-plane direction, a $15 \times 15$ Monkhorst-Pack ${\bf k}$-point grid, DFT norm-conserving pseudopotentials, and a plane-wave cutoff energy of 680 eV\@. \label{fig:hse_cden}} \end{center} \end{figure} \subsection{Energy-gap results\label{sec:dmc_gap_results}} \subsubsection{Finite-size effects in the DMC band gap\label{sec:finite_size_results}} The SJ-DMC quasiparticle and excitonic band gaps (both $K_{\rm v} \rightarrow K_{\rm c}$ and $K_{\rm v} \rightarrow \Gamma_{\rm c}$) are plotted against system size in Fig.\ \ref{fig:gap_v_NP}. The quasiparticle gaps include the correction shown in Eq.\ (\ref{eq:qp_gap_corr}). Systematic finite-size effects in the $K_{\rm v} \rightarrow \Gamma_{\rm c}$ and $K_{\rm v} \rightarrow K_{\rm c}$ excitonic gaps are very much smaller than systematic finite-size errors in the uncorrected quasiparticle gaps. On the other hand the quasirandom finite-size noise in both types of gap has an amplitude of about 0.5 eV over the range of supercells studied. 
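Extrapolating a gap against $N_{\rm P}^{-1}$ to infinite system size, as done for data of the kind shown in Fig.~\ref{fig:gap_v_NP}, amounts to a linear fit; a minimal sketch (function name ours):

```python
import numpy as np

def extrapolate_gap(N_P, gaps):
    """Fit gap(N_P) = gap_inf + b / N_P and return the intercept
    gap_inf, i.e. the estimate of the gap in the infinite-system
    limit."""
    x = 1.0 / np.asarray(N_P, dtype=float)
    slope, gap_inf = np.polyfit(x, np.asarray(gaps, dtype=float), 1)
    return gap_inf
```

In practice the quasirandom finite-size noise discussed above dominates the residuals of such a fit and sets the quoted error bars.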
\begin{figure}[!htbp] \begin{center} \includegraphics[clip,width=0.45\textwidth]{bn_gap_v_NP_DMC_CPP.eps} \caption{(Color online) SJ-DMC quasiparticle (QP) and excitonic gaps of monolayer hBN against $N_{\rm P}^{-1}$, where $N_{\rm P}$ is the number of primitive cells in the supercell. The quasiparticle gaps include the Madelung correction given in Eq.\ (\ref{eq:qp_gap_corr}). \label{fig:gap_v_NP}} \end{center} \end{figure} \subsubsection{Nature and size of the gap in the thermodynamic limit} Our results for the electronic band gaps are given in Table \ref{table:gap_results}. The error bars on our QMC gaps are determined by the quasirandom finite-size noise discussed in Secs.\ \ref{sec:finite_size} and \ref{sec:finite_size_results}. The SJ-DMC quasiparticle gap of monolayer hBN is indirect ($K_{\rm v} \rightarrow \Gamma_{\rm c}$) and is of magnitude 8.8(3) eV, which is considerably enhanced with respect to the gap in the bulk, and is also significantly higher than the gap predicted by our $GW_0$ calculations ($7.72$ eV\@). \begin{table*}[!htbp] \begin{center} \caption{Static-nucleus quasiparticle and excitonic gaps for monolayer hBN, calculated by different methods. Our DFT calculations indicate that vibrational effects lead to a renormalization of the static-nucleus gaps by $-0.73$ eV at 300 K\@. 
\label{table:gap_results}} \begin{tabular}{lr@{}lr@{}lr@{}lr@{}lr@{}lr@{}lr@{}l} \hline \hline & \multicolumn{6}{c}{Quasiparticle gap $\Delta_{\rm qp}$ (eV)} & \multicolumn{4}{c}{Ex.\ gap $\Delta_{\rm ex}$ (eV)} & \multicolumn{4}{c}{Ex.\ bind.\ $\Delta_{\rm qp}-\Delta_{\rm ex}$ (eV)} \\ \raisebox{1.5ex}[0pt]{Method} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow \Gamma_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow K_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow M_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow \Gamma_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v} \rightarrow K_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow \Gamma_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v} \rightarrow K_{\rm c}$} \\ \hline DFT-LDA &~~$4$&$.79$ &~~~~~$4$&$.60$ &~~~~$4$&$.68$ & & & & & & & & \\ DFT-PBE & $4$&$.69$ & $4$&$.67$ & $4$&$.79$ & & & & & & & & \\ DFT-HSE06 & $5$&$.65$ & $6$&$.31$ & $6$&$.31$ & & & & & & & & \\ $G_0W_0$(-BSE) & $7$&$.43$ & $7$&$.90$ & $8$&$.00$ & & & $5$&$.81$ & & & $2$&$.09$ \\ $GW_0$(-BSE) \cite{Wirtz_2006} & & & $8$&$.2$ & & & & & ~~~~$6$&$.1$ & & & ~~~$2$&$.1$ \\ $GW_0$(-BSE) & $7$&$.72$ & $8$&$.18$ & $8$&$.28$ & & & $6$&$.10$ & & & $2$&$.08$ \\ SJ-DMC & $8$&$.8(3)$ & $10$&$.4(3)$ & & & $6$&$.9(3)$ & ~~~$8$&$.6(2)$ & ~~~$1$&$.9(4)$ & ~~~~~~$1$&$.8(4)$ \\ \hline \hline \end{tabular} \end{center} \end{table*} In Fig.\ \ref{fig:GW-BSE} we compare the electronic band structure predicted by the two levels of $GW$ theory and DFT-PBE calculations (top panel), and we plot the $GW_0$-BSE absorption spectrum (bottom panel). The exciton binding energy is extracted by comparing the BSE optical absorption spectrum with its random-phase-approximation counterpart, in which electron-hole interactions are neglected. Our single-shot $G_0W_0$ quasiparticle gaps are significantly smaller than the SJ-DMC gaps, by about 1.4--2.5 eV\@. 
The partially self-consistent $GW_0$ quasiparticle gaps are somewhat larger, but are still 1.1--2.2 eV smaller than the SJ-DMC quasiparticle gaps. The exciton binding energies obtained using first-principles $GW_0$-BSE and SJ-DMC calculations are in reasonable agreement with the exciton binding energies of $2.38$ eV ($\Gamma_{\rm v}\rightarrow K_{\rm c}$) and $2.41$ eV ($K_{\rm v} \rightarrow K_{\rm c}$) obtained using an effective-mass model \cite{Mostaani_2017} of an electron and a hole interacting via the Keldysh interaction with the effective masses in Table \ref{table:dft_eff_masses} and the $r_*$ parameter estimated in Sec.\ \ref{sec:finite_size}. \begin{figure}[!htbp] \begin{center} \includegraphics[clip,width=0.45\textwidth]{GWvsPBE_BN_1L.eps} \\[1em] \includegraphics[clip,width=0.45\textwidth]{BSE_BN_1L.eps} \caption{(Color online) Electronic band structure of monolayer hBN, comparing DFT-PBE with $GW$ theory at the single-shot ($G_0W_0$) and partially self-consistent ($GW_0$) level (top panel). $GW_0$-BSE optical absorption spectrum of monolayer hBN for in- and out-of-plane polarization (bottom panel). \label{fig:GW-BSE}} \end{center} \end{figure} DFT-LDA and DFT-PBE band-structure calculations are qualitatively incorrect for monolayer hBN\@: they predict the gap to be direct ($K_{\rm v} \rightarrow K_{\rm c}$). DFT-HSE06 and $GW$ calculations show that the conduction-band energies at $K_{\rm c}$ and $M_{\rm c}$ are similar, but that the CBM lies at $\Gamma_{\rm c}$, in agreement with SJ-DMC\@. We find the gap of monolayer hBN to be indirect, with the CBM lying at the $\Gamma_{\rm c}$ point, although recent experiments indicate a direct gap at the ${\rm K}$ point of the Brillouin zone \cite{Elias_2019}. Part of the reason for the discrepancy is that those experiments studied hBN on a graphite substrate; however, the delocalized nature of the (nearly free) CBM state at $\Gamma_{\rm c}$ may also have consequences for optical absorption experiments. 
Electrons with small in-plane momentum experience the hBN monolayer as an attractive $\delta$-function-like potential, always supporting one bound state. This weakly bound state is potentially sensitive to perturbations caused by substrates or other aspects of the material environment. We have investigated the behavior of the conduction band at $\Gamma_{\rm c}$ in bulk hBN as the out-of-plane lattice parameter $c$ is increased, describing the crossover from bulk to isolated monolayer. In Fig.\ \ref{fig:crossover_gamma_1}, we plot the normalized DFT-PBE charge density of the state at $\Gamma_{\rm c}$ along a line through the unit cell, moving through a boron atom at $z/c=0.25$ and through a nitrogen atom at $z/c=0.75$. In Fig.\ \ref{fig:crossover_gamma_2} we plot the DFT-PBE band structure along the $\Gamma \rightarrow {\rm K}$ line. While all other states are well-converged with respect to $c$, including the state at ${\rm K}_{\rm c}$, the state at $\Gamma_{\rm c}$ remains relatively sensitive to the particular choice of $c$. In the inset to Fig.\ \ref{fig:crossover_gamma_2} the two lowest-lying conduction states have been retained for clarity, and this sensitivity is made very clear. The expected trend in the energy of the two near-degenerate conduction states originating from each monolayer is observed, and as $c$ increases, the energy splitting of these two states reduces. \begin{figure*}[!htbp] \begin{center} \includegraphics[width=0.47\textwidth,clip]{all_densities.eps} ~~ \includegraphics[width=0.47\textwidth,clip]{all_densities_abs.eps} \caption{(Color online) DFT-PBE charge density of the state at $\Gamma_{\rm c}$ as a function of lattice parameter $c$ for bulk hBN\@. $c=12.5878$ a.u.\ is the experimental lattice parameter. Panels (a) and (b) show the density against fractional and absolute $z$ coordinates, respectively. 
The charge density is plotted along a straight line in the $z$ direction, passing through a boron atom at $z/c=0.25$ and a nitrogen atom at $z/c=0.75$. At large $c$ the CBM at $\Gamma_{\rm c}$ is an arbitrary linear combination of the degenerate monolayer CBMs. \label{fig:crossover_gamma_1}} \end{center} \end{figure*} \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.45\textwidth,clip]{bulk_large_c_pbe_disp_curves.eps} \caption{(Color online) DFT-PBE bulk hBN band structure at three large values of the lattice parameter $c$. The inset to (b) displays a close-up of the two near-degenerate states at $\Gamma_{\rm c}$ \label{fig:crossover_gamma_2}} \end{center} \end{figure} \subsubsection{Vibrational renormalization of the band structure} Using a combination of the quadratic and stochastic approaches as described in Sec.\ \ref{subsec:vib_method}, we obtain a vibrational renormalization of the minimum band gap $K_{\mathrm{v}}\to\Gamma_{\mathrm{c}}$ of monolayer hBN of $-0.56$ eV at $0$ K\@. This zero temperature correction arises purely from quantum zero-point motion, which has a strong effect in a system like hBN containing light elements, and is similar in size to that calculated for diamond \cite{Giustino_2010,Antonius_2014,Monserrat_2016}. Thermal motion further renormalizes the band gap, resulting in a vibrational correction of $-0.73$ eV at $300$ K\@. Our results for the $K_{\mathrm{v}}\to K_{\mathrm{c}}$ gap show a zero-point renormalization of the band gap of $-0.54$ eV, which increases to $-0.73$ eV at $300$ K\@. The similar corrections for the $K_{\mathrm{v}}\to\Gamma_{\mathrm{c}}$ and $K_{\mathrm{v}}\to K_{\mathrm{c}}$ gaps suggest that vibrational corrections to the gap are largely uniform across the Brillouin zone. \subsubsection{Bulk hBN} As a test of the accuracy of our methods, we have calculated the quasiparticle and excitonic gaps of bulk hBN between various high-symmetry points in the Brillouin zone with the QMC and $GW$ methods. 
Our QMC calculations are identical to those performed for the monolayer, save for the use of the experimental geometry (lattice parameters $a=2.504$ {\AA} and $c=6.6612$ {\AA}) \cite{Lynch_1966}, and the use of the ``T-move'' scheme, which reduces pseudopotential locality approximation errors \cite{Casula_2006,Casula_2010,Drummond_2016}. \begin{figure}[!htbp] \begin{center} \includegraphics[clip,width=0.45\textwidth]{bulk_gaps.eps} \caption{(Color online) SJ-DMC quasiparticle gaps $\Delta_{\rm qp}$ and excitonic gaps $\Delta_{\rm ex}$ of bulk hBN against $1/N_{\rm P}$, where $N_{\rm P}$ is the number of primitive cells in the supercell. The quasiparticle gaps include the Madelung correction given in Eq.\ (\ref{eq:qp_gap_corr}). The statistical error bars show the random error in the SJ-DMC gap in a particular supercell; the noise due to quasirandom finite-size effects clearly exceeds the noise due to the Monte Carlo calculation.} \label{fig:bulk_fs} \end{center} \end{figure} Our QMC results are given in Table \ref{table:bulk_gap_results}, with error bars determined as discussed in Sec.\ \ref{sec:finite_size}. Our raw gap data are plotted against system size in Fig.\ \ref{fig:bulk_fs}. We find that quasirandom finite-size effects are much more prominent in the bulk than in the monolayer. This could be partially due to the lack of geometrical similarity of the supercells studied, leading to nonsystematic behavior in the charge-quadrupole finite-size effect. Our $GW$ results for bulk hBN are also shown in Table \ref{table:bulk_gap_results}. Here we find that the quasiparticle gaps evaluated with SJ-DMC are somewhat larger than those predicted by $GW$ calculations, just as they are in the monolayer. The SJ-DMC $K_{\rm v} \rightarrow K_{\rm c}$ exciton binding energy of bulk hBN, which is corrected by the subtraction of the screened Madelung constant and then extrapolated against $N^{-1}_{\rm P}$ to infinite system size \cite{Hunt_2018}, is $0.8(1)$ eV\@. 
This is consistent with the range of $GW$-BSE values, and is significantly smaller than the monolayer exciton binding energy, as one would expect. The $K_{\rm v} \rightarrow \Gamma_{\rm c}$ exciton binding is $0.3(5)$ eV, which is smaller than the statistical error bars. The SJ-DMC $K_{\rm v} \rightarrow M_{\rm c}$ quasiparticle gap is 7.96(9) eV\@. The VBM in bulk hBN is near the $K$ point, while the CBM is at or near the $M$ point \cite{Cassabois_2016}. Allowing for our calculated zero-temperature vibrational correction in bulk hBN of $-0.35$ eV (which increases to $-0.40$ eV at $300$ K), the SJ-DMC quasiparticle gap appears to overestimate the experimental gap of around 6 eV significantly. As a probe of this discrepancy, we have considered (in the $N_{\rm P}=9$ supercell) the effects of a backflow transformation of the many-electron wave function. We have also investigated our use of high-symmetry points ($K$ and $M$) in the Brillouin zone rather than the true positions of the VBM and CBM at the DFT-HSE06 level of theory. We find that backflow lowers the DMC quasiparticle ($K_{\rm v} \rightarrow$~CBM) gap of bulk hBN in the $N_{\rm P}=9$ supercell by 0.17(5) eV\@. By considering the exact VBM and CBM positions, we find a further energy lowering of 0.02(6) eV, which is not statistically significant. Further, we have also considered explicit re-optimization of backflow functions in anionic and cationic states for the VBM~$\rightarrow$~CBM quasiparticle gap. This has recently been shown to lead to significant further lowering of SJB-DMC quasiparticle energy gaps \cite{Hunt_2018}; however, in this case we find that re-optimization of the backflow functions by minimizing the VMC energy \textit{raises} the SJB-DMC gap by 0.08(3) eV as is also found in the monolayer. Near-degeneracy of the bands at the $M$ point is a possible cause of both the unusual behavior of the DMC energy in the presence of backflow and the overestimate of the gap. 
Near-degeneracy can lead to multireference character and hence significant fixed-node errors with a single-determinant wave function. \begin{table*}[!htbp] \begin{center} \caption{Static-nucleus quasiparticle and excitonic gaps for bulk hBN, determined by different methods, compared with experimental results. Our DFT vibrational-renormalization calculations indicate that the static-nucleus gaps should be renormalized by $-0.40$ eV at 300 K\@. Where references are not given, the results are from the present work. An asterisk (*) denotes the SJ-DMC energy gap from $K_{\rm v} \rightarrow M_{\rm c}$.} \begin{tabular}{lr@{}lr@{}lr@{}lr@{}lr@{}lr@{}lr@{}lr@{}lr@{}lr@{}l} \hline \hline & \multicolumn{12}{c}{Quasiparticle gap $\Delta_{\rm qp}$ (eV)} & \multicolumn{8}{c}{Excitonic gap $\Delta_{\rm ex}$ (eV)} \\ \raisebox{1.5ex}[0pt]{Method} & \multicolumn{2}{c}{$\Gamma_{\rm v}\rightarrow \Gamma_{\rm c}$} & \multicolumn{2}{c}{$M_{\rm v}\rightarrow M_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow \Gamma_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow K_{\rm c}$} & \multicolumn{2}{c}{$M_{\rm v}\rightarrow \Gamma_{\rm c}$} & \multicolumn{2}{c}{VBM${}\rightarrow{}$CBM} & \multicolumn{2}{c}{$\Gamma_{\rm v}\rightarrow \Gamma_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v} \rightarrow \Gamma_{\rm c}$} & \multicolumn{2}{c}{$K_{\rm v}\rightarrow K_{\rm c}$} & \multicolumn{2}{c}{VBM${}\rightarrow{}$CBM} \\ \hline DFT-LDA &~~$6$&$.09$ & $4$&$.54$ & ~~~~$4$&$.93$ &~~$4$&$.84$ & $5$&$.28$ & $4$&$.05$ & & & & & & & \\ DFT-PBE &$6$&$.65$ & $4$&$.76$ & $5$&$.42$ & $4$&$.94$ & $5$&$.78$ & $4$&$.28$ & & & & & & & & \\ DFT-HSE06 &$8$&$.01$ & $6$&$.09$ & $6$&$.54$ & $6$&$.33$ & $6$&$.95$ & $5$&$.55$ & & & & & & & & \\ $G_0W_0$ & $7$&$.3$ & ~~~~~$7$&$.0$ & & & $9$&$.7$ & ~~~~$6$&$.1$ & ~~~~~~~$5$&$.4$ & & & & & & & & \\ $GW_0$ & $7$&$.3$ & $7$&$.1$ & & & $9$&$.9$ & $6$&$.1$ & $5$&$.5$ & & & & & & & & \\ $GW$ \cite{Arnaud_2006} & $8$&$.4$ & $6$&$.5$ & $6$&$.9$ & $6$&$.9$ & $7$&$.3$ & 
$5$&$.95$ & & & & & & & & \\ SJ-DMC & $10$&$.1(2)$ & & & $8$&$.5(2)$ & $9$&$.06(8)$ & & & $7$&$.96(9)^{*}$ & ~~$9$&$.2(2)$ & ~~~$8$&$.2(5)$ & ~~~~~$8$&$.3(1)$ & & \\ Exp.\ \cite{Watanabe_2004,Cassabois_2016} & & & & & & & & & & & \multicolumn{2}{c}{5.971, 6.08} & & & & & & & \multicolumn{2}{c}{5.822, 5.955} \\ \hline \hline \end{tabular} \label{table:bulk_gap_results} \end{center} \end{table*} \section{Conclusions\label{sec:conclusions}} We have performed DFT, $GW$, and SJ-DMC calculations to determine the electronic structure of free-standing monolayer and bulk hBN\@. Systematic finite-size errors in the SJ-DMC quasiparticle gaps fall off as the reciprocal of the linear size of the simulation supercell, but can be corrected by subtracting an appropriately screened Madelung constant from the gap. The remaining finite-size effects are dominated by quasirandom oscillations as a function of system size, arising from the fact that long-range oscillations in the pair-correlation function are forced to be commensurate with the supercell. We find the SJ-DMC quasiparticle gap for the monolayer to be indirect ($K_{\rm v} \rightarrow \Gamma_{\rm c}$) and of magnitude 8.8(3) eV\@, which is larger than the gap predicted by the $G_0W_0$, $GW_0$, and $GW$ methods. Our bulk SJ-DMC quasiparticle gaps are also systematically larger than those predicted by $GW$ calculations \cite{Arnaud_2006}. Using DFT, we also find a sizeable vibrational correction to the monolayer band gap of $-0.73$ eV at $300$ K, and a vibrational correction of $-0.40$ eV to the bulk band gap at $300$ K\@. SJ-DMC shows that hBN exhibits large exciton binding energies of $1.9(4)$ eV and $1.8(4)$ eV for the indirect ($K_{\rm v}\rightarrow \Gamma_{\rm c}$) and direct ($K_{\rm v}\rightarrow K_{\rm c}$) excitons in the monolayer. 
The latter binding energy is similar to the value predicted by our $GW_0$-BSE calculation for the direct exciton and compares well to previous $GW$-BSE calculations \cite{Arnaud_2006,Wirtz_2006,Cunningham_2018}, as well as the exciton binding energy obtained within the effective-mass approximation with the Keldysh interaction between charge carriers \cite{Mostaani_2017}. The predicted quasiparticle gaps of hBN increase significantly as one goes from DFT with local functionals, to DFT with hybrid functionals, to $G_0W_0$, to $GW_0$, to $GW$, to SJ-DMC\@. Comparing SJ-DMC gaps with experimental results for bulk hBN shows that the SJ-DMC gaps are significantly too high, even when DFT-calculated vibrational renormalizations are included; the overestimate is around $1.5$ eV\@. Several sources of error on a 0.1--0.3 eV energy scale have been identified: uncertainties due to pseudopotentials, residual finite-size errors after extrapolation of the noisy data to infinite system size, and the need for a more complete treatment of dynamical correlation effects through the use of backflow wave functions. In addition there are unquantified fixed-node errors arising from the use of a single-determinant wave function. Although we investigated very small multideterminant wave functions for the monolayer, it is possible that there could be significant uncanceled fixed-node errors due to multireference character in some of the excited-state wave functions. The mismatch between the minima of the VMC and DMC energies with respect to backflow functions gives some hint that this might be the case. A further possible cause of the disagreement with experiment is the underestimate of the vibrational renormalization of the gap. Several materials exhibit vibrational corrections to the band gap that are up to 50\% (although typically only 10--20\%) larger when calculated using $GW$ theory or hybrid functionals rather than a semilocal DFT functional \cite{Antonius_2014,Monserrat_2016}. 
In the case of hBN, vibrational renormalizations of the band gap could therefore be as large as $-1$ eV for the monolayer at $300$ K and $-0.5$ eV for the bulk at $300$ K\@. Static-nucleus self-consistent $GW$ calculations agree remarkably well with the experimental quasiparticle gap of bulk hBN, but taking into account vibrational effects we find that the $GW$ quasiparticle gap is underestimated by about $0.4$ eV\@. When vibrational effects are included, single-shot $G_0W_0$ methods underestimate the experimental gap by about $1$ eV\@. Determining the electronic structure of hBN from first principles with quantitative accuracy remains a challenging problem. \begin{acknowledgments} We acknowledge financial support from the U.K.\ Engineering and Physical Sciences Research Council (EPSRC) through a Science and Innovation Award, the E.U.\ through the grant \textit{Concept Graphene}, the Royal Society, and Lancaster University through the Early Career Small Grant Scheme. R.\ J.\ H.\ is fully funded by the Graphene NOWNANO CDT (Grant No.\ EP/L01548X/1). B.\ M.\ acknowledges support from the Winton Programme for the Physics of Sustainability, and from Robinson College, Cambridge, and the Cambridge Philosophical Society for a Henslow Research Fellowship. Computational resources were provided by Lancaster University's High-End Computing facility. VZ acknowledges the European Graphene Flagship Project and the ARCHER National UK Supercomputer RAP Project e547. This work made use of the facilities of N8 HPC provided and funded by the N8 consortium and EPSRC (Grant No.\ EP/K000225/1). We acknowledge useful discussions with V.\ I.\ Fal'ko, W.\ M.\ C.\ Foulkes, and S.\ Murphy. \end{acknowledgments}
\section{Introduction} Recently, the transport properties of hot and dense matter have attracted a lot of attention in the context of relativistic heavy ion collisions\cite{heinzrev} as well as cosmology\cite{berera}. Such properties enter the hydrodynamical evolution and are therefore essential for studying the near-equilibrium evolution of a thermodynamic system. In the context of heavy ion collisions, the shear viscosity has perhaps been the most studied transport coefficient. The spatial anisotropy in a nuclear collision gets converted to a momentum anisotropy through the hydrodynamic evolution, and the equilibration of this momentum anisotropy is mainly controlled by the shear viscosity. Indeed, elliptic flow measurements at RHIC led to $\frac{\eta}{s}$, the ratio of the shear viscosity ($\eta$) to the entropy density $s$, close to $\frac{1}{4\pi}$, which is the smallest for any known liquid in nature \cite{hirano}. Arguments based on the AdS/CFT correspondence suggest that the ratio $\frac{\eta}{s}$ cannot be lower than this Kovtun-Son-Starinets (KSS) bound \cite{kss}. Thus the quark gluon plasma (QGP) formed in heavy ion collisions is the most perfect fluid known. Apart from the shear viscosity, the other transport coefficient relating the momentum flux to velocity gradients is the bulk viscosity. It was generally believed that the bulk viscosity does not play any significant role in the hydrodynamic evolution of the matter produced in heavy ion collision experiments, the argument being that the bulk viscosity $\zeta$ scales like $\epsilon-3 p$ and therefore will not play any significant role if the matter follows the ideal gas equation of state. However, in the course of the expansion of the fireball the temperature can come near the critical temperature $T_c$, where $\epsilon-3p$ can be large, as expected from lattice QCD simulations\cite{tanmoy,borsonyi}, leading to a large value of the bulk viscosity.
This in turn can give rise to the phenomenon of cavitation, where the pressure vanishes and the hydrodynamic description of the evolution breaks down\cite{cavitation}. There have been various attempts to estimate the bulk viscosity of strongly interacting matter. The rise of the bulk viscosity near the transition temperature has been observed in various effective models of the strong interaction, including chiral perturbation theory\cite{dobado}, quasiparticle models \cite{sasakiqp}, and the Nambu-Jona-Lasinio model \cite{sasakinjl}. One interesting way to extract it is to use the symmetry properties of QCD, once one realizes that the bulk viscosity characterizes the response to a conformal transformation. This was attempted in Ref.\cite{karschkharzeev}: based on the Kubo formula for $\zeta$ and the low energy theorems \cite{ellislet}, the bulk viscosity becomes related to thermodynamic properties of the strongly interacting system. It is also of both practical and fundamental importance to know the transport coefficients in the hadron phase, in order to distinguish the signatures of QGP matter from those of hadronic matter. The computation of the transport coefficients of a hadronic mixture is not an easy task. There have been various attempts in this field over the last few years, involving approximations such as the relaxation time approximation, the Chapman-Enskog method, and the Green-Kubo approach, to estimate the shear viscosity to entropy ratio using different effective models for hadronic interactions \cite{dobadoshear,sasakinjl,prakashwiranata,purnendu,greco}. In a different approach, $\frac{\eta}{s}$ has also been calculated within a hadron resonance gas model in an excluded volume approximation \cite{gorenstein}, with a molecular kinetic theory approach used to relate the shear viscosity coefficient to the average momentum transfer.
This approach was later extended to include the effects of the rapidly rising hadronic density of states near the critical temperature, modeled by a Hagedorn-type exponential rise of the density of states \cite{greinerprl}. Such a description could reproduce the lattice data and indicated that hadronic matter could become an almost perfect fluid, with $\frac{\eta}{s}$ approaching the KSS bound. Later lattice data \cite{borsonyi}, which indicated a lower pseudocritical temperature of about 160 MeV, led to the assertion that hot hadronic matter described through a hadron resonance gas is far from being a perfect fluid\cite{greinerprc}. All these studies have been done at zero baryon density. It is also known that the basic features of hadronization in heavy ion collisions are well described by hadron resonance gas models: the multiplicities of various hadrons in these experiments show good agreement with the corresponding thermal abundances calculated in the HRG model with appropriately chosen temperature and chemical potentials \cite{hrgexp}. In the present work, we generalize the approach of Ref.\cite{greinerprc} for studying the viscosity coefficients within the ambit of the hadron resonance gas model to include finite chemical potential effects. This can be of relevance to current and planned heavy ion collision experiments: the {\it beam energy scan} at RHIC \cite{bes}, {\it compressed baryonic matter} at GSI \cite{cbm}, and the {\it Nuclotron-based Ion Collider fAcility} (NICA) at Dubna \cite{nica}. The shear viscosity to entropy ratio at finite baryon density has been estimated using relativistic Boltzmann equations for the pion-nucleon system with phenomenological scattering amplitudes\cite{nakano,itakura}. This leads to the ratio being a decreasing function of chemical potential in the $T$-$\mu$ plane.
Further, this ratio as a function of chemical potential shows a valley structure at low temperature, which was interpreted as a signature of the liquid-gas phase transition\cite{nakano,itakura}. The bulk viscosity at finite chemical potential has been studied using the low energy theorems of QCD in Ref.\cite{wang}. There it was estimated using a Schwinger-Dyson approach to compute the dressed quark propagator at finite chemical potential, which was then used to calculate the thermodynamic quantities needed to estimate the bulk viscosity. As mentioned, we shall estimate these viscosity coefficients within the ambit of the hadron resonance gas model, which can be complementary to the above approaches. We organize the paper as follows. In the following section we recapitulate the results of Ref.\cite{karschkharzeev} for the bulk viscosity coefficient as related to thermodynamic quantities through the Kubo formula and the low energy theorems, generalized to include finite chemical potential terms. We also note here the expression for the shear viscosity obtained from molecular kinetic theory, modified appropriately for a relativistic system. In section III we spell out the hadron resonance gas model, including a Hagedorn spectrum above a cutoff, and the resulting thermodynamics. In a subsection there we estimate the quark condensate in a hot and dense medium of the hadron gas. In section IV we discuss the results. Finally, we summarize and conclude in section V. \section{Bulk and shear viscosity coefficients at finite $T$ and $\mu$} The bulk viscosity characterizes the response of the system to a conformal transformation and can be written, as per the Kubo formula, in terms of a bilocal correlation function\cite{karschkharzeev} \be \zeta=\lim_{\omega\rightarrow 0} \frac{1}{9\omega}\int_0^\infty d t\int d\zbf x\, \exp(i\omega t) \left\langle\left[\theta_\mu^\mu(x),\theta_\mu^\mu(0)\right]\right\rangle \equiv\int d^4x \:iG^R(x) \label{defbulk} \ee with $G^R(x)$ being the retarded correlation function of the trace of the energy-momentum tensor.
One can introduce a spectral function $\rho(\omega,\zbf p)= -(1/\pi)\,{\rm Im}\, G(\omega,\zbf p)$ to write a dispersion relation for $G^R(\omega,\zbf p)$. Assuming an ansatz for the spectral function at low energy\cite{karschkharzeev}, $\rho(\omega,\zbf 0)/\omega= (9\zeta/\pi)\,\omega_0^2/(\omega_0^2+\omega^2)$, where $\omega_0$ is the scale at which perturbation theory becomes valid, the bulk viscosity can be written as \be 9\zeta\omega_0=2\int_0^\infty \frac{\rho(u,0)}{u}\,du= \int d^4x \langle \theta_\mu^\mu(x)\theta_\mu^\mu(0)\rangle\equiv\Pi \label{pi} \ee The trace of the stress-energy tensor of QCD is given as \be \theta_\mu^\mu=m\bar qq+\frac{\beta(g)}{2g}G^a_{\mu\nu}G^{a\mu\nu}\equiv \theta_q+\theta_g \ee In the above, $g$ is the strong coupling and $\beta(g)$ is the QCD beta function that governs the running of the coupling. Thus the evaluation of the bulk viscosity reduces to the evaluation of the stress-energy correlator. This is done by using the low energy theorems of QCD generalized to finite temperature and density, according to which, for any operator $\hat O$, its correlator with the gluonic part of the stress tensor $\theta_g$ is given as \be \int d^4x \langle \theta_g(x)\hat O\rangle=(\hat D-d)\langle\hat O \rangle(T,\mu), \label{lettmu} \ee where $\hat D =T\partial/\partial T+\mu\partial/\partial \mu$, with $d$ being the canonical dimension of the operator $\hat O$. Using Eq.(\ref{lettmu}) in Eq.(\ref{pi}) one has \bearr \Pi &=& (\hat D-4)\langle\theta_\mu^\mu\rangle + (\hat D-2)\langle{\theta^q}^\mu_\mu\rangle\nonumber\\ &=& 16|\epsilon_{vac}^g|+6 (f_\pi^2m_\pi^2+f_k^2m_k^2)\nonumber\\ &+&Ts(\frac{1}{c_s^2}-3)+(\mu\frac{\partial}{\partial\mu}-4)(\epsilon^*-3p^*)+(\hat D-2) m_q\langle\bar qq\rangle_{*} \label{pi1} \eearr In the above we have used $\langle\theta_\mu^\mu\rangle=\epsilon-3 p$ and the thermodynamic relations $c_v=\partial\epsilon/\partial T$, $\partial p/\partial T=s$, and $c_s^2=s/c_v$ for the velocity of sound of the medium.
We have also separated the contributions to the correlators into vacuum and medium parts. In Eq.(\ref{pi1}) we have neglected terms quadratic in the current quark masses and have used PCAC relations to express the vacuum condensates in terms of the masses and decay constants of the pions and kaons. It is straightforward to check that for $\mu=0$ Eq.(\ref{pi1}) reduces to the main result of Ref.\cite{karschkharzeev}. For $T=0$ and $\mu\neq 0$, one can simplify Eq.(\ref{pi1}), and Eq.(\ref{pi}) reduces to \be 9\zeta(\mu)\omega_0=16P(\mu)-7\mu\rho+\mu^2\frac{\partial\rho}{\partial\mu} +(\mu\frac{\partial}{\partial\mu}-2)m\langle\bar qq\rangle \label{zetamu} \ee We note that the above expression differs from the one given in Ref.\cite{wang}. It does, however, match the expression given in Ref.\cite{agasian} in the appropriate limit, where the bulk viscosity was computed including the effects of a magnetic field at finite baryon density and temperature. Thus the coefficient of bulk viscosity becomes related to the vacuum properties of QCD as well as to equilibrium thermodynamic parameters such as the velocity of sound, the non-ideality $\epsilon-3p$, and the in-medium quark condensate. These thermodynamic quantities shall be estimated within the hadron resonance gas model, which we spell out in the next section. Next, we consider the shear viscosity coefficient $\eta$ of the hadronic medium. Hadrons interact in various channels, with the possibility of both attractive and repulsive interactions. Within the hadron resonance gas model, the attractive channels are effectively included through the resonances, while the repulsive channels can be modeled in a simple manner through an excluded volume correction \cite{hagedorn,kapustaolive,rischkegorenstein}.
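Equation~(\ref{zetamu}) can be evaluated numerically once the pressure, number density, and quark condensate are known as functions of $\mu$. A minimal sketch with central finite differences for the $\mu$-derivatives (the callable inputs are hypothetical placeholders for a model equation of state; natural units assumed):

```python
def nine_zeta_omega0(mu, P, rho, qq, m_q, h=1e-5):
    """Right-hand side of Eq. (zetamu):
    9 zeta(mu) omega_0 = 16 P - 7 mu rho + mu^2 drho/dmu
                         + m_q (mu d<qq>/dmu - 2 <qq>).
    P, rho, qq are callables of mu supplied by a model equation
    of state; derivatives are taken by central differences."""
    drho = (rho(mu + h) - rho(mu - h)) / (2.0 * h)
    dqq = (qq(mu + h) - qq(mu - h)) / (2.0 * h)
    return (16.0 * P(mu) - 7.0 * mu * rho(mu) + mu**2 * drho
            + m_q * (mu * dqq - 2.0 * qq(mu)))
```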
The shear viscosity of a relativistic gas of multi-component hard-core spheres can be written as \cite{gorenstein,greinerprc} \be \eta=\frac{5}{64\sqrt 8 r^2}\sum_i\langle|\zbf p_i|\rangle \frac{n_i}{n} \label{eta} \ee where $\langle|\zbf p_i|\rangle$ is the average momentum of particles of the $i$-th species and $r$ is the hard-core radius of each hadron. Further, $n_i$ is the number density of the $i$-th particle species and $n=\sum_in_i$. \section{Hadron Resonance Gas model} The central quantity in the hadron resonance gas model (HRGM) is the thermodynamic potential, which is that of a free gas of bosons and fermions, \be \log Z(V,\beta,\mu)=\int dm \left(\rho_M(m)\log Z_b(m,V,\beta,\mu)+\rho_B(m)\log Z_f(m,V,\beta,\mu)\right) \label{logz} \ee where the gas of hadrons is contained in a volume $V$, at temperature $\beta^{-1}$ and chemical potential $\mu$. Here $Z_b$ and $Z_f$ are the partition functions of bosons and fermions of mass $m$, respectively, while $\rho_M$ and $\rho_B$ are the corresponding spectral densities of mesons and baryons. Using Eq.(\ref{logz}), one obtains the energy density $\epsilon$ from a derivative with respect to $\beta$, the pressure $p$ from a derivative with respect to $V$, and the number density $\rho$ from a derivative with respect to $\mu$. From these one can also obtain the trace anomaly $\epsilon-3p$, the entropy density, the specific heat, and the speed of sound. Hadron properties enter these models through the spectral densities $\rho_{B/M}(m)$. One common approach in the HRGM is to take all the hadrons and their resonances up to a mass cutoff $\Lambda$ and write \be \rho_{B/M}(m)=\sum_{i}^{M_i<\Lambda} g_i\delta(m-M_i) \label{hrgm1} \ee where the sum runs over all baryon or meson states with mass below the cutoff $\Lambda$, $M_i$ are the masses of the known hadrons, and $g_i$ is the degeneracy factor (spin, isospin).
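For orientation, the contribution of a single hadron species to the pressure in the Boltzmann limit is $p=g\,m^2T^2K_2(m/T)/(2\pi^2)$, the same form as the continuum (Hagedorn) term used later. A minimal numerical sketch (the two-species list is purely illustrative, quantum statistics and the continuum part are omitted, and $K_2$ is evaluated from its integral representation):

```python
import math

def K2(x, t_max=12.0, n=4000):
    """Modified Bessel function K_2 via the integral representation
    K_2(x) = int_0^inf exp(-x cosh t) cosh(2t) dt (midpoint rule)."""
    dt = t_max / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        s += math.exp(-x * math.cosh(t)) * math.cosh(2.0 * t) * dt
    return s

def boltzmann_pressure(g, m, T):
    """Pressure of one species in the Boltzmann limit,
    p = g m^2 T^2 K_2(m/T) / (2 pi^2); units GeV^4 if m, T in GeV."""
    return g * m**2 * T**2 / (2.0 * math.pi**2) * K2(m / T)

# illustrative species list: pions (g=3) and nucleons+antinucleons (g=8)
hadrons = [("pi", 3, 0.138), ("N+Nbar", 8, 0.938)]
T = 0.150
p_tot = sum(boltzmann_pressure(g, m, T) for _, g, m in hadrons)
print(p_tot / T**4)                 # dimensionless pressure p/T^4
```
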
On the other hand, an exponentially increasing density of states is necessary to explain the rapid increase of the entropy density near the transition region seen in lattice QCD simulations \cite{majumdermueller}. Such an exponential rise of the density of states has also been used to study observables like dilepton production \cite{leonidov} as well as chemical equilibration \cite{igorgreiner}. Motivated by these observations, we take the modified spectral function as \cite{cleymans,guptagod,greinerprl} \be \rho_{B/M}(m)=\sum_{i}^{M_i<\Lambda} g_i\delta(m-M_i)+\rho_{HS}(m) \label{hrgm2} \ee where $\rho_{HS}(m)$ is the spectral density of the heavier Hagedorn states (HS). To describe the required large density of states beyond the cutoff $\Lambda$, one can take an exponentially rising form \cite{Hagedorn}, which implies an underlying string picture for hadrons. Alternatively, one can consider the simple power-law form introduced in Ref.\cite{shuryak} to describe the rise of the hadronic mass spectrum. We consider both forms for the continuum part of the spectral density, \be \rho_{exp}=\frac{A}{({m^{2}+m_{0}^{2}})^{2}}e^{\frac{m}{T_{H}}} \label{rhoexp} \ee \be \rho_{power}=\frac{A}{T_{H}}\bigg(\frac{m}{T_{H}}\bigg)^{\alpha} \label{rhopower} \ee with the parametrization of the two spectral forms given in the table below. \begin{center} \begin{tabular}{|l|c|c|c|c|} \hline spectral density & $T_{H}$ (GeV) & $A$ & $m_{0}$ (GeV) & $\alpha$\\ \hline $\rho_{exp}$ & 0.210 & 0.63 & 0.5 & --\\ \hline $\rho_{power}$ & 0.180 & 0.51 & -- & 3 \\ \hline \end{tabular} \end{center} We have taken the parameters $A$ and $m_0$ for $\rho_{exp}$ as in Ref.\cite{majumdermueller}, but with a different value of $T_H$ chosen to fit the lattice data of Ref.\cite{borsonyimu}.
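The two continuum forms with the table parameters are straightforward to tabulate; a minimal numerical sketch (note that the dimensions of $A$ differ between the two forms, as implied by the expressions above):

```python
import math

# parameters from the table above (GeV units)
T_H_EXP, A_EXP, M0 = 0.210, 0.63, 0.5        # rho_exp
T_H_POW, A_POW, ALPHA = 0.180, 0.51, 3.0     # rho_power

def rho_exp(m):
    """Exponential Hagedorn spectrum: A/(m^2+m0^2)^2 * exp(m/T_H)."""
    return A_EXP / (m**2 + M0**2)**2 * math.exp(m / T_H_EXP)

def rho_power(m):
    """Power-law spectrum: (A/T_H) * (m/T_H)^alpha."""
    return A_POW / T_H_POW * (m / T_H_POW)**ALPHA

# beyond the cutoff Lambda = 2 GeV the exponential form grows much faster
for m in (2.0, 3.0, 4.0):
    print(m, rho_exp(m), rho_power(m))
```
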
Similarly, the parameters $\alpha$ and $T_H$ for $\rho_{power}$ are chosen to fit the lattice data of Ref.\cite{borsonyimu}, while the parameter $A$ is kept the same as in Ref.\cite{greinerprc}. With this ansatz for the spectral densities, the pressure $P=P_M+P_B$, with contributions from mesons and baryons respectively, is given by \bearr P_M &=&\frac{1}{2\pi^2}\bigg[-\frac{1}{\beta}\sum_ig_i\int k^2 dk\log\left(1-\exp(-\beta\epsilon_i)\right)\nonumber\\ &+&\int_\Lambda^\infty \rho_{HS}(m) dm \frac{m^2}{\beta^2}K_2(\beta m)\bigg] \label{pmes} \eearr \bearr P_B &=&\frac{1}{2\pi^2}\bigg[\frac{1}{\beta}\sum_ig_i\int k^2 dk\bigg(\log\big(1+\exp(-\beta(\epsilon_i-\mu))\big) +\log\left(1+\exp(-\beta(\epsilon_i+\mu))\right)\bigg)\nonumber\\ &+& 2\int_\Lambda^\infty \rho_{HS}(m) dm \frac{m^2}{\beta^2}K_2(\beta m)\cosh(\beta\mu)\bigg ] \label{pbar} \eearr Here, $K_n(x)$ is the modified Bessel function of order $n$; the logarithms for the baryons carry the Fermi-Dirac form, consistent with the distribution functions below. Similarly, the energy density $\epsilon=-\frac{\partial}{\partial\beta}(\beta p)+\mu\frac{\partial p}{\partial \mu}=\epsilon_M+\epsilon_B$, with the energy density of mesons $\epsilon_M$ given as \bearr \epsilon_M &=& \frac{1}{2\pi^2}\bigg[\sum_ig_i\int k^2dk \frac{\epsilon_i}{\exp(\beta\epsilon_i)-1} \nonumber\\ &+&\int_\Lambda^\infty \rho_{HS}(m)dm\, m^4 \left(\frac{3}{\beta^2m^2}K_2(\beta m) +\frac{1}{\beta m}K_1(\beta m)\right)\bigg] \label{emes} \eearr and the contribution of the baryons to the energy density $\epsilon_B$ given as \bearr \epsilon_B& =& \frac{1}{2\pi^2}\bigg[\sum_ig_i\int k^2dk {\epsilon_i} \left(\frac{1}{\exp(\beta(\epsilon_i-\mu))+1} +\frac{1}{\exp(\beta(\epsilon_i+\mu))+1}\right)\nonumber\\ & +& 2\int_\Lambda^\infty \rho_{HS}(m)dm\, m^4\left(\frac{3}{\beta^2m^2}K_2(\beta m) +\frac{1}{\beta m}K_1(\beta m)\right)\cosh(\beta\mu)\bigg] \label{ebar} \eearr The baryon number density is given by \bearr n_B&=&\frac{1}{2\pi^2}\bigg[\sum_ig_i\int k^2dk \left( \frac{1}{\exp(\beta(\epsilon_i-\mu))+1} - \frac{1}{\exp(\beta(\epsilon_i+\mu))+1}\right)\nonumber\\ &+& 2\int_\Lambda^\infty \rho_{HS}(m)dm
\frac{m^2}{\beta}K_2(\beta m)\sinh(\beta\mu)\bigg] \label{rhob} \eearr From these one can calculate further quantities such as the interaction measure $\epsilon-3p$ and the entropy density $s=\partial p/\partial T$, as needed for the estimation of the bulk viscosity. \subsection{Quark condensates in the hadronic medium} The other input needed to estimate the bulk viscosity is the in-medium quark condensate. To estimate it within the framework of the HRGM, one needs the dependence of the hadron masses on the current quark masses. The chiral condensate is given in terms of the thermodynamic potential (the negative of the pressure) as $\langle\bar q q\rangle=-\frac{\partial p}{\partial m_q}$, which leads to \be \langle\bar q q\rangle=\langle\bar q q\rangle_0+\sum_{mesons}\frac{\sigma^M}{m_q }n_M +\sum_{baryons}\frac{\sigma^B}{m_q }n_B \label{mediumcondensate} \ee where $n_M$ and $n_B$ here denote the scalar densities of mesons and baryons, given respectively by \be n_M=\frac{g_i}{2\pi^2}\int k^2 dk \frac{m_M}{\epsilon_M}\frac{1}{\exp(\beta\epsilon_M)-1}, \ee \be n_B=\frac{g_i}{2\pi^2}\int k^2 dk \frac{m_B}{\epsilon_B}\left(\frac{1}{\exp(\beta(\epsilon_B-\mu_B))+1} +\frac{1}{\exp(\beta(\epsilon_B+\mu_B))+1}\right) \ee Further, $\sigma^{M/B}$ is the hadronic sigma term, i.e., the response of the hadron mass to changes in the current quark masses, \be\sigma_q^{M/B}=m_q\frac{\partial M_{M/B}}{\partial m_q} \ee Thus, computing the behavior of the in-medium condensate within the HRGM reduces to calculating the $\sigma$-terms of the hadrons. We do this in a manner similar to that of Ref.\cite{blaschke}.
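The $\sigma$-term arithmetic can be verified directly. A minimal sketch, assuming the valence-quark mass scaling $m_B=(3-N_s)M_q+N_sM_s+\kappa_B$ and the NJL-based constituent-mass shifts quoted in this work ($\Delta M_q=14$ MeV as $m_q$ runs from 0 to 5.5 MeV; $\Delta M_s=235.5$ MeV as the strange current mass runs from 0 to 140.7 MeV); the strange-quark contribution $m_s\,\partial M_B/\partial m_s$ is included for the $\Lambda$:

```python
# NJL-based slopes of constituent masses w.r.t. current masses (MeV/MeV)
dMq_dmq = 14.0 / 5.5        # light: 14 MeV shift over 0..5.5 MeV
dMs_dms = 235.5 / 140.7     # strange: 235.5 MeV shift over 0..140.7 MeV
m_q, m_s = 5.5, 140.7       # current quark masses in MeV

def sigma_baryon(n_strange):
    """sigma = m_q dM_B/dm_q + m_s dM_B/dm_s for a baryon with n_strange
    strange valence quarks, using M_B = (3-N_s) M_q + N_s M_s + kappa_B
    (kappa_B drops out of the derivative)."""
    return m_q * (3 - n_strange) * dMq_dmq + m_s * n_strange * dMs_dms

print(sigma_baryon(0))   # nucleon: 42 MeV
print(sigma_baryon(1))   # Lambda: 263.5 MeV
```
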
For the pseudoscalar mesons, we use the Gell-Mann--Oakes--Renner (GOR) relation to obtain \be \frac{\partial m_\pi^2}{\partial m_q}=-\frac{\langle\bar q q\rangle_0}{f_\pi^2}\left(1+2\kappa \frac{m_\pi^2}{f_\pi^2}\right) \ee \be \frac{\partial m_K^2}{\partial m_{q,s}}=-\frac{\langle\bar q q\rangle_0+\langle\bar s s\rangle_0 }{2f_K^2}\left(1+2\kappa \frac{m_K^2}{f_\pi^2}\right) \ee with the parameter $\kappa=0.021\pm0.008$ \cite{jaminplb}. Here, we have taken $m_q=m_u=m_d=5.5$ MeV, $m_s=138$ MeV, $f_\pi=92.4$ MeV, $f_K=113$ MeV, $\langle\bar u u\rangle_0=\langle\bar d d\rangle_0=\langle\bar q q\rangle_0= (-240\ {\rm MeV})^3$, and $\langle\bar s s\rangle_0=0.8 \langle \bar q q\rangle_0$. For the other hadrons we use a model based on the valence quark structure, as in Ref.\cite{leupold}. Here the masses of the baryons (B) and mesons (M) scale as \be m_B=(3-N_s) M_q+N_sM_s +\kappa_B \ee \be m_M=(2-N_s)M_q+N_sM_s+\kappa_M \ee where $M_q$ and $M_s$ are the constituent quark masses of the light and strange quarks respectively, $\kappa_{B/M}$ are constants that depend on the hadronic state but not on the current quark masses, and $N_s$ is the number of strange valence quarks. The constituent masses $M_q$ and $M_s$ partially account for the strong interaction dynamics. To compute the $\sigma$ term, one needs to know the variation of the constituent masses with the current quark masses. This dependence is taken from the Nambu--Jona-Lasinio model \cite{hmnjl}, in which the light dynamical mass changes by 14 MeV as the current quark mass is varied from 0 to 5.5 MeV, while the strange dynamical mass changes by about 235.5 MeV as the strange current quark mass is varied from 0 to 140.7 MeV. This results, for example, in $\sigma$ terms of 42 MeV for the nucleon and 263.5 MeV for the $\Lambda$ hyperon. \section{Results and discussions} Let us first discuss the thermodynamics of the hadron resonance gas specified by the spectral density given in Eq.(\ref{hrgm2}).
To estimate the different thermodynamic quantities, for the discrete part of the spectrum we have taken all the hadrons and their resonances with mass less than 2 GeV \cite{pdgb}. For the Hagedorn part, we consider both forms of the spectral density given by Eq.(\ref{rhoexp}) and Eq.(\ref{rhopower}). \begin{figure}[h] \vspace{-0.4cm} \begin{center} \begin{tabular}{c c} \includegraphics[width=9cm,height=7cm]{pres_r2.eps}& \includegraphics[width=9cm,height=7cm]{pres_r4.eps}\\ (1 a)&(1 b) \end{tabular} \caption{ Thermodynamics of the hadron resonance gas. The left panel (Fig. 1a) shows the pressure as a function of temperature for $\mu_B=0$ (blue) and $\mu_B=300$ MeV (red) with the Hagedorn spectrum $\rho=\rho_{exp}$ of Eq.(\ref{rhoexp}). The dotted lines correspond to taking only the discrete spectrum for the hadron resonance gas. The right panel shows the same quantities but with the spectral function $\rho=\rho_{power}$ given in Eq.(\ref{rhopower}). The data points are lattice simulation results taken from Ref.\cite{borsonyimu}. }\label{thermo1} \end{center} \end{figure} In Fig.\ref{thermo1} we have plotted the pressure in units of $T^{4}$ for two different chemical potentials, $\mu=0$ and $\mu=300$ MeV. The lattice points with error bars are taken from Table 4 of Ref.\cite{borsonyimu}, corresponding to the continuum extrapolation. The dotted lines in Fig.\ref{thermo1} correspond to keeping only the discrete part of the spectral density, Eq.(\ref{hrgm1}). The left panel corresponds to the exponential form of the continuum spectral density, while the right panel corresponds to the power-law form in Eq.(\ref{hrgm2}). As can be noted in this figure, the discrete spectrum coupled with the continuum spectrum describes the lattice data quite well up to $T=170$ MeV, within the error bars of the lattice simulations, for the parametrization given in Table 1. In Fig.
\ref{thermo2} we have plotted the dimensionless scale anomaly $(\epsilon-3p)/T^{4}$ as a function of temperature at the two chemical potentials. As can be noted from both Figs. (2a) and (2b), the discrete part of the spectral density alone does not give a good fit to the lattice data beyond $140$ MeV but, when coupled with the continuum part as in Eq.(\ref{hrgm2}), fits the lattice data reasonably well up to $150$ MeV, even at $\mu=300$ MeV. We have taken a higher value of $T_{H}$ than in Ref.\cite{greinerprc}, as required to fit the lattice data of Ref.\cite{borsonyimu}. This is because Ref.\cite{greinerprc} fitted the $N_t=10$ lattice data of Ref.\cite{borsonyi}, while we have fitted the continuum-extrapolated $\mu=0$ lattice data of Ref.\cite{borsonyimu}. \begin{figure}[h] \vspace{-0.4cm} \begin{center} \begin{tabular}{c c} \includegraphics[width=9cm,height=7cm]{e3p_r2.eps}& \includegraphics[width=9cm,height=7cm]{e3p_r4.eps}\\ (2 a)&(2 b) \end{tabular} \caption{Scale anomaly as a function of temperature for the exponential spectral density (2a) and the power-law spectral density (2b). } \label{thermo2} \end{center} \end{figure} \begin{figure}[h] \vspace{-0.4cm} \begin{center} \begin{tabular}{c c} \includegraphics[width=9cm,height=7cm]{cs2_r2.eps}& \includegraphics[width=9cm,height=7cm]{cs2_r4.eps}\\ (3 a)&(3 b) \end{tabular} \caption{Square of the sound velocity as a function of temperature for the exponential spectral density (3a) and the power-law spectral density (3b). } \label{thermo3} \end{center} \end{figure} Fig.~\ref{thermo3} shows the square of the sound velocity ($c_{s}^{2}$) as a function of temperature at fixed values of the chemical potential, along with the lattice simulation results of Ref.\cite{borsonyimu}.
As can be noted from the figure, keeping only the discrete part of the spectral density does not fit the lattice results, even though it could fit the lattice results for the pressure and the scale anomaly of Ref.\cite{borsonyimu}. On the other hand, the power-law parametrization of the continuum part of the spectral density, together with the discrete part, leads to a reasonable fit to the lattice data up to $150$ MeV, both at $\mu=0$ and at $\mu=300$ MeV. The initial rise of the sound velocity with temperature reflects the fact that the light degrees of freedom are easily excited at low temperature and contribute to both the pressure and the energy. At larger temperatures, when baryons are excited, they contribute significantly to the energy density but almost nothing to the pressure. This leads to the decrease of the sound velocity with temperature seen at higher temperatures ($T> 80$ MeV). As the chemical potential increases, heavier baryonic channels open up at low temperature and contribute significantly to the energy density but little to the pressure. This leads to lower values of $c_{s}^{2}$ as the chemical potential is increased. \begin{figure}[h] \vspace{-0.4cm} \begin{center} \begin{tabular}{c c} \includegraphics[width=9cm,height=7cm]{sbin_r4.eps}& \includegraphics[width=9cm,height=7cm]{cs2_sbin.eps}\\ (4 a)&(4 b) \end{tabular} \caption{ Velocity of sound at constant entropy per baryon ratios. The left panel (4a) shows trajectories of constant entropy per baryon in the phase diagram. The velocity of sound at constant entropy per baryon is plotted in Fig. 4b. } \label{isentropic} \end{center} \end{figure} We have also plotted the speed of sound for the isentropic case in Fig.~\ref{isentropic}. At each temperature, the chemical potential is adjusted so that the entropy per baryon $S/N$ remains constant. The resulting isentropic trajectories in the $\mu-T$ plane are shown in Fig. (4a). $S/N=30$ and $S/N=45$ correspond to AGS and SPS energies respectively \cite{blum}.
As expected from the results at constant chemical potential (Fig.\ref{thermo3}), the sound velocity is lower for lower $S/N$. \begin{figure}[h] \vspace{-0.4cm} \begin{center} \begin{tabular}{c c} \includegraphics[width=9cm,height=7cm]{r2_zetabis.eps}& \includegraphics[width=9cm,height=7cm]{r4_zetabis.eps}\\ (5 a)&(5 b) \end{tabular} \caption{ Bulk viscosity to entropy ratio as a function of temperature for different chemical potentials. The left panel is for the exponential Hagedorn spectrum and the right panel for the power-law Hagedorn spectrum.} \label{zetabis} \end{center} \end{figure} \begin{figure}[h] \vspace{-0.4cm} \begin{center} \begin{tabular}{c c} \includegraphics[width=9cm,height=7cm]{r2_etabist.eps}& \includegraphics[width=9cm,height=7cm]{r4_etabist.eps}\\ (6 a)&(6 b) \end{tabular} \caption{ Shear viscosity to entropy ratio in the hadronic phase. The left panel (6a) shows $\eta/s$ as a function of temperature for different chemical potentials with the exponential Hagedorn spectrum. The right panel shows the same with the power-law Hagedorn spectrum. } \label{etabis} \end{center} \end{figure} We next use these thermodynamic results for the hadron resonance gas in Eq.(\ref{pi}) and Eq.(\ref{pi1}) to estimate the bulk viscosity. We also include the contributions from the quark condensates for the discrete part of the spectrum using Eq.(\ref{mediumcondensate}); their contribution to $\zeta/s$ turns out to be only a few percent of the total. The resulting behavior of $\zeta/s$ as a function of temperature is shown in Fig.\ref{zetabis} for different values of the baryon chemical potential. In general, the ratio decreases with temperature at low temperatures, followed by a sharp increase, and finally flattens out at temperatures around 160 MeV. This behavior is connected with the behavior of the sound velocity with temperature through Eq.(\ref{pi1}).
The initial decrease of $\zeta/s$ with temperature is due to the increase of the sound velocity at low temperature from the excitation of light hadrons. The sharp rise for $T>60$ MeV is related to the decrease of the sound velocity as heavier hadrons are excited, and the ratio finally flattens out at temperatures around 155 MeV, as shown in Fig.\ref{thermo3}. The larger bulk viscosity to entropy ratio at higher chemical potential is again related to the decrease of the sound velocity due to the excitation of heavier baryons. \begin{figure}[h] \vspace{-0.4cm} \begin{center} \begin{tabular}{c c} \includegraphics[width=9cm,height=7cm]{r2_etabismu.eps}& \includegraphics[width=9cm,height=7cm]{r2_etabismu1.eps}\\ (7 a)&(7 b) \end{tabular} \caption{Shear viscosity to entropy ratio as a function of chemical potential.} \label{etabismuu} \end{center} \end{figure} In Fig.\ref{etabis}, we have plotted the shear viscosity to entropy ratio as a function of temperature for different chemical potentials. The finite size of the hadrons enters through the $1/r^2$ factor in Eq.(\ref{eta}). We also retain the finite-volume corrections to the entropy density $s$, as in Ref.\cite{peter}. We note that the thermodynamic quantities are not sensitive to the value of $r$ for $r> 0.2$ fm, as demonstrated in Ref.\cite{greinerprc}. We have taken a uniform hard-core radius of $r=0.4$ fm for all mesons and $r=0.5$ fm for all baryons \cite{bugaev,gorenstein}. For $\mu=0$ the minimum of $\eta/s$ reaches about 0.7, an order of magnitude larger than the viscosity bound $\eta/s=\frac{1}{4\pi}$. As the chemical potential is increased, the temperature dependence remains similar to that at $\mu=0$. The $\mu$ dependence of the ratio $\eta/s$ is, on the other hand, nontrivial: it decreases with chemical potential for temperatures below about 130 MeV, beyond which it increases with $\mu$.
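As a rough illustration of the scale set by Eq.(\ref{eta}), one can evaluate it for a one-component pion gas with a hard-core radius of 0.4 fm; this is only a sketch (the full estimate sums over all species and uses the excluded-volume-corrected entropy):

```python
import math

HBARC = 0.19733  # GeV*fm, converts fm to GeV^-1

def avg_momentum(m, T, p_max=5.0, n=5000):
    """<|p|> for a relativistic Boltzmann gas: ratio of the moments of
    p^2 exp(-E/T) with E = sqrt(p^2 + m^2); p, m, T in GeV."""
    dp = p_max / n
    num = den = 0.0
    for i in range(n):
        p = (i + 0.5) * dp
        w = p * p * math.exp(-math.sqrt(p * p + m * m) / T)
        num += p * w * dp
        den += w * dp
    return num / den

def eta_single_species(m, T, r_fm):
    """One-component limit of Eq.(eta): eta = 5 <|p|> / (64 sqrt(8) r^2),
    with r converted to GeV^-1 so that eta comes out in GeV^3."""
    r = r_fm / HBARC
    return 5.0 * avg_momentum(m, T) / (64.0 * math.sqrt(8.0) * r * r)

# illustrative: pion gas at T = 150 MeV with hard-core radius 0.4 fm
print(eta_single_species(0.138, 0.150, 0.4))   # eta in GeV^3
```
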
The initial decrease of $\eta$ with $\mu$ can be understood as an enhancement of hard-core scattering with increasing nucleon number density. However, the entropy density $s$ starts decreasing with increasing chemical potential at higher temperatures. This is because the excluded-volume corrections, proportional to the particle densities, enter in the denominator of the entropy density \cite{peter}. This in turn yields a larger value for the ratio $\eta/s$. We have also examined the behavior of $\eta/s$ at low temperatures as a function of $\mu$, where it shows a valley structure, plotted in Fig.\ref{etabismuu}. A similar observation was made in Refs.\cite{itakura, nakano,cpsingh}, where the existence of a minimum of $\eta/s$ was interpreted as indicative of a liquid-gas phase transition, since a minimum of $\eta/s$ as a function of a thermodynamic control parameter such as temperature or chemical potential can be indicative of a phase transition \cite{csernai,nakano,itakura}. As the temperature increases, the valley structure becomes shallower, as clearly shown in Fig. 7b, possibly suggestive of such a phase transition. However, the corresponding entropy does not show such a structure, and the corresponding nucleon number density ($0.07\,{\rm fm}^{-3}$) turns out to be about half the nuclear matter density. \section{Summary} We have estimated the bulk and shear viscosity to entropy ratios of a hadronic medium modeled as a hadron resonance gas. Apart from including all the hadrons below a mass cutoff of 2 GeV, we have also included a continuum density of states beyond 2 GeV. Such a description of the hadronic medium gives a good fit to the lattice data both at zero and at finite chemical potential \cite{borsonyimu}.
The thermodynamic quantities so obtained are used to estimate the bulk viscosity of the hadron gas at finite chemical potential, using the method outlined in Ref.\cite{karschkharzeev} for finite temperature and zero chemical potential. At finite chemical potential, $\zeta/s$ becomes larger than at $\mu=0$; this is related to the fact that the velocity of sound becomes smaller at finite chemical potential, with the excitation of heavier baryons contributing more to the energy density than to the pressure. This approach had already been used to estimate $\eta/s$ for the hadronic medium in Ref.\cite{greinerprl}, where $\eta/s$ reached the viscosity bound of $\frac{1}{4\pi}$ at a temperature of about $T=190$ MeV with the lattice data available at that time. Later lattice data, however, pointed to a lower critical temperature, indicating that the hadron resonance gas leads to $\eta/s$ about an order of magnitude above the viscosity bound. We observe that at finite chemical potential $\eta/s$ increases with temperature, with its magnitude increasing with chemical potential. For low temperatures ($T<30$ MeV) and high baryon chemical potential, we observe a valley structure in this ratio, which may be connected with the liquid-gas phase transition in nuclear matter. \def\karschkharzeev{F. Karsch, D. Kharzeev, and K. Tuchin, Phys. Lett. B 663, 217 (2008).} \def\joglekar{J.C. Collins, A. Duncan, S.D. Joglekar, Phys. Rev. D 16, 438 (1977).} \def\blaschke{J. Jankowski, D. Blaschke, M. Spalinski, Phys. Rev. D 87, 105018 (2013).} \def\gorenstein{M. Gorenstein, M. Hauer, O. Moroz, Phys. Rev. C 77, 024911 (2008)} \def\bugaev{K. Bugaev et al., Eur. Phys. J. A 49, 30 (2013)} \def\cpsingh{S.K. Tiwari, P.K. Srivastava, C.P. Singh, Phys. Rev. C 85, 014908 (2012)} \def\chen{J.-W. Chen, Y.-H. Li, Y.-F. Liu, and E. Nakano, Phys. Rev. D 76, 114011 (2007)} \def\chennakano{J.-W. Chen and E. Nakano, Phys. Lett. B 647, 371 (2007)} \def\itakura{K.
Itakura, O. Morimatsu, and H. Otomo, Phys. Rev. D 77, 014014 (2008)} \def\cleymans{J. Cleymans, H. Oeschler, K. Redlich, and S. Wheaton, Phys. Rev. C 73, 034905 (2006)} \def\worku{J. Cleymans and D. Worku, Mod. Phys. Lett. A26, 1197 (2011).} \def\guptagod{S. Chatterjee, R. M. Godbole and S. Gupta, {\PRC{81}{044907}{2010}}.} \def\Noronha{J. Noronha-Hostler, J. Noronha and C. Greiner, Phys. Rev. C 86, 024913 (2012)} \def\berera{A. Bastero-Gil, A. Berera and R. Ramos, JCAP 1107, 030 (2011).} \def\heinzrev{U. Heinz and R. Snellings, Annu. Rev. Nucl. Part. Sci. 63, 123-151 (2013)} \def\hirano{P. Romatschke and U. Romatschke, Phys. Rev. Lett. {\bf 99}, 172301 (2007); T. Hirano and M. Gyulassy, Nucl. Phys. {\bf A 769}, 71 (2006).} \def\kss{P. Kovtun, D.T. Son and A.O. Starinets, Phys. Rev. Lett. {\bf 94}, 111601 (2005).} \def\tanmoy{A. Bazavov {\it et al.}, arXiv:1407.6387.} \def\cavitation{K. Rajagopal and N. Trupuraneni, JHEP 1003, 018 (2010); J. Bhatt, H. Mishra and V. Sreekanth, JHEP 1011, 106 (2010); {\it ibid} Phys. Lett. B704, 486 (2011); {\it ibid} Nucl. Phys. A875, 181 (2012).} \def\borsonyi{S. Borsanyi {\it et al.}, JHEP 1011, 077 (2010).} \def\borsonyimu{S. Borsanyi {\it et al.}, JHEP 1208, 053 (2012).} \def\dobado{A. Dobado, F.J. Llane-Estrada and J. Torres Rincon, {\PLB{702}{43}{2011}}.} \def\dobadoshear{A. Dobado, F.J. Llane-Estrada and J. Torres Rincon, {\PRD{79}{055207}{2009}}.} \def\sasakiqp{C. Sasaki and K. Redlich, {\PRC{79}{055207}{2009}}.} \def\sasakinjl{C. Sasaki and K. Redlich, {\NPA{832}{62}{2010}}.} \def\ellislet{I.A. Shushpanov, J. Kapusta and P.J. Ellis, {\PRC{59}{2931}{1999}}; P.J. Ellis, J.I. Kapusta, H.-B. Tang, {\PLB{443}{63}{1998}}.} \def\prakashwiranata{A. Wiranata and M. Prakash, {\PRC{85}{054908}{2012}}.} \def\purnendu{P. Chakravarti and J.I. Kapusta, {\PRC{83}{014906}{2011}}.} \def\greco{S. Plumari, A. Paglisi, F. Scardina and V. Greco, {\PRC{83}{054902}{2012}}.} \def\bes{H. Caines, arXiv:0906.0305 [nucl-ex] (2009).} \def\greinerprl{J. Noronha-Hostler, J.
Noronha and C. Greiner, {\PRL{103}{172302}{2009}}.} \def\greinerprc{J. Noronha-Hostler, J. Noronha and C. Greiner, {\PRC{86}{024913}{2012}}.} \def\igorgreiner{J. Noronha-Hostler, C. Greiner and I. Shovkovy, {\PRL{100}{252301}{2008}}.} \def\majumdermueller{A. Majumder and B. Mueller, {\PRL{105}{252002}{2010}}.} \def\leonidov{A. V. Leonidov and P. V. Ruuskanen, {\EPJC{4}{519}{1998}}.} \def\cbm{B. Friman, C.H. Ohne, J. Knoll, S. Leupold, J. Randrup, R. Rapp, P. Senger (Eds.), Lect. Notes Phys., vol. 814 (2011).} \def\nica{A.N. Sissakian, A.S. Sorin, J. Phys. G 36 (2009) 064069.} \def\nakano{J.W. Chen, Y.H. Li, Y.F. Liu and E. Nakano, {\PRD{76}{114011}{2007}}.} \def\wang{M. Wang, Y. Jiang, B. Wang, W. Sun and H. Zong, Mod. Phys. Lett. {\bf A76}, 1797 (2011).} \def\agasian{N.O. Agasian, JETP Lett. 95, 171 (2012), arXiv:1109.5849.} \def\Hagedorn{R. Hagedorn and J. Rafelski, {\PLB{97}{136}{1980}}.} \def\kapustaolive{J.I. Kapusta and K. A. Olive, {\NPA{408}{478}{1983}}.} \def\hrgexp{P. Braun-Munzinger, J. Stachel, J.P. Wessels and N. Xu, {\PLB{365}{1}{1996}}; G.D. Yen and M.I. Gorenstein, {\PRC{59}{2788}{1999}}; F. Becattini, J. Cleymans, A. Keranen, E. Suhonen and K. Redlich, {\PRC{64}{024901}{2001}}.} \def\rischkegorenstein{D.H. Rischke, M.I. Gorenstein, H. Stoecker and W. Greiner, Z. Phys. C {\bf 51}, 485 (1991).} \def\hmnjl{A. Mishra and H. Mishra, {\PRD{74}{054024}{2006}}.} \def\pdgb{C. Amsler {\it et al.}, {\PLB{667}{1}{2008}}.} \def\shuryak{E.V. Shuryak, Yad. Fiz. {\bf 16}, 395 (1972).} \def\leupold{S. Leupold, J. Phys. G {\bf 32}, 2199 (2006)} \def\peter{A. Andronic, P. Braun-Munzinger, J. Stachel and M. Winn, {\PLB{718}{80}{2012}}} \def\blum{M. Blum, B. Kamfer, R. Schluze, D. Seipt and U. Heinz, {\PRC{76}{034901}{2007}}.} \def\jaminplb{M. Jamin, {\PLB{538}{71}{2002}}.} \def\csernai{L.P. Csernai, J.I. Kapusta and L.D. McLerran, {\PRL{97}{152303}{2006}}.} \def\hagedorn{R.
Hagedorn, Nuovo Cim. Suppl. 3, 147 (1965); Nuovo Cim. A 56, 1027 (1968).}
\section{INTRODUCTION} \label{sect:intro} Massive stars play an important role in the evolution of the interstellar medium (ISM) and galaxies; nevertheless, their formation process is still poorly understood from the observational perspective because of their relatively short evolutionary timescales, complex environments, and clustered nature. Two important approaches to improving our knowledge of high-mass molecular cores, which may harbor forming massive stars or mark the sites of future massive star formation, are systematic surveys and multi-wavelength studies of individual sources. Previous surveys of high-mass star-forming regions focused on sources associated with ultra-compact (UC) H\,{\sc ii} regions and their precursors (PUCHs) \citep{chu2002}. These works identified a number of high-mass protostellar objects (HMPOs) \citep{mol1996,mol2002,sri2002,beu2002,wu2006}. However, the identified objects usually have high luminosities ($L_{\rm IR} > 10^{3}\,L_{\odot}$), indicating that most of them do not represent the earliest stage of massive star formation. By comparing millimeter and mid-infrared (MIR) images of fields containing candidate HMPOs, \citet{sri2005} further identified a sample of potential high-mass starless cores (HMSCs), which may be the sites of future massive star formation. However, their MIR identification, based on 8.3\,$\mu$m images from the \textit{Midcourse Space Experiment} (\textit{MSX}), was not sufficient to validate a genuine HMSC, because an accreting protostar may remain undetected up to 8\,$\mu$m \citep{bs2007}. The recently released high-resolution, sensitive MIR and far-infrared (FIR) images obtained by the Galactic survey of the \textit{Spitzer Space Telescope} can be used to verify these HMSC candidates.
At the same time, some works towards individual sources indicated that massive cores at early stages might exist in the vicinity of evolved star-forming sites like UC H\,{\sc ii} or H\,{\sc ii} regions \citep{for2004,gar2004,wu2005}. These works suggest that previously identified evolved sources may harbor objects at various evolutionary phases, including HMSCs, high-mass cores harboring accreting protostars, and HMPOs \citep{beu2007}. One scenario is that the early-stage objects are stimulated by the star-forming activities in evolved regions. Another hypothesis is that they may form with their evolved companions during the fragmentation of parent clouds, but are restrained to give birth to stars in time by some physical supporting mechanisms. The third possible explanation is that the detected objects are just diffuse quiescent gas/dust clumps and will not form stars eventually. Probing the physical properties and circumstances of these objects may help to address the questions above. S87, cataloged as an optical H\,{\sc ii} nebula by \citet{sha1959}, is a complex star-forming region at a distance of $\sim$ 2.3\,kpc \citep{racine1968,cra1978}. It is associated with a bright FIR source IRAS~19442+2427 and has been studied by a number of authors. \citet{chr1986} detected two 22\,GHz water masers in S87. \citet{bar1989} studied it at radio, infrared, and optical wavelengths, suggesting the existence of a biconical outflow. A compact H\,{\sc ii} region was detected in centimeter radio continuum, with an extended emission component \citep{bal1983,bar1989}. Two near-infrared (NIR) clusters were identified by \citet{che2003}, labeled as S87E and S87W. The submillimeter (sub-mm) continuum emission of S87 exhibited an asymmetric spatial configuration \citep{Jen1995,hun2000,mue2002}, which was also confirmed by the molecular line map of CS~\textit{J}\,=\,5\,--\,4 \citep{shi2003}. 
The previous works in ammonia (NH$_{3}$) lines \citep{zin1997,stu1984} revealed two kinematically distinct components, which spatially overlap in the direction of S87E. The recent work of \citet{sai2007} identified several gas clumps in the C$^{18}$O~\textit{J}\,=\,1\,--\,0 map and proposed the hypothesis that S87E was formed by a cloud-cloud collision. All of the works mentioned above suggest that complex spatial and kinematic structures exist in S87, which may harbor objects at different evolutionary phases. The abundant data currently available at various wavelengths provide a good opportunity for a further comprehensive investigation of S87. This may help construct a consistent physical picture of this massive star-forming region and test the previously proposed hypotheses. In this article, we present a multi-wavelength study of S87, mainly based on online archival data, our observations in molecular lines, and two published observations \citep{zin1997,mue2002}. We describe the dataset used and the observational results for S87 in \S\,\ref{sect:obsdata} and \ref{sect:results}. We concentrate on the spectral energy distribution (SED) analysis of the identified sub-mm clumps in \S\,\ref{sect:SED}. In \S\,\ref{sect:discussion}, we mainly discuss the stellar content, the star-forming activity, and the evolutionary stage of each sub-mm clump. The conclusions are summarized in \S\,\ref{sect:conclusion}. \section{DATA AND OBSERVATIONS} \label{sect:obsdata} \subsection{Continuum Data} All of the sub-mm/FIR/MIR continuum maps and images of S87 were obtained from data archives.
The 850 and 450\,$\mu$m sub-mm continuum data were retrieved from the James Clerk Maxwell Telescope\footnote{The James Clerk Maxwell Telescope is operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the United Kingdom, the Netherlands Organisation for Scientific Research, and the National Research Council of Canada.} (JCMT) Science Archive, measured with the Submillimetre Common-User Bolometer Array (SCUBA) \citep{hol1999} installed at JCMT. Two sets of 850 and 450\,$\mu$m maps are available since S87 was observed twice in 2003. One observation was carried out in jiggle map mode on 2003 May 24th (JCMT program ID: M03AN23); the other was performed in Emerson II scan map mode on August 24th (M03BU45). The beamwidths of JCMT were 7\arcsec.5 (450\,$\mu$m) and 14\arcsec (850\,$\mu$m). All of the retrieved data have been fully calibrated with the ORAC-DR pipeline \citep{jen2002} for flat-fielding, extinction correction, sky noise removal, despiking, and removal of bad pixels, in units of mJy beam$^{-1}$. The \textit{Spitzer} MIR/FIR data were retrieved from the \textit{Spitzer} Science Center\footnote{\url{http://ssc.spitzer.caltech.edu}}, including the 3.6, 4.5, 5.8, and 8.0\,$\mu$m images measured with the Infrared Array Camera (IRAC) \citep{faz2004} and the 24 and 70\,$\mu$m images measured with the Multiband Imaging Photometer for \textit{Spitzer} (MIPS) \citep{rie2004}. The IRAC and MIPS data are, respectively, from the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE) \citep{ben2003} and the recently released MIPS Inner Galactic Plane Survey (MIPSGAL) \citep{car2005}. All of them were calibrated by the \textit{Spitzer} Science Center data processing pipelines. In addition, we also retrieved the MIR images and point source catalog (PSC) of \textit{MSX} \citep{ega2003} from the Infrared Processing and Analysis Center (IPAC)\footnote{\url{http://www.ipac.caltech.edu}} for our study. 
\subsection{Spectral Observation} To investigate the molecular gas of S87, we mapped a region of 4\arcmin~$\times$~4\arcmin~ centered on IRAS~19442+2427 in the \textit{J}\,=\,1\,--\,0 transitions of CO, $^{13}$CO, C$^{18}$O and HCO$^+$, with the 13.7\,m millimeter telescope of the Purple Mountain Observatory (PMO) in 2005 January and 2006 May. A cooled SIS receiver was employed, and the system temperature $T_{\rm sys}$ at the zenith was $\sim$ 250\,K (SSB). The backend included three acousto-optical spectrometers, which could measure the \textit{J}\,=\,1\,--\,0 transitions of CO, $^{13}$CO, and C$^{18}$O simultaneously. All the observations were performed in position switch mode. The center reference coordinates are: R.A.\,(J2000)\,=\,19$^{\rm h}$46$^{\rm m}$19$^{\rm s}$.9, Dec.\,(J2000)\,=\,+24\degr35\arcmin24\arcsec. The grid spacings of the CO and HCO$^+$ mapping observations were 60\arcsec and 30\arcsec, respectively. The background positions were checked by single point observations before mapping. The pointing and tracking accuracy was better than 10\arcsec. The obtained spectra were calibrated on the antenna temperature $T_{\rm A}^{*}$ scale during the observations, corrected for atmospheric and ohmic losses by the standard chopper wheel method \citep{ku1981}. Table~\ref{tb1} summarizes the basic information about our observations, including: the transitions, the center rest frequencies $\nu_{\rm rest}$, the half-power beam widths (HPBWs), the bandwidths, the equivalent velocity resolutions ($\Delta V_{\rm res}$), and the typical rms levels of the measured spectra. All of the spectral data were transformed from the $T_{\rm A}^{*}$ to the $T_{\rm mb}$ scale with the main beam efficiencies before analysis. The brightness calibration uncertainty was estimated to be 10\%. The GILDAS\footnote{\url{http://www.iram.fr/IRAMFR/GILDAS}} software package (CLASS/GREG) was used for the data reduction \citep{gl00}. 
\subsection{Other Archival Data} We acquired the 350\,$\mu$m continuum and NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(1,\,1) line maps of S87 through private communications with K. Young and I. Zinchenko. The 350\,$\mu$m map was measured with the Sub-mm High Angular Resolution Camera (SHARC) installed at the Caltech Sub-mm Observatory (CSO) \citep{mue2002}. The NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(1,\,1) line map was obtained with the Effelsberg 100\,m telescope \citep{zin1997}. The technical details are summarized in the corresponding reference articles. \section{RESULTS} \label{sect:results} \subsection{Sub-mm Maps} \label{sect:dec} Fig.~\ref{fig1} displays the 850\,$\mu$m scan map (contours) and the \textit{Spitzer} 8.0\,$\mu$m image (inverse grey-scale), in which the NIR clusters S87E and S87W are revealed as two bright MIR nebulae. The strongest 850\,$\mu$m peak is associated with S87E, and two other peaks exist to its northeast. We propose that these three 850\,$\mu$m peaks are associated with three individual sub-mm clumps. They lie along an axis from southwest to northeast, hereafter labeled SMM\,1, SMM\,2, and SMM\,3. We processed the sub-mm maps using the Richardson-Lucy (RL) iterative deconvolution algorithm to moderately enhance the spatial resolution. Like many other deconvolution methods, this algorithm does not provide uncertainty estimates for its results. We therefore note that this process is not intended to yield the most ``accurate'' deconvolved maps. Our deconvolution steps are similar to those described by \citet{smi2000}. The 850 and 450\,$\mu$m beam patterns of SCUBA were constructed from the Uranus maps measured in 2003 August. A procedure in Starlink/KAPPA\footnote{\url{http://www.jach.hawaii.edu/software/starlink/}} was used to perform the image processing tasks. We avoided the pixels at the edge of the sub-mm maps during the iterations due to their low signal-to-noise level. 
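The Richardson-Lucy iteration itself is a standard algorithm; a minimal one-dimensional sketch is given below. This is an illustration only, assuming a hypothetical Gaussian beam, and is not the Starlink/KAPPA implementation applied to the SCUBA maps.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Richardson-Lucy deconvolution (1-D sketch).

    observed : measured map (non-negative values)
    psf      : beam pattern; normalized internally to unit sum
    """
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    # Start from a flat estimate with the same total flux scale
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        model = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(model, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Illustrative check: a point source blurred by a Gaussian beam
x = np.arange(-20, 21, dtype=float)
beam = np.exp(-x**2 / (2 * 3.0**2))
truth = np.zeros_like(x)
truth[20] = 1.0
blurred = np.convolve(truth, beam / beam.sum(), mode="same")
restored = richardson_lucy(blurred, beam, n_iter=200)
```

After a few hundred iterations the restored profile is substantially sharper than the blurred one while approximately conserving the total flux, which is the sense in which the 850\,$\mu$m maps were moderately enhanced.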
The deconvolutions of the two 850\,$\mu$m maps converged within 100 iterations and produced acceptable enhanced maps without apparent artificial structures. However, for the 450\,$\mu$m maps, the procedure failed to converge within 150 iterations. Fig.~\ref{fig2} displays the deconvolved 850\,$\mu$m maps and the undeconvolved 450\,$\mu$m maps. All of them have been converted into units of mJy arcsec$^{-2}$. SMM\,3 is not covered in the jiggle maps (see Fig.~\ref{fig2}c and Fig.~\ref{fig2}d) due to the limited observational fields of view. SMM\,1 is clearly elongated in the deconvolved 850\,$\mu$m maps, and there are extended lobes to the west and south of its peak. SMM\,2 is slightly elongated in the north-south direction. All three sub-mm clumps are revealed within a common envelope, suggesting that they may be associated, although not necessarily in the same plane of the sky. To evaluate the CO~\textit{J}\,=\,3\,--\,2 contribution to the 850\,$\mu$m data, we examined our previous observation of S87 in CO~\textit{J}\,=\,3\,--\,2 at the K\"{o}ln Observatory for Sub-mm Astronomy (KOSMA) 3\,m telescope. This observation was carried out for a CO multi-line survey of (UC) H\,{\sc ii} regions and has not been published yet (Xue \& Wu~2008, in preparation). After converting the integrated intensity of CO~\textit{J}\,=\,3\,--\,2 to a flux density at 850\,$\mu$m, we found that the contribution of CO~\textit{J}\,=\,3\,--\,2 is less than the noise level of the 850\,$\mu$m maps. We also evaluated the contribution of CO~\textit{J}\,=\,6\,--\,5 to the 450\,$\mu$m data by estimating its integrated intensity from CO~\textit{J}\,=\,3\,--\,2, under the assumption of local thermodynamic equilibrium (LTE) with an excitation temperature of 80\,K. We found that its contribution is also small. Therefore, the effect of line contamination can be ignored at 450 and 850\,$\mu$m. 
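The conversion from a line's integrated intensity to an effective broadband flux density can be sketched as follows. This is a hedged Rayleigh-Jeans estimate; the integrated intensity, beam size, and filter bandwidth in the example are assumed illustrative numbers, not the actual KOSMA/SCUBA values.

```python
import math

def line_contamination_jy(nu_line_hz, int_tmb_k_kms, beam_fwhm_arcsec,
                          band_width_hz):
    """Effective continuum flux density (Jy per beam) contributed by a
    spectral line falling inside a broadband bolometer filter."""
    k_b = 1.380649e-23       # Boltzmann constant, J/K
    c = 2.99792458e8         # speed of light, m/s
    # Solid angle of a Gaussian beam
    fwhm_rad = beam_fwhm_arcsec * math.pi / (180.0 * 3600.0)
    omega = math.pi * fwhm_rad**2 / (4.0 * math.log(2.0))
    # Integrated intensity: K km/s -> K Hz
    int_k_hz = int_tmb_k_kms * 1.0e3 * nu_line_hz / c
    # Rayleigh-Jeans flux, diluted over the filter bandwidth
    s_nu = 2.0 * k_b * nu_line_hz**2 / c**2 * omega * int_k_hz / band_width_hz
    return s_nu / 1.0e-26    # W m^-2 Hz^-1 -> Jy

# Hypothetical example: CO J=3-2 (345.796 GHz) with an assumed integrated
# intensity of 50 K km/s, a 14 arcsec beam, and a 30 GHz effective bandwidth
s_co = line_contamination_jy(345.796e9, 50.0, 14.0, 30.0e9)
```

For these assumed numbers the line mimics only a few tens of mJy beam$^{-1}$ of continuum, illustrating how the CO~\textit{J}\,=\,3\,--\,2 contribution can fall below the noise level of the 850\,$\mu$m maps.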
\subsection{Mid/Far-Infrared Images} \label{sect:mir} A luminous MIR point source is revealed at the position of the compact H\,{\sc ii} region (see the IRAC images of Fig.~\ref{fig1} and Fig.~\ref{fig3}), which we henceforth label MIRS\,1\footnote{The recently released GLIMPSE Spring '05 Catalog names it SSTGLMC~G60.8838-0.1282}. A weaker MIR point source is found in the 3.6, 4.5, and 5.8\,$\mu$m images, $\sim$ 8\arcsec~to the southwest of MIRS\,1. It is also detected by the 2MASS NIR All-Sky Survey and cataloged as 2MASS~19461947+2435247. However, we did not find a NIR counterpart of MIRS\,1 in the 2MASS images, indicating that MIRS\,1 is highly obscured by its surrounding gas/dust envelope at NIR wavelengths. In the zoomed 8\,$\mu$m image of Fig.~\ref{fig1}, two other point sources are found to the north of MIRS\,1, which we label MIRS\,2 and MIRS\,3. Strong diffuse MIR emission exists to the southeast of MIRS\,1, coincident with the extended centimeter emission detected by \citet{bar1989}. Faint diffuse MIR emission is detected in the northeast of the IRAC images, coincident with SMM\,3. SMM\,2 has no MIR counterpart in any of the IRAC bands. S87E and S87W saturate the 24 and 70\,$\mu$m MIPS images. Five other sources are detected in the 24\,$\mu$m band, coincident with the diffuse MIR emission in the IRAC bands. We label them MIRS\,4 to MIRS\,8 (see Fig.~\ref{fig3}). Although the 24 and 70\,$\mu$m images are saturated towards SMM\,1, it is still clear that the peaks of the sub-mm and 24\,$\mu$m emission are offset from each other, which is also confirmed by a comparison with the \textit{MSX} E band (21.3\,$\mu$m) image. SMM\,3 is associated with MIRS\,4; however, its sub-mm peak and MIRS\,4 are also slightly offset. There are only weak emission patches in the IRAC 5.8 and 8.0\,$\mu$m bands towards SMM\,3, indicating that MIRS\,4 is still embedded in its gas/dust cocoon. 
No 24 or 70\,$\mu$m emission is detected towards SMM\,2, suggesting that it may be less evolved. \subsection{Molecular Lines} \label{sect:line} The \textit{J}\,=\,1\,--\,0 transitions of CO, $^{13}$CO, and HCO$^+$ exhibit asymmetric line profiles (see the left panel of Fig.~\ref{fig4}), and two components are detected in C$^{18}$O~\textit{J}\,=\,1\,--\,0 . Since C$^{18}$O~\textit{J}\,=\,1\,--\,0 is usually optically thin, we can rule out the possibility that the asymmetric line profile in the other transitions is caused by the self-absorption in an infall envelope \citep{mye1996,wu2005}. Previous observations of \citet{stu1984} and \citet{zin1997} also detected two separate components in NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(1,\,1) and NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(2,\,2) lines. The NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(1,\,1) spectra of \citet{zin1997} and our C$^{18}$O~\textit{J}\,=\,1\,--\,0 spectra at several positions are plotted in the right panel of Fig.~\ref{fig4}. These spectra further confirm that the broad lines of CO~\textit{J}\,=\,1\,--\,0 , $^{13}$CO~\textit{J}\,=\,1\,--\,0 , and HCO$^{+}$~\textit{J}\,=\,1\,--\,0 consist of two components. We fitted our spectra at the reference position with Gaussian profiles. The results are displayed as the thin lines in Fig.~\ref{fig4}, and the corresponding derived parameters are summarized in Table~\ref{tb2}, including: the line center velocities, the fitted line widths, and the brightness temperatures. We estimated the beam-averaged column densities of C$^{18}$O at the reference position using the standard LTE method. The excitation temperature of each component is assumed to be 35\,K, in agreement with the estimation from CO~\textit{J}\,=\,1\,--\,0 (assuming it is optically thick). The derived C$^{18}$O column densities are $\sim$ 7.8$\times10^{15}$ and 4.0$\times10^{15}$\,cm$^{-2}$ for the components at low and high velocities. 
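The standard LTE estimate referred to above can be sketched as follows. This is a simplified optically thin calculation; the C$^{18}$O molecular constants are approximate literature values, and the integrated intensity in the example is an assumed number rather than a value taken from Table~\ref{tb2}.

```python
import math

K_B = 1.380649e-16    # Boltzmann constant, erg/K
H_P = 6.62607015e-27  # Planck constant, erg s
C_L = 2.99792458e10   # speed of light, cm/s

def j_nu(t, nu):
    """Planck radiation temperature (K) at frequency nu for temperature t."""
    t0 = H_P * nu / K_B
    return t0 / math.expm1(t0 / t)

def n_c18o_thin(int_tmb_kkms, tex, tbg=2.73):
    """Optically thin LTE column density of C18O (cm^-2) from the
    J=1-0 integrated intensity (K km/s).
    Approximate constants: nu = 109.782 GHz, A_10 ~ 6.27e-8 s^-1,
    B ~ 54.891 GHz, g_u = 3, E_u/k ~ 5.27 K."""
    nu, a10, b_rot, g_u, e_u = 109.782e9, 6.27e-8, 54.891e9, 3.0, 5.27
    # Integrated optical depth from T_mb = [J(Tex) - J(Tbg)] * tau
    int_tau = int_tmb_kkms * 1e5 / (j_nu(tex, nu) - j_nu(tbg, nu))  # cm/s
    # Upper-level column density
    n_u = 8 * math.pi * nu**3 / (C_L**3 * a10) \
        * int_tau / math.expm1(H_P * nu / (K_B * tex))
    # Total column via the linear-rotor partition function
    q = K_B * tex / (H_P * b_rot) + 1.0 / 3.0
    return n_u * q / g_u * math.exp(e_u / tex)
```

With $T_{\rm ex}$\,=\,35\,K, an assumed integrated intensity of 4\,K\,km\,s$^{-1}$ yields $\sim$\,8$\times$10$^{15}$\,cm$^{-2}$, the same order as the column densities quoted above.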
Fig.~\ref{fig5}a is the HCO$^{+}$~\textit{J}\,=\,1\,--\,0 position-velocity diagram along the northeast-southwest direction, which also exhibits two components. One is located at the reference position and associated with SMM\,1; the other extends from the reference position to the northeast, coincident with SMM\,2 and SMM\,3. We propose that these components arise from two clouds, hereafter named Cloud I and II, corresponding to the components at low and high velocity, respectively. The integrated intensity maps of HCO$^{+}$~\textit{J}\,=\,1\,--\,0 and NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(1,\,1) are also exhibited in Fig.~\ref{fig5}. Two different integration intervals are adopted, chosen to separate the emission from Cloud I and II. All of the presented intensity maps suggest that SMM\,2, SMM\,3, and the northeast part of SMM\,1 may be associated with Cloud II, while the main part of SMM\,1 comes from Cloud I. \section{SED ANALYSIS} \label{sect:SED} \subsection{Observational SEDs} We extracted the 850 and 450\,$\mu$m flux densities of each clump using a photometric procedure in the Starlink/GAIA software package. The measured results, as well as the positions and sizes of the adopted photometric apertures, are summarized in Table~\ref{tb3}. We note that the uncertainties in Table~\ref{tb3} are only statistical errors (rms deviations derived from clean regions); an estimation of the overall photometric uncertainties is difficult due to the limited information from the online data archives. However, a comparison among different observational modes and a previous similar observation may provide an evaluation of the accuracy of our results. \citet{Jen1995} observed S87 using the receiver UKT14 at JCMT in 1994. They detected two sources, coincident with SMM\,1 and SMM\,2, respectively. 
Our photometric results at 850\,$\mu$m are in good agreement with theirs (see the last two columns of Table~\ref{tb3}), but the 450\,$\mu$m results from SCUBA are systematically larger. Since UKT14 is a single-element bolometer and its measurements may be affected by changes in sky conditions and other factors, we believe that the calibration of the SCUBA data is more reliable. The photometric differences between the jiggle and scan maps are acceptable, less than 20\% at 450\,$\mu$m. We examined the CSO map and the MIPS images to measure the flux densities of each clump at 24, 70, and 350\,$\mu$m. Since SMM\,1 saturates the 24 and 70\,$\mu$m images, only lower limits can be derived at these wavelengths. In addition, we checked the \textit{MSX} PSC and found that the photometric apertures of SMM\,1 and SMM\,3 are coincident with the \textit{MSX} point sources MSX6C~G060.8828-00.1295 and MSX6C~G060.9049-00.1275, respectively. Their flux densities are also adopted to construct the SEDs of SMM\,1 and SMM\,3. The measured flux densities are summarized in Table~\ref{tb4}, extending from sub-mm to MIR wavelengths. The averages of the scan and jiggle map results are adopted at 850 and 450\,$\mu$m, and their differences are taken as the uncertainties. The 350\,$\mu$m uncertainties follow the description of \citet{mue2002}, and the uncertainties at 24 and 70\,$\mu$m are statistical errors. \subsection{Isothermal Dust Model} A simple isothermal gray-body dust model is used to fit the observational SEDs. The details follow the method described by \citet{sch2007}. In the adopted model, the mean molecular weight of interstellar material per hydrogen molecule is $\mu \approx 2.33$. 
The dust opacity (mass absorption coefficient) $\kappa_{\lambda}$ depends on the wavelength and can be described by the equation: \begin{equation} \kappa_{\lambda} = \kappa_{1300} \left(\frac{1300\,\mu\rm m}{\lambda}\right)^{\beta}, \label{eq4} \end{equation} where $\beta$ is the dust opacity index and $\kappa_{1300}$ is the dust opacity at 1300\,$\mu$m. Assuming a gas-to-dust ratio of 100, we adopt $\kappa_{1300} = 0.009\,$cm$^2$\,g$^{-1}$, which is derived from a gas/dust model with thin ice mantles \citep{oh1994}. We used a non-linear least-squares method (the Levenberg-Marquardt algorithm coded within IDL) to obtain the best-fit models for the observational SEDs. The physical properties of each sub-mm clump were obtained, including: the average dust temperature $T_{\rm d}$, the dust opacity index $\beta$, and the aperture-averaged column density of hydrogen molecules $N_{\rm H_{2}}$. In this fitting test, only the data at wavelengths of 70\,$\mu$m and longer were used. We assumed the 70\,$\mu$m flux density of SMM\,1 to be 4000\,Jy, estimated by interpolating the \textit{IRAS} flux densities after subtracting the potential contributions from SMM\,2 and SMM\,3. Fig.~\ref{fig6} displays the best-fit model SEDs for the three clumps. We further calculated their clump masses and bolometric luminosities $L_{\rm SED}$ (by integrating the model SEDs over the range 1\,$\mu$m$<\lambda<2.0$\,mm). All of these derived results are summarized in Table~\ref{tb5}. Additionally, we adopted the Monte Carlo method used by \citet{sch2007} to estimate the errors of the derived parameters that arise from the observational uncertainties. The 3$\sigma$ intervals are denoted as the superscripts and subscripts in Table~\ref{tb5}. The results in Table~\ref{tb5} show that the sub-mm clumps are all massive (110\,---\,220\,$M_{\odot}$). SMM\,1 has a higher dust temperature, and its bolometric luminosity dominates the whole region, implying the existence of strong internal heating source(s). 
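The fitting procedure can be sketched as follows. This is an illustrative recovery test on synthetic, noise-free fluxes in the optically thin limit, with a brute-force grid standing in for the Levenberg-Marquardt algorithm; the aperture solid angle and clump parameters are hypothetical, not the Table~\ref{tb4} photometry.

```python
import numpy as np

H = 6.62607015e-27    # Planck constant, erg s (CGS)
K_B = 1.380649e-16    # Boltzmann constant, erg/K
C = 2.99792458e10     # speed of light, cm/s
MU_MH = 2.33 * 1.6735575e-24  # mean gas mass per H2 molecule, g

def greybody_jy(lam_um, t_d, beta, n_h2, omega_sr, kappa1300=0.009):
    """Optically thin gray-body flux density (Jy) of an isothermal clump."""
    nu = C / (lam_um * 1e-4)
    kappa = kappa1300 * (1300.0 / lam_um) ** beta      # cm^2 per gram of gas
    b_nu = 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K_B * t_d))
    return b_nu * kappa * MU_MH * n_h2 * omega_sr / 1e-23

def fit_greybody(lam_um, flux_jy, omega_sr):
    """Brute-force fit: grid over (T_d, beta); N_H2 is solved analytically
    since the optically thin flux is linear in the column density."""
    best = None
    for t_d in np.arange(10.0, 80.0, 0.25):
        for beta in np.arange(1.0, 2.5, 0.01):
            shape = greybody_jy(lam_um, t_d, beta, 1.0, omega_sr)
            n_h2 = np.dot(shape, flux_jy) / np.dot(shape, shape)
            chi2 = np.sum((flux_jy - n_h2 * shape) ** 2)
            if best is None or chi2 < best[0]:
                best = (chi2, t_d, beta, n_h2)
    return best[1:]

# Recover hypothetical parameters from synthetic, noise-free fluxes
lam = np.array([70.0, 350.0, 450.0, 850.0])
omega = 3.0e-8                                   # sr, hypothetical aperture
flux = greybody_jy(lam, 30.0, 1.5, 1.0e23, omega)
t_fit, beta_fit, n_fit = fit_greybody(lam, flux, omega)
```

The actual fit in this work used weighted Levenberg-Marquardt in IDL; the sketch only shows how $T_{\rm d}$, $\beta$, and $N_{\rm H_2}$ are jointly constrained by four-band photometry.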
The fitted dust opacity indices of the three sub-mm clumps are slightly different ($\sim$ 1.3\,---\,1.8) and consistent with the typical values between 1 and 2 \citep{hil2006}. It must be noted that the derived $N_{\rm H_2}$ is directly affected by the adopted value of $\kappa_{1300}$. If we reduce $\kappa_{1300}$ by a factor of 2, $N_{\rm H_2}$ and the derived clump mass $M$ will increase by a factor of 2. However, the other derived parameters will not be affected by this change. \subsection{Two-temperature Dust Model} In Fig.~\ref{fig6}, the best-fit models of SMM\,1 and SMM\,3 fail to describe the observational results below 70\,$\mu$m, whereas the model SED of SMM\,2 can explain the absence of its MIR emission. To better characterize the excess MIR emission of SMM\,1 and SMM\,3, we performed another SED fitting test using a model with two dust components at different temperatures. In this fitting test, we adopted the observational data at wavelengths of 14.7\,$\mu$m and longer (excluding 24\,$\mu$m for SMM\,1). To reduce the number of fitting parameters, we fixed $\beta$ at 1.5 and 1.3 for both dust components of SMM\,1 and SMM\,3, respectively. The best-fit model SEDs are exhibited in Fig.~\ref{fig7}, and the derived parameters of the warm and cool dust components are listed in Table~\ref{tb6}. The two-temperature model fits the observational data very well above 12\,$\mu$m, which is consistent with the physical picture that there is warm dust around the internal heating sources and there are relatively cool dust envelopes surrounding the star-forming sites in SMM\,1 and SMM\,3. Although the \textit{IRAS} 100\,$\mu$m flux density exceeds the model SED of SMM\,1 (see Fig.~\ref{fig7}), we believe that this deviation is due to the large beam of \textit{IRAS}. The results in Fig.~\ref{fig7} and Table~\ref{tb6} show that the warm components contribute little to the total masses and the flux densities at sub-mm wavelengths, but are required to explain the excess at MIR wavelengths. 
We tried to modify $\beta$ to fit the emission below 12\,$\mu$m; however, no satisfactory results were found. The emission in the \textit{MSX} A and C bands does not follow the prediction of gray-body models, suggesting these models are invalid at these wavelengths. Generally, two significant spectral features may exist in this MIR wavelength range. One is the emission of polycyclic aromatic hydrocarbons (PAHs), which is often detected towards compact H\,{\sc ii} regions and photodissociation regions (PDRs). Previous studies have shown that the \textit{MSX} A and C bands often contain PAH emission lines \citep{gho2002,kra2003,pov2007}. The other is the silicate feature, which has been predicted in the dust model of \citet{oh1994} and demonstrated to be important in recent sophisticated SED models \citep{rob2006,rob2007}. This feature is expected in absorption at 9.7\,$\mu$m towards some UC H\,{\sc ii} regions, produced by the dust cocoons around the central objects \citep{fai1998}. \citet{pee2002} identified both PAH and silicate features towards S87E in a previous \textit{ISO} spectroscopic observation. The silicate absorption feature may be caused by Cloud II, which partly overlaps the compact H\,{\sc ii} region along the line of sight. Since our gray-body models are intended to evaluate the overall properties of the dust clumps, which are mainly constrained by the thermal emission at longer wavelengths, a detailed SED model explaining the PAH and silicate features is beyond the scope of this work. \section{DISCUSSION} \label{sect:discussion} \subsection{Cloud-Cloud Collision} \label{sect:cc} All of the FIR/sub-mm images and molecular line maps exhibit the complex spatial and kinematic structures of S87. The recently published high-resolution C$^{18}$O~\textit{J}\,=\,1\,--\,0 observation \citep{sai2007} revealed several gas clumps at different velocities, which further confirms our identification of Cloud I and II. However, are Cloud I and II really related to each other? 
\citet{sai2007} proposed that the gas clumps at higher velocity might be on the near side along the line of sight because \citet{che2003} detected many reddened NIR sources there. They further reasoned that the clumps at low and high velocities were approaching each other and that the NIR cluster S87E was possibly formed by a cloud-cloud collision. In the following, we test the cloud-cloud collision model by a multi-wavelength comparison. Firstly, the 8.0\,$\mu$m emission shows a sharp edge to the northeast of MIRS\,1 (see Fig.~\ref{fig1}). This feature is probably caused by the large extinction at 8.0\,$\mu$m because the sub-mm emission there is still strong. The position of the extinction patch is consistent with that of Cloud II, confirming that Cloud II is on the near side along the line of sight. Therefore, Cloud I and II are approaching each other. Next, we can infer from the intensity maps of Fig.~\ref{fig5} that the peak of SMM\,1 and S87E are in the overlapping region of Cloud I and II. The MIR point sources in SMM\,1 suggest that it hosts not only an already formed NIR cluster but also ongoing star-forming activities. This strong and continuous star formation is more naturally explained by the stimulation of a cloud-cloud collision than by the spontaneous evolution of the molecular clouds alone. Furthermore, the spatial configuration of the compact H\,{\sc ii} region and the associated extended centimeter emission \citep{bar1989,kur1994} also supports the cloud-cloud collision model. The champagne flow model of \citet{kim2001}, combined with the clumpy structure of molecular clouds, can explain the extended centimeter component that stretches to the southeast of the compact H\,{\sc ii} region. If Cloud I and II are in contact, their contact plane will be along the northwest-southeast direction (see Fig.~\ref{fig5}b and Fig.~\ref{fig5}c). 
Consequently, the compact H\,{\sc ii} region will be better confined in the direction perpendicular to the contact plane, and the champagne flow should escape more easily in the southeast direction. If Cloud I and II are not in contact, the champagne flow will be more likely to splash out in the direction perpendicular to the border of the parent cloud of the compact H\,{\sc ii} region. The observational result is consistent with the prediction of the first scenario, supporting the picture that the two clouds are colliding. Assuming that the two clouds have typical sizes $R$ $\sim$ 1\,pc and a velocity separation $\delta v$ $\sim$\,2\,km\,s$^{-1}$, the collision duration is at least $R/\delta v \sim 5\times10^{5}$\,yr, comparable with the timescale for forming a compact H\,{\sc ii} region. Cloud-cloud collision is considered an efficient mechanism to trigger star formation. It may compress molecular gas and lead to local gravitational collapse \citep{lor1976,hab1992,mar2001}. However, its probability is low in diffuse molecular clouds \citep{elm1998}. Additionally, high-velocity off-axis collisions could be destructive rather than lead to gravitational instabilities \citep{hau1981,gil1984}. Therefore, the fraction of star formation triggered by cloud-cloud collisions may be small in our Galaxy. All of the current evidence demonstrates that S87 is a new example of a cloud-cloud collision, and similar samples are still limited \citep{lor1976,dic1978,koo1994,val1995,buc1996,sat2000,loo2006}. \subsection{Molecular Line Emission} \subsubsection{HCO$^{+}$~\textit{J}\,=\,1\,--\,0 } Our observation shows that the line profile of HCO$^{+}$~\textit{J}\,=\,1\,--\,0 is similar to that of CO~\textit{J}\,=\,1\,--\,0 (see the left panel of Fig.~\ref{fig4}). 
Additionally, we found that both the CO~\textit{J}\,=\,1\,--\,0 and HCO$^{+}$~\textit{J}\,=\,1\,--\,0 spectra show weak features of high-velocity (HV) gas when compared with C$^{18}$O~\textit{J}\,=\,1\,--\,0, suggesting that HCO$^{+}$ extends into diffuse gas rather than simply concentrating in the dense parts of the gas clumps. Previous observational and theoretical works have pointed out the abundance enhancement of HCO$^{+}$ in diffuse or shocked gas \citep{tur1995,gir1999}, which can explain our finding. However, the formation mechanism of the HV gas in S87 is unclear. We propose three different explanations: (i) the HV gas may arise from stellar outflows; (ii) it may be contributed by the high-pressure shocked material that is ejected when the clouds collide; (iii) or it is from the non-impacting portions of the colliding clouds, since they do not slow down to a common speed during the cloud-cloud collision. Although \citet{bar1989} identified HV blue and red wings in her CO~\textit{J}\,=\,1\,--\,0 observation and proposed that the HV gas resulted from a biconical outflow with a wide opening angle viewed at large inclination, our identification of two individual clouds argues against this model. High-resolution, sensitive observations are required to clarify the origin of the HV gas. Because both stellar outflows and cloud-cloud collisions can produce HV gas and broad non-Gaussian line profiles, it is possible that some observational results previously interpreted as bipolar outflows are actually caused by cloud-cloud collisions. However, since the probability of cloud-cloud collisions is low, similar cases like S87 should be rare. \subsubsection{NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(1,\,1) } A feature of the NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(1,\,1) intensity maps is that the NH$_{3}$ emission tends to ``evade'' the luminous MIR sources. 
The NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(1,\,1) peak of Cloud I is separate from MIRS\,1 and the sub-mm peak of SMM\,1. The NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(1,\,1) emission is absent to the southeast of MIRS\,1, where the diffuse MIR emission is strong. The NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(1,\,1) peak of Cloud II is coincident with SMM\,2, which has no MIR counterpart. In contrast, the observations of \citet{sai2007} and \citet{shi2003} showed that the C$^{18}$O~\textit{J}\,=\,1\,--\,0 and CS~\textit{J}\,=\,5\,--\,4 emission is strong in SMM\,1 and SMM\,3. Since both SMM\,1 and SMM\,3 are dense clumps identified from the sub-mm continuum, their relatively weak NH$_{3}$ emission may be explained by an underabundance of NH$_{3}$. \citet{tur1995n} suggested that NH$_{3}$ could be destroyed by the C$^{+}$ that dominates in PDRs. However, molecules like C$^{18}$O are formed via C$^+$ and are not affected by this photo-destruction process \citep{jan1995}. The diffuse 5.8 and 8.0\,$\mu$m emission near SMM\,1 and SMM\,3 is usually contributed by PAHs and interpreted as a tracer of PDRs. The existence of MIR emission there, as well as the strong C$^{18}$O~\textit{J}\,=\,1\,--\,0 emission and the weak NH$_{3}$ emission, is consistent with the prediction of the chemical process proposed in previous works. \subsubsection{Virial States} The line widths of molecular spectra are usually used to probe the kinematics of gas clumps. Since Cloud I and II can be well resolved in the NH$_{3}$ lines, we estimate the virial masses of these two clouds in this section. We derived the line widths and brightness temperatures of NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(1,\,1) at the NH$_{3}$ peaks of Cloud I and II, using the hyperfine structure fitting procedure of GILDAS/CLASS. The results are exhibited as thin lines in Fig.~\ref{fig4}. 
The angular diameters $\theta_{\rm obs}$ of Cloud I and II are calculated using the equation: \begin{equation} \theta_{\rm obs} = 2\sqrt{\frac{\Omega}{\pi}}, \end{equation} where $\Omega$ is the measured angular area of each cloud. We then corrected for the beam and estimated the intrinsic sizes of Cloud I and II following the equation: \begin{equation} R = D\frac{\sqrt{\theta_{\rm obs}^2-\theta_{\rm mb}^2}}{2}, \end{equation} where $R$ is the radius of the gas cloud in pc, $D$ is the distance of S87, and $\theta_{\rm mb}$ is the beamwidth of the NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(1,\,1) observation. Assuming that Cloud I and II are spherically symmetric gas clouds with a density distribution $\rho \propto r^{-\alpha}$ ($\alpha=1.5$) and neglecting the contributions from magnetic fields and surface pressure, the virial masses can be derived using the equation \citep{mac1988}: \begin{equation} M_{\rm vir} = 126\left(\frac{5-2\alpha}{3-\alpha}\right) R \Delta V_{\rm FWHM}^{2}, \end{equation} where $M_{\rm vir}$ is the virial mass in $M_{\odot}$ and $\Delta V_{\rm FWHM}$ is the full width at half-maximum (FWHM) of NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(1,\,1) in km~s$^{-1}$. All the measured and derived parameters of Cloud I and II are listed in Table~\ref{tb7}, including: the positions of the NH$_{3}$ peaks, the angular and intrinsic sizes, the line widths at the NH$_{3}$ peaks, and the derived virial masses of the two clouds. The total virial mass of Cloud I and II is $\sim$\,430\,$M_{\odot}$, much smaller than the previous estimate ($\sim$\,1080\,$M_{\odot}$) obtained with the total line width of the two components \citep{zin1997}, but comparable with the mass estimated from the SED fitting ($\sim$\,460\,$M_{\odot}$, from the isothermal dust model). However, we note that the above comparison of masses estimated from different approaches can be affected by the adopted dust opacity and the assumption of $\alpha$. 
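The virial mass relation is straightforward to evaluate numerically; in the sketch below the cloud radius and line width are illustrative values, not the measurements listed in Table~\ref{tb7}.

```python
def virial_mass(radius_pc, dv_fwhm_kms, alpha=1.5):
    """Virial mass (in M_sun) of a sphere with density rho ~ r^-alpha,
    neglecting magnetic fields and surface pressure (MacLaren et al. 1988):
        M_vir = 126 (5 - 2 alpha) / (3 - alpha) * R * dV_FWHM^2
    with R in pc and dV_FWHM in km/s."""
    return 126.0 * (5.0 - 2.0 * alpha) / (3.0 - alpha) \
        * radius_pc * dv_fwhm_kms**2

# For alpha = 1.5 the prefactor is 126 * 2 / 1.5 = 168, so an
# illustrative cloud with R = 0.6 pc and dV = 1.5 km/s gives ~230 M_sun
m_example = virial_mass(0.6, 1.5)
```

Two such illustrative clouds would total $\sim$\,450\,$M_{\odot}$, the same order as the $\sim$\,430\,$M_{\odot}$ derived for Cloud I and II.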
Although a variation of $\alpha$ is not likely to cause much change in the virial masses, the dust opacity may change by at least a factor of 2 \citep{oh1994}, which leads to a large uncertainty in the masses estimated from SEDs. \subsection{Stellar Contents of SMM\,1 and SMM\,3} \label{sect:smm} The position of MIRS\,1 is consistent with that of the compact H\,{\sc ii} region within the astrometric error (1.5\arcsec), indicating that MIRS\,1 is the exciting massive (proto)star. We examined the high-resolution centimeter map of \citet{bar1989} and found that neither MIRS\,2 nor MIRS\,3 shows compact radio continuum emission. A possible explanation is that MIRS\,2 and MIRS\,3 are less evolved than MIRS\,1, or that they are not massive enough to ionize their surroundings and excite compact H\,{\sc ii} regions. \citet{chr1986} detected a strong water maser in SMM\,1; such masers are often considered to be associated with HMPOs. The velocity range of this water maser is 21\,---\,25\,km\,s$^{-1}$, in good agreement with the systemic velocity of the molecular clouds. All the evidence mentioned above supports the idea that SMM\,1 is a high-mass star-forming site that harbors forming massive stars or a cluster. The Lyman continuum radiation from massive stars ionizes the surrounding gas, whose free-free emission can be used to infer the ionizing photon flux. \citet{kur1994} estimated that the Lyman continuum photon flux required to keep the entire region of S87E ionized was $3.2\times10^{46}$\,photons\,s$^{-1}$, which corresponds to that of a B0.5 ZAMS star. The bolometric luminosity of such a star is $\sim$ $3.0\times10^{4}$\,$L_{\odot}$ \citep{cro2005}, slightly smaller than that of SMM\,1 ($\sim$ $3.7\times10^{4}$\,$L_{\odot}$, from the two-component model). The extra luminosity of SMM\,1 may come from the relatively weak MIR sources near MIRS\,1, which cannot be traced by the free-free emission. SMM\,3 contains the bright 24\,$\mu$m source MIRS\,4. 
The ratio of the luminosities from its cool and warm components is $\sim$\,2.1, lower than that of SMM\,1. Its bolometric luminosity is $\sim$\,740\,$L_{\odot}$, also lower than that of SMM\,1, which indicates that SMM\,3 is more likely an intermediate-mass star-forming site. \subsection{Physical Properties of SMM\,2} \label{sect:com} No MIR point source or diffuse emission below 70\,$\mu$m is detected towards SMM\,2. Since a cold dust component alone can describe its observed SED, strong internal heating sources are not likely to exist in SMM\,2. \citet{chr1986} detected a weak 22\,GHz water maser near SMM\,2; such masers usually arise from the dense circumstellar disks around protostars \citep{par2007} or originate in outflows from the birth of a massive star \citep{van1998}. Since this water maser lies in the velocity range 8\,--\,15\,km\,s$^{-1}$, significantly different from that of the molecular clouds, we favor the second explanation for its origin. We note that this water maser is located on a sub-mm emission ridge connecting the peaks of SMM\,1 and SMM\,2 rather than near the peak of SMM\,2, and that its position uncertainty is large compared with that of the sub-mm observations. Therefore, we doubt that this weak maser originates within SMM\,2 itself. For instance, potential outflows from massive protostars in SMM\,1 may shock the ambient molecular gas of SMM\,2 and produce a weak water maser at the rear side of SMM\,2. This scenario is consistent with the lower velocity of the water maser. Therefore, we believe that the existence of this water maser does not necessarily contradict the physical properties of SMM\,2 derived from the SED and MIR image analyses. Based on the information available, we suggest that SMM\,2 is probably an HMSC that may eventually form massive stars or an intermediate-mass star cluster. \section{CONCLUSIONS} \label{sect:conclusion} We have carried out a multi-wavelength study of the massive star-forming region S87.
The main results are summarized as follows. 1.~We identified three sub-mm clumps in S87, labeled SMM\,1, SMM\,2, and SMM\,3. They are estimated to have masses of 210, 140, and 110\,$M_{\odot}$, with average dust temperatures of 41, 21, and 24\,K, respectively (from the isothermal gray-body model). 2.~We examined molecular line maps from our observations and compared them with previous results of other authors. We conclude that the star-forming activities in SMM\,1 are stimulated by a cloud-cloud collision. 3.~We found that HCO$^{+}$ can trace diffuse gas and that NH$_{3}$ may be destroyed by chemical processes in regions harboring MIR sources or exhibiting strong diffuse MIR emission. 4.~We calculated the virial masses of the two colliding clouds, which are in good agreement with those estimated from SEDs. 5.~The stellar contents and star-forming activities of the sub-mm clumps are identified. Their SEDs reveal that these clumps are at various evolutionary stages. SMM\,1 and SMM\,3 are high-mass and intermediate-mass star-forming regions, respectively. SMM\,2 is massive and cold and has no MIR counterpart; it is probably an HMSC. All of these results show that star formation in S87 is proceeding in multiple phases. \acknowledgments We are grateful to K. Young and I. Zinchenko for sharing the 350\,$\mu$m and NH$_{3}$~(\textit{J},\,\textit{K})\,=\,(1,\,1) maps. We would like to thank the staff at the Qinghai Station of PMO for their assistance during the observations and Weilai Gu for her help in obtaining the HCO$^{+}$~\textit{J}\,=\,1\,--\,0 data. We acknowledge the anonymous referee for his/her careful reading and helpful suggestions. Qifeng Yin and Sophia Day are also thanked for their help with the manuscript. This research was funded by Grants 10733030 and 10128306 of the NSFC.
It employed the facilities of the Canadian Astronomy Data Centre, operated by the National Research Council of Canada with the support of the Canadian Space Agency, and it is also partly based on observations made with the \textit{Spitzer Space Telescope}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. {\it Facilities:} \facility{JCMT}, \facility{Spitzer}, \facility{MSX}
\section{Introduction} \label{sect-1} The problem of the variability of fundamental physical constants has a long history, starting 70 years ago with the publications by \citet{Mil35} and \citet{Dir37}. Reviews of its current status are given in \cite{Uza03,GIK07}. Recent achievements in laboratory studies of the time variation of fundamental constants are described, for example, in Refs.~\cite{Lea07,FK07c}. The variability of the dimensionless physical constants is usually considered in the framework of theories of fundamental interactions such as string and M theories, Kaluza-Klein theories, quintessence theories, etc. In turn, experimental physics and observational astrophysics offer possibilities to probe directly the temporal changes in the physical constants both locally and at early cosmological epochs comparable with the total age of the Universe ($T_{\rm U} = 13.8$ Gyr for the $H_0 = 70$ km~s$^{-1}\,$ Mpc$^{-1}$, $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$ cosmology). Here we discuss a possibility of using the ground state fine-structure (FS) transitions in atoms and ions to probe the variability of $\alpha$ at high redshifts, up to $z \sim 10$ ($\sim96$\% of $T_{\rm U}$). The constants which can be probed from astronomical spectra are the proton-to-electron mass ratio, $\mu = m_{\rm p}/m_{\rm e}$, the fine-structure constant, $\alpha = e^2/(\hbar c)$, and different combinations of $\mu$, $\alpha$, and the proton gyromagnetic ratio $g_{\rm p}$. The data reported in the literature concerning the relative values of $\Delta\mu/\mu$\,\ and $\Delta\alpha/\alpha$\,\ at $z\sim$~1--3 are controversial at the level of a few ppm (1 ppm = $10^{-6}$): $\Delta\mu/\mu$\, = $24\pm6$ ppm \cite{RBH06} versus $0.6\pm1.9$ ppm \cite{FK07a}, and $\Delta\alpha/\alpha$\, = $-5.7\pm1.1$ ppm \cite{MWF03} versus $-0.6\pm0.6$ ppm \cite{SCP04}, $-0.4\pm1.9$ ppm \cite{QRL04}, and $5.4\pm2.5$ ppm \cite{LML07}. Such a spread points unambiguously to the presence of unaccounted-for systematic errors.
Some of the possible problems were studied in \cite{MRA07,MLM07,MWF07,SCP07}, but the revealed systematic errors cannot explain the full range of the observed discrepancies between the $\Delta\alpha/\alpha$\,\ and $\Delta\mu/\mu$\,\ values. We can state, however, that a conservative upper limit on the hypothetical variability of these constants is $10^{-5}$. Astronomical estimates of the dimensionless physical constants are based on the comparison of the line centers in the absorption/emission spectra of astronomical objects and the corresponding laboratory values. In practice, in order to disentangle the line shifts caused by the motion of the object and by the putative effect of the variability of constants, lines with different sensitivities to the constant variations should be employed. However, if different elements are involved in the analysis, an additional source of errors due to the so-called Doppler noise arises. The Doppler noise is caused by non-identical spatial distributions of different species. It introduces offsets which can either mimic or obliterate a real signal. The evaluation of the Doppler noise is a serious problem \cite{Lev94,L04,BSS04,Car00,KCL05,LRK07}. For this reason lines of a single element arising exactly from the same atomic or molecular level are desired. This would provide reliable astronomical constraints on variations of physical constants. In the present communication we propose to use the mid- and far-infrared FS transitions within the ground multiplets $^3\!P_J$, $^5\!D_J$, $^6\!D_J$, $^3\!F_J$ and $^4\!F_J$ of some of the most abundant atoms and ions, such as Si~\textsc{i}, S~\textsc{i}, Ti~\textsc{i}, Fe~\textsc{i}, Fe~\textsc{ii}, S~\textsc{iii}, Ar~\textsc{iii}, Fe~\textsc{iii}, Mg~\textsc{v}, Ca~\textsc{v}, Na~\textsc{vi}, Fe~\textsc{vi}, Mg~\textsc{vii}, Si~\textsc{vii}, Ca~\textsc{vii}, Fe~\textsc{vii}, and Si~\textsc{ix} for constraining the variability of $\alpha$. This approach has the following advantages. 
Most important, each element provides two or more FS lines which can be used independently~--- this considerably reduces the Doppler noise. The mid- and far-infrared FS transitions are typically more sensitive to a change of $\alpha$ than optical lines. For high redshifts ($z > 2$), the far-infrared (FIR) lines are shifted into the sub-mm range. The receivers at sub-mm wavelengths are of the heterodyne type, which means that the frequency scale is fixed with a very high stability ($\sim 10^{-12}$). Moreover, FIR lines can be observed at early cosmological epochs ($z \gtrsim 10$) which are far beyond the range accessible to optical observations ($z\lesssim 4$). \section{Astronomically observed FS transitions}\label{SecAstro} The ground state FS transitions in the mid- and far-infrared are observed in emission in dense and cold interstellar molecular gas clouds, in the diffuse ionized gas of star-forming H~\textsc{ii} regions and the `coronal' gas of active galactic nuclei (AGNs), and in the warm gas envelopes of protostellar objects. Cold molecular gas clouds have been observed not only in our Galaxy, but also in numerous galaxies with redshifts $z > 1$ up to $z = 6.42$ \cite{MCC05}, often around powerful quasars and radio galaxies \cite{Omo07}. Recently the C~\textsc{ii} 158 $\mu$m line and CO low rotational lines were used to set a limit on the variation of the product $\mu\alpha^2$ at $z = 4.69$ and 6.42 \cite{LRK07}. The FIR transitions in C\,{\sc i} (370, 609 $\mu$m) were detected at $z=2.557$ towards H1413+117 \cite{WHD03,WDH05}. Four other observations of the C\,{\sc i} 609 $\mu$m line were reported at $z = 4.120$ (PSS 2322+1944) \cite{PBC04}, at $z = 2.285$ (IRAS F10214+4724) and $z=2.565$ (SMM J14011+0252) \cite{WDH05}, and at $z = 3.913$ (APM 08279+5255) \cite{WWN06}.
In our Galaxy the most luminous protostellar objects are seen in the O~\textsc{i} lines $\lambda\lambda63, 146$ $\mu$m\, \cite{CHT96} and in the FIR lines of the intermediate ionized atoms O~\textsc{iii}, N~\textsc{iii}, N~\textsc{ii}, and C~\textsc{ii}, photoionized by the stellar continuum \cite{BP03}. The lines of N~\textsc{ii} (122, 205 $\mu$m), S~\textsc{iii} (19, 34 $\mu$m), Fe~\textsc{iii} (23 $\mu$m), Si~\textsc{ii} (35 $\mu$m), Ne~\textsc{iii} (36 $\mu$m), O~\textsc{iii} (52, 88 $\mu$m), N~\textsc{iii} (57 $\mu$m), O~\textsc{i} (63, 146 $\mu$m), and C~\textsc{ii} (158 $\mu$m) \cite{GDG77, MSF80, CHE93}, as well as Ne~\textsc{ii} (13 $\mu$m), S~\textsc{iv} (11 $\mu$m), and Ar~\textsc{iii} (9 $\mu$m) \cite{GWD81}, have been observed in the highly obscured ($A_v \simeq 21$ mag) massive star-forming region G333.6--0.2. The FS transitions of N~\textsc{iii}, O~\textsc{iii}, Ne~\textsc{iii}, S~\textsc{iii}, Si~\textsc{ii}, O~\textsc{i}, C~\textsc{ii}, and N~\textsc{ii} are detected in numerous Galactic H~\textsc{ii} regions \cite{SCR95, SCC97, SRC04}. Compact and ultracompact H~\textsc{ii} regions are the sources of the FS lines of S~\textsc{iii}, O~\textsc{iii}, N~\textsc{iii}, Ne~\textsc{ii}, Ar~\textsc{iii}, and S~\textsc{iv}\, \cite{ACW97, OKY03}. Giant molecular clouds in the Orion Kleinmann-Low cluster \cite{LBS06}, the Sgr~B2 complex \cite{GRC03, PBS07}, the $\rho$~Oph and $\sigma$~Sco star-forming regions \cite{OON06}, and the Carina Nebula \cite{MOS04, OPS06} emit the FIR lines of O~\textsc{i}, N~\textsc{ii}, C~\textsc{ii}, Si~\textsc{ii}, O~\textsc{iii}, and N~\textsc{iii}.
Ions with low excitation potential $E_{\rm ex} < 50$ eV (N~\textsc{ii}, Fe~\textsc{ii}, S~\textsc{iii}, Ar~\textsc{iii}, Fe~\textsc{iii}) as well as ions with high excitation potential 50 eV $ < E_{\rm ex} \leq 351$ eV (O~\textsc{iii}, Ne~\textsc{iii}, Ne~\textsc{v}, Mg~\textsc{v}, Ca~\textsc{v}, Na~\textsc{vi}, Mg~\textsc{vii}, Si~\textsc{vii}, Ca~\textsc{vii}, Fe~\textsc{vii}, Si~\textsc{ix}) are effectively produced by hard ionizing radiation and ionizing shocks in the gas surrounding active galactic nuclei. The FS emission lines of these ions have been detected with the {\it Infrared Space Observatory (ISO)} and the {\it Spitzer Space Telescope (Spitzer)} in Seyfert galaxies, 3C radio sources and quasars, and in ultraluminous infrared galaxies in the redshift interval from $z \sim 0.01$ up to $z = 0.994$ \cite{GC00,SLV02,DSA06,DWS07,SVH07,GCWL07}. The infrared FS lines of the neutral atoms Si~\textsc{i}, S~\textsc{i}, and Fe~\textsc{i} have not yet been detected in astronomical objects, but these atoms were observed in resonance ultraviolet lines in two damped Ly$\alpha$ systems at $z = 0.452$ \cite{VD07} and $z = 1.15$ \cite{QRB08} toward the quasars HE 0000--2340 and HE 0515--4414, respectively. The FIR lines are expected to be observed in extragalactic objects with a new generation of telescopes such as the Stratospheric Observatory for Infrared Astronomy (SOFIA), the Herschel Space Observatory (originally called `FIRST', for `Far InfraRed and Submillimeter Telescope'), and the Atacama Large Millimeter Array (ALMA), which will open a new opportunity of probing the relative values of the fundamental physical constants with an extremely high accuracy ($\delta \sim 10^{-7}$) locally and at different cosmological epochs. \section{Estimate of the sensitivity coefficients} \label{estimate} In the nonrelativistic limit and for an infinitely heavy point-like nucleus all atomic transition frequencies are proportional to the Rydberg constant, $\mathcal{R}$.
In this approximation, the ratio of any two atomic frequencies does not depend on any fundamental constants. Relativistic effects cause corrections to atomic energy, which can be expanded in powers of $\alpha^2$ and $(\alpha Z)^2$, the leading term being $(\alpha Z)^2\mathcal{R}$, where $Z$ is atomic number. Corrections accounting for the finite nuclear mass are proportional to $\mathcal{R}/(\mu Z)$, but for atoms they are much smaller than relativistic corrections. The finite nuclear mass effects form the basis for the molecular constraints to the $m_{\rm p}/m_{\rm e}$ mass ratio variation \cite{RBH06,FK07a,T75,P77,VL93,PIV98,MSIV06}. Consider the dependence of an atomic frequency $\omega$ on $\alpha$ in the co-moving reference frame: \begin{align}\label{qfactor1} \omega_z = \omega + q x + \dots, \quad x \equiv \left({\alpha_z}/{\alpha}\right)^2 - 1\, . \end{align} Here $\omega$ and $\omega_z$ are the frequencies corresponding to the present-day value of $\alpha$ and to a change $\alpha \rightarrow \alpha_z$ at a redshift $z$. The parameter $q$ (so-called $q$-factor) is individual for each atomic transition \cite{DFK02}. If $\alpha$ is not a constant, the parameter $x$ differs from zero and the corresponding frequency shift, $\Delta\omega = \omega_z - \omega$, is given by: \begin{align}\label{qfactor2} {\Delta\omega}/{\omega} = 2\mathcal{Q}\,({\Delta\alpha}/{\alpha})\,, \end{align} where ${\cal Q} = q/\omega$ is the dimensionless sensitivity coefficient, and $\Delta\alpha/\alpha \equiv (\alpha_z - \alpha)/\alpha$. Here we assume that $|\Delta\alpha/\alpha| \ll 1$. If such a frequency shift takes place for a distant object observed at a redshift $z$, then an apparent change in the redshift, $\Delta z = \tilde{z} - z$, occurs: \begin{align}\label{qfactor3} {\Delta\omega}/{\omega} = -\Delta z/(1+z) \equiv {\Delta v}/{c}\, , \end{align} where $\Delta v$ is the Doppler radial velocity shift. 
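The practical meaning of Eqs.~\eqref{qfactor2} and \eqref{qfactor3} is that a fractional change of $\alpha$ masquerades as a radial-velocity offset $|\Delta v| = 2\mathcal{Q}\,c\,|\Delta\alpha/\alpha|$ for a line of sensitivity $\mathcal{Q}$. A minimal numerical sketch (the 1 ppm figure is purely illustrative):

```python
C_KMS = 2.99792458e5  # speed of light in km/s

def velocity_offset_kms(dalpha_over_alpha, Q):
    """Apparent radial-velocity offset mimicking a shift of alpha:
    |dv| = 2*Q*c*|dalpha/alpha|, from dw/w = 2*Q*(dalpha/alpha) = dv/c."""
    return abs(2.0 * Q * dalpha_over_alpha) * C_KMS

# A 1 ppm change of alpha, seen through a FS line with Q ~ 1:
dv = velocity_offset_kms(1.0e-6, 1.0)   # ~ 0.6 km/s
```

For comparison, an optical line with $\mathcal{Q} \sim 0.03$ maps the same 1 ppm change onto only $\sim$18 m~s$^{-1}$, which is why transitions with $\mathcal{Q} \approx 1$ are attractive.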
If $\omega'$ is the observed frequency from a distant object, then the true redshift is given by \begin{align}\label{zfactor1} 1+z = \omega_z/\omega' \, , \end{align} whereas the shifted (apparent) value is \begin{align}\label{zfactor2} 1+\tilde{z} = \omega/\omega' \, . \end{align} If we have two lines of the same element with the apparent redshifts $\tilde{z}_1$ and $\tilde{z}_2$ and the corresponding sensitivity coefficients ${\cal Q}_1$ and ${\cal Q}_2$, then \begin{align}\label{zfactor3} 2\Delta {\cal Q}(\Delta\alpha/\alpha) = (\tilde{z}_1 - \tilde{z}_2)/(1 + z) = \Delta v /c\, , \end{align} where $\Delta v = v_1 - v_2$ is the difference of the measured radial velocities of these lines, and $\Delta {\cal Q} = {\cal Q}_2 - {\cal Q}_1$. Relativistic corrections grow with atomic number $Z$, but for optical and UV transitions in light atoms they are small, i.e. $\mathcal{Q} \sim (\alpha Z)^2\ll 1$. For example, Fe~\textsc{ii} lines have sensitivities $\mathcal{Q} \sim 0.03$ \citep{PKT07}. Other atomic transitions used in astrophysical searches for $\alpha$-variation have even smaller sensitivities. The only exceptions are the Zn~\textsc{ii} $\lambda 2026$ \AA\ line, where $\mathcal{Q} \approx 0.050$ \citep{DFK02}, and the Fe~\textsc{i} resonance transitions considered in \citep{DF08}, where $\mathcal{Q}$ ranges between 0.03 and 0.09. One can significantly increase the sensitivity to $\alpha$-variation by using transitions between FS levels of one multiplet \cite{DF05b}. In the nonrelativistic limit $\alpha \rightarrow 0$ such levels are exactly degenerate. The corresponding transition frequencies $\omega$ are approximately proportional to $(\alpha Z)^2$. Consequently, for these transitions $\mathcal{Q}\approx 1$ and \begin{align}\label{qfactor4} {\Delta\omega}/{\omega} \approx 2 {\Delta\alpha}/{\alpha}\, , \end{align} which implies that for any two FS transitions $\Delta {\cal Q} \approx 0$.
In this approximation $\Delta\alpha/\alpha$\,\ cannot be determined from \Eref{zfactor3}. We will now show that in the next order in $(\alpha Z)^2$ the $\mathcal{Q}$-factors of the FS transitions deviate from unity and $\Delta \mathcal{Q}$ in \Eref{zfactor3} is not equal to zero. In fact, for heavy atoms with $\alpha Z \sim 1$ it is possible to find FS transitions with $|\Delta \mathcal{Q}| \gg 1$ \cite{DF05b}. Here we focus on atoms with $\alpha Z \ll 1$, which are more important for astronomical observations. For such atoms $|\Delta \mathcal{Q}| < 1$ and, as we will show below, there is a simple analytical relation between $\Delta \mathcal{Q}$ and the experimentally observed FS intervals. There are two types of relativistic corrections to the atomic energy. The first type depends on the powers of $\alpha Z$ and rapidly grows along the periodic table. The second type of corrections depends on $\alpha$ and does not change much from atom to atom. Such corrections are usually negligible, except for the lightest atoms. Expanding the energy of a level of the FS multiplet $^{2S+1}\!L_J$ into (even) powers of $\alpha Z$ we have (see \cite{Sob79}, Sec.~5.5): \begin{align}\nonumber E_{L,S,J} &= E_0 +\tfrac{A(\alpha Z)^2}{2}\left[J(J+1)-L(L+1)-S(S+1)\right] \nonumber\\ &+B_J\,(\alpha Z)^4 + \dots\,, \label{FS1} \end{align} where $A$ and $B_J$ are the parameters of the FS multiplet. Note that, in general, $B_J$ depends on the quantum numbers $L$ and $S$, but we will omit the $L$ and $S$ subscripts since they do not change the following discussion. In \Eref{FS1} we keep the term of the expansion $\sim(\alpha Z)^4$, but neglect the term $\sim\alpha^2$. This is justified only for atoms with $Z\gtrsim 10$. Therefore, the following discussion is not applicable to atoms of the second period. Since these atoms are nevertheless very important for astrophysics, we briefly discuss them at the end of this section. The strongest FS transitions are of {\it M1}-type.
They occur between levels with $\Delta J=1$: \begin{align}\label{FS2} \omega_{J,J-1} &=E_{L,S,J}-E_{L,S,J-1} \nonumber\\ &=AJ(\alpha Z)^2+\left(B_J-B_{J-1}\right)(\alpha Z)^4\, . \end{align} In the first order in $(\alpha Z)^2$ we have the well known Land\'{e} rule: $\omega_{J,J-1} = AJ(\alpha Z)^2$, which directly leads to \Eref{qfactor4}. In the next order we get: \begin{align}\label{FS3} \mathcal{Q}_{J,J-1} = 1 + \frac{B_J-B_{J-1}}{AJ}\,(\alpha Z)^2 \,. \end{align} Let us consider the multiplet $^3\!P_J$ (i.e. the ground multiplet for Si~\textsc{i}, S~\textsc{i}, Ar~\textsc{iii}, Mg~\textsc{v}, Ca~\textsc{v}, Na~\textsc{vi}, Mg~\textsc{vii}, Si~\textsc{vii}, Ca~\textsc{vii}, and Si~\textsc{ix}). For two transitions $\omega_{2,1}$ and $\omega_{1,0}$ \Eref{FS3} gives: \begin{align}\label{FS4} \mathcal{Q}_{2,1}-\mathcal{Q}_{1,0} = \frac{B_2-3B_1+2B_0}{2A}\,(\alpha Z)^2\,. \end{align} At the same time, \Eref{FS2} gives the following expression for the frequency ratio: \begin{align}\label{FS5} \frac{\omega_{2,1}}{\omega_{1,0}} &= 2 + \frac{B_2-3B_1+2B_0}{A}\,(\alpha Z)^2\,. \end{align} Comparison of Eqs.~\eqref{FS4} and \eqref{FS5} leads to the final result: \begin{align}\label{FS6} \Delta \mathcal{Q} = \mathcal{Q}_{2,1}-\mathcal{Q}_{1,0} = \frac{1}{2}\,\left(\frac{\omega_{2,1}}{\omega_{1,0}}\right)-1\,. \end{align} In a general case of the $^{2S+1}\!L_J$ multiplet the difference between the sensitivity coefficients $\mathcal{Q}_{J,J-1}$ and $\mathcal{Q}_{J-1,J-2}$ is given by \begin{align}\label{FS7} \Delta \mathcal{Q} = \frac{J-1}{J}\, \left(\frac{\omega_{J,J-1}}{\omega_{J-1,J-2}}\right)-1\,. 
\end{align} If two arbitrary FS transitions $\omega_{J_1,J_1'}$ and $\omega_{J_2,J_2'}$ of the $^{2S+1}\!L_J$ multiplet are considered, the difference $\Delta\mathcal{Q}=\mathcal{Q}_{J_2,J_2'}-\mathcal{Q}_{J_1,J_1'}$ is expressed by \begin{align}\label{FS7a} \Delta \mathcal{Q} = \frac{J_1(J_1+1) - J_1'(J_1'+1)}{J_2(J_2+1) - J_2'(J_2'+1)}\, \left(\frac{\omega_{J_2,J_2'}}{\omega_{J_1,J_1'}}\right) - 1\,. \end{align} This equation can also be used for $E2$-transitions with $\Delta J=2$ and for combinations of $M1$- and $E2$-transitions. Note that the derived values of $\Delta \mathcal{Q}$ for two FS transitions are expressed in terms of their frequencies, which are known from laboratory measurements. Also, the right-hand side of \Eref{FS7} vanishes when the frequency ratio equals $J/(J-1)$, i.e. when the Land\'{e} rule is fulfilled. Eqs.~\eqref{FS6}--\eqref{FS7a} hold only as long as we neglect corrections of the order of $\alpha^2$ and $(\alpha Z)^6$ to \Eref{FS1}, which is justified for atoms in the middle of the periodic table, i.e. approximately from Na ($Z=11$) to Sn ($Z=50$). \begin{table*}[tbh] \caption{The differences of the sensitivity coefficients $\Delta \mathcal{Q}$ of the FS emission lines within the ground multiplets $^3\!P_J$, $^5\!D_J$, $^6\!D_J$, $^4\!F_J$, and $^3\!F_J$ for the most abundant atoms and ions. The FS intervals for S~\textsc{i}, Fe~\textsc{i--iii}, Ar~\textsc{iii}, Mg~\textsc{v}, Ca~\textsc{v}, and Si~\textsc{vii} are inverted. The excitation temperature $T_{\rm ex}$ for the upper level is indicated. Transition wavelengths and frequencies (rounded) are taken from Ref.~\cite{NIST}.
The values of $\Delta \mathcal{Q}$ for the ions C~\textsc{i}, N~\textsc{ii}, and O~\textsc{iii} are calculated using Eq.~(5.197) from Ref.~\cite{Sob79}.} \label{tab1} \begin{tabular}{lcdddcdddcd} \hline\hline\\[-7pt] \multicolumn{1}{c}{Atom/Ion} &\multicolumn{4}{c}{Transition $a$} &\multicolumn{4}{c}{Transition $b$} &\multicolumn{1}{c}{$\omega_b/\omega_a$} &\multicolumn{1}{c}{$\Delta \mathcal{Q}=$} \\ &\multicolumn{1}{c}{$(J_a,J_a')$} &\multicolumn{1}{c}{$\lambda_a$ ($\mu$m)} &\multicolumn{1}{c}{$\omega_a$ (cm$^{-1}$)} &\multicolumn{1}{c}{$T_{\rm ex}$ (K)} &\multicolumn{1}{c}{$(J_b,J_b')$} &\multicolumn{1}{c}{$\lambda_b$ ($\mu$m)} &\multicolumn{1}{c}{$\omega_b$ (cm$^{-1}$)} &\multicolumn{1}{c}{$T_{\rm ex}$ (K)} &&\multicolumn{1}{c}{$\mathcal{Q}_b-\mathcal{Q}_a$} \\ \hline\\[-5pt] C~\textsc{i} & (1,0) &609.1& 16.40& 24& (2,1) &370.4& 27.00& 63&1.646& -0.008 \\ Si~\textsc{i} & (1,0) &129.7& 77.11& 111& (2,1) & 68.5& 146.05& 321&1.894& -0.053 \\ S~\textsc{i} & (0,1) & 56.3&177.59& 825& (1,2) & 25.3& 396.06& 570&2.230& 0.115 \\ Ti~\textsc{i} & (2,3) & 58.8&170.13& 245& (3,4) & 46.1& 216.74& 557&1.274& -0.045 \\ Fe~\textsc{i} & (2,3) & 34.7&288.07&1013& (3,4) & 24.0& 415.93& 599&1.444& 0.083 \\ & (1,2) & 54.3&184.13&1278& (2,3) & 34.7& 288.07&1013&1.565& 0.043 \\ & (0,1) &111.2& 89.94&1407& (1,2) & 54.3& 184.13&1278&2.048& 0.024 \\ N~\textsc{ii} & (1,0) &205.3& 48.70& 70& (2,1) &121.8& 82.10& 188&1.686& -0.016 \\ Fe~\textsc{ii} & (5/2,7/2) & 35.3&282.89& 961& (7/2,9/2) & 26.0& 384.79& 554&1.360& 0.058 \\ & (3/2,5/2) & 51.3&194.93&1241& (5/2,7/2) & 35.3& 282.89& 961&1.451& 0.037 \\ & (1/2,3/2) & 87.4&114.44&1406& (3/2,5/2) & 51.3& 194.93&1241&1.703& 0.022 \\ O~\textsc{iii} & (1,0) & 88.4&113.18& 163& (2,1) & 51.8& 193.00& 441&1.705& -0.027 \\ S~\textsc{iii} & (1,0) & 33.5&298.69& 430& (2,1) & 18.7& 534.39&1199&1.789& -0.105 \\ Ar~\textsc{iii}& (0,1) & 21.9&458.05&2259& (1,2) & 9.0&1112.18&1600&2.428& 0.214 \\ Fe~\textsc{iii}& (2,3) & 33.0& 302.7&1063& (3,4) & 22.9& 436.2 
& 628&1.441& 0.081 \\ & (1,2) & 51.7& 193.5&1342& (2,3) & 33.0& 302.7 &1063&1.564& 0.043 \\ & (0,1) &105.4& 94.9 &1478& (1,2) & 51.7& 193.5 &1342&2.039& 0.019 \\ Mg~\textsc{v} & (0,1) & 13.5& 738.7&3628& (1,2) & 5.6&1783.1 &2566&2.414& 0.207 \\ Ca~\textsc{v} & (0,1) & 11.5& 870.9&4713& (1,2) & 4.2&2404.7 &3460&2.761& 0.381 \\ Na~\textsc{vi} & (1,0) & 14.3& 698 &1004& (2,1) & 8.6& 1161 &2675&1.663& -0.168 \\ Fe~\textsc{vi} & (5/2,3/2) & 19.6& 511.3& 736& (7/2,5/2) & 14.8& 677.0 &1710&1.324& -0.054 \\ & (7/2,5/2) & 14.8& 677.0&1710& (9/2,7/2) & 12.3& 812.3 &2879&1.200& -0.067 \\ Mg~\textsc{vii}& (1,0) & 9.0 & 1107 &1593& (2,1) & 5.5& 1817 &4207&1.641& -0.179 \\ Si~\textsc{vii}& (0,1) & 6.5 & 1535 &8007& (1,2) & 2.5& 4030 &5817&2.625& 0.313 \\ Ca~\textsc{vii}& (1,0) & 6.2 &1624.9&2338& (2,1) & 4.1&2446.5 &5858&1.506& -0.247 \\ Fe~\textsc{vii}& (3,2) & 9.5 &1051.5&1513& (4,3) & 7.8&1280.0 &3354&1.217& -0.087 \\ Si~\textsc{ix} & (1,0) & 3.9&2545.0&3662& (2,1) & 2.6&3869 &9229&1.520& -0.240 \\ \hline\hline \end{tabular} \end{table*} \tref{tab1} lists the calculated $\Delta \mathcal{Q}$ values for the most abundant atoms and ions observed in Galactic and extragalactic gas clouds. The ions C~\textsc{i}, Si~\textsc{i}, N~\textsc{ii}, O~\textsc{iii}, Na~\textsc{vi}, Mg~\textsc{vii}, and Ca~\textsc{vii}, have configuration $ns^2 np^2$ and `normal' order of the FS sub-levels. The ions Mg~\textsc{v}, Si~\textsc{vii}, S~\textsc{i}, and Ca~\textsc{v} have configuration $ns^2 np^4$ and `inverted' order of the FS sub-levels. However, \Eref{FS6} is applicable for both cases. We note that the FS lines of N~\textsc{ii} (122, 205 $\mu$m) can be asymmetric and broadened due to hyperfine components, as observed in \cite{LBS06,PBS07}. The hyperfine splitting occurs also in the FS lines of Na~\textsc{vi} (8.6, 14.3 $\mu$m). Transition wavelengths and frequencies listed in \tref{tab1} are approximate and are given only to identify the FS transitions. 
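Since \Eref{FS7a} involves only measured frequencies, the tabulated $\Delta\mathcal{Q}$ values can be reproduced directly. A short check against three rows of \tref{tab1} (frequencies in cm$^{-1}$ as listed there):

```python
def delta_Q(omega_a, omega_b, Ja, Jap, Jb, Jbp):
    """Difference of sensitivity coefficients Q_b - Q_a for two FS
    transitions (Ja -> Ja') and (Jb -> Jb') of one multiplet, Eq. (FS7a):
    dQ = [Ja(Ja+1) - Ja'(Ja'+1)] / [Jb(Jb+1) - Jb'(Jb'+1)] * (w_b/w_a) - 1."""
    weight = (Ja * (Ja + 1) - Jap * (Jap + 1)) / (Jb * (Jb + 1) - Jbp * (Jbp + 1))
    return weight * omega_b / omega_a - 1.0

# Si I  3P:  77.11 cm^-1 (1-0) vs 146.05 cm^-1 (2-1)
print(round(delta_Q(77.11, 146.05, 1, 0, 2, 1), 3))           # -0.053
# S I   3P (inverted): 177.59 cm^-1 vs 396.06 cm^-1
print(round(delta_Q(177.59, 396.06, 1, 0, 2, 1), 3))          #  0.115
# Fe II 6D:  282.89 cm^-1 (7/2-5/2) vs 384.79 cm^-1 (9/2-7/2)
print(round(delta_Q(282.89, 384.79, 3.5, 2.5, 4.5, 3.5), 3))  #  0.058
```

All three values agree with the $\Delta\mathcal{Q}$ column of \tref{tab1}.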
At present, many of them have been measured with sufficiently high accuracy \cite{NIST}. The iron ions Fe~\textsc{i}, Fe~\textsc{ii}, Fe~\textsc{iii}, Fe~\textsc{vi}, and Fe~\textsc{vii} have ground multiplets $^5\!D$, $^6\!D$, $^5\!D$, $^4\!F$, and $^3\!F$, respectively. All these multiplets, except the last one, produce more than two FS lines, which can be used to further reduce the systematic errors. The sensitivity coefficients for transitions in iron and titanium listed in \tref{tab1} are calculated with the help of \Eref{FS7}. According to \tref{tab1}, the absolute values of the difference $\Delta\mathcal{Q}$ are usually quite large even for atoms with $Z\sim 10$. The sign of $\Delta\mathcal{Q}$ is negative for atoms with configuration $ns^2 np^2$ and positive for atoms with configuration $ns^2 np^4$. These features are not surprising if we consider the level structure of the respective configurations \cite{Sob79}. Both of them have three terms: $^3\!P_{0,1,2}$, $^1\!D_2$, and $^1\!S_0$, but for the configuration $ns^2 np^4$ the multiplet $^3\!P_J$ is `inverted'. The splitting between these terms is caused by the residual Coulomb interaction of the $p$-electrons and is rather small compared to the atomic energy unit $2\mathcal{R}$. For example, the level $^1\!D_2$ of Si~\textsc{i} lies only 6299~cm$^{-1}$ above the ground state, which corresponds to $E_D-E_P=0.029$~a.u. Relativistic corrections to the energy are dominated by the spin-orbit interaction, which for $p$-electrons is of the order of $0.1(\alpha Z)^2$~a.u. The diagonal part of this interaction leads to the second term in \Eref{FS1}, i.e. $A\,(\alpha Z)^2\sim 200$~cm$^{-1}$. In the second order, the non-diagonal spin-orbit interaction causes repulsion between the levels $^3\!P_2$ and $^1\!D_2$ and results in a non-zero parameter $B_2$. We can estimate this correction as $B_2(\alpha Z)^4\sim A^2(\alpha Z)^4/(E_P-E_D)\,\sim -10~\mathrm{cm}^{-1}$. This estimate has the expected order of magnitude.
Note that $B_2$ is negative. For normal multiplets it reduces the ratio $\omega_{2,1}/\omega_{1,0}$, whereas for inverted multiplets the ratio increases. This is in qualitative agreement with \tref{tab1}. Iron and titanium ions have configurations $3d^k 4s^l$, with $k=6,\,l=2$ for Fe~\textsc{i} and $k=2,\,l=0,2$ for Fe~\textsc{vii} and Ti~\textsc{i}, respectively. As we can see from \tref{tab1}, here also all normal multiplets (for Ti~\textsc{i}, Fe~\textsc{vi}, and Fe~\textsc{vii}) have negative values of $\Delta\mathcal{Q}$, while the inverted multiplets of all other ions have positive values of $\Delta\mathcal{Q}$. Equation~(\ref{FS4}) shows that the sensitivity to $\alpha$-variation grows with $Z$. For heavy atoms, $\alpha Z \sim 1$, the neglected terms in expansion \eqref{FS1} become important. That breaks relation \eqref{FS7} between $\Delta \mathcal{Q}$ and the FS intervals, and the sensitivity coefficients $\mathcal{Q}$ have to be calculated numerically. According to \tref{tab1}, the largest coefficients $B_J$ appear for Ca~\textsc{v} and Si~\textsc{vii}. The neglected corrections to $\Delta \mathcal{Q}$ can be estimated as $\sim [A(\alpha Z)^2/(E_P-E_D)]^2$, i.e. the uncertainty in $\Delta \mathcal{Q}$ for Ca~\textsc{v} and Si~\textsc{vii} is less than 20\%. For the other elements listed in \tref{tab1} this correction should be smaller. Note that for the iron ions, which have the largest $Z$, the relativistic effects are suppressed, because for $d$-electrons they are typically an order of magnitude smaller than for $p$-electrons. For light elements the accuracy of our estimate depends on the neglected terms $\sim \alpha^2$. The discussion of these terms can be found in \cite{Sob79} (see Eq.~(5.197) and Table~5.21 therein). The corresponding correction decreases from almost 50\% for Na~\textsc{vi} to 30\% for Mg~\textsc{vii} and to 15\% for Si~\textsc{ix}. For atoms with $Z\lesssim 10$ one can calculate $\Delta \mathcal{Q}$ using Eq.~(5.197) from Ref.~\cite{Sob79}.
For example, for C~\textsc{i}, N~\textsc{ii}, and O~\textsc{iii}, we get $\Delta \mathcal{Q} = -0.008$, $-0.016$, and $-0.027$, respectively. As expected, these values are much smaller than those for the heavier elements. On the other hand, these ions are so important for astrophysics, that we keep them in \tref{tab1}. Numerical calculations for heavy many-electron atoms are rather difficult to perform and the computed $\Delta \mathcal{Q}$ values may not be very accurate. For atoms with $\alpha Z \ll 1$ one can use \Eref{FS7} to check the accuracy of the numerical results. \begin{table}[bh] \caption{The differences between sensitivity coefficients of the FS transitions within ground $^6\!D_J$ multiplet of Fe~\textsc{ii}, $\Delta \mathcal{Q} \equiv \mathcal{Q}_{J,J-1}-\mathcal{Q}_{J-1,J-2}$. In the third column we use calculated $q$-factors from \cite{PKT07} (see Table~I from this Ref., basis set [7$spdf$]). In the fourth and fifth columns we apply \Eref{FS7} to calculated and experimental FS intervals, respectively.} \label{tab2} \begin{tabular}{ccddd} \hline\hline \multicolumn{2}{c}{Transitions} &\multicolumn{3}{c}{$\Delta\mathcal{Q}$}\\ (5/2,7/2) & (7/2,9/2) & 0.045 & 0.049 & 0.058 \\ (3/2,5/2) & (5/2,7/2) & 0.023 & 0.029 & 0.037 \\ (1/2,3/2) & (3/2,5/2) & 0.017 & 0.016 & 0.022 \\ \hline\hline \end{tabular} \end{table} As an example we consider the ground $^6\!D_J$ multiplet of Fe~\textsc{ii} ion (\tref{tab2}). One can see that numerical results in Ref.~\cite{PKT07} are in good agreement with the values obtained from \Eref{FS7} for the calculated FS intervals. However, when we apply \Eref{FS7} to actual experimental FS intervals, the agreement worsens noticeably. It is well known that deviations from the Land\'{e} rule for FS intervals depend on the interplay between the (non-diagonal) spin-orbit and the residual Coulomb interactions \cite{Sob79}. 
For this reason numerical results are very sensitive to the treatment of the effects of the core polarization and the valence correlations. Note also that the calculated $q$-factors are first used to find the sensitivity coefficients $\mathcal{Q}$, and then the (small) differences are taken. Obviously this makes the whole calculation rather unstable. Similarly, \Eref{FS7} can be used to check calculations of the $q$-factors for other atoms considered in Refs.~\cite{BFK05a,BFK06,DF08}. Numerical calculations for light atoms with $Z\lesssim 10$ are usually much simpler and more reliable. However, as we have pointed out above, the differences in the sensitivity coefficients of the light atoms depend on the relativistic corrections $\sim\alpha^2$. This means that the Breit interaction between valence electrons should be accurately included, while the majority of the published results were obtained in the Dirac-Coulomb approximation. There is a certain similarity between the present method and the method of optical doublets, used previously to study $\alpha$-variation (see, e.g., \cite{BSS04} and references therein). In that method, however, the FS energy constitutes a small fraction of the total transition energy. Therefore, the parameter $\Delta\mathcal{Q}$ for optical transitions is much smaller. Note that for the mid- and far-infrared FS lines, the transition energy and the FS splitting coincide, which leads to a much larger parameter $\Delta\mathcal{Q}$. \section{Discussion and Conclusions} In this paper we suggest using two or more FS lines of the same ion to study a possible variation of $\alpha$ at early stages of the evolution of the Universe up to $\Delta T \sim 96$\% of $T_{\rm U}$. The sensitivity of the suggested method is proportional to $\Delta\mathcal{Q}$, as seen from \Eref{zfactor3}.
We have deduced a simple analytical expression for $\Delta\mathcal{Q}$ for the FS transitions in light atoms and ions within the range of nuclear charges $11 \le Z\le 26$. We found that $|\Delta\mathcal{Q}|$ grows with $Z$ and reaches 0.2~--~0.4 for the ions of Ar and Ca. This is about one order of magnitude higher than typical sensitivities in the optical and UV range. In addition to being more sensitive, this method also provides a considerable reduction of the Doppler noise, which limits the accuracy of the optical observations. Using lines of the same element reduces the sources of the Doppler noise to the inhomogeneity of the excitation temperature $T_{\rm ex}$ within the cloud(s). Alternatively, when lines of different species are used, the Doppler noise may be significantly higher because of the difference of the respective spatial distributions. At present, the precision of the existing radio observations of the FS lines from distant objects is considerably lower than that of the most accurate optical observations. For example, the errors in the line center positions for the C~\textsc{i} $J = 2 \rightarrow 1$ and $J = 1 \rightarrow 0$ lines at $z = 2.557$ were $\sigma_{v,{\rm radio}} = 8$ and 25 km~s$^{-1}$, respectively \cite{WHD03,WDH05}. This has to be compared with the precision of the modern optical measurements, $\sigma_{v,{\rm opt}} = 85$ m~s$^{-1}$~\cite{LML07,LCM06}. In the optical range the error $\sigma_{v,{\rm opt}}$ includes both random and systematic contributions; the systematic part is the wavelength calibration error, which is negligible at radio frequencies. In the forthcoming observations with ALMA, the statistical error is expected to be several times smaller than 85 m~s$^{-1}$.
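These velocity uncertainties translate into an order-of-magnitude estimate of the attainable constraint. A minimal numerical sketch, assuming the conventional relation $|\Delta\alpha/\alpha| \approx \sigma_v/(2c\,|\Delta\mathcal{Q}|)$ (the factor of 2 is an assumed convention here, not fixed by the text):

```python
C = 2.998e8  # speed of light, m/s

def dalpha_limit(sigma_v, dQ):
    """Order-of-magnitude constraint on |Delta alpha / alpha| from a
    line-shift uncertainty sigma_v (m/s) and sensitivity difference dQ.
    The 1/2 prefactor is an assumed convention."""
    return sigma_v / (2 * C * dQ)

# FIR fine-structure lines: anticipated ALMA precision of a few tens of m/s
# and |Delta Q| ~ 0.3 (Ar and Ca ions, per the text)
fir = dalpha_limit(20.0, 0.3)
# Optical data: 85 m/s precision with |Delta Q| an order of magnitude smaller
opt = dalpha_limit(85.0, 0.03)

print(f"FIR:     {fir:.1e}")  # about a tenth of a ppm
print(f"optical: {opt:.1e}")
```

With the quoted $|\Delta\mathcal{Q}|\sim 0.3$ and a few tens of m~s$^{-1}$ of statistical precision, the FIR estimate lands at the $10^{-7}$ level, consistent with the sensitivity discussed in the text.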
Together with the higher sensitivity to $\alpha$-variation, this would allow one to estimate $\Delta\alpha/\alpha$ at the level of one tenth of ppm~--- well beyond the limits of the contemporary optical observations and comparable to the anticipated sensitivity of the next generations of spectrographs for the VLT and the EELT \cite{MML06,M07}. Thus, FIR lines offer a very promising strategy to probe the hypothetical variability of the fine-structure constant both locally and in distant extragalactic objects. \begin{acknowledgments} MGK, SGP, and SAL gratefully acknowledge the hospitality of Hamburger Sternwarte while visiting there. This research has been partly supported by the DFG projects SFB 676 Teilprojekt C, the RFBR grants No. 06-02-16489 and 07-02-00210, and by the Federal Agency for Science and Innovations grant NSh 9879.2006.2. \end{acknowledgments}
0802.1032
\section{Introduction} Present-day cosmological research centres on the investigation of {\it dark energy}, an exotic type of entity responsible for generating acceleration in the expanding Universe. In fact, various recent observational results \cite{Riess1998,Perlmutter1999,Knop2003,Spergel2003,Riess2004,Tegmark2004,Astier2005,Spergel2006} suggest that the Universe is expanding with acceleration alone, while some other works \cite{Riess2001,Amendola2003,Padmanabhan2003} indicate that the acceleration is a phenomenon of the recent past and was preceded by a phase of deceleration. Now, the exact nature of dark energy being still unknown, its investigation is going on along various paths. Phenomenological models are also contenders in this dark energy investigation. Although these types of phenomenological models do not originate from any underlying quantum field theory, they are still useful enough to arrive at some fruitful conclusions. Of the three main variants of phenomenological models, viz. kinematic, hydrodynamic and field-theoretic models \cite{Sahni2000}, the present work deals with kinematic models, where the dark energy representative $\Lambda$ is assumed to be a function of time. Recently, Ray et al.\cite{Ray2007} and Mukhopadhyay et al.\cite{Mukhopadhyay2005} have shown the equivalence of four $\Lambda$ models, viz. $\Lambda \sim (\dot a/a)^2$, $\Lambda \sim \ddot a/a$, $\Lambda \sim \rho$ and $\Lambda\sim \dot H$, for the spatially flat ($k=0$) Universe. But since the closed ($k=1$) and open ($k=-1$) Universes cannot be entirely ruled out, there is enough reason to investigate dark energy for general $k$. In this work, therefore, one of the equivalent $\Lambda$ models, viz. $\Lambda \sim (\dot a/a)^2$, is selected to solve the Einstein equations for general $k$ in order to have a broader view of the accelerating Universe. The scheme of the investigation is as follows: Sec.
2 and 3 deal respectively with the field equations and their solutions, while some physical features arising out of this work are described in Sec. 4. Finally, some conclusions are drawn in Sec. 5. \section{Field Equations for the Spherically Symmetric Space-times} The Einstein field equations are given by \begin{eqnarray} R^{ij}-\frac{1}{2}Rg^{ij}= -8\pi G\left[T^{ij}-\frac{\Lambda}{8\pi G}g^{ij}\right] \end{eqnarray} where the cosmological term $\Lambda$ is time-dependent, i.e. $\Lambda = \Lambda(t)$. Let us choose the spherically symmetric FLRW metric \begin{eqnarray} ds^2= -dt^2+a(t)^2\left[\frac{dr^2}{1-kr^2}+r^2(d\theta^2+\sin^2\theta\, d\phi^2)\right] \end{eqnarray} where $k$ is the curvature constant and $a=a(t)$ is the scale factor. For the metric given by equation (2), the field equations (1) yield the Friedmann and Raychaudhuri equations, respectively given by \begin{eqnarray} \left(\frac{\dot a}{a}\right)^2+\frac{k}{a^2} = \frac{8\pi G\rho}{3}+\frac{\Lambda}{3}, \end{eqnarray} \begin{eqnarray} \left(\frac{\ddot a}{a}\right) = -\frac{4\pi G}{3}\left(\rho+3p\right)+\frac{\Lambda}{3} \end{eqnarray} where $c$, the velocity of light in vacuum, is assumed to be unity. The generalized energy conservation law, when both $\Lambda$ and $G$ vary, was derived by Shapiro et al.\cite{Shapiro2005} using Renormalization Group Theory as well as by Vereshchagin and Yegorian\cite{Vereshchagin2006} using the Gurzadyan-Xue formula\cite{Gurzadyan2003}. Since in the present work $G$ is assumed to be a constant and $\Lambda$ a variable, the above-mentioned generalized conservation law reduces to the particular form \begin{eqnarray} 8\pi G(p+\rho)\left(\frac{\dot a}{a}\right) = -\frac{8\pi G}{3}\dot\rho-\frac{\dot\Lambda}{3}.
\end{eqnarray} The barotropic equation of state relating pressure and density is given by \begin{eqnarray} p= \omega\rho \end{eqnarray} where the barotropic index $\omega$ can assume the values $0$, $1/3$, $1$ and $-1$ for pressureless dust, electromagnetic radiation, stiff fluid and vacuum fluid respectively. From equation (4), using equation (6), we get \begin{eqnarray} \rho =\frac{3}{4\pi G(1+3\omega)}\left(\frac{\Lambda}{3}-\frac{\ddot a}{a}\right). \end{eqnarray} Again, differentiating equation (3) and using equations (5) - (7) we obtain the differential equation \begin{eqnarray} \left(\frac{\dot a}{a}\right)^2+\frac{2}{1+3\omega}\left(\frac{\ddot a}{a}\right)+\frac{k}{a^2} = \left(\frac{1+\omega}{1+3\omega}\right)\Lambda. \end{eqnarray} \section{Solutions for the Phenomenological Model $\Lambda = 3\alpha (\dot a/a)^2$} Using the {\it ansatz} $\Lambda = 3\alpha (\dot a/a)^2$, we immediately get from equation (8) \begin{eqnarray} \frac{\ddot a}{\dot a} = \left[3\alpha(1+\omega)-(3\omega+1)\right]\frac{\dot a}{2a}-(3\omega+1)\frac{k}{2a\dot a}. \end{eqnarray} The above equation after simplification reduces to the form \begin{eqnarray} a\dot a \frac{d}{dt}\left[\ln{(\dot a a^{-s/2})}\right] = -\frac{(3\omega+1)k}{2} \end{eqnarray} where $s = 3\alpha(1+\omega)-(3\omega+1)$. Let us now study the case $s = -2$, for which equation (10) reduces to \begin{eqnarray} a\dot a \frac{d}{dt}\left[\ln(a\dot a)\right] = -\frac{(3\omega+1)k}{2}.
\end{eqnarray} Solving equation (11) we get our solution set as \begin{eqnarray} a(t) = \left[{C_0}^{\prime}t+{C_1}^{\prime}-\frac{(3\omega+1)}{2}kt^2\right]^{1/2}, \end{eqnarray} \begin{eqnarray} H(t) = \frac{{C_0}^{\prime}-(1+3\omega)kt}{2\left[{C_0}^{\prime}t+{C_1}^{\prime}-\frac{(3\omega+1)}{2}kt^2\right]}, \end{eqnarray} \begin{eqnarray} \rho(t) = \frac{3(1-3\alpha)}{16\pi G}\frac{\left[{C_0}^{\prime}-\frac{(3\omega+1)}{2}kt\right]+ 2k\left[{C_0}^{\prime}t+{C_1}^{\prime}-\frac{(3\omega+1)}{2}kt^2\right]} {\left[{C_0}^{\prime}t+{C_1}^{\prime}-\frac{(3\omega+1)}{2}kt^2\right]^2}, \end{eqnarray} \begin{eqnarray} \Lambda(t) = \frac{3\alpha\left[{C_0}^{\prime}-(3\omega+1)kt\right]^2}{4\left[{C_0}^{\prime}t+{C_1}^{\prime}-\frac{(3\omega+1)}{2}kt^2\right]^2} \end{eqnarray} where ${C_0}^{\prime}=2C_0$, ${C_1}^{\prime}=2C_1$, $C_0$ and $C_1$ being constants of integration. If we impose the boundary condition $a(t)=0$ when $t=0$, then $C_1=0$ which implies ${C_1}^{\prime}=0$. Then the simplified solution set becomes \begin{eqnarray} a(t) = \left[{C_0}^{\prime}t-\frac{(3\omega+1)}{2}kt^2\right]^{1/2}, \end{eqnarray} \begin{eqnarray} H(t) = \frac{{C_0}^{\prime}-(1+3\omega)kt}{2\left[{C_0}^{\prime}t-\frac{(3\omega+1)}{2}kt^2\right]}, \end{eqnarray} \begin{eqnarray} \rho(t) = 3\frac{(\alpha+1)\left[{C_0}^{\prime}-(1+3\omega)kt\right]^2+2(1+3\omega)k\left[{C_0}^{\prime}t-\frac{(1+3\omega)}{2}kt^2\right]}{16\pi G (1+3\omega)\left[{C_0}^{\prime}t-\frac{(1+3\omega)}{2}kt^2\right]^2}, \end{eqnarray} \begin{eqnarray} \Lambda(t) = \frac{3\alpha\left[{C_0}^{\prime}-(3\omega+1)kt\right]^2}{4\left[{C_0}^{\prime}t-\frac{(3\omega+1)}{2}kt^2\right]^2}. \end{eqnarray} It is clear from equation (19) that for a repulsive $\Lambda$, $\alpha$ must be positive whereas $\alpha=0$ implies a null $\Lambda$. This means that we are getting Einstein's expanding Universe without $\Lambda$. \section{Physical Features of the Solutions} \subsection{Density of the Universe $\Omega$} The above solution set is obtained by assuming $s=-2$.
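As a cross-check, the internal consistency of equations (16) and (17) can be verified numerically; a minimal sketch (parameter values are arbitrary and purely illustrative) comparing the analytic $H(t)$ with a finite-difference derivative of $a(t)$:

```python
def a_of_t(t, C0p, k, w):
    """Scale factor of equation (16); C0p is C_0' and w is omega."""
    return (C0p * t - 0.5 * (3 * w + 1) * k * t**2) ** 0.5

def H_of_t(t, C0p, k, w):
    """Hubble parameter of equation (17)."""
    num = C0p - (1 + 3 * w) * k * t
    den = 2 * (C0p * t - 0.5 * (3 * w + 1) * k * t**2)
    return num / den

# arbitrary test values (closed Universe, stiff fluid)
C0p, k, w, t, h = 2.0, 1, 1.0, 0.3, 1e-6

# central finite difference for a-dot, then H = a-dot / a
adot = (a_of_t(t + h, C0p, k, w) - a_of_t(t - h, C0p, k, w)) / (2 * h)
H_numeric = adot / a_of_t(t, C0p, k, w)

print(abs(H_numeric - H_of_t(t, C0p, k, w)))  # small: the two forms agree
```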
Now, $s=-2$ means \begin{eqnarray} \frac{2}{3(1-\alpha)(1+\omega)} = \frac{1}{2}. \end{eqnarray} For $k=0$ we get from equations (16), (17), (18) and (19) respectively $a(t)\propto t^{2/[3(1-\alpha)(1+\omega)]}$, $H(t)\propto 1/t$, $\rho(t)\propto 1/t^2$ and $\Lambda(t)\propto 1/t^2$. These results were obtained by Ray et al.\cite{Ray2007} for the flat ($k=0$) Universe. Again, from equation (20) we have \begin{eqnarray} \frac{(3\omega-1)}{(1+\omega)} = 3\alpha. \end{eqnarray} Since equation (19) suggests that for a repulsive $\Lambda$ we must have $\alpha>0$, from equation (21) we find that either $\omega>1/3$ or $\omega<-1$. For $\omega>1/3$ we get a Universe where the contribution of electromagnetic radiation is negligible (for a radiation-dominated Universe, $\omega=1/3$), while $\omega<-1$ signifies the presence of phantom energy. Again, using equation (18), the expression for the cosmic matter energy density $\Omega_m$ can be easily derived and is given by \begin{eqnarray} \Omega_m = \frac{2(\alpha+1)}{(1+3\omega)}+4k\frac{[{C_0}^{\prime}t-\frac{(1+3\omega)}{2}kt^2]}{[{C_0}^{\prime}-(1+3\omega)kt]^2}. \end{eqnarray} Also, from the {\it ansatz} $\Lambda = 3\alpha (\dot a/a)^2$ we get \begin{eqnarray} \Omega_{\Lambda} = \alpha. \end{eqnarray} Then, using equations (21) - (23) we obtain \begin{eqnarray} \Omega_m + \Omega_{\Lambda} = 1+\frac{4k[{C_0}^{\prime}t-\frac{(1+3\omega)}{2}kt^2]}{[{C_0}^{\prime}-(1+3\omega)kt]^2}. \end{eqnarray} For the flat Universe ($k=0$), equation (24) reduces to the case of Ray et al.\cite{Ray2007}. Again, equation (24) shows that at time $t=0$ the sum of $\Omega_m$ and $\Omega_{\Lambda}$ becomes independent of the curvature constant $k$ and takes a unit value whatever the value of $k$. On the other hand, when $t$ tends to infinity, from (24) we have \begin{eqnarray} \Omega_m + \Omega_{\Lambda} = 1-\frac{2}{1+3\omega}. \end{eqnarray} From equation (25) we again observe that $\Omega_m + \Omega_{\Lambda}$ is independent of $k$.
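The $t\to\infty$ limit in equation (25) can likewise be confirmed numerically from equation (24); a sketch with arbitrary illustrative values of the constants:

```python
def omega_sum(t, C0p, k, w):
    """Right-hand side of equation (24) for Omega_m + Omega_Lambda."""
    num = 4 * k * (C0p * t - 0.5 * (1 + 3 * w) * k * t**2)
    den = (C0p - (1 + 3 * w) * k * t) ** 2
    return 1.0 + num / den

C0p, k = 2.0, 1  # arbitrary positive C0', closed Universe

# w > 1/3 case (here w = 1): the late-time sum lies between 0 and 1
late_stiff = omega_sum(1e8, C0p, k, 1.0)
# w < -1 case (here w = -2): the late-time sum lies between 1 and 2
late_phantom = omega_sum(1e8, C0p, k, -2.0)

print(late_stiff, 1 - 2 / (1 + 3 * 1.0))     # both entries ~0.5
print(late_phantom, 1 - 2 / (1 + 3 * -2.0))  # both entries ~1.4
```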
Thus, both the early and late phases of the Universe exhibit the same behaviour so far as the curvature dependence of the sum of $\Omega_m$ and $\Omega_{\Lambda}$ is concerned. It has already been shown that for physically valid $\alpha$, either $\omega>1/3$ or $\omega<-1$. In the former case, $2/(1+3\omega)<1$ and hence by equation (25) \begin{eqnarray} 0<\Omega_m + \Omega_{\Lambda}<1. \end{eqnarray} But for $\omega<-1$ we have $-2/(1+3\omega)<1$ and hence equation (25) provides the following constraint \begin{eqnarray} 1<\Omega_m + \Omega_{\Lambda}<2. \end{eqnarray} The above two relations (26) and (27) suggest that in the distant future the sum total of matter and dark energy density will not only be independent of the curvature of space but will also be either less than (for $\omega>1/3$) or greater than (for $\omega<-1$) unity, which is at variance with the present value of the sum of the two types of energy densities dominating the Universe. This result is very important for visualizing the future cosmic evolution. Now, let us suppose that the Universe is composed of a mixture of two types of fluids having barotropic indices $\omega_a$ and $\omega_b$ (say). Then, from equation (25) we get \begin{eqnarray} (\Omega_m + \Omega_{\Lambda})_a = 1-\frac{2}{1+3\omega_a}, \end{eqnarray} \begin{eqnarray} (\Omega_m + \Omega_{\Lambda})_b = 1-\frac{2}{1+3\omega_b}. \end{eqnarray} If $(\Omega_m + \Omega_{\Lambda})_{avg}$ is the average of $(\Omega_m + \Omega_{\Lambda})_a$ and $(\Omega_m + \Omega_{\Lambda})_b$, then \begin{eqnarray} (\Omega_m + \Omega_{\Lambda})_{avg} = 1-\left[\frac{2+3(\omega_a+\omega_b)}{(1+3\omega_a)(1+3\omega_b)}\right]. \end{eqnarray} Equation (30) shows that when $\omega_a+\omega_b=-2/3$, then \begin{eqnarray} (\Omega_m + \Omega_{\Lambda})_{avg} = 1.
\end{eqnarray} This means that, as in the early and present Universe, for the late Universe the sum of $\Omega_m$ and $\Omega_{\Lambda}$ will be unity only if the Universe contains a mixture of two types of fluids rather than a single fluid. Since $\omega_a+\omega_b=-2/3$, and it has already been shown that either $\omega>1/3$ or $\omega<-1$, let us suppose $\omega_a=1/3+\epsilon$ and $\omega_b=-1-\epsilon$, where $\epsilon>0$. Therefore, at large $t$, when the Universe is filled with a mixture of two types of fluids (one of them being the phantom fluid), and the barotropic index of one fluid is $(1/3+\epsilon)$ while that of the other is $(-1-\epsilon)$, the average value of the sum of $\Omega_m$ and $\Omega_{\Lambda}$ will be equal to one. \subsection{Deceleration parameter $q$} Let us now consider the expression for the deceleration parameter $q$, which is given by \begin{eqnarray} q = -\frac{a\ddot a}{\dot a^2} = -\left(1+\frac{\dot H}{H^2}\right). \end{eqnarray} If the Universe is composed of two fluids with equation of state parameters $\omega_a$ and $\omega_b$, then each of them will have some effect on the dynamics of the Universe. So, for calculating the value of the deceleration parameter $q$, the contributions coming from each component should be taken into account. If $q_a$ and $q_b$ are the values of the two separate parts of $q$ coming from the fluids with barotropic indices $\omega_a$ and $\omega_b$ respectively, then using equation (17) we get from equation (32) the following two expressions \begin{eqnarray} q_a = \frac{{{C_0}^{\prime}}^2}{[{C_0}^{\prime}-(2+3\epsilon)kt]^2}, \end{eqnarray} \begin{eqnarray} q_b = \frac{{{C_0}^{\prime}}^2}{[{C_0}^{\prime}+(2+3\epsilon)kt]^2}. \end{eqnarray} If $q_{eff}$ is the effective value of $q$, obtained by combining the separate parts $q_a$ and $q_b$, then \begin{eqnarray} q_{eff} = \frac{4{{C_0}^{\prime}}^3kt}{[{{C_0}^{\prime}}^2-(2+3\epsilon)^2k^2t^2]^2}.
\end{eqnarray} Equation (35) shows that the sign of $q$ depends only on two quantities, viz., the integration constant ${C_0}^{\prime}(=2 C_0)$ and the curvature constant $k$. If for simplicity we assume $C_0$ to be positive, then we get an accelerating or a decelerating Universe according as $k<0$ or $k>0$. This result can be interpreted as follows. The Universe is made of two types of fluids with equation of state parameters $\omega_a$ and $\omega_b$, one of which acts as a prohibitor and the other as a supporter of cosmic acceleration. In the previous matter-dominated phase, $k$ had a small positive value (i.e. $q_a>q_b$) and as a result the Universe was decelerating. But at a certain time during the cosmic evolution, the second type of fluid (viz., the phantom fluid) gained the upper hand (i.e. $q_a<q_b$) and consequently $k$ has become slightly negative. That is why the present Universe is in a state of acceleration. A very small positive or negative value of the curvature constant does not contradict the observational result that the present Universe is nearly flat. Further, the change of sign of $q$ shows that the cosmic acceleration is a recent phenomenon. \section{Conclusions} The present work, apart from being a generalization of an earlier one\cite{Ray2007}, has also revealed some new and interesting physical features. For a flat Universe, all the results of Ray et al.\cite{Ray2007} can be recovered from the expressions for $a(t)$, $H(t)$, $\rho(t)$ and $\Lambda(t)$ of the present work. Moreover, it has been possible to trace the entire cosmic evolution, starting from the Big Bang and extending to the distant future. The most significant result is related to the cosmic matter and dark energy densities for the non-flat Universe. It has been shown that for a non-flat Universe, $\Omega_m + \Omega_{\Lambda}=1$ only when the Universe is composed of two types of fluids, one with $\omega>1/3$ and another with $\omega<-1$.
This means that both a stiff fluid ($\omega=1$) and a phantom fluid ($\omega<-1$) can be among the two constituents of the Universe when the Universe is not flat. The evolution of the deceleration parameter $q$ shows that a non-flat Universe would be decelerating in the past and accelerating at present. This result is very significant for $\Lambda$-CDM cosmology. Another interesting point is the absence of a Big Rip even in the presence of a fluid with $\omega<-1$. Caldwell\cite{Caldwell2002} and Caldwell et al.\cite{Caldwell2003} demonstrated the occurrence of a Big Rip in the presence of a fluid with a supernegative equation of state. It may be mentioned here that Gonzalez-Diaz \cite{Gonzalez-Diaz2003} has shown that, by a proper generalization of the Chaplygin-gas model, a Big Rip can be avoided even in the presence of phantom energy, whereas Abdalla et al.\cite{Abdalla2004} have arrived at the same result through a slight modification of GTR. But in the present work, a cosmic doomsday is shown to be impossible within the normal framework of GTR. One of the reasons may be the presence of another fluid apart from the phantom fluid; that other fluid may act as an inhibitor of the Big Rip. It may be mentioned that \v{S}tefan\v{c}i\'{c} \cite{Stefancic2004} has developed a model in which the dark energy component and the matter component interact with each other, resulting in the appearance of phantom energy out of non-phantom matter. The present work can be considered as a counterexample of that, because here phantom matter, in the presence of another component as a mediator, can behave as non-phantom matter. Finally, it is to be noted that the present work has demonstrated that, although current observational data point towards a $k=0$ Universe, we are not yet in a position to completely rule out $k=\pm 1$ cosmologies.
\section*{Acknowledgments} One of the authors (SR) would like to express his gratitude to the authorities of IUCAA, Pune, India for providing him the Associateship Programme under which a part of this work was carried out.
0802.1838
\section{Introduction} According to Thomas Wright, mapping the Cosmos on the very largest scales is about gaining {\it ``a partial View of Immensity, or without much Impropriety perhaps, a finite View of Infinity''}\footnote{ {\it An Original Theory of the Universe} (1750, 9th letter, Plate XXXI).}. Unfortunately, charting the cosmic territory beyond our local volume into the distant Universe is observationally challenging. Until recently, our understanding of the large-scale organisation of galaxies at $z \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 0.2$ had to rely on the predictions of numerical simulations in the framework of the rather successful cold dark matter model \citep[{\it e.g.\ }][]{spri05}. Within this scenario, which has now developed into the leading theoretical paradigm for the formation of structures in the Universe, structures grow from weak, dark-matter density fluctuations present in the otherwise homogeneous and rapidly expanding early universe. The standard version of the model incorporates the assumption that these primordial, Gaussian-distributed fluctuations are amplified by gravity, eventually turning into the rich structures we see today. This picture, in which gravity, as described by general relativity, is the engine driving cosmic growth, is generally referred to as the gravitational instability paradigm (GIP). However plausible it may seem, it is important to test its validity. In the local universe the GIP has been shown to make sense of a vast amount of independent observations on different spatial scales, from galaxies to superclusters of galaxies \citep[{\it e.g.\ }][]{pea01,teg06}. Deep redshift surveys now allow us to test whether the predictions of this assumption are also valid at earlier epochs.
In this paper we test the role of gravity in shaping density inhomogeneities by using three-dimensional maps of the distribution of visible matter revealed by the VIMOS-VLT Deep Survey over the large redshift baseline $0<z<1.5$ (see Massey et al. 2007 for three-dimensional cartography of mass overdensities in the COSMOS field). We first present a qualitative picture of the large-scale organization of remote cosmic structures, and then quantify the observed clustering by computing the probability distribution functions (PDF) of galaxy overdensities $\delta_g$. In this way, we trace how the amplitude and spatial arrangement of galaxy fluctuations change with cosmic time. We explore the mechanisms governing this growth by comparing the time evolution of the low-order moments of the galaxy PDF ({\em i.e.} the {\it variance} amplitude $<\delta_g^2>$ and the {\it normalised skewness} $S_3=<\delta_g^3>_c/<\delta_g^2>^{2}$) with the corresponding quantities theoretically predicted for matter fluctuations in the linear and semi-linear perturbative regime. (Note that in the following we shall often speak equivalently of the variance or of its square root, i.e. the {\it root mean square} amplitude $<\delta_g^2>^{1/2}$, when referring to the second-order moment.) This provides a test of GIP-specific predictions at as-yet unexplored epochs that are intermediate between the present era and the time of decoupling. Knowledge of the precise growth history of density inhomogeneities also provides a way to test the theory of gravitation~\citep[{\it e.g.\ }][]{lin05}. In addition to the statistical approach presented in this paper, we have recently addressed this same issue from a dynamical point of view as well. We have used linear redshift-space distortions in the VVDS-{\it Wide} data to measure the growth rate of matter fluctuations at $z\sim 0.8$ \citep{nature07}.
This approach offers promising prospects for determining the cause of cosmic acceleration in the near future \citep{lin07}. The work presented here is also complemented by a parallel paper (Cappi et al. 2008) in which we study the behaviour of the N-point correlation functions for this same sample. Higher-order galaxy correlation functions are known to display a hierarchical scaling as a function of the variance of the count distribution ({\it e.g.\ } Peebles 1980). In the same spirit, we use this scaling to test the standard assumption of evolution under gravitational instability of an initially Gaussian distribution of density fluctuations. The paper is organised as follows: in \S 2 we briefly describe the first-epoch VVDS data sample. In \S 3 we present 3D overdensity maps from the galaxy distribution in the VVDS to $z \sim 1.5$; we then characterise the evolution of galaxy fluctuations with cosmic epoch by computing their PDF in two redshift slices. In \S 4 we compare the observed redshift evolution of the low-order moments (i.e. variance and skewness) of the PDF of the galaxy fluctuations with linear and semi-linear theoretical predictions of the Gravitational Instability Paradigm. Conclusions are presented in \S 5. The coherent cosmological picture emerging from independent observations and analyses motivates us to present our results in the context of a $\Lambda$CDM cosmological model with $\Omega_m=0.3$ and $\Omega_{\Lambda}=0.7$. Throughout, the Hubble constant is parameterised via $h=H_{0}/100$. All magnitudes in this paper are in the AB system (Oke \& Gunn 1983), and from now on we will drop the suffix AB. \section{The First-Epoch VVDS-Deep Redshift Sample} \label{data} The primary observational goal of the VIMOS-VLT Redshift Survey, as well as the survey strategy and first-epoch observations in the VVDS-0226-04 field (from now on simply VVDS-02h), are presented by \citet{lef05}.
Here it is enough to stress that, in order to minimise selection biases, the VVDS-Deep survey has been conceived as a purely flux-limited ($17.5\leq I \leq24$) survey, {\it i.e.\ } no target pre-selection according to colours or compactness is used. Stars and QSOs have been removed {\it a posteriori} from the final redshift sample. Photometric data in this field are complete and free from surface brightness selection effects down to the limiting magnitude $I_{AB}$=24 \citep{mcc03}. Spectroscopic observations were carried out with the VIMOS multi-object spectrograph using one-arcsecond-wide slits and the LRRed grism, which covers the spectral range $5500<\lambda(\AA)<9400$ with an effective spectral resolution $R\sim 227$ at $\lambda=7500\AA$. The {\it rms} accuracy of the redshift measurements is $\sim$275 km/s. Details on the observations and data reduction are given in \citet{lef04} and in \citet{lef05}. The VVDS-02h data sample extends over an area of 0.7$\times$0.7 sq.deg (targeted according to a one-, two- or four-pass strategy, {\it i.e.\ } giving any single galaxy in the field one, two or four chances to be targeted by the VIMOS masks; see fig. 12 in \citet{lef05}) and has a median depth of about $\,${\it z}$\,$$\sim$0.76. It contains 6582 galaxies with secure redshifts ({\it i.e.\ } redshifts determined with a quality flag$\ge$2; see \citet{lef05}) and probes a comoving volume (up to $z=1.5$) of nearly $1.5\cdot 10^6 h^{-3}$ Mpc$^{3}$. This volume has transverse dimensions $\sim 37 \times 37$ $h^{-1}$Mpc at $z=1.5$ and extends over a comoving length of 3060 $h^{-1}$Mpc in the radial direction. For the statistical analysis presented in this paper, we first define a sub-sample (VVDS-02h-4) including galaxies with redshift $\,${\it z}$\,$$<$1.5 and over the sky region (0.4$\times$0.4 deg$^2$) that was repeatedly covered by four independent VIMOS observations in each point.
Even if measured redshifts in the VVDS reach up to $\,${\it z}$\,$$\sim$5 and cover a wider area, these conservative limits bracket the range where we can sample the underlying galaxy distribution more densely and, thus, minimise biases in the reconstruction of the density field (see the analysis in \S 4.1). The VVDS-02h-4 subsample contains 3448 galaxies with secure redshifts (3001 with $0.4<z<1.5$), probes one-third of the total VVDS-02h volume, and is characterised by a redshift sampling rate of $\sim 30\%$ (i.e. on average about one in three galaxies with magnitude $I_{AB}\leq$24 has a measured redshift). This high spatial sampling rate is a critical factor for minimising biases in the reconstruction of the 3D density field of galaxies. To optimise the analysis of the associated probability density function, we further select only galaxies with absolute blue magnitude $M_B<-20+5\log h$. With this selection, we define two nearly volume-limited sub-samples in the redshift ranges $0.7<z<1.1$ and $1.1<z<1.5$ respectively. A discussion of possible effects of galaxy evolution on our results is presented in \S~4.3. \section{The galaxy density field at high redshift} The first large redshift surveys of the local Universe \citep[e.g.][]{dav81,gh89,gh91,s92a,dac94} showed that galaxies have a highly non-random spatial distribution and cluster in a hierarchical fashion. The corresponding three-dimensional maps reveal a complex web-like network of thin, filamentary structures connecting centrally condensed clusters of galaxies, punctuated by large, quasi-spherical, low-density voids. These structures are the outcome of more than 13 billion years of evolution of the small-amplitude fluctuations that we see reflected in the temperature anisotropy of the Cosmic Microwave Background (CMB) at $z\simeq 1100$ \citep{sper07}. Recent analyses (e.g. Tegmark et al.
2006) have shown the remarkable consistency between two-point statistics of the galaxy distribution at $z \sim 0$ and the CMB power spectrum, which probes matter clustering at recombination. Mapping the large-scale structure at $z \sim 1$ is thus crucial to further test the coherence of the gravitational instability picture at a time intermediate between the epoch of last scattering and today. In this section we present a reconstruction of the 3D galaxy density field, discussing first the methodology and summarising the techniques adopted to correct for observational selection effects. These are fully presented in Marinoni et al. (2005, hereafter Paper I) and Cucciati et al. (2006), to which the reader is referred for more details. \subsection{Density reconstruction method} \label{method} The continuous galaxy density fluctuation field \begin{equation} \delta_g({\bf r,R})=\frac{\rho({\bf r}, R)-\bar{\rho}}{\bar{\rho}} \end{equation} \noindent represents the dimensionless excess/deficit of galaxies on a scale $R$, at any given comoving position ${\bf r}$, with respect to the mean density $\bar{\rho}$. As suggested by Strauss and Willick (1995), we estimate the smoothed number density of galaxies brighter than $\mathcal{M}_c$ on a scale $R$, $\rho({\bf r}, R, <\mathcal{M}_c)$, by summing over an appropriately weighted convolution of Dirac-delta functions with a normalised Gaussian filter F \begin{equation} \rho({\bf r}, R, <\mathcal{M}_c)=\sum_i\frac{ \int_{0}^{\infty}\delta^{D}(u-|{\bf \Delta r}_i|/R) F(u)du}{S(r_i, \mathcal{M}_c)\Phi(m)\zeta(r_i,m) \Psi(\alpha,\delta)} \label{denesti} \end{equation} \begin{equation} F(u)=\big(2 \pi R^2 \big)^{-3/2} \exp \Big[ -\frac{1}{2} u^2\Big] \,\,\,\, . \end{equation} \noindent Here ${\bf \Delta r}=({\bf r_i}-{\bf r})$ is the separation between the galaxy positions and the location ${\bf r}$ where the density field is evaluated.
We compute the characteristic mean density at position ${\bf r}$ using equation (\ref{denesti}) by simply averaging the galaxy distribution in survey slices $r \pm R_s$, with $R_s = 400\,h^{-1}$Mpc. The four functions in the denominator of equation (\ref{denesti}) correct for various observational characteristics: \begin{itemize} \item $S(r_i,\mathcal{M}_c)$ is the distance-dependent selection function of the sample. This function is identically one when a volume-limited sample is used. When the full magnitude-limited survey ($17.5< I<24$ in our case) is used, however, this function corrects for the progressive radial incompleteness due to the fact that at any given redshift we can only observe galaxies in a varying absolute magnitude range. While the PDF of galaxy fluctuations will be derived from volume-limited samples, in the next section we shall make use of this function when reconstructing a minimum-variance 3D density map from the full VVDS survey. The actual values of $S(r,\mathcal{M}_c)$ are derived using the VVDS galaxy luminosity function (Ilbert et al. 2005), assuming a minimum absolute magnitude $\mathcal{M}_c = -15+5 \log h$ and accounting for its evolution as measured from the VVDS itself. A more detailed discussion of the derivation of the selection function can be found in Paper I. \item $\Phi(m)$ corrects for the slight bias against bright objects introduced by the slit positioning tool VMMPS/SPOC (Bottini et al. 2005). \item $\zeta(r_i,m)$ is the correction for the varying spectroscopic success rate as a function of the apparent $I_{AB}$ magnitude and of the distance of the object (see Ilbert et al. 2005). \item $\Psi(\alpha,\delta)$ is the angular selection function correcting for the uneven spectroscopic sampling of the VVDS on the sky (see Fig.~1 of Cucciati et al. 2006). It accounts for the different number of passes made by the VIMOS spectrograph in different sky regions (a number which is maximised in the 4-pass sub-area of the sample). \end{itemize}
\smallskip \begin{figure*} \begin{center} \includegraphics[width=160mm,angle=0]{fig1.ps} \caption{ The reconstructed density field for $0.4<z<1.4$, as traced by the galaxy distribution in the VVDS-Deep redshift survey to $I \leq 24$. This figure preserves the correct aspect ratio between transverse and radial dimensions. The mean inter-galaxy separation of this sample at the typical depth of the VVDS ($z=0.75$) is $4.6\,h^{-1}$Mpc, comparable to that of local redshift surveys such as the 2dFGRS. The galaxy density distribution has been smoothed using a 3D Gaussian window of radius $R=2\,h^{-1}$Mpc and noise has been filtered away using a Wiener filtering technique (see Strauss \& Willick 1995, Marinoni et al. 2005). Only fluctuations above a signal-to-noise threshold of $2$ are shown. The accuracy and robustness of the reconstruction methods have been tested using realistic mock catalogues \citep{pol05,mar05}.} \label{fig1} \end{center} \end{figure*} The analytical form of these selection functions is discussed in Cucciati et al. (2006). The underlying assumption in this reconstruction scheme is that the subset of observed galaxies (e.g. in the case of a flux-limited sample, those luminous enough to enter the sample at a given redshift) is representative of the full population. This assumption clearly neglects any dependence of clustering on luminosity and could bias the density field reconstructed from the pure flux-limited sample at different redshifts; for this reason, the quantitative measurements presented in this paper will all be based on quasi-volume-limited samples, limited to an absolute magnitude $M_B=-20+5\log h$. Finally, it should also be mentioned that in adopting a universal luminosity function we do not take into account the possible dependence of the luminosity function on morphological type and environment; this is, however, a second-order effect in this work.
The shot-noise error affecting the reconstructed field at different ${\bf r}$ is estimated by computing the square root of the variance \begin{equation} \displaystyle \epsilon({\bf r})=\frac{1}{\overline{\rho_g}}\Bigg[\sum_i \Bigg(\frac{F\big(\frac{|{\bf \Delta r}_i|}{R}\big)}{S(r_i,\mathcal{M}_c)\, \Phi(m_i)\, \zeta(r_i,m_i)\, \Psi(\alpha,\delta)}\Bigg)^2\Bigg] ^{1/2} \,\, . \label{shot} \end{equation} The amplitude of the shot noise increases as a function of redshift in a purely flux-limited survey. We deconvolve the signature of this noise from the density maps by applying the Wiener filter \citep[cf.][]{pre92,s92b} which provides the {\it minimum variance} reconstruction of the smoothed density field, given the map of the noise and the {\it a priori} knowledge of the underlying power spectrum \citep[{\it e.g.\ }][]{lah94}. For this we assume that the observed galaxy density field $\delg({\bf r})$, and the true (i.e. including all galaxies) underlying field $\delta_{T}({\bf r})$, both smoothed on the same scale, are related via \begin{equation} \delg({\bf r}) = \delta_{T}({\bf r}) + \epsilon({\bf r}), \end{equation} \noindent where $\epsilon({\bf r})$ is the local contribution from shot noise (see Eq. (\ref{shot})). The Wiener filtered density field, in Fourier space, is \begin{equation} {\tilde\delta}_{F}({\bf k}) = \mathcal{F}({\bf k}) {\tilde \delta}_g({\bf k})\,\, , \end{equation} \noindent where \begin{equation} \mathcal{F}({\bf k}) = { \vev{{\tilde\delta}_{T}^2({\bf k})} \over \vev{{\tilde\delta}_{T}^2({\bf k})} + (2\pi)^{3} P_{\epsilon}({\bf k})}\ . \label{Wiener} \end{equation} \noindent Here brackets denote statistical averages and $P_{\epsilon}({\bf k})=(2\pi)^{-3} \vev{|{\tilde\epsilon}^2({\bf k})|}$ is the power spectrum of the noise.
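Schematically, the Wiener filtering step of Eq. (\ref{Wiener}) multiplies each Fourier mode by $S/(S+N)$, the ratio of signal power to total power. A minimal Python sketch on a periodic cube follows (an illustration only: the $(2\pi)^3$ normalisation conventions and the anisotropic VVDS survey geometry are ignored, and the signal and noise power spectra are user-supplied assumptions):

```python
import numpy as np

def wiener_filter_field(delta_g, P_signal, P_noise, box_size):
    """Minimum-variance (Wiener) filtering of a smoothed density cube.
    P_signal and P_noise are callables returning the assumed signal and
    noise power as functions of |k| (with P_signal + P_noise > 0)."""
    n = delta_g.shape[0]
    delta_k = np.fft.fftn(delta_g)
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kk = np.sqrt(kx**2 + ky**2 + kz**2)
    S, N = P_signal(kk), P_noise(kk)
    W = S / (S + N)   # -> 1 where the signal dominates, -> 0 where noise wins
    return np.real(np.fft.ifftn(W * delta_k))
```

With zero noise power the filter is unity and the field is returned unchanged; where noise dominates, the corresponding modes are suppressed.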
Assuming ergodic conditions, this last quantity can be computed as $P_{\epsilon}({\bf k})=(2\pi)^{-3} |{\tilde\epsilon}({\bf k})|^2.$ The calculation of $\vev{{\tilde\delta}_{T}^2({\bf k})}$, taking into account the form of the window function $F$ and the peculiar VVDS survey geometry, is presented in Paper I. \subsection{A cosmographical tour up to $z=1.5$} We have first applied our reconstruction technique to the global flux-limited VVDS sample to build a visual three-dimensional map of galaxy density fluctuations to $z=1.5$ which exploits the full information content of the survey. The $I \leq 24$ sample is characterised by an effective mean inter-particle separation of $\vev{r} \sim 5.1\,h^{-1}$Mpc in the redshift range $0<z<1.5$. For comparison, this sampling is denser than that of the early CfA1 survey ($\vev{r}\sim 5.5\,h^{-1}$Mpc) used by \citet{dav81} to reconstruct the 3D density field of the local Universe ({\it i.e.\ } out to $\sim 80\,h^{-1}$Mpc). Also, at the median depth of our survey, {\it i.e.\ } in the redshift interval $0.7<z<0.8$, the mean inter-particle separation is $4.4\,h^{-1}$Mpc, a value nearly equal to that of the 2dFGRS at its median depth. The recovered galaxy density field is presented in Fig.~1. Fluctuations have been smoothed on a scale $R=2\,h^{-1}$Mpc. Only density contrasts with signal-to-noise ratio $S/N>2$ are shown. A remarkable feature of this ``{\em geographical}'' exploration of the Universe at early cosmic epochs is the abundance of large-scale structures similar in density contrast and size (at least in one direction) to those observed by local surveys. In particular, it is tempting to identify qualitatively a few filament-like density enhancements bridging more condensed structures along the line of sight, although the survey transverse size is still too small to fully sample their extent.
Nevertheless, it is interesting to notice that these apparently one-dimensional structures remain coherent over scales $\sim 100\,h^{-1}$Mpc, separating low-density regions of similar size. Figs.~1 and 2 visually confirm that the familiar web pattern observed in the local Universe is not a present-day transient phase of the galaxy spatial organisation but is already well-defined at $z\sim 1.5$, when the Universe was $\sim 30\%$ of its present age \citep[{\it e.g.\ }][]{lef96, ger05,sco07}. This implies that large-scale features of the galaxy distribution essentially reflect the long-wavelength modes of the initial power spectrum, in agreement with theoretical predictions of the CDM hierarchical scenario. Numerical simulations of large-scale structure formation in fact show that the present-day web of filaments and walls was already present, in embryonic form, in the overdensity pattern of the initial fluctuations, with subsequent linear and non-linear gravitational dynamics merely sharpening its features \citep[{\it e.g.\ }][]{bon96, spri05}. The limited angular size of the survey is exemplified by a dense ``wall'' at $z=0.97$ that stretches across the whole survey solid angle ($0.7 \times 0.7$ deg) (see Fig.~2). This two-dimensional structure is coherent over more than $\sim 30\,h^{-1}$Mpc (comoving) in the transverse direction, is only $\sim 10\,h^{-1}$Mpc thick along the line of sight, and has a mean overdensity $\delta_g=2.4 \pm 0.3$. This makes it similar to the largest and rarest structures observed in the local Universe, such as the Shapley concentration \citep[e.g.][]{scar89,bar00}. By applying a Voronoi-Delaunay cluster finding code \citep{mardav02}, we find 10 distinct groups in this structure, with between 5 and 12 galaxy members each (down to the limiting magnitude $I=24$), for a total of 164 galaxies.
If one considers the evolution of {\it mass} fluctuations in the standard $\Lambda$CDM model, the probability of finding a structure with a similar {\it mass} overdensity at such early times ($0.9<z<1$) would be nearly 4 times smaller than today: one such {\it mass fluctuation} would be expected in a volume of $\sim 3\cdot10^{6}\,h^{-3}$Mpc$^3$, {\it i.e.\ } nearly 5 times larger than our surveyed volume up to $z \sim 1$. In fact, as we shall describe in section 3.3, finding such a {\it galaxy} overdensity is not so unusual: it is clear evidence that the {\it biasing} between galaxies and matter at these epochs is higher than today, so that fluctuations in the galaxy distribution are strongly enhanced with respect to those in the mass. \begin{figure*} \begin{center} \includegraphics[width=110mm,angle=-90]{fig2.ps} \caption{ Density distribution and properties of a large-scale planar structure at $z=0.97$, which completely fills the VVDS-02h field-of-view. } \label{fig2} \end{center} \end{figure*} \subsection{Evolution of the PDFs of galaxy fluctuations in the VVDS} Several approaches may be used to characterise in a quantitative way the distribution of galaxy fluctuations $\delg$ shown in Fig.~1. A complete specification of the overdensity field may be given by the full set of galaxy N-point correlation functions \citep{dp77}. This approach has been explored and routinely applied over the past decade as better and deeper redshift surveys have become available. An alternative description may instead be given in terms of the probability distribution function (PDF) of a random field. By definition, the PDF of cosmological density fluctuations describes the probability of having a fluctuation in the range $(\delta, \delta+d\delta)$, within a spherical region of characteristic radius $R$ randomly located in the survey volume.
In principle, it encodes all the information contained within the full hierarchy of correlation functions, and provides insights into the time evolution of density fluctuations. This definition can be applied either to the distribution of galaxies, characterising their number density fluctuations, or to the dark-matter-dominated mass distribution. For the latter case, the expected shape of the PDF can be predicted as a function of redshift given a cosmological model, at least for large-scale fluctuations; this can be done either analytically (see below) or using numerical simulations. On the observational side, in surveys of the local Universe this fundamental statistic has often been overlooked (but see Marinoni \& Hudson 2002, Ostriker et al. 2003), and only recently have deep redshift surveys reached volumes sufficient to allow these measurements to be extended back in time. In Paper I, we have discussed and tested in detail the methodology to estimate the PDF from samples of this kind. In particular we have checked the robustness of the reconstruction against the specific VVDS-02h survey selection function, shot-noise errors and other observational biases. We used fully realistic mock samples of the VVDS-02h survey data and showed that, once the smoothing scale $R$ is larger than the mean inter-galaxy separation, the overall shape of the reconstructed PDF is an unbiased estimate of that of the complete parent galaxy population. In particular, we showed that for redshifts up to $z=1.5$ the VVDS-02h sky coverage and sampling rate are sufficient for obtaining a reliable reconstruction of the PDF shape (in both low- and high-density regions) on scales $R \ge 8\,h^{-1}$Mpc. Clearly, the degree to which the PDF measured from this sample is a fair representation of the ``universal'' PDF up to $z=1.5$ is a separate, yet critical question.
A difference is naturally expected due to fluctuations on scales larger than the volume probed (``cosmic variance''). An estimate of this effect is actually included in our error bars, as these were drawn from the scatter among our set of VVDS mock samples. We have therefore applied the estimator of Eq. \ref{denesti} and the full de-noising technique described in \S~\ref{method} to the two luminosity-limited sub-samples of our survey described in \S~\ref{data}, reconstructing the PDF of galaxy fluctuations in top-hat spheres of radius $R=10\,h^{-1}$Mpc at two different epochs ($0.7 < z < 1.1$ and $1.1 \leq z < 1.5$). The typical luminosity of the galaxies selected in these two intervals ($M_{B} \leq -20+5\log h$) corresponds to a median luminosity $L_{B}\simeq 2L^{*}_{B}$ at $z\sim 0$ (i.e. the same median luminosity as the whole 2dFGRS sample \citep{ver02}). As discussed previously, the use of luminosity-selected samples virtually eliminates distance-dependent shot-noise contributions (clearly neglecting the residual evolution within the two redshift bins, which is well within the errors). The results are shown in Fig.~3. The measured PDFs in Fig.~3 show several interesting features. First, as time passes (redshift decreases) the maximum of the PDF shifts to smaller $\delta$-values; second, the low-density tail is enhanced, with more low-density regions appearing at lower redshifts. Quantitatively, this implies in particular that the probability of having an under-dense ($\delta_{g}<0$) region of radius $R=10\,h^{-1}$Mpc at $0.7 \leq z < 1.1$ is nearly $10\%$ larger than at earlier times ($1.1 \leq z <1.5$). \subsection{The expected PDF of mass fluctuations in the mildly non-linear regime} The shape of the PDF of the galaxy overdensities is strongly dependent on the non-linear effects implicit both in the gravitational growth and in the physical mechanisms responsible for galaxy formation \citep[e.g.][]{wat01}.
Initial density fluctuations are normally assumed to have a Gaussian PDF; this is then modified by the action of gravity and, in the case of the galaxy field, by the way galaxies trace the underlying mass (the {\em biasing} scheme). If galaxies were faithful and unbiased tracers of the underlying mass, the peak shift and the development of a low-density tail we observe in Fig.~3 could be naturally interpreted as the key signature of dynamical evolution purely driven by gravity. In fact, gravitational growth in an expanding Universe makes low-density regions propagate outwards and become more common as time goes by, while at the same time the high-density tail increases. If this interpretation is correct, we expect the PDF of galaxy overdensities to coincide with the PDF of mass fluctuations in each redshift range, once they are normalised to the observed clustering at $z\sim0$, where we know that $L\sim 2L^*$ galaxies trace the mass \citep{ver02}. Let us verify whether this is the case by first summarising the main formalism to compute the PDF of mass fluctuations in a given cosmological scenario. In hierarchical models, it is well established from numerical simulations that when structure growth reaches the nonlinear regime on a scale $R$, the PDF of mass density contrasts in comoving space is well described by a lognormal distribution \citep{col91,kof94,tay00,kay01}, \begin{equation} f_R(\del)=\frac{(2 \pi \omega^2_R)^{-1/2}}{1+\del} \exp \Big\{ -\frac{ [\ln(1+\del) +\omega^2_R/2]^2}{2\,\omega^2_R} \Big\}\,\,\, .
\label{teopdf} \end{equation} \noindent This is fully characterised by a single parameter $\omega_R$, related to the variance of the $\del$-field on a scale $R$ as \begin{equation} \omega^2_R=\ln [1+\vev{\del^2}_R] \label{omega} \end{equation} At high redshifts, the variance $\sig^2_{R} \equiv \vev{\del^2}_R$ over sufficiently large scales $R$ (those explored in this paper) is given in the linear theory approximation by \begin{equation} \sig_R(z)=\sig_R(0) D(z) \label{siglt} \end{equation} \noindent where $D(z)$ is the linear growth factor of density fluctuations (normalised to unity at $z=0$), \begin{equation} D(z) = \exp\Big[-\int_{0}^{z}{\rm f}(z')\,d\ln(1+z')\Big] \,\,\,\, . \end{equation} \noindent In the standard $\Lambda$CDM cosmological model, the expression for the logarithmic derivative of the growth factor, ${\rm f}=d\log D/d \log a$ (with $a=(1+z)^{-1}$), can be approximated to excellent accuracy as \[{\rm f}(z) \sim \Omega_{m}^{\gamma}(z)\] \noindent where $\gamma \simeq 0.55$ (Wang \& Steinhardt 1998, Linder 2005) and \[\Omega_m(z)=\Omega_m^{0}\frac{(1+z)^3}{E^2(z)}\] \[E^2(z)=\Omega_m^0(1+z)^3+\Omega_{\Lambda}.\] The lognormal approximation formally describes the distribution of matter fluctuations computed in real comoving coordinates. By contrast, the PDF of galaxies is observationally derived in redshift space, where its shape is distorted by the effects of peculiar motions \citep[e.g.][]{mar98,nature07}. To map the mass overdensities properly into galaxy overdensities, the mass and galaxy PDFs must be computed in a common reference frame. It has been shown by \citet{sbd00} that an optimal strategy to derive galaxy biasing is to compare both mass and galaxy density fields directly in redshift space. Implicit in this approach is the assumption that mass and galaxies are statistically affected in the same way by gravitational perturbations, i.e. that there is no velocity bias in the motion of the two components.
The relation between the variances measured in real and redshift comoving space is \begin{equation} \sig^z_R(z)=p(z)\sig_R(z) \label{sigcor} \end{equation} \noindent where $p(z)$ is a redshift-dependent correcting factor which takes into account the average contribution of the linear redshift distortions induced by peculiar velocities \citep{kai87}. Its expression, in the high-redshift regime, is given by \citep{hamilton,mar05} \begin{equation} p(z)=\big[1+\frac{2}{3}{\rm f}(z)+\frac{1}{5}{\rm f}^{2}(z)\big]^{1/2}. \end{equation} We have used this formalism to compute the PDF expected for mass fluctuations in the redshift ranges explored using our galaxy samples. This is given, for the two ranges, by the curves in the top panel of Fig.~3. The evident discrepancy between the galaxy and mass PDFs indicates that the observed evolution cannot be solely the product of gravitational growth (in the adopted cosmological model), but that a time-evolving bias between the galaxy and mass density fields is needed: at high redshifts and on large scales, galaxy overdensities trace the underlying pattern of dark matter fluctuations in a more strongly biased way. In the following section we shall summarise our current knowledge of the properties and evolution of the biasing function and show how the presence of a non-linear bias is a necessary ingredient for a theoretical understanding of the evolution of the PDF in Fig.~3. This will in particular provide us with the necessary background to interpret the evolution of the low-order moments of the PDF at different redshifts, which is our aim in this paper.
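The ingredients entering the mass PDF prediction, namely the growth factor $D(z)$ of Eq. (\ref{siglt}), the redshift-distortion factor $p(z)$, and the lognormal form of Eq. (\ref{teopdf}), are straightforward to evaluate numerically. The sketch below assumes illustrative flat $\Lambda$CDM parameters ($\Omega_m^0=0.3$, $\Omega_\Lambda=0.7$, $\gamma=0.55$), not values fitted in this work:

```python
import numpy as np
from scipy.integrate import quad

OM0, OL0 = 0.3, 0.7   # illustrative flat LambdaCDM parameters (assumed)

def f_growth(z, gamma=0.55):
    """Logarithmic growth rate f(z) ~ Omega_m(z)^gamma."""
    Ez2 = OM0 * (1 + z) ** 3 + OL0
    return (OM0 * (1 + z) ** 3 / Ez2) ** gamma

def D(z):
    """Linear growth factor, normalised to D(0) = 1."""
    integral, _ = quad(lambda zp: f_growth(zp) / (1 + zp), 0.0, z)
    return np.exp(-integral)

def p(z):
    """Linear redshift-distortion correction to sigma."""
    f = f_growth(z)
    return np.sqrt(1 + 2 * f / 3 + f ** 2 / 5)

def lognormal_pdf(delta, sigma):
    """Lognormal PDF of mass fluctuations, omega^2 = ln(1 + sigma^2)."""
    w2 = np.log(1 + sigma ** 2)
    x = np.log(1 + delta) + w2 / 2
    return np.exp(-x ** 2 / (2 * w2)) / ((1 + delta) * np.sqrt(2 * np.pi * w2))
```

The lognormal form integrates to unity over $\delta \in (-1,\infty)$ and has zero mean by construction of the $\omega_R^2/2$ offset.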
\begin{figure} \includegraphics[width=74mm,height=71mm, angle=0]{fig3.ps} \includegraphics[width=74mm,height=71mm, angle=0]{fig4.ps} \caption{ The PDF of galaxy fluctuations (in units $y=1+\delta$) for VVDS galaxies with $M_{B} \leq -20+5 \log h$ within two independent volumes, corresponding to different cosmic epochs: $0.7 < z <1.1$ (blue shaded histogram), and $1.1 \leq z <1.5$ (green shaded histogram). The galaxy PDF has been reconstructed using a top-hat smoothing window of comoving size $R=10\,h^{-1}$Mpc. The histograms actually correspond to the distribution function $G(y) = \ln(10)\, y\, g(y)$ because the binning is done in $\log(y)$. The two observed histograms are reproduced in both the upper and lower panels. They are compared to the theoretical predictions for the PDF of, respectively, mass fluctuations (top, from Eq. \ref{teopdf}) and of {\em galaxy} fluctuations as inferred from Eq. \ref{fife} using the non-linear biasing function measured from the VVDS (bottom). The blue and red lines correspond to the higher- and lower-redshift samples respectively. } \label{fig3} \end{figure} \subsection{Evolution and non-linearity of biasing} Biasing lies at the heart of any interpretation of large-scale structure in terms of theoretical models. Structure formation theories predict the distribution of mass; the role of biasing is thus pivotal in mapping the observed light distribution back onto the theoretical model. In our case we need to disentangle the imprint of biasing from that of pure gravity in the evolution of the galaxy PDF. In Paper I we inferred the biasing relation $\delg=\delg(\del)=b(\delta)\delta$ between mass and galaxy overdensities from their respective probability distribution functions $f(\del)$ and $g(\delg)$.
Assuming a one-to-one mapping between mass and galaxy overdensity fields, conservation of probabilities implies \citep[e.g.][]{sbd00,wild} \begin{equation} g(\delta_g)d\delta_g=f(\delta)d\delta \label{fife} \end{equation} This approach requires assuming a cosmological model (the standard $\Lambda$CDM model in our case) from which to compute $f(\delta)$, the mass PDF. The advantage over other methods is that we can explore the functional form of the relationship $\delg=b(z,\del, R) \del$ over a wide range of mass density contrasts, redshift intervals and smoothing scales $R$ without imposing any {\it a priori} parametric functional form for the biasing function. Note that, by definition, this scheme is ineffective in capturing information about possible stochastic properties of the biasing function. The numerical solution $\delta_g=\delta_g(\delta)$ of Eq. \ref{fife} maps the mass PDF (solid lines in the top panel of Fig.~3) into the galaxy PDF (solid lines in the bottom panel of Fig.~3) and can be analytically approximated using a Taylor expansion \citep{fg93} \begin{equation} \delta_g(\delta)=\sum_{k=0}^{n}\frac{b_{k}(z)}{k!}\delta^{k}, \label{nlbf} \end{equation} \noindent where the coefficients $b_k$ depend on redshift. We consider this power series only to second order, and fit the numerical solution for the biasing function leaving $b_0$ as a free parameter. Avoiding setting $b_0$ through the integral constraint ($\vev{\delta_g}=0$) allows us to account for possible (un-modelled) contributions from higher-order moments of the expansion. This approach has the advantage of minimising biases in the estimates of the lower moments of the expansion, specifically $b_1$ and $b_2$. The key result of Paper I was to show that galaxy biasing is poorly described in terms of a single scalar and is better characterised by a more sophisticated representation.
Specifically, always considering a scale $R=10\,h^{-1}$Mpc, the ratio between the quadratic and linear bias terms has been evaluated in four different high-redshift intervals (see Table 2 of Paper I). When averaged over the full redshift baseline $0.7<z<1.5$, this ratio turns out to be \begin{equation} \left\langle \frac{b_2}{b_1} \right\rangle=-0.19 \pm 0.04 \label{rationl} \end{equation} \noindent i.e. different from zero at more than the 4$\sigma$ confidence level. This means that -- at least over the redshift range and scales considered here -- the level of biasing depends on the underlying value of the mass density field. In other words, the way galaxies are distributed in space depends in a non-linear manner on the local amplitude of dark matter fluctuations. The measurement of a non-linear term in the biasing relation is fully consistent with a parallel analysis of the hierarchical scaling of the N-point correlation functions in the same VVDS sample (Cappi et al. 2008). These results confirm a generic prediction of hierarchical models of galaxy formation \citep[{\it e.g.\ }][]{som01}. It is relevant to compare them to estimates of the bias function at the current epoch. Early work indirectly suggested that the biasing function should have a non-negligible non-linear component also at $z\sim 0$. Comparing the two-point correlation functions and the normalised skewness of SSRS2 galaxies, Benoist et al. (1999) showed that the relative bias between galaxies of different luminosities is non-linear, which indirectly indicates that (at least for luminous galaxies) the bias with respect to the dark matter must be non-linear as well. A similar analysis was performed by Baugh et al. (2004) and Croton et al. (2004) on the 2dFGRS, finding results consistent with Benoist et al.; finally, Feldman et al. (2001) and Gazta\~{n}aga et al.
(2005) directly measured the three-point correlation function (in both Fourier and real space) for the IRAS and 2dFGRS samples respectively and found evidence for $b_2 < 0$. On the other hand, these results seem to be inconsistent with another analysis of the 2dFGRS performed using the bi-spectrum \citep{ver02}. Hikage et al. (2005), analysing the SDSS galaxies with a Fourier-phase technique, also conclude that the bias in this survey is essentially linear. If one ignored the other independent analyses of the 2dFGRS, it could be speculated that the large-scale non-linear term that we detect at $z>0.7$ is suppressed as a function of cosmic time; this is however not supported by the results of numerical experiments \citep{som01}. Interestingly, we note that when compared to the local non-linear measurements ($b_2/b_1\sim -0.35$) \citep{fel01,gaz05}, our estimate suggests that the amplitude of the quadratic term $b_2/b_1$ decreases (in absolute value) as a function of redshift, a result in qualitative agreement with indications from simulations. It therefore seems more likely that differences among the reconstruction methods used (with different sensitivities to higher-order terms in Eq. \ref{nlbf}) explain the discrepant results at $z\sim 0$. We will show in the next section (\S 4) how the self-consistency of the evolution of the variance and skewness of galaxy counts with redshift indeed requires the presence of a non-linear biasing component. The information contained in the non-linear function of Eq. \ref{nlbf} can be compressed into a single scalar term that can be used to interpret the evolution of two-point statistics (the correlation function) as well as the variance of the galaxy density field (see \S 4.2). Since, by definition, $\vev{b(\del) \del}=0$, the most interesting linear bias estimators are associated with the second-order moments of the PDFs, {\it i.e.\ } the variance $\vev{\delg^2}$ and the covariance $\vev{\delg\,\del}$.
Following the prescriptions of \citet{del99}, we characterise the biasing function as follows \begin{equation} \bl^2 \equiv \frac{\vev{b^2(\del)\, \del^2}}{\vev{\del^2}} \label{linb} \end{equation} \noindent where $\bl^2$ is an estimator of the linear biasing parameter defined, in terms of the two-point correlation function, as $\xi_g=b_L^2 \xi$. We evaluate Eq. \ref{linb} using Eq. \ref{nlbf} with parameters $b_i(z)$ estimated locally by Verde et al. (2002) and in the redshift range $0.4<z<1.5$ by Marinoni et al. (2005). The best-fitting phenomenological model describing the redshift scaling of the linear biasing parameter for a volume-limited population of ``normal'' galaxies with median luminosity $L \sim 2L^{*}(z=0)$ is \[b_{L}(z)=1+(0.03\pm0.01) (1+z)^{3.3\pm0.6}\] While today $\sim 2L^{*}$ galaxies trace the underlying mass distribution on large scales \citep{lah02, ver02, gaz05}, in the past the two fields were progressively more dissimilar and the relative biasing systematically higher. In Paper I we showed how this observed redshift trend compares to different theoretical models for biasing evolution, i.e. a ``galaxy conserving'' model (Fry et al. 1996), a ``halo merging'' model (Mo \& White 1996) and a ``star forming'' model (Tegmark \& Peebles 1998). \section{Testing gravitational instability with the low-order moments of the PDF} Having decoupled biasing effects from the purely gravitational evolution of the galaxy PDF, we now have all the ingredients to use the latter quantity to test the consistency of some general predictions of the GIP. The evolution of the low-order statistical moments of the galaxy PDF (specifically, its second and third moments) can be compared on large scales with the analytical predictions of linear and second-order perturbation theory, respectively.
\subsection{Estimating the moments from redshift survey data} Following standard conventions, we define the second- and third-order moments, on a scale $R$, of a continuous, zero-mean overdensity field as \begin{equation} \vev{\delg^2}_{R} = \int_{-1}^{\infty} \delg^2 g_{R}(\delg) d \delg \label{var} \end{equation} \noindent and \begin{equation} \vev{\delg^3}_{R}= \int_{-1}^{\infty} \delg^3 g_{R}(\delg) d \delg. \label{skew} \end{equation} Note that the moments cannot be estimated as ensemble averages over the reconstructed PDF. In fact, this last quantity has been reconstructed using the Wiener filtering technique, which minimises the shot-noise contribution (\S~\ref{method}) but gives a biased estimate of the density field moments (via Eqs. \ref{var} and \ref{skew}) as it requires an input power spectrum (and therefore {\it assumes} a second moment). A standard practical way to estimate the moments is to throw spherical cells at random within the galaxy distribution and reconstruct the count probability distribution function $P_k=n_k/N$ (where $n_k$ is the number of cells containing $k$ galaxies, out of a total of $N$ cells). The moments are then estimated as \begin{equation} \vev{\delg^p}=\bar{N}^{-p} \sum_{k=0}^{\infty} P_k (k-\bar{N})^p \end{equation} where $\bar{N}=\sum_{k=0}^{\infty}k P_k$. The quantities we are interested in are the cumulants $\vev{\delta^{p}}_c$ of the one-point density PDF. For a density field smoothed with a top-hat window, the $p$-order cumulant \begin{equation} \vev{\delg^{p}}_c=\frac{1}{v^{p}_{R}}\int \xi_{p}(r_1,r_2...r_p)d^3r_1d^3r_2...d^3r_p \end{equation} is the average of the $p$-point reduced correlation function over the corresponding cell of volume $v_R$ (from now on we will only consider the scale $R=10\,h^{-1}$Mpc and we will drop the suffix $R$, unless we need to emphasise it).
This is defined as the connected part of the $p$-point correlation function $\vev{\delg(r_1)\delg(r_2)...\delg(r_p)}$, in such a way that for $p>2$, $\xi_p=0$ for a Gaussian field. Since the galaxy distribution is a discrete process (Eq.~\ref{denesti} is a sum over Dirac delta functions) and since, by definition, the density contrast has a zero mean, the connection between low-order cumulants and moments is given by \begin{equation} \vev{\delg^2}_c=\vev{\delg^2}-\frac{1}{\bar{N}} \label{varp} \end{equation} \begin{equation} \vev{\delg^3}_c=\vev{\delg^3}-3\frac{\vev{\delg^2}_c}{\bar{N}}-\frac{1}{\bar{N}^2} \end{equation} These relations account for discreteness effects using the Poisson shot-noise model \citep[{\it e.g.\ }][]{pee80,fry85}. Possible biases introduced by this technique are discussed by Hui \& Gazta\~{n}aga (1999), while an alternative approach is detailed by Kim \& Strauss (1998). Finally, it is necessary to devise a strategy to compensate for the fact that a cell will sample regions that have varying angular and spectroscopic completeness and which may even span the survey boundary. For this reason the galaxy counts are scaled up in proportion to the degree of incompleteness in the cell. This is done by weighting galaxy counts using the selection functions $\Phi(m)$, $\zeta(r, m)$, and $\Psi(\alpha,\delta)$ defined in \S~\ref{method}. Additionally, although our reconstruction scheme accounts for the non-uniform VVDS angular coverage, we further restrict ourselves to counts in spheres having at least 70\% of their volume in the denser 4-pass region, in order to avoid possible edge effects. We remark that moments are estimated from virtually volume-limited samples, as defined in \S~\ref{data}. As a consequence, the radial selection function is constant and any variations in the density of galaxies are due only to large-scale structure.
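The counts-in-cells estimator with the Poisson shot-noise corrections above can be sketched as follows (a simplified illustration that ignores the completeness weighting and edge effects just discussed):

```python
import numpy as np

def cic_cumulants(counts):
    """Shot-noise-corrected second and third cumulants of delta_g from
    counts-in-cells, using the Poisson shot-noise model."""
    counts = np.asarray(counts, dtype=float)
    Nbar = counts.mean()
    delta = counts / Nbar - 1.0         # delta_g in each cell
    m2 = np.mean(delta ** 2)            # raw second moment
    m3 = np.mean(delta ** 3)            # raw third moment
    c2 = m2 - 1.0 / Nbar                            # corrected variance
    c3 = m3 - 3.0 * c2 / Nbar - 1.0 / Nbar ** 2     # corrected third cumulant
    return c2, c3
```

Applied to purely Poisson counts (no clustering), both corrected cumulants vanish up to sampling noise, which is exactly what the corrections are designed to achieve.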
\subsection{The evolution of the {\it rms} and skewness of galaxy fluctuations} Since in perturbation theory higher order cumulants are predicted to be a function of the variance, it is useful, in the following, to define the normalized skewness \begin{equation} S_3=\vev{\delg^3}_c/\sigma^4 \,\, , \end{equation} where the shot-noise corrected variance $\sigma^2$ is given by Eq. \ref{varp}. Fig.~\ref{fig5} shows the evolution of the {\it rms} fluctuation and the normalized skewness on a scale $R=10$$ h^{-1}$Mpc, as measured from the VVDS volume-limited sub-samples. Errors have been computed using the 50 fully-realistic mock catalogs of VVDS-Deep discussed in Pollo et al. (2005). This allows us to include an estimate of the contribution of cosmic variance, which represents the most significant term in our error budget. The top panel of Fig.~\ref{fig5} shows that the square root of the variance, which measures the {\it rms} amplitude of fluctuations in galaxy counts, is to a good approximation constant over the full redshift baseline investigated: in redshift space, the mean value of $\sigma_g$ for our volume-limited galaxy samples is $0.78\pm 0.09$ for $0.7<z<1.5$. This nearly constant value is also consistent with the value estimated at $z\sim 0.15$ from the 2dF galaxy redshift survey \citep{cro04}, which is also reported in the same figure. This means that over nearly 2/3 of the age of the Universe the observed fluctuations in the galaxy distribution look almost frozen, despite the underlying gravitational growth of mass fluctuations. This quantifies the visual impression we had from Fig.~1, that the distribution of galaxies is as inhomogeneous at $z\sim 1$ as it is today. \begin{figure} \includegraphics[width=90mm,angle=0]{fig5.ps} \caption{Evolution of the {\it r.m.s.} deviation (top) and skewness (bottom) of the PDF of galaxy fluctuations on a scale $R=10$$ h^{-1}$Mpc.
The filled squares correspond to two volume-limited samples from the VVDS with $M_{B}< -20+5\log h$ covering the redshift intervals indicated by the shaded regions. Triangles correspond to the 2dFGRS measurements at $z\simeq 0.15$ \citep{cro04}, from a sample including similarly bright galaxies. Error bars give 68\% confidence errors, and, in the case of VVDS measurements, include the contribution from cosmic variance. The dashed lines in both panels show the theoretical predictions for the evolution of the variance (Eq. \ref{var1}) and skewness (Eq. \ref{skew2}) inferred using VVDS measurements of biasing. Predictions for the skewness (based on the $(b_1(z),b_2(z))$ measurements in the redshift range $0.7<z<1.5$ quoted in Table 2 of Paper I) have been extrapolated to $z\sim 0$ using the local (2dFGRS) biasing measurements of Verde et al. (2002) (linear bias, dotted line) and of Gazta\~{n}aga et al. (2005) (quadratic bias with $b_2/b_1=-0.34$, dot-dashed line). } \label{fig5} \end{figure} The third moment, which measures asymmetries between under- and over-dense regions, indicates that the galaxy density field was non-Gaussian on large scales (10 $h^{-1}$Mpc) even at these remote epochs ($\sim 4 \sigma$ detection). In particular, we find an indication of an increase of the normalised skewness with cosmic time when comparing the VVDS values to the local 2dFGRS measurement. Using the measured bias evolution, we can translate the specific predictions of the GIP for the variance and skewness of the matter density field into the corresponding observed quantities. Using linear perturbation theory, the scaling of the {\it rms} of number density fluctuations is \begin{equation} \sigma_{g}(z) \sim b_{L}(z)D(z)p(z)\sigma(0) \,\,\, . \label{var1} \end{equation} In a Universe in which primordial density fluctuations were Gaussian, the non-linear nature of gravitational dynamics leads to the emergence of a non-trivial skewness of the local density PDF.
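The linear growth factor $D(z)$ entering Eq. (\ref{var1}) can be evaluated numerically. The sketch below is our own illustration, not part of the analysis code: it assumes a radiation-free Friedmann model (matter, curvature, and a cosmological constant only) and uses the standard Heath (1977) integral; the default parameter values are placeholders, not fits to VVDS data.

```python
import numpy as np

def growth_factor(z, Om=0.3, Ol=0.7, zmax=1000.0, n=40000):
    """Linear growth factor D(z), normalised to D(0)=1, from the
    Heath (1977) integral  D(z) ~ H(z) * int_z^inf (1+x)/H(x)^3 dx,
    valid for matter + curvature + Lambda models (radiation neglected)."""
    def hubble(x):                                  # H(x)/H0
        return np.sqrt(Om * (1 + x)**3 + (1 - Om - Ol) * (1 + x)**2 + Ol)
    def unnorm(x):
        grid = np.linspace(x, zmax, n)
        f = (1.0 + grid) / hubble(grid)**3
        integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))  # trapezoid rule
        return hubble(x) * integral
    return unnorm(z) / unnorm(0.0)
```

A useful check is the Einstein--de Sitter limit ($\Omega_m=1$), where the integral gives exactly $D(z)=1/(1+z)$.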
Within the framework of gravitational perturbation theory, the first non-vanishing term describing the evolution of the skewness of a top-hat filtered, initially Gaussian matter density field arises at second order. According to non-linear, second-order perturbation theory, the skewness of the mass distribution is approximately independent of time, scale, density, or geometry of the cosmological model. Assuming only that the initial fluctuations are small and quasi-Gaussian and that they grow via gravitational clustering, one derives that, in redshift-distorted space \citep{pee80, jbc93, ber94, hiv95}, \begin{equation} S_{3}\sim \frac{35.2}{7}-1.15(n+3) \end{equation} \noindent where $n$ is the effective slope of the power spectrum on the scales of interest (in our case, since $R=10$$h^{-1}$Mpc, $n \approx -1.2$ \citep[e.g.][]{ber02}). Substituting the evolution of bias in the second-order approximation, the evolution of the observed skewness is given by \citep{fg93} \begin{equation} S_{3,g}\sim b_{1}(z)^{-1} \Big[S_3 + 3 \frac{b_2(z)}{b_1(z)} \Big]. \label{skew2} \end{equation} The curves in both panels of Fig.~\ref{fig5} show that equations (\ref{var1}) and (\ref{skew2}) reproduce extremely well the evolution of the variance and skewness observed within the VVDS. The mass PDF is a one-parameter family of curves, completely specified once the linear evolution model for the mass variance $\vev{\del^2}$ is supplied. This implies that our {\it non-linear} biasing estimate is fully independent of the predictions of higher-order perturbation theory. In contrast, non-linear biasing at $z=0$ is inferred by directly matching 3-point galaxy statistics with the corresponding mass statistics derived from weakly non-linear perturbation theory (e.g. Verde et al. 2000, Gazta\~{n}aga et al. 2005).
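The numerical content of these two relations is easy to check. A short sketch (the bias values used below are illustrative placeholders, not the measured VVDS values of Table 2 of Paper I):

```python
def s3_matter_redshift_space(n_eff):
    """Second-order PT skewness of the mass field in redshift space,
    S3 ~ 35.2/7 - 1.15 (n_eff + 3), with n_eff the effective spectral slope."""
    return 35.2 / 7.0 - 1.15 * (n_eff + 3.0)

def s3_galaxies(s3_mass, b1, b2):
    """Quadratic-bias transformation (Fry & Gaztanaga 1993):
    S3_g = (S3 + 3 b2 / b1) / b1."""
    return (s3_mass + 3.0 * b2 / b1) / b1
```

For $n=-1.2$ one finds $S_3 \simeq 2.96$; an unbiased tracer ($b_1=1$, $b_2=0$) inherits this value, while a linearly biased tracer with $b_1>1$ and $b_2=0$ shows a suppressed skewness.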
As a consequence, the agreement we find between predicted and observed third-order moments is not a straightforward consequence of the method used to derive the biasing function. These results provide an indication of the consistency, at $z=1$, of some constitutive elements of the standard picture of gravitational instability from Gaussian initial conditions. Concerning the local measurements from the 2dFGRS, the predicted scaling for the skewness continues to show very good agreement if the local, non-linear measurement of Gazta\~{n}aga et al. (2005) is considered. The value of $S_{3,g}$, however, cannot be consistent with GIP predictions if, in the local universe, the simple linear biasing measurement of Verde et al. (2002) (i.e. $b_2=0$) is adopted. \subsection{Effect of galaxy luminosity evolution} In the above comparison of galaxy samples at three different epochs, we have so far neglected an important point. Galaxy luminosity evolves significantly between $z\sim 0$ and $z \sim 1$, with a mean brightening of at least $1$ magnitude for an average spectral type (Ilbert et al. 2005). Thus, the contribution to the clustering signal at progressively earlier epochs may not be due to the progenitors of the galaxies that are sampled at later times in the same luminosity interval. Luminosity evolution between $z=1$ and $z=1.5$ is more uncertain, but certainly smaller due to the shorter time interval; the brightening between the two VVDS sub-samples considered is not expected to be very significant for these very luminous objects. To compare galaxies at $z=1$ and $z\sim 0$ one should in principle compare our high-redshift results with those of local galaxies about one magnitude fainter. According to 2dFGRS results, this means shifting the local estimates of the variance to a slightly smaller value (Norberg et al. 2002) and leaving the skewness measurement unchanged within the quoted error bars (Croton et al. 2004).
These changes would only reinforce our conclusions about the evolution of the low-order moments of the PDF of galaxy fluctuations. In particular, since a fainter sample has a smaller bias threshold, the locally measured skewness would make the discrepancy with GIP predictions for a simple linear biasing model even stronger. \section{Conclusions} The results presented in this paper provide the first direct evidence at $z\sim1$ for the consistency of the GIP hypothesis as described in the framework of general relativity. The standard theory of structure formation via gravitational instability successfully explains the present-day statistics (e.g. Tegmark et al. 2006) and dynamics (e.g. Peacock et al. 2001) of large-scale structures. We have shown that observations are fully consistent with these predictions over the entire redshift baseline $0<z<1.5$ once the biasing between the galaxy and matter distributions is properly described. In Paper I we showed that it is necessary to include a small (10\%) yet crucial non-linear component to accurately account for the observed probability distribution function of galaxy overdensities. Here we have shown that this component is also necessary to understand the observed evolution of the low-order moments of the galaxy overdensity field. More specifically, our analysis of the 3D density fluctuation field traced by a volume-limited sample of VVDS galaxies (with $M_{B} \leq -20 +5 \log h$) at different epochs unambiguously reveals the time-dependent effects of gravitational evolution: \indent a) underdense regions progressively occupy a larger volume fraction as a function of cosmic time, as expected from gravitational growth in an expanding background; \indent b) the second moment of the field traced by this ``normal'' population of galaxies (with median luminosity $\sim 2L^*$) is statistically consistent with the local ($z \sim 0$) estimate for similarly luminous galaxies, i.e.
it is approximately constant over the full redshift baseline $0<z<1.5$. This implies that the apparent inhomogeneity in the galaxy distribution remains similar, i.e. galaxy fluctuations have remained almost frozen over nearly 2/3 of the age of the universe \citep{giav98,coil04,pol06}. We have shown that this is readily explained by the combination of the gravitational growth of mass fluctuations with the evolution of the bias between galaxies and mass. These two factors almost cancel each other out; \indent c) there are some hints that the skewness increases with cosmic time, its value at $z\sim 1.5$ being nearly 2$\sigma$ lower than that measured locally by the 2dFGRS for similarly luminous galaxies. In particular, the measured value of the skewness at $z\sim 1.5$ (on scales $R=10$$h^{-1}$Mpc) indicates that galaxy fluctuations are strongly non-Gaussian ($\sim 4 \sigma$ detection) even at such an early epoch (see Cappi et al. 2008 for a different approach which arrives at similar conclusions); \indent d) remarkably, once VVDS measurements of non-linear biasing are included, both these trends are consistent with predictions of linear and second-order perturbation theory for the evolution of gravitational perturbations as described within the framework of general relativity; \indent e) we have shown that the values of the skewness we measure at high redshift are difficult to reconcile with the 2dFGRS measurements if local biasing is linear \citep{ver02}. A fully coherent gravitational picture emerges, however, over the whole baseline $0<z<1.5$ if the non-linearity of the local biasing function is taken into account, at the level estimated by Feldman et al. (2001) and Gazta\~{n}aga et al. (2005). Compared to these local measurements, our results seem to suggest that the amplitude of the quadratic term $|b_2/b_1|$ is a decreasing function of redshift, at least up to $z\sim1.5$. \section*{Acknowledgments} We thank the referee, M.
Strauss, for important suggestions that significantly improved the manuscript. LG acknowledges the hospitality of MPE and the ``Excellence Cluster Universe'' in Garching, where part of this work was completed. This research has been developed within the framework of the VVDS consortium and it has been partially supported by the CNRS-INSU and its Programme National de Cosmologie (France), and by the Italian Ministry (MIUR) grants COFIN2000 (MM02037133) and COFIN2003 (num.2003020150). The VLT-VIMOS observations have been carried out on guaranteed time (GTO) allocated by the European Southern Observatory (ESO) to the VIRMOS consortium, under a contractual agreement between the Centre National de la Recherche Scientifique of France, heading a consortium of French and Italian institutes, and ESO, to design, manufacture and test the VIMOS instrument.
\section{Introduction} \label{introduction} Let ${\mathcal T}$ and ${\mathcal S}$ be the categories of based spaces and spectra, localized at a fixed prime $p$, and $\Sigma^{\infty}: {\mathcal T} \rightarrow {\mathcal S}$ and $\Omega^{\infty}: {\mathcal S} \rightarrow {\mathcal T}$ the usual adjoint pair. For $n \geq 1$, in \cite{k1}, the author constructed functors between the homotopy categories $$ \Phi^K_n: ho({\mathcal T}) \rightarrow ho({\mathcal S})$$ such that \begin{equation*}\Phi^K_n(\Omega^{\infty} X) \simeq L_{K(n)}X, \end{equation*} where $L_{K(n)}$ denotes Bousfield localization with respect to the Morava $K$--theory $K(n)$. Thus the $K(n)$--localization of a spectrum depends only on its zero space. This mid-1980s result was modeled on the $n=1$ version that had been newly established by Pete Bousfield in \cite{bousfield1}, and heavily used the newly proved Nilpotence and Periodicity Theorems of Ethan Devinatz, Mike Hopkins, and Jeff Smith \cite{dhs,hs}. Bousfield's main application was to proving uniqueness results about infinite loopspaces; for example, he gives a `conceptual' proof of the Adams--Priddy theorem \cite{adamspriddy} that $BSO_{(p)}$ admits a unique infinite loopspace structure up to homotopy equivalence. My main application was to note that the evaluation map $\epsilon: \Sigma^{\infty} \Omega^{\infty} X \rightarrow X$ has a section after applying $L_{K(n)}$, and thus after applying other functors like $K(n)_*$. The functors $\Phi^K_n$ as described above allow for two important refinements. Firstly, let $T(n)$ be the mapping telescope of any $v_n$--self map of a finite CW spectrum of type $n$. It is a well known application of the Periodicity Theorem that the associated localization functor $L_{T(n)}$ is independent of choices, and it is evident that $K(n)$--local objects are $T(n)$--local\footnote{The Telescope Conjecture, open for $n \geq 2$, asserts that the converse is also true, so that $L_{T(n)} = L_{K(n)}$.}.
This suggests that $\Phi^K_n$ might refine to a functor $$ \Phi^T_n: ho({\mathcal T}) \rightarrow ho({\mathcal S})$$ with $L_{K(n)}\circ \Phi^T_n = \Phi^K_n$, such that $$\Phi^T_n(\Omega^{\infty} X) \simeq L_{T(n)}X.$$ Indeed, a careful reading of the arguments in \cite{k1} shows that this is the case. One application of this refined functor is that there is a natural isomorphism of graded homotopy groups $$ [B, \Phi^T_n(Z)]_* \simeq v^{-1}\pi_*(Z;B),$$ for all spaces $Z$, where $v: \Sigma^d B \rightarrow B$ is any unstable $v_n$ self map. Thus the spectrum $\Phi^T_n(Z)$ determines `periodic unstable homotopy'. Secondly, what one {\em really} wishes to have is a functor on the level of model categories, $$ \Phi_n: {\mathcal T} \rightarrow {\mathcal S},$$ inducing $\Phi^T_n$ on the associated homotopy categories. Once again, inspection of the papers \cite{bousfield1, k1} suggests that this should be possible. However, it was not until Bousfield revisited these constructions in his 2001 paper \cite{bousfield3} that this was carefully worked out. One new consequence that emerged was Bousfield's beautiful theorem that every spectrum is naturally $T(n)$--equivalent to a suspension spectrum. Along with Bousfield's new application, there has been recent use of $\Phi_n$ by the author \cite{k2,k3} and C. Rezk \cite{rezk}, and new methods for computation are available using the work of Arone and Mahowald in \cite{aronemahowald}. All of this suggests that the functors $\Phi_n$ have a fundamental role in the study of homotopy, both stable and unstable, as chromatically organized. Bousfield's detailed paper \cite{bousfield3} is not an easy read: the partial ordering on that paper's set of lemmas, propositions, and theorems induced by the logical flow of the proof structure is poorly correlated with the numerical total ordering. One could make a similar statement about \cite{bousfieldJAMS}, on which \cite{bousfield3} relies in essential ways.
By contrast, my paper \cite{k1} offers a quite direct approach to the construction of the $\Phi^K_n$, while being admittedly short on detail. If one fills in details, and adds refinement as described above, it emerges that Bousfield and I have slightly different constructions. It turns out that both flavors satisfy basic characterizing properties, and thus they are naturally equivalent. Motivated by all of this, here we offer a guide to the $\Phi_n$. This includes \begin{itemize} \item a listing of basic properties, and a characterization of the functors by some of these, \item a step-by-step discussion of their construction, including model category issues that arise, \item Bousfield's `left adjoint' $\Theta_n: {\mathcal S} \rightarrow {\mathcal T}$ and its basic application, \item a discussion of the uniqueness of the section to $L_{T(n)}(\epsilon)$, and \item a discussion of calculations of $\Phi_n(Z)$ for various spaces $Z$ including spheres. \end{itemize} Though most of the results surveyed appear in the literature, a few have not. Among those that have, I have tweaked the order in which they are `revealed'. For example, and most significantly, our \thmref{v Tn thm}, which describes important properties of the functor $\Phi_v$ (see just below) when $v$ is a $v_n$--self map, is proved in a direct manner here, en route to proving our main theorem \thmref{main thm}, which lists important properties of $\Phi_n$. By contrast, in \cite{bousfield3}, these properties of $\Phi_v$ first occur as consequences of the properties of $\Phi_n$. We hope readers appreciate such unknotting of the logic. We end this introduction by stating a new characterization of the $\Phi_n$. We need to briefly describe the basic construction on which the functors $\Phi_n$ are based.
A self map of a space, $v: \Sigma^d B \rightarrow B$ with $d>0$, induces a natural transformation $ v(Z): \operatorname{Map}_{{\mathcal T}}(B,Z) \rightarrow \Omega^d\operatorname{Map}_{{\mathcal T}}(B,Z)$ for all spaces $Z \in {\mathcal T}$. The map $v(Z)$ can then be used to define a periodic spectrum $\Phi_v(Z)$ of period $d$, such that $$ \pi_*(\Phi_v(Z)) \simeq \operatorname*{colim} \{[B,Z]_* \xrightarrow{v} [B,Z]_{*+d} \xrightarrow{v} \dots\} = v^{-1}\pi_*(Z;B).$$ \begin{thm} \label{main thm} For each $n\geq 1$, there is a continuous functor $\Phi_n: {\mathcal T} \rightarrow {\mathcal S}$ satisfying the following properties. \\ \noindent{\bf (1)} \ $\Phi_n(Z)$ is $T(n)$--local, for all spaces $Z$. \\ \noindent{\bf (2)} \ There is a weak equivalence of spectra $\operatorname{Map}_{{\mathcal S}}(B,\Phi_n(Z)) \simeq \Phi_v(Z)$, for all unstable $v_n$ self maps $v:\Sigma^d B \rightarrow B$, natural in both $Z$ and $v$. \\ \noindent{\bf (3)} \ There is a natural weak equivalence $\Phi_n(\Omega^{\infty} X) \simeq L_{T(n)}X$, for all $\Omega$--spectra $X$. \\ Furthermore, properties {\bf (1)} and {\bf (2)} characterize $\Phi_n$, up to weak equivalence of functors. \end{thm} The rest of the paper is organized as follows. Background material, about both the model category of spectra and periodic homotopy, is given in \secref{background}. In \secref{phi_v section}, we present the basic theory of the telescopic functor $\Phi_v$ associated to a self map $v:\Sigma^d B \rightarrow B$, and, in \secref{phi v:part 2}, we study $\Phi_v$ when $v$ is additionally a $v_n$--self map. In \secref{phi_n section}, we define $\Phi_n$, and the proof of \thmref{main thm} follows quickly from the previous results. The adjoint $\Theta_n$ is defined in \secref{theta_n section}, and using it, we prove Bousfield's theorem that spectra are $T(n)$--equivalent to suspension spectra. A short discussion about the section to the $T(n)$--localized evaluation map is given in \secref{eta_n section}.
Finally, in \secref{computations}, we offer a brief guide to known computations of $\Phi_n(Z)$ and periodic homotopy groups. An outline of this material was presented in a talk at the special session on homotopy theory at the A.M.S.~meeting held in Newark, DE in April, 2005. I would like to offer my congratulations to Martin Bendersky, Don Davis, Doug Ravenel, and Steve Wilson -- the 60th birthday boys of that session and the March, 2007 conference at Johns Hopkins University -- and thank them all for setting fine examples of grace and enthusiasm to us algebraic topologists who have followed. \section{Background} \label{background} \subsection{Categories of spaces and spectra} \label{categories} These days, it seems prudent to be precise about our categories of `spaces' and `spectra', and the needed model category structures. We will let ${\mathcal T}$ denote the category of based compactly generated topological spaces, though one could as easily work instead with the category of based simplicial sets, as Bousfield always does. Regarding spectra, we would like a single map of the form $C \rightarrow \Omega^d C$ to specify a spectrum. This suggests using the `plain vanilla' category of (pre)spectra ($\mathcal N$--spectra in \cite{mmss}). An object $X$ in the category ${\mathcal S}$ is a sequence of spaces in ${\mathcal T}$, $X_0, X_1, \dots$, together with a sequence of based maps $\sigma^X_n: \Sigma X_n \rightarrow X_{n+1}$, or, equivalently, $\tilde{\sigma}^X_n: X_n \rightarrow \Omega X_{n+1}$, for $n \geq 0$. A morphism $f:X \rightarrow Y$ in ${\mathcal S}$ is then a sequence of based maps $f_n: X_n \rightarrow Y_n$ such that the diagram \begin{equation*} \xymatrix{ \Sigma X_n\ar[d]^{\sigma^X_n} \ar[r]^{\Sigma f_n} & \Sigma Y_n \ar[d]^{\sigma^Y_n} \\ X_{n+1} \ar[r]^{f_{n+1}} & Y_{n+1} } \end{equation*} commutes for all $n$. The category ${\mathcal S}$ is a topological category; in particular $\operatorname{Map}_{{\mathcal S}}(X,Y)$ is an object in ${\mathcal T}$.
It is also tensored and cotensored over ${\mathcal T}$, with $A \wedge X$ and $\operatorname{Map_{\mathcal S}}(A,X)$ denoting the tensor and cotensor product of $A \in {\mathcal T}$ with $X \in {\mathcal S}$. (See \cite[p.447]{mmss} for more detail.) We let $\Sigma^d X$ and $\Omega^d X$ denote $S^d \wedge X$ and $\operatorname{Map_{\mathcal S}}(S^d, X)$, as is usual. The adjoint pair $\Sigma^{\infty}: {\mathcal T} \rightleftarrows {\mathcal S}: \Omega^{\infty}$ is defined by letting $(\Sigma^{\infty} A)_n = \Sigma^n A$ and $\Omega^{\infty} X = X_0$. For $d\geq 0$, we let $s^d: {\mathcal S} \rightarrow {\mathcal S}$ be the $d$--fold shift functor with $(s^d X)_n = X_{n+d}$. This admits a left adjoint $s^{-d}:{\mathcal S} \rightarrow {\mathcal S}$ with \begin{equation*} (s^{-d}X)_n = \begin{cases} X_{n-d} & \text{for } n \geq d \\ * & \text{for } 0\leq n < d. \end{cases} \end{equation*} Composing these adjoints, we see that $s^{-d} \circ \Sigma^{\infty}: {\mathcal T} \rightarrow {\mathcal S}$ is left adjoint to the functor sending a spectrum $X$ to its $d^{th}$ space $X_d$. \subsection{Model category structures} \label{model categories} We describe model category structures on ${\mathcal T}$ and ${\mathcal S}$. Our category ${\mathcal T}$ is endowed with the `usual' model category structure (see, e.g. \cite{ds}): the weak equivalences are the weak homotopy equivalences, the fibrations are the Serre fibrations, and the cofibrations are retracts of generalized CW inclusions. (We recall that $f: A \rightarrow B$ in ${\mathcal T}$ is a weak homotopy equivalence if, for each point $a \in A$, $f_*: \pi_*(A,a) \rightarrow \pi_*(B,f(a))$ is a bijection, and is a Serre fibration if it has the right lifting property with respect to the maps $D^n \hookrightarrow D^n \wedge I_+$.)
Starting from this model category structure on ${\mathcal T}$, ${\mathcal S}$ is given its stable model category structure `in the usual way', as in \cite{schwede, hovey2, mmss}, all of which follow the lead of \cite{bf}. Firstly, ${\mathcal S}$ has its `level' model structure\footnote{The name `level' model structure is used in \cite[\S 6]{mmss}. Schwede \cite{schwede} refers to this as `strict', and Hovey \cite{hovey2} uses `projective'.} in which the weak equivalences and fibrations are the maps $f: X \rightarrow Y$ such that the levelwise maps $f_n: X_n \rightarrow Y_n$ are weak equivalences and fibrations in ${\mathcal T}$ for all $n$. It is then easy to check that $f$ is a cofibration exactly when the induced maps $X_0 \rightarrow Y_0$ and $X_{n+1} \cup_{\Sigma X_n} \Sigma Y_n \rightarrow Y_{n+1}$ are cofibrations in ${\mathcal T}$. When needed, we will write ${\mathcal S}_l$ for the category of spectra with the level model structure. Now we need to change the model structure to build in stability. Hovey's general method \cite{hovey2} yields the following in our situation. We call a spectrum $X$ an {\em $\Omega$--spectrum} if $\tilde{\sigma}_n: X_n \rightarrow \Omega X_{n+1}$ is a weak equivalence in ${\mathcal T}$ for all $n$. Using our adjunctions, this rewrites as the statement that $\operatorname{Map}_{{\mathcal S}}(i_n,X)$ is a weak equivalence in ${\mathcal T}$ for all $n$, where $i_n: s^{-(n+1)}\Sigma^{\infty} S^{1} \rightarrow s^{-n}\Sigma^{\infty} S^0$ is the canonical map in ${\mathcal S}$. Let $Q: {\mathcal S}_l \rightarrow {\mathcal S}_l$ denote Bousfield localization (as in \cite{hirschhorn}) with respect to the set of maps $\{ i_n, n \geq 0\}$. Then \cite[Thm.~2.2]{hovey2} says that there is a stable model structure on ${\mathcal S}$ with cofibrations equal to level cofibrations, with weak equivalences the maps $f: X \rightarrow Y$ such that $Qf: QX \rightarrow QY$ is a level equivalence, and with fibrant objects the level fibrant $\Omega$--spectra.
There are two alternative characterizations of the stable equivalences. It is formal to see that $Qf: QX \rightarrow QY$ is a level equivalence if and only if $f^*:[Y,Z]_l \rightarrow [X,Z]_l$ is a bijection for all $\Omega$--spectra $Z$, where $[Y,Z]_l$ denotes homotopy classes computed using the level model structure. True, but {\em not} formal, is the fact that the stable equivalences are precisely the maps of spectra inducing isomorphisms on $\displaystyle \pi_*(X) = \operatorname*{colim}_{n} \pi_{*+n}(X_n)$: see \cite[Proposition 8.7]{mmss} for a clear discussion of this point. It is easy to check that ${\mathcal S}$ is a topological model category in the sense of \cite[Definition 4.2]{ekmm}, so that, for all spectra $X$ and $Y$, $$[X,Y] = \pi_0(\operatorname{Map_{\mathcal S}}(X^{cof},Y^{fib})),$$ where $X^{cof}$ and $Y^{fib}$ are respectively cofibrant and fibrant replacements for $X$ and $Y$. (Compare with \cite[Proposition 3.10]{goerss jardine} for a nice presentation in the simplicial setting.) Handy observations include that the evident natural maps $\Sigma^d X \rightarrow s^d X$ and $s^{-d}X \rightarrow \Omega^d X$ are stable equivalences. Also useful in calculation is that, if $X^{fib}$ is a fibrant replacement for a spectrum $X$, then each of the evident maps $$ \Omega^{\infty} X^{fib} \rightarrow \operatorname*{hocolim}_n \Omega^n X_n^{fib} \leftarrow \operatorname*{hocolim}_n \Omega^n X_n$$ is a weak equivalence of spaces. In one of our proofs (that of \thmref{ho thm}) we make use of function spectra in the homotopy category of spectra\footnote{Bousfield similarly needs this: see \cite[Thm. 11.9]{bousfield3}.}: these exist in $ho({\mathcal S})$ using well known `naive' constructions in ${\mathcal S}$.
To summarize our overuse of the notation $\operatorname{Map_{\mathcal S}}(X,Y)$: \begin{itemize} \item $\operatorname{Map_{\mathcal S}}(X,Y)$ is in ${\mathcal T}$ for $X,Y \in {\mathcal S}$, \item $\operatorname{Map_{\mathcal S}}(X,Y)$ is in ${\mathcal S}$ for $X \in {\mathcal T}$ and $Y \in {\mathcal S}$, and \item $\operatorname{Map_{\mathcal S}}(X,Y)$ is in $ho({\mathcal S})$ for $X,Y \in ho({\mathcal S})$ \end{itemize} We trust our meaning will be clear in context. We end this subsection with a useful lemma and corollary. \begin{lem}[Compare with {\cite[Lemma 3.3]{k1}}] \label{hocolim lemma} Given a diagram of spectra $ X(0) \rightarrow X(1) \rightarrow X(2) \rightarrow \dots$ and an increasing sequence of integers $ 0 \leq d_0 < d_1 < d_2 < \dots$, the natural diagram of spectra \begin{equation*} \xymatrix @-1.1pc{ s^{-d_1}\Sigma^{\infty} \Sigma^{d_1-d_0}X(0)_{d_0} \ar[d]_{\wr} \ar[dr] & s^{-d_2}\Sigma^{\infty} \Sigma^{d_2-d_1}X(1)_{d_1} \ar[d]_{\wr} \ar[dr] & s^{-d_3}\Sigma^{\infty} \Sigma^{d_3-d_2}X(2)_{d_2} \ar[d]_{\wr} \ar@{-->}[dr] & \\ s^{-d_0}\Sigma^{\infty} X(0)_{d_0} \ar[d]& s^{-d_1}\Sigma^{\infty} X(1)_{d_1} \ar[d]& s^{-d_2}\Sigma^{\infty} X(2)_{d_2} \ar[d]& \\ X(0) \ar[r] & X(1) \ar[r] & X(2) \ar@{-->}[r] & } \end{equation*} induces a weak equivalence between the homotopy colimit of the top zig-zag and the homotopy colimit of the bottom. \end{lem} \begin{proof}[Sketch proof] One checks that the map induces an isomorphism on $\pi_*$. Alternatively, one can check that the map induces an isomorphism on $[\_\_,Y]$ for all $Y \in {\mathcal S}$. 
\end{proof} Informally, this lemma says that there is a natural weak equivalence $$ \operatorname*{hocolim}_k s^{-d_k}\Sigma^{\infty} X(k)_{d_k} \xrightarrow{\sim} \operatorname*{hocolim}_k X(k).$$ \begin{cor} \label{hocolim cor}Given a spectrum $X$ and an increasing sequence of integers $ 0 \leq d_0 < d_1 < d_2 < \dots$, the homotopy colimit of \begin{equation*} \xymatrix @-.8pc{ s^{-d_1}\Sigma^{\infty} \Sigma^{d_1-d_0}X_{d_0} \ar[d]_{\wr} \ar[dr] & s^{-d_2}\Sigma^{\infty} \Sigma^{d_2-d_1}X_{d_1} \ar[d]_{\wr} \ar[dr] & s^{-d_3}\Sigma^{\infty} \Sigma^{d_3-d_2}X_{d_2} \ar[d]_{\wr} \ar[dr] & \\ s^{-d_0}\Sigma^{\infty} X_{d_0} & s^{-d_1}\Sigma^{\infty} X_{d_1} & s^{-d_2}\Sigma^{\infty} X_{d_2} & \dots } \end{equation*} is naturally weakly equivalent to $X$. \end{cor} \begin{proof} Apply the lemma to the case when $X(k) = X$ for all $k$. \end{proof} Informally, this corollary says that there is a natural weak equivalence $$ \operatorname*{hocolim}_k s^{-d_k}\Sigma^{\infty} X_{d_k} \xrightarrow{\sim} X.$$ \subsection{Periodic homotopy} We recall some of the terminology and big theorems used when one studies homotopy from the chromatic point of view. A good general reference for this material is Doug Ravenel's book \cite{ravenel}. We let ${\mathcal C} \subset ho({\mathcal S})$ denote the stable homotopy category of $p$--local finite CW spectra, and then we let ${\mathcal C}_n \subset {\mathcal C}$ be the full subcategory with objects the $K(n-1)_*$--acyclic spectra. The categories ${\mathcal C}_n$ are properly nested \cite{mitchell}: $${\mathcal C} = {\mathcal C}_0 \supset {\mathcal C}_1 \supset {\mathcal C}_2 \supset \dots .$$ An object $F \in {\mathcal C}_n - {\mathcal C}_{n+1}$ is said to be of {\em type $n$}. For finite spectra, the remarkable work of Ethan Devinatz, Mike Hopkins, and Jeff Smith \cite{dhs} tells us the following.
\begin{thm}[Nilpotence Theorem {\cite[Thm.3]{hs}}] Given $F \in {\mathcal C}$, a map $v: \Sigma^d F \rightarrow F$ is nilpotent if and only if $K(n)_*(v)$ is nilpotent for all $n \geq 0$. \end{thm} The next two consequences were proved by Hopkins and Smith. \begin{thm}[Thick Subcategory Theorem \cite{hs,ravenel}] A nonempty full subcategory of ${\mathcal C}$ that is closed under taking cofibers and retracts is ${\mathcal C}_n$ for some $n$. \end{thm} Given $F \in {\mathcal C}$, a map $v: \Sigma^d F \rightarrow F$ is called a {\em $v_n$--self map} if $K(n)_*(v)$ is an isomorphism, while $K(m)_*(v)$ is nilpotent for all $m \neq n$. \begin{thm}[Periodicity Theorem \cite{hs,ravenel}] \noindent{\bf (a)} $F \in {\mathcal C}_n$ if and only if $F$ has a $v_n$--self map. \\ \noindent{\bf (b)} Given $F,F^{\prime} \in {\mathcal C}_n$ with $v_n$--self maps $u: \Sigma^c F \rightarrow F$ and $v: \Sigma^dF^{\prime} \rightarrow F^{\prime}$, and $f: F \rightarrow F^{\prime}$, there exist integers $i,j$ such that $ic = jd$ and the diagram \begin{equation*} \xymatrix{ \Sigma^{ic} F\ar[d]^{u^i} \ar[r]^{\Sigma^{ic}f} & \Sigma^{jd}F^{\prime} \ar[d]^{v^j} \\ F \ar[r]^f & F^{\prime} } \end{equation*} commutes. \end{thm} Given $F \in {\mathcal C}$ of type $n$, we let $T(F)$ denote the mapping telescope of a $v_n$--self map. An immediate consequence of the Periodicity Theorem is that $T(F)$ is independent of the choice of self map. Furthermore, one deduces that the Bousfield class of $T(F)$ is independent of the choice of type $n$ spectrum $F$. In other words, if $F$ and $F^{\prime}$ are both of type $n$, then $$ T(F) \wedge Y \simeq * \text{ if and only if } T(F^{\prime}) \wedge Y \simeq *.$$ It is usual to let $T(n)$ ambiguously denote $T(F)$ for any particular type $n$ finite spectrum $F$. Another consequence of the Periodicity Theorem was proved by the author in \cite{k1}.
\begin{prop}[{\cite[Cor.4.3]{k1}}] \label{resolution prop} There exists a diagram in ${\mathcal C}$, \begin{equation*} \xymatrix{ F(1) \ar[d] \ar[r]^{f(1)} & F(2) \ar[dl] \ar[r]^{f(2)} & F(3) \ar[dll] \ar[r] & \dots \\ S^0 &&& } \end{equation*} such that each $F(k) \in {\mathcal C}_n$, and $\displaystyle \operatorname*{hocolim}_k F(k) \rightarrow S^0$ induces a $T(m)_*$--isomorphism for all $m \geq n$. \end{prop} \begin{rem} The statement of this proposition deserves some comment, as homotopy colimits of general diagrams in a triangulated category like $ho({\mathcal S})$ are not always defined. However, the homotopy colimit of a sequence as above {\em is} defined (as the cofiber of an appropriate map between coproducts of the $F(k)$). We note also that only this construction is used in the proof of the proposition given in \cite{k1}; in other words, the proposition is proved working solely in the triangulated homotopy category. \end{rem} We give standard names to some localization functors. Let $L_n^f: {\mathcal S} \rightarrow {\mathcal S}$ denote localization with respect to $T(0) \vee \dots \vee T(n)$, and then define functors $C_{n-1}^f, M_n^f: {\mathcal S} \rightarrow {\mathcal S}$ by letting $C_{n-1}^fX$ be the homotopy fiber of $X \rightarrow L_{n-1}^f X$ and $M_n^fX$ be the homotopy fiber of $L_n^fX \rightarrow L_{n-1}^fX$. These functors are all {\em smashing}, e.g. $X \wedge L_n^f S^0 \simeq L_n^f X$ for all $X$, and from this one can quite easily deduce that $L_{T(n)}$ and $M_n^f$ determine each other. More precisely, there are natural equivalences $L_{T(n)} M_n^f X \simeq L_{T(n)}X$ and $C_{n-1}^f L_{T(n)}X \simeq M_n^f L_{T(n)}X \simeq M_n^f X$. An alternative proof of \propref{resolution prop} occurs in \cite[proof of Thm. 12.1]{bousfield3}, where Bousfield notes that $C_{n-1}^fS^0$ can be written in the form $\displaystyle \operatorname*{hocolim}_k F(k)$ with each $F(k) \in {\mathcal C}_n$.
This same result was also proved by Mahowald and Sadofsky in \cite[Proposition 3.8]{mahowaldsadofsky}. We end this section with characterizations of spectra that are $T(n)$--local or in the image of $M_n^f$. \begin{lem} \label{Tn local lem} Consider the following three properties that a spectrum $X$ might satisfy. \\ \noindent{(i)} $[F,X] = 0$ for all $F \in {\mathcal C}_{n+1}$. \\ \noindent{(ii)} $[Y,X] = 0$ whenever $F(n) \wedge Y \simeq *$ for some type $n$ finite spectrum $F(n)$. \\ \noindent{(iii)} $T(i)\wedge X \simeq *$ for $0 \leq i \leq n-1$. \\ \noindent Properties (i) and (ii) hold if and only if $X$ is $T(n)$--local. Properties (i) and (iii) hold if and only if $X \simeq M^f_nX$. \end{lem} \begin{proof} We can assume that $T(n)$ is the telescope of a $v_n$--self map $v:\Sigma^d F(n) \rightarrow F(n)$. Condition (i) is equivalent to the statement that $X$ is $L_n^f$--local, while property (ii) says that $X$ is $F(n)$--local. Thus if $X$ is $T(n)$--local, both (i) and (ii) are true. If condition (i) holds, so that $X$ is $L_n^f$--local, we observe that $v: F(n) \wedge X \rightarrow \Sigma^{-d} F(n) \wedge X$ is an equivalence, as the cofiber is null, since it can be written (using $S$--duality) in the form $\operatorname{Map_{\mathcal S}}(F,X)$ with $F$ of type $n+1$. It follows that $F(n) \wedge X \simeq T(n) \wedge X$, and thus $$F(n)_*(X) \simeq T(n)_*(X) \simeq T(n)_*(L_{T(n)}X) \simeq F(n)_*(L_{T(n)}X).$$ Thus if condition (ii) also holds, so that $X$, as well as $L_{T(n)}X$, is $F(n)$--local, we conclude that $X \simeq L_{T(n)}X$, i.e.\ $X$ is $T(n)$--local. Finally, property (iii) says that $L_{n-1}^fX \simeq *$, so that $M_n^f X \simeq L_n^f X$. \end{proof} \section{Telescopic functors associated to a self map of a space} \label{phi_v section} \subsection{The basic construction} Given a space $B$ and a map $v:\Sigma^d B \rightarrow B$ with $d>0$, we define a functor $$ \Phi_v: {\mathcal T} \rightarrow {\mathcal S}$$ as follows.
If $n\equiv -e \mod d$, with $0\leq e \leq d-1$, we let $$ \Phi_v(Z)_n = \Omega^e \operatorname{Map_{\mathcal T}}(B,Z).$$ The structure map $\Phi_v(Z)_{n} \rightarrow \Omega \Phi_v(Z)_{n+1}$ is the identity unless $n\equiv 0 \mod d$, in which case it equals the map $$ v(Z): \operatorname{Map_{\mathcal T}}(B,Z) \xrightarrow{v^*} \operatorname{Map_{\mathcal T}}(\Sigma^dB,Z) = \Omega^d\operatorname{Map_{\mathcal T}}(B,Z).$$ The construction is functorial in $v$ in the following sense: a commutative diagram \begin{equation*} \xymatrix{ \Sigma^dA \ar[d]^u \ar[r]^{\Sigma^d f} & \Sigma^dB \ar[d]^v \\ A \ar[r]^f & B } \end{equation*} induces a natural transformation $f^*:\Phi_v(Z) \rightarrow \Phi_u(Z)$. We list some basic properties of $\Phi_v(Z)$ in the next omnibus lemma. \begin{lem} \label{big phi lemma} \noindent{\bf (a)} $\pi_*(\Phi_v(Z)) = v^{-1}\pi_*(Z;B)$.\\ \noindent{\bf (b)} If a map of spaces $Y \rightarrow Z$ induces an isomorphism on $\pi_*$ for $* \gg 0$, then $\Phi_v(Y) \rightarrow \Phi_v(Z)$ is a stable equivalence. In particular, the $r$--connected covering map $Z\langle r \rangle \rightarrow Z$ induces a stable equivalence $\Phi_v(Z\langle r \rangle) \rightarrow \Phi_v(Z)$ for all $r$. \\ \noindent{\bf (c)} $v^*: \Phi_v(Z) \rightarrow \Phi_{\Sigma^dv}(Z)$ is a stable equivalence. \\ \noindent{\bf (d)} For all spaces $A$, $\Phi_v(\operatorname{Map_{\mathcal T}}(A,Z)) = \Phi_{1_A \wedge v}(Z) = \operatorname{Map_{\mathcal S}}(A,\Phi_v(Z))$. In particular, $\Phi_{\Sigma^c v}(Z) = \Omega^c\Phi_v(Z)$ for all $c$. \\ \noindent{\bf (e)} $\Phi_v$ takes weak equivalences to level weak equivalences (and thus stable equivalences), fibrations to level fibrations, and homotopy pullbacks to level homotopy pullbacks (and thus stable homotopy pullbacks).
\\ \noindent{\bf (f)} Given a commutative diagram \begin{equation*} \xymatrix{ \Sigma^dA \ar[d]^u \ar[r]^{\Sigma^d f} & \Sigma^dB \ar[d]^v \ar[r]^{\Sigma^d g} & \Sigma^dC \ar[d]^w \\ A \ar[r]^f & B \ar[r]^g & C, } \end{equation*} if $A \xrightarrow{f} B \xrightarrow{g} C$ is a homotopy cofiber sequence of spaces, then the induced sequences $$ \Phi_w(Z) \xrightarrow{g^*} \Phi_v(Z) \xrightarrow{f^*} \Phi_u(Z)$$ are homotopy fibration sequences of spectra for all $Z$. \\ \noindent{\bf (g)} If $B$ is a finite CW complex, there is a natural stable equivalence $$\operatorname*{hocolim}_d \Phi_v(Z_d) \simeq \Phi_v(\operatorname*{hocolim}_d Z_d)$$ for all diagrams $Z_1 \rightarrow Z_2 \rightarrow Z_3 \rightarrow \dots$ of spaces. \\ \noindent{\bf (h)} If $v_0,v_1: \Sigma^d B \rightarrow B$ are homotopic maps, then $\Phi_{v_0}(Z)$ is naturally stably equivalent to $\Phi_{v_1}(Z)$. \\ \noindent{\bf (i)} There is a natural stable equivalence $\Phi_v(Z) \xrightarrow{\sim} \Phi_{v^r}(Z)$, where $v^r: \Sigma^{rd} B \rightarrow B$ denotes the evident $r$--fold composition of $v$ with its various suspensions. \end{lem} \begin{proof} All of this is quite easily verified. Part (a) is clear, and then parts (b) and (c) follow by a check of homotopy groups. Part (d) follows by inspection, since $\operatorname{Map_{\mathcal T}}(A \wedge B, Z) = \operatorname{Map_{\mathcal T}}(A,\operatorname{Map_{\mathcal T}}(B,Z))$. Parts (e) and (f) follow from the fact that $\operatorname{Map_{\mathcal T}}(B,Z)$ takes cofibrations in the $B$--variable and fibrations in the $Z$--variable to fibrations. Similarly, part (g) follows from the fact that $\displaystyle \operatorname*{hocolim}_n \operatorname{Map_{\mathcal T}}(B,Z_n) \simeq \operatorname{Map_{\mathcal T}}(B, \operatorname*{hocolim}_n Z_n)$ if $B$ is a finite complex. To check part (h), suppose $v_0,v_1: \Sigma^d B \rightarrow B$ are homotopic maps.
If $H: \Sigma^d B \wedge I_+ \rightarrow B$ is a homotopy from $v_0$ to $v_1$, let $V: \Sigma^d B \wedge I_+ \rightarrow B \wedge I_+$ be defined by $V(x,t) = (H(x,t),t)$. Then there is a commutative diagram \begin{equation*} \xymatrix{ \Sigma^d B \ar[d]^{v_0} \ar[r]^-{\Sigma^di_0} & \Sigma^d B\wedge I_+ \ar[d]^{V} & \Sigma^d B \ar[l]_-{\Sigma^di_1} \ar[d]^{v_1} \\ B \ar[r]^-{i_0} & B \wedge I_+ & B, \ar[l]_-{i_1} } \end{equation*} which induces natural equivalences $$ \Phi_{v_0}(Z) \xleftarrow[\sim]{i_0^*} \Phi_{V}(Z) \xrightarrow[\sim]{i_1^*}\Phi_{v_1}(Z).$$ Finally, the stable equivalence of part (i) is defined as follows. Write $n$ in the form $n = mrd - sd - e$, with $0\leq s \leq r-1$ and $0 \leq e \leq d-1$. Then let $$ \Phi_v(Z)_n \rightarrow \Phi_{v^r}(Z)_n$$ be the map $$\displaystyle \Omega^e\operatorname{Map_{\mathcal T}}(B,Z) \xrightarrow{\Omega^e (v^{s})^*} \Omega^e \operatorname{Map_{\mathcal T}}(\Sigma^{sd} B, Z) = \Omega^{e+sd}\operatorname{Map_{\mathcal T}}(B,Z).$$ \end{proof} \begin{cor} \label{phiv cor} $\Phi_v(Z)$ is a periodic spectrum with period $d$: $\Phi_v(Z) \simeq \Omega^d \Phi_v(Z)$. Furthermore, the induced functor $\Phi_v: ho({\mathcal T}) \rightarrow ho({\mathcal S})$ is determined by the stable homotopy class of $v^r$ for any $r$. \end{cor} \begin{proof} Combine properties (c), (d), (h), and (i) of the lemma. \end{proof} Our last property needs some notation.
Given an unstable map $u: \Sigma^c A \rightarrow A$ and $X \in {\mathcal S}$, let $u^{-1}\operatorname{Map_{\mathcal S}}(A,X)$ denote the homotopy colimit of the diagram $$ \operatorname{Map_{\mathcal S}}(A,X) \xrightarrow{u^*} \operatorname{Map_{\mathcal S}}(\Sigma^{c}A,X) \xrightarrow{u^*} \operatorname{Map_{\mathcal S}}(\Sigma^{2c}A,X)\rightarrow \dots.$$ \begin{lem} \label{smash lemma} Given maps $u: \Sigma^c A \rightarrow A$ and $v: \Sigma^d B \rightarrow B$, there are natural stable equivalences $u^{-1}\operatorname{Map_{\mathcal S}}(A, \Phi_v(Z)) \simeq \Phi_{u\wedge v}(Z) \simeq v^{-1}\operatorname{Map_{\mathcal S}}(B,\Phi_u(Z))$. \end{lem} \begin{proof}[Sketch proof] By symmetry, we need just verify the first of these equivalences. By \lemref{big phi lemma}(d), $u^{-1}\operatorname{Map_{\mathcal S}}(A, \Phi_v(Z))$ is equal to $$ \operatorname*{hocolim} \{ \Phi_{A \wedge v}(Z) \xrightarrow{u^*} \Phi_{\Sigma^cA \wedge v}(Z) \xrightarrow{u^*} \Phi_{\Sigma^{2c}A \wedge v}(Z) \xrightarrow{u^*} \dots\}.$$ By \lemref{hocolim lemma}, this is stably equivalent to $$ \operatorname*{hocolim}_k s^{-kd}\Sigma^{\infty} \operatorname{Map_{\mathcal T}}(\Sigma^{kc}A \wedge B, Z).$$ This, in turn, maps to $$ \operatorname*{hocolim}_k s^{-k(c+d)}\Sigma^{\infty} \operatorname{Map_{\mathcal T}}(A \wedge B, Z),$$ using evident natural maps of the form $\Sigma^{\infty} \Omega^r W \rightarrow s^{-r} \Sigma^{\infty} W$, and a check of homotopy groups shows this map between homotopy colimits is an equivalence. Finally, by \lemref{hocolim lemma} again, this last homotopy colimit is equivalent to $\Phi_{u\wedge v}(Z)$. \end{proof} \subsection{Identifying $\Phi_v(\Omega^{\infty} Z)$.} Recall that, for $X \in {\mathcal S}$, $\Omega^{\infty} X = X_0$. The following elementary `swindle' is critical to our arguments. Note that it says that the functor that assigns $v^{-1}\operatorname{Map_{\mathcal S}}(B,X)$ to a spectrum $X$ depends only on the space $X_0$. 
\begin{prop}[Compare with {\cite[Prop.3.3(4)]{k1}}] \label{phi omega prop} If $X \in {\mathcal S}$ is fibrant (i.e. is an $\Omega$--spectrum), then, given $v: \Sigma^d B \rightarrow B$, there is a natural weak equivalence $$ \Phi_v(\Omega^{\infty} X) \simeq v^{-1}\operatorname{Map_{\mathcal S}}(B,X).$$ \end{prop} \begin{proof} We have natural equivalences: \begin{equation*} \begin{split} v^{-1}\operatorname{Map_{\mathcal S}}(B,X) & \xleftarrow{\sim} \operatorname*{hocolim}_r s^{-rd} \Sigma^{\infty} \operatorname{Map_{\mathcal S}}(\Sigma^{rd}B,X)_{rd} \\ & = \operatorname*{hocolim}_r s^{-rd} \Sigma^{\infty} \operatorname{Map_{\mathcal T}}(\Sigma^{rd}B,X_{rd}) \\ & = \operatorname*{hocolim}_r s^{-rd} \Sigma^{\infty} \operatorname{Map_{\mathcal T}}(B,\Omega^{rd}X_{rd}) \\ & \xleftarrow{\sim} \operatorname*{hocolim}_r s^{-rd} \Sigma^{\infty} \operatorname{Map_{\mathcal T}}(B,X_0) \\ & \xrightarrow{\sim} \Phi_v(\Omega^{\infty} X). \end{split} \end{equation*} Here the first equivalence follows from \lemref{hocolim lemma}, the last equivalence similarly follows from \corref{hocolim cor}, and the second to last equivalence holds because $X$ is fibrant. \end{proof} \begin{rem} It is not easy to spot the analogue of this proposition in \cite{bousfield3}, but \cite[Thm.11.9]{bousfield3} is a more elaborate result of this type, and its proof, given in \cite[\S\S 11.10--11.11]{bousfield3}, uses arguments very similar to our proof of \lemref{hocolim lemma}. \end{rem} \begin{rem} Using the proposition, we can give an alternative proof of part of \lemref{smash lemma}: that $u^{-1}\operatorname{Map_{\mathcal S}}(A, \Phi_v(Z)) \simeq v^{-1}\operatorname{Map_{\mathcal S}}(B,\Phi_u(Z))$. If we let $\Phi_u^{fib}(Z)$ be a fibrant replacement for $\Phi_u(Z)$, then $\displaystyle \Omega^{\infty} \Phi_u^{fib}(Z) \simeq \operatorname*{hocolim}_r \operatorname{Map_{\mathcal T}}(\Sigma^{rc}A,Z)$.
Thus \begin{equation*} \begin{split} v^{-1}\operatorname{Map_{\mathcal S}}(B,\Phi_u(Z)) & \simeq \Phi_v(\operatorname*{hocolim}_r \operatorname{Map_{\mathcal T}}(\Sigma^{rc}A,Z)) \text{ (by the proposition)} \\ & \simeq \operatorname*{hocolim}_r \Phi_v(\operatorname{Map_{\mathcal T}}(\Sigma^{rc}A,Z)) \\ & = \operatorname*{hocolim}_r \operatorname{Map_{\mathcal S}}(\Sigma^{rc}A,\Phi_v(Z)) = u^{-1}\operatorname{Map_{\mathcal S}}(A, \Phi_v(Z)). \end{split} \end{equation*} \end{rem} \section{$\Phi_v$ when $v$ is a $v_n$--self map of a space} \label{phi v:part 2} Note that if $v: \Sigma^d B \rightarrow B$ is nilpotent, $\Phi_v(Z)$ will be contractible for all $Z$. To avoid this, in this section we study the case when $v$ is a $v_n$--self map of a finite CW complex $B$ of type $n$. First we discuss a construction in the homotopy category of spectra. Given $F \in {\mathcal C}_{n}$, it is convenient to let $\Phi(F,Z) \in ho({\mathcal S})$ denote $\Sigma^t\Phi_u(Z)$, where $u: \Sigma^c A \rightarrow A$ is an unstable $v_n$--map of a finite CW complex $A$ such that $\Sigma^tF \simeq \Sigma^{\infty} A$. Similarly, given a map $f: F \rightarrow F^{\prime}$ between finite spectra in ${\mathcal C}_n$, we define $f^*: \Phi(F^{\prime},Z) \rightarrow \Phi(F,Z)$ to be $\Sigma^t\alpha^*: \Sigma^t \Phi_v(Z) \rightarrow \Sigma^t \Phi_u(Z)$ where \begin{equation*} \xymatrix{ \Sigma^dA \ar[d]^u \ar[r]^{\Sigma^d \alpha} & \Sigma^dB \ar[d]^v \\ A \ar[r]^{\alpha} & B } \end{equation*} is a commutative diagram of spaces with $v_n$--self maps, and $\Sigma^tf \simeq \Sigma^{\infty} \alpha$. \begin{lem} \label{phi functor lem} $\Phi: {\mathcal C}_n^{op} \times ho({\mathcal T}) \rightarrow ho({\mathcal S})$ is a well-defined functor and satisfies the next two properties. \\ \noindent{\bf (a)} $\Phi$ takes cofibration sequences in the ${\mathcal C}_n$--variable to fibration sequences in $ho({\mathcal S})$.
\\ \noindent{\bf (b)} $\operatorname{Map_{\mathcal S}}(F,\Phi(F^{\prime},Z)) \simeq \Phi(F^{\prime} \wedge F,Z) \simeq \operatorname{Map_{\mathcal S}}(F^{\prime},\Phi(F,Z)).$ \end{lem} \begin{proof} This follows from the Periodicity Theorem and the results in the last section. \end{proof} Now we prove that, when $v$ is a $v_n$--self map, $\Phi_v: {\mathcal T} \rightarrow {\mathcal S}$ satisfies versions of the properties listed in \thmref{main thm}. \begin{thm} \label{v Tn thm} Let $v: \Sigma^d B \rightarrow B$ be an unstable $v_n$--self map. \\ \noindent{\bf (1)} $\Phi_v(Z) \simeq M_n^f\Phi_v(Z)$ and is also $T(n)$--local, for all spaces $Z$. \\ \noindent{\bf (2)} $\Phi_v(\Omega^{\infty} X) \simeq \operatorname{Map_{\mathcal S}}(B,L_{T(n)} X)$ for all fibrant $X \in {\mathcal S}$. \end{thm} \begin{proof}[Proof of \thmref{v Tn thm}(1)] We need to verify that $\Phi_v(Z)$ satisfies the three properties listed in \lemref{Tn local lem}. Property (i) says that $[F, \Phi_v(Z)] = 0$ for all $F \in {\mathcal C}_{n+1}$. To see this, we first note that, since $\Phi_v(Z)$ is periodic, we can assume that $F = \Sigma^{\infty} A$ for some finite CW complex $A$ of type at least $n+1$. But then $[F, \Phi_v(Z)] = \pi_0(\operatorname{Map_{\mathcal S}}(A,\Phi_v(Z))) = 0$, because $$ \operatorname{Map_{\mathcal S}}(A,\Phi_v(Z)) = \Phi_{1_A \wedge v}(Z) \simeq *,$$ as $A \wedge B$ will have type greater than $n$, so that $1_A \wedge v: \Sigma^d A \wedge B \rightarrow A \wedge B$ will be nilpotent. Property (ii) says that, with $F(n)$ a fixed finite spectrum of type $n$, $[Y,\Phi_v(Z)] = 0$ whenever $F(n) \wedge Y \simeq *$. To prove this, we make use of the properties of the functor $\Phi$ listed in \lemref{phi functor lem}. So suppose that $F(n) \wedge Y \simeq *$. Let $${\mathcal C}_Y = \{ F \in {\mathcal C}_n \ | \ [Y,\Phi(F,Z)]_* = 0 \text{ for all } Z\}.$$ Using the Thick Subcategory Theorem, we check that ${\mathcal C}_Y = {\mathcal C}_n$, thus verifying property (ii).
Firstly, ${\mathcal C}_Y$ is a thick subcategory by \lemref{phi functor lem}(a). Secondly, it contains at least one type $n$ complex, as it contains all type $n$ complexes of the form $F(n) \wedge F$, with $F$ of type $n$. To see this, using \lemref{phi functor lem}(b), we have $$ [Y,\Phi(F(n) \wedge F,Z)]_* = [Y, \operatorname{Map_{\mathcal S}}(F(n),\Phi(F,Z))]_* = [Y \wedge F(n),\Phi(F,Z)]_* = 0.$$ Property (iii) says that $T(i) \wedge \Phi_v(Z) \simeq *$ for $i \leq n-1$. We can assume that $T(i)$ is the telescope of the $S$--dual of an unstable $v_i$--self map $u: \Sigma^c A \rightarrow A$, where $A$ is a finite CW complex of type $i$. Then \begin{equation*} \begin{split} T(i) \wedge \Phi_v(Z) & \simeq u^{-1}\operatorname{Map_{\mathcal S}}(A,\Phi_v(Z)) \\ & \simeq v^{-1}\operatorname{Map_{\mathcal S}}(B,\Phi_u(Z)) \text{ (by \lemref{smash lemma})} \\ & = \operatorname*{hocolim}_r \Omega^{rd} \Phi_{u\wedge 1_B}(Z) \\ & \simeq *, \end{split} \end{equation*} as $A \wedge B$ has type greater than $i$, so that $u \wedge 1_B: \Sigma^c A \wedge B \rightarrow A \wedge B$ is nilpotent, and thus $\Phi_{u\wedge 1_B}(Z) \simeq *$. \end{proof} \begin{proof}[Proof of \thmref{v Tn thm}(2)] This is similar to the author's proof of \cite[Prop. 3.4]{k1}. Thanks to \propref{phi omega prop}, we just need to show that, if $v: \Sigma^d B \rightarrow B$ is a $v_n$--map, then there is a weak equivalence $$v^{-1}\operatorname{Map_{\mathcal S}}(B,X) \simeq \operatorname{Map_{\mathcal S}}(B,L_{T(n)}X).$$ This is easy to do. We claim that each of the maps $$ v^{-1}\operatorname{Map_{\mathcal S}}(B,X) \rightarrow v^{-1}\operatorname{Map_{\mathcal S}}(B,L_{T(n)}X) \leftarrow \operatorname{Map_{\mathcal S}}(B,L_{T(n)}X)$$ is an equivalence. If we let $T(n)$ be modeled by the telescope of the dual of $v$, then the first map identifies with the equivalence $T(n) \wedge X \xrightarrow{\sim} T(n) \wedge L_{T(n)}X$.
The second map is an equivalence as $v$ is a $T(n)_*$--isomorphism, so that each map in the diagram $$\operatorname{Map_{\mathcal S}}(B,L_{T(n)}X) \xrightarrow{v^*} \operatorname{Map_{\mathcal S}}(\Sigma^d B, L_{T(n)}X)\xrightarrow{v^*} \operatorname{Map_{\mathcal S}}(\Sigma^{2d} B, L_{T(n)}X) \rightarrow \dots$$ is an equivalence. \end{proof} \section{The construction of $\Phi_n$ and the proof of \thmref{main thm}} \label{phi_n section} \subsection{The construction on the level of homotopy categories} Recall that we have a functor $$\Phi: {\mathcal C}_n^{op} \times ho({\mathcal T}) \rightarrow ho({\mathcal S})$$ defined by letting $\Phi(F,Z)$ denote $\Sigma^t\Phi_u(Z)$, where $u: \Sigma^c A \rightarrow A$ is an unstable $v_n$--map of a finite CW complex $A$ such that $\Sigma^tF \simeq \Sigma^{\infty} A$. Now consider a resolution of $S^0$ as in \propref{resolution prop}: a diagram \begin{equation} \label{ho resolution} \xymatrix{ F(1) \ar[d]^(.38){q(1)} \ar[r]^{f(1)} & F(2) \ar[dl]^(.2){q(2)} \ar[r]^{f(2)} & F(3) \ar[dll]^(.2){q(3)} \ar[r] & \dots \\ S^0 &&& } \end{equation} such that each $F(k) \in {\mathcal C}_n$, and such that the map $$\displaystyle q = \operatorname*{hocolim}_k q(k):\operatorname*{hocolim}_k F(k) \rightarrow S^0$$ induces an isomorphism in $T(m)_*$ for all $m \geq n$. \begin{defn} Define $\Phi_n^T: ho({\mathcal T}) \rightarrow ho({\mathcal S})$ by the formula $$ \Phi^T_n(Z) = \operatorname*{holim}_k \Phi(F(k),Z).$$ \end{defn} We have the following theorem, which is \thmref{main thm} on the level of homotopy categories. \begin{thm} \label{ho thm} $\Phi_n^T$ satisfies the following properties. \\ \noindent{\bf (1)} $\Phi_n^T(Z)$ is $T(n)_*$--local for all $Z \in ho({\mathcal T})$. \\ \noindent{\bf (2)} $\operatorname{Map_{\mathcal S}}(F,\Phi_n^T(Z)) \simeq \Phi(F,Z) \in ho({\mathcal S})$ for all $F \in {\mathcal C}_n$ and $Z\in ho({\mathcal T})$. \\ \noindent{\bf (3)} \ $\Phi_n^T(\Omega^{\infty} X) \simeq L_{T(n)}X$, for all $X \in ho({\mathcal S})$.
\end{thm} \begin{proof} By \thmref{v Tn thm}(1), each $\Phi(F(k),Z)$ is $T(n)_*$--local. Since the homotopy limit of local objects is again local, statement (1) follows. To see that (2) is true, given $F \in {\mathcal C}_n$, we compute in $ho({\mathcal S})$: \begin{equation*} \begin{split} \operatorname{Map_{\mathcal S}}(F,\Phi_n^T(Z)) & \simeq \operatorname{Map_{\mathcal S}}(F, \operatorname*{holim}_k \Phi(F(k),Z))\\ & \simeq \operatorname*{holim}_k \operatorname{Map_{\mathcal S}}(F, \Phi(F(k),Z)) \\ & \simeq \operatorname*{holim}_k \operatorname{Map_{\mathcal S}}(F(k), \Phi(F,Z)) \\ & \simeq \operatorname{Map_{\mathcal S}}(\operatorname*{hocolim}_k F(k), \Phi(F,Z)) \\ & \xleftarrow{\sim} \operatorname{Map_{\mathcal S}}(S^0, \Phi(F,Z)) = \Phi(F,Z). \end{split} \end{equation*} Here the third equivalence is an application of \lemref{phi functor lem}(b), while the last map is an equivalence because it is induced by the $T(n)_*$--isomorphism $q$ and $\Phi(F,Z)$ is $T(n)_*$--local (by \thmref{v Tn thm}(1)). The proof that (3) is true is similar: \begin{equation*} \begin{split} \Phi_n^T(\Omega^{\infty} X) & = \operatorname*{holim}_k \Phi(F(k),\Omega^{\infty} X) \\ &\simeq \operatorname*{holim}_k \operatorname{Map_{\mathcal S}}(F(k),L_{T(n)}X) \text{ (by \thmref{v Tn thm}(2))} \\ & = \operatorname{Map_{\mathcal S}}(\operatorname*{hocolim}_k F(k), L_{T(n)}X) \\ & \xleftarrow{\sim} \operatorname{Map_{\mathcal S}}(S^0, L_{T(n)}X) = L_{T(n)}X. \end{split} \end{equation*} \end{proof} \subsection{Rigidifying the construction} \begin{defn} A {\em rigidification} of diagram (\ref{ho resolution}) consists of the following data. \\ \noindent{(i)} Finite complexes $B(k)$ of type $n$. \\ \noindent{(ii)} Natural numbers $d(k)$ such that $d(k)|d(k+1)$ together with unstable $v_n$--self maps $v(k): \Sigma^{d(k)}B(k) \rightarrow B(k)$.
\\ \noindent{(iii)} Natural numbers $t(k)$ such that $t(k) \leq t(k+1)$ together with maps $p(k): B(k) \rightarrow S^{t(k)}$ and $\beta(k): \Sigma^{e(k)}B(k) \rightarrow B(k+1)$, where $e(k) = t(k+1)-t(k)$. \\ These are required to satisfy three properties: \\ \noindent{(a)} $\Sigma^{\infty} B(k) \in {\mathcal S}$ represents $\Sigma^{t(k)}F(k) \in ho({\mathcal S})$, $\Sigma^{\infty} p(k)$ represents $\Sigma^{t(k)} q(k)$, and $\Sigma^{\infty} \beta(k)$ represents $\Sigma^{t(k+1)}f(k)$.\\ \noindent{(b)} With $r(k) = d(k+1)/d(k)$, the diagram \begin{equation*} \xymatrix{ \Sigma^{e(k) + d(k+1)}B(k) \ar[d]^{\Sigma^{e(k)}v(k)^{r(k)}} \ar[rr]^-{\Sigma^{d(k+1)}\beta(k)} && \Sigma^{d(k+1)}B(k+1) \ar[d]^{v(k+1)} \\ \Sigma^{e(k)}B(k) \ar[rr]^-{\beta(k)} && B(k+1) } \end{equation*} commutes in ${\mathcal T}$. \\ \noindent{(c)} The diagram \begin{equation*} \xymatrix{ \Sigma^{e(k)}B(k) \ar[rr]^-{\beta(k)} \ar[dr]^{\Sigma^{e(k)}p(k)} && B(k+1) \ar[dl]_{p(k+1)} \\ & S^{t(k+1)} & } \end{equation*} commutes. \end{defn} \begin{lem} Rigidifications exist. \end{lem} \begin{proof}[Sketch proof] This is basically Bousfield's construction of `an admissible spectral $L_n^f$--cospectrum' given in \cite[Thm.12.1]{bousfield3}. One proceeds by induction on $k$. Having constructed $B(k)$, $v(k)$, and $p(k)$, using the Periodicity Theorem in the stable range, one chooses $e(k)$ so large that there exist $B(k+1)$, $v(k+1)$, $p(k+1)$, and $\beta(k)$ making property (a) hold, and so that the diagrams in (b) and (c) commute up to homotopy. Then one replaces $B(k+1)$ and $\beta(k)$, so that the new $\beta(k)$ is a cofibration. Finally, one uses the homotopy extension property of cofibrations (applied to both $\beta(k)$ and $\Sigma^{d(k+1)}\beta(k)$) to replace $v(k+1)$ and $p(k+1)$ by homotopic maps so that the new diagrams (b) and (c) strictly commute. \end{proof} Given a rigidification of (\ref{ho resolution}), we will make use of two families of induced natural maps.
The maps $\beta(k):\Sigma^{e(k)}B(k) \rightarrow B(k+1)$ induce natural maps $$ \Phi_{v(k+1)}(Z) \rightarrow \Phi_{\Sigma^{e(k)}v(k)^{r(k)}}(Z)= \Omega^{e(k)}\Phi_{v(k)^{r(k)}}(Z).$$ Adjointing these and suspending $t(k)$ times yields natural maps $$ \beta(k)^*: \Sigma^{t(k+1)}\Phi_{v(k+1)}(Z) \rightarrow \Sigma^{t(k)}\Phi_{v(k)^{r(k)}}(Z).$$ The `top cell' maps $p(k): B(k) \rightarrow S^{t(k)}$ induce maps $$ s^{-t(k)}X \rightarrow \Omega^{t(k)}X = \operatorname{Map_{\mathcal S}}(S^{t(k)}, X) \rightarrow \operatorname{Map_{\mathcal S}}(B(k),X).$$ Adjointing these yields natural maps $$ p(k)^*: X \rightarrow s^{t(k)} \operatorname{Map_{\mathcal S}}(B(k),X).$$ The last maps are compatible as $k$ varies, and so induce a natural map $$ p^*: X \rightarrow \operatorname*{holim}_k s^{t(k)} \operatorname{Map_{\mathcal S}}(B(k),X).$$ \begin{lem} \label{little lem} $p^*$ is an equivalence if $X$ is $T(n)$--local. \end{lem} \begin{proof} In the homotopy category, $p(k)^*$ corresponds to $$q(k)^*: X \rightarrow \operatorname{Map_{\mathcal S}}(F(k),X)$$ so that $p^*$ corresponds to $$ q^*: X = \operatorname{Map_{\mathcal S}}(S^0,X) \rightarrow \operatorname{Map_{\mathcal S}}(\operatorname*{hocolim}_k F(k),X).$$ This is an equivalence if $X$ is $T(n)$--local. \end{proof} \begin{defn} \label{Phi defn} Given a rigidification of (\ref{ho resolution}), we define $\Phi_n: {\mathcal T} \rightarrow {\mathcal S}$ by letting $\Phi_n(Z)$ be the homotopy limit of the diagram \begin{equation*} \xymatrix @-.8pc{ & \Sigma^{t(3)}\Phi_{v(3)^{r(3)}}(Z) & \Sigma^{t(2)}\Phi_{v(2)^{r(2)}}(Z) & \Sigma^{t(1)}\Phi_{v(1)^{r(1)}}(Z) \\ \dots \ar[ur] & \Sigma^{t(3)}\Phi_{v(3)}(Z) \ar[u]_{\wr} \ar[ur]_-{\beta(2)^*} & \Sigma^{t(2)}\Phi_{v(2)}(Z) \ar[u]_{\wr} \ar[ur]_-{\beta(1)^*} & \Sigma^{t(1)}\Phi_{v(1)}(Z). \ar[u]_{\wr} } \end{equation*} \end{defn} Informally, we write $\displaystyle \Phi_n(Z) = \operatorname*{holim}_k \Sigma^{t(k)} \Phi_{v(k)}(Z)$.
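On homotopy groups, the definition unwinds as follows; this is simply the standard Milnor exact sequence for the homotopy limit of a tower of spectra, recorded here as a sketch. \begin{rem} For all $Z$, there is a natural short exact sequence $$ 0 \rightarrow {\lim_k}^1 \, \pi_{*+1}(\Sigma^{t(k)}\Phi_{v(k)}(Z)) \rightarrow \pi_*(\Phi_n(Z)) \rightarrow \lim_k \, \pi_*(\Sigma^{t(k)}\Phi_{v(k)}(Z)) \rightarrow 0,$$ and, by \lemref{big phi lemma}(a), $\pi_*(\Sigma^{t(k)}\Phi_{v(k)}(Z)) = v(k)^{-1}\pi_{*-t(k)}(Z;B(k))$. Thus, up to a $\lim^1$ term, $\pi_*(\Phi_n(Z))$ is computed from the periodic homotopy groups of $Z$ with respect to the self maps $v(k)$. \end{rem}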
\begin{proof}[Proof of \thmref{main thm}] By construction, in $ho({\mathcal S})$, $\Phi_n(Z)$ represents the homotopy limit of the diagram $$ \dots \rightarrow \Phi(F(3),Z) \xrightarrow{f(2)^*} \Phi(F(2),Z) \xrightarrow{f(1)^*} \Phi(F(1),Z),$$ i.e. $\Phi_n^T(Z)$. The various properties of $\Phi_n$ stated in \thmref{main thm} are verified by giving proofs similar to those given in proving the analogous properties of $\Phi^T_n$ listed in \thmref{ho thm}, with the constructions of natural equivalences `rigidifying' as needed in straightforward ways. We run through some details. Property (1) is clear: $\Phi_n(Z)$ is the homotopy limit of $T(n)$--local spectra, thus is itself $T(n)$--local. For property (2), we have \begin{equation*} \begin{split} \operatorname{Map_{\mathcal S}}(B,\Phi_n(Z)) & = \operatorname{Map_{\mathcal S}}(B, \operatorname*{holim}_k \Sigma^{t(k)}\Phi_{v(k)}(Z))\\ & = \operatorname*{holim}_k \operatorname{Map_{\mathcal S}}(B, \Sigma^{t(k)}\Phi_{v(k)}(Z)) \\ & \xleftarrow{\sim} \operatorname*{holim}_k \Sigma^{t(k)} \operatorname{Map_{\mathcal S}}(B, \Phi_{v(k)}(Z)) \\ & \simeq \operatorname*{holim}_k \Sigma^{t(k)} \operatorname{Map_{\mathcal S}}(B(k), \Phi_{v}(Z)) \text{ (by \lemref{smash lemma})}\\ & \xrightarrow{\sim} \operatorname*{holim}_k s^{t(k)}\operatorname{Map_{\mathcal S}}(B(k), \Phi_{v}(Z)) \\ & \xleftarrow{\sim} \Phi_v(Z), \end{split} \end{equation*} since $\Phi_{v}(Z)$ is $T(n)$--local. For property (3), we have \begin{equation*} \begin{split} \Phi_n(\Omega^{\infty} X) & = \operatorname*{holim}_k \Sigma^{t(k)}\Phi_{v(k)}(\Omega^{\infty} X) \\ & \xrightarrow{\sim} \operatorname*{holim}_k s^{t(k)}\Phi_{v(k)}(\Omega^{\infty} X) \\ &\simeq \operatorname*{holim}_k s^{t(k)}\operatorname{Map_{\mathcal S}}(B(k),L_{T(n)}X) \text{ (by \thmref{v Tn thm}(2))} \\ & \xleftarrow{\sim} L_{T(n)}X \text{ (by \lemref{little lem})}.
\end{split} \end{equation*} Finally, suppose that a functor $\Phi^{\prime}_n: {\mathcal T} \rightarrow {\mathcal S}$ satisfies the next two properties, analogues of properties (1) and (2). \\ \noindent{($1^{\prime}$)} \ $\Phi^{\prime}_n(Z)$ is $T(n)$--local, for all spaces $Z$. \\ \noindent{($2^{\prime}$)} \ There is a weak equivalence of spectra $\operatorname{Map}_{{\mathcal S}}(B,\Phi^{\prime}_n(Z)) \simeq \Phi_v(Z)$, for all unstable $v_n$--self maps $v:\Sigma^d B \rightarrow B$, natural in both $Z$ and $v$. \\ \noindent Then we have: \begin{equation*} \begin{split} \Phi^{\prime}_n(Z) & \xrightarrow{\sim} \operatorname*{holim}_k s^{t(k)}\operatorname{Map_{\mathcal S}}(B(k), \Phi^{\prime}_n(Z)) \text{ (by ($1^{\prime}$))}\\ & \xleftarrow{\sim} \operatorname*{holim}_k \Sigma^{t(k)}\operatorname{Map_{\mathcal S}}(B(k), \Phi^{\prime}_n(Z)) \\ & \simeq \operatorname*{holim}_k \Sigma^{t(k)} \Phi_{v(k)}(Z) \text{ (by ($2^{\prime}$))}\\ & = \Phi_n(Z). \end{split} \end{equation*} Thus properties (1) and (2) characterize $\Phi_n$. \end{proof} \section{Bousfield's adjoint $\Theta_n$} \label{theta_n section} In \cite{bousfield3}, Bousfield constructs a functor $\Theta_n: {\mathcal S} \rightarrow {\mathcal T}$, which serves as a left adjoint of sorts to $\Phi_n$. In this section, we run through how this works. \subsection{The construction of $\Theta_v$.} Given a self map of a space $v: \Sigma^d B \rightarrow B$, the functor $$ \Phi_v: {\mathcal T} \rightarrow {\mathcal S}$$ admits a left adjoint $$ \Theta_v: {\mathcal S} \rightarrow {\mathcal T},$$ defined as follows.
Given a spectrum $X$ with iterated structure maps $\sigma_r: \Sigma^d X_{rd} \rightarrow X_{(r+1)d}$, $\Theta_v(X)$ is defined to be the coequalizer of the two maps $$ \bigvee_r \Sigma^d B \wedge X_{rd} \begin{array}{c} v \\[-.08in] \longrightarrow \\[-.1in] \longrightarrow \\[-.1in] \sigma \end{array} \bigvee_r B \wedge X_{rd},$$ where, on the $rd^{th}$ wedge summand, $v$ is $\Sigma^d B \wedge X_{rd} \xrightarrow{v \wedge 1} B \wedge X_{rd}$, while $\sigma$ is $ \Sigma^d B \wedge X_{rd} \simeq B \wedge \Sigma^d X_{rd} \xrightarrow{1 \wedge \sigma_r} B \wedge X_{(r+1)d}$. It is easy and formal to check that $\Theta_v$ and $\Phi_v$ form an adjoint pair. However, to be homotopically meaningful, one would like these functors to form a Quillen pair, so that they induce an adjunction on the associated homotopy categories. For this to be true, it is necessary and sufficient to check that $\Phi_v$ preserves trivial fibrations, and also fibrations between fibrant objects. (See, e.g. \cite[Lem.10.5]{bousfield3} or \cite[Prop.8.5.4]{hirschhorn}.) $\Phi_v$ certainly preserves trivial fibrations, as it takes a trivial fibration to a levelwise trivial fibration, which will then be a stable trivial fibration. Suppose that $W \rightarrow Z$ is a fibration in ${\mathcal T}$. In the stable model category structure, $\Phi_v(W) \rightarrow \Phi_v(Z)$ will be a fibration between fibrant objects only if the obvious necessary condition holds: $\Phi_v(W)$ and $\Phi_v(Z)$ must both be fibrant, i.e. $\Omega$--spectra. Unravelling the definitions, $\Phi_v(Z)$ will be an $\Omega$--spectrum if and only if the map $$ v^*: \operatorname{Map}(B,Z) \rightarrow \operatorname{Map}(\Sigma^dB,Z)$$ is a weak equivalence. One can {\em force} this condition to be true as follows. 
Let $L_v: {\mathcal T} \rightarrow {\mathcal T}$ denote localization with respect to the map $v$, and then let ${\mathcal T}_v$ denote ${\mathcal T}$ with the associated model category structure in which the weak equivalences are the maps $f$ such that $L_vf$ is a weak equivalence in ${\mathcal T}$. (See e.g. \cite{hirschhorn} for these constructions and many references to the literature.) We recover a variant of \cite[Lem.10.6]{bousfield3}. \begin{lem} \label{adjoint lemma} For any $v: \Sigma^d B \rightarrow B$, we have the following. \\ \noindent{\bf (a)} $\Theta_v: {\mathcal S} \rightarrow {\mathcal T}_v$ and $\Phi_v: {\mathcal T}_v \rightarrow {\mathcal S}$ form a Quillen pair. \\ \noindent{\bf (b)} For all $X \in {\mathcal S}$ and $Z \in {\mathcal T}$, there is a natural bijection $$ [\Theta_v(X), L_vZ] \simeq [X, \Phi_v(L_vZ)].$$ \end{lem} \subsection{Periodic localization of spaces.} In \cite[\S 4.3]{bousfield3}, Bousfield defines $$ L_n^f: {\mathcal T} \rightarrow {\mathcal T}$$ to be localization with respect to the map $\Sigma A \rightarrow *$, where $A$ is chosen so that $\Sigma^{\infty} A$ is equivalent to a finite spectrum of type $n+1$, and the connectivity of $H_*(A;{\mathbb Z}/p)$ is chosen to be as low as possible. His proof that this is independent of choice appears in \cite[Thm.9.15]{bousfieldJAMS}, and depends on the Thick Subcategory Theorem. For our purposes, $L_n^f: {\mathcal T} \rightarrow {\mathcal T}$ satisfies two elementary properties that we care about. \begin{lem} \label{local Oinfy lemma}If $X$ is an $L_n^f$--local spectrum, then $\Omega^{\infty} X$ is an $L_n^f$--local space. \end{lem} \begin{proof} Let $A$ be chosen as in the definition of $L_n^f: {\mathcal T} \rightarrow {\mathcal T}$. For all $t$, $\pi_t(\operatorname{Map_{\mathcal T}}(\Sigma A, \Omega^{\infty} X)) = [\Sigma^{\infty} \Sigma^{t+1}A,X] = 0$, since $L_n^f$--local spectra admit no nontrivial maps from objects in ${\mathcal C}_{n+1}$.
Thus $\operatorname{Map_{\mathcal T}}(\Sigma A, \Omega^{\infty} X) \simeq *$, and so $\Omega^{\infty} X$ is $L_n^f$--local. \end{proof} \begin{lem} \label{double susp lemma} If $Z$ is an $L_n^f$--local space, then it is also $L_v$--local for all unstable $v_n$--self maps $v: \Sigma^d B \rightarrow B$ that are double suspensions. \end{lem} \begin{proof} Since $v$ is a double suspension, it fits into a cofibration sequence of the form $$ \Sigma C \rightarrow \Sigma^d B \xrightarrow{v} B \rightarrow \Sigma^2 C,$$ where $\Sigma^{\infty} C$ has type $n+1$. This induces a fibration sequence $$ \operatorname{Map_{\mathcal T}}(\Sigma^2 C, Z) \rightarrow \operatorname{Map_{\mathcal T}}(B, Z) \xrightarrow{v^*} \operatorname{Map_{\mathcal T}}(\Sigma^d B, Z) \rightarrow \operatorname{Map_{\mathcal T}}(\Sigma C, Z),$$ in which the first and last of these mapping spaces are null if $Z$ is $L_n^f$--local. Thus the middle map is an equivalence, and so $Z$ is $L_v$--local. \end{proof} A deeper property of $L_n^f$ goes as follows. \begin{prop} \label{Ln prop} If $v$ is a $v_n$--self map, the natural map $\Phi_v(Z) \rightarrow \Phi_v(L_n^fZ)$ is a stable equivalence. \end{prop} \begin{proof} We wish to show that the map of spaces $$ \operatorname*{hocolim}_r \operatorname{Map_{\mathcal T}}(\Sigma^{rd}B,Z) \rightarrow \operatorname*{hocolim}_r \operatorname{Map_{\mathcal T}}(\Sigma^{rd}B,L_n^fZ)$$ induces an isomorphism on homotopy groups (in high dimensions). This is pretty much \cite[Theorem 11.5]{bousfieldJAMS}, and we sketch how the proof goes.
The map we care about factors in the homotopy category: \begin{equation*} \xymatrix{ \operatorname*{hocolim}_r \operatorname{Map_{\mathcal T}}(\Sigma^{rd}B,Z) \ar[d] \ar[r] & \operatorname*{hocolim}_r \operatorname{Map_{\mathcal T}}(\Sigma^{rd}B,L_n^fZ) \\ L_n^f \operatorname*{hocolim}_r \operatorname{Map_{\mathcal T}}(\Sigma^{rd}B,Z) & \operatorname*{hocolim}_r L_n^f \operatorname{Map_{\mathcal T}}(\Sigma^{rd}B,Z), \ar[l]_{\sim} \ar[u]} \end{equation*} where the indicated equivalence is \cite[Lemma 11.6]{bousfieldJAMS}. The right vertical arrow induces an isomorphism on homotopy groups in high dimensions, due to \cite[Theorem 8.3]{bousfieldJAMS}, a general result which describes to what extent functors like $L_{\Sigma C}$ preserve fibrations. Applied to the case in hand, one learns that there is a number $\delta$ such that the natural map $$ L_n^f \operatorname{Map_{\mathcal T}}(C,Z) \rightarrow \operatorname{Map_{\mathcal T}}(C, L_n^f Z)$$ will induce an isomorphism on $\pi_i$ for $i \geq \delta$ for all finite complexes $C$. Thus our right vertical map will induce isomorphisms on $\pi_i$ in the same range. It follows that we need only check that the left vertical map is an equivalence, or, otherwise said, that $\displaystyle \operatorname*{hocolim}_r \operatorname{Map_{\mathcal T}}(\Sigma^{rd}B,Z)$ is $L_n^f$--local. Recalling that $L_n^f = L_{\Sigma A}$ for a well chosen $A$ of type $(n+1)$, we have \begin{equation*} \begin{split} \operatorname{Map_{\mathcal T}}(\Sigma A, \operatorname*{hocolim}_r \operatorname{Map_{\mathcal T}}(\Sigma^{rd}B,Z)) & \simeq \operatorname*{hocolim}_r \operatorname{Map_{\mathcal T}}(\Sigma A, \operatorname{Map_{\mathcal T}}(\Sigma^{rd}B,Z)) \\ & = \operatorname*{hocolim}_r \operatorname{Map_{\mathcal T}}(\Sigma A \wedge \Sigma^{rd}B,Z) \\ & \simeq *, \end{split} \end{equation*} since $1_{\Sigma A} \wedge v$ will be nilpotent by the Nilpotence Theorem.
\end{proof} \begin{rem} It would be interesting to have a proof of this proposition that avoided the use of \cite[Theorem 8.3]{bousfieldJAMS}. \end{rem} Combining this proposition with \lemref{adjoint lemma} and \lemref{double susp lemma} yields the next theorem. \begin{thm} \label{theta v thm} If $v: \Sigma^d B \rightarrow B$ is a $v_n$--self map and a double suspension, there is a natural bijection $$ [\Theta_v(X), L_n^fZ] \simeq [X,\Phi_v(Z)],$$ for all $Z \in {\mathcal T}$ and $X \in {\mathcal S}$. \end{thm} \subsection{The definition of $\Theta_n$.} Let the following data make up a rigidification of diagram (\ref{ho resolution}), as used in the definition of $\Phi_n$: \\ \noindent{(i)} Finite complexes $B(k)$ of type $n$. \\ \noindent{(ii)} Natural numbers $d(k)$ such that $d(k)|d(k+1)$ together with unstable $v_n$--self maps $v(k): \Sigma^{d(k)}B(k) \rightarrow B(k)$. \\ \noindent{(iii)} Natural numbers $t(k)$ such that $t(k) \leq t(k+1)$ together with maps $p(k): B(k) \rightarrow S^{t(k)}$ and $\beta(k): \Sigma^{e(k)}B(k) \rightarrow B(k+1)$, where $e(k) = t(k+1)-t(k)$. \\ By double suspending everything, we can also assume that each $v(k)$ is a double suspension.
\begin{defn} Given this data, we define $\Theta_n: {\mathcal S} \rightarrow {\mathcal T}$ by letting $\Theta_n(X)$ be the homotopy colimit of the diagram \begin{equation*} \xymatrix @-.8pc{ \Theta_{v(1)^{r(1)}}(\Omega^{t(1)}X) \ar[d] \ar[dr]^-{\beta(1)_*}& \Theta_{v(2)^{r(2)}}(\Omega^{t(2)}X) \ar[d] \ar[dr]^-{\beta(2)_*}& \Theta_{v(3)^{r(3)}}(\Omega^{t(3)}X) \ar[d] \ar[dr]^-{\beta(3)_*}& \\ \Theta_{v(1)}(\Omega^{t(1)}X)& \Theta_{v(2)}(\Omega^{t(2)}X)& \Theta_{v(3)}(\Omega^{t(3)}X) & \dots, \\ } \end{equation*} where each vertical map will be an $L_n^f$--equivalence, and each $\beta(k)_*$ is itself a natural zig-zag diagram $$ \Theta_{v(k)^{r(k)}}(\Omega^{t(k)}X) \xleftarrow{\sim} \Theta_{v(k)^{r(k)}}(\Sigma^{e(k)}\Omega^{t(k+1)}X) \xrightarrow{\beta(k)_*} \Theta_{v(k+1)}(\Omega^{t(k+1)}X).$$ \end{defn} Informally, we write $\displaystyle \Theta_n(X) = \operatorname*{hocolim}_k \Theta_{v(k)}(\Omega^{t(k)}X)$. From \thmref{theta v thm}, we deduce \begin{thm}[{\cite[Theorem 5.4(iii)]{bousfield3}}] \label{adjoint thm} There is a natural bijection $$ [\Theta_n(X), L_n^fZ] = [X, \Phi_n(Z)]$$ for all $Z \in {\mathcal T}$ and $X \in {\mathcal S}$. \end{thm} \begin{proof} The idea is that, since $\Phi_n$ is the limit of functors of the form $\Phi_v$, and $\Theta_n$ is the colimit of their adjoints $\Theta_v$, the theorem should follow from \thmref{theta v thm}. The only detail needing a careful check is that the zig-zag natural map $$ \Theta_{v(k)^{r(k)}}(\Omega^{t(k)}X) \xleftarrow{\sim} \Theta_{v(k)^{r(k)}}(\Sigma^{e(k)}\Omega^{t(k+1)}X) \xrightarrow{\beta(k)_*} \Theta_{v(k+1)}(\Omega^{t(k+1)}X)$$ used in the definition of $\Theta_n$ above really is adjoint to the more directly defined map $$ \Sigma^{t(k+1)}\Phi_{v(k+1)}(Z) \xrightarrow{\beta(k)^*} \Sigma^{t(k)}\Phi_{v(k)^{r(k)}}(Z)$$ used in the definition of $\Phi_n$.
To see this we have a commutative diagram: {\tiny \begin{equation*} \xymatrix @-1.2pc{ \operatorname{Map_{\mathcal T}}(\Theta_{v(k+1)}\Omega^{t(k+1)}X,Z) \ar[dd]^{\wr} \ar[r]^-{\beta(k)^*} & \operatorname{Map_{\mathcal T}}(\Theta_{v(k)^{r(k)}}\Sigma^{e(k)}\Omega^{t(k+1)}X,Z) \ar[dd]^{\wr} & \operatorname{Map_{\mathcal T}}(\Theta_{v(k)^{r(k)}}\Omega^{t(k)}X,Z)\ar[dd]^{\wr} \ar[l]_-{\sim} \\ && \\ \operatorname{Map_{\mathcal T}}(\Theta_{v(k+1)}s^{-t(k+1)}X,Z) \ar@{=}[dd] \ar[r]^-{\beta(k)^*} & \operatorname{Map_{\mathcal T}}(\Theta_{v(k)^{r(k)}}\Sigma^{e(k)}s^{-t(k+1)}X,Z) \ar@{=}[dd] & \operatorname{Map_{\mathcal T}}(\Theta_{v(k)^{r(k)}}s^{-t(k)}X,Z) \ar@{=}[dd] \ar[l]_-{\sim} \\ && \\ \operatorname{Map_{\mathcal S}}(X,s^{t(k+1)}\Phi_{v(k+1)}Z) \ar[r]^-{\beta(k)^*} & \operatorname{Map_{\mathcal S}}(X,s^{t(k+1)}\Omega^{e(k)}\Phi_{v(k)^{r(k)}}Z) & \operatorname{Map_{\mathcal S}}(X,s^{t(k)}\Phi_{v(k)^{r(k)}}Z) \ar[l]_-{\sim} \\ && \\ \operatorname{Map_{\mathcal S}}(X,\Sigma^{t(k+1)}\Phi_{v(k+1)}Z) \ar[uu]_{\wr} \ar[r]^-{\beta(k)^*} & \operatorname{Map_{\mathcal S}}(X,\Sigma^{t(k)}\Phi_{v(k)^{r(k)}}Z) \ar[uu]_{\wr} & \operatorname{Map_{\mathcal S}}(X,\Sigma^{t(k)}\Phi_{v(k)^{r(k)}}Z). \ar[uu]_{\wr} \ar@{=}[l] \\ } \end{equation*} } \end{proof} \begin{rem} The lack of elegance in the proof of the `detail' checked above reflects the fact that, though our functor $\Phi_n: {\mathcal T} \rightarrow {\mathcal S}$ is intuitively the homotopy limit of the right adjoint functors $\Phi_{v(k)}$, it does {\em not} make up the right part of an adjoint pair. Note that our official definition involves the use of $\Sigma: {\mathcal S} \rightarrow {\mathcal S}$, which induces an equivalence of homotopy categories, but is a left adjoint, not a right one. It doesn't seem possible to somehow replace $\Sigma^{t(k)}$ by $s^{t(k)}$ (which {\em is} a right adjoint) in Definition \ref{Phi defn}.
This same problem shows up in Bousfield's construction: see the paragraph before \cite[Theorem 11.7]{bousfield3}. \end{rem} \begin{cor} In $ho({\mathcal S})$, there is a natural equivalence $$ L_{T(n)}\Sigma^{\infty} \Theta_n(X) \simeq L_{T(n)}X.$$ \end{cor} \begin{proof} In the last theorem, let $Z$ be the space $\Omega^{\infty} L_{T(n)}Y$, which is $L_n^f$--local by \lemref{local Oinfy lemma}. We see that, for all $X,Y \in {\mathcal S}$, there are natural isomorphisms \begin{equation*} \begin{split} [\Sigma^{\infty} \Theta_n(X), L_{T(n)}Y] & \simeq [\Theta_n(X), \Omega^{\infty} L_{T(n)}Y] \\ & \simeq [X, \Phi_n(\Omega^{\infty} L_{T(n)}Y)] \\ & \simeq [X,L_{T(n)}Y]. \end{split} \end{equation*} The corollary then follows from Yoneda's lemma. \end{proof} \begin{rem} The careful reader will note that this corollary is {\em not} dependent on \propref{Ln prop} (and thus not dependent on Bousfield's careful study of the behavior of localized fibration sequences), as we have derived it by only applying our other results to a space $Z$ that is $L_n^f$--local. \end{rem} \begin{rem} From the corollary, it follows that the Telescope Conjecture is equivalent to the statement that if a {\em space} is $K(n)_*$--acyclic, then it is $T(n)_*$--acyclic. \end{rem} \section{The section $\eta_n$} \label{eta_n section} One of the main applications of the functor $\Phi_n$ is that it leads to the construction of a natural transformation $$\eta_n(X): L_{T(n)}X \rightarrow L_{T(n)} \Sigma^{\infty} \Omega^{\infty} X$$ which is a natural homotopy section of the $T(n)$--localization of the evaluation map $$ \epsilon(X): \Sigma^{\infty} \Omega^{\infty} X \rightarrow X.$$ The construction is immediate: $\eta_n$ is defined by applying $\Phi_n$ to the natural map $$\eta(\Omega^{\infty} X): \Omega^{\infty} X \rightarrow Q \Omega^{\infty} X.$$ This section is both used and studied in \cite{k3,rezk}. It seems plausible that $\eta_n$ is the {\em unique} section of $L_{T(n)}\epsilon$.
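To unwind this a little (a sketch, using only the identification of $\Phi_n(\Omega^{\infty} Y)$ with $L_{T(n)}Y$ for a spectrum $Y$, recalled in the next section): $\eta_n(X)$ is the composite $$ L_{T(n)}X \simeq \Phi_n(\Omega^{\infty} X) \xrightarrow{\Phi_n(\eta(\Omega^{\infty} X))} \Phi_n(\Omega^{\infty} \Sigma^{\infty} \Omega^{\infty} X) \simeq L_{T(n)}\Sigma^{\infty} \Omega^{\infty} X,$$ and the section property is then a formal consequence of the triangle identity $\Omega^{\infty} \epsilon(X) \circ \eta(\Omega^{\infty} X) = 1_{\Omega^{\infty} X}$ for the adjoint pair $(\Sigma^{\infty}, \Omega^{\infty})$: applying $\Phi_n$ to this identity shows that $L_{T(n)}\epsilon(X) \circ \eta_n(X)$ is the identity.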
We have a couple of partial results along these lines. The first was discussed in \cite{k4}. \begin{prop} $\eta_n$ is unique up to `tower phantom' behavior in the Goodwillie tower for $\Sigma^{\infty} \Omega^{\infty}$ in the following sense: for all $d$, the composite $$ L_{T(n)}X \xrightarrow{\eta_n(X)} L_{T(n)} \Sigma^{\infty} \Omega^{\infty} X \xrightarrow{L_{T(n)}e_d(X)} L_{T(n)}P_d^{\infty}(X)$$ is the unique natural section of $L_{T(n)}P_d^{\infty}(X) \xrightarrow{L_{T(n)}p_d(X)} L_{T(n)}X$. \end{prop} Here $\Sigma^{\infty} \Omega^{\infty} X \xrightarrow{e_d(X)} P_d^{\infty}(X)$ is the $d^{th}$ stage of the Goodwillie tower, and $p_d$ is the canonical natural transformation such that $\epsilon = p_d \circ e_d$. The uniqueness asserted in the proposition is an immediate consequence of the main theorem of \cite{k2}. Our second observation is in the spirit of observations by Rezk in \cite{rezk}. As usual, we let $QZ$ denote $\Omega^{\infty} \Sigma^{\infty} Z$. \begin{prop} Any natural transformation $f(X): X \rightarrow L_{T(n)}\Sigma^{\infty} \Omega^{\infty} X$ will be determined by $f(S^0): S^0 \rightarrow L_{T(n)}\Sigma^{\infty} QS^0$. \end{prop} \begin{proof} We begin by showing that we can reduce to the case when $X = \Sigma^{\infty} Z$, a suspension spectrum. As the range of $f$ is $T(n)$--local, we can extend the domain to $L_{T(n)}X$. In the diagram \begin{equation*} \xymatrix{ L_{T(n)}\Sigma^{\infty} \Omega^{\infty} X \ar[d]^{\epsilon_*} \ar[rr]^-{f(\Sigma^{\infty} \Omega^{\infty} X)} && L_{T(n)}\Sigma^{\infty} Q\Omega^{\infty} X \ar[d]^{\epsilon_*} \\ L_{T(n)}X \ar[rr]^-{f(X)} && L_{T(n)}\Sigma^{\infty} \Omega^{\infty} X, } \end{equation*} the left vertical map has a section given by $\eta_n(X)$. Thus the bottom map is determined by the top map. Next we observe that any continuous functor $G: {\mathcal T} \rightarrow {\mathcal S}$ comes with a natural transformation $Z \wedge G(W) \rightarrow G(Z \wedge W)$, and this structure is natural in $G$.
Applied to our situation, for any space $Z$, we have a commutative diagram \begin{equation*} \xymatrix{ Z \wedge \Sigma^{\infty} S^0 \ar[d]^{\wr} \ar[rr]^-{1_Z \wedge f(S^0)} && Z \wedge L_{T(n)}\Sigma^{\infty} QS^0 \ar[d] \\ \Sigma^{\infty} Z \ar[rr]^-{f(Z)} && L_{T(n)}\Sigma^{\infty} QZ. } \end{equation*} Thus the bottom map is determined by the top. \end{proof} \begin{quest} Is it true that the map $$ L_{T(n)}\Sigma^{\infty} QS^0 \rightarrow \prod_d L_{T(n)}\Sigma^{\infty} B\Sigma_{d+},$$ arising from the James--Hopf maps $QS^0 \rightarrow QB\Sigma_{d+}$, is monic on $\pi_0$? \end{quest} If so, then the last propositions combine to show that $\eta_n$ is the unique natural section to the localized evaluation map. \section{A guide to computations} \label{computations} In this section we briefly survey calculations that have been made of $\Phi_n(Z)$ for various pairs $(n,Z)$. Before jumping into this, we first explain that there is also interest in explicit calculations of $\pi_*(\Phi_n(Z))$, or variants thereof. \subsection{Periodic homotopy groups of spaces} Recall that there is a sequence of spectra $$ F(1) \rightarrow F(2) \rightarrow F(3) \rightarrow \dots $$ such that each $F(k)$ is finite of type $n$ and $\displaystyle \operatorname*{hocolim}_k F(k) \simeq C_{n-1}^fS^0$. Dualizing this in the stable homotopy category, one gets a diagram $$ {\mathcal D} F(1) \leftarrow {\mathcal D} F(2) \leftarrow {\mathcal D} F(3) \leftarrow \dots. $$ Furthermore, each ${\mathcal D} F(k)$ comes with a $v_n$--self map which we will generically call `$v$'; these are compatible in the usual way, and any given finite part of this data `eventually' desuspends to spaces. Thus the following definition makes sense.
\begin{defn} The $v_n$--periodic homotopy groups of a space $Z$ are defined to be $$v_n^{-1}\pi_*(Z) = \operatorname*{colim}_k v^{-1}\pi_*(Z;{\mathcal D} F(k)).$$ \end{defn} \begin{ex} When $n=1$, one takes $F(k)$ to be a Moore spectrum of type ${\mathbb Z}/p^k$, and the self maps are `Adams maps'. Using traditional notation, $$v_1^{-1}\pi_*(Z) = \operatorname*{colim}_k v^{-1}\pi_{*+1}(Z;{\mathbb Z}/p^k).$$ \end{ex} \begin{lem} The groups $v_n^{-1}\pi_*(Z)$ can be rewritten in terms of $\Phi_n(Z)$ in various ways: \begin{equation*} \begin{split} v_n^{-1}\pi_*(Z) & = \operatorname*{colim}_k \pi_*(\operatorname{Map_{\mathcal S}}({\mathcal D} F(k),\Phi_n(Z))) \\ & = \operatorname*{colim}_k \pi_*(F(k) \wedge \Phi_n(Z)) \\ & = \pi_*(C_{n-1}^f\Phi_n(Z)) \\ & = \pi_*(M_{n}^f\Phi_n(Z)). \end{split} \end{equation*} \end{lem} The next lemma relates $\Phi_n$--equivalences to isomorphisms on localized homotopy groups. \begin{lem} \label{v-equiv lemma} Given a map $f: W \rightarrow Z$ between spaces, the following conditions are equivalent. \\ \noindent{\bf (a)} $\Phi_n(f): \Phi_n(W) \rightarrow \Phi_n(Z)$ is a weak equivalence. \\ \noindent{\bf (b)} $f_*: v_n^{-1}\pi_*(W) \rightarrow v_n^{-1}\pi_*(Z)$ is an isomorphism. \\ \noindent{\bf (c)} $f_*: v^{-1}\pi_*(W;B) \rightarrow v^{-1}\pi_*(Z;B)$ is an isomorphism for all unstable $v_n$--self maps $v: \Sigma^d B \rightarrow B$. \\ \noindent{\bf (d)} $f_*: v^{-1}\pi_*(W;B) \rightarrow v^{-1}\pi_*(Z;B)$ is an isomorphism for some unstable $v_n$--self map $v: \Sigma^d B \rightarrow B$. \end{lem} \begin{proof} Let $g = \Phi_n(f)$. Then condition (a) says that $g$ is an equivalence, (b) says that $M_n^fg$ is an equivalence, and conditions (c) and (d) say that ${\mathcal D} B \wedge g$ is an equivalence for appropriate $B$'s.
As $g$ is a map between $T(n)$--local spectra, conditions (a)--(d) are all equivalent: clearly (a) implies all the other statements, $L_{T(n)}M_n^f g \simeq g$ so that (b) implies (a), (c) obviously implies (d), and finally (d) implies that $v^{-1}{\mathcal D} B \wedge g$ is an equivalence, so that $g$ is a $T(n)_*$--isomorphism and thus (a) holds. \end{proof} \subsection{Basic observations} From properties of $\Phi_v$ listed in \lemref{big phi lemma}, one deduces the next two useful basic calculational rules. \begin{lem} $\operatorname{Map_{\mathcal S}}(A,\Phi_n(Z)) \simeq \Phi_n(\operatorname{Map_{\mathcal T}}(A,Z))$ for all $A,Z \in {\mathcal T}$. \end{lem} \begin{lem} $\Phi_n$ takes homotopy pullbacks in ${\mathcal T}$ to homotopy pullbacks in ${\mathcal S}$. \end{lem} One might wonder to what extent $\Phi_n$ takes the homotopy limit (`microscope') of a sequence $ Z_1 \leftarrow Z_2 \leftarrow Z_3 \leftarrow \dots$ to the corresponding homotopy limit in ${\mathcal S}$. Unfortunately this will not always be the case; the correct statement can be formally deduced from \thmref{adjoint thm}. \begin{lem} Given a sequence of spaces $ Z_1 \leftarrow Z_2 \leftarrow Z_3 \leftarrow \dots$, we have $$ \operatorname*{holim}_k \Phi_n(Z_k) \simeq \Phi_n(\operatorname*{holim}_k L_n^f Z_k).$$ \end{lem} Since $\Phi_n(\operatorname*{holim}_k Z_k) \simeq \Phi_n(L_n^f \operatorname*{holim}_k Z_k)$, one sees that the failure of $\Phi_n$ to commute with microscopes is caused by the failure of $L_n^f: {\mathcal T} \rightarrow {\mathcal T}$ to commute with microscopes. More constructively, one has the following consequence of \lemref{v-equiv lemma}.
\begin{lem} \label{convergence lemma} Given a sequence of spaces $ Z_1 \leftarrow Z_2 \leftarrow Z_3 \leftarrow \dots$, the natural map $$\Phi_n(\operatorname*{holim}_k Z_k) \rightarrow \operatorname*{holim}_k \Phi_n(Z_k)$$ is an equivalence if and only if $$ v^{-1}\lim_k \pi_*(Z_k;B) \rightarrow \lim_k v^{-1}\pi_*(Z_k;B)$$ is an isomorphism for some unstable $v_n$--self map $v: \Sigma^d B \rightarrow B$. \end{lem} Since $\Phi_n(Z)$ can be `calculated' as $L_{T(n)}X$ if $Z = \Omega^{\infty} X$, the following strategy for computing $\Phi_n(Z)$ emerges: try to `resolve' $Z$ by towers of fibrations with fibers which are infinite loopspaces, and hope that \lemref{convergence lemma} can be applied when needed. \subsection{$\Phi_n(S^m)$ when $m$ is odd} The strategy just described was implemented in beautiful work of Arone and Mahowald \cite{aronemahowald} on the Goodwillie tower of the identity functor. It allows for the identification of a short resolution of $\Phi_n(Z)$ with `known' composition factors, when $Z$ is an odd dimensional sphere. We will be brief here; for a slightly different overview of how this goes, see the last sections of our survey paper \cite{k4}. We need some notation. Let $m \rho_k$ denote the direct sum of $m$ copies of the reduced real regular representation of $V_k = (\mathbb Z/p)^k$. Then $GL_k(\mathbb Z/p)$ acts on the Thom space $BV_k^{m\rho_k}$. Let $e_k \in \mathbb Z_{(p)}[GL_k(\mathbb Z/p)]$ be any idempotent in the group ring representing the Steinberg module, and then let $L(k,m)$ be the associated stable summand of $BV_k^{m\rho_k}$: $$ L(k,m) = e_k \Sigma^{\infty} BV_k^{m\rho_k}.$$ The spectra $L(k,0)$ and $L(k,1)$ agree with spectra called $M(k)$ and $L(k)$ in the literature from the early 1980's: see e.g. \cite{mitchellpriddy,kuhnpriddy}.
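For orientation, the smallest case is completely explicit (a standard identification, not needed in what follows): when $p=2$ and $k=1$, the reduced real regular representation of $V_1 = \mathbb Z/2$ is the sign representation $\sigma$, and $GL_1(\mathbb Z/2)$ is the trivial group, so one may take $e_1 = 1$. Thus $$ L(1,m) = \Sigma^{\infty} BV_1^{m\sigma} \simeq \Sigma^{\infty} \mathbb RP^{\infty}_m,$$ a stunted projective space with bottom cell in dimension $m$; compare the $p=2$ example at the end of this subsection.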
Two properties of the $L(k,m)$ play crucial roles for our purposes: \begin{itemize} \item When $m$ is odd, the cohomology $H^*(L(k,m); \mathbb Z/p)$ is free over the finite subalgebra ${\mathcal A}(k-1)$ of the Steenrod algebra ${\mathcal A}$. \item Fixing $m$, the connectivity of $L(k,m)$ has a growth rate like $p^k$. \end{itemize} The first fact here implies that $L(k,m)$ is $T(n)_*$--acyclic for $k>n$. Indeed, the $E_2$--term of the Adams spectral sequence which computes $[B,L(k,m)]_*$ for any finite $B$ will have a vanishing line of small enough slope so that one can immediately deduce that $v^{-1}E_2 = 0$ if $v$ is a $v_n$--self map of $B$. Arone and Mahowald's analysis in \cite{aronemahowald}, supported by \cite{aronedwyer}, shows that, for odd $m$, there is a tower of fibrations under the $p$--local sphere $S^m$: \begin{equation*} \xymatrix{ &&& \vdots \ar[d] \\ &&& R_2(S^m) \ar[d]^{p_2} \\ &&& R_1(S^m) \ar[d]^{p_1} \\ S^m \ar[rrr]^{e_0} \ar[urrr]^{e_1} \ar[uurrr]^{e_2} &&& R_0(S^m), } \end{equation*} such that $\displaystyle S^m \simeq \operatorname*{holim}_k R_k(S^m)$, $R_0(S^m) = QS^m$, and, for $k \geq 1$, the fiber of $p_k$ is equivalent to $\Omega^{\infty} \Sigma^{m-k}L(k,m)$. (The space $R_k(S^m)$ is the $p^k$th stage of the Goodwillie tower of the identity functor applied to $S^m$.) Using the two properties of the $L(k,m)$ bulleted above, Arone and Mahowald then deduce that $$ v^{-1}\lim_k \pi_*(R_k(S^m);B) \rightarrow \lim_k v^{-1}\pi_*(R_k(S^m);B)$$ is an isomorphism for any self map $v: \Sigma^d B \rightarrow B$. See \cite[\S 4.1]{aronemahowald}. It follows that $$ e_{n*}: v^{-1}\pi_*(S^m;B) \rightarrow v^{-1}\pi_*(R_n(S^m);B)$$ is an isomorphism for any $v_n$--self map $v: \Sigma^d B \rightarrow B$. One deduces the following about $\Phi_n(S^m)$. \begin{thm}[see {\cite[Theorem 7.18]{k4}}] \label{phi theorem} Let $m$ be odd. The map $$ \Phi_n(e_n): \Phi_n(S^m) \rightarrow \Phi_n(R_n(S^m))$$ is an equivalence.
Thus the spectrum $\Phi_n(S^m)$ admits a finite decreasing filtration with fibers $L_{T(n)}\Sigma^{m-k}L(k,m)$ for $k = 0, \dots, n$. \end{thm} With a little diagram chasing, one can do better than this. Let $L(k)_1^{m-1}$ be the fiber of the natural map of spectra $L(k,1) \rightarrow L(k,m)$. The fibration sequence of spectra $$ L(k)_1^{m-1} \rightarrow L(k,1) \rightarrow L(k,m)$$ induces a short exact sequence in mod $p$ cohomology, and is thus split as ${\mathcal A}(k-1)$--modules. By applying $\Phi_n$ to the fiber sequence $\Omega_0^m S^m \rightarrow S^1 \rightarrow \Omega^{m-1}S^m$ and applying the previous theorem, one deduces an improved result. \begin{thm}[{\cite[Theorem 7.20]{k4}}] \label{phi theorem 2} Let $m$ be odd. The spectrum $\Phi_n(S^m)$ admits a finite decreasing filtration with fibers $L_{T(n)}\Sigma^{m+1-k}L(k)_1^{m-1}$ for $k = 1, \dots, n$. \end{thm} \begin{ex} When $p=2$, $L(1)_1^m = \mathbb RP^{m}$. Specializing to $n=1$, we learn that there is a weak equivalence $$ \Phi_1(S^{2k+1}) \simeq L_{T(1)}\Sigma^{2k+1}\mathbb RP^{2k}.$$ Specializing to $n=2$, we learn that there is a fibration sequence of spectra $$ \Phi_2(S^{2k+1}) \rightarrow L_{T(2)}\Sigma^{2k+1}\mathbb RP^{2k} \rightarrow L_{T(2)}\Sigma^{2k} L(2)_1^{2k}.$$ The first of these is equivalent to an older theorem of Mahowald \cite{mahowald} that said that the James--Hopf map $\Omega^{2k} S^{2k+1} \rightarrow Q \Sigma \mathbb RP^{2k}$ induces an isomorphism on $v_1$--periodic homotopy groups. The odd prime version of this is due to Rob Thompson \cite{thompson}. \end{ex} \subsection{$\Phi_1(Z)$ for many $Z$} There is a huge amount known about $v_1^{-1}\pi_*(Z)$ thanks to the prodigious efforts of Bousfield, together with Don Davis and his collaborators. A survey article by Davis \cite{davis} describes computations known by the mid 1990's.
In recent years, beginning with \cite{bousfield99}, there has been an explosion of new, more elegantly organized, computations, often explicitly describing $\Phi_1(Z)$ en route: see the references below for entries into the recent literature. Ingredients special to the $n=1$ case that enter the story include the following. \begin{itemize} \item The identification of $\Phi_1(S^{2k+1})$ as described above. \item The fact that $L_{T(1)} = L_{K(1)}$. \item The identification of $L_{K(1)}S^0$ as the fiber of an appropriate map of the form $\Psi^r - 1: KO \rightarrow KO$ \cite{old bousfield}. \item A tight relationship between a map of spaces being a $K(1)_*$--isomorphism and being a $\Phi_1$--equivalence \cite{bousfieldJAMS}. \end{itemize} In summary, for appropriate spaces $Z$, $v_1^{-1}\pi_*(Z)$ is essentially determined by $KO^*(Z;{\mathbb Z}_p)$, together with Adams operations. Bousfield's recent careful study \cite{bousfield5} is state of the art in this area. Davis \cite{davis02} gives many complete calculations when $Z$ is a compact Lie group, with calculations beginning with knowledge of the Lie group's representation ring. A very recent amusing result in this spirit is due to Martin Bendersky and Davis \cite{benderskydavis}, and says that there is a 2--primary homotopy equivalence $$ \Phi_1(DI(4)) \simeq L_{K(1)} \Sigma^{725019}T \wedge M,$$ where $DI(4)$ is the Dwyer--Wilkerson exotic $2$--compact group, $T$ is the three cell finite spectrum $S^0 \cup_{\eta} e^2 \cup_2 e^3$, and $M$ is a mod $2^{21}$ Moore spectrum.
\section{Introduction} In 1946 Erd\H{o}s \cite{E} proposed the \emph{distinct distances} problem, asking for the minimum number $f(n)$ of distinct distances that any set of $n$ points in the plane can determine. Upon posing the problem, Erd\H{o}s established that $f(n)=\Omega(n^{1/2})$. He further established that $f(n)=O(n/ \sqrt{\log n})$, this being the order of the number of distinct distances determined by a $\sqrt{n} \times \sqrt{n}$ square grid. Many mathematicians (see \cite{CST},\cite{KT},\cite{ST},\cite{SL},\cite{T}) improved Erd\H{o}s' lower bound to $\Omega(n^{\alpha})$ for increasingly larger values of $\alpha<1$, but Erd\H{o}s conjectured that $f(n)=\Omega(n^{\alpha})$ for \emph{every} $\alpha<1$. This conjecture was finally resolved in the breakthrough 2015 paper of Guth and Katz \cite{GK}, where they proved $f(n) = \Omega(n/\log n)$, introducing novel techniques in real algebraic geometry to the problem. Though Erd\H{o}s' original problem is more or less asymptotically resolved, many variants of it remain wide open. One particular class of variants looks at distances between two point sets $\mathcal{P}_1,\mathcal{P}_2 \subset \mathbb{R}^2$, and asks for the minimum number of distinct distances between them; this is denoted $D(\mathcal{P}_1,\mathcal{P}_2)$. This variant is referred to in the literature as the \emph{bipartite distances problem}. Many results have been established on lower bounds for bipartite distances when $\mathcal{P}_1$ and $\mathcal{P}_2$ have special structure. First consider when $\mathcal{P}_1$ and $\mathcal{P}_2$ both lie on lines that are neither parallel nor orthogonal. In this case, Elekes \cite{GE} discovered a lower bound of $\Omega(n^{5/4})$ when $\mathcal{P}_1$ and $\mathcal{P}_2$ are balanced, meaning $|\mathcal{P}_1|=|\mathcal{P}_2|=n$.
Sharir, Sheffer and Solymosi \cite{SSS} showed that when $|\mathcal{P}_1|=m,|\mathcal{P}_2|=n$ and $\mathcal{P}_1,\mathcal{P}_2$ enjoy the same restrictions as in Elekes' result, then $D(\mathcal{P}_1,\mathcal{P}_2) = \Omega(\min\{n^{2/3}m^{2/3},n^2,m^2\})$. In the balanced case, this improves Elekes' result to $\Omega(n^{4/3})$. Pach and de Zeeuw \cite{PZ} proved a similar lower bound in the more general case when $\mathcal{P}_1,\mathcal{P}_2$ lie on two irreducible algebraic curves of constant degree $d$, provided the curves are not parallel lines, orthogonal lines, or concentric circles. Namely, they proved $D(\mathcal{P}_1,\mathcal{P}_2) = \Omega(\min\{n^{2/3}m^{2/3},n^2,m^2\})$, where the implied constant depends on the degree $d$ of the given curves. All these findings place heavy restrictions on both point sets involved. \vspace{0.2in} Our main contribution in this article is to establish lower bounds for $D(\mathcal{P}_1,\mathcal{P}_2)$ that are asymptotically looser but hold in a much more general setting: when $\mathcal{P}_1$ lies on an arbitrary algebraic curve of fixed degree, and $\mathcal{P}_2$ is \emph{any} point set. Specifically, we prove the following theorem. \begin{theorem} \label{thm:main2} Let $\mathcal{P}_1$ be a set of $m$ points on a curve $\gamma$ of fixed degree $r$ in $\mathbb{R}^2$ and let $\mathcal{P}_2$ be a set of $n$ points in $\mathbb{R}^2.$ Then \[ D(\mathcal{P}_1,\mathcal{P}_2) = \begin{cases} \Omega(m^{1/2}n^{1/2}\log^{-1/2}n), \ \ & \mbox{ when } m = \Omega(n^{1/2}\log^{-1/3}n), \\ \Omega(m^{1/3}n^{1/2}), \ \ & \mbox{ when } m=O(n^{1/2}\log^{-1/3}n). \end{cases} \] \end{theorem} This work builds on recent results of Pohoata and Sheffer \cite{PS2}, which establish similar lower bounds for $D(\mathcal{P}_1,\mathcal{P}_2)$ when $\mathcal{P}_1$ is restricted to a line and $\mathcal{P}_2$ is arbitrary. \section{Preliminaries} We begin with preliminaries pertinent to our exposition.
The first of these discusses necessary background from algebraic geometry. We often speak of curves of a fixed degree, so we make the related terminology clear. In the polynomial ring $\mathbb{R}[x,y]$, the \emph{affine variety} of the polynomial $f$, denoted $V(f)$, is the zero set of $f$, i.e. $V(f)=\{p \in \mathbb{R}^2 : f(p)=0\}$. We interchangeably use the terms affine variety, variety, algebraic curve, and curve to refer to $V(f)$ when $f \in \mathbb{R}[x,y]$. We say a variety is \emph{reducible} if it is the union of two proper subvarieties; otherwise it is \emph{irreducible}. Any algebraic curve is a finite union of irreducible algebraic curves; we refer to these irreducible algebraic curves as the \emph{components} of $V(f)$. A \emph{linear component} of $V(f)$ is a component of the form $V(g)$ where $g$ is linear. A \emph{circular component} of $V(f)$ is a component of the form $V(g)$ where $V(g)$ is a circle. A classical theorem in algebraic geometry that we exploit discusses intersections of curves: \begin{theorem}[Bezout's Theorem]\label{thm:bezout} If $f$ and $g$ are polynomials in $\mathbb{R}[x,y]$ with no common factors in $\mathbb{R}[x,y]$, then $V(f) \cap V(g)$ has at most $\deg(f) \cdot \deg(g)$ points. \end{theorem} Another theorem from algebraic geometry will be useful for understanding how finely a given curve can partition $\mathbb{R}^2$. Here, connected components are in the sense of the standard topology on $\mathbb{R}^2$. \begin{theorem}[Harnack's Curve Theorem]\label{thm:harnack} If $f \in \mathbb{R}[x,y]$ is a degree $r$ polynomial, then $\mathbb{R}^2 \backslash V(f)$ has $O(r^2)$ connected components. \end{theorem} We now review concepts from discrete geometry, including recent developments of Pohoata and Sheffer \cite{PS2}, that are pertinent to our discussion. We begin by formally introducing the concept of incidences.
Let $P$ be a set of points, for our purposes in $\mathbb{R}^2$, and let $\Gamma$ be a set of geometric objects in $\mathbb{R}^2$. We say a point $p \in P$ is incident with an object $o \in \Gamma$ if $p$ lies in $o$. The number of such incidences between $P$ and $\Gamma$ is denoted $I(P,\Gamma)$. It will be useful for us to find upper bounds on $I(P,\Gamma)$, and these can be developed by looking at the \emph{incidence graph} $\mathcal{G}(P,\Gamma)$ of $P$ and $\Gamma$, which is the bipartite graph with bipartition $(P,\Gamma)$ where there is an edge between $p \in P$ and $o \in \Gamma$ precisely when $p$ is in $o$. The following theorem of Pach and Sharir uses the incidence graph to establish an upper bound for $I(P,\Gamma)$ when $P$ is a set of points and $\Gamma$ is a set of algebraic curves with specific data. \begin{theorem}[Pach and Sharir \cite{PS1}]\label{thm:sharir} \label{incidence graphs} Let $\mathcal{P}$ be a set of $m$ points and $\Gamma$ a set of $n$ distinct irreducible algebraic curves of degree at most $k$ in $\mathbb{R}^2$. If the complete bipartite graph $K_{s,t}$ is not a subgraph of $\mathcal{G}(\mathcal{P},\Gamma)$, then $$I(\mathcal{P},\Gamma) = O\left( m^{\frac{s}{2s-1}}n^{\frac{2s-2}{2s-1}}+m+n \right).$$ \end{theorem} The second technique that is central to our exposition was developed by Pohoata and Sheffer \cite{PS2}, and is the gateway to their proof of the analogue of Theorem~\ref{thm:main2} when the points in $\mathcal{P}_1$ lie on a line (i.e. when $r=1$). It relies on keeping track of $d$-tuples of distances that are realized by a given pair of point sets, for a fixed $d$. \begin{definition} Let $\mathcal{P}_1,\mathcal{P}_2 \subset \mathbb{R}^2$ be finite.
The \emph{$d^{th}$ distance energy} between $\mathcal{P}_1$ and $\mathcal{P}_2$ is \[ E_d(\mathcal{P}_1,\mathcal{P}_2) = \left| \left \{ (a_1,a_2,\ldots,a_d,b_1,b_2,\ldots,b_d) \in \mathcal{P}_1^d \times \mathcal{P}_2^d \ : \ |a_1b_1|= \cdots = |a_db_d| > 0 \right \} \right| \] \end{definition} They relate $d^{th}$ distance energies to distinct distances in the following way. \begin{proposition}\label{prop:distanceenergy} If $m=|\mathcal{P}_1|$ and $n = |\mathcal{P}_2|$, then $$E_d(\mathcal{P}_1, \mathcal{P}_2) = \Omega \left( \frac{m^dn^d}{D(\mathcal{P}_1,\mathcal{P}_2)^{d-1}} \right).$$ \end{proposition} They subsequently establish upper bounds on $E_d(\mathcal{P}_1, \mathcal{P}_2)$ to achieve lower bounds on $D(\mathcal{P}_1, \mathcal{P}_2)$ through Proposition~\ref{prop:distanceenergy}. To establish upper bounds on $E_d(\mathcal{P}_1, \mathcal{P}_2)$, they observe that \begin{equation}\label{eq:distanceenergy} E_d(\mathcal{P}_1, \mathcal{P}_2) = \sum_{\delta \in \Delta} p_{\delta}^d \end{equation} where $p_{\delta}$ is the number of pairs of points, one from $\mathcal{P}_1$ and one from $\mathcal{P}_2$, that realize the distance $\delta$, and $\Delta$ is the set of all distances realized between the two point sets. We use this technique to generalize their result to Theorem~\ref{thm:main2}. \section{Main Result} We now prove Theorem~\ref{thm:main2}. Throughout, we let $\gamma$ be the curve $V(f)$, where $f$ has degree $r$. First, suppose $m=\Omega(n/\log n)$. Let $p \in \mathcal{P}_2$ be a point which is not at the center of any circular component of $\gamma$. We can guarantee such a point $p$ exists because the complement of $\gamma$ has at most $O(r^2)$ connected components by Theorem~\ref{thm:harnack} and $r$ is fixed with respect to $n$. Let $C=V(g)$ be a circle centered at $p$, so $g$ is a degree $2$ polynomial in $\mathbb{R}[x,y]$. 
By construction, $g$ and $f$ have no common factors, so by Bezout's Theorem there are at most $2r$ points in $\mathcal{P}_1$ that lie on the circle $C$. These at most $2r$ points are precisely the set of points in $\mathcal{P}_1$ whose distance from $p$ is the radius of $C$. Consequently, the number of distinct distances between $p$ and $\mathcal{P}_1$ is at least $|\mathcal{P}_1|/2r=m/2r$. Since $m=\Omega(n/\log n)$ this implies \[ D(\mathcal{P}_1,\mathcal{P}_2) \geq D(\mathcal{P}_1,\{p\}) \geq m/2r = \Omega(m) = \Omega(m^{1/2}n^{1/2}\log^{-1/2}n) \] We can now assume throughout that $m=O(n/\log n)$. Suppose furthermore that $\Omega(n)$ points of $\mathcal{P}_2$ lie on $\gamma$. Choose a point $p \in \mathcal{P}_1$ that does not lie at the center of any circular component of $\gamma$. Then as in the previous argument, at most $2r$ points on $\gamma$ share a common fixed distance to $p$, so $D(\mathcal{P}_1,\mathcal{P}_2) \geq D(\{p\},\mathcal{P}_2 \cap V(f)) = \Omega(n)$. Since $m=O(n/\log n)$, we get $D(\mathcal{P}_1,\mathcal{P}_2) = \Omega(m^{1/2}n^{1/2}\log^{-1/2}n)$. So it remains only to consider when less than a constant fraction of the points of $\mathcal{P}_2$ lie on $\gamma$. In other words, if we let $\mathcal{P}_2'$ be the set of points in $\mathcal{P}_2$ not lying on $\gamma$, we can assume $|\mathcal{P}_2'| = \Omega(n)$. For our convenience, we further restrict $\mathcal{P}_2'$ to the subset $\mathcal{P}_2''$ consisting of points in $\mathcal{P}_2'$ that do not lie at the center of any circular component of $\gamma$. Again there are at most $O(r^2)$ such points by Theorem~\ref{thm:harnack}, so $|\mathcal{P}_2''|=\Theta(n)$. Suppose now that $\Omega(m)$ points in $\mathcal{P}_1$ lie on linear components of $\gamma$. Since $\gamma$ is a curve of fixed degree $r$, there are at most $r$ linear components in $\gamma$, so $\Theta(m)$ of these points lie on a single linear component, say the line $\ell$. 
Now applying Theorem 1.6 in \cite{PS2} with $\mathcal{P}_1 \cap \ell$ and $\mathcal{P}_2''$ we get $D(\mathcal{P}_1 \cap \ell,\mathcal{P}_2'') = \Omega(m^{1/2}n^{1/2}\log^{-1/2}n)$ and Theorem~\ref{thm:main2} then follows because $D(\mathcal{P}_1,\mathcal{P}_2) \geq D(\mathcal{P}_1 \cap \ell,\mathcal{P}_2'')$. Therefore, if we let $\mathcal{P}_1'$ be the set of points in $\mathcal{P}_1$ that do not lie on linear components of $\gamma$, we can assume $|\mathcal{P}_1'| = \Theta(m)$. The remainder of the proof establishes the lower bounds given in Theorem~\ref{thm:main2} with $\mathcal{P}_1$ and $\mathcal{P}_2$ replaced by $\mathcal{P}_1'$ and $\mathcal{P}_2''$ respectively. The theorem then follows from the facts that $|\mathcal{P}_1'|=\Theta(m)$, $|\mathcal{P}_2''|=\Theta(n)$ and $D(\mathcal{P}_1,\mathcal{P}_2) \geq D(\mathcal{P}_1',\mathcal{P}_2'')$. We begin with the first case of Theorem~\ref{thm:main2} in which $m=\Omega(n^{1/2}\log^{-1/3}n)$. To establish the desired lower bound for $D(\mathcal{P}_1',\mathcal{P}_2'')$, we consider the $3^{rd}$ distance energy $E_3(\mathcal{P}_1',\mathcal{P}_2'')$ between $\mathcal{P}_1'$ and $\mathcal{P}_2''$. From Proposition~\ref{prop:distanceenergy}, \[ E_3(\mathcal{P}_1',\mathcal{P}_2'') = \Omega \left( \dfrac{m^3n^3}{D(\mathcal{P}_1',\mathcal{P}_2'')^2} \right) \] so finding lower bounds on $D(\mathcal{P}_1',\mathcal{P}_2'')$ amounts to finding upper bounds on $E_3(\mathcal{P}_1',\mathcal{P}_2'')$. From Equation~(\ref{eq:distanceenergy}), \[ E_3(\mathcal{P}_1',\mathcal{P}_2'') = \sum_{\delta \in \Delta} p_{\delta}^3 \] where $\Delta$ is the set of all distances realized between $\mathcal{P}_1'$ and $\mathcal{P}_2''$, and for $\delta \in \Delta$ the statistic $p_{\delta}$ is the number of pairs of points, one from $\mathcal{P}_1'$ and one from $\mathcal{P}_2''$, that realize the distance $\delta$. Now fix $\delta$ and let $p \in \mathcal{P}_2''$.
Let $C=V(g)$, where $g$ is quadratic in $\mathbb{R}[x,y]$, be the circle of radius $\delta$ centered at $p$. The number of points in $\mathcal{P}_1'$ of distance $\delta$ from $p$ is at most $|V(g) \cap V(f)|$. The polynomials $f,g$ have no common factors because $p$ does not lie at the center of any circular component of $\gamma$, so by Bezout's Theorem, $|V(g) \cap V(f)| \leq 2r$. Subsequently, $p_{\delta} \leq 2r \cdot |\mathcal{P}_2''| \leq 2rn$. Let $\Delta_j = \{\delta \in \Delta \ : \ p_{\delta} \geq j\}$, and $k_j=|\Delta_j|$. Then we have \begin{align*} E_3(\mathcal{P}_1',\mathcal{P}_2'') &= \sum_{\delta \in \Delta} p_\delta^3 \\ &\leq \sum_{j = 0}^{\log_2 2rn} \sum_{\{\delta \in \Delta \ : \ 2^j \leq p_\delta \leq 2^{j+1}\}} p_\delta^3 \\ &< \sum_{j = 0}^{\log_2 2rn} \sum_{\{\delta \in \Delta \ : \ 2^j \leq p_\delta \leq 2^{j+1}\}} (2^{j+1})^3 \\ &\leq 8\sum_{j=0}^{\log_2 2rn} (2^{j})^3k_{2^j}. \end{align*} Now for a fixed $j$, let $q=2^j$. We bound $q^3k_q$ in order to bound $E_3(\mathcal{P}_1',\mathcal{P}_2'')$. Let $\Gamma_q$ be the set of circles centered at points of $\mathcal{P}_1'$ whose radii lie in $\Delta_q$ (so there are $\Theta(m) \cdot k_q$ such circles), and consider the incidence graph between $\mathcal{P}_2''$ and these circles, namely $\mathcal{G}(\mathcal{P}_2'',\Gamma_q)$. We claim this graph avoids $K_{2,r+1}$ as a subgraph. If not, then there would be two points in $\mathcal{P}_2''$ that both lie on $r+1$ common circles in $\Gamma_q$. If this were the case, then the centers of these $r+1$ circles would all be equidistant from the two points, and hence would all lie on their perpendicular bisector, some line $\ell=V(g)$ where $\deg(g)=1$. These centers lie in $\mathcal{P}_1'$, which by assumption does not contain any point lying on linear components of $\gamma$. So, if we construct the curve $\gamma'=V(\tilde{f})$ that is obtained from $\gamma$ by deleting its linear components, then $\mathcal{P}_1' \subset \gamma'$ and $\ell$ is not a subvariety of $\gamma'$, so $\tilde{f}$ and $g$ have no common factors.
Consequently by Bezout's Theorem, \[|\mathcal{P}_1' \cap \ell| \leq |\gamma' \cap \ell| = |V(\tilde{f}) \cap V(g)| \leq r \cdot 1 = r.\] But this is a contradiction because the centers of the $r+1$ circles all lie in $\mathcal{P}_1' \cap \ell$. So, $K_{2,r+1}$ is not a subgraph of $\mathcal{G}(\mathcal{P}_2'',\Gamma_q)$, and hence Theorem~\ref{thm:sharir} implies \[ I(\mathcal{P}_2'',\Gamma_q) = O(n^{2/3}(mk_q)^{2/3} + n + mk_q). \] We continue based on which summand dominates the expression $n^{2/3}(mk_q)^{2/3} + n + mk_q$. If $mk_q$ dominates, then $n^{2/3}(mk_q)^{2/3} = O(mk_q)$ so $k_q=\Omega(n^2/m)$. Now $m=O(n/\log n)$ so \[D(\mathcal{P}_1',\mathcal{P}_2'') \geq k_q=\Omega(n^2/m)=\Omega(n\log n)=\Omega(m^{1/2}n^{1/2}\log^{3/2}n) =\Omega(m^{1/2}n^{1/2}\log^{-1/2}n),\] as desired. So if the summand $mk_q$ dominates, we do not need to bound $k_q$ as we will get the desired result for Theorem~\ref{thm:main2}. If either of the other two summands dominates, we will subsequently bound $q^3k_q$. First suppose $n$ dominates the sum. Then $m^{2/3}n^{2/3}k_q^{2/3}=O(n)$ so $k_q=O(n^{1/2}/m)$ and hence \begin{equation}\label{bound1} q^3k_q = O(q^3n^{1/2}/m). \end{equation} If instead $m^{2/3}n^{2/3}k_q^{2/3}$ dominates, we use the fact that by definition of $k_q$, $I(\mathcal{P}_2'',\Gamma_q) \geq qk_q$, so $qk_q = O(m^{2/3}n^{2/3}k_q^{2/3})$ and subsequently \begin{equation}\label{bound2} q^3k_q=O(m^2n^2). \end{equation} Combining Equations (\ref{bound1}) and (\ref{bound2}), we have \[ q^3k_q = O(q^3n^{1/2}/m+m^2n^2). \] Subsequently, \begin{align*} E_3(\mathcal{P}_1',\mathcal{P}_2'') &< 8\sum_{j=0}^{\log_2(2rn)} 2^{3j}k_{2^j} \\ &= O\left( \sum_{j=0}^{\log_2(2rn)} \left( m^2n^2 + \frac{2^{3j} n^{1/2}}{m} \right) \right) \\ &= O\left( m^2n^2(\log_2(2rn)) + \frac{(2rn)^3 n^{1/2}}{m} \right) \\ &= O\left( m^2n^2\log n + \frac{n^{7/2}}{m} \right).
\end{align*} Now if $m=\Omega(n^{1/2}\log^{-1/3}n)$, the above bound is dominated by $m^2n^2\log n$, so $E_3(\mathcal{P}_1',\mathcal{P}_2'') = O(m^2n^2\log n)$. Subsequently, by Proposition~\ref{prop:distanceenergy}, \[D(\mathcal{P}_1',\mathcal{P}_2'') = \Omega \left( \left( \dfrac{m^3n^3}{m^2n^2\log n} \right)^{1/2} \right) = \Omega(n^{1/2}m^{1/2}\log^{-1/2}n)\] as desired. Our remaining case to consider is when $m=O(n^{1/2}\log^{-1/3}n)$; much of this case follows the analogous proof in \cite{PS2}, but we include it for completeness. First, suppose there is a $\delta$ for which $p_{\delta} \geq n^{1/2}m^{4/3}$. Consider the pairs of points $(p,q) \in \mathcal{P}_1' \times \mathcal{P}_2''$ for which the distance from $p$ to $q$ is $\delta$. If we let $\mathcal{C}$ be the set of circles of radius $\delta$ centered at the points $p \in \mathcal{P}_1'$ that occur in some such pair $(p,q)$, then the circles in $\mathcal{C}$ have at least $n^{1/2}m^{4/3}$ incidences with $\mathcal{P}_2''$. Since $|\mathcal{P}_1'| \leq |\mathcal{P}_1| = m$, there are at most $m$ circles in $\mathcal{C}$, so there is some circle $\gamma_0 \in \mathcal{C}$ that contains at least $n^{1/2}m^{1/3}$ points of $\mathcal{P}_2''$. Now choose any point $p' \in \mathcal{P}_1'$ that is not at the center of the circle $\gamma_0$. Then at most two points on $\gamma_0$ have the same distance from $p'$, so the number of distinct distances from $p'$ to points in $\mathcal{P}_2''$ on the circle $\gamma_0$ is at least $n^{1/2}m^{1/3}/2$. Consequently, \[ D(\mathcal{P}_1',\mathcal{P}_2'') \geq D(\{p'\},\mathcal{P}_2'' \cap \gamma_0) \geq n^{1/2}m^{1/3}/2 = \Omega(m^{1/3}n^{1/2}), \] establishing Theorem~\ref{thm:main2}. Finally, suppose instead that $p_{\delta} < n^{1/2}m^{4/3}$ for every $\delta \in \Delta$. For a fixed $j$, each distance $\delta \in \Delta_j$ is realized by at least $j$ pairs of points, one from $\mathcal{P}_1'$ and one from $\mathcal{P}_2''$. Since there are at most $mn$ such pairs in total, $k_j=|\Delta_j| \leq mn/j$.
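The two counting facts used repeatedly here, the identity $E_d(\mathcal{P}_1,\mathcal{P}_2) = \sum_{\delta \in \Delta} p_{\delta}^d$ and the bound $k_j \leq mn/j$, can be sanity-checked by brute force. The following sketch is illustrative only; the point sets are small random configurations, not the point sets of the theorem:

```python
import itertools, random
from collections import Counter

random.seed(0)
# Small arbitrary point sets (illustrative stand-ins for P_1' and P_2'').
P1 = [(random.randint(0, 50), random.randint(0, 50)) for _ in range(8)]
P2 = [(random.randint(0, 50), random.randint(0, 50)) for _ in range(10)]
m, n = len(P1), len(P2)

# Work with squared distances so equality tests are exact over the integers.
pairs = [((a, b), (a[0]-b[0])**2 + (a[1]-b[1])**2) for a in P1 for b in P2]
pairs = [p for p in pairs if p[1] > 0]          # the definition requires |ab| > 0
p_delta = Counter(sq for _, sq in pairs)        # p_delta[s]: #pairs at squared distance s

d = 3
# E_d straight from the definition: d-tuples of pairs sharing one common distance.
E_d_def = sum(1 for t in itertools.product(pairs, repeat=d)
              if len({sq for _, sq in t}) == 1)
# E_d via the identity E_d = sum over realized distances of p_delta^d.
E_d_sum = sum(p**d for p in p_delta.values())
assert E_d_def == E_d_sum

# k_j = |Delta_j|, the number of distances realized by at least j pairs, satisfies
# k_j <= mn/j: the distances in Delta_j account for at least j*k_j of the <= mn pairs.
for j in range(1, max(p_delta.values()) + 1):
    k_j = sum(1 for p in p_delta.values() if p >= j)
    assert k_j <= m * n / j
```

Both checks pass on any input: the first because grouping the $d$-tuples by their common distance yields exactly $\sum_{\delta} p_{\delta}^d$, the second because the pairs counted by $\Delta_j$ are disjoint over distinct distances.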
So, using second distance energies, we have \begin{align*} E_2(\mathcal{P}_1',\mathcal{P}_2'') &< 4\sum_{j=0}^{\log_2 (n^{1/2}m^{4/3})} 2^{2j}k_{2^j} \\ &= 4 \left( \sum_{j = 0}^{\log_2\sqrt{mn}}2^{2j}k_{2^j} + \sum_{j = \log_2\sqrt{mn}}^{\log_2 (n^{1/2}m^{4/3})}2^{2j}k_{2^j} \right)\\ &= O\left(\sum_{j = 0}^{\log_2\sqrt{mn}} mn2^j + \sum_{j = \log_2\sqrt{mn}}^{\log_2 (n^{1/2}m^{4/3})} (2^{2j}n^{1/2}m^{-1} + m^2n^22^{-j})\right)\\ &= O\left( n^{3/2}m^{5/3} \right). \end{align*} The bounds in the second-to-last line come from the fact that $k_j \leq mn/j$ in the first summand, and from Equations (\ref{bound1}) and (\ref{bound2}) in the second summand. Subsequently, by Proposition~\ref{prop:distanceenergy}, \[D(\mathcal{P}_1',\mathcal{P}_2'') = \Omega \left( \dfrac{m^2n^2}{m^{5/3}n^{3/2}} \right) = \Omega(n^{1/2}m^{1/3})\] as desired. \bigskip \section*{Acknowledgments} The authors would like to thank Adam Sheffer for suggesting this problem and for helpful discourse. This research was supported by the Harvey Mudd College Faculty Research, Scholarship, and Creative Works Award. \bibliographystyle{plain}
\section{#1}\setcounter{equation}{0}} \newcommand{\Delta}%{\triangle}{\Delta \newcommand{\laplace_p}{\Delta}%{\triangle_p} \newcommand{\nabla}%{\bigtriangledown}{\nabla \newcommand{\partial}{\partial} \newcommand{\pd}{\partial} \newcommand{\subset \subset}{\subset \subset} \newcommand{\setminus}{\setminus} \newcommand{:}{:} \newcommand{\mathrm{div}\,}{\mathrm{div}\,} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{eqnarray*}}{\begin{eqnarray*}} \newcommand{\end{eqnarray*}}{\end{eqnarray*}} \newcommand{\rule[-.5mm]{.3mm}{3mm}}{\rule[-.5mm]{.3mm}{3mm}} \newcommand{\stackrel{\rightharpoonup}{\rightharpoonup}}{\stackrel{\rightharpoonup}{\rightharpoonup}} \newcommand{\operatorname{id}}{\operatorname{id}} \newcommand{\operatorname{supp}}{\operatorname{supp}} \newcommand{\mbox{ w-lim }}{\mbox{ w-lim }} \newcommand{{x_N^{-p_*}}}{{x_N^{-p_*}}} \newcommand{{\mathbb R}}{{\mathbb R}} \newcommand{{I\!\!N}}{{\mathbb N}} \newcommand{{\mathbb Z}}{{\mathbb Z}} \newcommand{{\mathbb Q}}{{\mathbb Q}} \newcommand{\abs}[1]{\lvert#1\rvert} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{notation}[theorem]{Notation} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{assertion}[theorem]{Assertion} \newtheorem{problem}[theorem]{Problem} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{question}[theorem]{Question} \newtheorem{example}[theorem]{Example} \newtheorem{Thm}[theorem]{Theorem} \newtheorem{Lem}[theorem]{Lemma} \newtheorem{Pro}[theorem]{Proposition} \newtheorem{Def}[theorem]{Definition} \newtheorem{defi}[theorem]{Definition} \newtheorem{Exa}[theorem]{Example} \newtheorem{Exs}[theorem]{Examples} \newtheorem{Rems}[theorem]{Remarks} \newtheorem{Rem}[theorem]{Remark} \newtheorem{Cor}[theorem]{Corollary} \newtheorem{Conj}[theorem]{Conjecture} 
\newtheorem{Prob}[theorem]{Problem} \newtheorem{Ques}[theorem]{Question} \newtheorem*{corollary*}{Corollary} \newtheorem*{theorem*}{Theorem} \newtheorem{thm}[theorem]{Theorem} \newtheorem{lem}[theorem]{Lemma} \newtheorem{prop}[theorem]{Proposition} \newtheorem{cor}[theorem]{Corollary} \newtheorem{ex}[theorem]{Example} \newtheorem{rem}[theorem]{Remark} \newtheorem*{thmm}{Theorem} \newcommand{\Hmm}[1]{\leavevmode{\marginpar{\tiny% $\hbox to 0mm{\hspace*{-0.5mm}$\leftarrow$\hss}% \vcenter{\vrule depth 0.1mm height 0.1mm width \the\marginparwidth}% \hbox to 0mm{\hss$\rightarrow$\hspace*{-0.5mm}}$\\\relax\raggedright #1}}} \newcommand{\noindent \mbox{{\bf Proof}: }}{\noindent \mbox{{\bf Proof}: }} \renewcommand{\theequation}{\thesection.\arabic{equation}} \catcode`@=11 \@addtoreset{equation}{section} \catcode`@=12 \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\opname{Pess}}{\opname{Pess}} \newcommand{\mbox{\noindent {\bf Proof} \hspace{2mm}}}{\mbox{\noindent {\bf Proof} \hspace{2mm}}} \newcommand{\mbinom}[2]{\left (\!\!{\renewcommand{\arraystretch}{0.5} \mbox{$\begin{array}[c]{c} #1\\ #2 \end{array}$}}\!\! 
\right )} \newcommand{\brang}[1]{\langle #1 \rangle} \newcommand{\vstrut}[1]{\rule{0mm}{#1mm}} \newcommand{\rec}[1]{\frac{1}{#1}} \newcommand{\set}[1]{\{#1\}} \newcommand{\dist}[2]{$\mbox{\rm dist}\,(#1,#2)$} \newcommand{\opname}[1]{\mbox{\rm #1}\,} \newcommand{\mb}[1]{\;\mbox{ #1 }\;} \newcommand{\undersym}[2] {{\renewcommand{\arraystretch}{0.5} \mbox{$\begin{array}[t]{c} #1\\ #2 \end{array}$}}} \newlength{\wex} \newlength{\hex} \newcommand{\understack}[3]{% \settowidth{\wex}{\mbox{$#3$}} \settoheight{\hex}{\mbox{$#1$}} \hspace{\wex} \raisebox{-1.2\hex}{\makebox[-\wex][c]{$#2$}} \makebox[\wex][c]{$#1$} }% \newcommand{\smit}[1]{\mbox{\small \it #1} \newcommand{\lgit}[1]{\mbox{\large \it #1} \newcommand{\scts}[1]{\scriptstyle #1} \newcommand{\scss}[1]{\scriptscriptstyle #1} \newcommand{\txts}[1]{\textstyle #1} \newcommand{\dsps}[1]{\displaystyle #1} \newcommand{\,\mathrm{d}x}{\,\mathrm{d}x} \newcommand{\,\mathrm{d}y}{\,\mathrm{d}y} \newcommand{\,\mathrm{d}z}{\,\mathrm{d}z} \newcommand{\,\mathrm{d}t}{\,\mathrm{d}t} \newcommand{\,\mathrm{d}r}{\,\mathrm{d}r} \newcommand{\,\mathrm{d}u}{\,\mathrm{d}u} \newcommand{\,\mathrm{d}v}{\,\mathrm{d}v} \newcommand{\,\mathrm{d}V}{\,\mathrm{d}V} \newcommand{\,\mathrm{d}W}{\,\mathrm{d}W} \newcommand{\,\mathrm{d}s}{\,\mathrm{d}s} \newcommand{\,\mathrm{d}S}{\,\mathrm{d}S} \newcommand{\,\mathrm{d}k}{\,\mathrm{d}k} \newcommand{\,\mathrm{d}m}{\,\mathrm{d}m} \newcommand{\,\mathrm{d}\mu} \def\gn{\nu} \def\gp{\pi}{\,\mathrm{d}\mu} \def\gn{\nu} \def\gp{\pi} \newcommand{\,\mathrm{d}\phi}{\,\mathrm{d}\phi} \newcommand{\,\mathrm{d}\tau}{\,\mathrm{d}\tau} \newcommand{\,\mathrm{d}\xi}{\,\mathrm{d}\xi} \newcommand{\,\mathrm{d}\eta}{\,\mathrm{d}\eta} \newcommand{\,\mathrm{d}\sigma}{\,\mathrm{d}\sigma} \newcommand{\,\mathrm{d}\theta}{\,\mathrm{d}\theta} \newcommand{\,\mathrm{d}\nu}{\,\mathrm{d}\nu} \def\alpha} \def\gb{\beta} \def\gg{\gamma{\alpha} \def\gb{\beta} \def\gg{\gamma} \def\chi} \def\gd{\delta} \def\ge{\varepsilon{\chi} \def\gd{\delta} 
\def\ge{\varepsilon} \def\theta} \def\vge{\varepsilon{\theta} \def\vge{\varepsilon} \def\phi} \def\vgf{\varphi} \def\gh{\eta{\phi} \def\vgf{\varphi} \def\gh{\eta} \def\iota} \def\gk{\kappa} \def\gl{\lambda{\iota} \def\gk{\kappa} \def\gl{\lambda} \def\mu} \def\gn{\nu} \def\gp{\pi{\mu} \def\gn{\nu} \def\gp{\pi} \def\varpi} \def\gr{\rho} \def\vgr{\varrho{\varpi} \def\gr{\rho} \def\vgr{\varrho} \def\sigma} \def\vgs{\varsigma} \def\gt{\tau{\sigma} \def\vgs{\varsigma} \def\gt{\tau} \def\upsilon} \def\gv{\vartheta} \def\gw{\omega{\upsilon} \def\gv{\vartheta} \def\gw{\omega} \def\xi} \def\gy{\psi} \def\gz{\zeta{\xi} \def\gy{\psi} \def\gz{\zeta} \def\Gamma} \def\Gd{\Delta} \def\Gf{\Phi{\Gamma} \def\Gd{\Delta} \def\Gf{\Phi} \def\Theta{\Theta} \def\Lambda} \def\Gs{\Sigma} \def\Gp{\Pi{\Lambda} \def\Gs{\Sigma} \def\Gp{\Pi} \def\Omega} \def\Gx{\Xi} \def\Gy{\Psi{\Omega} \def\Gx{\Xi} \def\Gy{\Psi} \renewcommand{\div}{\mathrm{div}} \newcommand{\red}[1]{{\color{red} #1}} \newcommand{\De} {\Delta} \newcommand{\la} {\lambda} \newcommand{\mathbb{B}^{2}}{\mathbb{B}^{2}} \newcommand{\mathbb{R}^{2}}{\mathbb{R}^{2}} \newcommand{\mathbb{B}^{N}}{\mathbb{B}^{N}} \newcommand{\mathbb{R}^{N}}{\mathbb{R}^{N}} \newcommand{k_{P}^{M}}{k_{P}^{M}} \newcommand{\authorfootnotes}{\renewcommand\thefootnote{\@fnsymbol\c@footnote}}% \def{\text{e}}{{\text{e}}} \def{I\!\!N}{{I\!\!N}} \numberwithin{equation}{section} \allowdisplaybreaks \title[Perturbation theory of positive solutions]{Some new aspects of perturbation theory of positive solutions of second-order linear elliptic equations} \author{Debdip Ganguly} \address{Debdip Ganguly, Department of Mathematics, Indian Institute of Science Education and Research, Dr. 
Homi Bhabha Road, Pune 411008, India} \email{debdipmath@gmail.com} \author{Yehuda Pinchover} \address{Yehuda Pinchover, Department of Mathematics, Technion - Israel Institute of Technology, Haifa 3200003, Israel} \email{pincho@technion.ac.il} \date{} \begin{abstract} We present some new results concerning perturbation theory for positive solutions of second-order linear elliptic operators, including further study of the equivalence of positive minimal Green functions and the validity of a Liouville comparison principle for nonsymmetric operators. \vspace{.2cm} \noindent 2000 \! {\em Mathematics Subject Classification.} {Primary 35B09; Secondary 31C35, 35A08, 35J08.} \\[1mm] \noindent {\em Keywords.} Green function, ground state, Liouville comparison principle, quasimetric property, second-order elliptic operator, $3G$-inequality. \end{abstract} \maketitle \section{Introduction}\label{sec_int} Let $M$ be a smooth, connected, and noncompact Riemannian manifold of dimension $N$. We consider a second-order elliptic operator $P$ with real coefficients in the divergence form \begin{equation} \label{operator} Pu:=-\div\! \left[A(x)\nabla u + u\tilde{b}(x) \right] + b(x)\cdot\nabla u +c(x)u \qquad x\in M. \end{equation} More precisely, let $m>0$ be a strictly positive measurable function in $M$ such that $m$ and $m^{-1}$ are bounded on any compact subset of $M$, and denote $\,\mathrm{d}m: =m(x)\!\,\mathrm{d}x$, where $\,\mathrm{d}x$ is the Riemannian volume form of $M$ (which is just the Lebesgue measure in the case of Schr\"odinger operators on domains of ${\mathbb R}^N$). \medskip We denote by $T_xM$ and $TM$ the tangent space to $M$ at $x\in M$ and the tangent bundle, respectively. Let $\mathrm{End}(T_xM)$ and $\mathrm{End}(TM)$ be the set of endomorphisms in $T_xM$ and the corresponding bundle, respectively. 
The gradient with respect to the Riemannian metric is denoted by $\nabla$, and $-\div$ is the formal adjoint of the gradient with respect to the measure ${\rm d}m$. The inner product and the induced norm on $TM$ are denoted by $\langle X, Y\rangle$ and $|X|$, respectively, where $X, Y \in TM$. \medskip We assume that $A$ is a symmetric measurable section on $M$ of $\mathrm{End}(TM)$ such that for any compact set $K$ in $M$ there exists a positive constant $\lambda_K\geq 1$ satisfying \begin{equation}\label{ell} \lambda_K^{-1} |\xi|^2 \leq |\xi|^2_{A(x)} :=\langle A(x)\xi, \xi\rangle \leq \lambda_K |\xi|^2 \qquad \forall x\in K \mbox{ and } (x,\xi)\in TM. \end{equation} We assume also that the coefficients $b$ and $\tilde b$ are measurable vector fields in $M$ of class $L^p_{\mathrm{loc}}(M)$ and $c$ is a measurable function in $M$ of class $L^{p/2}_{\mathrm{loc}}(M)$ for some $p > N$. We denote by $P^\star$ the formal adjoint operator of $P$ on its natural space $L^2(M,\!\,\mathrm{d}m)$. When $P$ is in divergence form (\ref{operator}) and $b = \tilde{b}$, then the operator \begin{equation}\label{symm_P} Pu = - \div \left[ \big(A \nabla}%{\bigtriangledown u + u b\big) \right] + b \cdot \nabla}%{\bigtriangledown u + c u, \end{equation} is {\em symmetric} in the space $L^2(M, \!\,\mathrm{d}m)$. Throughout the paper, we call this setting the {\em symmetric case}. We note that if $P$ is symmetric and $b$ is smooth enough, then $P$ is in fact a Schr\"odinger-type operator of the form \begin{equation}\label{eq-symm} Pu = - \div \big(A \nabla}%{\bigtriangledown u \big) + \tilde{c} u, \end{equation} where $\tilde{c}=c-\div\, b$. \medskip By a {\em solution} $v$ of the equation $Pu = 0$, we mean $v \in W^{1,2}_{{\mathrm{loc}}}(M)$ that satisfies the equation in the {\em weak sense}. Subsolutions and supersolutions are defined similarly. Denote the cone of all positive solutions of the equation $Pu=0$ in $M$ by $\mathcal{C}_{P}(M)$. Let $V$ be a real valued potential. 
The {\em generalized principal eigenvalue} of the operator $P$ and a potential $V\in L^q_{\mathrm{loc}}(M)$, $q>N/2$, is defined by $$\gl_0(P,V,M) := \sup\{\gl \in \mathbb{R} \; \mid\; \mathcal{C}_{P-\lambda V}(M)\neq \emptyset\}.$$ We say that $P$ is {\em nonnegative in} $M$ (and we denote it by $P\geq 0$ in $M$) if $\lambda_0:= \lambda_0(P,\mathbf{1},M)\geq 0$, where $\mathbf{1}$ is the constant function on $M$ taking at any point $x\in M$ the value $1$. Throughout the paper we always assume that $\gl_0\geq 0$, that is, $P\geq 0$ in $M$. \medskip The main purpose of the paper is to present some new results concerning perturbation theory of the cone $\mathcal{C}_{P}(M)$. Perturbation theory of positive solutions was studied extensively in the past few decades. S.~Agmon in \cite{AG1, AG2} studied positivity and decay properties of solutions of second-order elliptic equations using the notion of \emph{Agmon ground state}. His results turned out to be highly influential in the study of the structure of $\mathcal{C}_{P}(M)$ and its behaviour under certain types of perturbations (the so-called {\em criticality theory}). Without any claim of completeness, we refer to some relevant papers studying criticality theory \cite{AN, AA, GH, MHM,MM0,MM1, YP3, YP1, YP2,YP5, Pinsky95} and references therein. \medskip The perturbation that we consider here is of the form $P_\gl:= P-\gl V$, where $P\geq 0$ in $M$, $\gl\in {\mathbb R}$ and $V\in L^q_{\mathrm{loc}}(M)$, $q>N/2$. We study, in particular, the maximal interval such that the Green function of $P_\gl$ is equivalent to the Green function of $P$, certain classes of `big' and `small' perturbations, compactness properties of weighted Green operators for certain classes of `small' weights, and a new Liouville comparison principle for nonsymmetric operators. See Section~\ref{sec-AO} for more details. \medskip The outline of our paper is as follows. 
In Section~\ref{sec-pre} we recall some definitions and basic known results concerning criticality theory, and in Section~\ref{sec-AO} we discuss the problems that we study in the present paper. Section~\ref{section_maximal_green} is devoted to our results concerning the equivalence of positive minimal Green functions of second-order elliptic operators under nonnegative perturbation. In Section~\ref{sec_hbig} we prove that \emph{optimal} Hardy-weights are \emph{h-big} perturbations in the sense of \cite{GH}, while in Section~\ref{sec-critical-Hardy} we present a large family of \textquoteleft small\textquoteright\, Hardy-weights $W_{\mu}$, given by a simple explicit formula, such that $P-W_{\mu}$ is positive-critical. In Section~\ref{sec-torsion} we prove that for symmetric operators, the assumption of finite torsional rigidity implies that the spectrum of $P$ on $L^2(M,\!\,\mathrm{d}m)$ is discrete. Section~\ref{Sec_4.1} is devoted to a Liouville comparison principle for {\em nonsymmetric}, nonnegative, elliptic operators. We conclude our paper in Section~\ref{sec_green_function_hyperbolic}, where we apply perturbation theory to study the asymptotics of the positive minimal Green function of the shifted Laplace-Beltrami operator on the hyperbolic space $\mathbb{H}^N$. \section{Preliminaries}\label{sec-pre} In the present section we fix our setting and notation, and recall some basic definitions and results concerning criticality theory. Let $M$ be a smooth, connected, and noncompact Riemannian manifold of dimension $N$, and $P$ an elliptic operator of the form \eqref{operator}. Throughout the paper we use the following notation. \begin{itemize} \item We denote by $ \infty $ the ideal point which is added to $ M $ to obtain the one-point compactification of $M$. \item We write $X_1 \Subset X_2$ if the set $X_2$ is open in $M$, the set $\overline{X_1}$ is compact and $\overline{X_1} \subset X_2$.
\item Let $g_1,g_2$ be two positive functions defined in a domain $D$. We say that $g_1$ is {\em equivalent} to $g_2$ in $D$ (and use the notation $g_1\asymp g_2$ in $D$) if there exists a positive constant $C$ such that $$C^{-1}g_{2}(x)\leq g_{1}(x) \leq Cg_{2}(x) \qquad \mbox{ for all } x\in D.$$ \item We fix a {\em compact exhaustion} of $M$, i.e., a sequence of smooth relatively compact domains in $M$ such that $M_1 \neq \emptyset,$ $M_j \Subset M_{j + 1}$ and $\cup_{j = 1}^{\infty} M_j = M$. We denote $M_j^* := M \setminus \overline{M_j}.$ \item We denote the restriction of a function $f:M\to {\mathbb R}$ to $A\subset M$ by $f \!\!\upharpoonright_A$. \end{itemize} \medskip We first recall the definitions of critical and subcritical operators and of a ground state (for more details on criticality theory, see \cite{MM0,MM1,YP3, YP1, YP2} and references therein). \begin{defi}\label{groundstate}{\em Let $K \Subset M$. We say that $u \in \mathcal{C}_{P}(M \setminus K)$ is a {\em positive solution of the operator $P$ of minimal growth in a neighborhood of infinity in $M$}, if for any compact set $K \Subset K_{1} \Subset M$ with a smooth boundary and any positive supersolution $v$ of the equation $Pw=0$ in $M \setminus K_{1}$, $v\in C((M \setminus K_{1})\cup \partial K_{1})$, the inequality $u \leq v$ on $\partial K_{1}$ implies that $u \leq v$ in $M \setminus K_{1}$. A positive solution $u \in \mathcal{C}_{P}(M)$ which has minimal growth in a neighborhood of infinity in $M$ is called the \emph{(Agmon) ground state} of $P$ in $M$ (see \cite{AG2}). } \end{defi} \begin{defi}\label{critical}{\em The operator $P$ is said to be {\em critical} in $M$ if $P$ admits a ground state in $M$. The operator $P$ is called {\em subcritical} in $M$ if $P\geq 0$ in $M$ but $P$ is not critical in $M$. If $P \not\geq 0$ in $M$, then $P$ is said to be {\em supercritical} in $M$. 
} \end{defi} If $W\in L^q_{\mathrm{loc}}(M;{\mathbb R}_+)$ with $q>N/2$ is a nonzero nonnegative potential, then $P- \lambda W$ is subcritical for every $\lambda \in (-\infty, \lambda_0(P, W, M))$, and supercritical for $\lambda > \lambda_0(P, W,M)$. Furthermore, if $P$ is critical in $M$, then $\lambda_0(P, W, M) = 0$. \begin{rem}\label{altenatecritical}{\em Let $P\geq 0$ in $M$. It is well known that the operator $P$ is critical in $M$ if and only if the equation $P u = 0$ in $M$ has a unique (up to a multiplicative constant) positive supersolution (see \cite{MM1,YP3}). In particular, if $P$ is critical in $M$, then $\dim \mathcal{C}_{P}(M) = 1$. Further, in the critical case, the unique positive supersolution (up to a multiplicative positive constant) is a ground state of $P$ in $M$. On the other hand, $P$ is subcritical in $M$ if and only if $P$ admits a (unique) positive minimal Green function $G_{P}^{M}(x,y)$ in $M$. Moreover, for any fixed $y\in M$, the function $G_{P}^{M}(\cdot,y)$ is a positive solution of minimal growth in a neighborhood of infinity in $M$. Since, $G_{P^\star}^{M}(x,y)=G_{P}^{M}(y,x)$, it follows that $P$ is critical (resp. subcritical) in $M$ if and only if $P^\star$ is critical (resp. subcritical) in $M$. } \end{rem} \begin{rem}\label{altenatecritical1}{\em In the critical case there exists a (sign-changing) \emph{Green function} which is bounded above by the corresponding ground state away from the singularity, see \cite{DP}. } \end{rem} \begin{defi}\label{null-critical}{\em 1. We say that $W\gneqq 0$ is a {\em Hardy-weight} of $P$ in $M$ if $P-W\geq 0$ in $M$. \medskip 2. Assume that $W\gneqq 0$ is a Hardy-weight of $P$ in $M$, and that $P-W$ is critical in $M$. Let $\phi$ and $\phi^\star$ be the ground states of $P-W$ and $P^\star-W$, respectively. 
The operator $P-W$ is said to be {\em null-critical} (respect., {\em positive-critical}) in $M$ with respect to $W$ if $\phi\phi^\star \not\in L^1(M,W\!\,\mathrm{d}x)$ (respect., $\phi\phi^\star \in L^1(M,W\!\,\mathrm{d}x)$). } \end{defi} Fix a potential $V\in L^q_{\mathrm{loc}}(M;{\mathbb R})$, where $q>N/2$. Set $S: = S_+ \cup S_0$, where \begin{align*} S_+: &= S_+ (P, V, M) = \{ t \in \mathbb{R} : P - tV \ \mbox{is subcritical in $M$} \},\\[2mm] S_0: &= S_0 (P, V, M) = \{ t \in \mathbb{R} : P - tV \ \mbox{is critical in $M$} \}. \end{align*} Then $S$ is a closed interval and $S_0 \subset \partial S$ \cite{YP2}. Moreover, if $V$ has compact support in $M$, then $S_0 = \partial S$. In particular, subcriticality is stable under compact perturbation, i.e., if $P$ is subcritical and $V$ is a nonzero potential with compact support in $M$, then there exists $\varepsilon_0 > 0$ such that $P - \varepsilon V$ is subcritical for $|\varepsilon| < \varepsilon_0$ (see \cite{YP1, YP2}). The above stability property of subcritical operators and other positivity properties are preserved under a larger (and in fact maximal) class of potentials $V$ called \emph{small perturbations} \cite{YP1}. We recall below the definition of small perturbations and other types of perturbations by a potential $V$, and discuss briefly some of their properties. \begin{defi}[\cite{MM1,YP1}] {\rm Let $P$ be a subcritical operator in $M$ and let $V \in L^{q}_{\mathrm{loc}}(M)$ for some $q > N/2$ be a real valued potential. We say that $V$ is a \emph{small (semismall) perturbation} of $P$ in $M$ if \begin{equation*}\label{defi_small_perturbation} \lim_{n \rightarrow \infty} \left\{ \sup_{x, y \in M_n^*} \int_{M_n^*} \dfrac{G^M_P(x, z) |V(z)| G^M_P(z, y)\,\mathrm{d}m(z)}{G^M_P(x, y)} \right\} = 0, \end{equation*} \medskip \begin{equation*}\label{defi_semismall_perturbation} \left(\!\! \lim_{n \rightarrow \infty} \!\!\left\{\! \sup_{ y \in M_n^*}\!
\int_{M_n^*} \!\!\!\!\dfrac{G^M_P(x_0, z) |V(z)| G^M_P(z, y)\!\,\mathrm{d}m(z) }{G^M_P(x_0, y)} \!\!\right\} \!\!=\! 0,\! \mbox{ where } x_0 \in M \mbox{ is fixed}\!\!\right)\!\!. \end{equation*} } \end{defi} \begin{defi}\label{g_bounded_defi} {\rm We say that $V$ is a {\em $G$-(semi)bounded perturbation} of $P$ in $M$ if there exists a positive constant $C_0$ such that \begin{equation}\label{supremum} C_0 : = \sup_{x, y \in M} \int_{M}\frac{ G_{P}^{M}(x, z) |V(z)| G_{P}^{M} (z, y) \,\mathrm{d}m(z)}{G_{P}^{M}(x, y)} < \infty, \end{equation} \medskip \begin{equation*}\label{defi_bdd_perturbation} \left( \sup_{ y \in M} \int_{M}\dfrac{G^M_P(x_0, z) |V(z)| G^M_P(z, y)\!\,\mathrm{d}m(z) }{G^M_P(x_0, y)} <\infty, \mbox{ where } x_0 \in M \mbox{ is fixed}\!\right)\!\!. \end{equation*} } \end{defi} \begin{rem} {\rm A small perturbation is semismall and $G$-bounded \cite{MM1}. On the other hand, if $V$ is a $G$-bounded perturbation of $P$ in $M$, and $f$ is an arbitrary bounded function vanishing at infinity in $M$ (i.e., with respect to the one-point compactification of $M$), then clearly $fV$ is a small perturbation of $P$ in $M$. } \end{rem} \begin{defi}{\em Let $P_i,$ $i = 1, 2$ be two subcritical operators in $M.$ We say that the Green functions $G^{M}_{P_1}(x, y)$ and $G^{M}_{P_2}(x, y)$ are {\em equivalent} (resp., {\em semiequivalent}) if $G^{M}_{P_1} \asymp G^{M}_{P_2}$ on $M \times M \setminus \{ (x, x) : x \in M \}$ (resp., if for a fixed $y\in M$, we have $G^{M}_{P_1}(\cdot,y) \asymp G^{M}_{P_2}(\cdot,y)$ on $M \setminus \{ y\}$). } \end{defi} In the sequel we use the notation \begin{align*} &E_+ \!= \!E_+(P, V, M) := \{ t \in \mathbb{R} \!\mid\! G^{M}_{P - tV} \asymp G^{M}_{P} \quad \mbox{on} \ M \times M \setminus \{ (x, x) : x \in M \} \},\\[2mm] &SE_+ = SE_+(P, V, M) := \{ t \in \mathbb{R} \!\mid\! G^{M}_{P - tV} \mbox{ is semiequivalent to } G^{M}_{P} \} .
\end{align*} \begin{rem}\label{rem_sp} {\rm Clearly, $E_+\subseteq S_+$. It is known that if the operator $P$ is subcritical and $V$ is a small perturbation of $P$ in $M,$ then $E_+ = S_+$, $\partial S = S_0$, and the corresponding ground states are equivalent to $G_P^M(x,x_0)$ in $M\setminus B(x_0,\vge)$ for sufficiently small $\vge>0$. On the other hand, if $V$ is a $G$-bounded perturbation of $P$ in $M,$ then $G^{M}_{P}\asymp G^{M}_{P- tV}$ on $M \times M \setminus \{ (x, x) : x \in M \}$ provided $|t|$ is small enough \cite{MM1,YP3,YP1}. Furthermore, if $G^{M}_{P}(x, y)$ and $G^{M}_{P- V}(x, y)$ are equivalent and $V$ has a definite sign, then $V$ is a $G$-bounded perturbation of $P$ in $M$. Moreover, in this case, $E_+$ is an open half-line which is contained in $S_+\setminus \{ \lambda_0 \}$ \cite[Corollary~3.6]{YP2}. } \end{rem} \medskip Finally, we discuss sufficient conditions for the compactness of the following weighted Green operators with weight $W\geq 0$. Let \begin{equation}\label{weighted Green} \mathcal{G} \! f(x)\!:= \!\!\! \int_{M}\!\!\! \Green{M}{P}{x}{y}W(y)f(y)\!\,\mathrm{d}m(y),\;\; \mathcal{G} ^\odot \!f(y) \!:=\!\!\! \int_{M}\!\!\! \Green{M}{P}{x}{y}W(x)f(x)\!\,\mathrm{d}m(x) \end{equation} in certain weighted $L^p$ spaces, where $1\leq p\leq \infty$.
Let $\phi$ and $\tilde \phi$ be a pair of positive continuous functions on $M$, and set $$L^{p}(\phi_p):=L^{p}(M,(\phi_p)^p\!\,\mathrm{d}m),\quad L^{p}(\tilde{\phi}_p):=L^{p}(M,(\tilde{\phi}_p)^p\!\,\mathrm{d}m),$$ where \begin{equation}\label{eq:2.9} \phi_p:=\phi^{-1}(\phi W\tilde{\phi})^{1/p}, \qquad \tilde{\phi}_p:=\tilde{\phi}^{-1}(\phi W\tilde{\phi})^{1/p}. \end{equation} We have \begin{theorem}[\cite{YP17}]\label{thmcomp} Let $P$ be a subcritical operator in $M$. Assume that $W>0$ is a semismall perturbation of $P^\star $ and $P$ in $M$, and let $\gl_0:=\gl_0(P,W,M)$. Then \begin{enumerate} % \item The operator $P-\gl_0W$ is positive-critical with respect to $W$, that is, \begin{equation}\label{phiwphi} \int_M \tilde{\phi}(x)W(x)\phi(x)\,\mathrm{d}m(x)<\infty, \end{equation} where $\phi$ and $\tilde{\phi}$ denote the ground states of $P-\gl_0 W$ and $P^\star-\gl_0 W$, respectively. Moreover, $\gl_0= \|\mathcal{G}\|_{L^p(\phi_p)}^{-1}> 0$ for any $1\leq p\leq \infty$. \item For any $1\leq p\leq \infty$, the integral operators $\mathcal{G}$ and $\mathcal{G} ^\odot$ defined in \eqref{weighted Green} are compact on $L^{p}(\phi_p)$ and $L^{p}(\tilde{\phi}_p)$, respectively.
\item For $1\leq p\leq \infty$, the spectrum of $\mathcal{G} \!\!\upharpoonright_{L^{p}(\phi_p)}$ contains $0$, and otherwise consists of at most a sequence of eigenvalues of finite multiplicity which has no point of accumulation except $0$. \item For any $1\leq p\leq\infty$, $\phi$ (resp., $\tilde{\phi}$) is the unique nonnegative eigenfunction of the operator $\mathcal{G} \!\!\upharpoonright_{L^p(\phi_p)}$ (resp., $\mathcal{G} ^\odot\!\!\upharpoonright_{L^p(\tilde{\phi}_p)}$). The corresponding eigenvalue $\gn=( \gl_0)^{-1}$ is simple. \item The spectrum of $\mathcal{G} \!\!\upharpoonright_{L^{p}(\phi_p)}$ is $p$-independent for all $1\leq p\leq \infty$, and we have $$0\in \sigma\left(\mathcal{G} \!\!\upharpoonright_{L^p(\phi_p)}\right) = \sigma\left(\mathcal{G} ^\odot\!\!\upharpoonright_{L^p(\tilde{\phi}_p)}\right) \subset\overline{ B\Big(0, ( \gl_0)^{-1}\Big)} .$$ \item Suppose further that $P$ is symmetric. Let $\phi_k$ be the $k$-th (weighted) eigenfunction in $L^2(M, W\!\,\mathrm{d}m)$ (counting multiplicity). Then for each $k\geq 1$, the quotient of the eigenfunctions $\phi_k/\phi$ is bounded in $M$ and has a continuous extension up to the Martin boundary of the pair $(M, P)$.
\end{enumerate} \end{theorem} \begin{remark}{\em We would like to point out that criticality theory, and in particular the results of this paper, are also valid for the class of {\em classical solutions} of locally uniformly elliptic operators of the form \begin{equation} \label{L} Lu:=-\sum_{i,j=1}^N a^{ij}(x)\partial_{i}\partial_{j}u + b(x)\cdot\nabla u+c(x) u, \end{equation} with real and locally H\"older continuous coefficients, and for the class of {\em strong solutions} of locally uniformly elliptic operators of the form \eqref{L} with locally bounded coefficients (provided that the formal adjoint operator also satisfies the same assumptions); see \cite{YP3, YP1, YP2,YP5, Pinsky95} and references therein. Nevertheless, for the sake of clarity, we prefer to present our results only for operators in divergence form \eqref{operator} and weak solutions. } \end{remark} \section{Aims and objectives}\label{sec-AO} In this section we present the problems that we study in our paper. \subsection{Maximal interval of equivalence} The following problem was posed in \cite[Conjecture~3.7]{YP2}; see also \cite[Example~8.6]{YP5} for a related counterexample. \begin{problem}\label{pb_equivalence} Suppose that $P$ of the form \eqref{operator} is subcritical in $M$, and assume that $W \geq 0$ is a $G$-bounded perturbation of $P$ in $M$. Is it true that $$ E_+ = S_+ \setminus \{ \lambda_0 \} ? $$ \end{problem} In Section~\ref{section_maximal_green} we provide a positive answer to the above question if $P$ is \emph{symmetric} and its positive minimal Green function satisfies the \emph{quasimetric property}. See also Lemma~\ref{lem_2}, where we prove that $SE_+ = S_+ \setminus \{ \lambda_0 \} $ for a certain family of nonnegative $G$-semibounded perturbations of a subcritical operator $P$ in $M$. \subsection{$h$-big perturbation} Next, we discuss a class of perturbations known as {\em $h$-big perturbations}.
This notion was introduced by A.~Grigor'yan and W.~Hansen \cite{GH} for the case when $P = -\Delta$, and later it was generalized by M.~Murata (see \cite{MM2, MM3}) to elliptic operators of the form \eqref{operator}. \begin{defi}\label{h_big_defi} {\rm Suppose that $P$ of the form \eqref{operator} is subcritical in $M$. Let $h$ be a positive supersolution of the equation $$ P \, u = 0 \quad \mbox{in} \ M. $$ We say that a nonnegative potential $W$ is {\em $h$-big in $M$} if there is no function $v$ satisfying $$ (P + W) v = 0 \quad \mbox{in } M \mbox{ and } 0<v \leq h \quad \mbox{in a neighborhood of infinity in } M. $$ Otherwise, $W$ is said to be {\em non-$h$-big}. } \end{defi} \begin{rem} {\rm It is evident from the definition that $h$-bigness generalizes the following Liouville property for the Schr\"odinger equation \cite{AG}: Let $M$ be a smooth, noncompact Riemannian manifold and let $W\neq 0$ be a smooth nonnegative potential on $M$. We say that the operator $-\Delta + W$ satisfies the {\em Liouville property} if \begin{equation}\label{liouville_laplace} (-\Delta + W)u = 0 \quad \mbox{in } M, \mbox{ and } 0\leq u\in L^\infty(M), \end{equation} implies $u= 0$. } \end{rem} Clearly (see for example \cite{AG}), if $W\gneqq 0$ has compact support, then the above Liouville property holds true if and only if $P:=-\Delta$ is critical in $M$ (in other words, $M$ is parabolic). On the other hand, if $P=-\Delta$ is subcritical in $M$ and $$ \int_{M} G_P^M(x, y) W(y) \,\mathrm{d}m(y) < \infty, $$ then the Liouville property does not hold \cite{AG, GH}. Moreover, it follows from \cite[Proposition~3.4]{YP5} that if $P$ is a subcritical operator in $M$ of the form \eqref{operator}, and $h\in \mathcal{C}_P(M)$, then $W\gneqq 0$ is non-$h$-big if $$ \int_{M} G_P^M(x, y) W(y) h(y) \,\mathrm{d}m(y) < \infty.
$$ \medskip For a given subcritical operator $P$ of the form \eqref{operator} there is a natural class of weights satisfying $\gl_0(P,W,M)>0$, which are `big' in a certain sense. \begin{defi}[\cite{DFP}]\label{def-opt-w} {\rm We say that $W\gneqq 0$ is an {\em optimal Hardy-weight} for $P$ in $M$ if the following three properties hold: \begin{itemize} \item {\bf Criticality:} $P-W$ is critical in $M$; we denote by $\vgf$ and $\vgf^\star$ the corresponding ground states of $P-W$ and $P^\star-W$. \item {\bf Optimality at infinity:} for any $\gl > 1$ and $K\Subset M$, $P -\gl W\not\geq 0$ in $M\setminus K$. \item {\bf Null-criticality:} $\vgf\vgf^\star\not\in L^1(M,W\!\,\mathrm{d}m)$. \end{itemize} } \end{defi} The following theorem is a version of \cite[Theorem~4.12]{DFP} (cf. the discussion therein). \begin{thm}\label{thm_DFP} Let $P$ be a subcritical operator in $M$ and let $G^{M}_P(x,y)$ be its minimal positive Green function. Let $u\in \mathcal{C}_P(M)$ satisfy \begin{equation}\label{DFP_cond} \lim_{x \rightarrow \infty} \frac{G_P^M(x,y)}{u(x)} = 0, \end{equation} where $\infty$ is the ideal point in the one-point compactification of $M$. Let $\phi\gneqq 0$ be a compactly supported smooth function, and consider its Green potential $$G_\phi(x):=\int_{M} G_P^M(x, y) \phi(y) \,\mathrm{d}m(y).$$ Then \begin{equation}\label{W-Hardy} W:=\frac{P(\sqrt{G_\phi u})}{\sqrt{G_\phi u}} \end{equation} is an optimal Hardy-weight for $P$ in $M$. Moreover, $$ W(x) = \frac{1}{4} \left| \nabla \log\left( \frac{G_\phi(x)}{u(x)} \right) \right|^2_{A(x)} \qquad \mbox{in } M\setminus \operatorname{supp}{\phi}. $$ \end{thm} We omit the proof of Theorem~\ref{thm_DFP} since it can be obtained by a slight modification of the proof of \cite[Theorem~4.12]{DFP}.
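\begin{rem}{\em As a simple illustration of Theorem~\ref{thm_DFP} (cf. \cite{DFP}), let $P=-\Delta$ in $M={\mathbb R}^N$ with $N\geq 3$, and take $u=1\in \mathcal{C}_P(M)$; condition \eqref{DFP_cond} is satisfied since $G_P^M(x,y)=c_N|x-y|^{2-N}\to 0$ as $x\to\infty$. If $\phi$ is a radially symmetric mollifier supported in the unit ball with $\int_{{\mathbb R}^N}\phi\,\mathrm{d}m=1$, then Newton's theorem gives $G_\phi(x)=c_N|x|^{2-N}$ for $|x|\geq 1$, and hence
$$
W(x) = \frac{1}{4} \left| \nabla \log\left( c_N|x|^{2-N} \right) \right|^2 = \frac{(N-2)^2}{4|x|^{2}} \qquad \mbox{for } |x|>1 .
$$
Thus, away from the support of $\phi$, the construction recovers the classical Hardy-weight with its best constant. } \end{rem}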
In Section~\ref{sec_hbig}, we discuss the following problem. \begin{problem} Study the $h$-bigness property of optimal Hardy-weights $W$ given by Theorem~\ref{thm_DFP}. \end{problem} \subsection{ Critical Hardy-weights} An important feature of classical Hardy-weights $W$ is the knowledge of the best Hardy constant. In other words, for such Hardy-weights the value of $\gl_0(P,W,M)$ is known (in contrast to the case of a general weight). We note that the problem of finding a critical potential for a given subcritical operator was studied in \cite[Section~5]{PT06}. The answer obtained there relies on solving a nontrivial auxiliary variational problem. Moreover, this variational approach is obviously restricted to {\em symmetric} subcritical operators. In Section~\ref{sec-critical-Hardy} we prove, for any subcritical operator $P$ of the form \eqref{operator}, the existence of a large family of critical Hardy-weights which are given by a simple explicit formula. More precisely, we present a family of \textquoteleft small\textquoteright\, Hardy-weights $W_\mu$ such that each $W_\mu$ is a semismall perturbation of $P$ in $M$, and $P-W_\mu$ is positive-critical with respect to $W_\mu$. In particular, $\gl_0(P,W_\mu,M)=1$. Recall that {\em optimal} Hardy-weights $W$ given by Theorem~\ref{thm_DFP} are $h$-big and $P-W$ is null-critical with respect to $W$. \subsection{ Liouville comparison principle} Next, we recall a Liouville comparison principle for nonnegative Schr\"odinger-type operators. \begin{thm}\cite[Theorem~1.7]{YP07}\label{YP07-thm} Let $N \geq 1$ and let $M$ be a noncompact connected Riemannian manifold.
Consider two Schr\"odinger operators defined on $M$ of the form \eqref{eq-symm}, that is, $$ P_j := -\operatorname{div} (A_j \nabla) + V_j \qquad j = 0,1, $$ such that $A_j$ satisfy \eqref{ell}, and $V_j \in L^{q}_{{\mathrm{loc}}} (M)$ for some $q > N/2$, where $j = 0,1$. Suppose that the following assumptions hold true: \begin{enumerate} \item The operator $P_1$ is critical in $M$. Denote by $\Phi$ its ground state. \item $P_0$ is nonnegative in $M$, and there exists a real function $\Psi \in H^{1}_{{\mathrm{loc}}}(M)$ such that $\Psi_{+} \neq 0$, and $P_0 \Psi \leq 0$ in $M$, where $u_+(x) := \max \{ 0, u(x) \}.$ \item The following inequality holds: \begin{equation*} (\Psi_+)^2(x) A_0(x) \leq C \Phi^2(x) A_1(x) \qquad \mbox{ a.e. in } M, \end{equation*} where $C > 0$ is a positive constant, and the matrix inequality $A\leq B$ means that $B - A$ is a positive semi-definite matrix. \end{enumerate} Then the operator $P_0$ is critical in $M$ and $\Psi$ is its ground state. \end{thm} \medskip We note that in Theorem~\ref{YP07-thm} there is no assumption on the difference of the given potentials $V_j$. In \cite[Problem~5]{YP07} the author proposed to generalize Theorem~\ref{YP07-thm} to the case of {\em nonsymmetric} elliptic operators of the form \eqref{operator} with the same (or even with comparable) principal parts. In a recent paper \cite{ABG}, the authors gave a partial answer to the above problem using a probabilistic approach along with criticality theory, under some assumptions on the difference of the given potentials. In Section~\ref{Sec_4.1}, we prove another version of the Liouville comparison principle for nonsymmetric nonnegative operators. In particular, we provide a quantitative bound on the difference of the given potentials in terms of a certain Hardy-weight that guarantees the validity of a Liouville comparison principle. Moreover, in contrast to \cite[Theorem~2.3]{ABG}, which holds in ${\mathbb R}^N$, our result holds in any noncompact Riemannian manifold.
We refer to Theorem~\ref{liouville_critical_thm} for more details. \section{Maximal interval of equivalence of Green functions}\label{sec_equivalence}\label{section_maximal_green} In the present section we provide a partial answer to Problem~\ref{pb_equivalence} concerning $G$-bounded perturbations under the quasimetric assumption. This property of Green functions has been considered previously by several authors, for example in \cite{FNV, KV, YP5}. \begin{defi}\label{def-qk}{\em A {\em quasimetric kernel} $K$ on a measure space $(M, \mu)$ is a measurable function $K : M \times M \rightarrow (0, \infty]$ such that the following conditions hold. \begin{enumerate} \item The kernel $K$ is symmetric: $K(x, y) = K(y, x)$ for all $ x, y \in M. $ \item The function $d := 1/K$ satisfies the quasi-triangle inequality \begin{equation}\label{quasimetric} d(x, y) \leq C(d(x, z) + d(z, y))\qquad \forall x, y, z \in M, \end{equation} for some $C > 0$, called the {\em quasimetric constant} for $K$. \end{enumerate} } \end{defi} For example, the positive minimal Green function of $-\Delta$ in ${\mathbb R}^N$, $N\geq 3$, which is a positive multiple of $|x-y|^{2-N}$, is a quasimetric kernel, since $|x-y|^{N-2}$ satisfies \eqref{quasimetric} with $C=\max\{1,2^{N-3}\}$. \begin{remark}\label{rem-quasi} {\em Using the Ptolemy inequality \cite[Lemma~2.2]{FNV}, it follows that if $G_{P}^{M}$ is a quasimetric kernel in the sense of Definition~\ref{def-qk}, then it satisfies the quasimetric inequality of \cite[Lemma~7.1]{YP5}. Therefore, in this case and in light of \cite[Lemma~7.1]{YP5}, if $W$ is a $G$-semibounded perturbation, then $W$ is in fact a $G$-bounded perturbation. } \end{remark} We are now in a position to state the main result of the present section. We have \begin{thm}\label{main-thm} Let $P$ be a second-order, symmetric, subcritical elliptic operator of the form \eqref{symm_P} defined on a noncompact Riemannian manifold $M$, and let $0\lneqq W\in L^q_{\mathrm{loc}}(M;{\mathbb R})$, with $q>N/2$, be a $G$-semibounded perturbation of $P$ in $M$. Assume further that $G_{P}^{M}$ is a quasimetric kernel.
Then $$ G^{M}_{P} \asymp G^{M}_{P - \varepsilon W} \qquad \mbox{on} \ M \times M $$ for all $\varepsilon < \lambda_{0}=\lambda_{0}(P,W,M).$ Moreover, $$E_+=S_+ \setminus \{ \lambda_0 \} .$$ \end{thm} Before proving Theorem~\ref{main-thm}, we recall some general results concerning the equivalence of Green functions. We start with the following lemma. \begin{lem}[\cite{MM1,YP3,YP1}]\label{interval_equivalence} Let $P$ be a second-order, subcritical elliptic operator of the form \eqref{operator} defined on a noncompact Riemannian manifold $M$, and let $V\in L^q_{\mathrm{loc}}(M;{\mathbb R})$ with $q>N/2$ be a $G$-bounded perturbation (that is, the $3G$-inequality \eqref{supremum} holds true). Then $P - \varepsilon V$ is subcritical and \begin{equation} G^{M}_{P} \asymp G^{M}_{P - \varepsilon V} \qquad \mbox{on} \ M \times M \end{equation} for all $|\varepsilon| < (2C_0)^{-1}$. In particular, $\lambda_{0}: = \lambda_{0}(P,V, M) > 0$. \end{lem} \begin{proof} Consider the iterated Green kernel \begin{equation}\label{ik2} G^{(i)}_{P}(x, y) := \left\{ \begin{array}{ll} G^{M}_{P}(x, y) & i=0, \\[4mm] \int_{M} G^{M}_{P}(x, z) V(z) G^{(i-1)}_{P}(z, y) \,\mathrm{d}m(z) & i\geq 1. \end{array} \right. \end{equation} Then it follows from the hypothesis and an induction argument that $$ |G^{(i)}_{P}(x, y)| \leq (C_0)^{i} G^{M}_{P}(x, y), $$ where $C_0$ is given by \eqref{supremum}. Hence, \begin{equation*}\label{first_estimate} \sum_{i = 0}^{\infty} |\varepsilon|^{i} \left|G^{(i)}_{P}(x, y) \right| \leq \frac{1}{1 - C_0 |\varepsilon|} G^{M}_{P}(x, y), \end{equation*} provided $|\vge|<C_0^{-1}$. Fix $|\vge|<C_0^{-1}$.
Using a standard elliptic argument, it follows that the Neumann series $$ H^{\varepsilon}_{P}(x, y) := \sum_{i = 0}^{\infty} {\varepsilon}^{i} G^{(i)}_{P}(x, y) $$ converges locally uniformly in $M$ to a Green function of the equation $(P- \varepsilon V)u = 0.$ Moreover, for $|\vge|<C_0^{-1}$, the positive minimal Green function $G^{M}_{P - |\varepsilon| |V|}$ exists, and by the minimality of the Green function it satisfies \begin{equation*}\label{upper_bound3} 0 \leq G^{M}_{P - |\varepsilon| |V|} (x, y) \leq \frac{1}{1 - |\varepsilon|C_0 } G^{M}_{P}(x, y). \end{equation*} Hence, $G^{M}_{P - \varepsilon V}$ exists, and by the generalized maximum principle we obtain \begin{equation}\label{upper_bound} 0\leq G^{M}_{P - \varepsilon V} (x, y) \leq G^{M}_{P -| \varepsilon| |V|} (x, y) \leq \frac{1}{1 - |\varepsilon|C_0 } G^{M}_{P}(x, y). \end{equation} Using the resolvent equation \cite[Lemma~2.4]{YP1} $$G^{M}_{P - \varepsilon V} (x, y)= G^{M}_{P} (x, y)+ \varepsilon\int_{M} G^{M}_{P - \varepsilon V}(x, z) V(z) G^{M}_{P}(z, y) \,\mathrm{d}m(z),$$ we obtain $$G^{M}_{P} (x, y)\leq G^{M}_{P - \varepsilon V} (x, y) + \frac{|\varepsilon| C_0}{1 - |\varepsilon|C_0} G^{M}_{P} (x, y). $$ Hence, for $|\varepsilon| < (2C_0)^{-1}$ we have $$ \frac{1 - 2|\varepsilon|C_0 }{1 - |\varepsilon|C_0 } G^{M}_{P}(x, y)\leq G^{M}_{P - \varepsilon V} (x, y), $$ and the lemma follows. \end{proof} We recall a lemma regarding the convergence of the Neumann series of the iterated Green functions in the case of a perturbation by a potential $W$ with a definite sign. \begin{lem}[{\cite[Lemma~3.1]{YP5}}] \label{conv} Let $P$ be a second-order, subcritical elliptic operator of the form \eqref{operator} defined on a noncompact Riemannian manifold $M$, and let $W\in L^q_{\mathrm{loc}}(M;{\mathbb R})$, with $q>N/2$, be a nonzero, nonnegative potential such that $\lambda_{0}: = \lambda_{0}(P,W, M) > 0$.
Then \begin{equation}\label{c1} \int_{M} G_{P}^{M} (x, z) W(z) G_{P}^{M}(z, y)\,\mathrm{d}m(z) < \infty, \end{equation} and for every $0 < \varepsilon < \lambda_{0}$, the Neumann series $\sum_{i = 0}^{\infty} \varepsilon^{i} G_{P}^{(i)}(x, y)$ converges to $G_{P - \varepsilon W}^{M}(x, y)$ in the compact-open topology. \end{lem} \begin{proof}[Proof of Theorem~\ref{main-thm}] In light of Remark~\ref{rem-quasi} we may assume that $W$ is a $G$-bounded perturbation. Clearly, $E_+$ is an open set. Indeed, if $\gl\in E_+$, then $W$ is a $G$-bounded perturbation of $P-\gl W$, and by Lemma~\ref{interval_equivalence}, there exists $\vge_0>0$ such that $(\gl-\vge_0,\gl+\vge_0)\subset E_+$ (see also \cite[Corollary~3.6]{YP2}). In particular, $\gl_0\not \in E_+$. \medskip Next, we claim that $G^{M}_{P} \asymp G^{M}_{P - \varepsilon W}$ for all $\varepsilon <C_0^{-1}$. It follows from Lemma~\ref{interval_equivalence} that $G^{M}_{P} \asymp G^{M}_{P - \varepsilon W}$ for all $|\varepsilon| < (2C_0)^{-1}$. Moreover, by the generalized maximum principle, if $\varepsilon_1 < \varepsilon_2, $ then \begin{equation}\label{g_maximum} G^{M}_{P- \varepsilon_1 W} \leq G^{M}_{P- \varepsilon_2 W}. \end{equation} Therefore, $G^{M}_{P} \leq G^{M}_{P - \varepsilon W}$ for all $ 0\leq\varepsilon < \gl_0$. On the other hand, for $0<\varepsilon<\frac{1}{C_{0}}$, we have by \eqref{upper_bound} that \begin{equation}\label{eq177} G^{M}_{P}\leq G^{M}_{P - \varepsilon W}\leq \frac{1}{1-\vge C_0}G^{M}_{P}.
\end{equation} Fix $\varepsilon > 0$, and let $$G_0: =G^{M}_{P + \varepsilon W},\qquad G_1: =G^{M}_{P - \frac{W}{2C_0}}, \qquad \alpha:=\frac{\vge}{\vge+1/(2C_0)}\,.$$ In light of \cite[Theorem~{3.4}]{YP2} and \eqref{eq177}, we obtain $$ G_0=G^{M}_{P+\vge W}\leq G^{M}_{P} \leq (G_{1})^{\alpha}(G_{0})^{1-\alpha} \leq 2^{\alpha} (G^{M}_{P})^{\alpha}G_{0}^{1-\alpha}. $$ Therefore, $(G^{M}_{P})^{1-\alpha}\leq 2^{\alpha}(G_{0})^{1-\alpha}$, and since $\alpha/(1-\alpha)=2C_0\vge$, we get $$G^{M}_{P+ \vge W}\leq G^{M}_{P}\leq 2^{2C_0\vge}G^{M}_{P+ \vge W}.$$ Hence, $G^{M}_{P - \varepsilon W} \asymp G^{M}_{P}$ for all $\varepsilon < \frac{1}{C_{0}}$. \medskip Let $E_0: =\sup E_+$. Thus, $0<\frac{1}{C_{0}} \leq E_0\leq \lambda_{0}$. We claim that $E_0 = \lambda_{0}$. Suppose to the contrary that there exists $\delta > 0$ such that $E_0 + \delta < \lambda_{0},$ i.e., $\frac{E_0 + \delta }{\lambda_0} < 1.$ \medskip Set $\,\mathrm{d}W := W(x) \!\,\mathrm{d}m(x)$, and define the iterated kernel \begin{equation*} K^{(i)}(x, y) := \left\{ \begin{array}{ll} \left( E_0 + \delta \right) G^{M}_{P}(x, y) & i=0, \\[4mm] \int_{M} G^{M}_{P}(x, z) K^{(i-1)}(z, y) \,\mathrm{d}W(z) & i\geq 1, \end{array} \right. \end{equation*} and an operator $T : L^{2}(M, \,\mathrm{d}W) \rightarrow L^{2}(M, \,\mathrm{d}W)$ by $$Tf(x): = \left( E_0 + \delta \right) \int_{M} G^{M}_{P}(x, y) f(y) \,\mathrm{d}W(y).$$ We claim that $T$ is well defined and $||T||_{L^2(M,\, \,\mathrm{d}W)} < 1.$ \medskip Let $u$ be a positive supersolution of $(P - \lambda_{0} W) u = 0.$ Then it follows from \cite{YP2} that $$ \left( E_0 + \delta \right) \int_{M} G^{M}_{P}(x, y) u(y) \,\mathrm{d}W(y) \leq \frac{\left( E_0 + \delta \right)u(x)}{\lambda_0}\, , $$ and $$ \left(E_0 + \delta \right) \int_{M} u(x) G^{M}_{P}(x, y)\,\mathrm{d}W(x) \leq \frac{\left( E_0 + \delta \right) u(y)}{\lambda_0}\, .
$$ Therefore, by Schur's test we obtain $$||T||_{L^2(M, \, \,\mathrm{d}W)} \leq \frac{ E_0 + \delta }{\lambda_0} < 1.$$ Define \begin{equation}\label{converge_green} H(x, y) := \sum_{i = 0}^{\infty} \left( E_0 + \delta \right)^{i} K^{(i)}(x, y) = \left( E_0 + \delta \right) G^{M}_{P - (E_0 + \delta) W}(x, y), \end{equation} which is well defined by Lemma~\ref{conv}. Hence, $T$ is a bounded linear integral operator on $L^{2}(M, \!\,\mathrm{d}W)$ with the quasimetric kernel $K^{(0)}$ and with a norm strictly less than $1$. Consequently, \cite[Theorem~1.1]{FNV} implies that \begin{equation}\label{verbi} \mathrm{e}^{ \frac{C_1K^{(1)}(x, y)}{K^{(0)}(x, y)}}K^{(0)}(x, y) \leq H(x, y) \leq \mathrm{e}^{ \frac{C_2K^{(1)}(x, y)}{K^{(0)}(x, y)}} K^{(0)}(x, y), \end{equation} for some positive constants $C_1$ and $C_2$. Therefore, \eqref{verbi} and \eqref{converge_green} immediately imply \begin{equation}\label{eq-final} \left( E_0 + \delta \right) G^{M}_{P - (E_0 + \delta)W}(x, y) \leq K^{(0)}(x, y)\, \mathrm{e}^{ \frac{C_2K^{(1)}(x, y)}{K^{(0)}(x, y)}}. \end{equation} Now, observe that $$ \frac{K^{(1)}(x, y)}{K^{(0)}(x, y)} = \frac{1}{G_{P}^{M}(x, y)} \int_{M} G_{P}^{M}(x, z) W(z) G_{P}^{M}(z, y) \, \,\mathrm{d}m(z) \leq C_0. $$ Hence, \eqref{eq-final} yields $$ G^{M}_{P} (x, y)\leq G^{M}_{P - (E_0 + \delta)W}(x, y) \leq C G^{M}_{P} (x, y), $$ where $C$ is a positive constant. Thus, $E_0+\delta\in E_+$, which contradicts the definition of $E_0$ as $\sup E_+$. Hence, $E_0 = \lambda_0.$ \end{proof} \begin{rem} {\rm The validity of the conjecture $E_+=S_+ \setminus \{ \lambda_0 \}$ for a general nonnegative $G$-bounded perturbation $W$ of an operator $P$ of the form \eqref{operator} remains open (cf. \cite[Conjecture~3.7]{YP2} and the counterexample \cite[Example~8.6]{YP5}). } \end{rem} \section{Optimal Hardy-weights and $h$-bigness}\label{sec_hbig} In the present section we study the $h$-bigness of \emph{optimal Hardy-weights} $W\geq 0$ given by Theorem~\ref{thm_DFP}.
Recall that $G$-bounded perturbations are non-$h$-big \cite{MM1}. We note that under the conditions of Theorem~\ref{thm_DFP}, the operator $P_\gl : = P - \lambda W$ is subcritical in $M$ for all $\lambda < 1$. We have \begin{thm}\label{thm-hbig} Consider the operator $P_\gl := P - \lambda W$, and assume that \begin{itemize} \item The operator $P$ is subcritical, and let $G_\phi$ be a Green potential with respect to $P$, with a compactly supported smooth density $\phi$. \item There exists a positive solution $u$ of the equation $Pv=0$ in $M$ satisfying \eqref{DFP_cond}. \item $W$ is the corresponding optimal Hardy-weight given by \eqref{W-Hardy}. \item $0<\lambda <1$. \end{itemize} Set $\alpha_\pm := \frac{1 \pm \sqrt{1 - \lambda}}{2}$. Then $\gl W$ is an $h_\pm$-big perturbation for the positive $P_\gl$-supersolutions $$h_\pm : = u^{(1 - \alpha_\pm)} (G_\phi)^{\alpha_\pm}.$$ \end{thm} \begin{proof} Let $K:=\operatorname{supp} \phi$. Since $\lambda = 4 \alpha_\pm (1 - \alpha_\pm)$, it follows that $h_\pm$ are indeed positive $P_\gl $-supersolutions in $M$, which are positive solutions of the equation $P_\gl v=0$ in $M\setminus K$ (see \cite[Theorem~3.1]{YP2}). Let $v_\pm$ be nonnegative solutions of $Pw=(P_\gl + \lambda W)w = 0 $ in $M$ satisfying $0 \leq v_\pm \leq h_\pm$. Suppose that $v_\pm>0$. So, $$ \frac{v_\pm(x)}{u(x)} \leq \left( \frac{G_\phi(x)}{u(x)} \right)^{\alpha_\pm}. $$ By assumption \eqref{DFP_cond}, $\lim_{x \rightarrow \infty} \frac{G_P^M(x,y)}{u(x)} = 0$, and therefore, $\lim_{x \rightarrow \infty} \frac{G_\phi(x)}{u(x)} = 0$. Consequently, $$ \lim_{x \rightarrow \infty} \frac{v_\pm(x)}{u(x)} = 0. $$ In light of \cite[Proposition~6.1]{DFP}, we conclude that $v_\pm$ are positive solutions of the equation $Pw=0$ in $M$ of minimal growth in a neighborhood of infinity in $M$.
Hence $v_\pm$ are ground states, and $P$ is critical in $M$, a contradiction. Hence, we conclude that $v_\pm \equiv 0.$ \end{proof} \begin{rem}{\em 1. Since near infinity in $M$ we have $$ \left( \frac{G_\phi(x)}{u(x)} \right)^{\alpha_+}\leq \left( \frac{G_\phi(x)}{u(x)} \right)^{\alpha_-},$$ it is enough to prove that $\gl W$ is an $h_-$-big perturbation. \medskip 2. Fix $x_0\in M$. We may consider the punctured manifold $M^*:=M\setminus \{x_0\}$, and let $u$ be a positive solution of the equation $P w=0$ in $M$ satisfying \eqref{DFP_cond}, and set $G(x):=G_P^M(x,x_0)$. Let $$ W(x) := \frac{1}{4} \left| \nabla \log\left( \frac{G(x)}{u(x)} \right) \right|^2_{A(x)} \qquad \mbox{in } M\setminus \{x_0\}. $$ As in the proof of Theorem~\ref{thm-hbig}, it follows that for $0<\gl<1$, the potential $\gl W$ is an $h_-$-big perturbation for $h_-:=u^{(1 - \alpha_-)} (G)^{\alpha_-}$. } \end{rem} \section{Critical Hardy-weights}\label{sec-critical-Hardy} Throughout the present section we assume that $P$ is a subcritical operator in $M$ of the form \eqref{operator}. We fix a positive Radon measure $\mu$ on $M$ with a \textquoteleft nice\textquoteright\, nonnegative density $\mu(x)$. We denote $\mathrm{d}\mu=\mu(x)\,\mathrm{d}m$, and we assume that the corresponding {\em Green potential} $G_\mu$ is finite. That is, we assume that for some $x\in M$ (and therefore, for any $x\in M$) \begin{equation}\label{eq_finite_pot} G_\mu(x):= \int_{M}\!\!\! \Green{M}{P}{x}{y}\mathrm{d}\mu(y)<\infty.
\end{equation} A sufficient condition for \eqref{eq_finite_pot} to hold is, obviously, the existence of $k\geq 1$ and a positive (super)solution $\vgf^\star$ of the equation $P^\star u=0$ in $M^\star_k$ such that $\vgf^\star\in L^1(M^\star_k,\mathrm{d}\mu)$. Set $$W_\mu(x):=\frac{\mu(x)}{G_\mu(x)}\,.$$ Since $PG_\mu=\mu$, it follows that the Green potential $G_\mu$ is a positive solution of the equation $(P-W_\mu)u=0$ in $M$, so $\gl_0:=\gl_0(P,W_\mu,M)\geq 1$. Moreover, since \begin{equation}\label{inv} \int_{M}\Green{M}{P}{x}{y}W_\mu(y)G_\mu(y)\,\mathrm{d}m(y) =G_\mu(x) \quad\forall x\in M, \end{equation} it follows that $G_\mu$ is a positive {\em invariant solution} of the equation $(P-W_\mu)u=0$ in $M$ (see \cite{YP2,YP17} and references therein). Without loss of generality, we assume that $0\in M$, and we denote $G(x):=\Green{M}{P}{x}{0}$. Since $PG=0$ in $M\setminus \{0\}$, and $G$ has minimal growth at infinity in $M$, it follows that for a given Green potential $G_\mu$ and for $\vge>0$ small enough, there exists a positive constant $C$ such that $$G(x)\leq CG_\mu(x) \qquad \forall x\in M\setminus B(0,\vge).$$ On the other hand, let $V_\mu(x):=\frac{\mu(x)}{G(x)}$ in $M$. The following lemma characterizes Green potentials that are comparable (near infinity in $M$) to $G$ (see \cite[Corollary~4.7]{YP5}).
\begin{lemma}\label{lem_1} There exists a positive constant $C>0$ such that \begin{equation}\label{eq_min_gr} C^{-1}G_\mu(x) \leq G(x) \qquad \forall x\in M \end{equation} if and only if $V_\mu$ is a $G$-semibounded perturbation of $P^\star$ in $M$. \medskip Moreover, in this case, we have $V_\mu\asymp W_\mu$ near infinity in $M$, and in particular, $W_\mu$ is a $G$-semibounded perturbation of $P^\star$ in $M$. \medskip In addition, the convex set of all positive solutions $v$ of the equation $P^\star u=0$ in $M$ satisfying $v(0)=1$ is a bounded set in $L^1(M, \mathrm{d}\mu)$. \end{lemma} \begin{proof} Assume first that $V_\mu$ is a $G$-semibounded perturbation of $P^\star$ in $M$. Then \begin{multline*} G_\mu(x)= \int_{M}\Green{M}{P}{x}{y}\frac{\mu(y)}{G(y)}G(y)\!\,\mathrm{d}m(y)=\\ \int_{M}\Green{M}{P}{x}{y}V_\mu(y)G(y)\,\mathrm{d}m(y)\leq CG(x) \qquad \forall x\in M, \end{multline*} and \eqref{eq_min_gr} holds. \medskip On the other hand, suppose that \eqref{eq_min_gr} holds. Consequently, \begin{equation}\label{eq-8} \int_{M}\!\Green{M}{P}{x}{y}V_\mu(y)G(y)\,\mathrm{d}m(y) = G_\mu(x)\leq C G(x)\quad \forall x\in M. \end{equation} Therefore, $V_\mu$ is a $G$-semibounded perturbation of $P^\star$ in $M$. In particular, in this case we have $G_\mu\asymp G$ near infinity. This, in turn, obviously implies that $V_\mu\asymp W_\mu$ near infinity.
\medskip In addition, by \eqref{eq-8} we have $$ \int_{M}\frac{\Green{M}{P}{x}{y}}{\Green{M}{P}{x}{0}}\,\mathrm{d}\mu(y)=\int_{M}\frac{\Green{M}{P}{x}{y}V_\mu(y)G(y)}{G(x)}\,\mathrm{d}m(y) \leq C\qquad \forall x\in M. $$ Therefore, the last assertion of the lemma follows from Fatou's lemma and the Martin representation theorem. \end{proof} The following lemma gives, in particular, a positive answer to Problem~\ref{pb_equivalence} for the class of nonnegative $G$-semibounded perturbations of the form $W_\mu$. \begin{lemma}\label{lem_2} Suppose that \eqref{eq_min_gr} holds. Then $P-W_\mu$ is positive-critical in $M$ with respect to $W_\mu$, and $G_\mu$ is its ground state. Moreover, $$SE_+(P,W_\mu,M)=S_+(P,W_\mu,M) =(-\infty,\gl_0(P,W_\mu,M))=(-\infty,1).$$ \end{lemma} \begin{proof} Recall that $G_\mu$ is a positive solution of the equation $(P-W_\mu)u=0$ in $M$. On the other hand, by our assumption $G_\mu\asymp G$ near infinity in $M$. Note that any positive supersolution $v$ of the equation $(P-W_\mu)u=0$ near infinity in $M$ is a positive supersolution of the equation $Pu=0$ in this neighborhood, while $G$ is a positive solution of $Pu=0$ of minimal growth near infinity. Consequently, $$G_\mu\leq CG\leq C_1 v \qquad \mbox{near infinity in } M.$$ Therefore, $G_\mu$ is a ground state of the equation $(P-W_\mu)u=0$ in $M$, and $P-W_\mu$ is critical in $M$.
Consequently, for any $0<\alpha <1$ and $\varepsilon > 0$ sufficiently small, we have $$G \asymp \Green{M}{P-\alpha W_\mu}{\cdot}{0}\asymp G_\mu \qquad \mbox{in} \ M \setminus B(0, \varepsilon). $$ Furthermore, in light of \cite[Corollary~3.6]{YP2}, $G \asymp \Green{M}{P-\alpha W_\mu}{\cdot}{0}$ also for any $\alpha<0$. So, $SE_+(P,W_\mu,M)=S_+(P,W_\mu,M)=(-\infty,1)$. Moreover, since $P-W_\mu$ is critical in $M$, we have that $P^\star-W_\mu$ is also critical in $M$. Denote by $u_\mu^\star$ its ground state. In particular, $u_\mu^\star$ is a positive invariant solution of the corresponding equation \cite[Theorem~2.1]{YP2}. Therefore, \begin{equation*}\label{gseq7} \int_{M} \!\!G_\mu(x)W_\mu(x)u_\mu^\star(x)\!\,\mathrm{d}m(x)\! \asymp\! \int_{M}\!\! G(x)W_\mu(x)u_\mu^\star(x)\!\,\mathrm{d}m(x) \!=\!u_\mu^\star(0)\!<\!\infty . \end{equation*} Hence, $P-W_\mu$ is positive-critical in $M$ with respect to $W_\mu$. \end{proof} \begin{lemma}\label{lem-semismall} For $k\geq 2$, let $\chi_k$ be a smooth function on $M$ such that $$0\leq \chi_k(x)\leq 1 \mbox{ in } M, \qquad \chi_k\!\!\upharpoonright_{M_{k-1}}=0, \qquad \chi_k\!\!\upharpoonright_{M_{k}^\star}=1,$$ where $\{M_k\}$ is an exhaustion of $M$ (see Section~\ref{sec-pre}). Denote $\mu_k(x):=\chi_k(x)\mu(x)$.
Assume further that \begin{equation}\label{eq_uniform} \lim_{k\to \infty} \left\|\frac{G_{\mu_k}}{G}\right\|_{\infty; M_{k}^\star}=0. \end{equation} Then $W_\mu$ is a semismall perturbation of the operator $P^\star$ in $M$, and for any $1\leq p\leq \infty$ the integral operator $$\mathcal{G}_\mu f(x):= \int_{M} \Green{M}{P}{x}{y}W_\mu(y)f(y)\,\mathrm{d}m(y)$$ is compact on $L^{p}(\phi_p)$, where \begin{equation}\label{eq:2.9a} \phi_p:=G_\mu^{-1}(G_\mu W_\mu u_\mu^\star)^{1/p}. \end{equation} Suppose in addition that $P$ is a symmetric operator on $L^2(M,W_\mu(x)\,\mathrm{d}m)$ with a core $C_0^\infty(M)$. Let $\{(\vgf_k,\gl_k)\}_{k=0}^\infty$ be the set of the corresponding pairs of eigenfunctions and eigenvalues (counting multiplicity), where $\vgf_0:=G_\mu$ and $\gl_0=1$. Then for every $k\geq 1$ there exists a positive constant $C_k$ such that \begin{equation}\label{efest} |\vgf_k(x)|\leq C_k \vgf_0(x) \qquad \mbox{in } M. \end{equation} Furthermore, the function $\vgf_k/\vgf_0$ has a continuous extension $\psi_k$ up to the Martin boundary $\partial_{P}^MM$ of $P$ in $M$. \end{lemma} \begin{proof} The generalized maximum principle and \eqref{eq_uniform} imply \begin{equation}\label{eq_uniform1} \lim_{k\to \infty} \left\|\frac{G_{\mu_k}}{G}\right\|_{\infty; M}=0.
\end{equation} Hence, for every $\vge>0$ and all sufficiently large $k$, \begin{multline*} \int_{M_{k}^\star}\Green{M}{P}{x}{y}W_\mu(y)G(y)\,\mathrm{d}m(y)=\int_{M_{k}^\star}\Green{M}{P}{x}{y}\frac{\mu(y)}{G_\mu(y)}G(y)\,\mathrm{d}m(y) \\ \leq C \int_{M_{k}^\star}\Green{M}{P}{x}{y}\frac{\mu(y)}{G_\mu(y)}G_\mu(y)\,\mathrm{d}m(y)=CG_{\mu_k}(x)<\vge G(x) \qquad \forall x\in M. \end{multline*} Consequently, $W_\mu$ is a semismall perturbation of the operator $P^\star$ in $M$. Therefore, Theorem~\ref{thmcomp} implies that for any $1\leq p\leq \infty$ the integral operator $\mathcal{G}_\mu f(x)$ is compact on $L^{p}(\phi_p)$, and its spectrum is $p$-independent and contained in the closed unit disk. More precisely, the spectrum contains $0$ and, besides, consists of at most a sequence of eigenvalues of finite multiplicity which has no accumulation point except $0$. Moreover, $\vgf_0=G_\mu$ is the unique nonnegative eigenfunction of the operator $\mathcal{G}_\mu \!\!\upharpoonright_{L^p(\phi_p)}$. Furthermore, the corresponding eigenvalue $\gl_0=1$ is simple. \medskip The statement concerning the symmetric case follows from Theorem~\ref{thmcomp}.
We note that by \cite{YP17}, the continuous extension $\psi_k$ of $\vgf_k/\vgf_0$ satisfies for $k\geq 1$ \begin{eqnarray}\label{psineq} \psi_k(\xi)&=& (\psi_0(\xi))^{-1}\gl_k\int_{M}K^M_{P}(z,\xi)W_\mu (z)\vgf_k(z)\,\mathrm{d}m(z)= \nonumber\\ & &\frac{\gl_k\int_{M}K^M_{P}(z,\xi)W_\mu(z)\vgf_k(z)\,\mathrm{d}m(z)} {\int_{M}K^M_{P}(z,\xi)W_\mu(z)\vgf_0(z)\,\mathrm{d}m(z)}\qquad \forall\xi\in\partial_{P}^MM, \end{eqnarray} where $K^M_{P}(\cdot,\xi)$ is the Martin kernel of $P$ in $M$ with a pole at $\xi\in\partial_{P}^MM$, and $\psi_0$ is the corresponding continuous extension of $G_\mu/G$. \end{proof} \begin{remark}\label{rem-landscape}{\em If $\mu=1$ and \eqref{eq_finite_pot} is satisfied, then $G_1$ is called the {\em torsion function} (see for example, \cite{vdBI13} and references therein). In a recent paper \cite{ADFJM}, D.~N.~Arnold, G.~David, M.~Filoche, D.~Jerison and S.~Mayboroda considered the potential $W_1$ (which they called the {\em effective potential}) associated with a Schr\"odinger operator $L$ in a bounded Lipschitz domain $M\subset {\mathbb R}^N$. They showed a remarkable connection between the Neumann eigenfunctions of $L$ and the torsion function $G_1$ (which they call the {\em landscape function}) by proving that $W_1$ acts as an effective potential that governs the exponential decay of these eigenfunctions and delivers information on the distribution of eigenvalues near the bottom of the spectrum. } \end{remark} \section{Finite torsional rigidity}\label{sec-torsion} Throughout the present section we assume that $P$ is a subcritical, symmetric operator on $L^2(M,\!\,\mathrm{d}m)$ of the form \eqref{symm_P}. Without loss of generality, we assume that $0\in M$, and we denote $G(x):=\Green{M}{P}{x}{0}$. In addition, we assume that $G_1\in L^1(M,\!\,\mathrm{d}m)$.
So, we assume that the Green potential $G_1$ satisfies $$ G_1(x):=\int_{M} \Green{M}{P}{x}{y}\,\mathrm{d}m(y)<\infty, \quad \mbox{and }\; T(M):= \int_{M} G_1(x)\,\mathrm{d}m(x)<\infty.$$ $G_1$ (resp., $ T(M)$) is called the {\em torsion function} (resp., {\em torsional rigidity}) with respect to the operator $P$ and the measure $\,\mathrm{d}m$. Note that if $G_1\asymp G$, then the finiteness of the torsion function $G_1$ is clearly equivalent to the finiteness of the torsional rigidity $T(M)$. \medskip Following \cite{vdBI13}, we have \begin{lemma}\label{lem-ftr} Let $P$ be a symmetric subcritical operator in $M$ with finite torsional rigidity. Assume further that there exists a function $$c:(0,\infty)\to (0,\infty)$$ such that $k_P^M(x,y,t)$, the positive minimal heat kernel of $P$ in $(M,\!\,\mathrm{d}m)$, satisfies \begin{equation}\label{eq-hk} k_P^M(x,y,t)\leq c(t)\qquad \forall t>0, \ x,y\in M. \end{equation} Then the spectrum of $P$ on $L^2(M,\!\,\mathrm{d}m)$ is discrete. \medskip Suppose further that there exist $\gb\geq 0$ and $\tilde c>0$ such that $$c(t)\leq \tilde c \min\{t^{-N/2}, t^{-\gb/2}\} \qquad \forall t>0.$$ Then there exists a positive function $C:{\mathbb R}_+\to{\mathbb R}_+$ such that \begin{equation}\label{eq-glj} \gl_j\geq \min\left\{C(\beta) T(M)^{-2/(\gb+2)}j^{2/(\gb+2)}, C(N) T(M)^{-2/(N+2)}j^{2/(N+2)}\right\}, \end{equation} where $\{\gl_j\}_{j=0}^\infty$ is the nondecreasing sequence of the eigenvalues of $P$ (counting multiplicity).
\end{lemma} \begin{proof} Since $$ G_1(x)= \int_M \int_0^\infty k_P^M(x,y,t) {\rm d}t\, \,\mathrm{d}m(y),$$ by Tonelli's theorem, it follows that for any $0<\alpha<1$, we have $$T(M)= (1-\alpha) \int_0^\infty {\rm d}t \int_{M\times M} k_P^M(x,y,(1-\alpha)t)\,\mathrm{d}m(y)\,\mathrm{d}m(x).$$ In light of \eqref{eq-hk} and the semigroup property, we have \begin{align}\label{trace_class} T(M) & \!\geq \!(1-\alpha)\!\! \int_0^\infty \!\!\!\!\big( c(\alpha t)\big)^{\!-1} \! {\rm d}t \!\! \int_{M\times M}\!\!\!\! \!\! k_P^M(x,y,(1-\alpha)t)k_P^M(x,y,\alpha t)\!\,\mathrm{d}m(y)\!\,\mathrm{d}m(x) \notag \\[2mm] & =(1-\alpha) \int_0^\infty \big( c(\alpha t)\big)^{-1} \, {\rm d}t \int_{M} k_P^M(x,x,t)\,\mathrm{d}m(x). \end{align} It follows that the heat operator of $P$ is of trace class. So, for each $t>0$ we have $$ \int_{M} k_P^M(x,x,t)\,\mathrm{d}m(x)=\sum_{j=0}^\infty \exp(-\lambda_j t)<\infty, $$ where $\{\gl_j\}$ is the nondecreasing sequence of all the eigenvalues of $P$ (counting multiplicity). In particular, $P$ has a discrete $L^2(M,\,\mathrm{d}m)$-spectrum. Estimate \eqref{eq-glj} is obtained as in \cite[Theorem~2]{vdBI13}. Indeed, by \eqref{trace_class} we have $$T(M) \!\geq\! (1-\alpha) (\tilde{c})^{-1} \!\! \!\int_0^\infty\!\!\! (\alpha t)^{\gb/2}\! \sum_{j=0}^\infty\! {\mathrm e}^{-\lambda_j t}\!\,\mathrm{d}t\!\geq\! (1-\alpha)(\tilde{c})^{-1} j \!\! \int_0^\infty\! \!\!(\alpha t)^{\gb/2} {\mathrm e}^{-\lambda_j t}\!\,\mathrm{d}t.
$$ Recall that $$\int_0^\infty t^{\gg} {\mathrm e}^{- \ell t}\,\mathrm{d}t= \frac{\Gamma(\gg+1)}{\ell^{\gg+1}}\,.$$ Hence, for $\alpha := \frac{\beta}{\beta+2}$, we obtain \eqref{eq-glj} with $C(\beta)$ given by $$ C(\beta) := \frac{\beta^{\frac{\beta}{\beta + 2}}}{\beta + 2} \left( \frac{2\Gamma((\gb+2)/2)}{\tilde{c}} \right)^{2/(\gb+2)}\!\!. \qquad \qquad\qquad \qedhere $$ \end{proof} \section{Liouville comparison principle}\label{Sec_4.1} The present section is devoted to the study of a Liouville comparison principle for {\em nonsymmetric} elliptic operators. The following theorem should be compared with Theorem~\ref{YP07-thm} and \cite[Theorem~2.3]{ABG}. \begin{theorem}\label{liouville_critical_thm} Let $M$ be a smooth, noncompact, connected manifold of dimension $N$. Consider two operators \begin{equation*} P_k := \mathcal{L}_{k} - V_k \qquad k = 1,2, \end{equation*} where each $\mathcal L_k$ is of the form \eqref{operator}, and $V_k \in L^{p}_{{\mathrm{loc}}}(M ; \mathbb{R})$, where $p>N/2$. Let $\overline{V} (x) = \max \{ V_1(x), V_2(x) \}.$ Suppose that there exist $K_1 \Subset K\Subset M$ such that $\mathcal{L}_{1} = \mathcal{L}_{2}$ in $M \setminus K_1$, and $P_k\geq 0$ in $M \setminus K_1$ for $k=1,2$. \medskip Let $G_k$ be a positive supersolution of the equation $P_k u=0$ in $M \setminus K_1$, such that $G_k$ is a positive solution of the equation $P_k u=0$ in $M \setminus K$ of minimal growth at infinity in $M$, where $k=1, 2$. Suppose that \begin{equation}\label{eq-V1-V2} \frac{|V_1 - V_2|}{2} \leq W:= \frac{1}{4} \left| \nabla \log\left( \frac{G_1}{G_2} \right) \right|^2_{A} \qquad \mbox{in } M\setminus K. \end{equation} Then (a) $\mathcal{L}_{1}-\overline V \geq 0$ in $M\setminus K$.
\medskip (b) Assume further that the following assumptions hold: \begin{enumerate} \item The operator $P_1$ is critical in $M$, and let $\Phi \in \mathcal{C}_{P_1}(M)$ be its ground state. \medskip \item $P_2 \geq 0$ in $M$, and there exists a real function $\Psi \in W^{1,2}_{\mathrm{loc}}(M)$ such that $\Psi_+ \neq 0$ and $P_2 \Psi \leq 0$ in $M$. \medskip \item The following inequality holds: $$ \Psi_+ \leq C \Phi \qquad \mbox{in } M. $$ \end{enumerate} Then the operator $P_2$ is critical in $M$ and $\Psi$ is its ground state. In particular, the equation $P_2 v=0$ admits a unique positive supersolution in $M$. Moreover, $\Psi\asymp \Phi$ in $M$. \end{theorem} \begin{proof} The proof relies on criticality theory, the supersolution construction \cite{DFP}, and on the well known ``maximal $\varepsilon$-trick''. We denote the common restriction of the operators $ \mathcal{L}_k$ to $M \setminus K_1$ by $ \mathcal{L}$. (a) We note that $U:=(G_1 G_2)^{1/2}$ is a positive solution of the equation \begin{equation}\label{optimal_Hardy_liouville} \left(\mathcal{L} - \left(\frac{V_1 + V_2}{2} \right) - W \right) v = 0 \quad \ \mbox{ in } M \setminus K , \end{equation} where $W$ is given in \eqref{eq-V1-V2}. Since $$ \overline{V} = \max \{ V_1(x), V_2(x) \} = \frac{V_1 + V_2}{2} + \frac{|V_1 - V_2|}{2}, $$ assumption \eqref{eq-V1-V2} implies that $U$ is a positive supersolution of the equation $(\mathcal{L}-\overline V)u = 0$ in $M\setminus K$. Hence, $\mathcal{L}-\overline V \geq 0$ in $M\setminus K$. \medskip (b) Let $\overline G$ be a positive solution of the equation $(\mathcal{L}-\overline V)u=0$ in $M \setminus K$ of minimal growth at infinity in $M$. Then by the generalized maximum principle and the fact that $G_1$ has minimal growth at infinity in $M$, we have \begin{equation}\label{G1leqU} G_1\leq C_1\overline G \leq C_2 U = C_2(G_1 G_2)^{1/2} \qquad \mbox{in } M \setminus K. \end{equation} Hence, $G_1 \leq C_3 G_2$ in $M \setminus K$.
Since $\Phi \leq \tilde C G_1$ in $M \setminus K$, and $G_2$ has minimal growth at infinity in $M$ for $P_2$, we have that for any positive supersolution $f$ of the equation $P_2 u=0$ in $M$ \begin{equation}\label{5.7} \Psi_+ \leq C \Phi \leq C\tilde C G_1\leq C\tilde C C_3G_2 \leq C_4 f \qquad \mbox{in } M \setminus K. \end{equation} Define $$\varepsilon_0 := \max \{ \varepsilon : \varepsilon \Psi(x) \leq f(x) \quad \forall x \in M \}.$$ In light of \eqref{5.7}, it follows that $\varepsilon_0>0$ is well defined, and hence, $w(x) := f(x) - \varepsilon_0 \Psi (x) $ is a nonnegative supersolution of the equation $P_2 v =0 $ in $M$. By the strong maximum principle, either $w > 0$ or $w= 0$ in $M$. Let us assume that $ w > 0$. Then by replacing $f$ with $w$ and repeating the above argument, we conclude that there exists $\delta > 0$ such that $f - (\vge_0+\gd) \Psi > 0$, which contradicts the maximality of $\varepsilon_0$. Hence, $w = 0$ in $M$, which in turn implies that $$ \Psi (x) = \Psi_+(x) =\vge_0^{-1} f(x)>0 \qquad \forall x \in M . $$ Since $f$ is an arbitrary positive supersolution of $P_2 u = 0$ in $M$, it follows that $P_2$ is critical in $M$ and $\Psi$ is its ground state. The assertion $\Psi\asymp \Phi$ in $M$ now follows from \eqref{5.7}, since $\Psi (x) = \Psi_+(x)>0$ in $M$ and $G_2$ is a positive solution of the equation $P_2 u=0$ in $M \setminus K$ of minimal growth at infinity in $M$. \end{proof} \begin{rem} {\rm Under the assumptions of Theorem~\ref{liouville_critical_thm}, it follows that the positive minimal Green functions of $P_k$ in $M\setminus K$, where $k=1,2$, are semiequivalent. Moreover, \eqref{G1leqU} implies that these Green functions are also semiequivalent to the positive minimal Green function of $\mathcal L -\overline V$ in $M\setminus K$. We note that using \cite[Theorem~4.3]{YP95} it follows that under the assumptions of Theorem~\ref{liouville_critical_thm}, the operators $\mathcal L_k -\overline V$ might be supercritical in $M$.
} \end{rem} The following example demonstrates that inequality \eqref{eq-V1-V2} might fail while the Liouville comparison principle still holds true. \begin{example}\label{ex_a} {\rm Let $P_1 = -\Delta$, $V_1=0$ in $\mathbb{R}^2$. Then it is well known that $P_1$ is critical and $1$ is the corresponding ground state. Let $P_2 = - \Delta - V_2$ be nonnegative in $\mathbb{R}^2$, where $V_2\in L^{\infty}(\mathbb{R}^2)$ is a radially symmetric potential that satisfies \begin{equation}\label{V} V_2(x) = \frac{\lambda}{|x|^2} \quad \mbox{in} \ \mathbb{R}^2 \setminus B(0,1), \end{equation} where $\lambda < 0$ is an arbitrary real number. A straightforward computation shows that $G_2(x) := |x|^{- \sqrt{-\lambda}} $ is a positive solution in $\mathbb{R}^2 \setminus B(0,1)$ of minimal growth at infinity in ${\mathbb R}^2$ for $P_2$. Also, $G_1(x) = 1$ is a positive solution of minimal growth at infinity in ${\mathbb R}^2$ for $P_1$, so, $G_1\not \asymp G_2$ near infinity. Note that $$ \frac{|V_1 - V_2|}{2} = \frac{|\lambda|}{2|x|^2} > \frac{|\lambda|}{4|x|^2} = \frac{1}{4} \left| \nabla \log\left( \frac{G_1}{G_2} \right) \right|^2. $$ On the other hand, the Liouville comparison principle (Theorem~\ref{YP07-thm}) applies to the above $P_1$ and $P_2$, since these operators are symmetric. In particular, if the equation $P_2 u=0$ in $M$ admits a nonzero, nonnegative, bounded subsolution, then $P_2$ is critical in $M$. } \end{example} Next, we slightly modify the above example by adding a drift term to the Laplacian. \begin{example}\label{ex_b} {\rm Consider the operator $$P_1 = -\Delta - b \, \dfrac{\chi_{B(0, 1)^{*}} }{r} \partial_r \quad \mbox{ in } \ \mathbb{R}^2, $$ and $V_1=0$, where $r : =|x|$, $b$ is a negative constant, and $\chi_{B(0, 1)^{*}}$ is the indicator function of $B(0, 1)^{*}:=\mathbb{R}^2 \setminus B(0, 1)$. Then $P_1$ is critical in $\mathbb{R}^2$, with ground state equal to $1$.
Let $$P_2 : = - \Delta - b \, \dfrac{\chi_{B(0, 1)^{*}} }{r} \partial_r - V_2,$$ where $V_2\in L^{\infty}(\mathbb{R}^2)$ satisfies \eqref{V}, such that $P_2\geq 0$ in ${\mathbb R}^2$. Then, as before, we easily find that $G_2(x) := |x|^{\frac{-b - \sqrt{ b^2 - 4 \lambda }}{2}}$ is a positive solution in $B(0, 1)^{*}$ of minimal growth at infinity in ${\mathbb R}^2$ for $P_2$. Also, $G_1(x) = 1$ is a positive solution of minimal growth at infinity in ${\mathbb R}^2$ for $P_1$, so, $G_1\not \asymp G_2$ near infinity. We note that for $|x|>1$ we have $$ \frac{1}{4} \left| \nabla \log\left( \frac{G_1}{G_2} \right) \right|^2 = \frac{|\lambda|}{4 |x|^2} - \frac{b^2}{8 |x|^2} \left[ \sqrt{1 + \frac{4|\lambda|}{b^2}} - 1\right]. $$ As before, this immediately yields $$ \frac{|V_1 - V_2|}{2}= \frac{|\lambda|}{2|x|^2} > \frac{1}{4} \left| \nabla \log\left( \frac{G_1}{G_2} \right) \right|^2. $$ On the other hand, Theorem 2.14 applies to the above $P_1$ and $P_2$, since the operator $P_1$ is symmetric in $L^2( \mathbb{R}^2, {\rm d}m),$ where $$ \,\mathrm{d}m= m(x)\,\mathrm{d}x : =\left\{\begin{array}{ll} \,\mathrm{d}x & \text{if $x \in B(0, 1)$\,}, \\[2mm] |x|^{b}\,\mathrm{d}x & \text{if $x \in \mathbb{R}^2 \setminus B(0, 1)$\,}. \end{array} \right. $$ In particular, if the equation $P_2 u=0$ in $M$ admits a nonzero, nonnegative, bounded subsolution, then $P_2$ is critical in $M$. } \end{example} \section{Green function estimate on the hyperbolic space} \label{sec_green_function_hyperbolic} As an application of our results, we study the behaviour of the positive minimal Green function of the shifted Laplacian on ${\mathbb H}^N$, the real hyperbolic space. It is well known that a Cartan-Hadamard manifold $M$ whose sectional curvatures are bounded above by a strictly negative constant satisfies the Poincar\'e inequality; in other words, the bottom of the $L^2$-spectrum of the Laplace-Beltrami operator on $M$ is strictly positive.
The most important example of such a manifold is ${\mathbb H}^N$. Let $\Delta_{\mathbb{H}^N}$ denote the Laplace-Beltrami operator on the hyperbolic space; then the \emph{generalized principal eigenvalue} of $-\Delta_{\mathbb{H}^N}$ is given by $$ \lambda_0(-\Delta_{{\mathbb H}^{N}}, \textbf{1}, {\mathbb H}^N)= \frac{(N-1)^2}{4}. $$ Moreover, by using explicit bounds for the heat kernel on ${{\mathbb H}^N}$ (see e.g. \cite{DA}) one can show that the nonnegative operator $$P:= -\Delta_{\mathbb H^N}-(N-1)^2/4$$ admits a positive minimal Green function (for $N\geq 2$). In other words, $P$ is subcritical in ${{\mathbb H}^N}$. \medskip Fix $x_0\in\mathbb H^N$, and let $G(x) : = G^{\mathbb H^N}_{-\Delta_{\mathbb H^N}}(x, x_0)$. For $0< \gl < 1$, let $$0<\alpha_-<1/2<\alpha_+<1$$ be the roots of the equation $\lambda = 4 \alpha(1 - \alpha)$. Using the supersolution construction \cite{DFP}, it follows that $G^{\alpha_{\pm}}$ are solutions of the equation $$ (-\Delta_{\mathbb H^N} - \lambda W) G^{\alpha_{\pm}} = 0 \quad \mbox{in } \mathbb{H}^N \setminus \{ x_0 \}, \quad \mbox{where } W := \dfrac{1}{4} \dfrac{|\nabla G|^2}{|G|^2}\, . $$ \medskip The asymptotic behaviour of $W$ is given by the following lemma. \begin{lem} Let $N \geq 2$ and $r : = {\rm d}(x, x_0).$ Then $W(r)$ satisfies $$ W(r) = \frac{(N-1)^2}{4} + \frac{(N-1)^3}{N + 1} \mathrm{e}^{-2r} + o(\mathrm{e}^{-2r}) \quad \mbox{as} \ r \rightarrow \infty. $$ \end{lem} \begin{proof} For the hyperbolic space ${\mathbb H}^N$, the Green function of the Laplace-Beltrami operator is given by $$ G(x)= \tilde G(r) : = \int_{r}^{\infty} (\sinh s)^{-(N-1)} \, {\rm d}s. $$ We have \begin{align*} (\sinh s)^{-(N-1)} = 2^{N-1} \mathrm{e}^{-(N-1)s} (1 - \mathrm{e}^{-2s})^{-(N-1)}. \end{align*} Therefore, as $r \rightarrow \infty$, $$ (\sinh r)^{-(N-1)} = 2^{N-1} \left( \mathrm{e}^{-(N-1)r} + (N-1) \mathrm{e}^{-(N+1)r} + o\big( \mathrm{e}^{-(N+1)r}\big) \right).
$$ Furthermore, as $r\rightarrow \infty$ we have $$ \int_{r}^{\infty} \!\!\!(\sinh s)^{-(N-1)}\!\,\mathrm{d}s \!= \!2^{N-1} \!\!\left[\!\frac{1}{N-1} \mathrm{e}^{-(N-1)r} \!+\! \frac{N-1}{N+1} \mathrm{e}^{-(N+ 1)r} \!\!+ o\big( \mathrm{e}^{-(N+1)r}\big)\!\right]\!. $$ Hence, as $r \rightarrow \infty$ we have $$ W(r) \!=\! \frac{1}{4} \left[ \frac{(\sinh r)^{-2(N-1)}}{\left(\int_{r}^{\infty} (\sinh s)^{-(N-1)} {\rm d}s\right)^2} \right]=\frac{(N-1)^2}{4} +\frac{(N-1)^3}{N + 1} \mathrm{e}^{-2r} + o( \mathrm{e}^{-2r}). $$ \end{proof} Now we state the following perturbative result. \begin{thm} Let $N \geq 2$ and $0<\lambda < 1.$ Then \begin{equation}\label{green_estimate_hyperbolic} G^{\mathbb H^N}_{-\Delta_{\mathbb H^N} - \lambda \frac{(N-1)^2}{4}}(x,x_0) \asymp G^{\mathbb H^N}_{-\Delta_{\mathbb H^N} - \lambda W}(x,x_0) \asymp G^{\alpha_+}(x) \quad \mbox{in } \mathbb{H}^N \setminus B(x_0, 1), \end{equation} where $\lambda = 4 \alpha_+(1 - \alpha_+)$ and $ \frac{1}{2}<\alpha_+ <1$. \end{thm} \begin{proof} Recall that $G^{\mathbb H^N}_{-\Delta_{\mathbb H^N} -\gl W}(x,x_0)$ is a positive solution of minimal growth at infinity of the equation $(-\Delta_{\mathbb H^N} -\gl W)v=0$ in $\mathbb{H}^N$. On the other hand, $$\lim_{r\to\infty}\frac{G^{\alpha_+}(r)}{G^{\alpha_-}(r)}=0.$$ Therefore, \cite[Proposition~6.1]{DFP} implies that $G^{\alpha_+}$ is also a positive solution of minimal growth at infinity of the equation $(-\Delta_{\mathbb H^N} -\gl W)v=0$ in $\mathbb{H}^N$.
Thus, $$G^{\mathbb H^N}_{-\Delta_{\mathbb H^N} - \lambda W}(x,x_0) \asymp G^{\alpha_+}(x) \qquad \mbox{in } \mathbb{H}^N \setminus B(x_0, 1).$$ Hence, it remains to prove that $$G^{\mathbb H^N}_{-\Delta_{\mathbb H^N} - \lambda \frac{(N-1)^2}{4}}(x,x_0) \asymp G^{\mathbb H^N}_{-\Delta_{\mathbb H^N} - \lambda W}(x,x_0) \qquad \mbox{in } \mathbb{H}^N \setminus B(x_0, 1).$$ Note that as $r \rightarrow \infty,$ we have $$\gl W(r)- \gl \frac{(N-1)^2}{4}= \gl \frac{(N-1)^3}{N + 1} \mathrm{e}^{-2r} + o( \mathrm{e}^{-2r}).$$ Consequently, Remark~\ref{rem_sp} implies that it suffices to show that $\tilde W(r) := \mathrm{e}^{-2r} + o( \mathrm{e}^{-2r})$ is a small perturbation of the operator $P_\lambda := -\Delta_{\mathbb H^N} - \lambda \frac{(N-1)^2}{4}$ in $\mathbb H^N$. \medskip We follow the approach of Ancona \cite[Corollary~6.1]{AN}. Let us choose $\Phi(r) := \mathrm{e}^{-(2 - \varepsilon)r}$ with $0<\varepsilon <1$. Then \begin{equation}\label{limit_hyperb} \lim_{r \rightarrow \infty} \dfrac{\Phi(r)}{\tilde W(r)} = + \infty. \end{equation} Moreover, $\Phi$ is nonnegative, nonincreasing, and $\int_{0}^{\infty} \Phi (r) {\rm d} r < \infty.$ Therefore, by \cite[Theorem~1]{AN}, we conclude that \begin{equation}\label{green_equiv_hyperb} G^{\mathbb H^N}_{P_\lambda} \asymp G^{\mathbb H^N}_{P_\lambda + \Phi(r) \textbf{1}_{\mathbb{H}^N \setminus B(x_0, R)}} \quad \mbox{in } \mathbb H^N\times \mathbb H^N \end{equation} for $R$ large enough. Consequently, \eqref{green_equiv_hyperb} and the arguments given in \cite{YP3, YP1} imply that $\Phi$ is a $G$-bounded perturbation of $P_\gl$ in $\mathbb{H}^N$. Hence, it follows from \eqref{limit_hyperb} that $\tilde W$ is a small perturbation of $P_\lambda$. In particular, by Remark~\ref{rem_sp} we have \begin{equation*} G^{\mathbb H^N}_{P_\lambda} \asymp G^{\mathbb H^N}_{-\Delta_{\mathbb H^N} -\gl W} \quad \mbox{in } \mathbb H^N\times \mathbb H^N\setminus \{(x,x)\mid x\in \mathbb H^N\}.
\end{equation*} Thus, \eqref{green_estimate_hyperbolic} follows. \end{proof} \medskip \begin{center} {\bf Acknowledgments} \end{center} D.~G. is supported in part by an INSPIRE faculty fellowship (IFA17-MA98) and is grateful to the Department of Mathematics at the Technion for the hospitality during his visit. He also acknowledges the support of the Israel Council for Higher Education (grant No. 32710877). The authors acknowledge the support of the Israel Science Foundation (grant 970/15) founded by the Israel Academy of Sciences and Humanities.
1409.2813
\section{Introduction} The \emph{random assignment problem} is a now classical problem in probabilistic combinatorial optimization. Given an $n\times n$ array $\{X_{i,j}\}_{1\leq i,j\leq n}$ of \textsc{iid} non-negative random variables, it asks about the statistics of \begin{eqnarray*} M_n & := &\min_{\sigma}\sum_{i=1}^nX_{i,\sigma(i)}, \end{eqnarray*} where the minimum runs over all permutations $\sigma$ of $\{1,\ldots,n\}$. This corresponds to finding a minimum-length perfect matching on the complete bipartite graph $K_{n,n}$ with edge-lengths $\{X_{i,j}\}_{1\leq i,j\leq n}$. Using the celebrated \emph{replica symmetry ansatz} from statistical physics, M\'ezard and Parisi \cite{MePa85,MePa86,MePa87} made a remarkably precise prediction concerning the regime where $n$ tends to infinity while the distribution of $X_{i,j}$ is kept fixed and satisfies \begin{eqnarray*} {\mathbb P}\left(X_{i,j}\leq x\right) \ \sim \ x^{d} & \textrm{ as } & x\to 0^+, \end{eqnarray*} for some exponent $0<d<\infty$. Specifically, they conjectured that \begin{eqnarray} \label{prediction} \frac{M_n}{n^{1-1/d}} & \xrightarrow[n\to\infty]{{\mathbb P}} & -d\int_{{\mathbb R}}f(x)\ln{f(x)}dx, \end{eqnarray} where the function $f\colon{\mathbb R}\to[0,1]$ solves the so-called \emph{cavity equation}: \begin{eqnarray} \label{rde} f(x) & = & \exp\left(-\int_{-x}^{+\infty}d(x+y)^{d-1}f(y)dy\right). \end{eqnarray} Aldous \cite{Al92,Al01} proved this conjecture in the special case $d=1$, where the term $(x+y)^{d-1}$ simplifies and makes the cavity equation exactly solvable, yielding \begin{eqnarray*} f(x) =\frac{1}{1+e^x} & \textrm{ and } & -d\int_{{\mathbb R}}f(x)\ln{f(x)}dx=\frac{\pi^2}{6}. \end{eqnarray*} Since then, several alternative proofs have been found \cite{LiWa04,NaBaSh05,Wa09}. This stands in sharp contrast with the case $d\neq 1$, where showing that the M\'ezard-Parisi equation (\ref{rde}) admits a unique solution has until now remained an open problem \cite[Open Problem 63]{AlBa05}. 
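The explicit solution of the $d=1$ case quoted above lends itself to a direct numerical verification. The following sketch (our own, using `scipy` for quadrature) checks both the fixed-point property of $f(x)=1/(1+e^x)$ in the cavity equation and the value $\pi^2/6$ of the limit:

```python
# Numerical check (ours) of Aldous' solution for d = 1:
#   f(x) = 1/(1+e^x) satisfies f(x) = exp(-int_{-x}^infty f(y) dy),
#   and -int_R f(x) ln f(x) dx = pi^2/6.
import math
from scipy.integrate import quad

def f(x):
    # overflow-safe evaluation of 1/(1+e^x)
    if x >= 0:
        e = math.exp(-x)
        return e / (1.0 + e)
    return 1.0 / (1.0 + math.exp(x))

def log_f(x):
    # log f(x) = -log(1+e^x), computed stably for large |x|
    if x >= 0:
        return -(x + math.log1p(math.exp(-x)))
    return -math.log1p(math.exp(x))

# the cavity equation at d = 1, tested at a few sample points
for x0 in (-3.0, -1.0, 0.0, 1.0, 3.0):
    integral, _ = quad(f, -x0, math.inf)
    assert abs(math.exp(-integral) - f(x0)) < 1e-6

# the limiting constant pi^2/6
val, _ = quad(lambda x: -f(x) * log_f(x), -math.inf, math.inf)
assert abs(val - math.pi**2 / 6) < 1e-6
```

This is only a sanity check of the closed form; it plays no role in the arguments of the paper.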
W\"astlund \cite{Wa12} circumvented this issue by considering instead the truncated equation \begin{eqnarray} \label{truncated} f_\lambda(x) & = & \exp\left(-\int_{-x}^{\lambda}d(x+y)^{d-1}f_\lambda(y)dy\right),\qquad 0<\lambda<\infty. \end{eqnarray} Using an ingenious game-theoretical interpretation of this equation, he showed the existence of a unique, globally attractive solution $f_\lambda\colon[-\lambda,\lambda]\to[0,1]$ for every $0<\lambda <\infty$, provided $d\geq 1$. He then used this fact to establish that \begin{eqnarray} \label{interchange} \frac{M_n}{n^{1-1/d}} & \xrightarrow[n\to\infty]{{\mathbb P}} & {\lim_{\lambda\to+\infty}} \uparrow -d\int_{-\lambda}^\lambda f_\lambda(x)\ln{f_\lambda(x)}dx. \end{eqnarray} W\"astlund \cite{Wa12} explicitly left open the problem of completing the proof of the original M\'ezard-Parisi prediction by showing (i) that the untruncated cavity equation admits a unique solution $f$ and (ii) that $f_\lambda\to f$ as $\lambda\to\infty$. The purpose of this short paper is to establish this conjecture. \begin{theorem} \label{th:main} For $d>1$, the M\'ezard-Parisi equation (\ref{rde}) admits a unique solution $f\colon{\mathbb R}\to[0,1]$. Moreover, $f_\lambda\to f$ pointwise as $\lambda\to+\infty$, and \begin{eqnarray*} \int_{-\lambda}^\lambda f_\lambda(x)\ln{f_\lambda(x)}dx & \xrightarrow[\lambda\to+\infty]{} & \int_{{\mathbb R}}f(x)\ln{f(x)}dx. \end{eqnarray*} Consequently, the two limits in (\ref{prediction}) and (\ref{interchange}) coincide. \end{theorem} In addition, we provide a short alternative proof of the crucial result of \cite{Wa12} that the truncated equation (\ref{truncated}) admits a unique, attractive solution. \paragraph{Remark 1.} Very recently, a proof of uniqueness for the truncated equation (\ref{truncated}) has been announced \cite{La14} for the case $0<d<1$. It would be interesting to see if the result of the present paper can be extended to this regime.
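The truncated equation (\ref{truncated}) also lends itself to numerical experiment: iterating the corresponding operator on a grid, starting from the zero function, produces even iterates that increase and odd iterates that decrease, sandwiching $f_\lambda$. The sketch below is our own illustration; the grid resolution and the choices $d=2$, $\lambda=3$ are arbitrary.

```python
# Our own numerical illustration of the truncated cavity equation: iterate
#   (T f)(x) = exp(-int_{-x}^{lam} d (x+y)^{d-1} f(y) dy)
# on a uniform grid.  T is order-reversing, so even iterates from f = 0
# increase and odd iterates decrease, sandwiching the fixed point f_lambda.
import numpy as np

d, lam, n = 2.0, 3.0, 801          # arbitrary illustrative choices
x = np.linspace(-lam, lam, n)
dx = x[1] - x[0]

def T(f):
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        m = x >= -xi               # integration range y in [-xi, lam]
        w = d * (xi + x[m]) ** (d - 1) * f[m]
        # trapezoidal rule on the uniform sub-grid
        out[i] = np.exp(-dx * (w.sum() - 0.5 * (w[0] + w[-1])))
    return out

iters = [np.zeros_like(x)]
for _ in range(30):
    iters.append(T(iters[-1]))

evens, odds = iters[0::2], iters[1::2]
tol = 1e-12
for g, h in zip(evens, evens[1:]):
    assert np.all(h >= g - tol)    # T^{2n} 0 is nondecreasing in n
for g, h in zip(odds, odds[1:]):
    assert np.all(h <= g + tol)    # T^{2n+1} 0 is nonincreasing in n
assert np.all(evens[-1] <= odds[-1] + tol)   # even iterates stay below odd ones
```

Only the order-theoretic structure is asserted here; the actual convergence rate of the iterates toward $f_\lambda$ is not part of this sketch.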
\paragraph{Remark 2.} For a random variable $Z$ with ${\mathbb P}\left(Z>x\right)=f(x)$, the cavity equation (\ref{rde}) simply expresses the fact that $Z$ solves the distributional identity \begin{eqnarray} \label{distrib} Z & \stackrel{d}{=} & \min_{i\geq 1}\left\{\xi_i-Z_i\right\}, \end{eqnarray} where $\{\xi_i\}_{i\geq 1}$ is a Poisson point process with intensity $d\,x^{d-1}\mathrm{d}x$ on $[0,\infty)$, and $\{Z_i\}_{i\geq 1}$ are \textsc{iid} with the same distribution as $Z$, independent of $\{\xi_i\}_{i\geq 1}$. Such \emph{recursive distributional equations} arise naturally in a variety of models from statistical physics, and the question of existence and uniqueness of solutions plays a crucial role in the rigorous understanding of those models. We refer the interested reader to the comprehensive surveys \cite{Al04,AlBa05} for more details. In particular, \cite[Section 7.4]{AlBa05} contains a detailed discussion on equation (\ref{distrib}), and \cite[Open Problem 63]{AlBa05} explicitly raises the uniqueness issue. We note that the refined question of \emph{endogeny} remains a challenging open problem. Recursive distributional equations for other mean-field combinatorial optimization problems have been analysed in e.g. \cite{GaNoSw06,PaWa12,Kh14}. \paragraph{}The remainder of the paper is organized as follows. Section 2 deals with the truncated equation (\ref{truncated}) for fixed $0<\lambda<\infty$ and is devoted to the alternative analytical proof that there is a unique, globally attractive solution $f_\lambda$. Section 3 prepares the $\lambda\to\infty$ limit by providing uniform controls on the family $\{f_\lambda\colon0<\lambda<\infty\}$ and by characterizing the possible limit points. This reduces the proof of Theorem \ref{th:main} to establishing uniqueness in the un-truncated M\'ezard-Parisi equation ($\lambda=\infty$), which is done in Section 4. \section{The truncated cavity equation $(\lambda<\infty)$} Fix a parameter $0<\lambda<\infty$.
On the set ${\mathfrak F}$ of non-increasing functions $f\colon[-\lambda,{\lambda}]\to[0,1]$, define an operator $T$ by \begin{eqnarray} \label{T} (T f)(x) & = & \exp\left(-d\int_{-x}^\lambda(x+y)^{d-1} f(y)dy\right). \end{eqnarray} The purpose of this section is to give a short and purely analytical proof of the following result, which was the main technical ingredient in \cite{Wa12} and was therein established using an ingenious game-theoretical framework. \begin{proposition} \label{pr:truncated} $T$ admits a unique fixed point $f_\lambda$ and it is attractive in the sense that $|T^n f(x)-f_\lambda(x)|\xrightarrow[n\to\infty]{} 0,$ uniformly in both $x\in[-\lambda,\lambda]$ and $f\in{\mathfrak F}$. \end{proposition} \begin{proof} Write $f\leq g$ to mean $f(x)\leq g(x)$ for all $x\in[-\lambda,\lambda]$. In particular, \begin{eqnarray*} {\bf 0}\leq f \leq T{\bf 0} \end{eqnarray*} for every $f\in{\mathfrak F}$, where $\bf{0}$ denotes the constant-zero function. Note also that the operator $T$ is non-increasing, in the sense that \begin{eqnarray*} f\leq g & \Longrightarrow & T f\geq T g. \end{eqnarray*} These two observations imply that the sequences $\{T^{2n}{\bf 0}\}_{n\geq 0}$ and $\{T^{2n+1}{\bf 0}\}_{n\geq 0}$ are respectively non-decreasing and non-increasing, and that their respective pointwise limits $f^{-}$ and $f^+$ satisfy \begin{eqnarray*} f^- \ \leq \ \liminf_{n\to\infty}T^n f & \leq & \limsup_{n\to\infty}T^n f \ \leq \ f^+, \end{eqnarray*} for any $f\in{\mathfrak F}$. Moreover, the dominated convergence theorem ensures that $T$ is continuous with respect to pointwise convergence, allowing us to pass to the limit in the identity $T^{n+1} {\bf 0}=T(T^n{\bf 0})$ to deduce that \begin{eqnarray} \label{pm} Tf^- = f^+ & \textrm{ and } & Tf^+ = f^-. \end{eqnarray} Therefore, the proof boils down to the identity $f^-=f^+$, which we now establish.
By definition, we have for any $f\in{\mathfrak F}$, \begin{eqnarray*} (T f)(x) & = & \exp\left(-d\int_{-\lambda}^{\lambda} (x+y)^{d-1}{\bf 1}_{(x+y\geq 0)}f(y)dy\right). \end{eqnarray*} Since $d>1$, we may differentiate under the integral sign to obtain \begin{eqnarray*} (T f)'(x) & = & -d(d-1)(T f)(x)\int_{-\lambda}^{\lambda} (x+y)^{d-2}{\bf 1}_{(x+y\geq 0)} f(y)dy. \end{eqnarray*} Integrating over $\left[-\lambda,\lambda\right]$ and noting that $(T f)\left(-\lambda\right)=1$, we conclude that \begin{eqnarray*} 1-(T f)\left(\lambda\right)& = & d(d-1)\iint_{\left[-\lambda,\lambda\right]^2} (x+y)^{d-2}{\bf 1}_{(x+y\geq 0)}(T f)(x)f(y)dx dy. \end{eqnarray*} Let us now specialize to $f=f^\pm$. In both cases, the right-hand side is \begin{eqnarray*} d(d-1)\iint_{\left[-\lambda,\lambda\right]^2} (x+y)^{d-2}{\bf 1}_{(x+y\geq 0)}f^+(x)f^-(y)dx dy, \end{eqnarray*} by (\ref{pm}). Therefore, we have $(T f^+)\left(\lambda\right)=(T f^-)(\lambda)$, i.e. \begin{eqnarray*} \int_{-\lambda}^{\lambda} d(\lambda+y)^{d-1}f^+(y)dy & = & \int_{-\lambda}^{\lambda} d(\lambda+y)^{d-1}f^-(y)dy. \end{eqnarray*} Since we already know that $f^-\leq f^+$, this forces $f^-= f^+$ almost everywhere on $[-{\lambda},\lambda]$, and hence everywhere by continuity. Finally, the convergence $T^n{\bf 0}\to f_\lambda=f^\pm$ is automatically uniform on $[-\lambda,\lambda]$, by Dini's Theorem. \end{proof} \section{Relative compactness of solutions $(\lambda\to\infty)$} In order to study properties of the family $\{f_\lambda\colon 0<\lambda<\infty\}$, we extend the domain of $f_\lambda$ to ${\mathbb R}$ by setting $f_\lambda(x)=1$ for $x\leq -\lambda$ and $f_\lambda(x)=0$ for $x>\lambda$.
\begin{proposition}[Uniform bounds] \label{pr:uniform} For all $0<\lambda<\infty$ and $x\geq 0$, \begin{eqnarray*} f_\lambda(x) & \leq & \exp\left(-\frac{x^d}{e}\right)\\ 1-f_\lambda(-x) & \leq & \exp\left(-\frac{x^d}{e}\right)\\ f_\lambda(-x)\ln\frac{1}{f_\lambda(-x)} & \leq & \exp\left(-\frac{x^d}{e}\right)\\ f_\lambda(x)\ln\frac{1}{f_\lambda(x)} & \leq & \left(1+ \frac{x^d}{e}\right)\exp\left(-\frac{x^d}{e}\right). \end{eqnarray*} \end{proposition} \begin{proof} Let $0<\lambda<\infty$. We may assume that $x\in[0,\lambda]$, otherwise the above bounds are trivial. By definition, we have \begin{eqnarray} \label{flambda} f_\lambda(x) & = & \exp\left(-\int_{-x}^\lambda d(x+y)^{d-1}f_\lambda(y)dy\right). \end{eqnarray} Now, since $x\geq 0$ and $f_\lambda$ is non-increasing, we have \begin{eqnarray*} \int_{-x}^{\lambda}(x+y)^{d-1}f_\lambda(y)dy & = & \int_{-x}^0(x+y)^{d-1}f_\lambda(y)dy+\int_{0}^\lambda (x+y)^{d-1}f_\lambda(y)dy\\ & \geq & f_\lambda(0)\frac{x^d}{d}+\int_{0}^\lambda y^{d-1}f_\lambda(y)dy. \end{eqnarray*} Applying $u\mapsto \exp(-d u)$ to both sides and using (\ref{flambda}), we obtain \begin{eqnarray} \label{upperb} f_\lambda(x) & \leq & f_\lambda(0)\exp(-f_\lambda(0)x^d). \end{eqnarray} In turn, this inequality implies that for all $x\geq 0$, \begin{eqnarray*} \int_{x}^{\lambda}d(y-x)^{d-1}f_\lambda(y)dy & \leq & f_\lambda(0)\int_{x}^{+\infty}d y^{d-1}e^{-f_\lambda(0)y^d}dy \ = \ \exp(-f_\lambda(0)x^d). \end{eqnarray*} Applying $u\mapsto \exp(-u)$ to both sides, we conclude that \begin{eqnarray} \label{lowerb} f_\lambda(-x) & \geq & \exp\left(-e^{-f_\lambda(0)x^d}\right). \end{eqnarray} In particular, taking $x=0$ yields $f_\lambda(0)\geq e^{-1}$, and reinjecting this into (\ref{upperb}) and (\ref{lowerb}) easily yields the first three claims. For the last one, observe that $u\mapsto u\ln\frac{1}{u}$ increases on $[0,e^{-1}]$ and decreases on $[e^{-1},1]$, with the value at $u=e^{-1}$ being precisely $e^{-1}$.
Therefore, if $\exp(-x^d/e)\leq e^{-1}$, we may use the bound $f_\lambda(x)\leq \exp(-x^d/e)$ to deduce that \begin{eqnarray*} f_\lambda(x)\ln \frac{1}{f_\lambda(x)} & \leq & \frac{x^d}{e}\exp\left(-\frac{x^d}e\right). \end{eqnarray*} On the other hand, if $\exp(-x^d/e)\geq e^{-1}$, then \begin{eqnarray*} f_\lambda(x)\ln \frac{1}{f_\lambda(x)} & \leq & e^{-1}\ \leq \ \exp\left(-\frac{x^d}{e}\right). \end{eqnarray*} In both cases, the last inequality holds, and the proof is complete. \end{proof} \begin{proposition} \label{pr:tight} The family $\left\{f_\lambda\colon 0<\lambda<\infty\right\}$ is relatively compact with respect to the topology of uniform convergence on ${\mathbb R}$, and any sub-sequential limit as $\lambda\to\infty$ must solve the cavity equation (\ref{rde}). \end{proposition} \begin{proof}Let $\{\lambda_n\}_{n\geq 0}$ be any sequence of positive numbers such that $\lambda_n\to\infty$ as $n\to\infty$. By Helly's compactness principle for uniformly bounded monotone functions (see e.g. \cite[Theorem 36.5]{KoFo75}), there exists an increasing sequence $\{n_k\}_{k\geq 0}$ in ${\mathbb N}$ and a non-increasing function $f\colon{\mathbb R}\to[0,1]$ such that \begin{eqnarray} \label{helly} f_{\lambda_{n_k}}(x) & \xrightarrow[k\to\infty]{} & f(x), \end{eqnarray} for all $x\in{\mathbb R}$. Thanks to the first inequality in Proposition \ref{pr:uniform}, we may invoke dominated convergence to deduce that for each $x\in{\mathbb R}$, \begin{eqnarray*} \int_{-x}^{\lambda_{n_k}}f_{\lambda_{n_k}}(y)(x+y)^{d-1}dy & \xrightarrow[k\to\infty]{} & \int_{-x}^{+\infty}f(y)(x+y)^{d-1}dy. \end{eqnarray*} Applying $u\mapsto\exp(-d u)$ and recalling (\ref{flambda}), we see that \begin{eqnarray*} f(x) & = & \exp\left(-d\int_{-x}^{+\infty}f(y)(x+y)^{d-1}dy\right), \end{eqnarray*} which shows that $f$ must solve the cavity equation (\ref{rde}). This identity easily implies that $f$ is continuous. 
Consequently, the convergence (\ref{helly}) is uniform in $x\in{\mathbb R}$, by Dini's Theorem. \end{proof} \section{The un-truncated cavity equation $(\lambda=\infty)$} To conclude the proof of Theorem \ref{th:main}, it now remains to show that the un-truncated equation \begin{eqnarray} \label{Tinfty} f(x) & = & \exp\left(-d\int_{-x}^{+\infty}(x+y)^{d-1} f(y)dy\right) \end{eqnarray} admits at most one fixed point $f\colon{\mathbb R}\to[0,1]$. Proposition \ref{pr:tight} will then guarantee the convergence $f_\lambda\xrightarrow[\lambda\to\infty]{} f$, which will in turn imply \begin{eqnarray*} \int_{-\lambda}^\lambda f_\lambda(x)\ln{f_\lambda(x)}dx & \xrightarrow[\lambda\to+\infty]{} & \int_{{\mathbb R}}f(x)\ln{f(x)}dx, \end{eqnarray*} by dominated convergence, thanks to the last inequalities in Proposition \ref{pr:uniform}. A quick inspection of the proof of Proposition \ref{pr:uniform} reveals that it remains valid when $\lambda=\infty$. In particular, any solution $f$ to (\ref{Tinfty}) must satisfy \begin{eqnarray} \label{unif} \max(f(x),1-f(-x)) & \leq & \exp\left(-\frac{x^d}{e}\right), \end{eqnarray} for all $x\geq 0$. It is also clear from (\ref{Tinfty}) that $f$ must be $(0,1)$-valued and continuous. We will use these properties in the proofs below. \begin{lemma} \label{lm:shift} If $f,g$ solve (\ref{Tinfty}), then there exists $t\geq 0$ such that for all $x\in{\mathbb R}$, $$f(x+t)\leq g(x)\leq f(x-t).$$ \end{lemma} \begin{proof} Inequality (\ref{unif}) ensures that for any $t\in{\mathbb R}$, $y\mapsto (1+|y|)(f(y-t)-g(y))$ is integrable on ${\mathbb R}$, so that by dominated convergence, \begin{eqnarray} \label{lim} \frac{1}{x^{d-1}}\int_{-x}^{+\infty}(y+x)^{d-1}\left(f(y-t)-g(y)\right) dy & \xrightarrow[x\to+\infty]{} &\Delta(t), \end{eqnarray} where \begin{eqnarray} \label{Delta} \Delta(t) & := & \int_{{\mathbb R}} \left(f(y-t)-g(y)\right) dy.
\end{eqnarray} Observe that $t\mapsto\Delta(t)$ increases continuously from $-\infty$ to $+\infty$, as can be seen from the decomposition \begin{eqnarray*} \Delta(t) & = & \int_{0}^{+\infty} (1- g(-y)-g(y))dy +\int_{-t}^{+\infty} f(y)dy - \int_{t}^{+\infty} (1- f(-y))dy. \end{eqnarray*} In particular, we can find $t_0\geq 0$ such that $\Delta(-t_0)<0<\Delta(t_0)$. In view of (\ref{lim}), we deduce the existence of $a\geq 0$ such that for all $x\geq a$, \begin{eqnarray} \label{aa} \int_{-x}^{+\infty}(y+x)^{d-1}g(y) dy & \geq & \int_{-x}^{+\infty}(y+x)^{d-1}f(y+t_0)dy\\ \label{bb} \int_{-x}^{+\infty}(y+x)^{d-1}g(y) dy & \leq & \int_{-x}^{+\infty}(y+x)^{d-1}f(y-t_0)dy. \end{eqnarray} Applying $u\mapsto\exp(-d u)$, we conclude that for all $x\geq a$, \begin{eqnarray} \label{a} f(x+t_0) \ \leq & g(x) & \leq f(x-t_0). \end{eqnarray} In turn, this implies that (\ref{aa})-(\ref{bb}) also hold when $x\leq -a$, so that (\ref{a}) actually holds for all $x$ outside $(-a,a)$. On the other hand, since $g$ is $(0,1)$-valued and $f$ has limits $0,1$ at $\pm\infty$, we can choose $t_1\geq 0$ large enough so that \begin{eqnarray*} f(-a+t_1) \ \leq \ g(a) & \leq & g(-a)\ \leq \ f(a-t_1). \end{eqnarray*} Since $f,g$ are non-increasing, this inequality implies that for all $x\in[-a,a]$, \begin{eqnarray} \label{b} f(x+t_1) \ \leq & g(x) & \leq \ f(x-t_1). \end{eqnarray} In view of (\ref{a})-(\ref{b}), taking $t:=\max(t_0, t_1)$ concludes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{th:main}] Let $f,g$ solve equation (\ref{Tinfty}) and let $t$ be the smallest non-negative number satisfying for all $x\in{\mathbb R}$,\begin{eqnarray} \label{initial} f(x+t) & \leq \ g(x) \ \leq & f(x-t). \end{eqnarray} Note that $t$ exists by Lemma \ref{lm:shift} and the continuity of $f$. Now assume for a contradiction that $t>0$.
Clearly, each of the two inequalities in (\ref{initial}) must be strict at some point $x\in{\mathbb R}$ (and hence on some open interval by continuity), otherwise we would have $g\geq f$ or $g\leq f$ and (\ref{Tinfty}) would then force $g=f$, contradicting the assumption that $t>0$. Consequently, the function $\Delta$ defined in (\ref{Delta}) must satisfy $\Delta(-t) < 0 < \Delta(t).$ By continuity of $\Delta$, there must exist $t_0<t$ such that $\Delta(-t_0)<0<\Delta(t_0)$. As we have already seen, this inequality implies \begin{eqnarray} \label{t0} f(x+t_0) & \leq \ g(x) \ \leq & f(x-t_0), \end{eqnarray} for all $x$ outside some compact $[-a,a]$. In particular, we now see that the inequalities in (\ref{initial}) must be strict for all large enough $x$. Thus, for all $x\in{\mathbb R}$, \begin{eqnarray*} \int_{-x}^{+\infty}(y+x)^{d-1}g(y) dy & > & \int_{-x}^{+\infty}(y+x)^{d-1}f(y+t)dy\\ \int_{-x}^{+\infty}(y+x)^{d-1}g(y) dy & < & \int_{-x}^{+\infty}(y+x)^{d-1}f(y-t)dy. \end{eqnarray*} Applying $u\mapsto\exp(-d u)$ now shows that the inequalities in (\ref{initial}) must actually be strict everywhere on ${\mathbb R}$, hence in particular on the compact $[-a,a]$. By uniform continuity, there must exist $t_1<t$ such that \begin{eqnarray} \label{t1} f(x+t_1) & \leq \ g(x) \ \leq & f(x-t_1), \end{eqnarray} for all $x\in [-a,a]$. In view of (\ref{t0})-(\ref{t1}), the number $t':=\max(t_0,t_1)$ now contradicts the minimality of $t$. \end{proof} \bibliographystyle{plain}
\section{Methods} \subsection{System geometry and forcefields} Our simulation system consists of a slab of liquid in contact with a vapor layer above it in the $z$ direction, forming a vapor--liquid interface. The bottom layer of the liquid slab is anchored to a surface modeled by a potential, $U(z)$, acting uniformly across the cross-section, given by: \begin{equation} {U(z) = \frac{2\pi}{3} \rho _s \epsilon _{sf} \sigma _{sf}^3 \left[ \frac{2}{15} \left( \frac{\sigma _{sf}}{z} \right)^9- \left (\frac{\sigma _{sf}}{z}\right)^3 \right] } \protect\label{eqn:uz} \end{equation} \noindent where $\sigma_{sf}= 0.355$ nm, $\rho_{s}$ (= 35 nm$^{-2}$) is the number density of surface atoms per unit area, and $\epsilon_{sf}$ is the strength of surface-fluid interactions that is varied as described in the main text. \begin{figure}[h] \centering \includegraphics[scale=0.8]{sifigure1.pdf} \caption{Simulation snapshot of a water system. Water molecules are shown in a spacefill representation with oxygen atoms colored blue and hydrogen atoms colored light gray. The 9-3 surface is marked by a gray slab.}\label{systemschematic} \end{figure} We study two liquids: (i) water, represented explicitly using the extended simple point charge (SPC/E) model, and (ii) a Lennard-Jones (LJ) fluid with the interparticle potential described by: \[ U(r) = 4\epsilon_{LJ}\left[ \left( \frac{\sigma_{LJ}}{r} \right)^{12} - \left( \frac{\sigma_{LJ}}{r} \right)^{6} \right] \] \noindent where $\sigma_{LJ}=0.373$ nm and $\epsilon_{LJ}=1.234$ kJ/mol. We performed simulations of both fluids in systems of various cross-sections and with a range of surface-fluid interactions. The details of the cross-sectional area and number of molecules included in each simulation are given in Table~\ref{tab:systems}.
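As a quick check on Eq.~(\ref{eqn:uz}): setting $dU/dz=0$ places the minimum of the 9-3 potential at $z^\ast=(2/5)^{1/6}\sigma_{sf}\approx 0.305$ nm, independent of $\epsilon_{sf}$. The sketch below confirms this numerically ($\epsilon_{sf}=1$ kJ/mol is an illustrative value, not one of the values used in the study).

```python
import numpy as np

sigma_sf = 0.355   # nm
rho_s = 35.0       # nm^-2
eps_sf = 1.0       # kJ/mol (illustrative only)

def U(z):
    """9-3 wall potential of Eq. (uz)."""
    s = sigma_sf / z
    return (2.0 * np.pi / 3.0) * rho_s * eps_sf * sigma_sf**3 * (
        (2.0 / 15.0) * s**9 - s**3
    )

z = np.linspace(0.25, 2.0, 200001)
z_min = z[np.argmin(U(z))]

# Analytic minimum of the 9-3 potential: z* = (2/5)^(1/6) * sigma_sf
print(z_min)  # ~0.305 nm
```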
\begin{table}[h] \caption{System details}\label{tab:systems} \centering \begin{tabular}{L{1.5cm}C{4cm}C{2cm}} \hline Fluid & Box cross-section (nm$^{2}$) & $N$ \\ \hline Water & 3.5 $\times$ 3.5 & 1000 \\ & 5 $\times$ 5 & 2447 \\ & 10 $\times$ 10 & 8287 \\ \hline LJ & 5 $\times$ 5 & 2055 \\ & 10 $\times$ 10 & 9849 \\ & 15 $\times$ 15 & 25836 \\ & 20 $\times$ 20 & 52436 \\ \hline \end{tabular} \end{table} \subsection{Simulation details} Simulations were performed in the canonical ensemble (NVT) using the molecular dynamics package GROMACS \cite{Gromacs}. The leapfrog algorithm with a time-step of 2 fs was used to integrate the equations of motion, with bond constraints on water molecules imposed using the SHAKE algorithm \cite{SHAKE}. All simulations of water were carried out at a temperature of 300 K, and of the LJ fluid at 104 K. The temperature was maintained using a Nos\'e--Hoover thermostat \cite{NoseHoover1, NoseHoover2}. Electrostatic interactions were calculated using the particle mesh Ewald (PME) algorithm with a grid spacing of 0.12 nm and a real-space cut-off of 1.0 nm \cite{PME}. Lennard-Jones interactions were also truncated at 1.0 nm. Each simulation was equilibrated for 500 ps followed by a production run of more than 10 ns. Configurations were stored every picosecond for further analysis. \section{Density profiles at the solid-liquid interface} As described in the main text, we studied the solvation of 9-3 surfaces by water and a LJ fluid. Whereas the main manuscript focuses on capillary fluctuations and transverse correlations, for completeness we report here the one-particle density profile of these two fluids in the direction perpendicular to the surface as a function of surface-fluid attractions. Because we have a well-defined flat mathematical surface, the local density of fluid increases monotonically with increasing attractions.
\begin{figure}[h] \centering \includegraphics[scale=0.8]{sifigure2.pdf} \caption{Density profile of water perpendicular to the interfacial plane for various surface-water attractions. The solid surface is located at $z=0$.}\label{waterdensity} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.8]{sifigure3.pdf} \caption{Density profile of Lennard-Jones liquid perpendicular to the interfacial plane for various surface-fluid attractions. The solid surface is located at $z=0$.}\label{ljdensity} \end{figure} \section{Capillary Wave Theory for the liquid-vapor interface} The capillary wave theory (CWT) for liquid-vapor interfaces represents the instantaneous surface configuration in terms of surface harmonic waves: \begin{equation} h(x,y) = \displaystyle \sum_{k} A(k) e^{i\vec{k}\cdot\vec{r}} \end{equation} The probability of observing a particular set of $A(k)$ values is proportional to the Boltzmann factor $\exp \left( -W/k_{\mathrm{B}}T \right)$, where $W$ is the work done against the interfacial tension and any other external forces (such as due to a surface). For a liquid-vapor interface under no external field \citep{BuffLS65,KalosPR77}, \begin{equation} \ens{\left| A(k)\right|^{2}} \propto \frac{k_{\mathrm{B}}T}{\gamma k^{2}} \end{equation} \noindent where $\gamma$ is the liquid-vapor interfacial tension. The transverse solvent structure factor then satisfies $S^{t}(k) \propto 1/k^2$ at the interface \cite{VakninBT08,KalosPR77}. The transverse correlations, $g^t(r)$, can be obtained from an inverse Fourier transform: \begin{equation}\protect\label {eqn:lvgtr} g^{t}(r) - 1 \propto \displaystyle\sum_{k} \frac{e^{i\mathbf{k}\cdot\mathbf{r}}}{k^2} \end{equation} \begin{figure}[h] \centering \includegraphics[scale=1.0]{sifigure4.pdf} \caption{Scaling of long-range correlations by the CWT scaling constant $\eta$.
}\label{fig:waterljscaling} \end{figure} The proportionality constant, $\eta$, in Eqn \ref{eqn:lvgtr} is given by \cite{KalosPR77}: \begin{equation}\protect\label{eqn:eta} \eta = \frac{k_{\mathrm{B}}T}{\gamma}\left[\frac{\rho '(z_o)^2}{\rho(z_o)}\right] \end{equation} \noindent where $\rho(z)$ is the density profile along the normal to the interface and $z_o$ is the location of the interface. $\rho(z)$ is frequently represented using \cite{AlejandreTC1995}: \begin{equation} \frac{\rho(z)}{\rho_{b}} = 0.5\left[1+\tanh \left(\frac{z-z_o}{d}\right)\right] \end{equation} \noindent where $d$ is a measure of the width of the interface and $\rho_{b}$ is the bulk density. Figure \ref{fig:waterljscaling} shows long-range transverse correlations in water and the LJ fluid scaled by $\eta$. \section{Transverse correlations and contact angle data for a Lennard-Jones fluid} The main text uses data on transverse correlations and contact angles for liquid water. As described in the main text, the underlying physics is general, and trends similar to those in water are observed for a LJ fluid, as shown in Figures \ref{fig:slgrlj} and \ref{fig:ljtheta}. \begin{figure}[h] \centering \includegraphics[scale=1.0]{sifigure5.pdf} \caption{Correlations at the solid-LJ fluid interface. Six $g^t(r)$ profiles for $\epsilon_{sf}$ = 0.02, 0.05, 0.08, 0.1, 0.2, and 0.3 kJ/mol are shown (red to yellow, see arrow). The correlations in bulk (black line) and at the VL interface (circles) are shown for reference.
Direction of arrow indicates increasing surface-fluid attractions.}\label{fig:slgrlj} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=1.0]{sifigure6.pdf} \caption{Cosine of the contact angle $\theta$ for LJ liquid droplets on the 9-3 surface as a function of $\epsilon_{sf}$.}\label{fig:ljtheta} \end{figure} \section{Water density fluctuations at the solid-liquid interface} Water density fluctuations at the interface are quantified by the probability distribution $P_{v}(N)$, which describes the probability of observing $N$ waters in an observation volume $v$. We employed the INDUS method \cite{PatelVCG2011} to calculate $P_{v}(N)$ in cuboidal observation volumes of dimension 1.5 nm $\times$ 1.5 nm $\times$ 0.5 nm placed adjacent to the solid surface. \begin{figure}[h] \centering \includegraphics[scale=1.0]{sifigure7.pdf} \caption{Probability distributions of the number of water oxygen centers in a 1.5 nm $\times$ 1.5 nm $\times$ 0.5 nm volume adjacent to the 9-3 surface, calculated using the Indirect Umbrella Sampling (INDUS) method for three different surface-fluid attractions.}\label{fig:indus} \end{figure} Figure \ref{fig:indus} shows $\log P_{v}(N)$ for water next to 9-3 surfaces with increasing surface-water attractions. For $\epsilon_{sf} = 2.50$ kJ/mol, the $P_{v}(N)$ is approximately Gaussian and the probability of observing $N=0$ water molecules in the probe volume is negligible. As $\epsilon_{sf}$ is decreased, we observe low-$N$ fat tails in the $\log P_{v}(N)$ distribution, characteristic of the hydrophobic nature of the surface (see \cite{PatelVC2010} for a detailed discussion).
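The statement that $P_{v}(N=0)$ is negligible in the Gaussian case can be made concrete: under a Gaussian fit, $\ln P_v(0)\approx -\langle N\rangle^2/(2\sigma_N^2)-\tfrac12\ln(2\pi\sigma_N^2)$. The sketch below evaluates this with hypothetical moments, chosen only for illustration; they are not the measured INDUS values.

```python
import math

# Hypothetical Gaussian moments for P_v(N); the actual INDUS values are not
# quoted in this SI, so these numbers are illustrative only.
mean_N, var_N = 37.0, 12.0

def log_gauss_pv(n):
    # log of a Gaussian P_v(N) with the given mean and variance
    return -0.5 * math.log(2.0 * math.pi * var_N) - (n - mean_N) ** 2 / (2.0 * var_N)

log_p0 = log_gauss_pv(0)
print(log_p0)   # ~ -59: emptying the probe volume is astronomically unlikely
```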
\section{Introduction} Let $G_n=(V_n,E_n)$ denote the complete graph with vertex set $V_n=\{1,\dots,n\}$, and edge set $E_n=\{\,\{i,j\}\,,\;1\leq i,j\leq n\}$, including loops $\{i,i\}$, $1\leq i\leq n$. Assign a non-negative random weight (or conductance) $U_{i,j}=U_{j,i}$ to each edge $\{i,j\}\in E_n$, and assume that the weights $\mathbf{U}=\{U_{i,j};\,\{i,j\}\in E_n\}$ are i.i.d.\ with common law $\mathcal{L}$ independent of $n$. This defines a random network, or weighted graph, denoted $(G_n,\mathbf{U})$. Next, consider the random walk on $(G_n,\mathbf{U})$ defined by the transition probabilities \begin{equation}\label{srw} K_\ij := \frac{U_\ij}{\rho_i}\,, \quad\text{with}\quad \rho_i:=\sum_{j=1}^n U_\ij. \end{equation} The Markov kernel $K$ is \emph{reversible} with respect to the measure $\rho=\sum_{i\in V_n}\rho_i\delta_i$ in that $$ \rho_i K_\ij = \rho_j K_{j,i} $$ for all $i,j\in V_n$. Note that we have not assumed that $\cL$ has no atom at $0$. If $\rho_i=0$ for some $i$, then for that index $i$ we set $K_{i,j}=\delta_{i,j}$, $1\leq j\leq n$. However, as soon as $\cL$ is not concentrated at $0$ then almost surely, for all $n$ sufficiently large, $\rho_i>0$ for all $1\leq i\leq n$, $K$ is irreducible and aperiodic, and $\rho$ is its unique invariant measure, up to normalization; see e.g.\ \cite{bordenave-caputo-chafai}. For any square $n\times n$ matrix $M$ with eigenvalues $\lambda_1(M),\ldots,\lambda_n(M)$, the \emph{Empirical Spectral Distribution} (ESD) is the discrete probability measure with at most $n$ atoms defined by $$ \mu_M:=\frac1n\sum_{j=1}^n \delta_{\lambda_j(M)}. $$ All matrices $M$ to be considered in this work have real spectrum, and the eigenvalues will be labeled in such a way that $\lambda_n(M)\leq\cdots\leq\lambda_1(M)$.
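The detailed-balance relation above is immediate to verify on a sampled instance: since $\rho_i K_{i,j}=U_{i,j}$, reversibility is just the symmetry of the weights. A sketch (exponential weights are an arbitrary choice of $\cL$; $n$ is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Symmetric i.i.d. non-negative weights U_{i,j} = U_{j,i}, loops included.
U = rng.exponential(size=(n, n))
U = np.triu(U) + np.triu(U, 1).T

rho = U.sum(axis=1)            # row sums rho_i
K = U / rho[:, None]           # transition matrix K_{i,j} = U_{i,j} / rho_i

assert np.allclose(K.sum(axis=1), 1.0)                       # K is Markov
F = rho[:, None] * K                                          # F_{i,j} = rho_i K_{i,j} = U_{i,j}
assert np.allclose(F, F.T)                                    # detailed balance
```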
Note that $K$ defines a square $n\times n$ random Markov matrix whose entries are not independent, due to the normalizing sums $\rho_i$. By reversibility, $K$ is self--adjoint in $L^2(\rho)$ and its spectrum $\sigma(K)$ is real. Moreover, $\sigma(K)\subset[-1,+1]$, and $1\in \sigma(K)$. Since $K$ is Markov, its ESD $\mu_K$ carries further probabilistic content. Namely, for any $\ell\in\mathds{N}$, if $p_\ell(i)$ denotes the probability that the random walk on $(G_n,\mathbf{U})$ started at $i$ returns to $i$ after $\ell$ steps, then the $\ell^\text{th}$ moment of $\mu_K$ satisfies \begin{equation}\label{moms} \int_{-1}^{+1}\!x^\ell\mu_K(dx) = \frac1n\mathrm{tr}(K^\ell) = \frac1n\sum_{i\in V_n} p_\ell(i). \end{equation} \subsubsection*{Convergence of the ESD} The asymptotic behavior of $\mu_K$ as $n\to\infty$ depends strongly on the tail of $\cL$ at infinity. When $\cL$ has finite mean $m=\int_0^\infty\!x\,\cL(dx)$ we set $m=1$. This is no loss of generality, since $K$ is invariant under the dilation $U_\ij\to t\,U_\ij$. If $\cL$ has a finite second moment we write $\sigma^2 = \int_0^\infty(x-1)^2\,\cL(dx)$ for the variance. The following result, from \cite{bordenave-caputo-chafai}, states that if $0<\sigma^2<\infty$ then the bulk of the spectrum of $\sqrt{n}K$ behaves, when $n\to\infty$, as if we had truly i.i.d.\ entries (Wigner matrix). Without loss of generality, we assume that the weights $\mathbf{U}$ come from the truncation of a unique infinite table $(U_\ij)_{\ij\geq1}$ of i.i.d.\ random variables of law $\cL$. This gives a meaning to the almost sure (a.s.) convergence of $\mu_{\sqrt n K}$. The symbol $\scriptstyle\overset{w}{\to}$ denotes weak convergence of measures with respect to continuous bounded functions. Note that $\lambda_1(\sqrt{n}K)=\sqrt{n}\to\infty$.
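The moment identity (\ref{moms}) can be checked directly on a finite sample: by reversibility, $K$ is similar to the symmetric matrix $S_{i,j}=U_{i,j}/\sqrt{\rho_i\rho_j}$, so its spectrum is real and the $\ell$-th moment of $\mu_K$ equals $\frac1n\mathrm{tr}(K^\ell)$. A sketch ($n$, $\ell$ and the exponential law are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, ell = 150, 4

U = rng.exponential(size=(n, n))
U = np.triu(U) + np.triu(U, 1).T
rho = U.sum(axis=1)
K = U / rho[:, None]

# K is similar to the symmetric S = D^{1/2} K D^{-1/2}, S_ij = U_ij / sqrt(rho_i rho_j),
# so eigvalsh(S) gives the (real) spectrum of K.
S = U / np.sqrt(np.outer(rho, rho))
eigs = np.linalg.eigvalsh(S)

lhs = np.mean(eigs**ell)                                 # ell-th moment of the ESD
rhs = np.trace(np.linalg.matrix_power(K, ell)) / n       # averaged ell-step return probability
print(lhs, rhs)                                          # equal up to round-off
```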
\begin{thm}[Wigner--like behavior]\label{th:wigner} If $\cL$ has variance $0<\sigma^2<\infty$ then a.s.\ \begin{equation}\label{eq:wigner} \mu_{\sqrt{n}K}:=\frac{1}{n}\sum_{k=1}^n\delta_{\sqrt{n}\lambda_k(K)} \overset{w}{\underset{n\to\infty}{\longrightarrow}} \mathcal{W}_{2\sigma}\,, \end{equation} where $\mathcal{W}_{2\sigma}$ is the Wigner semi--circle law on $[-2\sigma,+2\sigma]$. Moreover, if $\mathcal{L}$ has finite fourth moment, then $\lambda_2(\sqrt{n}K)$ and $\lambda_n(\sqrt{n}K)$ converge a.s.\ to the edge of the limiting support $[-2\sigma,+2\sigma]$. \end{thm} This Wigner--like scenario can be dramatically altered if we allow $\cL$ to have a heavy tail at infinity. For any $\alpha\in(0,\infty)$, we say that $\cL$ belongs to the class $\mathds{H}_\alpha$ if $\cL$ is supported in $[0,\infty)$ and has a regularly varying tail of index $\alpha$, that is, for all $t > 0$, \begin{equation}\label{htp} G(t):=\cL((t,\infty))=L(t)\,t^{-\alpha} \end{equation} where $L$ is a function with slow variation at $\infty$, i.e.\ for any $x>0$, $$ \lim_{t\to\infty}\frac{L(x\,t)}{L(t)} = 1. $$ Set $a_n = \inf\{a>0\,: \; n \,G(a)\leq 1\}$. Then $nG(a_n)=nL(a_n)a_n^{-\alpha}\to 1$ as $n\to\infty$, and \begin{equation}\label{ht1} n\,G(a_n t)\to t^{-\alpha} \quad\text{as} \quad n\to\infty \quad\text{for all $t > 0$}. \end{equation} It is well known that $a_n$ has regular variation at $\infty$ with index $1/\alpha$, i.e.\ $$ a_n=n^{1/\alpha}\ell(n) $$ for some function $\ell$ with slow variation at $\infty$, see for instance Resnick \cite[Section 2.2.1]{resnick}. As an example, if $V$ is uniformly distributed on the interval $[0,1]$ then for every $\alpha\in(0,\infty)$, the law of $V^{-1/\alpha}$, supported in $[1,\infty)$, belongs to $\mathds{H}_\alpha$. In this case, $L(t)=1$ for $t\geq 1$, and $a_n=n^{1/\alpha}$.
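For the uniform example just given, both the tail behavior and the value of $a_n$ are easy to confirm by simulation; a sketch (sample size and the values of $\alpha$, $n$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 1.5
samples = rng.uniform(size=2_000_000) ** (-1.0 / alpha)

# Tail of V^{-1/alpha}: P(V^{-1/alpha} > t) = P(V < t^{-alpha}) = t^{-alpha}, t >= 1.
for t in (1.5, 3.0, 10.0):
    emp = float(np.mean(samples > t))
    print(t, emp, t ** (-alpha))   # empirical vs exact tail

# Here G(a) = a^{-alpha} for a >= 1, so a_n = inf{a : n G(a) <= 1} = n^{1/alpha}.
n_big = 10_000
a_n = n_big ** (1.0 / alpha)
print(a_n, n_big * a_n ** (-alpha))   # n G(a_n) = 1 exactly in this example
```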
To understand the limiting behavior of the spectrum of $K$ in the heavy-tailed case it is important to consider first the symmetric i.i.d.\ matrix corresponding to the un-normalized weights $U_{i,j}$. More generally, we introduce the random $n\times n$ symmetric matrix $X$ defined by \begin{equation}\label{levymatrix} X = (X_\ij)_{1\leq \ij\leq n} \end{equation} where $(X_\ij)_{1\leq i\leq j\leq n}$ are i.i.d.\ such that $U_\ij:=|X_\ij|$ has law in $\mathds{H}_\alpha$ with $\alpha\in(0,2)$, and \begin{equation}\label{theta} \theta = \lim_{t \to \infty} \frac{\mathds{P}(X_{i,j}>t)}{\mathds{P}(|X_{i,j}|>t)} \in [0,1]\,. \end{equation} It is well known that, for $\alpha\in(0,2)$, a random variable $Y$ is in the domain of attraction of an $\alpha$-stable law iff the law of $|Y|$ is in $\mathds{H}_\alpha$ and the limit \eqref{theta} exists, cf.\ \cite[Theorem IX.8.1a]{Feller}. It will be useful to view the entries $X_{i,j}$ in \eqref{levymatrix} as the marks across edge $\{i,j\}\in E_n$ of a random network $(G_n,{\bf X})$, just as the marks $U_{i,j}$ defined the network $(G_n,{\bf U})$ introduced above. Remarkable works have been devoted recently to the asymptotic behavior of the ESD of matrices $X$ defined by \eqref{levymatrix}, sometimes called L\'evy matrices. The analysis of the \emph{Limiting Spectral Distribution} (LSD) for $\alpha\in(0,2)$ is considerably harder than in the finite second moment case (Wigner matrices), and the LSD is non-explicit. Theorem \ref{th:iida} below has been investigated by the physicists Bouchaud and Cizeau \cite{BouchaudCizeau} and rigorously proved by Ben Arous and Guionnet \cite{benarous-guionnet}, and Belinschi, Dembo, and Guionnet \cite{belinschi}; see also Zakharevich \cite{zakharevich} for related results.
\begin{thm}[Symmetric i.i.d.\ matrix, $\alpha\in(0,2)$]\label{th:iida} For every $\alpha\in(0,2)$, there exists a symmetric probability distribution $\mu_\alpha$ on $\mathds{R}$ depending only on $\alpha$ such that (with the notations of (\ref{ht1})--(\ref{levymatrix})) a.s.\ $$ \mu_{a_n^{-1}X}:=\frac1n\sum_{i=1}^n\delta_{\lambda_i(a_n^{-1}X)} \overset{w}{\underset{n\to\infty}{\longrightarrow}} \mu_\alpha. $$ \end{thm} In Section \ref{sec:iida}, we give a new independent proof of Theorem \ref{th:iida}. The key idea of our proof is to exhibit a limiting self--adjoint operator $\bT$ for the sequence of matrices $a_n^{-1} X$, defined on a suitable Hilbert space, and then use known spectral convergence theorems of operators. The limiting operator will be defined as the ``adjacency matrix'' of an infinite rooted tree with random edge weights, the so-called Poisson weighted infinite tree (PWIT) introduced by Aldous \cite{aldous92}, see also \cite{aldoussteele}. In other words, the PWIT will be shown to be the local weak limit of the random network $(G_n,{\bf X})$ when the edge marks $X_{i,j}$ are rescaled by $a_n$. In this setting the LSD $\mu_\alpha$ arises as the expected value of the (random) spectral measure of the operator $\bT$ at the root of the tree. The PWIT and the limiting operator $\bT$ are defined in Section \ref{sec:PWIT}. Our method of proof can be seen as a variant of the resolvent method, based on local convergence of operators. It is also well suited to investigate properties of the LSD $\mu_\alpha$, cf.\ Theorem \ref{th:mua} below. Let us now come back to our random reversible Markov kernel $K$ defined by \eqref{srw} from weights with law $\cL\in\mathds{H}_\alpha$. We obtain different limiting behavior in the two regimes $\alpha\in(0,1)$ and $\alpha\in(1,2)$. The case $\alpha>2$ corresponds to a Wigner--type behavior (special case of Theorem \ref{th:wigner}). We set $$ \kappa_n=na_n^{-1}.
$$ \begin{thm}[Reversible Markov matrix, $\alpha\in(1,2)$]\label{th:k12} Let $\mu_\alpha$ be the probability distribution which appears as the LSD in the symmetric i.i.d.\ case (Theorem \ref{th:iida}). If $\cL\in\mathds{H}_\alpha$ with $\alpha\in(1,2)$ then a.s.\ $$ \mu_{\kappa_n K} :=\frac{1}{n}\sum_{k=1}^n\delta_{\lambda_k(\kappa_n K)}% \overset{w}{\underset{n\to\infty}{\longrightarrow}} \mu_\alpha. $$ \end{thm} \begin{thm}[Reversible Markov matrix, $\alpha\in(0,1)$]\label{th:k01} For every $\alpha\in(0,1)$, there exists a symmetric probability distribution $\widetilde \mu_\alpha$ supported on $[-1,1]$ depending only on $\alpha$ such that a.s.\ $$ \mu_{K} := \frac{1}{n}\sum_{k=1}^n\delta_{\lambda_k(K)}% \overset{w}{\underset{n\to\infty}{\longrightarrow}} \widetilde \mu_\alpha. $$ \end{thm} The proofs of Theorem \ref{th:k12} and Theorem \ref{th:k01} are given in Sections \ref{sec:k12} and \ref{sec:k01} respectively. As in the proof of Theorem \ref{th:iida}, the main idea is to exploit convergence of our matrices to suitable operators defined on the PWIT. To understand the scaling in Theorem \ref{th:k12}, we recall that if $\alpha>1$, then by the strong law of large numbers, we have $n^{-1}\rho_i\to1$ a.s.\ for every row sum $\rho_i$, and this is shown to remove, in the limit $n\to\infty$, all dependencies in the matrix $na_n^{-1} K$, so that we obtain the same behavior as for the i.i.d.\ matrix of Theorem \ref{th:iida}. On the other hand, when $\alpha\in(0,1)$, both the sum $\rho_i$ and the maximum of its elements are on scale $a_n$. The proof of Theorem \ref{th:k01} shows that the matrix $K$ converges (without rescaling) to a random stochastic self--adjoint operator $\mathbf{K}$ defined on the PWIT. The operator $\mathbf{K}$ can be described as the transition matrix of the simple random walk on the PWIT and is naturally linked to Poisson--Dirichlet random variables.
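For illustration, the kernel $K$ of \eqref{srw} is just as easy to sample; a sketch (ours, again with Pareto weights, for which $\mathds{E}\,U_\ij=\alpha/(\alpha-1)$ when $\alpha>1$, so the law of large numbers for the row sums holds with limit $\alpha/(\alpha-1)$ rather than $1$):

```python
import numpy as np

def markov_kernel(n, alpha, rng=None):
    """K_ij = U_ij / rho_i built from symmetric Pareto(alpha) weights U_ij = U_ji."""
    rng = np.random.default_rng(rng)
    U = rng.uniform(size=(n, n)) ** (-1.0 / alpha)   # Pareto(alpha) weights
    U = np.triu(U) + np.triu(U, 1).T                 # symmetric weights
    rho = U.sum(axis=1)                               # row sums rho_i
    return U / rho[:, None], rho

n, alpha = 2000, 1.5
K, rho = markov_kernel(n, alpha, rng=0)
# For this Pareto choice E[U] = alpha/(alpha - 1) = 3, and for alpha > 1 the
# strong law gives rho_i / n -> E[U]; the heavy tail shows up only in the
# fluctuations, which are on the much smaller scale a_n = n^{1/alpha}.
```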
This is based on the observation that the order statistics of any given row of the matrix $K$ converge weakly to the Poisson--Dirichlet law $\mathrm{PD}(\alpha,0)$, see Lemma \ref{le:PoiExt} below for the details. In fact, the operator $\mathbf{K}$ provides an interesting generalization of the Poisson--Dirichlet law. Since $\mu_K$ is supported in $[-1,1]$, \eqref{moms} and Theorem \ref{th:k01} imply that for all $\ell \geq 1$, a.s.\ \begin{equation}\label{momms} \frac1n\sum_{i =1} ^n p_\ell(i) = \int_\mathds{R}\! x^\ell \mu_K(dx) % \underset{n\to\infty}{\longrightarrow} % \int_\mathds{R}\!x^\ell \widetilde \mu_\alpha(dx)=:\gamma_\ell\,. \end{equation} The LSD $\widetilde \mu_\alpha$ will be obtained as the expectation of the (random) spectral measure of $\mathbf{K}$ at the root of the PWIT. It will follow that $\gamma_\ell$ (the $\ell^\text{th}$ moment of $\widetilde \mu_\alpha$) is the expected value of the (random) probability that the random walk returns to the root in $\ell$ steps. In particular, the symmetry of $\widetilde\mu_\alpha$ follows from the bipartite nature of the PWIT. It was proved by Ben Arous and Guionnet \cite[Remark 1.5]{benarous-guionnet} that $\alpha\in(0,2)\mapsto\mu_\alpha$ is continuous with respect to weak convergence of probability measures, and by Belinschi, Dembo, and Guionnet \cite[Remark 1.2 and Lemma 5.2]{belinschi} that $\mu_\alpha$ tends to the Wigner semi--circle law as $\alpha \nearrow 2$. We believe that Theorem \ref{th:k12} should remain valid for $\alpha=2$ with LSD given by the Wigner semi--circle law. Further properties of the measures $\mu_\alpha$ and $\widetilde\mu_\alpha$ are discussed below. The case $\alpha=1$ is qualitatively similar to the case $\alpha\in(1,2)$ with the difference that the sequence $\kappa_n$ in Theorem \ref{th:k12} has to be replaced by $\kappa_n=na_n^{-1}w_n$ where \begin{equation}\label{wwnn} w_n=\int_0^{a_n}\!x\cL(dx)\,.
\end{equation} Indeed, here the mean of $U_\ij$ may be infinite and the closest one gets to a law of large numbers is the statement that $\rho_i/ nw_n \to 1$ in probability (see Section \ref{sec:k1}). The sequence $w_n$ (and therefore $\kappa_n$) is known to be slowly varying at $\infty$ for $\alpha=1$ (see e.g.\ Feller \cite[VIII.8]{Feller}). The following mild condition will be assumed: There exists $0 < \varepsilon < 1/2$ such that \begin{equation}\label{eq:conda1wn} \liminf_{n\to\infty}\frac{ w_{\lfloor n^\varepsilon \rfloor} }{ w_n} > 0. \end{equation} For example, if $U_\ij^{-1}$ is uniform on $[0,1]$, then $\kappa_n=w_n = \log n $ and $\lim_{n\to\infty} w_{\lfloor n^\varepsilon \rfloor} / w_n = \varepsilon$. In the next theorem $\mu_1$ stands for the LSD $\mu_\alpha$ from Theorem \ref{th:iida}, at $\alpha=1$. \begin{thm}[Reversible Markov matrix, $\alpha=1$]\label{th:k1} Suppose that $\cL\in\mathds{H}_\alpha$ with $\alpha=1$ and assume \eqref{eq:conda1wn}. If $\mu_{\kappa_n K}$ is the ESD of $\kappa_nK$, with $\kappa_n=na_n^{-1}w_n$, then, as $n\to\infty$, a.s.\ $\mu_{\kappa_n K}% \overset{w}{\underset{n\to\infty}{\longrightarrow}} \mu_1\,.$ \end{thm} \subsubsection*{Properties of the LSD} In Section \ref{sec:spec} we prove some properties of the LSDs $\mu_\alpha$ and $\widetilde \mu_\alpha$. \begin{thm}[Properties of $\mu_\alpha$]\label{th:mua} Let $\mu_\alpha$ be the symmetric LSD in Theorems \ref{th:iida}-\ref{th:k12}. \begin{enumerate} \item[(i)] $\mu_\alpha$ is absolutely continuous on $\mathds{R}$ with bounded density. \item[(ii)] The density of $\mu_\alpha$ at $0$ is equal to $$ \frac{1}{\pi} \, \Gamma\left(1+ \frac {2}{\alpha}\right) \left( \frac{ \Gamma(1 - \frac {\alpha}{2} )}{ \Gamma(1 + \frac {\alpha}{2} ) } \right)^{\frac{1}{\alpha}} . $$ \item[(iii)] $\mu_\alpha$ is heavy--tailed, and as $t$ goes to $+\infty$, $$ \mu_\alpha (( t , + \infty)) \sim \frac{1}{2}t^{- \alpha}. 
$$ \end{enumerate} \end{thm} Statements (i)-(ii) answer some questions raised in \cite{benarous-guionnet,belinschi}. Statement (iii) is already contained in \cite[Theorem 1.7]{belinschi}, but we provide a new proof based on a Tauberian theorem for the Cauchy--Stieltjes transform that may be of independent interest. \begin{thm}[Properties of $\widetilde\mu_\alpha$]\label{th:muabis} Let $\widetilde\mu_\alpha$ be the symmetric LSD in Theorem \ref{th:k01}, with moments $\gamma_\ell$ as in \eqref{momms}. Then the following statements hold true. \begin{itemize} \item[(i)] For $\alpha\in(0,1)$, there exists $\delta>0$ such that $$ \gamma_{2n}\geq \delta\,n^{-\alpha} \quad\text{for all}\quad n \geq 1\,. $$ Moreover, we have $\liminf_{\alpha\nearrow 1}\gamma_2>0$. \item[(ii)] For the topology of weak convergence, the map $\alpha \mapsto \widetilde \mu_\alpha$ is continuous on $(0,1)$. \item[(iii)] For the topology of weak convergence, $$ \lim_{\alpha \searrow 0} \widetilde \mu_\alpha % = \frac 1 4 \delta_{-1}+ \frac 1 2 \delta_{0} + \frac 1 4 \delta_{1}. $$ \end{itemize} \end{thm} It is delicate to provide reliable numerical simulations of the ESDs. Nevertheless, Figure \ref{firstpic} provides histograms for various values of $\alpha$ and a large value of $n$, illustrating Theorems \ref{th:k12}--\ref{th:muabis}. \subsubsection*{Invariant measure and edge--behavior} Finally, we turn to the analysis of the invariant probability distribution $\hat \rho$ for the random walk on $(G_n,\mathbf{U})$. This is obtained by normalizing the vector of row sums $\rho$: $$ \hat \rho = (\rho_1 + \dots + \rho_n) ^{ -1} (\rho_1, \dots , \rho_n ).
$$ Following \cite[Lemma 2.2]{bordenave-caputo-chafai}, if $\alpha>2$ then $n\max_{1\leq i\leq n}|\hat\rho_i - n^{-1}| \to 0$ a.s.\ as $n\to\infty$. This uniform strong law of large numbers does not hold in the heavy--tailed case $\alpha\in(0,2)$: the large $n$ behavior of $\hat \rho$ is then dictated by the largest weights in the system. Below we use the notation $\widetilde\rho =(\widetilde\rho_1,\dots,\widetilde\rho_n)$ for the ranked values of $\hat\rho_1,\dots,\hat\rho_n$, so that $\widetilde\rho_1\geq \widetilde\rho_2\geq\cdots$ and their sum is $1$. The symbol $\scriptstyle\overset{d}{\longrightarrow}$ denotes convergence in distribution. We refer to Subsection \ref{order} for more details on weak convergence in the space of ranked sequences and for the definition of the Poisson--Dirichlet law $\mathrm{PD}(\alpha,0)$. \begin{thm}[Invariant probability measure] \label{th:inv} Suppose that $\cL\in\mathds{H}_\alpha$. \begin{itemize} \item[(i)] If $\alpha\in(0,1)$, then \begin{equation}\label{inv02} \widetilde \rho \overset{d}{\underset{n\to\infty}{\longrightarrow}} \frac12\,\left(V_1,V_1,V_2,V_2,\dots\right)\,, \end{equation} where $V_1>V_2>\cdots$ stands for a Poisson--Dirichlet $\mathrm{PD}(\alpha,0)$ random vector. \item[(ii)] If $\alpha\in(1,2)$, then \begin{equation}\label{inv01} \kappa_{n(n+1)/2} \,\widetilde\rho \overset{d}{\underset{n\to\infty}{\longrightarrow}} \frac12\left(x_1,x_1,x_2,x_2,\dots\right)\,, \end{equation} where $x_1>x_2>\cdots$ denote the ranked points of the Poisson point process on $(0,\infty)$ with intensity measure $\alpha\,x^{-\alpha-1}dx$. Moreover, the same convergence holds for $\alpha=1$ provided the sequence $\kappa_n$ is replaced by $n a_n^{-1} w_n$, with $w_n$ as in \eqref{wwnn}. \end{itemize} \end{thm} Theorem \ref{th:inv} is proved in Section \ref{sec:inv}.
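A quick simulation (ours, with Pareto weights) illustrates the mechanism behind Theorem \ref{th:inv}: for $\alpha\in(0,1)$ the two largest entries of $\hat\rho$ come from the two endpoints of the edge carrying the largest weight.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 3000, 0.5
U = rng.uniform(size=(n, n)) ** (-1.0 / alpha)   # Pareto(alpha) weights
U = np.triu(U) + np.triu(U, 1).T
rho = U.sum(axis=1)
rho_hat = rho / rho.sum()                         # invariant probability

# The largest off-diagonal weight U_ij belongs to the two rows i and j and,
# for alpha in (0, 1), dominates both row sums (diagonal weights are
# negligible, being maxima over n rather than ~n^2/2 samples).
off = U - np.diag(np.diag(U))
i, j = np.unravel_index(np.argmax(off), off.shape)
top_two_rows = set(np.argsort(rho_hat)[-2:])
```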
These results will be derived from the statistics of the ranked values of the weights $U_\ij$, $i<j$, on the scale $a_{n(n+1)/2}$ (diagonal weights $U_{i,i}$ are easily seen to give negligible contributions). The duplication in the sequences in \eqref{inv01} and \eqref{inv02} then comes from the fact that each of the largest weights belongs to two distinct rows and determines alone the limiting value of the associated row sum. Theorem \ref{th:inv} is another indication that the random walk with transition matrix $K$ shares the features of a \emph{trap model}. Loosely speaking, instead of being trapped at a vertex, as in the usual mean field trap models (see \cite{Bouchaud,BenArousCerny,MR2152251,MR2435851}), here the walker is trapped at an edge. Large edge weights are responsible for the large eigenvalues of $K$. This phenomenon is well understood in the case of symmetric random matrices with i.i.d.\ entries, where it is known that, for $\alpha\in(0,4)$, the edge of the spectrum gives rise to Poisson statistics, see \cite{MR2081462,auffinger-benarous-peche}. The behavior of the extremal eigenvalues of $K$ when $\mathcal{L}$ has finite fourth moment has been studied in \cite{bordenave-caputo-chafai}. In particular, it is shown there that $\lambda_2 = O(n^{-1/2})$, so that the spectral gap $1-\lambda_2$ tends to one. In the present case of heavy--tailed weights, in contrast, by localization on the largest edge--weight it is possible to prove that, a.s.\ and up to corrections with slow variation at $\infty$, \begin{equation}\label{edges} 1-\lambda_2 = \begin{cases} O(n^{-1/\alpha}) & \alpha\in(0,1)\\ O(n^{-(2-\alpha)/\alpha}) & \alpha\in[1,2) \end{cases} \end{equation} Similarly, for $\alpha\in(2,4)$ one has that $\lambda_2$ is bounded below by $n^{-(\alpha-2)/\alpha}$. Understanding the statistics of the extremal eigenvalues remains an interesting open problem.
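The localization mechanism behind \eqref{edges} is also visible on samples; a sketch (ours, with Pareto weights): for $\alpha\in(0,1)$ the largest edge weights produce eigenvalues of $K$ very close to $\pm1$, in addition to the trivial eigenvalue $1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 500, 0.5
U = rng.uniform(size=(n, n)) ** (-1.0 / alpha)
U = np.triu(U) + np.triu(U, 1).T
rho = U.sum(axis=1)
# S = D^{1/2} K D^{-1/2} is symmetric with the same spectrum as K.
S = U / np.sqrt(np.outer(rho, rho))
lam = np.sort(np.linalg.eigvalsh(S))
# A huge weight U_ij makes the walk oscillate across the edge {i, j}: an
# eigenvector pair localized on {i, j} with eigenvalues close to +1 and -1.
```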
\section{Convergence to the Poisson Weighted Infinite Tree} \label{sec:PWIT} The aim of this section is to prove that the matrices $X$ and $K$ appearing in Theorems \ref{th:iida}, \ref{th:k12} and \ref{th:k01}, when properly rescaled, converge ``locally'' to a limiting operator defined on the Poisson weighted infinite tree (PWIT). The concept of local convergence of operators is defined below. We first recall the standard construction of the PWIT. \subsection{The PWIT} Given a Radon measure $\nu$ on $\mathds{R}$, $\mathrm{PWIT}(\nu)$ is the random rooted tree defined as follows. The vertex set of the tree is identified with $\mathds{N}^f:= \cup_{k \in \mathds{N}} \mathds{N}^k$ by indexing the root as $\mathds{N}^0 = \eset$, the offspring of the root as $\mathds{N}$ and, more generally, the offspring of some $\mathbf{v} \in \mathds{N}^k$ as $(\bv1),(\bv2), \cdots \in \mathds{N}^{k+1}$ (for short notation, we write $(\bv1)$ in place of $(\mathbf{v},1)$). In this way the set of $\mathbf{v}\in\mathds{N}^k$ identifies the $k^\text{th}$ generation. We now assign marks to the edges of the tree according to a collection $\{ \Xi_\mathbf{v} \}_{\mathbf{v} \in \mathds{N}^f}$ of independent realizations of the Poisson point process with intensity measure $\nu$ on $\mathds{R}$. Namely, starting from the root $\eset$, let $ \Xi_\eset = \{y_1,y_2,\dots\}$ be ordered in such a way that $|y_1| \leq |y_2| \leq \cdots$, and assign the mark $y_i$ to the offspring of the root labeled $i$. Now, recursively, at each vertex $\mathbf{v}$ of generation $k$, assign the mark $y_{\mathbf{v} i}$ to the offspring labeled $\mathbf{v} i$, where $\Xi_{\mathbf{v}}=\{ y_{\mathbf{v} 1}, y_{\mathbf{v} 2}, \dots \}$ satisfies $|y_{\mathbf{v} 1}| \leq |y_{\mathbf{v} 2}| \leq \cdots$. Note that $\Xi_\mathbf{v}$ has on average $\nu(\mathds{R})\in(0,\infty]$ elements. As a convention, if $\nu (\mathds{R}) < \infty$, one sets the remaining marks equal to $\infty$.
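The recursive construction is straightforward to simulate; a sketch (ours) of a finite truncation, for $\nu$ the Lebesgue measure on $[0,\infty)$, where the ordered points of each $\Xi_\mathbf{v}$ are the partial sums of i.i.d.\ exponential variables:

```python
import numpy as np

def pwit_marks(B, H, rng=None):
    """Marks of PWIT(Lebesgue on [0, inf)), truncated to B offspring per
    vertex and H generations. The ordered points y_{v1} < y_{v2} < ... of a
    rate-1 Poisson process on [0, inf) are cumulative sums of i.i.d. Exp(1).
    """
    rng = np.random.default_rng(rng)
    marks = {}            # vertex tuple v -> mark y_v on the edge to its parent
    frontier = [()]       # the root, indexed by the empty tuple
    for _ in range(H):
        nxt = []
        for v in frontier:
            for k, y in enumerate(np.cumsum(rng.exponential(size=B)), start=1):
                marks[v + (k,)] = y
                nxt.append(v + (k,))
        frontier = nxt
    return marks
```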
For example, if $\nu=\lambda\delta_1$ is proportional to a Dirac mass, then, neglecting infinite marks, $\mathrm{PWIT}(\nu)$ is the tree obtained from a Yule process (with all marks equal to $1$). In the sequel we shall only consider cases where $\nu$ is not finite and each vertex has a.s.\ an infinite number of offspring with finite and distinct marks. If $\nu$ is the Lebesgue measure on $[0,\infty)$ we obtain the original PWIT in \cite{aldous92}. \subsection{Local operator convergence} We give a general formulation and later specialize to our setting. Let $V$ be a countable set, and let $L^2(V)$ denote the Hilbert space defined by the scalar product $$ \langle \phi,\psi \rangle:= \sum_{u\in V} \bar\phi_u\psi_u\,,\quad \phi_u = \langle \delta_u,\phi \rangle $$ where $\phi,\psi\in \mathds{C}^V$ and $\delta_{u}$ denotes the unit vector with support $u$. Let $\cD$ denote the dense subset of $L^2 (V)$ of vectors with finite support. \begin{defi}[Local convergence]\label{def:convloc} Suppose $\mathbf{S}_n$ is a sequence of bounded operators on $L^2(V)$ and $\mathbf{S}$ is a closed linear operator on $L^2(V)$ with dense domain $D(\mathbf{S})\supset \cD$. Suppose further that $\cD$ is a core for $\mathbf{S}$ (i.e.\ the closure of $\mathbf{S}$ restricted to $\cD$ equals $\mathbf{S}$). For any $u,v\in V$ we say that $(\mathbf{S}_n,u)$ converges locally to $(\mathbf{S},v)$, and write $$ (\mathbf{S}_n,u) \to (\mathbf{S},v)\,, $$ if there exists a sequence of bijections $\sigma_n:V\to V$ such that $\sigma_n (v) = u$ and, for all $\phi\in\cD$, $$ \sigma_n ^{-1} \mathbf{S}_n \sigma_n \phi \to \mathbf{S} \phi\,, $$ in $L^2(V)$, as $n\to\infty$.
\end{defi} In other words, this is the standard strong convergence of operators up to a re-indexing of $V$ which preserves a distinguished element. With a slight abuse of notation we have used the same symbol $\sigma_n$ for the linear isometry $\sigma_n: L^2(V)\to L^2(V)$ induced in the obvious way, i.e.\ such that $\sigma_n\delta_v = \delta_{\sigma_n(v)}$ for all $v\in V$. The point of introducing Definition \ref{def:convloc} lies in the following theorem on strong resolvent convergence. Recall that if ${\bf S}$ is a self--adjoint operator its spectrum is real and for all $z \in \mathds{C}_+ := \{z \in \mathds{C} : \Im z > 0\} $, the operator $\mathbf{S} - zI$ is invertible with bounded inverse. The operator--valued function $z \mapsto (\mathbf{S} - zI)^{-1}$ is the resolvent of $\mathbf{S}$. \begin{thm}[From local convergence to resolvents]\label{th:strongres} If $\mathbf{S}_n$ and $\mathbf{S}$ are self--adjoint operators that satisfy the conditions of Definition \ref{def:convloc} and $(\mathbf{S}_n,u) \to (\mathbf{S},v)$ for some $u,v\in V$, then, for all $z \in \mathds{C}_+$, \begin{equation}\label{strresconv} \langle \delta_{u}, (\mathbf{S}_n- zI) ^{-1} \delta_u \rangle \to \langle \delta_{v}, (\mathbf{S}- zI) ^{-1} \delta_{v} \rangle. \end{equation} \end{thm} \begin{proof}[Proof of Theorem \ref{th:strongres}] It is a special case of \cite[Theorem VIII.25(a)]{reedsimon}.
Indeed, if we define $\widetilde{\mathbf{S}}_n = \sigma_n ^{-1} \mathbf{S}_n \sigma_n$, then $\widetilde{\mathbf{S}}_n \phi \to \mathbf{S} \phi$ for all $\phi$ in a common core of the self--adjoint operators $\widetilde{\mathbf{S}}_n, \mathbf{S}$. This implies the strong resolvent convergence, i.e.\ $(\widetilde{\mathbf{S}}_n - zI)^{-1}\psi \to (\mathbf{S} - zI)^{-1}\psi$ for any $z\in\mathds{C}_+$, $\psi\in L^2(V)$. The conclusion follows by taking the scalar product $$ \scalar{\delta_v}{(\widetilde{\mathbf{S}}_n - zI)^{-1}\delta_v} % = \scalar{\delta_u}{(\mathbf{S}_n - zI)^{-1}\delta_u}. $$ \end{proof} We shall apply the above theorem in cases where the operators $\mathbf{S}_n$ and $\mathbf{S}$ are random operators on $L^2(V)$ which satisfy with probability one the conditions of Definition \ref{def:convloc}. In these cases we say that $(\mathbf{S}_n ,u) \to (\mathbf{S} ,v)$ \emph{in distribution} if there exists a random bijection $\sigma_n$ as in Definition \ref{def:convloc} such that $\sigma_n ^{-1} \mathbf{S}_n \sigma_n \phi $ converges in distribution to $\mathbf{S} \phi $, for all $\phi \in \cD$ (where a random vector $\psi_n \in L^2 (V)$ converges in distribution to $\psi$ if $$ \lim_{n\to\infty} \mathds{E} f (\psi_n) = \mathds{E} f(\psi) $$ for all bounded continuous functions $f:L^2 (V)\to\mathds{R}$). Under these assumptions, \eqref{strresconv} becomes convergence in distribution of (bounded) complex random variables.
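For finite symmetric matrices, the quantity in \eqref{strresconv} is just a diagonal entry of the resolvent, and its spectral representation makes the positivity of its imaginary part on $\mathds{C}_+$ transparent; a small sketch (ours):

```python
import numpy as np

def resolvent_entry(M, u, z):
    """<delta_u, (M - z I)^{-1} delta_u> for a real symmetric matrix M, z in C_+."""
    n = M.shape[0]
    e = np.zeros(n, dtype=complex)
    e[u] = 1.0
    return e @ np.linalg.solve(M - z * np.eye(n), e)

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30))
A = (A + A.T) / 2
z = 0.3 + 1.0j
g = resolvent_entry(A, 0, z)
# Spectral representation: g = sum_k |<delta_0, v_k>|^2 / (lambda_k - z), so
# Im(g) = Im(z) * sum_k |<delta_0, v_k>|^2 / |lambda_k - z|^2 > 0 on C_+.
```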
In our setting the Hilbert space will be $L^2(V)$, with $V=\mathds{N}^f$ the vertex set of the PWIT, and the operator $\mathbf{S}_n$ will be a rescaled version of the matrix $X$ defined by \eqref{levymatrix} or of the matrix $K$ defined by \eqref{srw}. The operator $\mathbf{S}$ will be the corresponding limiting operator defined below. \subsection{Limiting operators}\label{limop} Let $\theta$ be as in \eqref{theta}, and let $\ell_\theta$ be the positive Borel measure on the real line defined by $d\ell_\theta(x)=\theta \mathds{1}_{\{x > 0\}}dx + (1-\theta)\mathds{1}_{\{x < 0\}}dx$. Consider a realization of $\mathrm{PWIT}(\ell_\theta)$. As before, the mark on the edge from a vertex $\mathbf{v}$ to its offspring $\mathbf{v} k$ is denoted by $y_{\mathbf{v} k}$. We note that almost surely \begin{equation}\label{l20} \sum_k |y_{\mathbf{v} k }|^{-2/\alpha} < \infty, \end{equation} since a.s.\ $\lim_k |y_{\mathbf{v} k } | / k = 1$ and $\sum_k k^{-2/\alpha}$ converges for $\alpha \in (0,2)$. Recall that for $V = \mathds{N} ^f$, $\cD$ is the dense subset of $L^2 (V)$ of vectors with finite support. We may a.s.\ define a linear operator $\bT:\cD\to L^2(V)$ by letting, for $\mathbf{v},\mathbf{w}\in \mathds{N}^f$, \begin{equation}\label{tone} \bT(\mathbf{v},\mathbf{w}) = \langle \delta_{\mathbf{v}} , \bT \delta_{\mathbf{w}} \rangle = \begin{cases} \mathrm{sign}(y_{\mathbf{w}}) |y_{\mathbf{w}}|^{-1/\alpha} & \text{ if $\mathbf{w} = \mathbf{v} k$ for some integer $k$} \\ \mathrm{sign}(y_{\mathbf{v}}) |y_{\mathbf{v}}|^{-1/\alpha} & \text{ if $\mathbf{v} = \mathbf{w} k$ for some integer $k$} \\ 0& \text{ otherwise}. \end{cases} \end{equation} Note that if every edge $e$ in the tree with mark $y_e$ is given the ``weight'' $\mathrm{sign}(y_{e}) |y_{e}|^{-1/\alpha}$ then we may look at the operator $\bT$ as the ``adjacency matrix'' of the weighted tree.
Clearly, $\bT$ is symmetric, and therefore it has a closed extension with domain $D(\bT) \subset L^2 (\mathds{N}^f)$ such that $\cD \subset D(\bT)$; see e.g.\ \cite[VIII.2]{reedsimon}. We will prove in Proposition \ref{esssa} below that $\bT$ is essentially self--adjoint, i.e.\ the closure of $\bT$ is self--adjoint. With a slight abuse of notation, we identify $\bT$ with its closed extension. As stated below, $\bT$ is the weak local limit of the sequence of $n\times n$ i.i.d.\ matrices $a_n^{-1}X$, where $X$ is defined by \eqref{levymatrix}. To this end we view the matrix $X$ as an operator in $L^2(V)$ by setting $\scalar{\delta_i}{X\delta_j} = X_{i,j}$, where $i,j\in\mathds{N}$ denote the labels of the offspring of the root (the first generation), with the convention that $X_{i,j}=0$ when either $i>n$ or $j>n$, and by setting $\scalar{\delta_\mathbf{u}}{X\delta_\mathbf{v}} = 0$ when either $\mathbf{u}$ or $\mathbf{v}$ does not belong to the first generation. Similarly, taking now $\theta=1$, in the case of the Markov matrices $K$ defined by \eqref{srw}, for $\alpha\in[1,2)$, $\bT$ is the local limit operator of $\kappa_n K$. To work directly with symmetric operators we introduce the symmetric matrix \begin{equation}\label{ksim} S_{i,j} = \frac{U_\ij}{\sqrt{\rho_i\rho_j}}\,, \end{equation} which is easily seen to have the same spectrum as $K$ (see e.g.\ \cite[Lemma 2.1]{bordenave-caputo-chafai}). Again the matrix $S$ can be embedded in the infinite tree as described above for $X$. In the case $\alpha\in(0,1)$ the Markov matrix $K$ has a different limiting object that is defined as follows. Consider a realization of $\mathrm{PWIT}(\ell_1)$, where $\ell_1$ is the Lebesgue measure on $[0,\infty)$. We define an operator corresponding to the random walk on this tree with conductance equal to the mark raised to the power $-1/\alpha$.
More precisely, for $\mathbf{v} \in \mathds{N}^f$, let $$ \rho(\mathbf{v}) = y^{-1/\alpha}_{\mathbf{v}} + \sum_{k \in \mathds{N}} y^{-1/\alpha}_{\mathbf{v} k} $$ with the convention that $y^{-1/\alpha}_{\eset} = 0$. Since a.s.\ $\lim_k |y_{\mathbf{v} k } | / k = 1$, $\rho(\mathbf{v})$ is almost surely finite for $\alpha \in (0,1)$. We define the linear operator $\mathbf{K}$ on $\cD$, by letting, for $\mathbf{v},\mathbf{w} \in \mathds{N}^f$, \begin{equation}\label{kappone} \mathbf{K}(\mathbf{v},\mathbf{w}) % = \langle \delta_{\mathbf{v}} , \mathbf{K} \delta_{\mathbf{w}} \rangle % = \begin{cases} \frac{y^{-1/\alpha}_{\mathbf{w}}}{\rho(\mathbf{v})} & \text{ if $\mathbf{w} = \mathbf{v} k$ for some integer $k$} \\ \frac{y^{-1/\alpha}_{\mathbf{v}}}{\rho(\mathbf{v})} & \text{ if $\mathbf{v} = \mathbf{w} k$ for some integer $k$} \\ 0 & \text{ otherwise}. \end{cases} \end{equation} Note that $\mathbf{K}$ is not symmetric, but it becomes symmetric in the weighted Hilbert space $L^2(V,\rho)$ defined by the scalar product $$ \scalar{\phi}{\psi}_\rho := \sum_{\mathbf{u}\in V} \rho(\mathbf{u}) \,\bar\phi_\mathbf{u}\psi_\mathbf{u}\,.
$$ Moreover, on $L^2(V,\rho)$, $\mathbf{K}$ is a bounded self--adjoint operator since Schwarz' inequality implies \begin{align*} \scalar{ \mathbf{K} \phi}{\mathbf{K}\phi}_\rho &= \sum_\mathbf{u} \rho(\mathbf{u}) \big|\sum_\mathbf{v} \mathbf{K}(\mathbf{u},\mathbf{v}) \phi_\mathbf{v}\big|^2\nonumber\\ &\leq \sum_\mathbf{u}\rho(\mathbf{u})\sum_\mathbf{v} \mathbf{K}(\mathbf{u},\mathbf{v})|\phi_\mathbf{v}|^2 = \sum_\mathbf{v} \rho(\mathbf{v}) |\phi_\mathbf{v}|^2 =\scalar{\phi}{\phi}_{\rho} \label{sck} \end{align*} so that the operator norm of $\mathbf{K}$ is less than or equal to $1$. To work with self--adjoint operators in the unweighted Hilbert space $L^2(V)$ we shall actually consider the operator $\mathbf{S}$ defined by \begin{equation}\label{opsym} \mathbf{S}(\mathbf{v},\mathbf{w}):=\sqrt{\frac{\rho(\mathbf{v})}{\rho(\mathbf{w})}}\,\mathbf{K}(\mathbf{v},\mathbf{w}) = \frac{\bT(\mathbf{v},\mathbf{w})} {\sqrt{\rho(\mathbf{v})\,\rho(\mathbf{w})}}\,. \end{equation} This defines a bounded self--adjoint operator in $L^2(V)$. Indeed, the map $\delta_\mathbf{v} \to\sqrt{\rho(\mathbf{v})}\delta_\mathbf{v}$ induces a linear isometry $\bD: L^2(V,\rho)\to L^2(V)$ such that \begin{equation}\label{isom} \scalar{\phi}{\mathbf{S}\psi} = \scalar{\bD^{-1}\phi}{\mathbf{K}\bD^{-1}\psi}_\rho\,, \end{equation} for all $\phi,\psi\in L^2(V)$. In this way, when $\alpha\in(0,1)$, $\mathbf{S}$ will be the limiting operator associated to the matrix $S$ defined in \eqref{ksim}. Note that no rescaling is needed here.
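On a finite truncation of the tree these properties can be checked directly; a sketch (ours) building the kernel $\mathbf{K}$ and its symmetrization $\mathbf{S}$ with conductances $y^{-1/\alpha}$, and verifying stochasticity and that the spectrum lies in $[-1,1]$:

```python
import numpy as np

def truncated_pwit_walk(B, H, alpha, rng=None):
    """Random-walk kernel on a (B offspring, H generations) truncation of the
    PWIT, with conductance y^{-1/alpha} on the edge carrying mark y."""
    rng = np.random.default_rng(rng)
    parent = [None]                  # vertex 0 is the root
    cond = [0.0]                     # conductance of the edge to the parent
    frontier = [0]
    for _ in range(H):
        nxt = []
        for p in frontier:
            for y in np.cumsum(rng.exponential(size=B)):
                parent.append(p)
                cond.append(y ** (-1.0 / alpha))
                nxt.append(len(parent) - 1)
        frontier = nxt
    n = len(parent)
    C = np.zeros((n, n))
    for i in range(1, n):
        C[i, parent[i]] = C[parent[i], i] = cond[i]
    rho = C.sum(axis=1)                    # rho(v) = sum of conductances at v
    K = C / rho[:, None]                   # transition kernel of the walk
    S = C / np.sqrt(np.outer(rho, rho))    # symmetrized version of K
    return K, S

K, S = truncated_pwit_walk(4, 3, 0.5, rng=0)
```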
The main result of this section is the following. \begin{thm}[Limiting operators]\label{th:convop} As $n$ goes to infinity, in distribution, \begin{enumerate} \item[(i)] if $\alpha \in (0,2)$ and $\theta\in[0,1]$, then $(a_n ^{-1} X ,1) \to (\bT ,\eset)$, \item[(ii)] if $\alpha \in (1,2)$ and $\theta = 1$ then $( \kappa_n S ,1) \to (\bT ,\eset)$, \item[(iii)] if $\alpha \in (0,1)$ then $(S ,1) \to (\mathbf{S},\eset)$. \end{enumerate} \end{thm} From the remark after Theorem \ref{th:strongres} we see that Theorem \ref{th:convop} implies convergence in distribution of the resolvent at the root. As we shall see in Section \ref{se:convesd}, this in turn gives convergence of the expected values of the Cauchy--Stieltjes transform of the ESD of our matrices. The rest of this section is devoted to the proof of Theorem \ref{th:convop}. \subsection{Weak convergence of a single row}\label{order} In this paragraph, we recall some facts about the order statistics of the first row of the matrices $X$ and $K$, i.e.\ $$ (X_{1,1}, \dots ,X_{1,n}) \quad\text{and}\quad (U_{1,1}, \dots ,U_{1,n})/\rho_1\,, $$ where $U_{1,j} = |X_{1,j}|$ has law in $\mathds{H}_\alpha$. Let us denote by $V_1\geq V_2\geq\cdots\geq V_n$ the order statistics of the variables $U_{1,j}$, $1\leq j\leq n$. Recall that $\rho_1 = \sum_{j=1}^nV_j$. Let us define $\Delta_{k,n}=\sum_{j=k+1}^nV_j$ for $k<n$ and $\Delta^2_{k,n}=\sum_{j=k+1}^nV_j^2$. Call $\mathcal{A}$ the set of sequences $\{v_j\}\in [0,\infty)^\mathds{N}$ with $v_1\geq v_2\geq\cdots\geq 0$ such that $\lim_{j\to\infty} v_j=0$, and let $\mathcal{A}_1\subset\mathcal{A}$ be the subset of sequences satisfying $ \sum_j v_j = 1$.
We shall view $$ Y_n = \left(\frac{V_1}{a_n},\dots,\frac{V_n}{a_n}\right)\, % \quad \text{ and } \quad % Z_n = \left(\frac{V_1}{\rho_1},\dots,\frac{V_n}{\rho_1}\right)\, $$ as elements of $\mathcal{A}$ and $\mathcal{A}_1$, respectively, simply by adding zeros to the right of $V_n/a_n$ and $V_n/\rho_1$. Equipped with the standard product metric, $\mathcal{A}$ and $\mathcal{A}_1$ are complete separable metric spaces ($\mathcal{A}_1$ is compact) and convergence in distribution for $\mathcal{A},\mathcal{A}_1$--valued random variables is equivalent to finite dimensional convergence, cf.\ e.g.\ Bertoin \cite{bertoin06}. Let $E_1,E_2,\dots$ denote i.i.d.\ exponential variables with mean $1$ and write $\gamma_k=\sum_{j=1}^k E_j$. We define the random variable in $\mathcal{A}$ $$ Y= \left(\gamma_1^{-\frac1\alpha},\gamma_2^{-\frac1\alpha}, \dots \right)\,. $$ The law of $Y$ is the law of the ordered points of a Poisson process on $(0,\infty)$ with intensity measure $\alpha x ^{-\alpha - 1} dx$. For $\alpha \in (0,1)$ we define the variable in $\mathcal{A}_1$ $$ Z=\left(\frac{\gamma_1^{-\frac1\alpha}}{\sum_{n=1}^\infty\gamma_n^{-\frac1\alpha}}\,,\,\frac{\gamma_2^{-\frac1\alpha}}{\sum_{n=1}^\infty\gamma_n^{-\frac1\alpha}}\,,\,\dots\right)\,. $$ For $\alpha\in(0,1)$ the sum $\sum_n\gamma_n^{-\frac1\alpha}$ is a.s.\ finite. The law of $Z$ in $\mathcal{A}_1$ is called the \emph{Poisson--Dirichlet} law $\mathrm{PD}(\alpha,0)$, see Pitman and Yor \cite[Proposition 10]{MR1434129}. The next result is rather standard but we give a simple proof for convenience.
\begin{lem}[Poisson--Dirichlet laws and Poisson point processes]% \label{le:PoiExt} \ \begin{itemize} \item[(i)] For all $\alpha>0$, $Y_n$ converges in distribution to $Y$. Moreover, for $\alpha \in (0,2)$, $(a_n^{-1} V_j)_{j \geq 1}$ is a.s.\ uniformly square integrable, i.e.\ a.s.\ $\lim_{k} \sup_{n> k}\,a_n^{-2}\,\Delta^2_{k,n} = 0$. \item[(ii)] If $\alpha \in (0,1)$, $Z_n$ converges in distribution to $Z$. Moreover, $(a_n^{-1} V_j)_{j \geq 1}$ is a.s.\ uniformly integrable, i.e.\ a.s.\ $\lim_{k} \sup_{n> k}\,a_n^{-1}\,\Delta_{k,n} = 0$. \item[(iii)] If $I \subset \mathds{N}$ is a finite set and $V^I_{1}\geq V^I_{2}\geq \cdots$ denote the order statistics of $\{U_{1,j}\}_{j \in \{ 1 ,\dots,n \} \backslash I}$ then (i) and (ii) hold with $Y^I_n = (V^I_1/a_n,V^I_2/a_n, \dots)$ and $Z^I_n = (V^I_1/\rho_1,V^I_2/\rho_1, \dots)$. \end{itemize} \end{lem} As an example, from (i), we retrieve the well--known fact that for any $\alpha>0$, the random variable $ a_n^{-1}\max(U_{1,1},\ldots,U_{1,n}) $ converges weakly as $n\to\infty$ to the law of $\gamma_1^{-\frac1\alpha}$. This law, known as a Fr\'echet law, has density $\alpha x^{-\alpha-1}e^{-x^{-\alpha}}$ on $(0,\infty)$. \begin{proof}[Proof of Lemma \ref{le:PoiExt}] As in LePage, Woodroofe and Zinn \cite{zinn81} we take advantage of the following well known representation for the order statistics of i.i.d.\ random variables. Let $G$ be the function in \eqref{htp} and write $$ G^{-1}(u) = \inf\{y>0:\, G(y)\leq u\}\,, $$ $u\in(0,1)$. We have that $(V_1,\dots, V_n)$ equals in distribution the vector \begin{equation}\label{rep} \left(G^{-1}\left(\gamma_1/\gamma_{n+1}\right),\dots,G^{-1}\left(\gamma_n/\gamma_{n+1}\right)\right)\,, \end{equation} where $\gamma_j$ has been defined above. To prove (i) we start from the distributional identity $$ Y_n\overset{d}{=} \left(\frac{G^{-1}\left(\gamma_1/\gamma_{n+1}\right)}{a_n},\dots,\frac{G^{-1}\left(\gamma_n/\gamma_{n+1}\right)}{a_n}\right)\,, $$ which follows from \eqref{rep}. 
It suffices to prove that for every $k$, almost surely the first $k$ terms above converge to the first $k$ terms in $Y$. Thanks to \eqref{ht1}, almost surely, for every $j$: \begin{equation}\label{conn} a_n^{-1}\,G^{-1}\left(\gamma_j/\gamma_{n+1}\right)\to \gamma_j^{-\frac1\alpha}\,, \end{equation} and the convergence in distribution of $Y_n$ to $Y$ follows. Moreover, from \eqref{ht1}, for any $\delta>0$ we can find $n_0$ such that $$ a_n^{-1} V_j = a_n^{-1}\,G^{-1}\left(\gamma_j/\gamma_{n+1}\right)\leq \left(n\gamma_j/(1+\delta)\gamma_{n+1}\right)^{-\frac1\alpha}\,, $$ for $n\geq n_0$, $j\in\mathds{N}$. Since $n/\gamma_{n+1}\to 1$ a.s., we see that the expression above is a.s.\ bounded by $2(1+\delta)^{\frac1\alpha}\gamma_j^{-\frac1\alpha}$, for $n$ sufficiently large, and the second part of (i) follows from the a.s.\ summability of $\gamma_j^{-\frac 2\alpha}$. Similarly, if $\alpha \in (0,1)$, $\Delta_{k,n}$ has the same law as $$ \sum_{j =k+1}^n G^{-1}\left(\gamma_j/\gamma_{n+1}\right), $$ and the second part of (ii) follows from the a.s.\ summability of $\gamma_j^{-\frac1\alpha}$. To prove the convergence of $Z_n$ we use the distributional identity $$ Z_n\overset{d}{=} \left(\frac{G^{-1}\left(\gamma_1/\gamma_{n+1}\right)}{\sum_{j=1}^n G^{-1}\left(\gamma_j/\gamma_{n+1}\right)},\dots,\frac{G^{-1}\left(\gamma_n/\gamma_{n+1}\right)}{\sum_{j=1}^n G^{-1}\left(\gamma_j/\gamma_{n+1}\right)}\right)\,. $$ As a consequence of \eqref{conn}, we then have almost surely $$ a_n^{-1}\,\sum_{j=1}^nG^{-1}\left(\gamma_j/\gamma_{n+1}\right)\to \sum_{j=1}^\infty \gamma_j^{-\frac1\alpha}\,, $$ and (ii) follows. Finally, (iii) is an easy consequence of the exchangeability of the variables $(U_{1,i})$: $$ \mathds{P}( V^I _k \neq V_k ) % \leq \mathds{P} ( \exists j \in I : U_{1,j} \geq V_k ) % \leq |I| \mathds{P} (U_{1,1} \geq V_k) = |I|\frac{k}{n}. $$ \end{proof} The intensity measure $\alpha x ^{-\alpha - 1} dx$ on $(0,\infty)$ is not locally finite at $0$.
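The objects in the lemma are easy to simulate; a sketch (ours) of a $\mathrm{PD}(\alpha,0)$ sample via the normalized sequence $\gamma_k^{-1/\alpha}$, truncating the a.s.\ convergent series at $N$ terms:

```python
import numpy as np

def poisson_dirichlet(alpha, N=100_000, rng=None):
    """Approximate PD(alpha, 0) sample: gamma_k^{-1/alpha} normalized by the
    truncated series, where gamma_k are partial sums of i.i.d. Exp(1).
    For alpha in (0, 1) the tail of the series behaves like k^{-1/alpha},
    so the truncation error is negligible for large N."""
    rng = np.random.default_rng(rng)
    gamma = np.cumsum(rng.exponential(size=N))
    pts = gamma ** (-1.0 / alpha)
    return pts / pts.sum()

Z = poisson_dirichlet(0.5, rng=0)
```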
It will be more convenient to work with Radon (i.e.\ locally finite) intensity measures. \begin{lem}[Poisson Point Processes with Radon intensity measures]\label{le:HT} For each $n$, let $\xi^{n}_1,\xi^{n}_2,\ldots$ be a sequence of i.i.d.\ random variables on $\overline{\mathds{R}}:=\mathds{R}\cup\{\pm\infty\}$ such that \begin{equation}\label{asso1} n\, \mathds{P} ( \xi^{n}_1 \in \cdot) \overset{w}{\underset{n\to\infty}{\longrightarrow}} \nu\,, \end{equation} where $\nu$ is a Radon measure on $\mathds{R}$. Then, for any finite set $I \subset \mathds{N} $ the random measure $$ \sum_{i\in \{1,\ldots,n\}\setminus I} \delta_{\xi^{n}_i} $$ converges weakly as $n\to\infty$ to \emph{PPP($\nu$)}, the Poisson Point Process on $\mathds{R}$ with intensity measure $\nu$, for the usual vague topology on Radon measures. \end{lem} We refer to \cite[Theorem 5.3, p.\ 138]{resnick} for a proof of Lemma \ref{le:HT}. Note that for $\xi^{n}_j = a_n / U_{1,j}$ it is a consequence of Lemma \ref{le:PoiExt}(iii). In the case $\xi^{n}_j = a_n / X_{1,j}$, where $X_\ij$ is as in \eqref{levymatrix} and \eqref{theta}, the above lemma yields convergence to PPP($\nu_{\alpha,\theta}$), where \begin{equation}\label{appl} \nu_{\alpha,\theta}(dx) = \left[\theta \mathds{1}_{\{x > 0\}}+(1-\theta)\mathds{1}_{\{x < 0\}}\right] \alpha |x|^{\alpha-1} dx\,. \end{equation} \subsection{Local weak convergence to PWIT} \label{subsec:LWC} In the previous paragraph we have considered the convergence of the first row of the matrix $a_n^{-1}X$. Here we generalize this by characterizing the limiting local structure of the complete graph with marks $a_n / X_{i,j}$. Our argument is based on a technical generalization of an argument borrowed from Aldous \cite{aldous92}. This will lead us to Theorems \ref{th:convop} and \ref{th:convop2} below.
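For an exact Pareto model, the convergence \eqref{asso1} toward the measure $\nu_{\alpha,\theta}$ of \eqref{appl} is in fact an identity at finite $n$: if $|X_{1,j}|$ has tail $t^{-\alpha}$ on $[1,\infty)$, the sign is $+$ with probability $\theta$, and $a_n=n^{1/\alpha}$, then $n\,\mathds{P}(a_n/X_{1,j}\in(a,b]) = \theta(b^\alpha-a^\alpha)$ for $0<a<b$ with $a_n/b\geq 1$. A minimal sketch under these assumptions (the helper names are ours):

```python
def limit_mass(a, b, alpha, theta):
    # nu_{alpha,theta}((a,b]) = theta * (b^alpha - a^alpha) for 0 < a < b,
    # obtained by integrating theta * alpha * x^(alpha-1) over (a, b]
    return theta * (b ** alpha - a ** alpha)

def rescaled_mass(n, a, b, alpha, theta):
    # n * P(a_n / X in (a,b]) for |X| exact Pareto(alpha), P(X > 0) = theta:
    # {a_n/X in (a,b], X > 0} = {X > 0, |X| in [a_n/b, a_n/a)}, which has
    # probability theta * ((a_n/b)^(-alpha) - (a_n/a)^(-alpha))
    a_n = n ** (1.0 / alpha)
    return n * theta * ((a_n / b) ** (-alpha) - (a_n / a) ** (-alpha))

alpha, theta = 0.8, 0.6
for n in (10**3, 10**6):
    assert abs(rescaled_mass(n, 0.5, 2.0, alpha, theta)
               - limit_mass(0.5, 2.0, alpha, theta)) < 1e-9
```

The cancellation of $n$ against $a_n^{-\alpha}=1/n$ is exactly the scaling relation behind \eqref{asso1}.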
Let $G_n$ be the complete network on $\{1,\ldots,n\}$ whose mark on edge $(i,j)$ equals $\xi^n_{i,j}$, for some collection $(\xi^n_{i j})_{1 \leq i \leq j \leq n}$ of i.i.d.\ random variables with values in $\mathds{R}$, with $\xi^n _ {j,i} = \xi^n_{i,j}$. We consider the rooted network $(G_n,1)$ obtained by distinguishing the vertex labeled $1$. We follow Aldous \cite[Section 3]{aldous92}. For every fixed realization of the marks $(\xi^n_{i j})$, and for any $B,H\in\mathds{N}$ such that $(B^{H+1} - 1)/(B-1) \leq n$, we define a finite rooted subnetwork $(G_n,1)^{B,H}$ of $(G_n,1)$, whose vertex set coincides with a $B$--ary tree of depth $H$ with root at $1$. To this end we partially index the vertices of $(G_n,1)$ as elements in $$ J_{B,H} = \cup_{\ell=0}^H \{1,\ldots, B\}^\ell \subset \mathds{N}^f, $$ the indexing being given by an injective map $\sigma_n$ from $J_{B,H}$ to $V_n:=\{1,\dots,n\}$. The map $\sigma_n$ can be extended to a bijection from a subset of $\mathds{N}^f$ to $V_n$. We set $I_\eset = \{ 1 \}$, and the index of the root $1$ is $\sigma_n^{-1} (1) = \eset$. The vertex $v\in V_n \backslash I_{\eset}$ is given the index $(k) = \sigma_n^{-1} (v)$, $1 \leq k\leq B$, if $\xi^n_{1,v}$ has the $k^\text{th}$ smallest absolute value among $\{\xi^n_{1,j}, \,j\neq 1\}$, the marks of edges emanating from the root $1$. We break ties by using the lexicographic order. This defines the first generation. Now let $I_1$ be the union of $I_\eset$ and the $B$ vertices that have been selected. If $H\geq 2$, we repeat the indexing procedure for the vertex indexed by $(1)$ (the first child) on the set $V_n \backslash I_1$. We obtain a new set $\{11,\ldots,1B\}$ of vertices sorted by their weights as before (for brevity, we write $11$ for the vector $(1,1)$, and so on). Then we define $I_{2}$ as the union of $I_1$ and this new collection. We repeat the procedure for $(2)$ on $V_n \backslash I_{2}$ and obtain a new set $\{21,\ldots,2 B\}$, and so on.
When we have constructed $\{B1,\cdots,BB\}$, we have finished the second generation (depth $2$) and we have indexed $(B^{3} - 1)/(B-1)$ vertices. The indexing procedure is then repeated until depth $H$, so that $(B^{H+1} - 1)/(B-1)$ vertices are sorted. Call this set of vertices $V_n^{B,H} = \sigma_n J_{B,H} $. The subnetwork of $G_n$ generated by $V_n^{B,H}$ is denoted $(G_n,1)^{B,H}$ (it can be identified with the original network $G_n$ where any edge $e$ touching the complement of $V_n^{B,H}$ is given a mark $x_e=\infty$). In $(G_n,1)^{B,H}$, the set $\{\mathbf{u} 1,\cdots,\mathbf{u} B \}$ is called the set of children or offspring of the vertex $\mathbf{u}$. Note that while the vertex set has been given a tree structure, $(G_n,1)^{B,H}$ is still a complete network. The next proposition shows that it nevertheless converges to a tree (i.e.\ all circuits vanish or, equivalently, the extra marks diverge to $\infty$) if the $\xi^n_\ij$ satisfy a suitable scaling assumption. Let $(\cT,\eset)$ denote the infinite random rooted network with distribution $\mathrm{PWIT}(\nu)$. We call $(\cT,\eset)^{B,H}$ the finite random network obtained by the sorting procedure described in the previous paragraph. Namely, $(\cT,\eset)^{B,H}$ consists of the sub-tree with vertices of the form $\mathbf{u}\in J_{B,H}$, with the marks inherited from the infinite tree. If an edge is not present in $(\cT,\eset)^{B,H}$, we assign to it the mark $+\infty$. We say that the sequence of random finite networks $(G_n,1)^{B,H}$ converges in distribution (as $n\to\infty$) to the random finite network $(\cT,\eset)^{B,H}$ if the joint distributions of the marks converge weakly. To make this precise we have to add the points $\{\pm\infty\}$ as possible values for each mark, and continuous functions on the space of marks have to be understood as functions such that the limit as any one of the marks diverges to $+\infty$ exists and coincides with the limit as the same mark diverges to $-\infty$.
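The indexing procedure of the last two paragraphs is purely combinatorial and can be spelled out in a few lines of code. The sketch below uses our own notation (`xi` for the symmetric matrix of marks, `sigma` for the partial map $\sigma_n$ encoded as a dictionary from tuples in $J_{B,H}$ to vertices): each explored vertex adopts as children the $B$ unexplored vertices with smallest absolute connecting mark, ties broken by vertex label.

```python
def index_tree(xi, root, B, H):
    """Partially index the complete network with marks xi[i][j] as a B-ary
    tree of depth H rooted at `root`: each vertex adopts as children the B
    not-yet-indexed vertices whose connecting marks are smallest in
    absolute value (ties broken by vertex label)."""
    n = len(xi)
    sigma = {(): root}          # tuple in J_{B,H}  ->  vertex in {0,...,n-1}
    used = {root}
    frontier = [()]
    for _ in range(H):
        next_frontier = []
        for u in frontier:
            v = sigma[u]
            # marks toward vertices not explored at an earlier step
            candidates = sorted((w for w in range(n) if w not in used),
                                key=lambda w: (abs(xi[v][w]), w))
            for k, w in enumerate(candidates[:B], start=1):
                sigma[u + (k,)] = w
                used.add(w)
                next_frontier.append(u + (k,))
        frontier = next_frontier
    return sigma
```

For instance, with $n=7$, $B=H=2$ and marks $\xi_{i,j}=|i-j|$, the root $0$ adopts $1,2$ as children, vertex $1$ then adopts $3,4$, and vertex $2$ adopts $5,6$, exhausting all $(B^{H+1}-1)/(B-1)=7$ vertices.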
The next proposition generalizes \cite[Section 3]{aldous92}. \begin{prop}[Local weak convergence to a tree] \label{prop:LWC} Let $(\xi^n_{i,j})_{1 \leq i \leq j \leq n}$ be a collection of i.i.d.\ random variables with values in $\overline{\mathds{R}}:=\mathds{R}\cup\{\pm\infty\}$ and set $\xi^n _ {j,i} = \xi^n_{i,j}$. Let $\nu$ be a Radon measure on $\mathds{R}$ with no mass at $0$ and assume that \begin{equation}\label{asso} n \mathds{P}(\xi^n_{12} \in \cdot) \overset{w}{\underset{n\to\infty}{\longrightarrow}} \nu\,. \end{equation} Let $G_n$ be the complete network on $\{1,\ldots,n\}$ whose mark on edge $(i,j)$ equals $\xi^n_{i,j}$. Then, for all integers $B,H$, as $n$ goes to infinity, in distribution, $$ (G_n,1) ^{B,H} \longrightarrow (\cT,\eset) ^{B,H}.$$ Moreover, if $\cT_1,\cT_2$ are independent with common law $\mathrm{PWIT}(\nu)$, then, in distribution, $$( (G_n,1) ^{B,H},(G_n,2) ^{B,H})\longrightarrow ( (\cT_1,\eset) ^{B,H}, (\cT_2,\eset) ^{B,H}).$$ \end{prop} The second statement is the convergence of the joint law of the finite networks, where $(G_n,2) ^{B,H}$ is obtained with the same procedure as for $(G_n,1)^{B,H}$, by starting from the vertex $2$ instead of $1$. In particular, the second statement implies the first. This type of convergence is often referred to as \emph{local weak convergence}, a notion introduced by Benjamini and Schramm \cite{benjaminischramm}, Aldous and Steele \cite{aldoussteele}; see also Aldous and Lyons \cite{aldouslyons}. Let us give some examples of application of this proposition. Consider the case where $\xi^n_{i,j} = 1$ with probability $\lambda/n$ and $\xi^n_{i,j}=\infty$ otherwise. The network $G_n$ is an Erd\H{o}s-R\'enyi random graph with parameter $\lambda/n$. From the proposition, we retrieve the well--known fact that it locally converges to the tree of a Yule process of intensity $\lambda$.
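The Erd\H{o}s-R\'enyi example rests on nothing more than the binomial-to-Poisson limit: the root has $\mathrm{Bin}(n-1,\lambda/n)$ neighbors, and this law converges to a $\mathrm{Poisson}(\lambda)$ offspring law. A quick numerical check of this classical fact (the helper names are ours):

```python
import math

def binom_pmf(n, p, k):
    # P(Bin(n, p) = k)
    return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)

def poisson_pmf(lam, k):
    # P(Poisson(lam) = k)
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam, n = 2.0, 10**6
for k in range(6):
    # degree of the root in G(n, lam/n) vs the limiting Poisson law
    assert abs(binom_pmf(n - 1, lam / n, k) - poisson_pmf(lam, k)) < 1e-4
```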
If $\xi^n_{i,j}= n Y_{i,j}$, where $Y_{i,j}$ is any non-negative continuous random variable with density $1$ at $0+$, then the network converges to $\mathrm{PWIT}(\ell_1)$, where $\ell_1$ is the Lebesgue measure on $[0,\infty)$. The relevant application for our purpose is given by the choice $\xi^n_\ij=a_n / X_{i,j}$ and $\nu=\nu_{\alpha,\theta}$, where the $X_\ij$ are such that $|X_\ij|\in \mathds{H}_\alpha$ and \eqref{theta} is satisfied, and $\nu_{\alpha,\theta}$ is defined by \eqref{appl}. Note that the proposition applies to all $\alpha>0$ in this setting. \begin{proof}[Proof of Proposition \ref{prop:LWC}] We order the elements of $J_{B,H}$ by generation, lexicographically within each generation, i.e.\ $\eset \prec 1 \prec 2 \prec \dots \prec B \prec 11\prec 12\prec \dots \prec B \dots B$. For $\mathbf{v} \in J_{B,H}$, let $O_{\mathbf{v}}$ denote the set of offspring of $\mathbf{v}$ in $(G_{n},1)^{B,H}$. By construction, we have $I_\eset = \{1\}$ and $I_\mathbf{v} = \sigma_n( \cup_{\mathbf{w} \prec \mathbf{v}} O_{\mathbf{w}} )$. At every step of the indexing procedure, we sort the marks of the neighboring edges that have not been explored at an earlier step, i.e.\ the edges toward $\{1,\cdots, n\} \backslash I_1$, then $\{1,\ldots, n\} \backslash I_{2}$, and so on. Therefore, for all $\mathbf{u}$, \begin{equation}\label{eq:lwc1} ( \xi^n _{ \sigma_n(\mathbf{u}), i} ) _{i \notin I_\mathbf{u}} \overset{d}{=} ( \xi^n _{ 1, i} ) _{1 \leq i \leq n - |I _ \mathbf{u} | } . \end{equation} Thus, from Lemma \ref{le:HT} and the independence of the variables $\xi^n$, we infer that the marks from a parent to its offspring in $(G_n,1) ^{B,H}$ converge weakly to those in $(\cT, \eset)^{B,H}$. We now check that all other marks diverge to infinity. For $\mathbf{v}, \mathbf{w} \in J_{B,H}$, we define $$x^n_{\mathbf{v},\mathbf{w}} = \xi^n_{\sigma_n(\mathbf{v}),\sigma_n(\mathbf{w})}.$$ Also, let $\{y^n_{\mathbf{v},\mathbf{w}}\,,\;\mathbf{v}, \mathbf{w} \in J_{B,H}\}$ denote independent variables distributed as $|\xi^n_{1,2}|$.
Let $E^{B,H}$ denote the set of edges $\{\mathbf{u},\mathbf{v}\} \in J_{B,H}\times J_{B,H}$ that do not belong to the finite tree (i.e.\ there is no $k\in\{1,\dots,B\}$ such that $\mathbf{u}=\mathbf{v} k$ or $\mathbf{v}=\mathbf{u} k$). Lemma \ref{le:stochdom} below implies that the vector $\{|x^n_{\mathbf{v},\mathbf{w}}|\,,\;\{\mathbf{v}, \mathbf{w}\} \in E^{B,H}\}$ stochastically dominates the vector $\mathcal{Y}^n:=\{y^n_{\mathbf{v},\mathbf{w}}\,,\;\{\mathbf{v}, \mathbf{w}\} \in E^{B,H}\}$, i.e.\ there exists a coupling of the two vectors such that almost surely $|x^n_{\mathbf{v},\mathbf{w}}| \geq y^n_{\mathbf{v},\mathbf{w}}$, for all $\{\mathbf{v}, \mathbf{w}\} \in E^{B,H}$. Since $J_{B,H}$ is finite (independent of $n$), $\mathcal{Y}^n$ contains a finite number of variables, and \eqref{asso} implies that the probability of the event $\{\min_{\{\mathbf{v}, \mathbf{w}\} \in E^{B,H}}|x^n_{\mathbf{v},\mathbf{w}}| \leq t\}$ goes to $0$ as $n\to\infty$, for any $t>0$. Therefore it is now standard to obtain that, if $x_{e}$ denotes the mark of edge $e$ in $\cT^{B,H}$, the finite collection of marks $(x^n_{e})_{e \in J_{B,H} \times J_{B,H} }$ converges in distribution to $(x_{e})_{e \in J_{B,H} \times J_{B,H}}$ as $n\to\infty$. In other words, $(G_n,1) ^{B,H}$ converges in distribution to $(\cT, \eset)^{B,H}$. It remains to prove the second statement, which is an extension of the above argument. We consider the two subnetworks $(G_{n},1)^{B,H}$ and $(G_{n},2)^{B,H}$ obtained from $(G_n,1)$ and $(G_n,2)$. This gives rise to two increasing sequences of sets of vertices $I_{\mathbf{v},1}$ and $I_{\mathbf{v},2}$ with $\mathbf{v} \in J_{B,H}$ and two injective maps $\sigma_{n,1}$, $\sigma_{n,2}$ from $J_{B,H}$ to $\{1 , \cdots, n \}$. We need to show that, in distribution, \begin{equation}\label{eq:BH2} ( (G_n,1) ^{B,H},(G_n,2) ^{B,H}) \to ( (\cT_1,\eset) ^{B,H}, (\cT_2,\eset) ^{B,H}).
\end{equation} Let $V_{n,i}^{B,H} = \sigma_{n,i} ( J_{B,H}) $ be the vertex set of $(G_{n},i)^{B,H}$, $i=1,2$. There are $$ C := \frac{B^{H+1} - 1}{B-1} $$ vertices in $V_{n,i}^{B,H}$, hence the exchangeability of the variables implies that $$ \mathds{P}( 2 \in V_{n,1}^{B,H} ) \leq \frac{C}{n}. $$ Let $\widetilde G_n = G_n \backslash V_{n,1}^{B,H}$ be the subnetwork of $G_n$ spanned by the vertex set $V \backslash V^{B,H}_{n,1}$. Assuming that $2 (B^{H+1} - 1)/(B-1) < n$ and $2 \notin V_{n,1}^{B,H}$, we may then define $(\widetilde G_{n}, 2)^{B,H}$. If $2 \in V_{n,1}^{B,H}$, $(\widetilde G_{n}, 2)^{B,H}$ is defined arbitrarily. The above analysis shows that, in distribution, $$ ( (G_n,1) ^{B,H},(\widetilde G_n,2) ^{B,H}) \to ( (\cT_1,\eset) ^{B,H}, (\cT_2,\eset) ^{B,H}). $$ Therefore, in order to prove \eqref{eq:BH2} it is sufficient to prove that with probability tending to $1$, $$ V_{n,1}^{B,H} \cap V_{n,2}^{B,H} = \eset. $$ Indeed, on the event $\{V_{n,1}^{B,H} \cap V_{n,2}^{B,H} = \eset\}$, $(G_{n},2)^{B,H}$ and $(\widetilde G_{n},2)^{B,H}$ are equal. For $\mathbf{v} \in J_{B,H}$, let $O_{\mathbf{v},2}$ denote the set of offspring of $ \mathbf{v}$ in $(G_{n},2)^{B,H}$. We have $$ I_{\mathbf{v},2} = \{2\} \cup \bigcup_{\mathbf{w} \prec \mathbf{v}} O_{\mathbf{w},2}, $$ and $$ \mathds{P}( V_{n,1}^{B,H} \cap V_{n,2}^{B,H} \neq \eset) \leq \mathds{P}( 2 \in V_{n,1}^{B,H} ) + \sum_{\mathbf{v} = \eset}^{B\dots B} \mathds{P}( O_{\mathbf{v},2} \cap V_{n,1}^{B,H} \neq \eset \,|\, V_{n,1}^{B,H} \cap I_{\mathbf{v},2} = \eset ). $$ For any $\mathbf{u}, \mathbf{v} \in J_{B,H}$, if $V_{n,1}^{B,H} \cap I_{\mathbf{v},2} = \eset$, then $\sigma_{n,2} ( \mathbf{v})$ is neither the ancestor of $\sigma_{n,1}(\mathbf{u})$, nor an offspring of $\sigma_{n,1}(\mathbf{u})$.
From Lemma \ref{le:stochdom} below we deduce that, given $V_{n,1}^{B,H} \cap I_{\mathbf{v},2} = \eset$, the variable $|\xi^n_{\sigma_{n,1} (\mathbf{u}), \sigma_{n,2} (\mathbf{v})}|$ stochastically dominates $|\xi^n_{1,2}|$, and is independent of the i.i.d.\ vector $(|\xi^n_{\sigma_{n,2} (\mathbf{v}),k}|)_{k \in\{1,\dots,n\}\setminus (V_{n,1}^{B,H} \cup I_{\mathbf{v},2} )}$, whose entries have the law of $|\xi^n_{1,2}|$. It follows that $$ \mathds{P}(\sigma_{n,1}(\mathbf{u})\in O_{\mathbf{v},2}\,|\,V_{n,1}^{B,H} \cap I_{\mathbf{v},2} = \eset ) \leq \frac{B}{n - C - | I_{\mathbf{v},2} |}. $$ Therefore, \begin{eqnarray*} \mathds{P} ( O_{\mathbf{v},2} \cap V_{n,1}^{B,H} \neq \eset \,|\, V_{n,1}^{B,H} \cap I_{\mathbf{v},2} = \eset ) & \leq & \sum_{\mathbf{u} \in J_{B,H} } \mathds{P} ( \sigma_{n,1} (\mathbf{u}) \in O_{\mathbf{v},2} \,|\, V_{n,1}^{B,H} \cap I_{\mathbf{v},2} = \eset ) \\ & \leq & \frac{ C B }{ n - 2C}. \end{eqnarray*} Finally, $$ \mathds{P} ( V_{n,1}^{B,H} \cap V_{n,2}^{B,H} \neq \eset) \leq \frac C n + \frac{ C^2 B }{ n - 2C}, $$ which converges to $0$ as $n\to\infty$. \end{proof} We have used the following stochastic domination lemma. For any $B,H$ and $n$, let $\mathcal{E}_n^{B,H}$ denote the (random) set of edges $\{i,j\}$ of the complete graph on $\{1,\dots,n\}$ such that $\{\sigma_n^{-1}(i),\sigma_n^{-1}(j)\}$ is not an edge of the finite tree on $J_{B,H}$. By construction, any loop $\{i,i\}$ belongs to $\mathcal{E}_n^{B,H}$. Also, for $\mathbf{u}\neq\eset$ on the finite tree, let $g(\mathbf{u})$ denote the parent of $\mathbf{u}$. \begin{lem}[Stochastic domination]\label{le:stochdom} For any $n\in\mathds{N}$, and $B,H\in\mathds{N}$ such that $$ \frac{B^{H+1} - 1}{B-1} \leq n, $$ the random variables $$ \{|\xi^n_{i,j}|\,,\;\{i,j\}\in \mathcal{E}_n^{B,H} \} $$ stochastically dominate i.i.d.\ random variables with the same law as $|\xi^n_{1,2}|$.
Moreover, for every $\eset\neq \mathbf{u}\in J_{B,H}$, the random variables $$ \{|\xi^n_{\sigma_n(\mathbf{u}),i}|\,,\;i\in\{1,\dots,n\}\setminus \sigma_n(g(\mathbf{u}))\}\,, $$ stochastically dominate i.i.d.\ random variables with the same law as $|\xi^n_{1,2}|$. \end{lem} \begin{proof}[Proof of Lemma \ref{le:stochdom}] The censoring process which deletes the edges that belong to the tree on $J_{B,H}$ has the property that at each step the $B$ lowest absolute values are deleted from some \emph{fresh} (previously unexplored) subset of edge marks. Using this and the fact that the edge marks $\xi^n_\ij$ are i.i.d., we see that both claims in the lemma are implied by the following simple statement. Let $Y_1,\dots,Y_m$ denote i.i.d.\ positive random variables. Suppose $m=n_1 + \cdots + n_\ell$, for some positive integers $\ell$, $n_1,\dots,n_\ell$, and partition the $m$ variables into $\ell$ blocks $I^1,\dots,I^\ell$ of $n_1,\dots,n_\ell$ variables each. Fix some non-negative integers $k_j$ such that $k_j\leq n_j$ and call $q_1^j,\dots,q^j_{k_j}$ the (random) indices of the $k_j$ lowest values of the variables in the block $I^j$ (so that $Y_{q^1_1}$ is the lowest of $Y_1,\dots,Y_{n_1}$, $Y_{q^1_2}$ is the second lowest of $Y_1,\dots,Y_{n_1}$, and so on). Consider the random index sets of the $k_j$ minimal values in the $j^{th}$ block, $J^j:=\cup_{i=1}^{k_j}\{q^j_i\}$, and set $J=\cup_{j=1}^\ell J^j$. If $k_j=0$ we set $J^j=\eset$. Finally, let $\widetilde Y$ denote the vector $\{Y_i\,, \;i=1,\dots,m\,;\; i\not\in J\}$. Then we claim that $\widetilde Y$ stochastically dominates $m-\sum_{j=1}^\ell k_j$ i.i.d.\ copies of $Y_1$. Indeed, the coupling can be constructed as follows. We first extract a realization $y_1,\dots,y_m$ of the whole vector. Given this, we isolate the index sets $J^1,\dots,J^\ell$ within each block. We then consider two vectors $\cZ,\cV$ obtained as follows.
The vector $\cZ=(z^1_1,\dots,z^1_{n_1-k_1},z^2_1,\dots,z^2_{n_2-k_2},\dots,z^\ell_{n_\ell-k_\ell})$ is obtained by extracting the $n_1-k_1$ values $z^1_1,\dots,z^1_{n_1-k_1}$ uniformly at random (without replacement) from the values $y_1,\dots,y_{n_1}$ (in the block $I^1$), the $n_2-k_2$ values $z^2_1,\dots,z^2_{n_2-k_2}$ in the same way from the values $y_{n_1+1},\dots,y_{n_1+n_2}$ (in the block $I^2$), and so on. On the other hand, the vector $\cV=(v^1_1,\dots,v^1_{n_1-k_1},v^2_1,\dots,v^2_{n_2-k_2},\dots,v^\ell_{n_\ell-k_\ell})$ is obtained as follows. For the first block we take $v^1_i$, $i=1,\dots, n_1-k_1$, equal to $z^1_i$ whenever an index $i\in I^1\setminus J^1$ was picked for the vector $z^1_1,\dots,z^1_{n_1-k_1}$, and we assign the remaining values (if any) through an independent uniform permutation of those variables $y_i,\,i\in I^1\setminus J^1$ which were not picked for the vector $z^1_1,\dots,z^1_{n_1-k_1}$. We repeat this procedure for all other blocks to assign all values of $\cV$. By construction, $\cV\geq \cZ$ coordinate-wise. The conclusion follows from the observation that $\cZ$ is distributed like a vector of $m-\sum_{j=1}^\ell k_j$ i.i.d.\ copies of $Y_1$, while $\cV$ is distributed like our vector $\widetilde Y$. \end{proof} \subsection{Proof of Theorem \ref{th:convop}} \begin{proof}[Proof of Theorem \ref{th:convop}(i)] Let $\nu=\nu_{\alpha,\theta}$ be as in \eqref{appl} and let $(\cT_\alpha,\eset)$ be a realization of the $\mathrm{PWIT}(\nu)$. The mark on edge $(\mathbf{v} ,\mathbf{v} k)$ in $\cT_\alpha$ is denoted by $x_{(\mathbf{v}, \mathbf{v} k)}$ or simply $x_{\mathbf{v} k}$. By definition, we have $x_{(\mathbf{v},\mathbf{w})} = \infty$ if $\mathbf{v}$ and $\mathbf{w}$ are at graph-distance different from $1$.
In particular, if we set $y_\mathbf{v} = \mathrm{sign}(x_{\mathbf{v}})| x_{\mathbf{v}}|^{\alpha}$, then the point sets $\Xi_\mathbf{v} = \{y_{\mathbf{v} k } \}_{k \geq 1}$ are independent Poisson point processes of intensity $\ell_\theta = \theta \mathds{1}_{\{x > 0\}}dx + (1-\theta) \mathds{1}_{\{x < 0\}}dx$. We may thus build a realization of the operator $\bT$ on $\cT_\alpha$, cf.\ \eqref{tone}. Let $G_n$ be the complete network on $\{1,\ldots,n\}$ whose mark on edge $(i,j)$ is $\xi^n_{i,j}:=a_n/X_{i,j}$. Next, we apply Proposition \ref{prop:LWC}. For all $B,H$, $(G_n,1)^{B,H}$ converges weakly to $(\cT_\alpha,\eset)^{B,H}$. Let $\sigma_n^{B,H}$ be the map $\sigma_n$ associated to the network $(G_n,1)^{B,H}$ (see the construction given before Proposition \ref{prop:LWC}). From the Skorokhod Representation Theorem we may assume that $(G_n,1)^{B,H}$ converges a.s.\ to $(\cT_\alpha,\eset)^{B,H}$ for all $B,H$. Thus we may find sequences $B_n,H_n$ tending to infinity, such that $(B_n ^{H_n +1} - 1 )/ (B_n - 1) \leq n$ and such that for any pair $\mathbf{u},\mathbf{v}\in\mathds{N}^f$ we have $\xi^n_{(\widetilde\sigma_n (\mathbf{u}), \widetilde\sigma_n (\mathbf{v}))} \to x_{(\mathbf{u}, \mathbf{v})}$ a.s.\ as $n\to\infty$, where $\widetilde\sigma_n:=\sigma_n^{B_n,H_n}$. The map $\widetilde\sigma_n$ can be extended to a bijection $\mathds{N}^f\to\mathds{N}^f$. It follows that a.s.\ \begin{equation} \langle \delta_\mathbf{u} , \widetilde\sigma_n^{-1} (a_n^{-1} X ) \widetilde\sigma_n \delta_\mathbf{v} \rangle = \frac{ 1}{ \xi^n_{(\widetilde\sigma_n (\mathbf{u}),\widetilde \sigma_n (\mathbf{v}))} } \to \frac{ 1}{x_{(\mathbf{u}, \mathbf{v})} }= \langle \delta_\mathbf{u} , \bT \delta_\mathbf{v} \rangle\,. \label{pointw} \end{equation} Fix $\mathbf{v}\in\mathds{N}^f$, and set $\psi_n^\mathbf{v} := \widetilde\sigma_n^{-1} (a_n^{-1} X)\widetilde\sigma_n \delta_\mathbf{v}$. 
To prove Theorem \ref{th:convop}$(i)$ it is sufficient to show that $\psi_n^\mathbf{v}\to \bT\delta_\mathbf{v}$ in $L^2( \mathds{N}^f)$ almost surely as $n\to\infty$, i.e.\ \begin{equation}\label{l2con} \sum_{\mathbf{u}} (\left\langle\delta_\mathbf{u},\psi_n^\mathbf{v}\right\rangle - \left\langle\delta_\mathbf{u},\bT\delta_\mathbf{v}\right\rangle)^2 \to 0\,. \end{equation} Since from \eqref{pointw} we know that $\left\langle \delta_\mathbf{u},\psi_n^\mathbf{v}\right\rangle\to \left\langle \delta_\mathbf{u},\bT\delta_\mathbf{v}\right\rangle$ for every $\mathbf{u}$, the claim follows if we have (almost surely) uniform (in $n$) square-integrability of $(\left\langle\delta_\mathbf{u},\psi_n^\mathbf{v}\right\rangle)_\mathbf{u} $. This in turn follows from Lemma \ref{le:stochdom} and Lemma \ref{le:PoiExt}(i). The proof of Theorem \ref{th:convop}(i) is complete. \end{proof} \begin{proof}[Proof of Theorem \ref{th:convop}(ii)] We need the following two facts: \begin{equation}\label{eq:llnweak} \lim_{n\to\infty}\frac{\rho_1}{n}=1\;\text{ in probability}\,, \end{equation} and there exists $\delta>0$ such that \begin{equation}\label{eq:llninf} \liminf_{n\to\infty}\min_{1\leq i\leq n}\frac{\rho_i}{n}>\delta\;\text{ a.s.} \end{equation} Clearly, \eqref{eq:llnweak} is a law of large numbers and actually holds a.s.\ (recall that for $\alpha>1$ we assume the mean of $U_{i,j}$ to be $1$). Let us establish the a.s.\ uniform bound \eqref{eq:llninf}. For every $\epsilon>0$, there exists $R>0$ such that $\mathds{E}(U_{i,j}\mathds{1}_{\{U_{i,j}<R\}})\geq 1-\epsilon$. If we define $\rho_i^R=\sum_{j=1}^nU_{i,j}\mathds{1}_{\{U_{i,j}<R\}}$, then $$ \liminf_{n\to\infty}\min_{1\leq i\leq n}\frac{\rho_i}{n} \geq \liminf_{n\to\infty}\min_{1\leq i\leq n}\frac{\rho_i^R}{n}\,. $$ Therefore \eqref{eq:llninf} is implied by the uniform law of large numbers in \cite[Lemma 2.2]{bordenave-caputo-chafai}, applied to the bounded variables $U_{i,j}\mathds{1}_{\{U_{i,j}<R\}}$.
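The truncation step used for \eqref{eq:llninf} relies on the elementary fact that $\mathds{E}(U\mathds{1}_{\{U<R\}})\to\mathds{E}(U)$ as $R\to\infty$ when the mean is finite. For a concrete mean-one Pareto weight with $\alpha\in(1,2)$ one has the closed form $\mathds{E}(w\mathds{1}_{\{w<R\}})=1-(mR)^{1-\alpha}$ with $m=\alpha/(\alpha-1)$, which can be checked by numerical quadrature (the density and names below are our illustrative choice, not the paper's model):

```python
def truncated_mean(alpha, R, steps=200_000):
    # midpoint-rule integration of v * alpha * v^(-alpha-1) over [1, m*R],
    # divided by the mean m = alpha/(alpha-1): E[w 1_{w<R}] for w = V/m,
    # V a Pareto(alpha) variable on [1, infinity)
    m = alpha / (alpha - 1.0)
    t = m * R
    h = (t - 1.0) / steps
    total = 0.0
    for i in range(steps):
        v = 1.0 + (i + 0.5) * h
        total += v * alpha * v ** (-alpha - 1.0) * h
    return total / m

alpha, R = 1.5, 100.0
exact = 1.0 - (alpha / (alpha - 1.0) * R) ** (1.0 - alpha)
assert abs(truncated_mean(alpha, R) - exact) < 1e-3
assert exact > 0.9   # already within epsilon = 0.1 of the full mean at R = 100
```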
Next, we claim that for all $\mathbf{u} \in \mathds{N}^f$, in probability \begin{equation}\label{eq:sigmalln} \lim_{n\to\infty} \frac{\rho_{\widetilde\sigma_n( \mathbf{u} ) }}{n} =1. \end{equation} To prove this we first observe that by Lemma \ref{le:stochdom} and \eqref{eq:llnweak} we have in probability $$ \limsup_{n\to\infty} \frac{ \left( \rho_{\widetilde\sigma_n( \mathbf{u} ) } - U_{\widetilde\sigma_n( \mathbf{u} ) , \widetilde\sigma_n(g (\mathbf{u} ))} \right) } {n} \leq 1. $$ On the other hand $U_{\widetilde\sigma_n( \mathbf{u} ) , \widetilde\sigma_n(g (\mathbf{u} ))}$ is stochastically dominated by the maximum of $n$ i.i.d.\ variables with law $U_\ij$. The latter converges in distribution on the scale $a_n$ (cf.\ Lemma \ref{le:PoiExt}$(i)$) and we know that $a_n/n \to 0$. It follows that in probability $\limsup_{n\to\infty} \rho_{\widetilde\sigma_n( \mathbf{u} ) }/ n \leq 1$. Next, we can estimate $$ \rho_{\widetilde\sigma_n( \mathbf{u} ) } \geq \sum_{i\in\{1,\ldots, n\}\setminus I_\mathbf{u}} U_{\widetilde\sigma_n( \mathbf{u} ),i}\,. $$ Now, observe that if $\mathbf{u} \in \mathds{N}^f$ belongs to generation $h$, then the set $I_\mathbf{u}$ contains at most $O(B_n^h)$ elements, while $n$ is at least of order $B_n^{H_n}$, where $B_n,H_n$ are the sequences used in the proof of Theorem \ref{th:convop}$(i)$. In particular, it follows that $|I_\mathbf{u}|=o(n)$ and therefore \eqref{eq:lwc1} and \eqref{eq:llnweak} imply that $\liminf_{n\to\infty} \rho_{\widetilde\sigma_n( \mathbf{u} ) }/ n\geq 1$ in probability. This proves \eqref{eq:sigmalln}. Thanks to \eqref{eq:sigmalln}, from the Slutsky lemma and the Skorokhod Representation Theorem, we may also assume that for each $\mathbf{v}\in\mathds{N}^f$, $\rho_{\widetilde\sigma_n(\mathbf{v})}/n$ converges a.s.\ to $1$. 
We need to show that for each $\mathbf{v}\in \mathds{N}^f$, \eqref{l2con} holds with the new vector $\psi_n^\mathbf{v}:= \widetilde\sigma_n^{-1} (\kappa_n S)\widetilde\sigma_n \delta_\mathbf{v}$, i.e.\ $$ \left\langle \delta_\mathbf{w},\psi_n^\mathbf{v}\right\rangle = \kappa_n\, \frac{U_{\widetilde\sigma_n (\mathbf{w}),\widetilde\sigma_n(\mathbf{v})}}{\sqrt{\rho_{\widetilde\sigma_n (\mathbf{v})}\rho_{\widetilde\sigma_n (\mathbf{w})}}}\,. $$ Thanks to (\ref{eq:llninf}), $(\left\langle \delta_\mathbf{w},\psi_n^\mathbf{v}\right\rangle)_\mathbf{w}$ is uniformly square-integrable (cf.\ the proof of (\ref{l2con})), and all we have to check is that $(\left\langle \delta_\mathbf{w},\psi_n^\mathbf{v}\right\rangle - \left\langle \delta_\mathbf{w},\bT\delta_\mathbf{v}\right\rangle)^2\to 0$ for fixed $\mathbf{w}$. Here $\bT$ is the operator appearing in the proof of Theorem \ref{th:convop}(i) above, now with the choice $\theta=1$. We have \begin{align*} &(\left\langle \delta_\mathbf{w},\psi_n^\mathbf{v}\right\rangle - \left\langle \delta_\mathbf{w},\bT\delta_\mathbf{v}\right\rangle)^2 \\ &\quad\leq 2 \left(a_n^{-1}U_{\widetilde\sigma_n (\mathbf{w}),\widetilde\sigma_n (\mathbf{v})}\left(1-n/ \sqrt{\rho_{\widetilde\sigma_n (\mathbf{v})}\rho_{\widetilde\sigma_n (\mathbf{w})}}\right)\right)^2 + 2(a_n^{-1}U_{\widetilde\sigma_n (\mathbf{w}),\widetilde\sigma_n (\mathbf{v})} - \left\langle \delta_\mathbf{w},\bT\delta_\mathbf{v}\right\rangle)^2\,. \end{align*} The second term above converges to zero as in the proof of point $(i)$. For the first term we use $\rho_{\widetilde\sigma_n(\mathbf{v})}/n \to 1$ and $\rho_{\widetilde\sigma_n(\mathbf{w})}/n \to 1$. This proves point $(ii)$. \end{proof} \begin{proof}[Proof of Theorem \ref{th:convop}(iii)] The setting is as in the proof of point $(ii)$ above, but now $\alpha\in(0,1)$. We build the operator $\mathbf{S}$ on the tree $\cT_\alpha$ as in \eqref{opsym}.
We need to prove that for any $\mathbf{v}\in\mathds{N}^f$, a.s.\ \begin{equation} \label{tos} \sum_{\mathbf{w}}(\left\langle \delta_\mathbf{w},\psi_n^\mathbf{v}\right\rangle - \left\langle \delta_\mathbf{w},\mathbf{S}\delta_\mathbf{v}\right\rangle)^2 \to 0\,, \end{equation} with $\psi_n^\mathbf{v}:= \widetilde\sigma_n^{-1} S \widetilde\sigma_n \delta_\mathbf{v}$, i.e.\ $$ \left\langle \delta_\mathbf{w},\psi_n^\mathbf{v}\right\rangle = \frac{U_{\widetilde\sigma_n (\mathbf{w}),\widetilde\sigma_n(\mathbf{v})}}{\sqrt{\rho_{\widetilde\sigma_n (\mathbf{v})}\rho_{\widetilde\sigma_n (\mathbf{w})}}}\,. $$ Let us first show that for any $\mathbf{v},\mathbf{w}\in\mathds{N}^f$ we have a.s.\ \begin{equation} \label{eq:convK} \frac{U_{\widetilde\sigma_n (\mathbf{w}), \widetilde\sigma_n (\mathbf{v})} } { \sqrt{\rho_{\widetilde\sigma_n (\mathbf{v})}\rho_{\widetilde\sigma_n (\mathbf{w})}} } \to \frac{\left\langle \delta_\mathbf{w}, \bT \delta_\mathbf{v}\right\rangle} {\sqrt{\rho(\mathbf{v})\rho(\mathbf{w})}} = \left\langle \delta_\mathbf{w},\mathbf{S}\delta_\mathbf{v}\right\rangle\,. \end{equation} Multiplying and dividing by $a_n$ and using \eqref{pointw} with $\theta=1$, we see that \eqref{eq:convK} holds if \begin{equation}\label{rhoss} a_n^{-1}\rho_{\widetilde\sigma_n (\mathbf{v})} \to \rho(\mathbf{v})\,, \end{equation} almost surely, for every $\mathbf{v}\in\mathds{N}^f$. In turn, \eqref{rhoss} can be proved as follows. Let $k \in \mathds{N}$, and consider the tree with vertex set $J_{k,k}$, obtained as in Proposition \ref{prop:LWC} with $B=H=k$. Since $J_{k,k}$ is a finite set, for any $\mathbf{v}$, \eqref{pointw} implies that a.s. $$ a_n^{-1}\sum_{\mathbf{u} \in J_{k,k}} U_{\widetilde\sigma_n (\mathbf{v}), \widetilde\sigma_n (\mathbf{u}) }\to {\sum_{\mathbf{u} \in J_{k,k}} x^{-1}_{\mathbf{v},\mathbf{u} }}\,.
$$ By Lemma \ref{le:stochdom} and Lemma \ref{le:PoiExt}(ii), $\sum_{\mathbf{u} \notin J_{k,k}} a_n^{-1}\,U_{\widetilde\sigma_n (\mathbf{v}), \widetilde\sigma_n (\mathbf{u})}$ a.s.\ converges uniformly (in $n$) to $0$ as $k$ goes to infinity. This proves \eqref{rhoss} and \eqref{eq:convK}. Once we have \eqref{eq:convK}, to conclude the proof it is sufficient to show that a.s.\ \begin{equation}\label{unifint} \lim_{k\to\infty}\,\sup_n \,\sum_{\mathbf{w} \notin J_{k,k}}(\left\langle \delta_\mathbf{w},\psi_n^\mathbf{v}\right\rangle)^2 = 0\,. \end{equation} However, using \eqref{rhoss} and the simple bound $(\left\langle \delta_\mathbf{w},\psi_n^\mathbf{v}\right\rangle)^2\leq \frac{U_{\widetilde\sigma_n (\mathbf{v}),\widetilde\sigma_n(\mathbf{w})}}{\rho_{\widetilde\sigma_n (\mathbf{v})}}$, we have that \eqref{unifint} again follows from an application of Lemma \ref{le:stochdom} and Lemma \ref{le:PoiExt}(ii). This completes the proof of Theorem \ref{th:convop}(iii). \end{proof} \subsection{Two-points local operator convergence} In the proof of the main theorems, we will need a stronger version of Theorem \ref{th:convop}. Define the $2 n \times 2 n$ matrices $$ X \oplus X \quad \text{ and } \quad S \oplus S, $$ where ``$\oplus$'' denotes the usual direct sum decomposition: $X\oplus X (\phi_1,\phi_2) = (X \phi_1, X \phi_2)$, for $n$-dimensional vectors $\phi_1,\phi_2$. As for the limiting operators, we realize them on the Hilbert space $L^2(V)\oplus L^2(V)$ with $V=\mathds{N}^f$. We consider two independent realizations $\cT^1_\alpha$, $\cT^2_\alpha$ of the PWIT($\ell_\theta$), and call $\bT_1, \mathbf{S}_1,\bT_2,\mathbf{S}_2$ the associated operators as in Section \ref{limop}. We may then define $$ \bT_1 \oplus \bT_2 \quad \text{ and } \quad \mathbf{S}_1\oplus \mathbf{S}_2.
$$ By Proposition \ref{prop:LWC}, $((G_n,1)^{B,H},(G_n,2)^{B,H})$ converges weakly to $((\cT^1_\alpha , \eset)^{B,H} , (\cT^2_\alpha , \eset)^{B,H} )$. As before we can view the matrices $X\oplus X$ and $S\oplus S$ as bounded self--adjoint operators on $L^2(V)\oplus L^2(V)$. Therefore, arguing as in the proof of Theorem \ref{th:convop}, it follows that, in distribution, for all $(\phi_1, \phi_2) \in \cD \times \cD$, $$ \sigma_{n}^{-1} a_n^{-1} X \oplus X \sigma_n (\phi_1,\phi_2) \rightarrow \bT_1\oplus \bT_2 (\phi_1,\phi_2) \,, $$ where $\sigma_n = \sigma_n^1 \oplus \sigma_n^2$ and, as above, for $i \in \{1,2\}$, $\sigma_n^i$ is a bijection on $\mathds{N}^f$ extending the injective indexing map from $J_{B,H}$ to $\{1,\dots,n\}$, such that $\sigma_n^i (\eset) = i$. Analogous convergence results hold for the matrix $S\oplus S$. We can thus extend the statement of Theorem \ref{th:convop} to the following local convergence of operators in $L^2(V)\oplus L^2(V)$. To avoid lengthy repetitions we omit the details of the proof. \begin{thm}\label{th:convop2} As $n$ goes to infinity, in distribution, \begin{enumerate} \item[(i)] if $\alpha \in (0,2)$ then $(a_n ^{-1} X\oplus a_n^{-1} X ,(1,2) ) \to (\bT_1 \oplus \bT_2 ,(\eset,\eset))$, \item[(ii)] if $\alpha \in (1,2)$ and $\theta=1$ then $( \kappa_n S \oplus \kappa_n S ,(1,2)) \to (\bT_1 \oplus \bT_2 , (\eset,\eset) )$, \item[(iii)] if $\alpha \in (0,1)$ then $(S\oplus S ,(1,2)) \to (\mathbf{S}_1 \oplus \mathbf{S}_2 ,(\eset,\eset))$. \end{enumerate} \end{thm} \section{Convergence of the Empirical Spectral Distributions} \label{se:convesd} \subsection{Markov matrix, \texorpdfstring{$\alpha\in(0,1)$}{alpha in (0,1)}: Proof of Theorem \ref{th:k01}} \label{sec:k01} Recall that $\mathbf{S}$ is a bounded self--adjoint operator on $L^2(V)$, whose spectrum is contained in $[-1,1]$, cf.\ \eqref{isom}.
The resolvents of $S$ and $\mathbf{S}$ are the functions on $\mathds{C}_+ = \{z \in \mathds{C} : \Im z > 0 \}$: $$ R^{(n)} (z) = ( S - z I) ^{-1} \quad \text{ and } \quad R (z) = ( \mathbf{S} - z I) ^{-1}. $$ For $\ell \in \mathds{N}$, set \begin{equation}\label{gammal} {\bf p}_\ell:= \langle \delta_\eset , \mathbf{S}^{\ell}\delta_\eset \rangle\,. \end{equation} Note that ${\bf p}_\ell=\frac1{\rho(\eset)}\,\langle \delta_\eset , \mathbf{K}^{\ell} \delta_\eset \rangle_\rho$ is the probability that the random walk on the PWIT associated to the stochastic operator $\mathbf{K}$ comes back to the root (where it started) after $\ell$ steps. In particular, ${\bf p}_\ell=0$ for $\ell$ odd. We set ${\bf p}_0=1$. Let $\mu_\eset$ denote the spectral measure of $\mathbf{S}$ associated to $\delta_\eset$ (see e.g.\ \cite[Chapter VII]{reedsimon}). Equivalently, $\mu_\eset$ is the spectral measure of $\mathbf{K}$ associated to the $L^2(V,\rho)$ normalized vector $\hat\delta_\eset:= \delta_\eset/\sqrt{\rho(\eset)}$, cf.\ \eqref{isom}. In particular, $\mu_\eset$ is a probability measure supported on $[-1,1]$ and such that ${\bf p}_\ell = \int_{-1}^{1}x^\ell\mu_\eset(dx)$, for every $\ell$. Since all odd moments vanish, $\mu_\eset$ is symmetric. Moreover, for any $z\in\mathds{C}_+$ we have $$ \langle \delta_\eset , R (z) \delta_\eset \rangle = \int_{-1}^1 \frac{ \mu_\eset (dx)}{x - z}, $$ i.e.\ $\langle \delta_\eset , R (z)\delta_\eset \rangle $ is the Cauchy--Stieltjes transform of $\mu_\eset$. Recall that the Cauchy--Stieltjes transform of a probability measure $\mu$ on $\mathds{R}$ is the analytic function on $\mathds{C}_+$ given by $$ m_\mu (z) = \int_\mathds{R} \frac{\mu(dx)}{x - z}.
$$ The function $m_\mu$ characterizes the measure $\mu$, $|m_\mu(z) | \leq (\Im z) ^{-1}$, and weak convergence of $\mu_n$ to $\mu$ is equivalent to the convergence $m_{\mu_n}(z)\to m_\mu(z)$ for all $z\in\mathds{C}_+$. By construction, $$ \frac 1 n \mathrm{tr} R^{(n)} (z) = \int_{-1}^1 \frac{ \mu_K (dx)}{x - z} = m_{\mu_K} (z)\,, $$ where $\mu_K$ is the ESD of $K$, which coincides with the ESD of $S$. Using exchangeability and linearity, we get $$ \mathds{E} R^{(n)}_{1,1} (z) = \mathds{E} m_{\mu_K} (z) = m_{\mathds{E} \mu_K} (z). $$ Since $R^{(n)}_{1,1} (z)$ is bounded in modulus by $(\Im z) ^{-1}$, we may apply Theorem \ref{th:strongres} and Theorem \ref{th:convop}, and obtain, for all $z \in \mathds{C}_+$, \begin{equation}\label{eq:Kav} \lim_{n \to \infty} m_{\mathds{E} \mu_K} (z) = m_{\mathds{E} \mu_\eset} (z). \end{equation} We define $$ \widetilde \mu_\alpha = \mathds{E} \mu_\eset. $$ Next, we shall prove that, for all $z \in \mathds{C}_+$: \begin{equation}\label{eq:KL1} \lim_{n \to \infty} \mathds{E} \left| m_{ \mu_K} (z) - m_{\mathds{E} \mu_\eset} (z) \right| = 0. \end{equation} We have $$ \mathds{E} | m_{\mu_K} (z) - m_{ \mathds{E} \mu_\eset} (z) | \leq \mathds{E} \left| m_{\mu_K} (z) - \mathds{E} m_{\mu_K} (z) \right| + \left| m_{\mathds{E} \mu_K} (z) - m_{\mathds{E} \mu_\eset} (z)\right|\,. $$ On the right hand side, the second term converges to $0$ by \eqref{eq:Kav}. The first term is equal to $$ \mathds{E} \left| \frac1n\sum_{k=1}^n \left[ R ^{(n)}_{k,k} (z)- \mathds{E} R ^{(n)}_{k,k} (z) \right] \right| .
$$ By exchangeability, we note that \begin{align*} &\mathds{E} \left[\left( \frac 1 n \sum_{k=1}^n \left[ R ^{(n)}_{k,k} (z)- \mathds{E} R ^{(n)}_{k,k} (z) \right] \right)^2\right] \\ & \qquad= \frac 1 n \mathds{E} \left(R ^{(n)}_{1,1} - \mathds{E} R ^{(n)}_{1,1} \right)^2 + \frac{n(n-1)}{n^2} \mathds{E} \left[ \left(R ^{(n)}_{1,1} - \mathds{E} R ^{(n)}_{1,1} \right)\left(R ^{(n)}_{2,2} - \mathds{E} R ^{(n)}_{2,2}\right) \right] \\ & \qquad\leq \frac{1}{n (\Im z)^2} + \mathds{E} \left[ \left(R ^{(n) }_{1,1} - \mathds{E} R ^{(n)}_{1,1} \right)\left(R ^{(n) }_{2,2} - \mathds{E} R ^{(n) }_{2,2}\right) \right]. \end{align*} Theorem \ref{th:strongres} and Theorem \ref{th:convop2} imply that $(R^{(n)}_{1,1}(z),R^{(n)}_{2,2}(z))$ are asymptotically independent. Since these variables are bounded, they are also asymptotically uncorrelated, and \eqref{eq:KL1} follows. Finally, observe that the sequence of measures $\mu_K$ is a.s.\ tight. Therefore the convergence \eqref{eq:KL1} is sufficient to establish a.s.\ convergence of $\mu_K$ to $\widetilde\mu_\alpha$. This completes the proof of Theorem \ref{th:k01}. \subsection{I.i.d.\ matrix, \texorpdfstring{$\alpha\in(0,2)$}{alpha in (0,2)}: Proof of Theorem \ref{th:iida}}\label{sec:iida} Set $A_n = a_n^{-1} X$. For $z\in \mathds{C}_+$, we define the Cauchy--Stieltjes transform $$m_{A_n} (z) = \int \frac{d \mu_{A_n}(x)} {x - z} = \frac 1 n \sum_{k=1}^n R^{(n)}_{k,k} (z) \,,$$ where $$ R^{(n)} (z) = (A_n - z I)^{-1} $$ is the resolvent of $A_n$. By exchangeability, $\mathds{E} m_{A_n} (z) = \mathds{E} R^{(n)}_{1,1}(z)$. From Proposition \ref{esssa} we know that $\bT$ is self--adjoint. Therefore from Theorem \ref{th:strongres} and Theorem \ref{th:convop} we infer \begin{equation}\label{meancon} \mathds{E} m_{A_n} (z) \to \mathds{E} h(z)\,,\quad h(z):=\scalar{\delta_\eset}{(\bT-z I)^{-1}\delta_\eset}\,.
\end{equation} As in the proof of Theorem \ref{th:k01} we may write $\mathds{E} h(z) = \mathds{E} m_{\mu_\eset}(z)=m_{\mathds{E}\mu_\eset}(z)$, that is, the Cauchy--Stieltjes transform of the expected value of the random spectral measure $\mu_\eset$ associated to $\bT$ at the root vector $\delta_\eset$. From \eqref{meancon} we obtain the weak convergence of $\mathds{E} \mu_{A_n}$ to $\mu_\alpha:=\mathds{E}\mu_\eset$. To obtain a.s.\ weak convergence of $\mu_{A_n}$ to $\mu_\alpha$, from Lemma \ref{astight} it suffices to prove the $L^1$ convergence of Cauchy--Stieltjes transforms as in \eqref{eq:KL1}. This in turn is obtained by repeating word by word the argument in the proof of Theorem \ref{th:k01}. Thus, we have obtained $\mu_{A_n}\to\mu_\alpha$ almost surely. Since the operator $\bT$ only depends on the two parameters $\alpha$ and $\theta$, where the latter is defined by \eqref{theta}, the LSD $\mu_\alpha$ might still depend on the parameter $\theta$. However, the fact that $\mu_\alpha$ is independent of $\theta$ follows from Lemma \ref{le:itunique} below, which implies in particular that the values $m_{\mu_\alpha}(i t)=\mathds{E}[h(i t)]$, $t>0$, are uniquely determined by $\alpha$, and therefore, by analyticity, all values $m_{\mu_\alpha}(z)$, $z\in\mathds{C}_+$, are uniquely determined by $\alpha$. This ends the proof of Theorem \ref{th:iida}. We remark that in the proof of Theorem \ref{th:iida} one can avoid establishing \eqref{eq:KL1} plus almost sure tightness (Lemma \ref{astight}$(i)$) as we do above. Namely, the convergence of expected values $\mathds{E}\mu_{a_n^{-1}X}\to \mu_\alpha$ is sufficient. This follows from an a priori concentration estimate; see \cite{bordenave-caputo-chafai-heavygirko}. However, we have done this extra work here since we need it anyway in the case of Markov matrices, where the mentioned concentration estimate is not available.
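The Cauchy--Stieltjes apparatus used in the two proofs above is easy to exercise numerically. A minimal sketch, using the semicircle law as a stand-in (chosen only because its transform has a closed form; it is not the heavy-tailed LSD $\mu_\alpha$): it checks the bound $|m_\mu(z)|\leq (\Im z)^{-1}$, compares a quadrature of $\int \mu(dx)/(x-z)$ against the closed form, and recovers the density by Stieltjes inversion, $\pi^{-1}\Im m_\mu(x+i\epsilon)\to$ density as $\epsilon\downarrow 0$:

```python
import cmath
import math

def rho(x):
    """Semicircle density on [-2, 2] (a stand-in with explicit transform)."""
    return math.sqrt(max(4.0 - x * x, 0.0)) / (2.0 * math.pi)

def m_num(z, steps=20000):
    """Midpoint-rule quadrature of the Cauchy-Stieltjes transform."""
    h = 4.0 / steps
    total = 0.0 + 0.0j
    for k in range(steps):
        x = -2.0 + (k + 0.5) * h
        total += rho(x) / (x - z)
    return total * h

def m_exact(z):
    """Closed form; sqrt(z-2)*sqrt(z+2) picks the branch with m(z) ~ -1/z."""
    return (-z + cmath.sqrt(z - 2.0) * cmath.sqrt(z + 2.0)) / 2.0

z = 1.0 + 0.5j
quad_err = abs(m_num(z) - m_exact(z))
bound_ok = abs(m_exact(z)) <= 1.0 / z.imag
# Stieltjes inversion at x = 0.7, with a small imaginary offset
inv_err = abs(m_exact(0.7 + 1e-6j).imag / math.pi - rho(0.7))
```

The product of square roots in `m_exact` avoids the branch-cut mistakes that the naive `cmath.sqrt(z*z - 4)` makes for $\Re z<0$ in the upper half-plane.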
\subsection{Markov matrix, \texorpdfstring{$\alpha\in(1,2)$}{alpha in (1,2)}: Proof of Theorem \ref{th:k12}} \label{sec:k12} The proof given above for the matrix $A_n=a_n^{-1}X$ applies without modifications to the new matrix $A_n:=\kappa_n S$, where $S_{i,j} = \frac{U_{i,j}}{\sqrt{\rho_i \rho_j}}$. In particular, we use Theorem \ref{th:convop}$(ii)$, Theorem \ref{th:convop2}$(ii)$ and Lemma \ref{astight}$(ii)$ to obtain the a.s.\ weak convergence of $\mu_{A_n}$ to $\mu_\alpha=\mathds{E}\mu_\eset$, where $ \mu_\eset$ is the random spectral measure of $\bT$ at the root. This ends the proof of Theorem \ref{th:k12}. \subsection{Markov matrix, \texorpdfstring{$\alpha=1$}{alpha=1}: Proof of Theorem \ref{th:k1}} \label{sec:k1} Suppose now that $\alpha=1$ and set $w_n=\int_0^{a_n}\!x\cL(dx)$ and $\kappa_n=na_n^{-1}w_n$. A close inspection of the proof of Theorem \ref{th:convop}$(ii)$ and Theorem \ref{th:k12} reveals that all arguments used for $\alpha\in(1,2)$ can be applied to the case $\alpha=1$ without modifications except for the two estimates \eqref{eq:llnweak} and \eqref{eq:llninf}, which have to be replaced by \eqref{eq:llnweaka1} and \eqref{eq:llninfa1} below respectively. For \eqref{eq:llninfa1} we shall use the hypothesis \eqref{eq:conda1wn} on $w_n$. Let us start by proving that, in probability, \begin{equation}\label{eq:llnweaka1} \lim_{n\to\infty} \frac{\rho_1}{n w_n } = 1. \end{equation} We recall that, for fixed $i$, $a_n^{-1}(\rho_i - n w_n)$ converges in distribution to a $1$-stable law, see for instance \cite[Theorem 1]{zinn81}. Therefore it suffices to show that $\kappa_n=a_n^{-1} n w_n \to\infty$. To see this we may argue as follows. Observe that, for any $\varepsilon>0$, $$ \kappa_n = \mathds{E}\sum_{i=1}^n a_n^{-1} V_i \,\mathds{1}_{\{a_n^{-1}V_i\leq 1\}} \geq \mathds{E}\sum_{i=1}^n a_n^{-1} V_i \,\mathds{1}_{\{\varepsilon \leq a_n^{-1}V_i\leq 1\}} $$ where $V_1\geq V_2\geq \cdots$ are the ranked values of $U_{1,j}$, $j=1,\dots, n$.
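The divergence $\kappa_n\to\infty$ is very slow at $\alpha=1$. For standard Pareto($1$) weights, $\mathds{P}(U>x)=x^{-1}$ for $x\geq 1$, one may take $a_n=n$, so that $w_n=\log n$ and $\kappa_n=\log n$. A small Monte Carlo sketch of \eqref{eq:llnweaka1} under this hypothetical concrete choice (illustration only; slowly varying corrections are ignored):

```python
import math
import random

random.seed(1)
n = 20000
w_n = math.log(n)  # w_n = E[U 1_{U <= n}] = log n for standard Pareto(1)

def row_sum():
    # rho_1 = sum of n i.i.d. Pareto(1) weights, sampled by inverse transform
    return sum(1.0 / (1.0 - random.random()) for _ in range(n))

ratios = sorted(row_sum() / (n * w_n) for _ in range(15))
median_ratio = ratios[len(ratios) // 2]
# kappa_n = a_n^{-1} n w_n = log n diverges, but only logarithmically
kappa_n = w_n
```

The individual ratios fluctuate on the scale $1/\log n$ with heavy ($1$-stable) right tails, which is why the sketch looks at a median over several repetitions rather than a single sample.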
From Lemma \ref{le:PoiExt}$(i)$ the right hand side above, for any $\varepsilon>0$, converges to $\mathds{E}\sum_{i} x_i \mathds{1}_{\{\varepsilon \leq x_i\leq 1\}}$, where the $x_i$ are distributed according to the PPP with intensity $x^{-2}dx$ on $(0,\infty)$. While this sum is finite for every $\varepsilon>0$, it is easily seen to diverge (logarithmically) as $\varepsilon\to 0$. This achieves the proof of \eqref{eq:llnweaka1}. Next, we claim that if $w_n$ satisfies \eqref{eq:conda1wn}, then there exists $\delta >0$ such that, a.s. \begin{equation}\label{eq:llninfa1} \liminf_{n\to\infty} \min_{1 \leq i \leq n} \frac{\rho_i}{n w_n} \geq \delta . \end{equation} To establish \eqref{eq:llninfa1}, let us define $b_n = a_{\lfloor n^\varepsilon \rfloor}$, where $\varepsilon\in(0,1/2)$ is fixed, so that $\mathds{E}(U_{1,i} \mathds{1}_{\{U_{1,i} \leq b_n\}}) = w_{\lfloor n^\varepsilon \rfloor} $ and $$ \rho_1 \geq S_n := \sum_{i = 1}^n U_{1,i}\mathds{1}_{\{U_{1,i}\leq b_n\}}. $$ From the union bound, $$ \mathds{P}\Big(\min_{1 \leq i \leq n}\frac{\rho_i}{n w_n}<\delta\Big) \leq n\,\mathds{P}\Big(\frac{\rho_1}{n w_n} < \delta\Big). $$ From the Borel--Cantelli lemma, it is thus sufficient to prove that for some $\delta >0$ \begin{equation}\label{eq:bcSn} \sum_{n \geq 1} n \mathds{P}\left(S_n < \delta n w_n\right) < \infty\,. \end{equation} By assumption, there exists $\delta > 0$ such that for all $n$ large enough, $w_{\lfloor n^\varepsilon \rfloor} \geq 2 \delta w_n$. We define $$ V_i = U_{1,i} \mathds{1}_{\{U_{1,i} \leq b_n\}} - w_{\lfloor n^\varepsilon \rfloor} \quad\text{ and }\quad \overline S_n = \sum_{i = 1} ^ n V_i. $$ Note that $\mathds{E} V_i = \mathds{E} \overline S_n = 0$. We get for all $n$ large enough \begin{equation} \label{eq:Sn} \mathds{P}\left(S_n < \delta n w_n\right) = \mathds{P}\left(\overline S_n < \delta n w_n - n w_{\lfloor n^\varepsilon \rfloor}\right) \leq \mathds{P}\left(\overline S_n < - \delta n w_n\right).
\end{equation} By construction, $w_n$ is slowly varying and $a_n = L(n) n$ where $L(n)$ is slowly varying. Hence $| V_i | \leq \max(w_{\lfloor n^\varepsilon \rfloor}, b_n) = \bar L(n) n^{\varepsilon}$ for another slowly varying sequence $\bar L(n)$. By the Hoeffding inequality, we get from \eqref{eq:Sn} $$ \mathds{P}\left(\overline S_n < - \delta n w_n\right) \leq \exp \Big( - \frac{ \delta ^2 n ^2 w_n ^2 } { n \bar L (n) ^2 n ^{2 \varepsilon}} \Big) = \exp \left( - \widetilde L(n) n ^{1 - 2 \varepsilon} \right), $$ where $\widetilde L(n)$ is a slowly varying sequence. Since $\varepsilon < 1/2$ we obtain \eqref{eq:bcSn} and thus \eqref{eq:llninfa1}. \section{Properties of the Limiting Spectral Distributions} \label{sec:spec} Recall that $\mu_\alpha$ is characterized by the Cauchy--Stieltjes transform $m_{\mu_\alpha}(z) = \mathds{E} h(z)$, $z\in\mathds{C}_+$, where $h(z)$ is the random variable $h(z)=\scalar{\delta_\eset}{(\bT-zI)^{-1}\delta_\eset}$, cf.\ \eqref{meancon}. The main novelty in our analysis of the LSD $\mu_\alpha$ with respect to previous works \cite{benarous-guionnet,belinschi} is that we can work here with the distribution of $h(z)$ rather than only with its expectation. \subsection{Recursive distributional equation}\label{rdesub} The symbol $ \overset{d}{=}$ stands for equality in distribution. The following result is at the heart of our analysis of the LSD $\mu_\alpha$. \begin{thm}[Recursive Distributional Equation] \label{th:BC} For all $z \in \mathds{C}_+$, the random variable $$ h(z)=\scalar{\delta_\eset}{(\bT-zI)^{-1}\delta_\eset} $$ satisfies $h(-\bar z) = -\bar h (z)$ and \begin{equation} \label{eq:RDE} h(z) \overset{d}{=} - \left( z + \sum_{k \in \mathds{N}} \xi_k h_k (z) \right)^{-1}, \end{equation} where $(h_k(z))_{k \in \mathds{N}}$ are i.i.d.\ with the same law as $h(z)$, and $\{\xi_k\}_{k \in \mathds{N}}$ is an independent Poisson point process with intensity $\frac{\alpha}{2} x ^{-\frac \alpha 2 -1}dx$ on $(0,\infty)$.
\end{thm} \proof Since the PWIT is bipartite, the property $h(-\bar z) = -\bar h (z)$ is a consequence of Lemma \ref{le:bipartite}. We are left with the RDE \eqref{eq:RDE}. This can be interpreted as an operator version of the Schur complement formula (see e.g.\ Proposition 2.1 in Klein \cite{klein} for a similar argument). Denote, as usual, by $k \in \mathds{N}$ the descendants of the root $\eset$ and let $\cT^{(k)}$ denote the subtree rooted at $k$ (the set of vertices of $\cT^{(k)}$ is then $k \mathds{N}^f$). We have the direct sum decomposition $\mathds{N}^f = \{\eset \} \cup \bigcup_{k} k \mathds{N}^f$. We define $\bT^{(k)}$ as the projection of $\bT$ on $k \mathds{N}^f$. Its skeleton is thus $\cT^{(k)}$. Finally, define the operator $\mathbf{U}$ on $\cD$ by its matrix elements $$u_{k}:= \langle \delta_\eset, \mathbf{U} \delta_k \rangle = \langle \delta_k, \mathbf{U}\delta_\eset \rangle = \langle \delta_\eset, \bT \delta_k \rangle \,$$ for all $k \in \mathds{N}$ (offspring of $\eset$) and $\langle \delta_\mathbf{u}, \mathbf{U}\delta_\mathbf{v} \rangle =0$ otherwise. In this way we have $$ \bT = \mathbf{U} + \wt{\mathbf{T}} \quad \text{ with }\quad \wt{\mathbf{T}} = \bigoplus_{k \in \mathds{N}} \bT^{(k)}. $$ Like $\bT$, each $\bT^{(k)}$ can be extended to a self--adjoint operator, which we denote again by $\bT^{(k)}$. Therefore $\wt{\mathbf{T}}$ is self--adjoint. We shall write $R(z)=(\bT - z I ) ^{-1}$ and $\widetilde R(z) = (\wt{\mathbf{T}} - z I ) ^{-1} $ for the associated resolvents, $z\in\mathds{C}_+$. These operators satisfy the resolvent identity \begin{equation} \label{resid} \widetilde R(z) (\bT-\wt{\mathbf{T}}) R(z) = \widetilde R(z) - R(z)\,.
\end{equation} Set $\widetilde R_{\mathbf{u},\mathbf{v}}(z):=\langle \delta_\mathbf{u} ,\widetilde R(z) \delta_\mathbf{v} \rangle$ and $R_{\mathbf{u},\mathbf{v}}(z):=\langle \delta_\mathbf{u} ,R(z) \delta_\mathbf{v} \rangle $. Observe that $\widetilde R_{\eset,\eset}(z) = - z^{-1}$ and that the direct sum decomposition $\mathds{N}^f = \{\eset \} \cup \bigcup_{k} k \mathds{N}^f$ implies $\widetilde R_{k,l}(z)= 0$ for $k\neq l$. Similarly we have that $\widetilde R_{\eset,k}(z) = 0 = \widetilde R_{k,\eset}(z)$ for every $k\in\mathds{N}$. From \eqref{resid} we then obtain, for $k\in\mathds{N}$: $$ \widetilde R_{k,k}(z) u_k R_{\eset,\eset}(z) = - R_{k,\eset}(z) \,. $$ It follows that $$ \langle \delta_\eset,\widetilde R(z) (\bT-\wt{\mathbf{T}}) R(z)\delta_\eset \rangle = \sum_{k\in\mathds{N}}\widetilde R_{\eset,\eset}(z) u_k R_{k,\eset}(z) = - \sum_{k\in\mathds{N}} \widetilde R_{\eset,\eset}(z)\widetilde R_{k,k}(z) u_k^2 R_{\eset,\eset}(z)\,. $$ From \eqref{resid} we then conclude that $$ R_{\eset,\eset}(z) = \frac{\widetilde R_{\eset,\eset}(z)}{1- \widetilde R_{\eset,\eset}(z)\sum_{k\in\mathds{N}} \widetilde R_{k,k}(z) u_k^2}\,. $$ Or, using $\widetilde R_{\eset,\eset}(z) = -z^{-1}$: $$ R_{\eset,\eset}(z) = -\left(z + \sum_{k\in\mathds{N}} \widetilde R_{k,k}(z) u_k^2\right) ^{-1}\,. $$ Then \eqref{eq:RDE} follows from the recursive construction of the PWIT: the $\cT^{(k)}$ are i.i.d.\ with the same distribution as $\cT$, and therefore the $\widetilde R_{k,k}(z)$ are i.i.d.\ with the same law as $R_{\eset,\eset}(z)$, for every $z\in\mathds{C}_+$. \qed \bigskip Concerning the uniqueness of the solution to the RDE \eqref{eq:RDE} we can establish the following useful result. For $z = i t $, with $ t > 0 $, the identity $h(-\bar z) = -\bar h (z)$ reads $\Re h (it) = 0$. Thus, the equation satisfied by $g(it) = \Im h(it) \geq 0$ is \begin{equation} \label{eq:RDEg} g(it) \overset{d}{=} \left( t + \sum_{k \in \mathds{N}} \xi_k g_k (it) \right) ^{-1}.
\end{equation} \begin{lem}[Uniqueness of solution for the RDE]\label{le:itunique} For each $t>0$, there exists a unique probability measure $L^{it}$ on $\mathds{R}_+$ which solves \eqref{eq:RDEg}. \end{lem} \begin{proof}[Proof of Lemma \ref{le:itunique}] Set $\beta=\alpha/2$. If $(Y_k)$ is an i.i.d.\ sequence of nonnegative random variables, independent of $\{ \xi_k\}_{k \in \mathds{N}}$, such that $\mathds{E} [ Y_1 ^\beta] < \infty$, then it is well known that $$\sum_k \xi_k Y_k \overset{d}{=} \sum_k \xi_k (\mathds{E} [ Y_1^\beta])^{1/\beta}$$ (see for example \cite[Lemma 6.5.1]{talagrand2003} or \eqref{eq:laplace} below). This implies the uniqueness for Equation \eqref{eq:RDEg} provided that the equation satisfied by $ \mathds{E}[ g(it)^\beta]$ has a unique solution. Recall the following Laplace transform formulas, valid for $y \geq 0$ and, respectively, $\eta >0$ and $0 < \eta < 1$: \begin{equation} \label{eq:gammaLaplace} y^{-\eta} = \Gamma(\eta)^{-1} \int_0 ^\infty x ^{\eta -1} e^{- x y} dx \quad\text{and}\quad y^{\eta} = \Gamma(1-\eta)^{-1} \eta \int_0 ^\infty x ^{-\eta -1} (1 - e^{- x y}) dx. \end{equation} From the exponential formula we deduce that, with $s \geq 0$, \begin{align} \mathds{E} \exp\left( - s \sum_{k} \xi_k Y_k \right) & = \exp \left( \mathds{E} \int_0 ^\infty ( e^{-x s Y_1} - 1 ) \beta x ^{-\beta - 1} dx \right) \nonumber \\ &= \exp\left( - \Gamma(1-\beta) s^\beta \mathds{E} [Y_1^\beta]\right) \label{eq:laplace}. \end{align} From Equation \eqref{eq:RDEg}, $\mathds{E}[ g(it)^\beta]$ is a solution of the equation in $y$: $$ y = \frac {1}{\Gamma(\beta)} \int_0 ^\infty x ^{\beta -1} e^{-tx} e^{- x ^{\beta} \Gamma(1-\beta) y} dx. $$ The last equation has a unique solution for any $t\geq 0$.
Indeed, the function from $\mathds{R}_+$ to $\mathds{R}_+$ $$\varphi : y \mapsto \frac {1}{\Gamma(\beta)} \int_0 ^\infty x ^{\beta -1} e^{-tx} e^{- x ^{\beta} \Gamma(1-\beta) y}dx$$ tends to $0$ as $y\to\infty$ and is decreasing, since $$ \varphi'(y) = - \frac {\Gamma(1- \beta) }{\Gamma(\beta)}\int_0 ^\infty x ^{2\beta -1} e^{-tx} e^{- x ^{\beta} \Gamma(1-\beta) y}dx. $$ Thus $\varphi$ has a unique fixed point. \end{proof} \bigskip Before going into the proof of Theorem \ref{th:mua}, we introduce some notation. Let $\beta = \alpha /2$ as above and let $\mathcal{K}_\alpha$ denote the set of probability measures on $(0,\infty)$ with finite $\beta$ moment. We define the map $\Psi$ on probability measures on $\mathds{R}_+ \cup \{\infty\}$, where $\Psi(Q)$ is the law of \begin{equation}\label{eq:Psi} Z = \left( \sum_{k \in \mathds{N}} \xi_k Y_k \right)^{-1}, \end{equation} with $(Y_k,k \in \mathds{N})$ i.i.d.\ with law $Q$ independent of $\Xi = \{\xi_k\}_{k \in \mathds{N}}$, a Poisson point process on $\mathds{R}_+$ of intensity $\beta x ^{-\beta - 1}dx$. \begin{lem}\label{le:Psi} $\Psi$ satisfies the following: \begin{enumerate} \item[(i)] $\Psi$ is a map from $\mathcal{K}_\alpha$ to $\mathcal{K}_\alpha$. Let $(P_n)_{n \in \mathds{N}}$ and $P$ be in $\mathcal{K}_\alpha$; if $\lim_{n\to\infty} \int x^\beta dP_n = \int x^\beta dP$ then $\Psi (P_n)$ converges weakly to $\Psi(P)$ and $\lim_{n\to\infty} \int x^\beta d\Psi(P_n) = \int x^\beta d\Psi(P)$. \item[(ii)] The unique fixed point of $\Psi$ in $\mathcal{K}_\alpha$ is the law of $1/S$ where $S$ is the one-sided $\beta$-stable law with Laplace transform $ \mathds{E} \exp ( - t S ) = \exp \left( -t^{\beta}\sqrt{ \Gamma(1 - \beta )/\Gamma(1 +\beta)} \right) $, $t\geq 0$. \item[(iii)] $\mathds{E} S^{-\beta} = (\Gamma(\beta+1)\Gamma( 1 - \beta) ) ^{-1/2}$.
\end{enumerate} \end{lem} \begin{proof}[Proof of Lemma \ref{le:Psi}] As in the proof of Lemma \ref{le:itunique}, we get \begin{eqnarray*} \mathds{E} Z^\beta & = & \mathds{E} \left( \sum_k \xi_k Y_k \right)^{-\beta} \\ & = & \mathds{E} \frac{1}{\Gamma(\beta)} \int_0 ^\infty x^{\beta - 1} e^{-x \sum_k \xi_k Y_k} dx \\ & = & \frac{1}{\Gamma(\beta)} \int_0 ^\infty x^{\beta - 1} e^{- x^\beta \Gamma( 1 - \beta) \mathds{E} Y_1^\beta } dx \\ & = & \frac{1}{\beta \Gamma(\beta)} \int_0 ^\infty e^{- s \Gamma( 1 - \beta) \mathds{E} Y_1^\beta } ds\\ & = & (\Gamma(\beta+1)\Gamma( 1 - \beta) \mathds{E} Y_1^\beta )^{-1} , \end{eqnarray*} (in the last line we have used the identity $z \Gamma(z) = \Gamma(z+1)$). Therefore, $\Psi$ is a map from $\mathcal{K}_\alpha$ to $\mathcal{K}_\alpha$. Also, as a consequence of \eqref{eq:laplace}: $$ \mathds{E} \exp( -t Z^{-1} ) = \exp( - t^\beta \Gamma( 1 - \beta) \mathds{E} Y_1^\beta ). $$ Statement (i) follows from the continuity of the map $x \mapsto 1/x$ in $(0,\infty)$. If $Z$ is a fixed point of $\Psi$ then from the computation above $\mathds{E} Z^\beta = (\Gamma(\beta+1)\Gamma( 1 - \beta) ) ^{-1/2}$. Finally, from \eqref{eq:laplace} we obtain for all $t\geq 0$, $$ \mathds{E} \exp( -t Z^{-1} ) = \exp( - t^\beta \Gamma( 1 - \beta) \mathds{E} Z^\beta ) = \exp \left( - t^{\beta} \sqrt{\frac{ \Gamma(1 - \beta )}{ \Gamma(1 +\beta) } } \right) . $$ \end{proof} \subsection{Proof of Theorem \ref{th:mua}\texorpdfstring{$(i)$}{(i)}} From Theorem \ref{th:BC}, for $z \in \mathds{C}_+$, $$ m_{\mu_\alpha} (z) = \mathds{E} h (z), $$ where $h$ solves the RDE \eqref{eq:RDE}. Set $f(z) = \Re h(z)$ and $g(z) = \Im h(z)$.
For $z = u + i v \in \mathds{C}_+$, $f$ and $g$ satisfy the RDE $$ f(z) \overset{d}{=} - \frac{ u + \sum_k \xi_k f_k (z) } { \left( u + \sum_k \xi_k f_k(z) \right)^2 + \left( v + \sum_k \xi_k g_k(z) \right)^2}\,, $$ and $$ g(z) \overset{d}{=} \frac{ v + \sum_k \xi_k g_k (z) } { \left( u + \sum_k \xi_k f_k(z) \right)^2 + \left( v + \sum_k \xi_k g_k (z)\right)^2}\,. $$ By construction, $0 \leq g(z) \leq 1/v$, thus the law of $g(z)$ is in $\mathcal{K}_\alpha$. If the stochastic domination of $P$ by $Q$ is denoted by $P \leq_{st} Q$, we have \begin{equation}\label{eq:gborne} g(z) \leq_{st} \left(v+ \sum_k \xi_k g_k (z)\right)^{-1} \leq_{st} \left( \sum_k \xi_k g_k (z)\right)^{-1}. \end{equation} (In fact, we also have $| h(z) | \leq_{st} \left(\sum_k \xi_k g_k (z)\right)^{-1}$.) Using the computation in Lemma \ref{le:Psi}, we obtain $\mathds{E} g(z) ^\beta \leq (\Gamma(\beta+1)\Gamma( 1 - \beta) \mathds{E} g(z) ^\beta )^{-1}$. Thus \begin{equation}\label{eq:tension} \mathds{E} g(z) ^\beta \leq \frac{1}{\sqrt{\Gamma(\beta+1)\Gamma( 1 - \beta) }}. \end{equation} Again, the formula $y^{-\eta} = \Gamma(\eta)^{-1} \int_0 ^\infty x ^{\eta -1} e^{- x y} dx$, for $y \geq 0$, $\eta >0$, gives \begin{equation}\label{eq:gmoment} \mathds{E} \left[\left( \sum_k \xi_k g_k (z) \right)^{-\eta}\right] = \frac{1}{\Gamma(\eta)} \int_0 ^\infty x ^{\eta -1} e^{- x^\beta \Gamma(1-\beta) \mathds{E} g(z) ^\beta } dx\,. \end{equation} We now study the weak limit of $g(u + iv)$ when $v \downarrow 0$, $u \in \mathds{R}$. Equation \eqref{eq:tension} implies tightness, so let $g(u +i 0)$ be a weak limit. If this limit is nonzero then $\mathds{E} g^\beta (u + i0) > 0$, and Equations \eqref{eq:gborne}--\eqref{eq:gmoment} imply for all $\eta >0$ and $u \in \mathds{R}$, $$ \limsup_{u + i v : v \downarrow 0} \mathds{E} g^\eta (u + i v) < \infty.
$$ Since $\mathds{E} h(z)$ is the Cauchy--Stieltjes transform of $\mu_\alpha$, taking $\eta = 1$, we deduce that $\mu_\alpha$ is absolutely continuous with bounded density, see for example \cite[Theorem 11.6]{simon05}. \qed \subsection{Proof of Theorem \ref{th:mua}\texorpdfstring{$(ii)$}{(ii)}} In view of \cite[Theorem 11.6]{simon05}, it is sufficient to show that \begin{equation}\label{eq:limg0} \lim_{t \downarrow 0} \mathds{E} g(it) = \Gamma\left(1+ \frac{1}{\beta}\right) \left( \frac{ \Gamma(1 + \beta )}{ \Gamma(1 - \beta ) } \right)^{\frac{1}{2 \beta}}. \end{equation} As above, \eqref{eq:tension} implies the tightness of $(g(it), t > 0)$. So let $g(i 0)$ be a weak limit. It is in $\mathcal{K}_\alpha$ and, by continuity, $g(i0)$ is a solution of the RDE $$ g(i0) \overset{d}{=} \left( \sum_k \xi_k g_k (i0)\right)^{-1}. $$ By Lemma \ref{le:Psi}, $g(i0) \overset{d}{=} 1/ S$, and \eqref{eq:gmoment} gives $$ \mathds{E} g(i0) = \int_0 ^\infty e^{- x^\beta \sqrt{\frac{\Gamma(1-\beta) }{\Gamma(1+\beta)}} } dx = \frac{1}{\beta} \Gamma\left(\frac{1}{\beta}\right) \left( \frac{ \Gamma(1 + \beta )}{ \Gamma(1 - \beta ) } \right)^{\frac{1}{2 \beta}}. $$ Using the identity $z \Gamma(z) = \Gamma(z+1)$, we get \eqref{eq:limg0}. \qed \subsection{Proof of Theorem \ref{th:mua}\texorpdfstring{$(iii)$}{(iii)}} We start with a Tauberian-type theorem for the Cauchy--Stieltjes transform of symmetric probability measures. As usual, let $m_\mu $ denote the Cauchy--Stieltjes transform of a symmetric probability measure $\mu$ on $\mathds{R}$. Then, for all $t > 0$, $m_\mu (it ) \in i \mathds{R}_+$ and $$ \Im m_\mu (it) = \int_{-\infty}^\infty \frac{ t }{ t^2 + x^2} \mu (dx)= 2 \int_0 ^\infty \frac{ t }{ t^2 + x^2} \mu (dx).
$$ \begin{lem}[Tauberian-like lemma]\label{le:taub} If $L$ is slowly varying and $0 < \alpha < 2$, the following are equivalent: as $t$ goes to $+\infty$, \begin{align} \mu ( (t,\infty )) & \sim L(t) t ^{- \alpha} \label{eq:taub1} \\ \Im m_\mu (it) - t ^{-1} & \sim - \Delta(\alpha) L(t) t ^{- \alpha-1} \label{eq:taub2} \end{align} with $ \Delta(\alpha) = 2\alpha\int_0 ^\infty \frac{ x^{1 - \alpha } } { 1 + x ^2 } dx$. \end{lem} \begin{proof}[Sketch of Proof of Lemma \ref{le:taub}] The proof is an adaptation of the proof of the Karamata Tauberian Theorem in \cite[pages 37--38]{bingham}. Let $\mathcal{M}$ denote the set of symmetric measures on $\mathds{R}$ such that $\int_0^\infty \min (1, x^2) \mu (dx) < + \infty$. On $\mathcal{M}$, define the transform $$ \mathcal{S} \mu : t \mapsto \int_0 ^\infty \frac{2 x^ 2} { t^2 + x ^2 } \mu( dx). $$ Note that $\mathcal{S} \mu (t) = 1 - t \Im m_\mu (it) = 1 + i t m_\mu (it)$. Recall that the Cauchy--Stieltjes transform characterizes the measure. Thus if for all $t > 0$, $(\mathcal{S} \mu_n (t))_{n \in \mathds{N}} $ converges to $\mathcal{S} \mu (t)$, then $(\mu_n)_{n \in \mathds{N}} $ converges to $\mu$ against all bounded continuous functions vanishing in a neighborhood of $0$. Now, assume that \eqref{eq:taub2} holds, namely \begin{equation}\label{scond} \mathcal{S} \mu (t) \sim \Delta(\alpha) L(t) t ^{- \alpha}. \end{equation} Since $\lim_{x \to \infty} L( t x ) / L(x) = 1$, we deduce that for all $t >0$, as $x \to \infty$, $$ \frac{ \mathcal{S} \mu (x t) }{ L(x) x ^{-\alpha} } \to \Delta(\alpha) t ^{- \alpha}.
$$ The left hand side is the $\mathcal{S}$ transform of the measure $\mu_x (dy) = \mu (x dy ) / ( L(x) x ^{-\alpha} )$ while the right hand side is the $\mathcal{S}$ transform of $\mu_{\infty} (dy ) = \alpha |y | ^{- \alpha -1} dy$, thus $$\frac{ \mu (( x, \infty) ) }{ L(x) x ^{-\alpha} } = \mu_x ( (1,\infty) ) \to \mu_\infty ((1,\infty)) = 1.$$ We get precisely \eqref{eq:taub1}. The converse implication can be proved similarly, see \cite[pages 37--38]{bingham} (it is straightforward for $L(t) = c$, the case that we will actually use). \end{proof} \bigskip We now come back to the RDE \eqref{eq:RDEg} and define $Q(t) = \mathds{E}[ g(it)^\beta]$. From \eqref{eq:RDEg}, we have a.s.\ $t g(it) \leq 1$. Note also that, since a.s.\ $\sum_k \xi_k g_k (it) \leq t^{-1} \sum_k \xi_k $, we have a.s.\ $\lim_{t \to +\infty} t g(it) = 1$. The dominated convergence theorem leads to \begin{eqnarray}\label{eq:asyG} \lim_{t \to \infty} t ^{ \beta } Q(t) =1. \end{eqnarray} Moreover, as already pointed out in the proof of Lemma \ref{le:itunique}, $$\sum_k \xi_k g_k (it) \overset{d}{=} Q(t)^{1/\beta} \sum_k \xi_k.$$ We deduce, with $C(t) = ( t Q(t)^{1/\beta})^{-1/2}$, that \begin{eqnarray} \Im m_{\mu_\alpha} ( it) = \mathds{E} g (it) & = & \mathds{E} \frac { t } { t^2 + t Q(t)^{1/\beta} \sum_k \xi_k} \nonumber\\ & = & C(t) \mathds{E} \frac { t C(t)} { (t C(t))^2 + \sum_k \xi_k} \nonumber\\ & = & C(t) \Im m_{\cL(Y)} ( i C(t) t ), \label{eq:asyma} \end{eqnarray} where $\cL(Y)$ is the law of $$ Y = \varepsilon \sqrt {\sum_k \xi_k},$$ and $\varepsilon$ is independent of $\{\xi_k\}_k$, $\mathds{P} (\varepsilon =1 ) = \mathds{P} (\varepsilon = - 1 ) = 1/2$. We have $$ \mathds{P} ( Y > t ) = \frac 1 2 \mathds{P} \left( \sum_k \xi_k > t^2\right). $$ By \eqref{eq:laplace}, as $s \downarrow 0$, $\mathds{E} \exp( -s \sum_k \xi_k ) = \exp ( - s^\beta \Gamma (1 - \beta) ) \sim 1 - s^\beta \Gamma (1 - \beta)$.
Using \cite[Corollary 8.7.1]{bingham}, we obtain $\mathds{P} ( \sum_k \xi_k > t ) \sim t^{-\beta}$ and $$ \mathds{P} ( Y > t ) \sim \frac{t^{-\alpha}}{2}. $$ By Lemma \ref{le:taub}, $ \Im m_{\cL(Y)} ( i t ) - t ^{-1} \sim - \frac{t^{-\alpha-1}}{2} \Delta(\alpha)$. Thus by \eqref{eq:asyG}-\eqref{eq:asyma}, $$ \Im m_{\mu_\alpha} ( it) - t ^{-1} \sim - \frac{t^{-\alpha-1}}{2}\Delta(\alpha). $$ Theorem \ref{th:mua}$(iii)$ now follows from Lemma \ref{le:taub}. \qed \begin{rem} In the proof of Lemma \ref{le:itunique}, we have seen that the distribution of $g(it) = \Im h(it)$ is a function of $Q(t) = \mathds{E} [ g^\beta (it) ]$, which satisfies the equation $$ Q(t) = \frac {1}{\Gamma(\beta)} \int_0 ^\infty x ^{\beta -1} e^{-tx} e^{- x ^{\beta} \Gamma(1-\beta) Q(t)} dx = f_\beta ( t, Q(t)). $$ One could push the investigation further and compute the derivative of $Q$ at $t =0$: $Q'(0) = - f_{\beta +1} (0,Q(0)) - \Gamma ( 1 - \beta)f_{2\beta} (0,Q(0)) Q'(0)$, with $Q(0) = (\Gamma(\beta+1)\Gamma( 1 - \beta) ) ^{-1/2}$. There should be no obstacle to computing recursively the successive derivatives of $Q(t)$ at $t=0$. We would then obtain a series expansion of the distribution function $\mu_\alpha ((-\infty,t))$ in a neighborhood of $0$. \end{rem} \subsection{Proof of Theorem \ref{th:muabis}: \texorpdfstring{$\widetilde\mu_\alpha$, $\alpha\in(0,1)$}{mutilde with alpha in(0,1)}} As in \eqref{gammal}, let ${\bf p}_{\ell}$ denote the return probability after $\ell$ steps starting from the root $\eset$, for the random walk on the PWIT with transition kernel $\mathbf{K}$ given by \eqref{kappone}. In particular, $\gamma_{\ell} = \mathds{E} {\bf p}_{\ell}$ is the $\ell$-th moment of the LSD $\widetilde\mu_\alpha$.
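Before turning to the proof, the moments $\gamma_\ell$ can be probed numerically. A crude Monte Carlo sketch of $\gamma_2$ for the hypothetical choice $\alpha=0.5$: it samples the root PPP via $x_i=\Gamma_i^{-1/\alpha}$ (with $\Gamma_i$ the arrival times of a unit-rate Poisson process), truncates all point processes at finitely many atoms, and compares with the closed form $\gamma_2=(1-\alpha)\int_0^1 \big(c^\alpha+(1-c)^\alpha\big)^{-1}dc$, which follows from the double integral computed at the end of this subsection via the substitution $(t,s)=(uc,u(1-c))$:

```python
import random

random.seed(2)
alpha = 0.5  # hypothetical choice in (0, 1)
K_ROOT, K_CHILD, SAMPLES = 100, 80, 200

def ppp_points(k):
    """First k points of the PPP with intensity alpha * x^(-alpha-1) dx,
    via the mapping x_i = Gamma_i^(-1/alpha) of unit-rate Poisson arrivals."""
    g, pts = 0.0, []
    for _ in range(k):
        g += random.expovariate(1.0)
        pts.append(g ** (-1.0 / alpha))
    return pts

acc = 0.0
for _ in range(SAMPLES):
    xs = ppp_points(K_ROOT)
    phi = sum(xs)
    # gamma_2 = E[ sum_i (x_i/phi) * (x_i/(x_i + phi_i')) ], with phi_i'
    # independent copies of phi (truncated here, so this is approximate)
    acc += sum((x / phi) * (x / (x + sum(ppp_points(K_CHILD)))) for x in xs)
gamma2_mc = acc / SAMPLES

# 1D quadrature of the closed form (1-alpha) * int_0^1 dc/(c^a + (1-c)^a)
steps = 20000
gamma2_exact = (1.0 - alpha) * sum(
    1.0 / (c ** alpha + (1.0 - c) ** alpha)
    for c in ((k + 0.5) / steps for k in range(steps))) / steps
```

The Monte Carlo tolerance below is loose: the estimator is bounded by $1$ per sample, but truncating the Poisson point processes introduces a small bias.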
\begin{proof}[Proof of Theorem \ref{th:muabis} (i)] For the first part, we shall show that there exists $\delta>0$ such that for any $\varepsilon\in(0,1/2]$ and any $n$: \begin{equation}\label{hm2} \gamma_{2n}\geq \delta\,\varepsilon^\alpha\,(1-\varepsilon)^{2n}\,. \end{equation} Theorem \ref{th:muabis} (i) follows by choosing $\varepsilon=1/(2n)$. To prove \eqref{hm2} we use the simple bound ${\bf p}_{2n}\geq \left(\mathbf{K}(\eset,1)\mathbf{K}(1,\eset)\right)^n$, which states that to come back to the root in $2n$ steps the walk can move to the child with the highest weight, with probability $\mathbf{K}(\eset,1)$, go back to the root, with probability $\mathbf{K}(1,\eset)$, and repeat this $n$ times. Taking expectation, it follows that \begin{equation}\label{hm3} \gamma_{2n} \geq \mathds{E}\left[\left(\mathbf{K}(\eset,1)\mathbf{K}(1,\eset)\right)^n\right]\,. \end{equation} Therefore \eqref{hm2} holds if the event $$ A_\varepsilon = \{\mathbf{K}(\eset,1)\geq (1-\varepsilon)\;\text{ and} \;\;\mathbf{K}(1,\eset)\geq (1-\varepsilon)\} $$ has probability at least $\delta\,\varepsilon^\alpha$, for some $\delta>0$ and for any $\varepsilon\in(0,1/2]$. Let $(x_i)_i$ denote the realization of the PPP at the root $\eset$, i.e.\ $x_1>x_2>\dots$ are the points of a PPP on $(0,\infty)$ with intensity measure $\alpha x^{-\alpha-1}dx$. We set $\phi:=\sum_{i=1}^\infty x_i$ and let $\phi'$ denote an independent copy of $\phi$. We can use the representation $\mathbf{K}(\eset,1) = x_1/\phi$ and $\mathbf{K}(1,\eset)=x_1/(x_1+\phi')$.
Therefore, \begin{align*} \mathds{P}(A_\varepsilon) & = \mathds{P}\left(x_1\geq (1-\varepsilon)\phi\,,\;x_1\geq (1-\varepsilon)(x_1+\phi')\right)\\ & = \mathds{P}\left(x_1\geq (1-\varepsilon)\phi\,,\;\phi'\leq \frac{\varepsilon\,x_1}{(1-\varepsilon)}\right)\\ &\geq \mathds{P}\left(x_1\geq (1-\varepsilon)\phi\,,\;x_1\geq \varepsilon^{-1}\,, \;\phi'\leq 1\right)\,. \end{align*} Let $\delta_1:= \mathds{P}(\phi\leq 1) = \int_0^1\!f(t)\,dt>0$, where $f(t)$ denotes the density of $\phi$. The function $f(t)$ can be obtained from its Laplace transform, which is given by the known identity $\mathds{E}[e^{-u\phi}] =e^{-\Gamma(1-\alpha)u^{\alpha}}$, $u>0$ (see \cite[Proposition 10]{MR1434129}, or \eqref{eq:laplace} with $\beta$ replaced by $\alpha$ and $Y_k = 1$). Since $\phi'$ is independent of $(x_i)$ we obtain $$ \mathds{P}(A_\varepsilon) \geq \delta_1\,\mathds{P}\left(x_1\geq (1-\varepsilon)\phi\,,\;x_1\geq \varepsilon^{-1}\right)\,. $$ To estimate the last quantity we observe that if $\widetilde x$ is a size-biased pick from $(x_i)$ then $x_1\geq \widetilde x$. We recall that $\widetilde x$ is a random variable such that, given the sequence $(x_i)$, the probability that $\widetilde x$ equals $x_i$ is $x_i/\phi$. It is not hard to check (see e.g.\ \cite[Lemma 2.2]{PerPitYor}) that the random variable $\widetilde x$ has a probability density on $(0,\infty)$ given by \begin{equation}\label{sz} \alpha\,x^{-\alpha-1}\int_0^\infty f(t) \,\frac{x}{x+t}\,dt\,, \end{equation} where $f(t)$ is the density of the variable $\phi$.
Therefore, \begin{align*} &\mathds{P}\left(x_1\geq (1-\varepsilon)\phi\,,\;x_1\geq \varepsilon^{-1}\right) \geq \mathds{P}\left(\widetilde x\geq (1-\varepsilon)\phi\,,\;\widetilde x\geq \varepsilon^{-1}\right)\\ & \qquad=\alpha \int_0^\infty dt\,f(t)\int_0^\infty dx\,x^{-\alpha-1}\,\frac{x}{x+t}\, \mathds{1}_{\{x\geq (1-\varepsilon)(x+t)\}}\,\mathds{1}_{\{x\geq\varepsilon^{-1}\}}\,\\ &\qquad\geq \alpha \int_0^1 dt\,f(t)\int_0^\infty dx\,x^{-\alpha-1}\,(1-\varepsilon)\,\mathds{1}_{\{x\geq\varepsilon^{-1}\}}\\ & \qquad = \delta_1\,(1-\varepsilon)\,\varepsilon^\alpha\,. \end{align*} In conclusion, $\mathds{P}(A_\varepsilon)\geq \delta_1^2\,(1-\varepsilon)\,\varepsilon^\alpha\geq \frac12\,\delta_1^2\,\varepsilon^\alpha$, and the claim \eqref{hm2} follows. It remains to show that $\liminf_{\alpha\nearrow1}\gamma_2>0$. If $(x_i)$, $\widetilde x$, and $\phi$ are as above and if $\phi'$ is independent of the sequence $(x_i)$ and identical in law to the random variable $\phi$, then $$ \gamma_2 =\mathds{E}\left[\sum_{i}\frac{x_i}{\phi}\frac{x_i}{x_i+\phi'}\right] =\mathds{E}\left[\frac{\widetilde x}{\widetilde x+\phi'}\right] =\int_0^\infty\!\alpha x^{1-\alpha}\left(\int_0^\infty\!\frac{f(t)}{x+t}\,dt\right)^2\,dx\,. $$ Now, from the Laplace transform $\mathds{E}[e^{-u\phi}] =e^{-\Gamma(1-\alpha)u^{\alpha}}$ we have the identity $$ \int_0^\infty\!\frac{f(t)}{x+t}\,dt =\int_0^\infty\!e^{-\Gamma(1-\alpha)u^\alpha-ux}\,du\,. $$ This gives \begin{align*} \gamma_2 &=\alpha\Gamma(2-\alpha)\int_0^\infty\!\int_0^\infty\! e^{-\Gamma(1-\alpha)(u^\alpha+v^\alpha)}\,(u+v)^{-2+\alpha}\,du\,dv\\ &=\frac{\alpha\Gamma(2-\alpha)}{\Gamma(1-\alpha)} \int_0^\infty\!\int_0^\infty\!e^{-t^\alpha-s^\alpha}\,(t+s)^{-2+\alpha}\,ds\,dt\,.
\end{align*} Finally, the desired result follows from the bounds (for absolute constants $c_1,c_2>0$) $$ \int_0^\infty\!\int_0^\infty\!e^{-t^\alpha-s^\alpha}\,(t+s)^{-2+\alpha}\,ds\,dt \geq e^{-2} \int_0^1\!\int_0^1\!(t+s)^{-2+\alpha}\,ds\,dt \geq \frac{c_1}{1-\alpha} $$ and $$ \Gamma(1-\alpha) =\int_0^\infty\!t^{-\alpha}e^{-t}\,dt \leq \int_0^1\!t^{-\alpha}\,dt+\int_1^\infty\!e^{-t}\,dt\leq \frac{c_2}{1-\alpha}. $$ \end{proof} \begin{proof}[Proof of Theorem \ref{th:muabis} (ii)] It is convenient to make the dependence on $\alpha$ explicit in the notation. In particular, for every $\alpha \in (0,1)$, we denote by $\mathbf{S}_\alpha$ the operator $\mathbf{S}$ given by \eqref{opsym}. These operators are defined on a common probability space, and are self--adjoint in $L^2 ( V)$. Moreover, it follows from Subsection \ref{sec:k01} that $\tilde \mu_\alpha = \mathds{E} \mu_{\alpha, \eset}$, where $\mu_{\alpha,\eset}$ is the spectral measure of $\mathbf{S}_\alpha$ at the vector $\delta_{\eset}$. By the dominated convergence theorem, in order to prove that $\alpha \mapsto \tilde \mu_\alpha$ is continuous in $(0,1)$, it is sufficient to show that a.s.\ $\alpha \mapsto \mu_{\alpha,\eset}$ is continuous. From \cite[Theorem VIII.25(a)]{reedsimon}, it is in turn sufficient to prove that for all $\mathbf{v} \in V$, $\alpha \mapsto \mathbf{S}_{\alpha} \delta_\mathbf{v}$ is a continuous map from $(0,1)$ to $L^2 ( V)$. From \eqref{opsym}, for all $\mathbf{u} \in V$, the map $\alpha \mapsto \mathbf{S}_\alpha ( \mathbf{u}, \mathbf{v}) $ is continuous. It thus remains to check the uniform square integrability of $\left(\mathbf{S}_{\alpha} ( \mathbf{v}, \mathbf{u} ) \right)_{\mathbf{u} \in V}$.
We start with the upper bound $$ (\mathbf{S}_{\alpha} ( \mathbf{v}, \mathbf{v} k ) ) ^2 = \frac{y_{\mathbf{v} k} ^ {-1 / \alpha} }{\rho_\alpha (\mathbf{v})} \frac{y_{\mathbf{v} k} ^ {-1 / \alpha} }{\rho_\alpha (\mathbf{v} k )} \leq \frac{y_{\mathbf{v} k}^{-1 / \alpha} }{\rho_\alpha (\mathbf{v})}.$$ Then, notice that for all $\alpha \in (0, 1 - \varepsilon)$, one has $y_{\mathbf{v} k} ^ {-1 / \alpha} \leq \max(1, y_{\mathbf{v} k} ^ {-1 / (1-\varepsilon)})$, and $ \rho_\alpha (\mathbf{v}) \geq \min( 1, y_{\mathbf{v} 1} ^{- 1/ ( 1 - \varepsilon) })$. We may conclude by recalling that a.s.\ $\lim_k y_{\mathbf{v} k} / k =1$ and $y_{\mathbf{v} 1} >0$. \end{proof} \begin{proof}[Proof of Theorem \ref{th:muabis} (iii)] As in the proof of Theorem \ref{th:muabis} (ii), we make the dependence on $\alpha$ explicit in the notation. It follows from Subsection \ref{sec:k01} that $$ \int x^{2 \ell} \tilde \mu_\alpha(dx) = \mathds{E} \int x^{2 \ell} \mu_{\alpha, \eset} (dx) = \mathds{E} {\bf p}_{\alpha,2 \ell} , $$ where the expectation is over the randomness of the PWIT. We introduce for $\mathbf{v} \in V$, $$ V_\alpha ( \mathbf{v}) = \left( \frac{y_{\mathbf{v} 1} ^{-1/\alpha}}{ \sum_{k \geq 1} y_{\mathbf{v} k} ^{-1/\alpha}} , \frac{y_{\mathbf{v} 2} ^{-1/\alpha}}{ \sum_{k \geq 1} y_{\mathbf{v} k} ^{-1/\alpha}}, \cdots \right). $$ By construction $ V_\alpha ( \mathbf{v})$ is a PD($\alpha,0$) random variable. Thus, by \cite[Corollary 18]{MR1434129}, as $\alpha \downarrow 0$, $V_\alpha ( \mathbf{v})$ converges weakly to the deterministic vector $(1,0,\cdots)$. We may thus write $$ \mathbf{K}_\alpha(1,\eset) = \frac{y_1^{-1/\alpha}}{ y_1^{-1/\alpha} + y_{11}^{-1/\alpha}(1+\varepsilon_\alpha)}, $$ where, as $\alpha$ goes to $0$, $\varepsilon_\alpha$ goes in probability to $0$. We define $U = \mathds{1}_{\{ y_{11} > y_{1} \}}$, so that $U$ is a symmetric Bernoulli variable, i.e.\ $\mathds{P}(U = 0) = \mathds{P}( U =1) = 1/2$.
We have proved that in probability, $$ \lim_{\alpha \downarrow 0} \mathbf{K}_\alpha ( \eset, 1 ) = 1 \quad \hbox{ and } \quad \lim_{\alpha \downarrow 0} \mathbf{K}_\alpha ( 1, \eset ) = U. $$ In particular, $$ \lim_{\alpha \downarrow 0} \int x^{2 \ell} \mu_{\alpha,\eset} (dx) = U . $$ Since $ \mu_{\alpha,\eset}$ is symmetric, $$ \lim_{\alpha \downarrow 0} \mu_{\alpha,\eset} = \frac{U}{2} \delta_{-1} + (1-U) \delta_0 + \frac{U}{2} \delta_1. $$ Taking expectation, we obtain the claimed statement on $ \tilde \mu_\alpha $. \end{proof} \section{Invariant Measure: Proof of Theorem \ref{th:inv}} \label{sec:inv} We start with a lemma. Let $(X_1,\ldots,X_n)$, $X_1\geq \cdots\geq X_n$, denote the ranked values of $\rho_1,\ldots,\rho_n$ and recall the notion of convergence in the space $\mathcal{A}$, cf. Section \ref{order}. We use the notation $b_n:=a_{m_n}$, where $m_n=n(n+1)/2$. \begin{lem}\label{ordst} For any $\alpha\in(0,2)$, the sequence $b^{-1}_{n}(X_1,X_2,\dots)$ converges in distribution to $(x_1,x_1,x_2,x_2,\dots)$, where $x_1>x_2>\cdots$ denote the ranked points of the Poisson point process on $(0,\infty)$ with intensity $\alpha\,x^{-\alpha-1}dx$. \end{lem} \begin{proof}[Proof of Lemma \ref{ordst}] There are $m_n=n(n+1)/2$ edges, including self--loops. Let us denote by $U_e$ the weight of edge $e\in\{1,\ldots,m_n\}$. The row sums are given by $\rho_i = \sum_{e:\, e\ni i} U_e$. We write $O_n$ for the set of off--diagonal edges $e$, i.e.\ edges of the form $e=\{i,j\}$ with $i\neq j$. Let $U_{e_1}\geq U_{e_2}\geq \cdots$ denote the ranked values of the i.i.d.\ random vector $(U_e)_{e\in O_n}$. Since there are $m_n-n$ edges in $O_n$, an application of Lemma \ref{le:PoiExt}(i) yields convergence in distribution \begin{equation}\label{sosh1} b_n^{-1}(U_{e_1},U_{e_2},\dots) \overset{d}{\underset{n\to\infty}{\longrightarrow}} (x_1,x_2,\dots)\,.
\end{equation} Each $e_i=\{u_i,v_i\}\in O_n$ identifies two row sums $\rho_{u_i}$ and $\rho_{v_i}$. Set $\Delta_i = \max\{\rho_{u_i}-U_{e_i},\rho_{v_i}-U_{e_i}\}$. Then, for every $k\in\mathds{N}$ and $\varepsilon>0$: \begin{equation}\label{sosh} \lim_{n\to\infty} \mathds{P}\left(\max_{1\leq \ell\leq k}\Delta_\ell \geq \varepsilon\,b_n\right) = 0\,. \end{equation} To prove this we use an estimate due to Soshnikov \cite{MR2081462}. Let $B_n$ denote the event that there exists no $i\in\{1,\dots,n\}$ such that $$ \rho_i > b_n^{\frac34 + \frac\alpha{8}}\;\text{ and }\; \rho_i - \max_{j} U_{i,j} > b_n^{\frac34 + \frac\alpha{8}}. $$ Then, from \cite{MR2081462} and \cite[Lemma 3]{auffinger-benarous-peche}, one has \begin{equation}\label{soshn} \lim_{n\to\infty}\mathds{P}(B_n) = 1\,. \end{equation} Clearly, on the event $B_n$, if $\max_{1\leq \ell\leq k}\Delta_\ell \geq \varepsilon\,b_n$, then $U_{e_k}\leq b_n^{\frac34 + \frac\alpha{8}}$, which has vanishing probability in the limit by \eqref{sosh1}. This proves \eqref{sosh}. For simplicity, we introduce the notation $R_{2\ell -1} = \max\{\rho_{u_{\ell}},\rho_{v_{\ell}}\}$, $R_{2\ell} = \min\{\rho_{u_{\ell}},\rho_{v_{\ell}}\}$. Therefore \eqref{sosh} and \eqref{sosh1} prove that \begin{equation}\label{sosh10} b_n^{-1}(R_1,R_2,R_3,R_4,\dots) \overset{d}{\underset{n\to\infty}{\longrightarrow}} (x_1,x_1,x_2,x_2,\dots)\,. \end{equation} It remains to show that for every fixed $k$: \begin{equation}\label{star} \lim_{n\to\infty} \mathds{P}\left(\cup_{1\leq i\leq 2k}\{R_i\neq X_i\}\right) = 0\,. \end{equation} By construction, we have $X_i\geq R_i$ for $i=1,2$.
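The inequality $X_i \geq R_i$ for $i=1,2$ is purely deterministic and can be seen on a finite sample. The following sketch (an illustration with our own choice of Pareto weights and matrix size, not part of the proof) builds a symmetric heavy-tailed weight matrix, locates the largest off--diagonal weight, and checks the inequality:

```python
import random

# Illustration (not from the paper): for a symmetric matrix with heavy-tailed
# weights, the two rows sharing the largest off-diagonal weight give (R1, R2),
# and the ranked row sums satisfy X1 >= R1 and X2 >= R2 by construction.
random.seed(2)
alpha, n = 0.5, 60
U = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):                 # fill upper triangle incl. diagonal
        u = (1.0 - random.random()) ** (-1.0 / alpha)   # Pareto(alpha) weight
        U[i][j] = U[j][i] = u

rho = [sum(row) for row in U]             # row sums (self-loops counted once)
X = sorted(rho, reverse=True)             # ranked row sums X1 >= X2 >= ...
# largest off-diagonal weight and its two row sums
_, ui, vi = max((U[i][j], i, j) for i in range(n) for j in range(i + 1, n))
R1, R2 = max(rho[ui], rho[vi]), min(rho[ui], rho[vi])
```

Since both $\rho_{u_1}$ and $\rho_{v_1}$ appear among the row sums, the largest row sum dominates $R_1$ and the second largest dominates $R_2$, in every realization.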
On the event $B_n$ described above, having $X_1>R_1$ or $X_2>R_2$ implies that there exists an edge $e\neq e_1$ such that $U_e \geq U_{e_1} - b_n^{\frac34 + \frac\alpha{8}}.$ However, this event has vanishing probability by \eqref{sosh1} and the fact that $b_n^{\delta-1}\max_i U_{i,i}\to 0$ in probability for all sufficiently small $\delta>0$ (indeed, by Lemma \ref{le:PoiExt}, $a_n ^{-1} \max_i U_{i,i}$ converges weakly to the Fr\'echet distribution; see the first comment after Lemma \ref{le:PoiExt}). Thanks to \eqref{soshn} this shows that $\mathds{P}(X_1>R_1 \;\text{or}\; X_2>R_2) \to 0$. Recursively, the probability of $X_{2i+1}> R_{2i+1}$ or $X_{2i+2}> R_{2i+2}$ on the event $B_n\cap \{X_j=R_j,\; \forall j=1,\dots,2i\}$ vanishes as $n\to\infty$. Indeed, at each step we have removed a row and a column corresponding to the largest off--diagonal weight and we may repeat the same reasoning as above. This proves \eqref{star} as required. \end{proof} \begin{proof}[Proof of Theorem \ref{th:inv}(ii)] Recall that $m_n=n(n+1)/2$. Observe that \begin{equation}\label{zetar} \sum_{i=1}^n\rho_i = 2 S_n + D_n \quad\text{where}\quad S_n:=\sum_{e \in O_n} U_e \quad\text{and}\quad D_n:=\sum_{i=1}^nU_{i,i}\,. \end{equation} Here, as in the previous proof, $O_n$ denotes the set of off--diagonal edges. For $\alpha\in(1,2)$, we have by the weak law of large numbers $S_n/m_n \to 1$ and $D_n/n \to 1$ in probability. Therefore \begin{equation}\label{as1} \lim_{n\to\infty}\frac1{m_n}\sum_{i=1}^n \rho_i = 2\,, \quad\text{in probability}. \end{equation} Theorem \ref{th:inv}(ii) thus follows directly from Lemma \ref{ordst} and \eqref{as1}. The same reasoning applies in the case $\alpha=1$, replacing the law of large numbers by the statement \eqref{eq:llnweaka1}, which now gives \eqref{as1} with $m_n$ replaced by $m_n w_{m_n}$.
\end{proof} \begin{proof}[Proof of Theorem \ref{th:inv}(i)] If $U_{e_1}\geq U_{e_2}\geq \cdots$ are the ranked values of the i.i.d.\ random vector $(U_e)_{e\in O_n}$ and $S_n$ is their sum as in \eqref{zetar}, then by Lemma \ref{le:PoiExt}(ii), replacing $n$ with $m_n$, we have \begin{equation}\label{pdamn} \left(\frac{U_{e_1}}{S_n},\frac{U_{e_2}}{S_n},\dots\right) \overset{d}{\underset{n\to\infty}{\longrightarrow}} \left( \frac{x_1}{\sum_{i=1}^\infty x_i}, \frac{x_2}{\sum_{i=1}^\infty x_i},\dots \right) \end{equation} where $x_1>x_2>\cdots$ denote the ranked points of the Poisson point process on $(0,\infty)$ with intensity $\alpha\,x^{-\alpha-1}dx$. Write $X_1,X_2,\dots$ for the ranked values of row sums as in Lemma \ref{ordst}, so that $\widetilde\rho_i = X_i/(2S_n + D_n)$, where $D_n,S_n$ are as in \eqref{zetar}. Let $$ Y_{2\ell -1} = \frac{X_{2\ell-1}}{2S_n + D_n} - \frac{U_{e_\ell}}{2S_n}\,,\;\; Y_{2\ell} = \frac{X_{2\ell}}{2S_n + D_n} - \frac{U_{e_\ell}}{2S_n}\,. $$ Thanks to \eqref{pdamn}, it is sufficient to prove that $\mathds{P}(\max_{1\leq i\leq 2k} |Y_i|> \varepsilon) \to 0$ as $n\to\infty$, for any fixed $\varepsilon>0$ and $k\in\mathds{N}$. This follows from the argument used in the proof of \eqref{sosh} and \eqref{star}. \end{proof}
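The limiting vector in \eqref{pdamn} is the Poisson--Dirichlet PD($\alpha,0$) vector, and the representation $x_i=\Gamma_i^{-1/\alpha}$ makes it easy to sample. The sketch below (an illustration with our own truncation level, not part of the proof) generates an approximate sample:

```python
import random

# Illustration (not from the paper): approximate a PD(alpha, 0) sample by
# normalizing the ranked points x_i = Gamma_i^(-1/alpha) of the PPP with
# intensity alpha * x^(-alpha-1) dx, truncated to n_points terms.
random.seed(1)
alpha, n_points = 0.5, 5000

gamma, xs = 0.0, []
for _ in range(n_points):
    gamma += random.expovariate(1.0)     # unit-rate Poisson arrival times
    xs.append(gamma ** (-1.0 / alpha))   # ranked PPP points, decreasing

total = sum(xs)
weights = [x / total for x in xs]        # approximate PD(alpha, 0) vector
```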
0903.3894
\section{Introduction} With the advent of quantum computing and quantum communication, it becomes increasingly important to develop ways for protecting quantum information against the adversarial effects of noise \cite{qecbook}. Researchers have developed many theoretical techniques for the protection of quantum information \cite{PhysRevLett.77.793,PhysRevA.54.1098,thesis97gottesman,PhysRevLett.78.405,ieee1998calderbank,PhysRevLett.79.3306,mpl1997zanardi,PhysRevLett.81.2594,kribs:180501,qic2006kribs,poulin:230504,arxiv2007brun}\ since Shor's original contribution to the theory of quantum error correction \cite{PhysRevA.52.R2493}. Quantum convolutional coding is a technique for protecting a stream of quantum information \cite{PhysRevLett.91.177902,arxiv2004olliv,isit2006grassl,ieee2006grassl,ieee2007grassl,isit2005forney,ieee2007forney,cwit2007aly,arx2007aly,arx2007wildeCED,arx2007wildeEAQCC,arx2008wildeUQCC,arx2008wildeGEAQCC,pra2009wilde} and is perhaps more valuable for quantum communication than it is for quantum computation (though see the tail-biting technique in Ref.~\cite{ieee2007forney}). Quantum convolutional codes bear similarities to classical convolutional codes \cite{book1999conv,mct2008book}. The encoding circuit for a quantum convolutional code consists of a single unitary repeatedly applied to the quantum data stream \cite{PhysRevLett.91.177902}. Decoding a quantum convolutional code consists of applying a syndrome-based version of the Viterbi decoding algorithm \cite{itit1967viterbi,ieee2007forney,PhysRevLett.91.177902}. The encoding circuit for a classical convolutional code has a particularly simple form. Given a mathematical description of a classical convolutional code, one can easily write down a shift register implementation for the encoding circuit \cite{book1999conv}.
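To make the classical analogy concrete, here is the textbook rate-1/2, constraint-length-3 encoder with generator polynomials $(1+D+D^{2},\,1+D^{2})$ written directly as a shift register (an illustrative example of the standard classical construction, not taken from this paper):

```python
# Illustrative classical analogue: the standard rate-1/2, constraint-length-3
# convolutional encoder with generators (1 + D + D^2, 1 + D^2), implemented
# directly as a two-cell shift register over GF(2).
def conv_encode(bits):
    m1 = m2 = 0                      # the two memory cells of the register
    out = []
    for b in bits:
        y1 = b ^ m1 ^ m2             # generator 1 + D + D^2  (binary 111)
        y2 = b ^ m2                  # generator 1 + D^2      (binary 101)
        out.append((y1, y2))
        m1, m2 = b, m1               # shift the register contents
    return out
```

Feeding in an impulse recovers the generator polynomials as the impulse response, which is exactly the simple mapping from mathematical description to circuit that the quantum case lacks.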
For this reason among others, deep space missions such as \textit{Voyager} and \textit{Pioneer} used classical convolutional codes to protect classical information \cite{ieee2007pollara}. A natural question is whether there exists such a simple mapping from the mathematical description of a quantum convolutional code to a quantum shift register implementation. Many researchers have investigated the mathematical constructions of quantum convolutional codes, but few \cite{arxiv2004olliv,isit2006grassl,ieee2006grassl,ieee2007grassl}\ have attempted to develop encoding circuits for them. The Ollivier-Tillich quantum convolutional encoding algorithm \cite{arxiv2004olliv} is similar to Gottesman's technique \cite{thesis97gottesman}\ for encoding a quantum block code. The Grassl-R\"{o}tteler encoding algorithm \cite{isit2006grassl,ieee2006grassl,ieee2007grassl} encodes a quantum convolutional code with a sequence of elementary encoding operations. Each of these elementary encoding operations has a mathematical representation as a polynomial matrix, and each elementary encoding operation builds up the mathematical representation of the quantum convolutional code. The Ollivier-Tillich and Grassl-R\"{o}tteler encoding algorithms leave a practical question unanswered. Neither algorithm determines how much memory a given encoding circuit requires, and in the Grassl-R\"{o}tteler algorithm, it is not even explicitly clear how the encoding circuit obeys a convolutional structure (it obeys a periodic structure, but the convolutional structure demands that the encoding circuit consist of the same single unitary applied repeatedly on the quantum data stream). In this paper, I develop the theory of quantum shift register circuits, using tools familiar from linear system theory \cite{kalaith1980book}\ and classical convolutional codes \cite{book1999conv}. I explicitly show how to connect quantum shift register circuits together so that they encode a quantum convolutional code.
I develop a general technique for reducing the amount of memory that the quantum shift register encoding circuit requires. Theorem~\ref{thm:memory-CSS}\ of this paper answers the above question concerning memory use in a CSS\ quantum convolutional code---it determines the amount of memory that a given CSS\ quantum convolutional code requires, as a function of the mathematical representation of the code. I also show how to implement any elementary operation from the shift-invariant Clifford group \cite{isit2006grassl,arx2007wildeEAQCC}\ with a quantum shift register circuit. These quantum shift register circuits might be of interest to experimentalists wishing to implement a quantum error-correcting code that has a simple encoding circuit but, unlike a quantum block code, has a memory structure. Classical convolutional codes were most useful in the early days of computing and communication because they offer a better performance/complexity trade-off than a block code that encodes the same number of information bits \cite{ieee2007forney}. At the current stage of development, experimentalists have the ability to perform few-qubit interactions, and it might be useful to exploit these few-qubit interactions on a quantum data stream, rather than on a single block of qubits. Other authors have suggested the idea of a quantum shift register \cite{prsa2000grassl,arx2001QSR}, but it is not clear how we can apply the ideas in these papers to the encoding of a quantum convolutional code. Additionally, another set of authors labeled their work as a \textquotedblleft quantum shift register\textquotedblright\ \cite{ieee2001QSR}, but this quantum shift register is not useful for protecting quantum information (nor is it even useful for coherent quantum operations). The closest work to this one is the discussion in Section~IIB\ of Ref.~\cite{arx2007poulin}.
However, Poulin \textit{et al.} did not develop the quantum shift register idea in much detail because their focus was on developing the theory of decoding quantum turbo codes. The discussion in Ref.~\cite{arx2007poulin} is one of the inspirations for this work (as well as the initial work of Ollivier and Tillich \cite{PhysRevLett.91.177902,arxiv2004olliv}), and this paper is an extension of that discussion. The most natural implementation of a quantum shift register circuit may be in a spin chain \cite{PhysRevLett.91.207901,B07}. Such an implementation requires repeatedly acting with the encoding unitary at the sender's register and allowing the Hamiltonian of the spin chain to shift the qubits by a certain amount. Further investigation is necessary to determine if this scheme would be feasible. Another natural implementation of a quantum shift register circuit is with linear optical circuits \cite{Knill:2001:46}. One can implement the feedback necessary for this circuit by redirecting light beams with mirrors. The difficulty with this approach is that controlled-unitary encoding is probabilistic. I structure this work as follows. The next section begins with examples that illustrate the operation of a quantum shift register circuit. I then present a simple example of a quantum shift register circuit that encodes a CSS\ quantum convolutional code. This example demonstrates the main ideas for constructing quantum shift register encoding circuits. First, build a quantum shift register circuit for each elementary encoding operation in the Grassl-R\"{o}tteler encoding algorithm. Then, connect the outputs of the first quantum shift register circuit to the inputs of the next one and so on for all of the elementary quantum shift register circuits. Finally, simplify the device by determining how to \textquotedblleft commute gates through the memory\textquotedblright\ of the larger quantum shift register circuit (discussed in more detail later).
This last step allows us to reduce the amount of memory that the quantum shift register circuit requires. Section~\ref{sec:finite-depth-CNOTs} follows this example by developing two types of finite-depth controlled-NOT (CNOT)\ quantum shift register circuits (I explain the definition of \textquotedblleft finite-depth\textquotedblright\ later on). Section~\ref{sec:memory-comp}\ then states and proves Theorem~\ref{thm:memory-CSS}---this theorem gives a formula to determine the amount of memory that a given CSS\ quantum convolutional code requires. I then develop the theory of quantum shift register circuits with controlled-phase gates and follow by giving the encoding circuit for the Forney-Grassl-Guha code~\cite{ieee2007forney}. Grassl and R\"{o}tteler stated that the encoding circuit for this code requires two frames of memory qubits \cite{isit2006grassl}, but I instead find with this paper's technique that the minimum amount it requires is five frames. Section~\ref{sec:infinite-depth} then develops quantum shift register circuits for infinite-depth operations, which are important for the encoding of Type II CSS\ entanglement-assisted quantum convolutional codes \cite{arx2007wildeEAQCC}. Theorem~\ref{thm:memory-CSS}\ also determines the amount of memory required by these codes. I then conclude with some observations and open questions. \section{Examples of Quantum Shift Register Circuits} Let us begin with a simple example to show how we can build up an arbitrary finite-depth CNOT operation. Consider the full set of Pauli operators on two qubits \cite{qecbook,book2000mikeandike}: \[ \begin{array} [c]{cc} Z & I,\\ I & Z,\\ X & I,\\ I & X. \end{array} \] We can form a symplectic representation of the full set of Pauli operators for two qubits with the following matrix \cite{qecbook,book2000mikeandike}: \[ \left[ \left.
\begin{array} [c]{cc} 1 & 0\\ 0 & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc} 0 & 0\\ 0 & 0\\ 1 & 0\\ 0 & 1 \end{array} \right] , \] where the entries to the left of the vertical bar correspond to the $Z$ operators and the entries to the right of the vertical bar correspond to the $X$ operators. Suppose that we perform a CNOT\ gate from the first qubit to the second qubit conditional on a bit $f_{0}$. We perform the gate if $f_{0}=1$ and do not perform it otherwise. The above Pauli operators transform as follows: \[ \left[ \left. \begin{array} [c]{cc} 1 & 0\\ f_{0} & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc} 0 & 0\\ 0 & 0\\ 1 & f_{0}\\ 0 & 1 \end{array} \right] . \] \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=1.060300in, natwidth=3.853600in, height=0.5812in, width=2.0358in ] {figures/simple-CNOT.pdf} \caption{The above figure depicts a simple CNOT\ transformation conditional on the bit $f_{0}$. The circuit does not apply the gate if $f_{0}=0$ and applies it if $f_{0}=1$.} \label{fig:simple-CNOT} \end{center} \end{figure} Figure~\ref{fig:simple-CNOT}\ depicts the \textquotedblleft quantum shift register circuit\textquotedblright\ that implements this transformation (this device is not really a quantum shift register circuit because it does not exploit a set of memory qubits). Let us incorporate one frame of memory qubits so that the circuit really now becomes a quantum shift register circuit. Consider the circuit in Figure~\ref{fig:one-delay-CNOT}. The first two qubits are fed into the device and the second one is the target of a CNOT\ gate from a future frame of qubits (conditional on the bit $f_{1}$). The two qubits are then stored as two memory qubits (swapped out with what was previously there). On the next cycle, the two qubits are fed out and the first qubit that was previously in memory acts on the second qubit in a frame that is in the past with respect to itself.
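Before adding memory, note that the conditional-CNOT symplectic rule above is easy to mechanize. The sketch below (standard stabilizer-formalism bit arithmetic; the function name is our own) applies the conditional CNOT to a Pauli bit vector $(z_1, z_2 \,|\, x_1, x_2)$ and reproduces the matrix action shown above:

```python
# Standard stabilizer-formalism bookkeeping (illustrative, not code from the
# paper): a two-qubit Pauli is a bit vector (z1, z2 | x1, x2). A CNOT from
# qubit 1 to qubit 2, applied when f0 = 1, maps x2 -> x2 + f0*x1 and
# z1 -> z1 + f0*z2 (mod 2), exactly as in the symplectic matrix above.
def cnot(pauli, f0=1):
    z1, z2, x1, x2 = pauli
    if f0:
        z1 = (z1 + z2) % 2   # Z on the target propagates back to the control
        x2 = (x2 + x1) % 2   # X on the control propagates to the target
    return (z1, z2, x1, x2)
```

For example, the generator $X \otimes I$ maps to $X \otimes X$ and $I \otimes Z$ maps to $Z \otimes Z$, as expected for a CNOT.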
We would expect the $X$ variable of the first outgoing qubit to propagate one frame into the past with respect to itself and the $Z$ variable of the second incoming qubit to propagate one frame into the future with respect to itself. We make this idea more precise in the analysis below. \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=2.559800in, natwidth=4.726200in, height=1.1156in, width=2.0358in ] {figures/one-delay-CNOT.pdf} \caption{A quantum shift register device that incorporates one frame of memory qubits.} \label{fig:one-delay-CNOT} \end{center} \end{figure} We can analyze this situation with a set of recursive equations. Let $x_{1}\left[ n\right] $ denote the bit representation of the $X$ Pauli operator for the first incoming qubit at time $n$ and let $z_{1}\left[ n\right] $ denote the bit representation of the $Z$ Pauli operator for the first incoming qubit at time $n$. Let $x_{2}\left[ n\right] $ and $z_{2}\left[ n\right] $ denote similar quantities for the second incoming qubit at time $n$. Let $m_{1}^{x}\left[ n\right] $ denote the bit representation of the $X$ Pauli operator acting on the first memory qubit at time $n$ and let $m_{1}^{z}\left[ n\right] $ denote the bit representation of the $Z$ Pauli operator acting on the first memory qubit at time $n$. Let $m_{2}^{x}\left[ n\right] $ and $m_{2}^{z}\left[ n\right] $ denote similar quantities for the second memory qubit.
In the symplectic bit vector notation, we denote the \textquotedblleft Z\textquotedblright\ part of the Pauli operators acting on these four qubits at time $n$ as \[ \mathbf{z}\left[ n\right] \equiv\left[ \begin{array} [c]{cccc} z_{1}\left[ n\right] & z_{2}\left[ n\right] & m_{1}^{z}\left[ n-1\right] & m_{2}^{z}\left[ n-1\right] \end{array} \right] , \] and the \textquotedblleft X\textquotedblright\ part by \[ \mathbf{x}\left[ n\right] \equiv\left[ \begin{array} [c]{cccc} x_{1}\left[ n\right] & x_{2}\left[ n\right] & m_{1}^{x}\left[ n-1\right] & m_{2}^{x}\left[ n-1\right] \end{array} \right] . \] The symplectic vector for the inputs is then \begin{equation} \left[ \left. \begin{array} [c]{c} \mathbf{z}\left[ n\right] \end{array} \right\vert \begin{array} [c]{c} \mathbf{x}\left[ n\right] \end{array} \right] . \label{eq:symplectic-vector} \end{equation} I prefer this bit notation of Poulin \textit{et al.} \cite{arx2007poulin} because it is more flexible for quantum shift register circuits. It allows us to capture the evolution of an arbitrary tensor product of Pauli operators acting on these four qubits at time $n$. At time $n$, the two incoming qubits and the \textit{previous} memory qubits from time $n-1$ are fed into the quantum shift register device and the CNOT\ gate acts on them. The notation in Figure~\ref{fig:one-delay-CNOT} indicates that there is an implicit swap at the end of the operation. The incoming qubits get fed into the memory, and the previous memory qubits get fed out as output. Let $x_{1}^{\prime}\left[ n\right] $, $z_{1}^{\prime }\left[ n\right] $, $x_{2}^{\prime}\left[ n\right] $, and $z_{2}^{\prime }\left[ n\right] $ denote the respective output variables. The symplectic transformation for the CNOT\ gate is \[ \left[ \left.
\begin{array} [c]{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & f_{1} & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{array} \right\vert \begin{array} [c]{cccc} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & f_{1} & 1 & 0\\ 0 & 0 & 0 & 1 \end{array} \right] . \] The above matrix postmultiplies the vector in (\ref{eq:symplectic-vector}) to give the following output vector. We denote the \textquotedblleft Z\textquotedblright\ part of the output Pauli operators acting on these four qubits at time $n$ as \[ \mathbf{z}^{\prime}\left[ n\right] \equiv\left[ \begin{array} [c]{cccc} m_{1}^{z}\left[ n\right] & m_{2}^{z}\left[ n\right] & z_{1}^{\prime }\left[ n\right] & z_{2}^{\prime}\left[ n\right] \end{array} \right] , \] and the \textquotedblleft X\textquotedblright\ part by \[ \mathbf{x}^{\prime}\left[ n\right] \equiv\left[ \begin{array} [c]{cccc} m_{1}^{x}\left[ n\right] & m_{2}^{x}\left[ n\right] & x_{1}^{\prime }\left[ n\right] & x_{2}^{\prime}\left[ n\right] \end{array} \right] , \] with the change of locations corresponding to the implicit swap. The symplectic vector for the outputs is then \begin{equation} \left[ \left. \begin{array} [c]{c} \mathbf{z}^{\prime}\left[ n\right] \end{array} \right\vert \begin{array} [c]{c} \mathbf{x}^{\prime}\left[ n\right] \end{array} \right] .
\end{equation} It is simpler to describe the above transformation as a set of recursive \textquotedblleft update\textquotedblright\ equations: \begin{align*} x_{1}^{\prime}\left[ n\right] & =m_{1}^{x}\left[ n-1\right] ,\\ z_{1}^{\prime}\left[ n\right] & =m_{1}^{z}\left[ n-1\right] +f_{1} z_{2}\left[ n\right] ,\\ x_{2}^{\prime}\left[ n\right] & =m_{2}^{x}\left[ n-1\right] ,\\ z_{2}^{\prime}\left[ n\right] & =m_{2}^{z}\left[ n-1\right] ,\\ m_{1}^{x}\left[ n\right] & =x_{1}\left[ n\right] ,\\ m_{1}^{z}\left[ n\right] & =z_{1}\left[ n\right] ,\\ m_{2}^{x}\left[ n\right] & =x_{2}\left[ n\right] +f_{1}m_{1}^{x}\left[ n-1\right] ,\\ m_{2}^{z}\left[ n\right] & =z_{2}\left[ n\right] . \end{align*} Some substitutions simplify this set of recursive equations so that it becomes the following set: \begin{align*} x_{1}^{\prime}\left[ n\right] & =x_{1}\left[ n-1\right] ,\\ z_{1}^{\prime}\left[ n\right] & =z_{1}\left[ n-1\right] +f_{1} z_{2}\left[ n\right] ,\\ x_{2}^{\prime}\left[ n\right] & =x_{2}\left[ n-1\right] +f_{1} x_{1}\left[ n-2\right] ,\\ z_{2}^{\prime}\left[ n\right] & =z_{2}\left[ n-1\right] . \end{align*} We can transform this set of equations into the \textquotedblleft$D$-domain\textquotedblright\ with the $D$-transform \cite{book1999conv}. The set transforms as follows: \begin{align*} x_{1}^{\prime}\left( D\right) & =Dx_{1}\left( D\right) ,\\ z_{1}^{\prime}\left( D\right) & =D\left( z_{1}\left( D\right) +f_{1}D^{-1}z_{2}\left( D\right) \right) ,\\ x_{2}^{\prime}\left( D\right) & =D\left( x_{2}\left( D\right) +f_{1}Dx_{1}\left( D\right) \right) ,\\ z_{2}^{\prime}\left( D\right) & =Dz_{2}\left( D\right) . \end{align*} This set of transformations is linear, and we can write them as the following matrix equation: \[ D\left[ \left. \begin{array} [c]{cc} 1 & 0\\ f_{1}D^{-1} & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc} 0 & 0\\ 0 & 0\\ 1 & f_{1}D\\ 0 & 1 \end{array} \right] .
\] The factor of $D$ accounts for the unit delay necessary to implement this device, but it is not particularly relevant for the purposes of the transformation (we might as well say that this quantum shift register device implements the transformation without the factor of $D$). Postmultiplying the vector \[ \left[ \left. \begin{array} [c]{cc} z_{1}\left( D\right) & z_{2}\left( D\right) \end{array} \right\vert \begin{array} [c]{cc} x_{1}\left( D\right) & x_{2}\left( D\right) \end{array} \right] \] by the above matrix gives the output vector \[ \left[ \left. \begin{array} [c]{cc} z_{1}^{\prime}\left( D\right) & z_{2}^{\prime}\left( D\right) \end{array} \right\vert \begin{array} [c]{cc} x_{1}^{\prime}\left( D\right) & x_{2}^{\prime}\left( D\right) \end{array} \right] . \] The above transformation confirms our intuition concerning the propagation of $X$ and $Z$ variables. The $D$ term on the right side of the transformation matrix indicates that the $X$ variable of the first qubit propagates one frame into the past with respect to itself, and the $D^{-1}$ term on the left side of the matrix indicates that the $Z$ variable of the second qubit propagates one frame into the future with respect to itself. \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=2.559800in, natwidth=4.726200in, height=1.1156in, width=2.0358in ] {figures/one-delay-combo-CNOT.pdf} \caption{The circuit in the above figure combines the circuits in Figures~\ref{fig:simple-CNOT} and \ref{fig:one-delay-CNOT}.} \label{fig:one-delay-combo-CNOT} \end{center} \end{figure} We now consider combining the different quantum shift register circuits together. Suppose that we connect the outputs of the device in Figure~\ref{fig:simple-CNOT} to the inputs of the device in Figure~\ref{fig:one-delay-CNOT}.
Figure~\ref{fig:one-delay-combo-CNOT}% \ depicts the resulting quantum shift register circuit, and it follows that the resulting transformation in the $D$-domain is% \begin{equation} D\left[ \left. \begin{array} [c]{cc}% 1 & 0\\ f_{0}+f_{1}D^{-1} & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ 1 & f_{0}+f_{1}D\\ 0 & 1 \end{array} \right] . \label{eq:unit-delay-combo}% \end{equation} Now consider the \textquotedblleft two-delay transformation\textquotedblright% \ in Figure~\ref{fig:two-delay-CNOT}. The circuit is similar to the one in Figure~\ref{fig:one-delay-CNOT}, with the exception that the first outgoing qubit acts on the second incoming qubit and the second incoming qubit is delayed two frames with respect to the first outgoing qubit. We now expect that the $X$ variable propagates two frames into the past, while the $Z$ variable propagates two frames into the future. The transformation should be as follows:% \begin{equation} D^{2}\left[ \left. \begin{array} [c]{cc}% 1 & 0\\ f_{2}D^{-1} & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ 1 & f_{2}D\\ 0 & 1 \end{array} \right] . \label{eq:two-delay}% \end{equation} An analysis similar to the one for the \textquotedblleft one-delay\textquotedblright\ CNOT transformation shows that the circuit indeed implements the above transformation.% \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=3.246500in, natwidth=4.726200in, height=1.6821in, width=2.437in ]% {figures/two-delay-CNOT.pdf}% \caption{The circuit in the above figure implements a two-delay CNOT transformation.}% \label{fig:two-delay-CNOT}% \end{center} \end{figure} Let us connect the outputs of the device in Figure~\ref{fig:one-delay-combo-CNOT} to the inputs of the device in Figure~\ref{fig:two-delay-CNOT}. 
The resulting $D$-domain transformation should be the multiplication of the transformation in (\ref{eq:unit-delay-combo}) with that in (\ref{eq:two-delay}), and an analysis with recursive equations confirms that the transformation is the following one:% \begin{equation} D^{3}\left[ \left. \begin{array} [c]{cc}% 1 & 0\\ f_{0}+f_{1}D^{-1}+f_{2}D^{-2} & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ 1 & f_{0}+f_{1}D+f_{2}D^{2}\\ 0 & 1 \end{array} \right] . \label{eq:two-delay-combo}% \end{equation} The resulting device uses three frames of memory qubits to implement the transformation. This amount of memory seems like it may be too much, considering that the output data only depends on the input from two frames into the past. Is there any way to save on memory consumption?% \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=4.246200in, natwidth=8.007300in, height=1.785in, width=3.3399in ]% {figures/in-between-combo-CNOT.pdf}% \caption{The above circuit connects the outputs of the circuit in Figure~\ref{fig:one-delay-combo-CNOT}\ to the inputs of the circuit in Figure~\ref{fig:two-delay-CNOT}.}% \label{fig:in-between-circuit}% \end{center} \end{figure} First, let us connect the outputs of the circuit in Figure~\ref{fig:one-delay-combo-CNOT} to the inputs of the circuit in Figure~\ref{fig:two-delay-CNOT}. Figure~\ref{fig:in-between-circuit}\ depicts the resulting device. In this \textquotedblleft combo\textquotedblright% \ device, the target of the CNOT\ gate conditional on $f_{2}$ does not act on the source of the CNOT\ gate conditional on $f_{1}$. Therefore, we can commute the \textquotedblleft$f_{2}$-gate\textquotedblright\ with the \textquotedblleft$f_{1}$-gate.\textquotedblright\ Now, we can actually then \textquotedblleft commute this gate through the memory\textquotedblright% \ because it does not matter whether this CNOT\ gate acts on the qubits before they pass through the memory or after they come out. 
It then follows that the last frame of memory qubits is not necessary because no gate acts on these qubits. Figure~\ref{fig:two-delay-combo-CNOT}\ depicts the simplified circuit. It is also straightforward to check that the resulting transformation is as follows:% \begin{equation} D^{2}\left[ \left. \begin{array} [c]{cc}% 1 & 0\\ f_{0}+f_{1}D^{-1}+f_{2}D^{-2} & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ 1 & f_{0}+f_{1}D+f_{2}D^{2}\\ 0 & 1 \end{array} \right] , \label{eq:two-delay-combo-reduced}% \end{equation} where the premultiplying delay factor in (\ref{eq:two-delay-combo-reduced}) is now $D^{2}$ instead of $D^{3}$ as in (\ref{eq:two-delay-combo}).% \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=3.246500in, natwidth=5.193200in, height=1.5342in, width=2.437in ]% {figures/two-delay-combo-CNOT.pdf}% \caption{The above circuit reduces the amount of memory required to implement the transformation in (\ref{eq:two-delay-combo}).}% \label{fig:two-delay-combo-CNOT}% \end{center} \end{figure} \section{General Encoding Algorithm} The procedure in the previous section allows us to simplify the circuit by eliminating the last frame of memory qubits. This procedure of determining whether we can \textquotedblleft commute gates through the memory\textquotedblright\ is a general one that we can employ for reducing memory in quantum shift register circuits. In the above example, we can determine the number of frames of memory that are necessary by considering the absolute degree of the polynomial transformation in (\ref{eq:two-delay-combo}) (without including the $D^{3}$ prefactor).
The \textit{absolute degree} $\left\vert \deg\right\vert \left( B\left( D\right) \right) $ of a polynomial matrix $B\left( D\right) $ is% \[ \left\vert \deg\right\vert \left( B\left( D\right) \right) \equiv \max\left\{ d_{1},d_{2}\right\} , \] where% \begin{align*} d_{1} & \equiv\max_{i,j}\left\{ \deg\left( \left[ B\left( D\right) \right] _{ij}\right) \right\} ,\\ d_{2} & \equiv\max_{i,j}\left\{ \left\vert \text{del}\left( \left[ B\left( D\right) \right] _{ij}\right) \right\vert \right\} , \end{align*} del$\left( b\left( D\right) \right) $ is the lowest power in the polynomial $b\left( D\right) $, and the absolute degree is modulo any prefactor terms such as the $D^{3}$ in (\ref{eq:two-delay-combo}). In the case of the transformation in (\ref{eq:two-delay-combo}), the absolute degree is equal to two, so we should expect to have two frames of memory qubits. Theorem~\ref{thm:memory-CSS}\ generalizes this idea by showing that the absolute degree of an encoding matrix corresponds to the amount of memory that a CSS\ quantum convolutional code requires. The procedure in the previous section demonstrates a general procedure for constructing quantum shift register circuits for quantum convolutional codes. We can break the encoding operation into elementary operations as the Grassl-R\"{o}tteler encoding algorithm does~\cite{ieee2006grassl,isit2006grassl,ieee2007grassl,arx2007wildeEAQCC}. The general procedure implements each elementary operation with a quantum shift register circuit, connects the outputs of one quantum shift register circuit to the inputs of the next, and determines if it is possible to \textquotedblleft commute gates through memory\textquotedblright\ as shown in the above example. This procedure produces a quantum shift register encoding circuit that uses the minimal amount of memory. 
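As a concrete check of this definition, the absolute degree is straightforward to compute mechanically. The sketch below (function and variable names are my own, not from any established library) represents each GF(2) Laurent-polynomial entry as a set of exponents of $D$ and recovers an absolute degree of two for the transformation in (\ref{eq:two-delay-combo}), modulo the $D^{3}$ prefactor and taking $f_{0}=f_{1}=f_{2}=1$:

```python
# Sketch: absolute degree of a GF(2) Laurent-polynomial matrix.
# Each entry is a set of exponents of D (empty set = zero polynomial).

def abs_degree(matrix):
    """max{d1, d2}: d1 is the largest exponent appearing in any entry,
    d2 is the largest magnitude of a lowest (possibly negative) exponent."""
    d1 = max((max(entry) for row in matrix for entry in row if entry), default=0)
    d2 = max((abs(min(entry)) for row in matrix for entry in row if entry), default=0)
    return max(d1, d2)

# Entries of (eq:two-delay-combo) with f0 = f1 = f2 = 1: the nontrivial
# entries are f0 + f1 D^-1 + f2 D^-2 (the "Z" side) and
# f0 + f1 D + f2 D^2 (the "X" side).
B = [
    [{0}, set(), set(), set()],
    [{0, -1, -2}, {0}, set(), set()],
    [set(), set(), {0}, {0, 1, 2}],
    [set(), set(), set(), {0}],
]
assert abs_degree(B) == 2   # two frames of memory qubits
```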
\section{Example of a Quantum Shift Register Encoding Circuit for a CSS\ Quantum Convolutional Code} Let us consider a simple example of a CSS\ quantum convolutional code \cite{PhysRevA.54.1098,PhysRevLett.77.793}. Its stabilizer matrix \cite{arxiv2004olliv,isit2006grassl} is as follows:% \begin{equation} \left[ \left. \begin{array} [c]{ccc}% 0 & 0 & 0\\ D & 1 & 1+D \end{array} \right\vert \begin{array} [c]{ccc}% 1 & D & 1+D\\ 0 & 0 & 0 \end{array} \right] . \label{eq:simple-example-CSS-code}% \end{equation} I now show how to encode the above quantum convolutional code using a slight modification of the Grassl-R\"{o}tteler encoding algorithm for CSS\ codes \cite{ieee2007grassl}. One begins with the stabilizer matrix for two ancilla qubits per frame:% \[ \left[ \left. \begin{array} [c]{ccc}% 0 & 0 & 0\\ 0 & 1 & 0 \end{array} \right\vert \begin{array} [c]{ccc}% 1 & 0 & 0\\ 0 & 0 & 0 \end{array} \right] . \] The first ancilla qubit of every frame is in the state $\left\vert +\right\rangle $ and the second ancilla qubit of every frame is in the state $\left\vert 0\right\rangle $. First send the three qubits through a quantum shift register device that implements a\ CNOT$\left( 3,2\right) \left( 1+D^{-1}\right) $. This notation indicates that there is a CNOT\ gate from the third qubit to the second in the same frame and to the second in a future frame. The stabilizer becomes% \begin{equation} \left[ \left. \begin{array} [c]{ccc}% 0 & 0 & 0\\ 0 & 1 & 1+D \end{array} \right\vert \begin{array} [c]{ccc}% 1 & 0 & 0\\ 0 & 0 & 0 \end{array} \right] . \label{eq:first-CNOTs}% \end{equation} Then send the three qubits through a quantum shift register device that performs a CNOT$\left( 1,2\right) \left( D\right) $ (indicating a CNOT\ from the first qubit in one frame to the second in a delayed frame) and another quantum shift register device that performs a CNOT$\left( 1,3\right) \left( 1+D\right) $. The stabilizer becomes% \begin{equation} \left[ \left. 
\begin{array} [c]{ccc}% 0 & 0 & 0\\ D & 1 & 1+D \end{array} \right\vert \begin{array} [c]{ccc}% 1 & D & 1+D\\ 0 & 0 & 0 \end{array} \right] , \label{eq:second-CNOTs}% \end{equation} and is now encoded. Note that the above circuit is a \textquotedblleft classical\textquotedblright\ circuit in the sense that it uses only CNOT gates in its implementation. Figure~\ref{fig:CSS-full-circuit}\ depicts the quantum shift register circuit corresponding to the above operations.% \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=10.400200in, natwidth=12.174000in, height=2.1326in, width=3.4402in ]% {figures/CSS-full-circuit.pdf}% \caption{The above circuit implements the set of transformations outlined in (\ref{eq:first-CNOTs}-\ref{eq:second-CNOTs}).}% \label{fig:CSS-full-circuit}% \end{center} \end{figure} It again seems that the circuit in Figure~\ref{fig:CSS-full-circuit} is wasteful in memory consumption. Is there anything we can do to simplify this circuit? First notice that the target qubit of the CNOT\ gate in the second quantum shift register is the same as the target qubit of the second CNOT\ gate in the first quantum shift register. It follows that these two gates commute so that we can act with the CNOT\ gate in the second quantum shift register before acting with the second CNOT\ gate of the first quantum shift register. But we can do even better. Acting first with the CNOT\ gate in the second quantum shift register is equivalent to having it act before the first frame of memory qubits gets delayed. Figure~\ref{fig:CSS-mod-circuit}% \ depicts this simplification. 
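Before simplifying further, the stabilizer bookkeeping in (\ref{eq:first-CNOTs}) and (\ref{eq:second-CNOTs}) can be verified symbolically. The following sketch (the representation and helper names are my own) stores each Laurent-polynomial entry as a frozenset of exponents of $D$ and applies each CNOT$\left( s,t\right) \left( f\left( D\right) \right) $ as a pair of column updates, $f(D)$ times the source column added to the target column of the \textquotedblleft X\textquotedblright\ matrix and $f(D^{-1})$ times the target column added to the source column of the \textquotedblleft Z\textquotedblright\ matrix, recovering the encoded stabilizer in (\ref{eq:simple-example-CSS-code}):

```python
# Sketch (GF(2) Laurent polynomials as frozensets of exponents of D):
# verify that CNOT(3,2)(1+D^-1), CNOT(1,2)(D), CNOT(1,3)(1+D) take the
# unencoded stabilizer to the encoded one. Qubits are 0-indexed here.

def pmul(f, g):
    """Product of two GF(2) Laurent polynomials."""
    out = set()
    for a in f:
        for b in g:
            out ^= {a + b}      # coefficients add mod 2
    return frozenset(out)

def padd(f, g):
    return frozenset(set(f) ^ set(g))

def rev(f):
    """Substitute D -> D^-1."""
    return frozenset(-a for a in f)

def cnot(Z, X, s, t, f):
    """CNOT(s, t)(f(D)): X col t += f(D) * col s; Z col s += f(D^-1) * col t."""
    for row in X:
        row[t] = padd(row[t], pmul(f, row[s]))
    for row in Z:
        row[s] = padd(row[s], pmul(rev(f), row[t]))

ZERO, ONE, D = frozenset(), frozenset({0}), frozenset({1})
# Unencoded stabilizer: X on the first qubit, Z on the second qubit.
Z = [[ZERO, ZERO, ZERO], [ZERO, ONE, ZERO]]
X = [[ONE, ZERO, ZERO], [ZERO, ZERO, ZERO]]

cnot(Z, X, 2, 1, frozenset({0, -1}))   # CNOT(3,2)(1 + D^-1)
cnot(Z, X, 0, 1, D)                    # CNOT(1,2)(D)
cnot(Z, X, 0, 2, frozenset({0, 1}))    # CNOT(1,3)(1 + D)

assert Z[1] == [D, ONE, frozenset({0, 1})]   # (D, 1, 1+D)
assert X[0] == [ONE, D, frozenset({0, 1})]   # (1, D, 1+D)
```

The intermediate values also match: after the first CNOT the \textquotedblleft Z\textquotedblright\ row reads $(0,1,1+D)$, exactly as in (\ref{eq:first-CNOTs}).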
\begin{figure} [ptb] \begin{center} \includegraphics[ natheight=6.447200in, natwidth=12.333100in, height=1.9873in, width=3.4402in ]% {figures/CSS-mod-circuit.pdf}% \caption{The above figure depicts a simplification of the circuit in Figure~\ref{fig:CSS-full-circuit}.}% \label{fig:CSS-mod-circuit}% \end{center} \end{figure} But glancing at Figure~\ref{fig:CSS-mod-circuit}, it is now clear that the second quantum shift register circuit no longer serves any purpose. We may remove it from the circuit. Figure~\ref{fig:CSS-final-mod-circuit}\ displays the resulting simplified circuit.% \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=4.946700in, natwidth=9.020000in, height=1.9121in, width=3.3399in ]% {figures/CSS-final-mod-circuit.pdf}% \caption{The above figure depicts a simplified version of the circuit in Figure~\ref{fig:CSS-mod-circuit} where we have removed the second unnecessary quantum shift register circuit. The above circuit uses less memory than the one in Figure~\ref{fig:CSS-mod-circuit}, while still effecting the same transformation.}% \label{fig:CSS-final-mod-circuit}% \end{center} \end{figure} We can apply a similar logic to the two gates in the second quantum shift register of Figure~\ref{fig:CSS-final-mod-circuit} because the two gates there commute with the preceding gates. Performing a similar simplification and elimination of the last frame of memory qubits leads to the final circuit. Figure~\ref{fig:CSS-circuit}\ depicts the quantum shift register circuit that encodes this quantum convolutional code with one frame of memory qubits.% \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=3.586400in, natwidth=6.634000in, height=1.7755in, width=3.3399in ]% {figures/CSS-circuit.pdf}% \caption{The circuit in the above figure is a quantum shift register encoding circuit for the CSS quantum convolutional code in (\ref{eq:simple-example-CSS-code}). 
}% \label{fig:CSS-circuit}% \end{center} \end{figure} The overall encoding matrix for this code is% \begin{align*} & \text{CNOT}\left( 3,2\right) \left( 1+D^{-1}\right) \text{CNOT}\left( 1,2\right) \left( D\right) \text{CNOT}\left( 1,3\right) \left( 1+D\right) \\ & =\left[ \left. \begin{array} [c]{ccc}% 1 & 0 & 0\\ D & 1 & D+1\\ D^{-1}+1 & 0 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{array} \right\vert \begin{array} [c]{ccc}% 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 1 & D & D+1\\ 0 & 1 & 0\\ 0 & D^{-1}+1 & 1 \end{array} \right] . \end{align*} The absolute degree of the encoding matrix is one, and thus, this CSS\ code requires one frame of memory qubits. Theorem~\ref{thm:memory-CSS} generalizes this result to show that the memory of the encoding circuit for any CSS\ quantum convolutional code is given by the absolute degree of the encoding matrix for the circuit. \section{Primitive Quantum Shift Register Circuits for CSS\ Quantum Convolutional Codes} \label{sec:finite-depth-CNOTs}In this section, I outline some basic primitive operations that are useful building blocks for the quantum shift register circuits of CSS\ (and non-CSS) quantum convolutional codes. I illustrate delay elements and finite-depth CNOT\ operations. \subsection{Delay Operations} The simplest operation that we can perform with a quantum shift register circuit is to delay one qubit with respect to the others in a given frame. The way to implement this operation is simply to insert a memory element on the qubit that we wish to delay. Figure~\ref{fig:delay}\ depicts this delay operation. 
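Viewed classically, this memory element is simply a unit-delay cell of a shift register. A tiny sketch (names mine) models a $d$-frame delay line as a FIFO:

```python
# A d-frame delay on one qubit's classical data path, modeled as a FIFO
# (sketch only; a real device delays the qubit itself, not a copied value).
from collections import deque

def delay_line(stream, d=1):
    """Delay a stream of frame values by d frames, emitting 0s at first."""
    mem = deque([0] * d)
    out = []
    for s in stream:
        mem.append(s)
        out.append(mem.popleft())
    return out

assert delay_line([1, 0, 1, 1]) == [0, 1, 0, 1]
```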
Suppose that the Pauli operators for the two qubits in the example are as follows (with the convention that the \textquotedblleft Z\textquotedblright\ operators are on the left and the \textquotedblleft X\textquotedblright\ operators are on the right):% \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=0.920200in, natwidth=3.166900in, height=0.4869in, width=1.612in ]% {figures/delay.pdf}% \caption{A circuit that implements a simple delay operation on the first qubit in each frame.}% \label{fig:delay}% \end{center} \end{figure} \[ \left[ \left. \begin{array} [c]{cc}% 1 & 0\\ 0 & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ 1 & 0\\ 0 & 1 \end{array} \right] . \] The circuit in Figure~\ref{fig:delay}\ transforms the operators as follows:% \[ \left[ \left. \begin{array} [c]{cc}% D & 0\\ 0 & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ D & 0\\ 0 & 1 \end{array} \right] . \] \subsection{Building Finite-Depth CNOT\ Operations} I now show how to generalize the above examples to implement a general CNOT\ finite-depth operation. Suppose that we have two qubits on which we would like to perform a finite-depth operation \footnote{A finite-depth operation is one that takes any finite weight Pauli operator to another finite-weight Pauli operator.}. The Pauli operators for these qubits are as follows:% \[ \left[ \left. \begin{array} [c]{cc}% 1 & 0\\ 0 & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ 1 & 0\\ 0 & 1 \end{array} \right] . \] A general shift-invariant finite-depth CNOT\ operation translates the above set of operators to the following set:% \begin{equation} \left[ \left. 
\begin{array} [c]{cc}% 1 & 0\\ f\left( D^{-1}\right) & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ 1 & f\left( D\right) \\ 0 & 1 \end{array} \right] , \label{eq:finite-depth-poly-transform}% \end{equation} where $f\left( D\right) $ is some arbitrary binary polynomial:% \[ f\left( D\right) =\sum_{i=0}^{M}f_{i}D^{i}. \] \begin{theorem} \label{thm:finite-depth-x}The circuit in Figure~\ref{fig:CNOT-finite-depth}% \ implements the transformation in (\ref{eq:finite-depth-poly-transform}) and it requires $M$ frames of memory qubits. \end{theorem} \begin{proof} The proof of this theorem uses linear system theoretic techniques by considering symplectic binary vectors that correspond to the Pauli operators for the incoming qubits, the outgoing ones, and the memory qubits. We can formulate a system of recursive equations involving these binary variables similar to how we did for the previous examples. Let us label the bit representations of the $X$ Pauli operators for all the qubits as follows:% \[ x_{1}^{\prime},x_{2}^{\prime},m_{1,1}^{x},m_{2,1}^{x},m_{1,2}^{x},m_{2,2}% ^{x},\ldots,m_{1,M}^{x},m_{2,M}^{x},x_{1},x_{2}, \] where the primed variables are the outputs and the unprimed are the inputs. Let us label the bit representations of the $Z$ Pauli operators similarly:% \[ z_{1}^{\prime},z_{2}^{\prime},m_{1,1}^{z},m_{2,1}^{z},m_{1,2}^{z},m_{2,2}% ^{z},\ldots,m_{1,M}^{z},m_{2,M}^{z},z_{1},z_{2}. \]% \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=4.900000in, natwidth=6.539700in, height=2.284in, width=3.039in ]% {figures/CNOT-finite-depth.pdf}% \caption{The circuit in the above figure implements the transformation in (\ref{eq:finite-depth-poly-transform}). 
}% \label{fig:CNOT-finite-depth}% \end{center} \end{figure} The circuit in Figure~\ref{fig:CNOT-finite-depth} implements the following set of recursive \textquotedblleft X\textquotedblright\ equations:% \begin{align*} x_{1}^{\prime}\left[ n\right] & =m_{1,M}^{x}\left[ n-1\right] ,\\ x_{2}^{\prime}\left[ n\right] & =m_{2,M}^{x}\left[ n-1\right] ,\\ m_{1,1}^{x}\left[ n\right] & =x_{1}\left[ n\right] ,\\ m_{2,1}^{x}\left[ n\right] & =x_{2}\left[ n\right] +f_{0}x_{1}\left[ n\right] +\sum_{i=1}^{M}f_{i}m_{1,i}^{x}\left[ n-1\right] , \end{align*} and $\forall i=2\ldots M$,% \begin{align*} m_{1,i}^{x}\left[ n\right] & =m_{1,i-1}^{x}\left[ n-1\right] ,\\ m_{2,i}^{x}\left[ n\right] & =m_{2,i-1}^{x}\left[ n-1\right] . \end{align*} The set of \textquotedblleft Z\textquotedblright\ recursive equations is as follows:% \begin{align*} z_{1}^{\prime}\left[ n\right] & =m_{1,M}^{z}\left[ n-1\right] +f_{M}\ z_{2}\left[ n\right] ,\\ z_{2}^{\prime}\left[ n\right] & =m_{2,M}^{z}\left[ n-1\right] ,\\ m_{1,1}^{z}\left[ n\right] & =z_{1}\left[ n\right] +f_{0}\ z_{2}\left[ n\right] ,\\ m_{2,1}^{z}\left[ n\right] & =z_{2}\left[ n\right] , \end{align*} and $\forall i=2,\ldots,M$,% \begin{align*} m_{1,i}^{z}\left[ n\right] & =m_{1,i-1}^{z}\left[ n-1\right] +f_{i-1}\ z_{2}\left[ n\right] ,\\ m_{2,i}^{z}\left[ n\right] & =m_{2,i-1}^{z}\left[ n-1\right] . \end{align*} Simplifying\ the \textquotedblleft X\textquotedblright\ equations gives the following two equations:% \begin{align*} x_{1}^{\prime}\left[ n\right] & =x_{1}\left[ n-M\right] ,\\ x_{2}^{\prime}\left[ n\right] & =x_{2}\left[ n-M\right] +\sum_{i=0}% ^{M}f_{i}x_{1}\left[ n-M-i\right] . \end{align*} Simplifying\ the \textquotedblleft Z\textquotedblright\ equations gives the following two equations:% \begin{align*} z_{1}^{\prime}\left[ n\right] & =z_{1}\left[ n-M\right] +\sum_{i=0}% ^{M}f_{i}z_{2}\left[ n-M+i\right] ,\\ z_{2}^{\prime}\left[ n\right] & =z_{2}\left[ n-M\right] . 
\end{align*} Applying the $D$-transform to the above gives the following set of equations:% \begin{align*} x_{1}^{\prime}\left( D\right) & =D^{M}x_{1}\left( D\right) ,\\ x_{2}^{\prime}\left( D\right) & =D^{M}\left( x_{2}\left( D\right) +\sum_{i=0}^{M}f_{i}D^{i}x_{1}\left( D\right) \right) \\ & =D^{M}\left( x_{2}\left( D\right) +f\left( D\right) x_{1}\left( D\right) \right) ,\\ z_{1}^{\prime}\left( D\right) & =D^{M}\left( z_{1}\left( D\right) +\sum_{i=0}^{M}f_{i}D^{-i}z_{2}\left( D\right) \right) \\ & =D^{M}\left( z_{1}\left( D\right) +f\left( D^{-1}\right) z_{2}\left( D\right) \right) ,\\ z_{2}^{\prime}\left( D\right) & =D^{M}z_{2}\left( D\right) . \end{align*} Rewriting the above set of equations as a matrix transformation reveals that it is equivalent to the transformation in (\ref{eq:finite-depth-poly-transform}) (modulo the factor $D^{M}$):% \[ \left[ \left. \begin{array} [c]{cc}% 1 & 0\\ f\left( D^{-1}\right) & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ 1 & f\left( D\right) \\ 0 & 1 \end{array} \right] D^{M}. \] Postmultiplying the following vector by the above transformation% \[ \left[ \left. \begin{array} [c]{cc}% z_{1}\left( D\right) & z_{2}\left( D\right) \end{array} \right\vert \begin{array} [c]{cc}% x_{1}\left( D\right) & x_{2}\left( D\right) \end{array} \right] , \] gives the following output vector:% \[ \left[ \left. \begin{array} [c]{cc}% z_{1}^{\prime}\left( D\right) & z_{2}^{\prime}\left( D\right) \end{array} \right\vert \begin{array} [c]{cc}% x_{1}^{\prime}\left( D\right) & x_{2}^{\prime}\left( D\right) \end{array} \right] . \] The circuit in Figure~\ref{fig:CNOT-finite-depth} uses $M$ frames of memory qubits ($2M$ actual memory qubits). \end{proof} Suppose now that we reverse the direction of the CNOT\ gates in Figure~\ref{fig:CNOT-finite-depth}. 
The result is to perform a shift-invariant finite-depth CNOT\ operation \textquotedblleft conjugate\textquotedblright\ to that in (\ref{eq:finite-depth-poly-transform}):% \begin{equation} \left[ \left. \begin{array} [c]{cc}% 1 & f\left( D\right) \\ 0 & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ 1 & 0\\ f\left( D^{-1}\right) & 1 \end{array} \right] . \label{eq:finite-depth-poly-transform-conj}% \end{equation} It merely switches the roles of the $X$ and $Z$ variables.% \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=4.900000in, natwidth=6.539700in, height=2.4794in, width=3.2993in ]% {figures/CNOT-finite-depth-conj.pdf}% \caption{The circuit in the above figure implements the transformation in (\ref{eq:finite-depth-poly-transform-conj}). Comparing the circuit in the above figure to the one in Figure~\ref{fig:CNOT-finite-depth} reveals that it merely flips the direction of the CNOT gates.}% \label{fig:CNOT-finite-depth-conj}% \end{center} \end{figure} \begin{theorem} \label{thm:finite-depth-z}The circuit in Figure~\ref{fig:CNOT-finite-depth-conj}\ implements the transformation in (\ref{eq:finite-depth-poly-transform-conj}) and requires $M$ frames of memory qubits. \end{theorem} \begin{proof} The proof follows analogously to the above proof by noting that the recursive equations for the \textquotedblleft X\textquotedblright\ and \textquotedblleft Z\textquotedblright\ variables interchange after reversing the direction of the CNOT\ gates. \end{proof} \section{Memory Requirements for a CSS\ Quantum Convolutional Code} \label{sec:memory-comp}Is there a general way for determining how much memory a given code requires just by inspecting its stabilizer matrix? This section answers this question with a theorem that determines the amount of memory that a given CSS\ quantum convolutional code requires. 
Ref.~\cite{ieee2007grassl} defines the individual constraint length, the overall constraint length, and the memory of a quantum convolutional code in analogy to the classical definitions \cite{book1999conv}. However, there does not seem to be an operational interpretation of these quantities in terms of the actual memory that a given quantum convolutional code requires. We recall those definitions. The constraint length $\nu_{i}$ for row $i$ of the stabilizer matrix is as follows:% \[ \nu_{i}\equiv\max_{j}\left\{ \max\left\{ \deg\left( X_{ij}\left( D\right) \right) ,\deg\left( Z_{ij}\left( D\right) \right) \right\} \right\} . \] The overall constraint length $\nu$ is the sum of the individual constraint lengths:% \[ \nu\equiv\sum_{i}\nu_{i}. \] The memory $m$ is% \[ m\equiv\max_{i}\nu_{i}. \] We now consider a general technique for computing the memory requirements of a CSS\ quantum convolutional code, and the resulting formula does not correspond to the above definitions. We exploit the Grassl-R\"{o}tteler algorithm for encoding CSS\ codes \cite{ieee2007grassl}. This algorithm consists of a sequence of Hadamards, a cascade of CNOTs, another sequence of Hadamards, and another cascade of CNOTs. Here, I give a slightly simplified algorithm that does not require any Hadamard gates. Suppose that a quantum convolutional code has the following stabilizer matrix:% \begin{equation} \left[ \left. \begin{array} [c]{c}% H_{1}\left( D\right) \\ 0 \end{array} \right\vert \begin{array} [c]{c}% 0\\ H_{2}\left( D\right) \end{array} \right] .\label{eq:stab-matrix}% \end{equation} We can determine an encoding algorithm by looking at a series of steps to decode the above quantum convolutional code.
Assume that the matrices $H_{1}\left( D\right) $ and $H_{2}\left( D\right) $ correspond to noncatastrophic, delay-free check matrices so that they each have a Smith normal form \cite{book1999conv,ieee2007grassl}:% \[ H_{i}\left( D\right) =A_{i}\left( D\right) \left[ \begin{array} [c]{cc}% I & 0 \end{array} \right] B_{i}\left( D\right) , \] where $i=1,2$. If the matrices $A_{i}\left( D\right) $ for $i=1,2$ are not equal to the identity matrix, we can premultiply $H_{i}\left( D\right) $ with the inverse matrix $A_{i}^{-1}\left( D\right) $ for $i=1,2$. These row operations do not affect the error-correcting properties of the quantum convolutional code and give an equivalent code. Let us redefine the matrices $H_{i}\left( D\right) $ as follows:% \[ H_{i}\left( D\right) \equiv\left[ \begin{array} [c]{cc}% I & 0 \end{array} \right] B_{i}\left( D\right) , \] for $i=1,2$. We can then write each matrix $B_{i}\left( D\right) $ as follows:% \[ B_{i}\left( D\right) =\left[ \begin{array} [c]{c}% H_{i}\left( D\right) \\ \tilde{H}_{i}\left( D\right) \end{array} \right] . \] Consider again the stabilizer matrix in (\ref{eq:stab-matrix}). Use CNOT\ gates to perform the elementary column operations in the matrix $B_{2}^{-1}\left( D\right) $. These operations postmultiply entries in the \textquotedblleft X\textquotedblright\ matrix with the matrix $B_{2}% ^{-1}\left( D\right) $ and postmultiply entries in the \textquotedblleft Z\textquotedblright\ matrix with the matrix $B_{2}^{T}\left( D^{-1}\right) $. The stabilizer matrix in (\ref{eq:stab-matrix}) transforms to the following matrix:% \[ \left[ \left. \begin{array} [c]{cc}% H_{1}\left( D\right) H_{2}^{T}\left( D^{-1}\right) & H_{1}\left( D\right) \tilde{H}_{2}^{T}\left( D^{-1}\right) \\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ I & 0 \end{array} \right] . \] The matrix $H_{1}\left( D\right) H_{2}^{T}\left( D^{-1}\right) $ is null because the code is a CSS\ code and satisfies the dual-containing constraint. 
The stabilizer matrix for the code is then as follows:% \[ \left[ \left. \begin{array} [c]{cc}% 0 & H_{1}\left( D\right) \tilde{H}_{2}^{T}\left( D^{-1}\right) \\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ I & 0 \end{array} \right] . \] Compute the Smith form of the matrix $H_{1}\left( D\right) \tilde{H}_{2}% ^{T}\left( D^{-1}\right) $:% \[ H_{1}\left( D\right) \tilde{H}_{2}^{T}\left( D^{-1}\right) =A_{3}\left( D\right) \left[ \begin{array} [c]{cc}% I & 0 \end{array} \right] B_{3}\left( D\right) . \] Perform the row operations in $A_{3}^{-1}\left( D\right) $ on the first set of rows. Finally, perform the conjugate CNOT\ gates corresponding to the entries in $I\oplus B_{3}^{-1}\left( D\right) $---implying that we perform them only on the last few qubits. These operations postmultiply the \textquotedblleft X\textquotedblright\ matrix by the matrix $I\oplus B_{3}% ^{T}\left( D^{-1}\right) $ and postmultiply the \textquotedblleft Z\textquotedblright\ matrix by the matrix $I\oplus B_{3}^{-1}\left( D\right) $. These operations then produce the following stabilizer matrix:% \[ \left[ \left. \begin{array} [c]{ccc}% 0 & I & 0\\ 0 & 0 & 0 \end{array} \right\vert \begin{array} [c]{ccc}% 0 & 0 & 0\\ I & 0 & 0 \end{array} \right] . \] We are done at this point. These decoding operations give the following transformations for the \textquotedblleft X\textquotedblright\ matrix:% \[ B_{2}^{-1}\left( D\right) \left( I\oplus B_{3}^{T}\left( D^{-1}\right) \right) , \] and the following transformations for the \textquotedblleft Z\textquotedblright\ matrix:% \[ B_{2}^{T}\left( D^{-1}\right) \left( I\oplus B_{3}^{-1}\left( D\right) \right) . \] To encode the quantum convolutional code, we perform the above operations in the reverse order. 
For encoding, the following transformations postmultiply the \textquotedblleft X\textquotedblright\ matrix:% \[ E_{X}\left( D\right) \equiv\left( I\oplus\left( B_{3}^{T}\right) ^{-1}\left( D^{-1}\right) \right) B_{2}\left( D\right) , \] and the following transformations for the \textquotedblleft Z\textquotedblright\ matrix:% \[ E_{Z}\left( D\right) \equiv\left( I\oplus B_{3}\left( D\right) \right) \left( B_{2}^{T}\right) ^{-1}\left( D^{-1}\right) . \] The overall encoding matrix is \[ B\left( D\right) \equiv\left[ \left. \begin{array} [c]{c}% E_{Z}\left( D\right) \\ 0 \end{array} \right\vert \begin{array} [c]{c}% 0\\ E_{X}\left( D\right) \end{array} \right] . \] (See Ref.~\cite{ieee2007grassl} for a more detailed analysis of this algorithm). I now give a theorem that determines the amount of memory that a CSS\ quantum convolutional code requires. \begin{theorem} \label{thm:memory-CSS}The number $m$ of frames of memory qubits required for a CSS\ quantum convolutional code encoded with the Grassl-R\"{o}tteler encoding algorithm is upper bounded by the absolute degree of $B\left( D\right) $:% \[ m\leq\left\vert \deg\right\vert \left( B\left( D\right) \right) . \] \end{theorem} \begin{proof} I employ an inductive method of proof. The above encoding algorithm for a CSS\ quantum convolutional code demonstrates that we only have to consider how CNOT\ gates combine together in a quantum shift register construction. We can map each elementary CNOT\ operation to a quantum shift register circuit and connect its outputs to the inputs of the quantum shift register circuit for the next elementary CNOT\ operation. This technique is wasteful with respect to memory, but the proof of this theorem shows all the ways that we can reduce the amount of memory when combining quantum shift register circuits corresponding to CNOT\ operations. The result of the theorem then gives a simple formula for determining the amount of memory that a CSS\ quantum convolutional code requires. 
For the base step of the proof, consider that a CNOT\ gate from qubit $i$ to qubit $j$ in a frame delayed by $l$ requires at most $l$ frames of memory qubits. This result follows by extending the circuit of Figure~\ref{fig:two-delay-CNOT}. The polynomial matrix for this CNOT\ gate that acts on the $i^{\text{th}}$ and $j^{\text{th}}$ qubits is as follows:% \[ \left[ \left. \begin{array} [c]{cc}% 1 & 0\\ D^{-l} & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ 1 & D^{l}\\ 0 & 1 \end{array} \right] , \] and has an absolute degree of $\left\vert l\right\vert $. We abbreviate the above transformation as CNOT$\left( i,j\right) \left( D^{l}\right) $. So the theorem holds for this base case. Now consider two CNOT\ gates that have the same source qubits, but the source of one of them acts on a target qubit in a frame delayed by $l_{0}$ and the source of another acts on a target qubit in a frame delayed by $l_{1}$. Suppose, without loss of generality, that $\left\vert l_{1}\right\vert >\left\vert l_{0}\right\vert $ (these integers can be negative---we should use the term \textquotedblleft advanced by\textquotedblright\ instead of \textquotedblleft delayed by\textquotedblright\ for this case). This combination is a special case of Theorems~\ref{thm:finite-depth-x} and \ref{thm:finite-depth-z} and, therefore, we can implement this gate with $\left\vert l_{1}\right\vert $ frames of memory qubits. The polynomial matrix for the first CNOT\ is CNOT$\left( i,j\right) \left( D^{l_{0}}\right) $ and that for the second is CNOT$\left( i,j\right) \left( D^{l_{1}}\right) $. It is straightforward to check that the polynomial matrix for the combined operation is CNOT$\left( i,j\right) \left( D^{l_{0}}+D^{l_{1}}\right) $. The theorem thus holds for this case because the absolute degree of CNOT$\left( i,j\right) \left( D^{l_{0}}+D^{l_{1}}\right) $ is $\left\vert l_{1}\right\vert $. 
The theorem similarly holds if two CNOT gates have the same source qubits but target qubits with different indices within their frames. It also holds if two CNOT\ gates have the same target qubits but source qubits with different indices within their frames. The polynomial matrices for the first case are CNOT$\left( i,j\right) \left( D^{l_{0}}\right) $ and CNOT$\left( i,k\right) \left( D^{l_{1}}\right) $, where $\left\vert l_{1}\right\vert >\left\vert l_{0}\right\vert $ without loss of generality. These two polynomial matrices commute. One can construct a quantum shift register circuit with the techniques in this paper, and this circuit uses $\left\vert l_{1}\right\vert $ frames of memory qubits. It is straightforward to check that the absolute degree of the product of the matrices is $\left\vert l_{1}\right\vert $. A symmetric analysis applies to the other case, where the target qubits are the same but the source qubits are different. The main reason the theorem holds in these scenarios is that the polynomial matrix representations of these gates commute with one another. Any time the polynomial representations commute, the corresponding gates in the cascaded quantum shift registers commute through memory, so that the maximum number of frames of memory qubits is equal to the absolute degree of the entries in the product of the polynomial matrices. Suppose the source qubits and target qubits of the two CNOT\ gates do not intersect in any way. Then their polynomial matrix representations commute, and the amount of memory required is again equal to the absolute degree of the polynomial matrices corresponding to the CNOT\ gates. An example is CNOT$\left( i,j\right) \left( D^{l_{1}}\right) $ and CNOT$\left( k,l\right) \left( D^{l_{0}}\right) $, where $i$, $j$, $k$, and $l$ are pairwise distinct and $\left\vert l_{1}\right\vert >\left\vert l_{0}\right\vert $ without loss of generality.
One can use the techniques in this paper to construct a combined quantum shift register circuit that requires $\left\vert l_{1}\right\vert $ frames of memory qubits. Suppose the index of the source qubit of the first CNOT\ gate is the same as the index of the target qubit of the second CNOT\ gate, but the index of the target qubit of the first is different from the index of the source qubit of the second. An example of this scenario is CNOT$\left( i,j\right) \left( D^{l_{0}}\right) $ followed by CNOT$\left( k,i\right) \left( D^{l_{1}}\right) $ where $l_{0}$ and $l_{1}$ are any integers and $\left\vert l_{1}\right\vert >\left\vert l_{0}\right\vert $ without loss of generality. The product of the two polynomial matrices gives the following polynomial matrix:% \[ \left[ \left. \begin{array} [c]{ccc}% 1 & 0 & D^{-l_{1}}\\ D^{-l_{0}} & 1 & D^{-\left( l_{1}+l_{0}\right) }\\ 0 & 0 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{array} \right\vert \begin{array} [c]{ccc}% 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 1 & D^{l_{0}} & 0\\ 0 & 1 & 0\\ D^{l_{1}} & 0 & 1 \end{array} \right] , \] where the indices $i$, $j$, and $k$ correspond to the first, second, and third columns of the above \textquotedblleft Z\textquotedblright\ and \textquotedblleft X\textquotedblright\ submatrices. It is again straightforward, using the techniques in this paper, to construct a quantum shift register circuit whose memory equals the absolute degree of the above polynomial matrix. The circuit uses $\left\vert l_{1}\right\vert $ frames of memory qubits when $l_{0}$ and $l_{1}$ have opposite signs, and it uses $\left\vert l_{1}+l_{0}\right\vert $ frames of memory qubits when $l_{0}$ and $l_{1}$ have the same sign.
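The displayed composite matrix and the sign dependence of the memory count can be verified by direct multiplication. A sketch under an assumed set-of-exponents representation of GF(2) Laurent polynomials (the helpers `cnot` and `memory_bound` are illustrative names of mine; rows and columns are ordered $Z_{i},Z_{j},Z_{k},X_{i},X_{j},X_{k}$):

```python
# Sketch: build full (Z|X) polynomial matrices for CNOT(i,j)(D^l0) followed
# by CNOT(k,i)(D^l1) on three qubits, multiply them over GF(2), and compare
# against the composite matrix displayed in the text.  (Representation
# assumed: a polynomial is the set of exponents with coefficient 1.)

def pmul(f, g):
    out = set()
    for a in f:
        for b in g:
            out ^= {a + b}              # coefficients add modulo 2
    return out

def mmul(A, B):
    C = [[set() for _ in B[0]] for _ in A]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(B)):
                C[i][j] ^= pmul(A[i][k], B[k][j])
    return C

def cnot(src, tgt, l, n=3):
    """(Z|X) matrix of CNOT from qubit src to qubit tgt, delayed by l frames."""
    M = [[({0} if r == c else set()) for c in range(2 * n)]
         for r in range(2 * n)]
    M[tgt][src] = {-l}            # Z_tgt picks up Z_src D^-l
    M[n + src][n + tgt] = {l}     # X_src picks up X_tgt D^l
    return M

def memory_bound(M):
    """Absolute degree: largest |exponent| over all entries."""
    return max(abs(e) for row in M for entry in row for e in entry)

l0, l1 = 2, 3
C = mmul(cnot(0, 1, l0), cnot(2, 0, l1))   # CNOT(i,j)(D^l0) then CNOT(k,i)(D^l1)
expected = [
    [{0},    set(),  {-l1},          set(),  set(),  set()],
    [{-l0},  {0},    {-(l1 + l0)},   set(),  set(),  set()],
    [set(),  set(),  {0},            set(),  set(),  set()],
    [set(),  set(),  set(),          {0},    {l0},   set()],
    [set(),  set(),  set(),          set(),  {0},    set()],
    [set(),  set(),  set(),          {l1},   set(),  {0}],
]
assert C == expected
assert memory_bound(C) == abs(l0 + l1)                           # same sign
assert memory_bound(mmul(cnot(0, 1, -2), cnot(2, 0, 3))) == 3    # opposite sign
```

The two final assertions mirror the sign distinction made in the text: the bound is $\left\vert l_{1}+l_{0}\right\vert $ for like signs and $\left\vert l_{1}\right\vert $ for opposite signs.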
The last scenario to consider is when the index of the source qubit of the first CNOT\ gate is the same as the index of the target of the second CNOT\ gate, and the index of the target of the first CNOT\ gate is the same as the index of the source of the second CNOT\ gate. An example of this scenario is CNOT$\left( i,j\right) \left( D^{l_{0}}\right) $ followed by CNOT$\left( j,i\right) \left( D^{l_{1}}\right) $, where $l_{0}$ and $l_{1}$ are any integers and $\left\vert l_{1}\right\vert >\left\vert l_{0}\right\vert $ without loss of generality. The product of the two polynomial matrices gives the following polynomial matrix:% \[ \left[ \left. \begin{array} [c]{cc}% 1 & D^{-l_{1}}\\ D^{-l_{0}} & 1+D^{-\left( l_{0}+l_{1}\right) }\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ 1+D^{l_{0}+l_{1}} & D^{l_{0}}\\ D^{l_{1}} & 1 \end{array} \right] , \] where the indices $i$ and $j$ correspond to the first and second columns of the above \textquotedblleft Z\textquotedblright\ and \textquotedblleft X\textquotedblright\ submatrices. It is again straightforward, using the techniques in this paper, to construct a quantum shift register circuit that uses a number of frames of memory qubits equal to the absolute degree of the polynomial matrix. The inductive step follows by considering that an arbitrary encoding with CNOTs is a sequence of elementary column operations of the form:% \[ B\left( D\right) =B_{\left( 1\right) }\left( D\right) B_{\left( 2\right) }\left( D\right) \cdots B_{\left( p\right) }\left( D\right) , \] where $p$ is the total number of elementary operations and the above decomposition is a \textit{particular} decomposition of the matrix $B\left( D\right) $ into elementary operations. Suppose the above encoding matrix requires $m$ frames of memory qubits and $m$ is also the absolute degree of $B\left( D\right) $. Suppose we cascade another elementary encoding operation with matrix representation $B_{\left( p+1\right) }\left( D\right) $.
If $B_{\left( p+1\right) }\left( D\right) $ commutes with $B_{\left( p\right) }\left( D\right) $, then it increases the absolute degree of the resulting matrix $B\left( D\right) $ and the memory required for the quantum shift register circuit only if it has a higher absolute degree than $B_{\left( p\right) }\left( D\right) $. This case thus reduces to the case where it does not commute with $B_{\left( p\right) }\left( D\right) $. So, suppose $B_{\left( p+1\right) }\left( D\right) $ does not commute with $B_{\left( p\right) }\left( D\right) $. There are two ways in which this non-commutativity can happen, and I detailed both above. The analysis above for both cases shows that the absolute degree and the number of frames of memory qubits increase by the same amount, depending on whether $l_{0}$ and $l_{1}$ are positive or negative. \end{proof} \begin{corollary} A Type\ I\ CSS\ entanglement-assisted quantum convolutional code \cite{arx2007wildeEAQCC}\ encoded with the Grassl-R\"{o}tteler encoding algorithm requires $m$ frames of memory qubits, where% \[ m\leq\left\vert \deg\right\vert \left( B\left( D\right) \right) . \] The matrix $B\left( D\right) $ is the polynomial matrix representation of the encoding operations of the entanglement-assisted code. \end{corollary} \begin{proof} A Type I CSS\ entanglement-assisted convolutional code is one that has a finite-depth encoding and decoding circuit. It is possible to show that the encoding circuit consists entirely of CNOT\ gates. The proof proceeds analogously to the proof of the above theorem. \end{proof} \section{Other Operations in the Finite-Depth Shift-Invariant Clifford Group} \label{sec:finite-depth-Clifford}CNOT gates are not the only gates that are useful for encoding a quantum convolutional code. The Hadamard gate, the Phase gate, and the controlled-Phase gate are also useful and are in the finite-depth shift-invariant Clifford group \cite{isit2006grassl}.
There is no need to formulate a primitive quantum shift register circuit for the Hadamard gate or the Phase gate---the implementation is trivial and does not require memory qubits. The controlled-Phase gate, in contrast, benefits from a quantum shift register implementation because it acts on two qubits. There are two types of quantum shift register circuit that we can develop with a controlled-Phase gate. The first type is similar to the finite-depth CNOT quantum shift register circuit because it involves two qubits per frame. The second type is different because it involves only one qubit per frame. \subsection{Finite-Depth Controlled-Phase Gate with Two Qubits per Frame} Suppose that we have two qubits on which we would like to perform a finite-depth controlled-Phase gate operation. The Pauli operators for these qubits are as follows:% \[ \left[ \left. \begin{array} [c]{cc}% 1 & 0\\ 0 & 1\\ 0 & 0\\ 0 & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ 1 & 0\\ 0 & 1 \end{array} \right] . \] A general shift-invariant finite-depth controlled-Phase gate\ operation translates the above set of operators to the following set:% \begin{equation} \left[ \left. \begin{array} [c]{cc}% 1 & 0\\ 0 & 1\\ 0 & f\left( D\right) \\ f\left( D^{-1}\right) & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ 1 & 0\\ 0 & 1 \end{array} \right] , \label{eq:finite-depth-controlled-phase}% \end{equation} where $f\left( D\right) $ is some arbitrary binary polynomial:% \[ f\left( D\right) =\sum_{i=0}^{M}f_{i}D^{i}. \] \begin{theorem} \label{thm:finite-depth-controlled-Phase}The circuit in Figure~\ref{fig:cphase-finite-depth}\ implements the transformation in (\ref{eq:finite-depth-controlled-phase}), and it requires no more than $M$ frames of memory qubits. \end{theorem} \begin{proof} The proof of this theorem is similar to that of the previous theorems. We can formulate a system of recursive equations involving binary variables.
Let us label the bit representations of the $X$ Pauli operators for all the qubits as follows:% \[ x_{1}^{\prime},x_{2}^{\prime},m_{1,1}^{x},m_{2,1}^{x},m_{1,2}^{x},m_{2,2}% ^{x},\ldots,m_{1,M}^{x},m_{2,M}^{x},x_{1},x_{2}, \] where the primed variables are the outputs and the unprimed are the inputs. Let us label the bit representations of the $Z$ Pauli operators similarly:% \[ z_{1}^{\prime},z_{2}^{\prime},m_{1,1}^{z},m_{2,1}^{z},m_{1,2}^{z},m_{2,2}% ^{z},\ldots,m_{1,M}^{z},m_{2,M}^{z},z_{1},z_{2}. \]% \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=4.900000in, natwidth=6.539700in, height=2.284in, width=3.039in ]% {figures/CPhase-finite-depth.pdf}% \caption{The circuit in the above figure implements the transformation in (\ref{eq:finite-depth-controlled-phase}).}% \label{fig:cphase-finite-depth}% \end{center} \end{figure} The circuit in Figure~\ref{fig:cphase-finite-depth} implements the following set of recursive \textquotedblleft X\textquotedblright\ equations:% \begin{align*} x_{1}^{\prime}\left[ n\right] & =m_{1,M}^{x}\left[ n-1\right] ,\\ x_{2}^{\prime}\left[ n\right] & =m_{2,M}^{x}\left[ n-1\right] ,\\ m_{1,1}^{x}\left[ n\right] & =x_{1}\left[ n\right] ,\\ m_{2,1}^{x}\left[ n\right] & =x_{2}\left[ n\right] , \end{align*} and $\forall i=2\ldots M$,% \begin{align*} m_{1,i}^{x}\left[ n\right] & =m_{1,i-1}^{x}\left[ n-1\right] ,\\ m_{2,i}^{x}\left[ n\right] & =m_{2,i-1}^{x}\left[ n-1\right] . 
\end{align*} The set of \textquotedblleft Z\textquotedblright\ recursive equations is as follows:% \begin{align*} z_{1}^{\prime}\left[ n\right] & =m_{1,M}^{z}\left[ n-1\right] +f_{M}\ x_{2}\left[ n\right] ,\\ z_{2}^{\prime}\left[ n\right] & =m_{2,M}^{z}\left[ n-1\right] ,\\ m_{1,1}^{z}\left[ n\right] & =z_{1}\left[ n\right] +f_{0}\ x_{2}\left[ n\right] ,\\ m_{2,1}^{z}\left[ n\right] & =z_{2}\left[ n\right] +f_{0}x_{1}\left[ n\right] +\sum_{i=1}^{M}f_{i}m_{1,i}^{x}\left[ n-1\right] , \end{align*} and $\forall i=2,\ldots,M$,% \begin{align*} m_{1,i}^{z}\left[ n\right] & =m_{1,i-1}^{z}\left[ n-1\right] +f_{i-1}\ x_{2}\left[ n\right] ,\\ m_{2,i}^{z}\left[ n\right] & =m_{2,i-1}^{z}\left[ n-1\right] . \end{align*} Simplifying\ the \textquotedblleft X\textquotedblright\ equations gives the following two equations:% \begin{align*} x_{1}^{\prime}\left[ n\right] & =x_{1}\left[ n-M\right] ,\\ x_{2}^{\prime}\left[ n\right] & =x_{2}\left[ n-M\right] . \end{align*} Simplifying\ the \textquotedblleft Z\textquotedblright\ equations gives the following two equations:% \begin{align*} z_{1}^{\prime}\left[ n\right] & =z_{1}\left[ n-M\right] +\sum_{i=0}% ^{M}f_{i}x_{2}\left[ n-M+i\right] ,\\ z_{2}^{\prime}\left[ n\right] & =z_{2}\left[ n-M\right] +\sum_{i=0}% ^{M}f_{i}x_{1}\left[ n-M-i\right] . 
\end{align*} Applying the $D$-transform to the above gives the following set of equations:% \begin{align*} x_{1}^{\prime}\left( D\right) & =D^{M}x_{1}\left( D\right) ,\\ x_{2}^{\prime}\left( D\right) & =D^{M}x_{2}\left( D\right) ,\\ z_{1}^{\prime}\left( D\right) & =D^{M}\left( z_{1}\left( D\right) +\sum_{i=0}^{M}f_{i}D^{-i}x_{2}\left( D\right) \right) \\ & =D^{M}\left( z_{1}\left( D\right) +f\left( D^{-1}\right) x_{2}\left( D\right) \right) ,\\ z_{2}^{\prime}\left( D\right) & =D^{M}z_{2}\left( D\right) +\sum_{i=0}^{M}f_{i}D^{M+i}x_{1}\left( D\right) \\ & =D^{M}\left( z_{2}\left( D\right) +f\left( D\right) x_{1}\left( D\right) \right) . \end{align*} Rewriting the above set of equations as a matrix transformation reveals that it is equivalent to the transformation in (\ref{eq:finite-depth-controlled-phase}), up to an overall delay of $M$ frames:% \[ \left[ \left. \begin{array} [c]{cc}% 1 & 0\\ 0 & 1\\ 0 & f\left( D\right) \\ f\left( D^{-1}\right) & 0 \end{array} \right\vert \begin{array} [c]{cc}% 0 & 0\\ 0 & 0\\ 1 & 0\\ 0 & 1 \end{array} \right] D^{M}. \] Postmultiplying the following vector by the above transformation% \[ \left[ \left. \begin{array} [c]{cc}% z_{1}\left( D\right) & z_{2}\left( D\right) \end{array} \right\vert \begin{array} [c]{cc}% x_{1}\left( D\right) & x_{2}\left( D\right) \end{array} \right] , \] gives the following output vector:% \[ \left[ \left. \begin{array} [c]{cc}% z_{1}^{\prime}\left( D\right) & z_{2}^{\prime}\left( D\right) \end{array} \right\vert \begin{array} [c]{cc}% x_{1}^{\prime}\left( D\right) & x_{2}^{\prime}\left( D\right) \end{array} \right] . \] \end{proof} \subsection{Finite-Depth Controlled-Phase Gate with One Qubit per Frame} Suppose that we have one qubit on which we would like to perform a finite-depth controlled-Phase gate operation. The Pauli operators for this qubit are as follows:% \[ \left[ \left. \begin{array} [c]{c}% 1\\ 0 \end{array} \right\vert \begin{array} [c]{c}% 0\\ 1 \end{array} \right] .
\] A general shift-invariant finite-depth controlled-Phase gate\ operation translates the above set of operators to the following set:% \begin{equation} \left[ \left. \begin{array} [c]{c}% 1\\ f\left( D\right) +f\left( D^{-1}\right) \end{array} \right\vert \begin{array} [c]{c}% 0\\ 1 \end{array} \right] , \label{eq:finite-depth-controlled-phase-single}% \end{equation} where $f\left( D\right) $ is some arbitrary binary polynomial:% \[ f\left( D\right) =\sum_{i=1}^{M}f_{i}D^{i}. \] \begin{theorem} \label{thm:finite-depth-controlled-Phase-single}The circuit in Figure~\ref{fig:cphase-finite-depth-single}\ implements the transformation in (\ref{eq:finite-depth-controlled-phase-single}) and it requires $M$ frames of memory qubits. \end{theorem} \begin{proof} The proof of this theorem is similar to that of the previous theorems. We can formulate a system of recursive equations involving binary variables. Let us label the bit representations of the $X$ Pauli operators for all the qubits as follows:% \[ x^{\prime},m_{1}^{x},m_{2}^{x},\ldots,m_{M}^{x},x, \] where the primed variables are the outputs and the unprimed are the inputs. Let us label the bit representations of the $Z$ Pauli operators similarly:% \[ z^{\prime},m_{1}^{z},m_{2}^{z},\ldots,m_{M}^{z},z. 
\]% \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=3.433300in, natwidth=6.352900in, height=1.6544in, width=3.039in ]% {figures/CPhase-finite-depth-single.pdf}% \caption{The circuit in the above figure implements the transformation in (\ref{eq:finite-depth-controlled-phase-single}).}% \label{fig:cphase-finite-depth-single}% \end{center} \end{figure} The circuit in Figure~\ref{fig:cphase-finite-depth-single} implements the following set of recursive \textquotedblleft X\textquotedblright\ equations:% \begin{align*} x^{\prime}\left[ n\right] & =m_{M}^{x}\left[ n-1\right] ,\\ m_{1}^{x}\left[ n\right] & =x\left[ n\right] , \end{align*} and $\forall i=2\ldots M$,% \[ m_{i}^{x}\left[ n\right] =m_{i-1}^{x}\left[ n-1\right] . \] The set of \textquotedblleft Z\textquotedblright\ recursive equations is as follows:% \begin{align*} z^{\prime}\left[ n\right] & =m_{M}^{z}\left[ n-1\right] +f_{M}\ x\left[ n\right] ,\\ m_{1}^{z}\left[ n\right] & =z\left[ n\right] +\sum_{i=1}^{M}f_{i}% m_{i}^{x}\left[ n-1\right] , \end{align*} and $\forall i=2,\ldots,M$,% \[ m_{i}^{z}\left[ n\right] =m_{i-1}^{z}\left[ n-1\right] +f_{i-1}\ x\left[ n\right] . \] Simplifying\ the \textquotedblleft X\textquotedblright\ equations gives the following equation:% \[ x^{\prime}\left[ n\right] =x\left[ n-M\right] . \] Simplifying\ the \textquotedblleft Z\textquotedblright\ equations gives the following equation:% \begin{multline*} z^{\prime}\left[ n\right] =z\left[ n-M\right] +\sum_{i=1}^{M}f_{i}x\left[ n-M+i\right] \\ +\sum_{i=1}^{M}f_{i}x\left[ n-M-i\right] .
\end{multline*} Applying the $D$-transform to the above gives the following set of equations:% \begin{align*} x^{\prime}\left( D\right) & =D^{M}x\left( D\right) ,\\ z^{\prime}\left( D\right) & =D^{M}\left( z\left( D\right) +\sum _{i=1}^{M}f_{i}D^{-i}x\left( D\right) +\sum_{i=1}^{M}f_{i}D^{i}x\left( D\right) \right) \\ & =D^{M}\left( z\left( D\right) +\left( f\left( D^{-1}\right) +f\left( D\right) \right) x\left( D\right) \right) . \end{align*} Rewriting the above set of equations as a matrix transformation reveals that it is equivalent to the transformation in (\ref{eq:finite-depth-controlled-phase-single}), up to an overall delay of $M$ frames:% \[ \left[ \left. \begin{array} [c]{c}% 1\\ f\left( D^{-1}\right) +f\left( D\right) \end{array} \right\vert \begin{array} [c]{c}% 0\\ 1 \end{array} \right] D^{M}. \] Postmultiplying the following vector by the above transformation% \[ \left[ \left. \begin{array} [c]{c}% z\left( D\right) \end{array} \right\vert \begin{array} [c]{c}% x\left( D\right) \end{array} \right] , \] gives the following output vector:% \[ \left[ \left. \begin{array} [c]{c}% z^{\prime}\left( D\right) \end{array} \right\vert \begin{array} [c]{c}% x^{\prime}\left( D\right) \end{array} \right] . \] \end{proof} \section{Quantum Shift Register Encoding Circuit for the Forney-Grassl-Guha Code} \label{sec:forney-guha-grassl}I now present another example of a quantum shift register encoding circuit for a quantum convolutional code. The code that I choose is the Forney-Grassl-Guha code from Section IIIB of Ref.~\cite{ieee2007forney}. The code has three qubits per frame, and its stabilizer matrix is% \[ \left[ \left. \begin{array} [c]{ccc}% 1+D & 1 & 1+D\\ 0 & D & D \end{array} \right\vert \begin{array} [c]{ccc}% 0 & D & D\\ 1+D & 1+D & 1 \end{array} \right] . \] We can again employ the Grassl-R\"{o}tteler encoding algorithm \cite{ieee2006grassl}\ to determine a sequence of encoding operations for this code.
This sequence of encoding operations is% \begin{align*} & H\left( 1\right) H\left( 2\right) P\left( 1\right) \text{C-PHASE}% \left( 1,3\right) \left( D^{-1}+1+D\right) \\ & \text{C-PHASE}\left( 1,2\right) \left( D^{-1}\right) \text{C-PHASE}% \left( 2,3\right) \left( 1+D+D^{2}\right) \\ & \text{CNOT}\left( 2,3\right) \left( 1\right) \text{CNOT}\left( 3,2\right) \left( D\right) \text{CNOT}\left( 2,3\right) \left( D\right) \\ & \text{CNOT}\left( 1,2\right) \left( 1\right) \text{CNOT}\left( 1,3\right) \left( 1+D\right) \text{CNOT}\left( 2,1\right) \left( D\right) , \end{align*} where the order of operations goes from left to right and top to bottom, $H\left( i\right) $ is a Hadamard gate on qubit $i$, and $P\left( i\right) $ is a Phase gate on qubit $i$. I use the technique in this paper to cascade several quantum shift register circuits and commute gates through memory. Figure~\ref{fig:FGG-circuit}\ depicts the quantum shift register circuit that encodes the Forney-Grassl-Guha code.% \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=9.072800in, natwidth=9.460200in, height=3.2007in, width=3.2396in ]% {figures/FGG-circuit.pdf}% \caption{The above circuit encodes the Forney-Grassl-Guha code from Ref.~\cite{ieee2007forney}.}% \label{fig:FGG-circuit}% \end{center} \end{figure} \section{General Infinite-Depth Operations} \label{sec:infinite-depth}We now turn to infinite-depth operations. Briefly, infinite-depth operations can take a finite-weight Pauli operator to an infinite-weight Pauli operator (similar to the way that an infinite-impulse response filter can have an infinite-duration response to a finite-duration input). Section~VI of Ref.~\cite{arx2007wildeEAQCC}\ discusses infinite-depth Clifford operations. Here, I give a simplification of that discussion by showing how to implement an arbitrary infinite-depth operation using quantum shift register circuits. 
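The infinite-impulse-response analogy can be made concrete over GF(2): expanding $1/f\left( D\right) $ as a power series by synthetic long division shows a weight-one input producing an output of unbounded weight. A hedged sketch (the coefficient-list representation and the helper name `series_inverse` are mine, not notation from the paper):

```python
# Sketch of the IIR analogy (assumptions: a binary polynomial f(D) is a list
# of GF(2) coefficients, index = power of D; `series_inverse` is an
# illustrative helper, not a routine defined in the paper).

def series_inverse(f, n):
    """First n power-series coefficients of 1/f(D) over GF(2); needs f[0] == 1."""
    assert f[0] == 1
    r = [1] + [0] * (n + len(f))    # remainder polynomial, r[k] = coeff of D^k
    out = []
    for k in range(n):
        c = r[k]
        out.append(c)
        if c:                       # cancel the D^k term: r <- r + D^k f (mod 2)
            for i, fi in enumerate(f):
                r[k + i] ^= fi
    return out

# 1/(1 + D) = 1 + D + D^2 + ... : a finite-weight input maps to an
# infinite-weight (here, periodic) output, as an infinite-depth operation does.
assert series_inverse([1, 1], 8) == [1] * 8
# 1/(1 + D + D^2) = 1 + D + D^3 + D^4 + D^6 + ... (period-3 pattern).
assert series_inverse([1, 1, 1], 6) == [1, 1, 0, 1, 1, 0]
```

A finite-depth (polynomial) transformation, by contrast, keeps every finite-weight operator finite.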
Let $f\left( D\right) $ be some binary polynomial:% \begin{equation} f\left( D\right) =\sum_{i=0}^{M}f_{i}D^{i}. \label{eq:inf-depth-poly}% \end{equation} Suppose that we have one qubit on which we would like to perform an infinite-depth controlled-NOT\ operation. The Pauli operators for this qubit are as follows:% \begin{equation} \left[ \left. \begin{array} [c]{c}% 1\\ 0 \end{array} \right\vert \begin{array} [c]{c}% 0\\ 1 \end{array} \right] . \label{eq:logical-operators}% \end{equation} A \textquotedblleft Z\textquotedblright\ infinite-depth operation transforms the logical operators to be as follows:% \begin{equation} \left[ \left. \begin{array} [c]{c}% 1/f\left( D^{-1}\right) \\ 0 \end{array} \right\vert \begin{array} [c]{c}% 0\\ f\left( D\right) \end{array} \right] , \label{eq:z-inf-depth}% \end{equation} where $1/f\left( D^{-1}\right) =D^{M}/D^{M}f\left( D^{-1}\right) $. \begin{theorem} \label{thm:z-inf-depth}The circuit in Figure~\ref{fig:z-inf-depth}\ implements the transformation in (\ref{eq:z-inf-depth}) and requires $M$ memory qubits.% \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=3.433300in, natwidth=6.352900in, height=1.7452in, width=3.205in ]% {figures/z-inf-depth.pdf}% \caption{The quantum shift register circuit in the above figure implements the transformation in (\ref{eq:z-inf-depth}).}% \label{fig:z-inf-depth}% \end{center} \end{figure} \end{theorem} \begin{proof} I use a similar linear system theoretic technique that exploits recursive equations and the $D$-transform and assume without loss of generality that the coefficient $f_{M}=1$. We use similar notation as before for the \textquotedblleft X\textquotedblright\ and \textquotedblleft Z\textquotedblright\ variables. 
We get the following set of \textquotedblleft X\textquotedblright\ recursive equations:% \begin{align*} x^{\prime}\left[ n\right] & =m_{M}^{x}\left[ n-1\right] +f_{0}x\left[ n\right] ,\\ m_{1}^{x}\left[ n\right] & =x\left[ n\right] , \end{align*} and $\forall i=2,\ldots,M$,% \[ m_{i}^{x}\left[ n\right] =m_{i-1}^{x}\left[ n-1\right] +f_{M-i+1}x\left[ n\right] . \] The \textquotedblleft Z\textquotedblright\ recursive equations are as follows:% \begin{align*} z^{\prime}\left[ n\right] & =m_{M}^{z}\left[ n-1\right] ,\\ m_{1}^{z}\left[ n\right] & =z\left[ n\right] +\sum_{i=0}^{M-1}% f_{i}m_{M-i}^{z}\left[ n-1\right] , \end{align*} and $\forall i=2\ldots M$,% \[ m_{i}^{z}\left[ n\right] =m_{i-1}^{z}\left[ n-1\right] . \] The first set of \textquotedblleft X\textquotedblright\ recursive equations reduces to the following equation by substitution:% \begin{equation} x^{\prime}\left[ n\right] =\sum_{i=0}^{M}f_{i}x\left[ n-i\right] . \label{eq:inf-depth-rec-simple}% \end{equation} We can reduce the \textquotedblleft Z\textquotedblright\ equations by first noticing that we can rewrite the equation for $m_{1}^{z}$ as follows:% \[ m_{1}^{z}\left[ n\right] +f_{M-1}m_{1}^{z}\left[ n-1\right] =z\left[ n\right] +\sum_{i=0}^{M-2}f_{i}m_{M-i}^{z}\left[ n-1\right] . \] We can use the other memory equations to iterate this procedure, and we end up with% \[ \sum_{i=0}^{M}f_{M-i}m_{1}^{z}\left[ n-i\right] =z\left[ n\right] . \] Noting that% \[ z^{\prime}\left[ n\right] =m_{1}^{z}\left[ n-M\right] , \] and using shift-invariance, the above equation becomes% \begin{equation} \sum_{i=0}^{M}f_{i}z^{\prime}\left[ n+i\right] =z\left[ n\right] . \label{eq:inf-depth-rec-simple-z}% \end{equation} Applying the $D$-transform to (\ref{eq:inf-depth-rec-simple}) gives the following equation:% \[ x^{\prime}\left( D\right) =\sum_{i=0}^{M}f_{i}D^{i}x\left( D\right) =f\left( D\right) x\left( D\right) .
\] Applying the $D$-transform to (\ref{eq:inf-depth-rec-simple-z}) gives% \begin{align*} \sum_{i=0}^{M}f_{i}D^{-i}z^{\prime}\left( D\right) & =z\left( D\right) \\ \Rightarrow f\left( D^{-1}\right) z^{\prime}\left( D\right) & =z\left( D\right) \\ \Rightarrow z^{\prime}\left( D\right) & =\frac{1}{f\left( D^{-1}\right) }z\left( D\right) . \end{align*} Rewriting the above set of transformations as a matrix shows that it is equivalent to the desired transformation in (\ref{eq:z-inf-depth}):% \[ \left[ \begin{array} [c]{cc}% z\left( D\right) & x\left( D\right) \end{array} \right] \left[ \begin{array} [c]{cc}% 1/f\left( D^{-1}\right) & 0\\ 0 & f\left( D\right) \end{array} \right] =\left[ \begin{array} [c]{cc}% z^{\prime}\left( D\right) & x^{\prime}\left( D\right) \end{array} \right] . \] \end{proof} Another infinite-depth operation is an \textquotedblleft X\textquotedblright% \ infinite-depth operation. It transforms the bit representations in (\ref{eq:logical-operators}) to the following bit representations:% \begin{equation} \left[ \left. \begin{array} [c]{c}% 0\\ f\left( D\right) \end{array} \right\vert \begin{array} [c]{c}% 1/f\left( D^{-1}\right) \\ 0 \end{array} \right] , \label{eq:x-inf-depth}% \end{equation} where $f\left( D\right) $ is defined in (\ref{eq:inf-depth-poly}) and $1/f\left( D^{-1}\right) =D^{M}/D^{M}f\left( D^{-1}\right) $.% \begin{figure} [ptb] \begin{center} \includegraphics[ natheight=3.433300in, natwidth=6.352900in, height=1.7452in, width=3.205in ]% {figures/x-inf-depth.pdf}% \caption{The quantum shift register circuit in the above figure implements the transformation in (\ref{eq:x-inf-depth}).}% \label{fig:x-inf-depth}% \end{center} \end{figure} \begin{theorem} The circuit in Figure~\ref{fig:x-inf-depth} implements the transformation in (\ref{eq:x-inf-depth}) and requires $M$ memory qubits. 
\end{theorem} \begin{proof} The proof proceeds analogously to the proof of Theorem~\ref{thm:z-inf-depth}%\ with the \textquotedblleft X\textquotedblright\ and \textquotedblleft Z\textquotedblright\ variables switching roles because the directionality of the CNOT gates in the circuit in Figure~\ref{fig:x-inf-depth}\ reverses. \end{proof} \section{Memory Requirements for Type II CSS\ Entanglement-Assisted Quantum Convolutional Codes} \label{sec:memory-comp-EA}Our last contribution is a formula for the amount of memory that a Type II CSS\ entanglement-assisted quantum convolutional code requires. A Type II CSS\ entanglement-assisted quantum convolutional code is one that uses infinite-depth operations, outlined in the previous section, in the encoding circuit \cite{arx2007wildeEAQCC}. A particular diagonal matrix $\Gamma_{2}\left( D\right) $ is the essential matrix that determines the infinite-depth operations for a Type II entanglement-assisted code (See Section~VII of Ref.~\cite{arx2007wildeEAQCC}). Each entry on the diagonal corresponds to an infinite-depth operation, similar to the polynomials in (\ref{eq:z-inf-depth}) and (\ref{eq:x-inf-depth}). Therefore, the amount of memory that these infinite-depth operations require is% \[ m_{1}\equiv\max_{i}\left\{ \left\vert \deg\right\vert \left( \left[ \Gamma_{2}\left( D\right) \right] _{ii}\right) \right\} . \] Suppose a qubit does not have the maximum absolute degree. Alice should delay this qubit by the difference between that qubit's absolute degree and the maximum absolute degree so that each qubit lines up properly when output from the infinite-depth encoding operations. Note that it is not possible to commute through memory any gates occurring after an infinite-depth operation. The structure of the encoding circuit for the Type II entanglement-assisted codes consists of three layers.
The first layer is a set of finite-depth CNOT\ operations characterized by a matrix $L\left( D\right) $ (See Section~VII of Ref.~\cite{arx2007wildeEAQCC}), the second layer consists of the infinite-depth operations, and the third layer is another set of finite-depth CNOT\ operations that we name $B\left( D\right) $. It is possible to show that we can implement these encoding circuits with CNOT\ gates only, as we did in Section~\ref{sec:memory-comp}\ for CSS\ quantum convolutional codes. Thus, it is straightforward to determine the amount of memory that a Type II CSS\ entanglement-assisted quantum convolutional code requires, using Theorem~\ref{thm:memory-CSS}\ and the above upper bound $m_{1}$. \begin{corollary} The amount of memory that a Type II CSS\ entanglement-assisted quantum convolutional code requires is upper bounded by the following quantity:% \[ m_{1}+\left\vert \deg\right\vert \left( L\left( D\right) \right) +\left\vert \deg\right\vert \left( B\left( D\right) \right) . \] \end{corollary} \section{Conclusion} I have developed the theory of quantum shift register circuits. These circuits can encode and decode quantum convolutional codes. The two important contributions of this paper are the technique for cascading quantum shift register circuits and the formulas for the memory required by a CSS\ quantum convolutional code. Quantum shift register circuits should be useful for experimentalists wishing to demonstrate the operation of a quantum convolutional code. Some interesting open questions remain. I have not yet determined the amount of memory that a general (non-CSS) code requires. The proof technique of Theorem~\ref{thm:memory-CSS}\ does not extend to combinations of controlled-Phase and controlled-NOT gates because they combine differently from the way that cascades of controlled-NOT\ gates combine.
It might also be interesting to study the entanglement structure of states that are input to a quantum shift register circuit, in a way similar to the observation in Ref.~\cite{arx2008wildeOEA}\ concerning the relation of an entanglement measure to an entanglement-assisted code. \begin{acknowledgments} I\ thank Martin R\"{o}tteler for the initial suggestion to pursue a quantum shift register implementation of encoding circuits for quantum convolutional codes, for hosting me as a visitor to NEC\ Laboratories America for the month of September 2008, and for useful comments on the presentation of this manuscript. I thank Martin R\"{o}tteler, Hari Krovi, and Markus Grassl for useful discussions on this topic. I thank Kevin Obenland and Andrew Cross for discussions on this topic and thank Kevin Obenland for the suggestion to make the quantum shift register diagrams \textquotedblleft read like a book,\textquotedblright\ from left to right and top to bottom. I acknowledge support from the internal research and development grant SAIC-1669 of Science Applications International Corporation. \end{acknowledgments} \bibliographystyle{unsrt}
\section{Introduction} Matter, described by quantum fields in a continuous space, can spontaneously break space translation symmetry by self-organizing into a periodic structure. This phenomenon of crystallization is one of the cornerstone concepts in physics. Crystals realize various states of condensed matter, such as metals, insulators, and superconductors. In recent decades, multiple discussions have focused on generalizations of the crystallization phenomenon. One such concept is moir\'{e} crystals in twisted multi-layer materials \cite{cao2018unconventional}, leading to crystals with extremely large unit cells. A separate class of order is quasicrystals \cite{shechtman1984metallic,levine1984quasicrystals}. Another generalization that has attracted interest for a long time is the class of order where a classical field demonstrates crystallization coexisting with the spontaneous breaking of additional symmetries. The best-known crystalline state of a classical field is the vortex lattice in a superconductor. This is also the case in a class of supersolids, i.e., systems that break translation symmetry and have superfluid order (for a review see \cite{Svistunov2015}). A class of systems, the so-called Fulde-Ferrell-Larkin-Ovchinnikov superconductors \cite{LarkinOvchinnikov1964}, exhibits crystallization in the form of a superconducting Cooper-pair density wave. It has been argued that dense quark matter in the cores of neutron stars is such a crystal \cite{alford2001crystalline}. The cluster crystal is a different type of crystallization, in which the unit cell is a cluster of particles \cite{malescio2003stripe}. This state is believed to form in the outer regions of neutron stars, and it has direct counterparts in soft matter \cite{caplan2017colloquium} and the quantum Hall effect \cite{fogler1996ground,shapere2012classical}.
Another topic of recent interest is time crystals, where the crystallization occurs in time, or jointly in time and space \cite{wilczek2012quantum,shapere2012classical}. In this work, we propose a new generalization of the concept of a crystal: the ground state fractal crystal. Fractals are ubiquitous in nature. In condensed matter, they usually appear as a result of a dynamic or kinetic process \cite{liu1986fractals,Nakayama2009}. Such random fractals were found, for example, in liquid crystal colloids formed by self-assembly \cite{solodkov2019self} and in polydisperse emulsions \cite{kwok2020apollonian}. Moreover, deterministic fractals can appear as boundary phenomena arising from competing bulk and interface effects. Examples of the latter are the Landau pattern in type-I superconductors \cite{landau1938intermediate} and states similar to the Apollonian packing of circles in smectic-A liquid crystals \cite{meyer2009focal}. Below we investigate the possibility of a different state: the ground state fractal crystal. We define it as a state that satisfies the following conditions: \begin{itemize} \item The state should spontaneously break space translation symmetry down to a crystalline group. \item The unit cell of the resulting crystal should have an infinite number of elements, with each unit cell forming a fractal. \item The state should be an energy minimum of a Hamiltonian that respects translation invariance. \end{itemize} A particularly interesting question is whether there are classical field theories with such a ground state. Classical field theories can be grouped into four categories according to whether space and field are continuous or discrete. For example, the Ising model \cite{ising1925contribution} is a discrete-space, discrete-field model, while the lattice XY model is a discrete-space, continuous-field model. An example of a continuous-space, continuous-field model is Ginzburg-Landau theory \cite{Ginzburg1950}. All these models have uniform or crystal-like modulated ground states. 
To realize a fractal crystal free of a short-distance cutoff, only models with continuous space are potential candidates. \section{CSDF model derivation} Here we formulate the simplest continuum-space-discrete-field (CSDF) model that can have a fractal crystal as its ground state. To phenomenologically derive the CSDF model, an analogy with the static Cahn-Hilliard model \cite{CahnHilliard} is useful. First, let us briefly recap the phenomenological derivation of the static Cahn-Hilliard model that describes structure formation in the standard problem of phase separation \cite{CahnHilliard}: \begin{equation}\label{CH} \begin{gathered} F_{CH}[c(\textbf{r})] = \int f\left( c(\textbf{r}), \nabla c(\textbf{r}), \nabla^2 c(\textbf{r}), .. \right) d\textbf{r} \\ f = V(c) + \gamma (\nabla c)^2 + .. \end{gathered} \end{equation} where space is two dimensional, $\textbf{r} = (x, y)$, the commonly used form of the potential is $V(c) = - 2 c^2 + c^4$, and $c$ is the order parameter of the model. The model describes a binary system. We will refer to $c = 1$ as the first phase and to $c = -1$ as the second phase (in general, the phases can have different values of $c$). Note that to justify the expansion in powers of $c$ and its derivatives in the Cahn-Hilliard model, one assumes that $c$ is small and slowly varying in space. Let us next consider a phase-separation-like process in the opposite limit, where $c$ is not small in general and varies very rapidly in space, i.e., the width of the interface between the phases is negligibly small. Hence we can approximately set $c = \pm1$ everywhere. A configuration is then uniquely defined by the coordinates of the interfaces between the two phases labeled by $c = 1$ and $c = -1$. There can be multiple disconnected interfaces. Let us enumerate the interfaces by the index $i = 1, .. 
N$ and parameterize the curves associated with the interfaces by arc length $s$, such that each interface is given by $\textbf{r}_i(s)$, see \figref{fig_vectors_parametrization}. Hence the energy functional is given by: \begin{equation} F[c(\textbf{r})] = \sum_i G[\textbf{r}_i(s)], \end{equation} where $G$ is a new energy functional that depends on the shape and size of the given interface. Now, let us phenomenologically derive the explicit form of the energy functional $G$ in the spirit of the Cahn-Hilliard model \eqref{CH}. To do that we need to ensure that the model satisfies the relevant symmetry conditions. We also need to determine what plays the role of the order parameter. In analogy with \eqref{CH} we can write: \begin{equation}\label{Grs} G[\textbf{r}(s)] = \int g\left( \textbf{r}(s), \textbf{r}'(s), \textbf{r}''(s), .. \right) ds \end{equation} \begin{figure} \centering \includegraphics[width=0.99\linewidth]{vectors_parametrization.pdf} \caption{ Parametrization of the interface curve between phase one (gray) and phase two (white). Here $\textbf{r}$ is the coordinate of a point on the interface parameterized by arc length $s$, $\textbf{t}$ is the unit vector tangent to the curve, $\textbf{t} = \textbf{r}'$, and $\textbf{n}$ is the unit vector orthogonal to the curve, $\textbf{n} = \textbf{t}' / \kappa = \textbf{t} \times \hat{\textbf{z}}$. } \label{fig_vectors_parametrization} \end{figure} The model should be translationally invariant. Hence $g$ should not depend on $\textbf{r}$: shifting a patch of one phase should not change the energy. Next, $\textbf{r}' \equiv \textbf{t}$, where $\textbf{t}(s)$ is the unit vector tangent to the interface, see \figref{fig_vectors_parametrization}. We demand that the model be rotationally invariant. Hence $g$ should not depend on $\textbf{t}$. The next derivative gives $\textbf{r}'' = \textbf{t}' = \kappa \textbf{n}$, where $\kappa(s)$ is the signed curvature of the interface curve. 
The unit vector normal to the curve is $\textbf{n} = \textbf{t} \times \hat{\textbf{z}}$, where $\hat{\textbf{z}} = (0, 0, 1)$ is the unit vector orthogonal to the $xy$ plane. Since $\textbf{n}' = - \kappa \textbf{t}$ and $\textbf{t}' = \kappa \textbf{n}$, any higher-order derivative of $\textbf{r}$ is spanned by the vectors $\textbf{n}$ and $\textbf{t}$ and depends on the curvature and its derivatives. Namely, $\textbf{r}^{(n)} = a \textbf{n} + b \textbf{t}$, where $a,\ b$ are functions of $\kappa, \kappa' ..$. For example, $\textbf{r}''' = \kappa' \textbf{n} - \kappa^2 \textbf{t}$. This means that the model \eqref{Grs} depends only on the signed curvature $\kappa$ and its derivatives. Hence $\kappa$ is the natural analog of the order parameter in such a model\footnote{Consider another way to see how the curvature appears in this model. When expanding the Cahn-Hilliard model to higher orders in derivatives one obtains, for example, $\nabla^2 c = c'' + \kappa c'$. After integrating in the direction orthogonal to the interface one obtains a model that is a functional of the curvature. See \cite{barkman2020ring} for a similar approximation.}. Hence we can rewrite \eqref{Grs} in terms of $\kappa$: \begin{equation}\label{Gks} G[\kappa(s)] = \int g\left( \kappa(s), \kappa'(s), \kappa''(s), .. \right) ds \end{equation} \section{Small curvature $\kappa$ expansion} Next, consider the case where $\kappa(s)$ is a small and slowly varying function of $s$. This leads to the expansion: \begin{equation}\label{gk} g = V(\kappa) + \gamma (\kappa')^2 + .. \end{equation} Let us consider the case where $\gamma$ is very large and hence $\kappa' = 0$. In that case, the curvature is constant along the interface, making it a circle. Note that the sign of the curvature depends on the phase: a disc of phase one (two) on a phase-two (phase-one) background has positive (negative) curvature. Let us consider the different ground states that this model can have depending on the potential $V(\kappa)$. 
(i) If $V > 0$ the system will have one uniform phase. (ii) If $V < 0$ the system will create infinitely many interfaces. Since the interfaces have zero thickness (unlike in the Cahn-Hilliard model), they can be infinitely close to each other and the energy will diverge. (iii) The energy of a single circular interface is $U(\kappa) = \frac{2 \pi}{|\kappa|}V(\kappa)$. Hence if $U(\kappa)$ changes sign and has a negative minimum, i.e., $U_{min} < 0$ while $U(0),\ U(+\infty) > 0$, the ground state of this model can represent a nontrivial configuration of interfaces. (iv) For the minimum to be convergent, the potential $V(\kappa)$ cannot be an even function of $\kappa$: otherwise there would be two minima of equal energy at $\pm\kappa_{min}$, and it would be beneficial to place a phase-two circle with curvature $\kappa_2$ inside the phase-one circle with curvature $\kappa_1$, see \figref{fig_hexagonal_packing}. Since this model has zero-thickness interfaces, these circles could have infinitely close curvatures, $\kappa_1 \to - \kappa_2$. This process can be repeated, inserting infinitely many circles, so the energy would diverge. Consider the simplest example \begin{equation}\label{Vk} V(\kappa) = V_0 + V_1 \kappa + V_2 \kappa^2 \end{equation} Hence $U(\kappa) = 2 \pi \left( \frac{V_0}{|\kappa|} + V_1 \text{sign}(\kappa) + V_2 |\kappa| \right)$, where $V_0,\ V_2 > 0$ and $V_1 \neq 0$. We can always rescale lengths and energies to eliminate $V_0$ and $V_2$. The sign of $V_1$ only sets whether positive or negative $\kappa$ is preferred. Let us set $V_1 = - v$, where $v >0$, resulting in the circular interface energy: \begin{equation}\label{Uk} U(\kappa) = 2 \pi \left( \frac{1}{|\kappa|} - \text{sign}(\kappa) v + |\kappa| \right) \end{equation} This energy is minimal at $\kappa_{min} = 1$, where it equals $U_{min} = 2 \pi (2 - v)$. 
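The location of this single-circle minimum is easy to verify numerically. The sketch below evaluates \eqref{Uk} for $\kappa > 0$ on a dense grid; the value $v = 2.5$ is an arbitrary illustrative choice, not a parameter used elsewhere in the text.

```python
import numpy as np

# Evaluate the circular-interface energy of Eq. (Uk) for kappa > 0,
# U(kappa) = 2*pi*(1/kappa - v + kappa), and locate its minimum on a grid.
# The value v = 2.5 is an arbitrary illustrative choice.
v = 2.5
kappa = np.linspace(0.01, 10.0, 1_000_000)
U = 2 * np.pi * (1 / kappa - v + kappa)

kappa_min = kappa[np.argmin(U)]   # expected: kappa_min = 1
U_min = U.min()                   # expected: U_min = 2*pi*(2 - v)
print(kappa_min, U_min)
```

For $v = 2.5$ the minimum is negative, $U_{min} = 2\pi(2 - v) = -\pi$, consistent with interfaces being energetically favorable for $v > 2$.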
For $v > 2$ the model can have a hexagonal lattice of circles as its ground state, with $\kappa_{hex} = \left( v - \sqrt{v^2 - 3} \right)^{-1}$, see \figref{fig_hexagonal_packing}. Here $\kappa_{hex} > \kappa_{min}$, since $\kappa_{hex}$ is found by minimizing the energy density $\rho = 3 U(\kappa) / S_{hex}$, with hexagon area $S_{hex} = 6 \sqrt{3} / \kappa^2$. See \appref{app_kProof} for a comparison of the energy of this state to other packings, where we prove that it has lower energy than all other compact packings of discs and all packings with lower density. A compact packing is a packing where every pair of discs in contact is in mutual contact with two other discs. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{hexagonal_packing.pdf} \caption{ The crystalline ground state of the model \eqref{gk} and \eqref{Vk} in the form of a hexagonal packing of discs. Phase one (two) is colored black (white). } \label{fig_hexagonal_packing} \end{figure} \section{Expansion in small curvature radius $R$ } Let us consider the case opposite to the one studied in the previous section. Namely, here we assume that the curvature $\kappa$ is rather large. In this case we can expand in the signed curvature radius $R \equiv 1 / \kappa$. We obtain an expansion similar to \eqref{gk}: \begin{equation}\label{gR} g = V(R) + \gamma (R')^2 + .. \end{equation} The term proportional to $(R')^2$ plays a role similar to that of $(\kappa')^2$ in \eqref{gk}. The only difference is that $(R')^2$ diverges if the interface is not convex, meaning that $R$ must be sign definite along the interface. Otherwise, it plays the same role of fixing the shape of the interface. In this section, we again assume that $\gamma$ is rather large, so that the interfaces form circles. First, let us study the simplest (rescaled) model: \begin{equation}\label{VR} V(R) = 1 - v R + R^2 \end{equation} which leads to the single-circle energy $U(R) = 2 \pi |R| \left( 1 - v R + R^2 \right)$. This model has several distinct ground states, depending on $v$. 
For $0 < v < 2$ the ground state is a uniform single phase. At $v > 2$ the model spontaneously breaks translational symmetry: for $2 < v < 6.20..$ the ground state is a hexagonal lattice, \figref{fig_hexagonal_packing}, with disc radius $R = 1.14..$. Another phase transition happens at $v = 6.20..$. For $6.20.. < v < 13.17..$ the ground state is a hexagonal lattice with an additional set of smaller discs, \figref{fig_hexagonal_packing_1}; the larger discs have radius $R = 1.29..$. To check this, we compared the energy densities of different packings of discs \cite{kennedy2006compact}, see \appref{app_Rcomp}. This pattern may continue with the addition of progressively smaller circles. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{hexagonal_packing_1.pdf} \caption{ The crystalline ground state of the model \eqref{gR} and \eqref{VR} in the form of a hexagonal packing of discs with smaller discs in between the large discs. Phase one (two) is colored black (white). } \label{fig_hexagonal_packing_1} \end{figure} \begin{figure} \centering \includegraphics[width=0.99\linewidth]{Us.pdf} \caption{ The energy of a disc, $U(R)$, as a function of its signed curvature radius $R$ for different models, namely the models obtained by expanding in powers of the curvature $\kappa$ \eqref{Uk} (blue) and of the radius $R$ \eqref{VR} (orange). Here we fix $v = 2.2$, so both have the hexagonal packing of discs as the ground state. The fractal crystal is obtained as the ground state of the model \eqref{VR_frac} (green). } \label{fig_Us} \end{figure} \section{The ground state fractal crystal} In this section, we demonstrate a model in which translation invariance breaks down to a ground state fractal crystal. To achieve that, it has to be energetically beneficial to add circles to gaps of any size between already placed circles. This means that we can set $V(R) \to 0^-$ for $R \to 0^+$. Hence let us assume that the potential behaves as $V(R) \to - R^\alpha$ in that limit. 
The energy density of a hexagonal lattice of small circles is then $\rho = 2 \pi |R| V(R) / S_{hex} \to - R^{\alpha - 1}$ for $R \to 0^+$. So we obtain the condition $\alpha \geq 1$; otherwise, the energy diverges as many circles of size $R \to 0^+$ populate the system. Note that $\alpha = 1$ is a special case, since in the $R \to 0^+$ limit all packings fully covering the plane have the same energy. Hence, if the subleading order in the energy density is positive, the ground-state energy will diverge through the inclusion of infinitely small discs; if the subleading order is negative, the ground state can be realized by some nontrivial fractal packing of discs. One of the simplest options is to set $\alpha = 3$ and the potential to be: \begin{equation}\label{VR_frac} V(R) = - R^3 + R^4 \end{equation} This model has the circle energy $U(R) = 2 \pi |R| \left( - R^3 + R^4 \right)$. First, it is easy to see that the discs in the model \eqref{VR_frac} will fully cover the plane: if they did not and there were gaps between discs, the energy could be decreased by placing smaller discs there. Next, we can determine which type of packing the discs form in the limit $R \to 0$. To that end, consider an empty gap of area $S \to 0$ between already placed discs. We want to find which type of disc packing gives the lowest energy for this gap. In this limit the energy is, in general, \begin{equation} E = - A_k \ \ \ {\rm with} \ \ \ k > 2 \end{equation} For the case \eqref{VR_frac} we have $k = 4$. Here $A_k$ is the sum of the radii raised to the power $k$, which is given by \cite{gilbert1964randomly}: \begin{equation}\label{Ak} A_k = \sum_{i = 0}^{+\infty} n_i R_i^k = - c_k \int_0^{R_0} R^k n'(R) dR \end{equation} where $c_k$ are constants, $n_i$ is the number of discs with radius $R_i$, sorted such that $R_0 > R_1 > ..$, and $n(R)$ is the number of discs with radii $r$ such that $R_0 \geq r \geq R$. 
For a given packing of circles, for $R \to 0$ it is possible to show \cite{melzak1969solid} that \begin{equation} n(R) = c R^{-d} \end{equation} where $d$ is the Hausdorff dimension of the packing and $c$ is some other constant characterising it. Hence $A_k$ can be estimated as \begin{equation}\label{Ak_sol} A_k = \frac{c_k c d}{k - d} R_0^{k - d} \end{equation} where the parameters $c,\ d$ and $R_0$ depend on the packing. Using the relation $\pi A_2 = S$ we can eliminate the parameter $c$: \begin{equation}\label{Ak_asymp} A_k = b \frac{2 - d}{k - d} R_0^{k - 2} \end{equation} where the packing-independent constant is $b = S c_k / c_2$. From \eqref{Ak_asymp} we see that the maximum of $A_k$, and hence the minimum of the energy $E = - A_k$, is achieved for maximal $R_0$ and minimal $d$. Maximal $R_0$ means that the largest disc should be as large as the gap allows (which corresponds to Apollonian packing), while, as shown in \cite{melzak1969solid}, $d \in [d_A, 2]$ for disc packings, where $d_A \simeq 1.3056867..$ is the dimension of the Apollonian packing. Hence we see that for $R \to 0$ the ground state is the fractal Apollonian packing of discs. For the model \eqref{VR_frac} we propose a candidate for the global minimum, \figref{fig_apollonian_packing}, where the radius of the biggest circle is $R = \left. \frac{2 A_4}{3 A_5} \right|_{R_0 = 1} = 0.667379..$ and the energy density is $\rho = - \left. \frac{8 \pi A_4^3}{27 S A_5^2} \right|_{R_0 = 1} = -0.269622..$. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{apollonian_packing.pdf} \includegraphics[width=0.99\linewidth]{apollonian_packing_close.pdf} \caption{ Fractal crystal in the form of a periodic Apollonian packing, arising as the energy minimum state in the model \eqref{gR} and \eqref{VR_frac}. Colors correspond to different energies of the phase-one discs. Since only a finite number of generations of circles is shown, the second phase occupies the gaps between them. 
As smaller circles are included, the fraction of the second phase goes to zero. The interfaces between the phases are drawn in black. } \label{fig_apollonian_packing} \end{figure} \section{Phase transition between fractal and uniform states} To study the transition between the fractal and uniform phases, consider the following modification of the model \eqref{VR_frac}: \begin{equation}\label{VR_frac_uni} V(R) = a R^2 - R^3 + R^4 \end{equation} For $a = 0$ the fractal phase is the ground state. As $a$ is increased towards $1 / 4$, presumably larger and larger discs are removed from the fractal. For $a > 1 / 4$ the model \eqref{VR_frac_uni} has a uniform ground state with energy density $\rho = 0$. We can define the order parameter in this case as the fraction of area not occupied by discs, \begin{equation} \sigma = (S_{total} - S_{discs}) / S_{total}. \end{equation} Now let us introduce a critical exponent $\omega$ defined by \begin{equation} \sigma \propto a^\omega \ \ {\rm for} \ \ a \to 0. \end{equation} Consider the configuration of \figref{fig_apollonian_packing} with the small discs of radius $R < R_n$ removed. Similarly to \eqref{Ak} and \eqref{Ak_sol}, in the limit $a \to 0$ we can compute $\sigma$ as: \begin{equation} \sigma = \pi S_{total}^{-1} \sum_{i = n + 1}^{+\infty} n_i R_i^2 = \frac{c_2 c d_A}{2 - d_A} R_{n + 1}^{2 - d_A} \propto R_{n + 1}^{2 - d_A} \end{equation} Hence we need to find the relation between $R_{n + 1}$ and the parameter $a$. To do so, note that the energy density is given by $\rho^n = 2 \pi \frac{a A_3^n R_0^3 - A_4^n R_0^4 + A_5^n R_0^5}{R_0^2 S_0}$, where $A_k^n = \sum_{i = 0}^n n_i \eta_i^k$ with $\eta_i = R_i / R_0$, and the unit cell area $S_0$ is rescaled in terms of the radius of the largest disc $R_0$. 
Then we can minimise $\rho^n$ with respect to $R_0$ and, expanding in $a$, we get: \begin{equation} \rho_{min}^n = \frac{4 \pi A_4^n}{3 S_0 A_5^n} \left( -\frac{2 (A_4^n)^2}{9 A_5^n} + a A_3^n \right) + O(a^2) \end{equation} Hence, as $a$ is decreased, the configuration with $n + 1$ discs becomes the ground state instead of the configuration with $n$ discs when $\rho_{min}^n = \rho_{min}^{n + 1}$. Solving the latter for $a$, which we denote $a_{n + 1}$ in this case: \begin{equation} a_{n + 1} = \frac{2 A_4^n}{3 A_5^n} \eta_{n + 1} + .. \propto R_{n + 1} \end{equation} which means that the critical exponent is: \begin{equation} \sigma \propto a^\omega,\ \ \text{with}\ \ \omega = 2 - d_A \simeq 0.6943.. \end{equation} \section{Conclusions} We presented the concept of the ground state fractal crystal as a state generalizing crystalline order. We phenomenologically derived a model that is defined on a two-dimensional continuous space and has a two-valued discrete field. The model respects space translation symmetry. The energy of this model is expressed as an integral, over the interfaces between the two phases, of a function that depends on the signed curvature $\kappa$ and its derivatives. We demonstrated that energy minimization in this model leads to the spontaneous breakdown of translation symmetry in the form of a crystal where each unit cell can be a fractal. The model can be generalized to the situation with some finite interface thickness $d$. Then the fractal structure will be present only down to scales of order $d$. This is similar to other fractals in physical systems, which generically feature some microscopic cutoff length scale. An open question is whether these states are realized in physical systems with complex order. 
Candidates include generalizations of the phases occurring in quantum Hall systems between the stripe and bubble phases \cite{fogler2002stripe}, the phases between a two-dimensional electron liquid and a Wigner crystal \cite{spivak2004phases}, soft-matter states, and hierarchical structure formation in polydisperse vortex clusters \cite{meng2016phase}. The fractal energy minimizers in a classical field theory that we find can in principle be related to quantum problems. In our case, the energy minimizer is a crystal with an Apollonian packing in each unit cell. On the other hand, fractals similar to the integral Apollonian packing are related to the Hofstadter butterfly \cite{satija2016tale}, i.e., the energy spectrum of electrons in a magnetic field. Finally, we note that we presented the simplest continuous-space discrete-field model, which can be easily generalized. For example, one can consider a three-dimensional version with principal curvatures for the two-dimensional interfaces between phases. Next, more phases can be included, which will result in new types of interfaces. Namely, for three or more phases one can also have vertices in two dimensions (or vertices and lines in three dimensions) where three or more phases meet. Energies of these lower-dimensional interfaces can be specified in addition to the functional that depends on curvature. Moreover, to obtain a fractal packing of discs one can in principle construct other models that favor specific shapes and larger areas of a given phase, see \appref{app_Other2dFractals}. For a one-dimensional system, the interface is just a point, which has no curvature. However, it is still possible to obtain fractal patterns as the ground state in this case, see \appref{app_1dFractals}. \begin{acknowledgments} The work was supported by the Swedish Research Council Grants 2016-06122, 2018-03659. 
This work was inspired by the video "Newton's Fractal (which Newton knew nothing about)" by 3Blue1Brown (\href{https://www.youtube.com/watch?v=-RdOwhmqP5s&ab_channel=3Blue1Brown}{link}). We thank Mats Barkman, Sahal Kaushik, and Boris Svistunov for useful discussions. \end{acknowledgments}
\section{Introduction} Today, quantitative data on single dividing cells across generations and lineages can be produced with high throughput and spatiotemporal resolution. Such improved data have enabled renewed investigation of microbiological phenomena, where, given the intrinsic stochasticity of these systems, approaches based on statistical physics play a primary role. One example is the decision mechanism by which a cell divides, which has a key role in its size determination. Several important recent findings, obtained by the joint use of theoretical models and experiments measuring cell size and division events dynamically, have advanced this field. Namely, (i) interesting scaling behavior emerges for the distributions of key variables such as doubling times and cell sizes across conditions and species~\cite{Kennard2016,Iyer-Biswas2014,Giometto2013}, suggesting the existence of universal parameters setting these variables; (ii) fluctuations of different quantities are related, for example cell-size and doubling-time fluctuations are linked to the average growth rate~\cite{Kennard2016,Taheri-Araghi2015}; (iii) mechanisms of division control can be explored and inferred using theoretical models, formulated as stochastic processes (of different kinds) whose dynamic variables are cell size, time and division events~\cite{Amir2014,Osella2014a,Campos2014,Taheri-Araghi2015,Kennard2016}. The last point is the most studied, due to its direct biological relevance. The data typically rule out controls based purely on size or on time measurements~\cite{Taheri-Araghi2015,Osella2014a,Robert2014,Soifer2016}. Concerted control mechanisms where multiple variables (e.g., time and size) may enter jointly have been proposed~\cite{Amir2014,Osella2014a}. 
Several studies in \emph{E.~coli}~\cite{Campos2014,Taheri-Araghi2015} and other microbes~\cite{Deforet2015,Taheri-Araghi2015,Soifer2016,Tanouchi2015} have argued for a mechanism in which the size extension in a single cell cycle is nearly constant and independent of the initial size of the cell (sometimes called the ``adder'' mechanism of division control). However, it is clear that the constant added size is not the only trend found in the data~\cite{Campos2014,Taheri-Araghi2015,Iyer-Biswas2014a,Jun2015}, and that it is not a necessary and sufficient condition for the observed scaling behavior and fluctuation patterns~\cite{Kennard2016}. More broadly, the question of how much a mechanism can be isolated and specified with available data is still open. Additionally, existing studies have relied on different modeling approaches, raising the need for a unified framework. Specifically, two dominant formalisms emerge. The first describes the continuous-time division process by a hazard function, defining the probability per unit time that a cell divides, as a function of the values of measurable variables such as initial and/or current size, incremental or multiplicative growth, and elapsed time from cell division. The second formalism describes cell size across generations as a discrete-time auto-regressive process (where the unit of time is a cell cycle). Here, we propose a unified framework explicitly linking these two formalisms, and we ask to what extent division mechanisms can in general be distinguished from data. Our formalism specifies the precise conditions on the parameters imposed by the empirically found scaling properties. By expanding around the mean initial size or inter-division time (generalizing the approach of ref.~\cite{Amir2014}), we show explicitly how this framework describes a wide range of division control mechanisms, including combinations of time and size control, as well as control by constant added size. 
As we show by analytical estimates and numerical simulation, the available data are characterized with great precision by the first-order approximation of this expansion. Hence, a single dimensionless parameter defines the strength and the action of the division control. However, this parameter may emerge from several mechanisms, which are distinguished only by higher-order terms in our perturbative expansion. Finally, we estimate the sample size needed to distinguish second-order effects, and show that it is close to, but larger than, the size of currently available datasets. \section{Background} \subsection{Theoretical description of division control.} \label{sec:cgrowth} Our description assumes exponential growth of the cell size, $x(t)=x_0 e^{\alpha t}$, which is well supported in the literature~\cite{Iyer-Biswas2014,Taheri-Araghi2015,Campos2014,Osella2014a} and, as in previous modeling frameworks, neglects fluctuations of the growth rate $\alpha$~\cite{Iyer-Biswas2014,Osella2014a,Taheri-Araghi2015}. A cell divides at a size $x_f$ into two cells of equal size $x_f/2$ (we thus do not consider the small fluctuations around binary fission, the process of filamentation and recovery, or species with non-binary division~\cite{Schmoller2015,Osella2014a,Jun2015}). A control mechanism defines the division size $x_f$. In the absence of this control, fluctuations of cell size may grow indefinitely in time. The full information on division control is encoded by the function $p(x_f| x_0, \alpha)$, the conditional probability that a cell, born at size $x_0$ and growing with a growth rate $\alpha$, divides at size $x_f$. Note that the growth-division process is defined by four variables, $x_0,x,t,\alpha$, with the constraint of exponential growth and the model assumption of negligible fluctuations in $\alpha$. This allows different equivalent parametrizations of the process. A quantity of interest is the size at birth of a cell, followed across generations. 
Given the conditional probability $p^{i}_b(x_0|\alpha)$ of observing a cell at generation $i$ with initial size $x_0$, the following Chapman-Kolmogorov equation gives the same probability at the subsequent generation \begin{equation} p^{i+1}_b(x_0|\alpha) := 2 \int_0^\infty dy \ p(2 x_0| y, \alpha) p^{i}_b(y|\alpha) \ , \label{Eq:probkol} \end{equation} where $p(x_f| x_0, \alpha)$ plays the role of a transition probability. The assumption of exponential single-cell growth implies that in this process the noise on doubling times has a multiplicative effect. Consequently, it is useful to introduce the quantity $q= \log(x/x^\ast)$, which measures logarithmic deviations in size. At this stage, $x^\ast$ is an arbitrary scale, necessary to make the argument of the logarithm dimensionless. This choice is convenient as exponential growth maps into the linear relation $q(t)=q_0 + \alpha t$. The mechanism of division control can be equivalently specified in terms of $q$, by introducing the transition probability \begin{equation} \rho( q_f | q_0 , \alpha ) := x^\ast e^{q_f} p( x^\ast e^{q_f} | x^\ast e^{q_0} , \alpha ) \ . \label{Eq:procq} \end{equation} The mechanism of division control, defined by $p(x_f| x_0,\alpha)$, determines the stationary distribution (if it exists) of sizes observed in a steadily-dividing population or genealogy, denoted by $p^*$. The stationary distribution of interdivision times $t_d$ derives equivalently from the mechanism of division control. A change of condition, e.g., of nutrients or temperature, corresponds to a change of the growth rate $\alpha$, which, in turn, has an effect on division control. It is observed experimentally that the stationary distributions of both initial size and inter-division time, measured under different conditions, collapse when rescaled by their means~\cite{Taheri-Araghi2015,Kennard2016}, as shown in Fig.~\ref{fig:data}. 
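As an illustration of how the Chapman-Kolmogorov equation \eqref{Eq:probkol} can be iterated to a stationary state, the sketch below discretizes the size axis and uses an assumed ``adder'' transition kernel, i.e., division at size $x_0 + \Delta$ with Gaussian noise. This specific kernel, and the values of $\Delta$ and $\sigma$, are illustrative assumptions, not forms inferred from data. For this kernel the stationary mean birth size is the fixed point of $x_0 \mapsto (x_0 + \Delta)/2$, namely $\Delta$.

```python
import numpy as np

# Discretized iteration of the Chapman-Kolmogorov equation
#   p_b^{i+1}(x0) = 2 * int dy  p(2*x0 | y) p_b^i(y),
# with an assumed "adder" kernel p(xf | x0) ~ Normal(x0 + Delta, sigma^2).
# Delta and sigma are illustrative values, not fitted parameters.
Delta, sigma = 2.0, 0.3
x = np.linspace(0.1, 8.0, 800)
dx = x[1] - x[0]

def kernel(xf, x0):
    """Division-size density p(xf | x0) for the assumed adder rule."""
    return np.exp(-(xf - (x0 + Delta))**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

K = kernel(2 * x[:, None], x[None, :])      # K[i, j] = p(2 x_i | x_j)
p = np.full_like(x, 1.0 / (x[-1] - x[0]))   # uniform initial guess
for _ in range(200):
    p_new = 2 * dx * (K @ p)                # one generation of the CK equation
    p_new /= p_new.sum() * dx               # absorb discretization error
    converged = np.max(np.abs(p_new - p)) < 1e-12
    p = p_new
    if converged:
        break

mean_x0 = (x * p).sum() * dx                # stationary mean birth size, ~ Delta
```

Replacing the kernel with another choice of $p(x_f|x_0,\alpha)$ probes a different division-control mechanism with the same machinery.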
In the following, we will assume this scaling property, which implies some constraints on the control defined by $p(x_f| x_0, \alpha)$~\cite{Kennard2016}. \subsection{Scaling laws for size and doubling-time distributions as a result of division control} \label{sec:scaling} We now derive explicitly the constraints on division control emerging from finite-size scaling, following ref.~\cite{Kennard2016}. Numerous experimental studies~\cite{Taheri-Araghi2015,Kennard2016,Iyer-Biswas2014,Trueba1982} have shown that, for several bacterial species and conditions, the steady-state distributions of initial (or final) sizes and of doubling times of dividing cells collapse when rescaled by their means. For instance, in the case of the initial size distribution, the scaling condition reads \begin{equation} p^\ast_b( x_0 | \alpha ) = \frac{1}{\langle x_0 \rangle_\alpha} F\left( \frac{x_0}{\langle x_0 \rangle_\alpha} \right) \ , \label{Eq:collapse_init} \end{equation} where we defined \begin{equation} \langle x_0 \rangle_\alpha := \int dx \ p^\ast_b(x|\alpha) x \ . \label{Eq:mean_init} \end{equation} A similar equation applies to the inter-division time distribution using $\langle t_d \rangle_\alpha$, i.e., the average inter-division time conditional on the growth rate. When the fluctuations of $\alpha$ are neglected, the collapse can be explained as a result of the division control, but it does not by itself isolate a specific mechanism~\cite{Kennard2016}. Specifically, the observed collapse of the doubling-time and initial-size distributions implies that the conditional distribution $p(x_f| x_0, \alpha)$ (for a growth condition with a given mean growth rate) has to collapse when both variables are rescaled by $\langle x_0 \rangle_{ \alpha }$ \begin{equation} p( x_f | x_0, \alpha ) = \frac{1}{\langle x_0 \rangle_\alpha} G\left( \frac{x_f}{\langle x_0 \rangle_\alpha} , \frac{x_0}{\langle x_0 \rangle_\alpha} \right) \ . 
\label{Eq:collapse_div} \end{equation} This calculation is discussed in Appendix~\ref{app:collapse}. Another important constraint implied by the scaling of the doubling time distributions (Appendix~\ref{app:collapse}) is that the product $\alpha \langle \tau \rangle_{\alpha}$ does not depend on the mean growth rate in a given condition $\alpha$, which is the familiar condition matching the population-averaged growth rate with the inverse average doubling time. Finally, Eq.~\eqref{Eq:collapse_div} implies that the division control depends on a single ``internal'' size scale, which, in turn, sets the value of $\langle x_0 \rangle_\alpha$. In conclusion, the joint universality in doubling time and size distributions can be explained by division control mechanisms based on a single length (size) scale and $1/\alpha$ as the unique time scale. While this condition does not imply any mechanism, it can be applied to different modeling frameworks, allowing model-independent predictions. \begin{figure}[tbp] \centering \includegraphics[width=0.38\textwidth]{Fig1.pdf} \caption{ Finite-size scaling properties of cell size and the division-control function. A: Collapse of the probability distribution of rescaled logarithmic size $\log(x_0/\langle x_0 \rangle_\alpha)$ across different conditions/strains (colors) (equivalent to the condition in Eq.~\eqref{Eq:collapse_init}). B: The function $f(\cdot)$ defining the mechanism of size control in the discrete-time Langevin framework (Eq.~\eqref{Eq:autodiscrete}) collapses when rescaled as predicted by Eq.~\eqref{Eq:fvsh}. Data were obtained from refs.~\cite{Kennard2016} and \cite{Wang2010a}. Data from ref.~\cite{Kennard2016} refer to two strains, P5-ori (a BW25113 derivative strain) and MRR, grown on agarose pads in four nutrient conditions (Glc, CAA, RDM and LB). Data from ref.~\cite{Wang2010a} (orange triangle) refer to the MG1655 strain in a microfluidic device with LB as growth medium.
} \label{fig:data} \end{figure} \section{Results} \subsection{A unified modeling framework connects different descriptions of the growth-division process.} \label{sec:unif} We aim to provide a generic framework describing growth-division data, which can be compared with data and used to draw conclusions on the possible mechanism of division control. To this end, two main theoretical formalisms have been employed so far. The first describes cell growth and division as a continuous-time process in which the main parameter is the time elapsed from the last cell division. The second describes the dynamics of measurable variables, such as initial size and interdivision time across generations, thus using the generation index as a discrete time. This section reviews the two frameworks, showing how they are equivalent, and explicitly providing the map connecting them. This map leads us to a discrete-time equation, where the function describing the control is mapped explicitly to a hazard rate. Finally, we show how this equation is constrained by the collapse of size and doubling-time distributions. The continuous-time approach~\cite{Osella2014a,Taheri-Araghi2015,Painter1968} supposes an underlying ``decisional process'' for cell division, which is entirely specified by the dependency of the division rate $h_d$ on the measured dynamic parameters, such as instantaneous and initial cell size, added size, elapsed time from the previous cell division, and growth rate. The function $h_d$ is analogous to a hazard rate in survival models. In particular, since division control is fully specified by $p(x_f| x_0,\alpha)$, one has the relation \begin{equation} p( x | x_0, \alpha ) = -\frac{d}{d x} \exp\left( -\int_0^x ds \ h_d\left(s,x_0,\alpha\right) \right) \ . \label{Eq:defhd_div} \end{equation} This function $h_d$ can be inferred directly from data or a specific functional form can be assumed to test specific model predictions~\cite{Osella2014a}.
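As an illustration of this relation, the short sketch below builds $p(x_f|x_0,\alpha)$ numerically from a given $h_d$ through the survival function and checks normalization. The hazard rate used here, which switches on once the added size exceeds a threshold, is an assumed toy form with illustrative parameters, not one inferred from data.

```python
import numpy as np

# Illustrative (assumed) hazard rate: division becomes increasingly likely
# once the added size x - x0 exceeds a threshold delta.
def h_d(x, x0, k=8.0, delta=1.0):
    return k * np.clip(x - x0 - delta, 0.0, None)

x0 = 1.0
x = np.linspace(x0, 6.0, 4000)       # sizes from birth onward (arbitrary units)
dx = x[1] - x[0]

# Survival function S(x) = exp(-int_{x0}^{x} h_d(s, x0) ds), so that the
# division-size density is p(x | x0) = h_d(x, x0) * S(x) = -dS/dx.
S = np.exp(-np.cumsum(h_d(x, x0)) * dx)
p = h_d(x, x0) * S

norm = p.sum() * dx                  # should integrate to ~1
mean_xf = (x * p).sum() * dx         # mean division size
print(round(norm, 3), round(mean_xf, 2))
```

The same construction works for any nonnegative hazard; only the functional form of `h_d` encodes the hypothesized mechanism of control.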
Previous work~\cite{Taheri-Araghi2015} has shown that data are well reproduced by models where the division rate depends on added size $x-x_0$, or by more complex ``concerted control'' models where the rate is allowed to depend on two variables, instantaneous size $x$ and initial size $x_0$ \emph{or} elapsed time $t$ (the latter two variables are essentially interchangeable since the distribution of elongation rates is generally quite peaked)~\cite{Osella2014a}. This approach works very well in reproducing essentially all available observations. However, it leads to the problem of finding an interpretation of $h_d$, which is not simple. In future studies, where $h_d$ can be linked to ``molecular'' variables such as concentrations or absolute amounts of cell-cycle related proteins, this may become easier. The other problem with the approach is that $h_d$ is a function, and, while it can be inferred directly from data, its parameterization may be far from obvious. In order to comply with the empirical scaling properties of initial, final and added size, and of interdivision time, the hazard rate function must collapse when both variables are rescaled by $\langle x_0 \rangle_{\alpha }$~(see Eq.~\eqref{Eq:collapse_div}), \begin{displaymath} h_{d}(x,x_0,\alpha) = \tilde{h}\left(\frac{x}{\langle x_0 \rangle_{\alpha }}, \frac{x_0}{\langle x_0 \rangle_{\alpha }} \right) \ . \end{displaymath} The discrete-time formalism~\cite{Amir2014,Soifer2016,Tanouchi2015,Taheri-Araghi2015} gives up the ambition of capturing doubling-time fluctuations in full, in order to obtain a clearer view of the dynamics of cell size. Importantly, this approach makes an assumption for doubling time fluctuations, defining the doubling time conditional to a certain initial size $x_0$ as a random variable with a pre-defined mean $\tau_0$ and ``noise'' $\xi$, $\tau = \tau_0 + \xi$, where the distribution of the zero-mean variable $\xi$ must be specified.
One can verify \textit{a posteriori} whether these assumptions are reasonable in data. This choice leads to discrete-time Langevin equations for the initial size $x_0(i)$, where $i$ is the cell-cycle index, \begin{equation} x_0(i+1) = f\left(x_0(i),\alpha\right) + \eta(x_0(i),\alpha) \ , \label{Eq:autodiscrete} \end{equation} where the function $f(\cdot)$ specifies cell division control, while $\eta$ is a random noise with mean zero and arbitrary distribution. In particular $f(\cdot)$ is given by \begin{equation} f\left(x_0,\alpha\right) = \frac{1}{2} \int_0^\infty d x \ p( x | x_0, \alpha ) x \ . \label{Eq:fdefinition} \end{equation} Different forms of this function correspond to different kinds of controls on cell division. For instance a perfect sizer (division triggered at an absolute cell size $x^\ast$) corresponds to $f\left(x_0,\alpha\right) = x^\ast/2 $ while an adder (division triggered by a noisy constant added size) is defined by $f\left(x_0,\alpha\right) = (x_0 + \Delta)/2 $. The scaling relation in Eq.~\eqref{Eq:collapse_div} imposes that $f\left(x_0,\alpha\right)/\langle x_0 \rangle_\alpha$ is solely a function of the ratio $x_0/\langle x_0 \rangle_\alpha$. In particular, one can derive a simple relation between $f$ and the hazard rate function, obtaining \begin{equation} \begin{split} & \frac{1}{\langle x_0 \rangle_\alpha} f\left(x_0,\alpha\right) = \\ & = \frac{1}{2} \left( \frac{x_0}{\langle x_0 \rangle} + \int_{\frac{x_0}{\langle x_0 \rangle}}^\infty dy \ \exp\left( -\int_{\frac{x_0}{\langle x_0 \rangle}}^{y} dz\ \tilde{h}\left( z, \frac{x_0}{\langle x_0 \rangle} \right) \right) \right) \ . \label{Eq:fvsh} \end{split} \end{equation} This function can be estimated from empirical data as $f(x_0,\alpha)=\langle x_0(i+1) \rangle_{x_0(i)=x_0}$, the average size at birth of the daughter conditional on the size at birth of the mother.
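The conditional-average estimator just described can be tested on synthetic lineages. The sketch below (with illustrative parameters) simulates a noisy adder in the discrete-time framework of Eq.~\eqref{Eq:autodiscrete} and checks that the inferred $f(x_0)$ follows the adder prediction $(x_0 + \Delta)/2$.

```python
import numpy as np

# Sketch: infer f(x_0) = <x_0(i+1)> conditional on x_0(i) from synthetic
# lineages of a noisy adder (illustrative parameters, not fitted to data).
rng = np.random.default_rng(0)
n, s, delta_mean = 200_000, 0.15, 1.0

def divide(x0):
    """One generation of the adder: daughter size = (x_0 + added)/2."""
    added = delta_mean * rng.lognormal(-0.5 * s**2, s, size=x0.size)
    return 0.5 * (x0 + added)

x0 = np.full(n, 1.0)
for _ in range(20):                  # relax to the stationary state
    x0 = divide(x0)
x0_next = divide(x0)                 # one more generation to condition on

# Bin mothers by initial size; compare the empirical conditional mean with
# the adder prediction f(x_0) = (x_0 + <added>)/2.
edges = np.quantile(x0, np.linspace(0, 1, 11))
idx = np.digitize(x0, edges[1:-1])
max_dev = max(abs(x0_next[idx == b].mean()
                  - 0.5 * (x0[idx == b].mean() + delta_mean))
              for b in range(10))
print(round(max_dev, 4))
```

The same binning procedure applied to experimental lineages yields the empirical curves of Fig.~\ref{fig:data}B.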
Fig.~\ref{fig:data}B reports $f(x_0/\langle x_0 \rangle_\alpha)/\langle x_0 \rangle_\alpha$ in empirical data for different growth conditions, experiments, and strains, showing the expected collapse. Considering the discrete framework, we can write an equivalent process for the initial logarithmic size $q$, which, after having imposed the constraints given by the scaling of the stationary distribution, reads \begin{equation} q_0(i+1) = \bar{q}_\alpha + g\left( q_0(i) - \bar{q}_\alpha \right) + \xi( q_0(i) - \bar{q}_\alpha ) \ , \label{Eq:autodiscreteq} \end{equation} where $\bar{q}_\alpha = \log(c \langle x_0 \rangle_\alpha / x^\ast)$, with $c$ being an arbitrary constant, and $g(\cdot)$ specifies cell division control in log-space, analogous to $f(\cdot)$ in Eq.~\eqref{Eq:autodiscrete}. The noise term is again drawn from a zero-mean distribution. Also in this case we can write explicitly the form of $g$ given a hazard rate function $h_d(\cdot)$ (see Appendix~\ref{app:hazardlin}). The function can be estimated from empirical data by evaluating $g(\Delta q)=\langle q_0(i+1) - \bar{q}_\alpha \rangle_{q_0(i) - \bar{q}_\alpha=\Delta q}$. The two functions $f(\cdot)$ and $g(\cdot)$, appearing in Eq.~\eqref{Eq:autodiscrete} and~\eqref{Eq:autodiscreteq}, are interchangeable. Both expressions, once defined, correspond unequivocally to a specific division control mechanism. To obtain the hazard rate function, one must specify the distribution of the noise terms. Since in empirical data the initial and final size are approximately lognormally distributed~\cite{Kennard2016}, the steady-state distribution of $q$ can be well approximated by a Gaussian.
It is therefore reasonable to assume that the distribution of the noise is Gaussian itself \begin{equation} \Delta q_0(i+1) = g\left( \Delta q_0(i) \right) + \sigma\left( \Delta q_0(i) \right) \xi \ , \label{Eq:autodiscreteq2} \end{equation} where $\xi$ in this expression is a Gaussian random variable of zero mean and unit variance and $\sigma(\cdot)$ a proper function of $\Delta q_0(i) = q_0(i) - \bar{q}_\alpha$. Under this assumption, we obtain (see Appendix~\ref{app:hazardlin}) \begin{displaymath} h_d(x,x_0,\alpha) = \frac{1}{x} g_\sigma \left( \frac{ \log(x/\langle x_0\rangle_\alpha) - g\left( \log(x_0/\langle x_0\rangle_\alpha) \right)}{\sqrt{2} \sigma( \log(x_0/\langle x_0\rangle_\alpha) )} \right) \ , \end{displaymath} where \begin{displaymath} g_\sigma(y) = \frac{2}{\sqrt{2\pi}\sigma} \frac{ \exp (-y^2) }{ 1 - \text{Erf}(y) } \ , \end{displaymath} and $\text{Erf}(\cdot)$ is the error function. Note that, however, for unspecified $g(\cdot)$ and $\sigma(\cdot)$, the stationary distribution of this process is not a Gaussian in general. Our direct calculation of the hazard rate $h_d$ from the control function $g$ links the discrete-time to the continuous-time formalism through a quantitative map. We will now focus on the parameterization defined in Eq.~\eqref{Eq:autodiscreteq2}, showing how it can be reduced to a single relevant parameter, using a perturbative approach. \begin{figure}[tbp] \centering \includegraphics[width=0.38\textwidth]{Fig2.pdf} \caption{Unified framework of division control and comparison with data. A: Division control function $g(\cdot)$, for an adder model (purple solid line) and linearized models with different values of the control parameter $\lambda$ (cyan, grey, magenta solid lines). This function defines the control mechanism in the model (Eq.~\eqref{Eq:autodiscreteq}). The adder mechanism is nearly linear and closest to the linearization with $\lambda \sim 0.5$~\cite{Amir2014}.
B: Comparison between data (symbols) and the linearized discrete-time Langevin framework, for different values of the single control parameter $\lambda$. The roughly linear scaling of the symbols suggests that the data are close to a simple discrete-Langevin scenario, and the collapse across different strains and conditions confirms the results of Fig.~\ref{fig:data}. Values of $\lambda$ around $1/2$ well reproduce the data, but deviations are visible. Data (from ref.~\cite{Kennard2016} and~\cite{Wang2010a}) refer to different strains of dividing \emph{E.~coli} cells grown in different conditions (see Fig.~\ref{fig:data}).} \label{fig:modeldata} \end{figure} \subsection{A perturbative approach identifies the conditions for a steady-state size distribution (homeostasis)} \label{sec:expansion} We now consider a general perturbative expansion around the mean initial logarithmic size of the population, which unveils the relations between different simplified descriptions, and extends the approach of ref.~\cite{Amir2014}. As we will see, it is possible to assign a simple interpretation to the coefficient of the expansion and use it to formulate physical considerations and estimates on the possible division control mechanisms. This kind of expansion is justified by empirical observations, as follows. The collapse of the initial-size distributions implies that the standard deviation $\sigma_{x_0}$ (which depends on the condition through the population growth rate $\alpha$) scales as $k \langle x_0 \rangle_\alpha$, where $k$ is a constant independent of $\alpha$. The constant $k$, which is the coefficient of variation, has empirical values around $0.15$~\cite{Taheri-Araghi2015}. Such a value implies that the fluctuations of sizes around their mean are small and therefore suggests expanding size fluctuations around the mean.
Instead of expanding the feedback control in powers of the ratio $\sigma_{x_0}/\langle x_0 \rangle_\alpha$, we will focus on logarithmic size, i.e., the previously introduced variable $q$. Starting from Eq.~\eqref{Eq:autodiscreteq2}, one can expand $g(\cdot)$ and $\sigma(\cdot)$ around $q_0(i) =\bar{q}_\alpha$. In this case, taking the first order of the expansion, we obtain \begin{equation} \Delta q_0(i+1) = (1- \lambda) \Delta q_0(i) + \sigma \xi \ , \label{Eq:autodiscreteq22} \end{equation} where $\lambda = 1- g'(0)$ and $\sigma = \sigma(0)$. The process defined by Eq.~\eqref{Eq:autodiscreteq22} is a discrete-time Langevin equation in a quadratic potential with stiffness $\lambda$, and thus can have multiple physical analogs. Its exact solution is a Gaussian distribution of $q_0$ with mean $\bar{q}_\alpha$ and variance $\sigma_q^2=\sigma^2/(\lambda(2-\lambda))$ (see Appendix~\ref{app:linearized} and ref.~\cite{Amir2014}), which corresponds to a lognormal distribution of $x_0$. This relation can be considered as a discrete version of a fluctuation-dissipation theorem, as it connects the fluctuations of cell size $\sigma_q$ with the strength of the response $\lambda$ to deviations of size from the mean. Eq.~\eqref{Eq:autodiscreteq22} can be solved exactly. Starting from an arbitrary initial condition, we derive the distribution of sizes after any number of generations (Appendix \ref{app:linearized}). In particular, it is possible to calculate how fluctuations of size are dampened in time. Starting at generation $0$ with an initial size corresponding to $q_0(0)$, the expected size at birth after $n$ generations is \begin{equation} \langle \Delta q_0(n) \rangle = \Delta q_0(0) \left( 1 - \lambda \right)^n \ . \label{Eq:timelinear} \end{equation} It is simple to see from this expression that, as expected, homeostasis is possible only if $|1-\lambda| < 1$, and that $1 < \lambda < 2$ would lead to oscillatory sizes around the mean~\cite{Tanouchi2015}.
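This fluctuation-dissipation relation is easy to verify numerically. The sketch below (with illustrative parameter values) simulates the linearized process of Eq.~\eqref{Eq:autodiscreteq22} and compares the empirical stationary variance with $\sigma^2/(\lambda(2-\lambda))$, and the damping of an initial perturbation with $(1-\lambda)^n$.

```python
import numpy as np

# Sketch: verify the stationary variance sigma^2 / (lambda (2 - lambda)) and
# the geometric damping (1 - lambda)^n of the linearized process
# (illustrative parameter values).
rng = np.random.default_rng(1)
lam, sigma, n = 0.5, 0.1, 100_000

dq = np.zeros(n)
for _ in range(60):                      # relax to the stationary state
    dq = (1 - lam) * dq + sigma * rng.standard_normal(n)
var_emp = dq.var()
var_th = sigma**2 / (lam * (2 - lam))

dq = np.full(n, 1.0)                     # a perturbed initial condition
for _ in range(5):
    dq = (1 - lam) * dq + sigma * rng.standard_normal(n)
mean_emp = dq.mean()                     # expected: (1 - lam)**5

print(round(var_emp, 5), round(var_th, 5), round(mean_emp, 4))
```

Both checks follow directly from iterating the linear recursion, with no fitting involved.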
The role played by $\lambda$ is therefore to set the correlation time-scale, measured in generations. Eq.~\eqref{Eq:autodiscreteq22} and~\eqref{Eq:timelinear} show that a steady-state size distribution (``size homeostasis'') is possible if $|1-\lambda|=|g'(0)| < 1$ (see Fig.~\ref{fig:stability}A). This is a necessary condition, but not a sufficient one, as it only implies local stability of the deterministic solution (Appendix \ref{app:stationarity}). While values of $\lambda$ between $1$ and $2$ guarantee homeostasis, current data suggest that they are not biologically relevant. A value larger than one would in fact correspond to an extra correction of the size, which controls fluctuations in an oscillatory way. In this case, if a cell has a size larger than the average, the daughter will have on average a size smaller than the average (but closer to it), the grand-daughter a size again larger than the average, and so on. Since this behavior is not observed in experiments, we restrict our analysis to the case $0 < \lambda < 1$. Considering the next orders in the expansion, one can obtain precise criteria on the conditions leading to homeostasis. When only the deterministic part of Eq.~\eqref{Eq:autodiscreteq} is considered (i.e. $\sigma=0$), it is possible to show that the equilibrium is unique and globally stable if $|g(\Delta q)| < |\Delta q|$. If the noise is additive (i.e., $\sigma(\Delta q)$ is independent of $\Delta q$), then global stability implies that the process is stable and always reaches a stationary distribution. On the other hand, what is relevant for homeostasis is that the basin of attraction determined by $g(\cdot)$ is large enough compared to the typical fluctuations. The basin of attraction of the deterministic equation corresponds to the values of $\Delta q$ such that $|g(\Delta q)| < |\Delta q|$ (see Fig~\ref{fig:stability}B).
When the noise in Eq.~\eqref{Eq:autodiscreteq22} is not additive (i.e., $\sigma(\Delta q)$ depends on $\Delta q$), a general condition is unknown. A perturbative approach gives conditions on the parameters of the expansion that determine homeostasis. For instance, considering the first orders in the expansion of $\sigma\left( \Delta q_0(i) \right) $ around $0$, we obtain that the variance of the initial logarithmic size distribution is finite only if $\sigma'(0) < \sqrt{\lambda(2-\lambda)}$ (see Appendix~\ref{app:linearized}). \begin{figure}[tbp] \centering \includegraphics[width=0.38\textwidth]{Fig3.pdf} \caption{Conditions for stability of the deterministic part of cell-size control. A: Under linear control with negligible noise, $\Delta q(i+1) = (1-\lambda) \Delta q$. If $|1-\lambda| > 1$ (green line) the system is unstable, while if $|1-\lambda| < 1$ (red line) the system is stable. The black line represents the marginally stable case $\Delta q(i+1) = \Delta q$. By using a similar argument (see Appendix~\ref{app:stationarity}), it is possible to show that if $|g(\Delta q)| < |\Delta q|$ for any $|\Delta q|$, then the system is globally stable. B: In the more general case of a locally stable point, the basin of attraction can be obtained as the set of values of $\Delta q$ such that $|g(\Delta q)| < |\Delta q|$. } \label{fig:stability} \end{figure} \subsection{Inequalities defining the relevant parameters given a set of experimental observations. } Through simple quantitative estimates, this section derives general constraints on the parameters that can be resolved given the number of available observations. The above calculations unequivocally define $\lambda$ as the most important parameter at play, together with another scale defining the width of the noise. A further question is whether this is effectively the only relevant parameter.
In order to answer this question, one has to consider higher order terms in the expansion, and ask when those terms play a role, and whether they can be identified from data. In fact, the number of available observations defines whether a truncated expansion description is useful to describe the data. The expansion around $\langle x_0 \rangle$ (Eq.~\eqref{Eq:autodiscreteq2}) is effective as long as the fluctuations of size are sufficiently small. In order to estimate precisely the regime where the approximation is valid, we include the second order in the expansion \begin{equation} q_0(i+1) = \bar{q}_\alpha + (1- \lambda) \left( q_0(i) - \bar{q}_\alpha \right) + \gamma \frac{\left( q_0(i) - \bar{q}_\alpha \right)^2}{2} + \sigma \xi \ , \label{Eq:autodiscreteq3} \end{equation} where $\gamma=g''(0)$, the second derivative of the control function. We set out to evaluate the difference between this process and the one defined by Eq.~\eqref{Eq:autodiscreteq22}. The quadratic term is measurable from stochastic trajectories if it is sufficiently large compared to stochastic fluctuations. Thus, we evaluate the distribution of $q_0(i+1) - \bar{q}_\alpha - (1-\lambda) \left( q_0(i) - \bar{q}_\alpha \right)$ and ask whether, for a given sample size and value of $q_0(i)$, its mean is significantly different from zero or not. The error on the mean is given by the standard deviation divided by the square root of the sample size. Hence, the quadratic term is detectable if \begin{equation} \frac{\sigma}{\sqrt{T(q)}} < \gamma \frac{\left( q_0(i) - \bar{q}_\alpha \right)^2}{2} \ , \label{Eq:autodiscreteq4} \end{equation} where $T(q)$ is the number of cells with initial size $q$.
Since the distribution of $q$ is approximately Gaussian (in the limit of $\gamma\approx0$), the number of cells with initial size in a bin of width $\Delta q$ around $q$ is estimated by \begin{equation} T(q) = N \frac{\exp\left( - \frac{(q-\bar{q}_\alpha)^2}{2 \sigma_q^2 } \right)}{\sqrt{2\pi \sigma_q^2}} \Delta q \ , \label{Eq:autodiscreteq5} \end{equation} where $N$ is the total number of cells. The bin size must be smaller than the standard deviation of the distribution, and we can parameterize it by defining $\Delta q=\epsilon \sigma_q$. The constraint on the total number of cells measured in order to recognize higher-order terms then reads \begin{equation} N > \frac{\sqrt{2\pi}\sigma_q}{\epsilon \sigma_q} \frac{\sigma^2}{\gamma^2} \frac{4}{(q-\bar{q}_\alpha)^4} \exp\left( \frac{(q-\bar{q}_\alpha)^2}{2\sigma_q^2 } \right) \ . \label{Eq:autodiscreteq6} \end{equation} The above expression reveals an important tradeoff (illustrated in Fig~\ref{fig:Nmin}A). Choosing a value of $q$ close to the mean for the bin gives a large sample size in the data, but the effect to be measured is then very small, and increasingly close to the experimental resolution. Conversely, choosing a value of $q$ very far from the mean corresponds to larger effects, but needs large sample sizes to be measured. The optimal choice of $q$ to evaluate deviations from the linear model in the data is the one that minimizes the right-hand side of Eq.~\eqref{Eq:autodiscreteq6}. We have therefore \begin{equation} \begin{split} N > & \frac{1}{\epsilon} \frac{1}{\sigma_q^2 \gamma^2 } \lambda(2-\lambda) \min_t \frac{4\sqrt{2\pi}}{t^4} \exp\left( \frac{t^2}{2 } \right) \\ & \approx \frac{4.6}{\epsilon} \lambda(2-\lambda) \frac{1}{\gamma^2\sigma_q^2} = \frac{4.6}{\epsilon} \lambda(2-\lambda)\frac{1}{\gamma^2\log\left(1+\cv_{x}^2\right)} \ , \label{Eq:autodiscreteq7} \end{split} \end{equation} where $\cv_x$ is the coefficient of variation of the distribution of $x$.
In the available data, this is around $0.15$~\cite{Kennard2016,Taheri-Araghi2015}. Considering $\epsilon = 0.1$ and assuming $\lambda(2-\lambda)$ to be a number of order $1$ (which should be the case if $\lambda \approx 1/2$), we obtain $N \approx 1.5 \cdot 10^3 / \gamma^2$. The factor $\gamma^2$, which is set by the second derivative of $g(\cdot)$, plays a very important role here, as its value sets the scale at which specific mechanisms can be distinguished. \section{Interpretability of mechanisms of division control. } \label{sec:interpret} This section relates the perturbative expansion and its interpretation to mechanisms of division control discussed in the literature, given the available data. We specifically consider the case of the constant added size model, proposed as a mechanism of division control across different conditions, obtaining the parameters of its expansion. \subsection{The ``concerted control'' mechanism} Eq.~\eqref{Eq:autodiscreteq22} provides a generic description of division control for small fluctuations. When it is interpreted in terms of mechanisms of control, it corresponds to the simplest ``concerted control'' model, i.e. to a mix of sizer and timer behavior (as in the framework of ref.~\cite{Amir2014}). Specifically, since the time between divisions is $(q_f - q_0)/\alpha$, setting $\tau = \tau_0 + \xi$, where $\xi$ are Gaussian, independent, zero-mean random variables, one obtains \begin{equation} \tau_0 = (1-\lambda) \frac{\log 2}{ \alpha} + \frac{\lambda}{ \alpha } \log\frac{x^\ast_\alpha}{x_0} \ . \label{Eq:ariel} \end{equation} The above equation can be interpreted as implementing the control on cell division as a mixture of timer and sizer behavior~\cite{Amir2014}. Indeed, the doubling time is set by a convex combination with mixing parameter $\lambda$ of a fixed time (set by the inverse mean growth rate $1/\alpha$) and a perfect sizer (set by a limit threshold size $x^\ast$ for cell division).
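A few lines of code make the timer-sizer interpolation explicit. With the illustrative choices $\alpha=1$ and $x^\ast_\alpha = 2\langle x_0 \rangle$, the deterministic generation map implied by Eq.~\eqref{Eq:ariel} contracts logarithmic size deviations by exactly the factor $1-\lambda$.

```python
import math

# Sketch of the concerted timer-sizer control of Eq. (ariel), with
# illustrative values alpha = 1 and x* = 2 <x_0>.
alpha, x_star = 1.0, 2.0
x0_mean = x_star / 2.0

def tau0(x0, lam):
    """Mean doubling time: convex mix of timer (log 2 / alpha) and sizer."""
    return (1 - lam) * math.log(2) / alpha + (lam / alpha) * math.log(x_star / x0)

def next_size(x0, lam):
    """Deterministic generation map: grow for tau0, then halve at division."""
    return 0.5 * x0 * math.exp(alpha * tau0(x0, lam))

x0 = 1.3                                  # a mother born above average size
dq = math.log(x0 / x0_mean)
ratios = {}
for lam in (0.0, 0.5, 1.0):
    ratios[lam] = math.log(next_size(x0, lam) / x0_mean) / dq
    print(lam, round(ratios[lam], 3))     # the deviation shrinks by 1 - lam
```

At $\lambda=0$ (pure timer) the deviation is passed on unchanged, while at $\lambda=1$ (pure sizer) it is erased in a single generation.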
A pure sizer model, recovered for $\lambda=1$, would set this conditional interdivision time as $\tau_0 = \frac{1}{\alpha } \log\frac{x^*_\alpha}{x_0}$, while a pure timer model ($\lambda=0$) defines $\tau_0$ as $\log 2 /\alpha$. It is straightforward to show that, in the small noise limit, $x^\ast_\alpha = 2\langle x_0\rangle = \langle x_f\rangle$. The concerted control is a consequence of the combination of these two decision processes, set by the parameter $\lambda$. As shown in section~\ref{sec:expansion}, this process leads to stationary distributions of sizes if $0< \lambda < 2$. Our previous calculations show that such an effective cell-cycle model (equivalent to the approach introduced in ref.~\cite{Amir2014}) can be characterized as the auto-regressive model giving a discrete-time Langevin equation with harmonic potential. As discussed above, this has a number of consequences, including a strict relation between the noise in $\tau$ and in $\log x_0$, and the fact that the characteristic time (in generations) for damping of fluctuations and perturbations (fluctuation-dissipation theorem) is $1/\lambda$. As shown in section~\ref{sec:expansion} and previously suggested~\cite{Amir2014}, the linear dependency of $\tau_0$ on $\log x_0$ can be seen as a first-order approximation of a generic function relating the doubling time to the initial size. Thus, nearly all models where one of the two terms in Eq.~\eqref{Eq:ariel} is not strictly null are expected to behave similarly to this concerted control model as long as the probed initial cell sizes $x_0$ are close to their mean $\langle x_0 \rangle_\alpha$, or equivalently as long as the noise in $ \alpha \tau$ is small. \subsection{The constant added size mechanism} We now consider the constant added size mechanism, written as a discrete Langevin dynamics on the logarithmic initial size $q$.
The deterministic part is defined by \begin{equation} g\left( q - \bar{q}_\alpha \right) = q - \bar{q}_\alpha + \int d z \ F\left( z \right) \log\left( \frac{c + z e^{\bar{q}_\alpha- q} }{2} \right) \ , \label{Eq:adder_SJ} \end{equation} where $c$ is determined by imposing $g(0)=0$ and $F(\cdot)$ is the probability distribution of the relative fluctuations of the added size around its mean (Appendix \ref{app:adder}). Expansion of this function gives $\lambda = 1/2$, consistent with previous results~\cite{Amir2014}, which guarantees stationarity of the process. The second-order term gives $\gamma=1/4$. Since all the parameters are fixed, one can write Eq.~\eqref{Eq:autodiscreteq7} for the ``adder'' model as \begin{equation} N > \frac{55}{\epsilon} \frac{1}{\log\left(1+\cv^2_x\right)} \ . \label{Eq:autodiscreteq_adder} \end{equation} Realistic values of $\cv_x$ are around $0.15$~\cite{Taheri-Araghi2015}. The remaining parameter $\epsilon$ defines how coarse the binning is. Since it must by definition be a small number (if it were not, one would have to consider other sources of error), we shall assume $\epsilon = 0.1$. In this case we would obtain that at least $N \approx 50000$ cell divisions are required to distinguish between the adder model and any other model with the same first-order expansion. This estimate sets therefore a threshold on the number of cells that one needs to measure to have enough statistical power to observe non-linearity in the size control function $g(\Delta q)$. Fig.~\ref{fig:Nmin} compares the currently available datasets with the estimated threshold, showing that all of them lie below it. A linearization of $g(\Delta q)$ should be, for most available experimental data sets, sufficient to describe the main observations. \begin{figure}[tbp] \centering \includegraphics[width=0.4\textwidth]{Fig4.pdf} \caption{Estimated threshold of detectability of non-linear contributions to cell-division control.
A: The trade-off between two competing terms determines an optimal cell-size fluctuation to identify cell size control. On one hand, large deviations of size from the average correspond to stronger corrections and make differences between mechanisms more detectable. On the other hand, fewer cells have large fluctuations, reducing the statistical power and increasing the sampling noise. The optimal fluctuation value is the one that minimizes the error on the inferred cell-size control mechanism. Our calculations (Eq.~\eqref{Eq:autodiscreteq6}) show that, when fluctuations are rescaled by the variance of the distribution, the optimal value is independent of the cell-size control mechanism, as shown in the plots of the contributions described by Eqs~\eqref{Eq:autodiscreteq4} (discriminatory power, green line) and \eqref{Eq:autodiscreteq6} (sampling level, blue line). B: Estimated detection threshold for $\epsilon=0.1$ (gray line) and data points corresponding to the available datasets. All the available data sets lie below the threshold of detectability, suggesting that a linear model of division control is sufficient to describe these data. } \label{fig:Nmin} \end{figure} To make the result on the estimated threshold for detectability more concrete, we consider explicitly the case of the adder mechanism and its distinguishability from the linearized model (Eq~\eqref{Eq:autodiscreteq22}). By definition, the adder mechanism predicts that the conditional mean (and distribution) of the added size, given initial size, is independent of the initial size. While the first-order expansion of the framework defined here with $\lambda=1/2$ (and analogously for the model in ref.~\cite{Amir2014}) does not follow this functional trend (i.e., the next orders in the expansion are different), it shows a very small difference with the adder model in the empirical range of sizes, which might not be discerned with the sampling of available empirical data.
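The numerical constants entering the detectability threshold can be checked directly. The sketch below minimizes the bound $4\sqrt{2\pi}\,t^{-4}e^{t^2/2}$ over the rescaled deviation $t$, recovering the optimum at $t=2$ and the constant $\approx 4.6$ of Eq.~\eqref{Eq:autodiscreteq7}, and then evaluates the adder prefactor $4.6\,\lambda(2-\lambda)/\gamma^2 \approx 55$ of Eq.~\eqref{Eq:autodiscreteq_adder}.

```python
import math

# Check of the constants in the detectability threshold: minimize the
# sample-size bound over the rescaled deviation t = (q - q_bar)/sigma_q,
# then evaluate the adder prefactor with lambda = 1/2, gamma = 1/4.
def bound(t):
    return 4.0 * math.sqrt(2.0 * math.pi) / t**4 * math.exp(t**2 / 2.0)

ts = [0.01 * i for i in range(10, 600)]      # scan t in [0.1, 6)
t_opt = min(ts, key=bound)
c = bound(t_opt)                             # the constant ~4.6

lam, gamma = 0.5, 0.25                       # adder values derived in the text
prefactor = c * lam * (2.0 - lam) / gamma**2 # the ~55 of the adder bound
print(round(t_opt, 2), round(c, 2), round(prefactor, 1))
```

The optimal deviation of two standard deviations is thus a property of the Gaussian sampling alone, independent of the control mechanism under test.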
In order to further support this point, we employed direct numerical simulations with different numbers of realizations (mimicking experimental sampling levels). As explained above, the most complete information on the process is the transition probability $p(x_f|x_0,\alpha)$. For an adder, this probability depends only on the difference $x_f - x_0$, i.e. $p(x_f-x_0|x_0)$ is independent of $x_0$, or, in other words, the conditional distribution of added size given initial size does not depend on the initial size. The fact that $p(x_f-x_0|x_0)$, obtained for different $x_0$, collapses has been interpreted in ref.~\cite{Taheri-Araghi2015} as evidence in favor of the adder mechanism of division control. To gain more insight into this conclusion, we simulated the first-order process (Eq.~\eqref{Eq:autodiscreteq22}) with $\lambda=1/2$. Fig.~\ref{fig:adder_l05}A reports the binned histograms of rescaled added sizes for cells with different initial sizes, using bin sizes similar to those of ref.~\cite{Taheri-Araghi2015}; the very good collapse shows that the difference between the probability distributions is barely detectable at the available level of sampling. We quantified the error on the collapse by measuring the average $L_2$ distance between all the pairs of curves plotted in Fig.~\ref{fig:adder_l05}A. This error, measured for different values of $\lambda$ and different sample sizes, can be compared with the expected error due to fluctuations in the adder, also estimated as the average $L_2$ distance between distributions of the added size given the initial one. As expected, Fig.~\ref{fig:adder_l05}B shows that the error is minimal when $\lambda = 0.5$. Interestingly, the measured error does not depend on the sample size, while the expected error from the adder model decreases as the number of measured cells increases.
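A minimal version of this collapse test can be sketched as follows (with illustrative noise values and quartile-based conditioning rather than the exact binning of the original analysis): simulate the linearized process for a given $\lambda$ and compare the conditional added-size distributions of small-born and large-born cells.

```python
import numpy as np

# Sketch of the collapse test (illustrative parameters): for the linearized
# process, compare the conditional added-size distributions of small-born
# and large-born cells. For lambda = 1/2 the distributions nearly collapse,
# as for an adder, while a strong sizer (lambda close to 1) does not.
rng = np.random.default_rng(2)

def collapse_error(lam, sigma=0.15, n=50_000, nbins=30):
    dq = np.zeros(n)
    for _ in range(30):                       # relax to the stationary state
        dq = (1 - lam) * dq + sigma * rng.standard_normal(n)
    dq_next = (1 - lam) * dq + sigma * rng.standard_normal(n)
    x0 = np.exp(dq)                           # sizes in units of <x_0>
    added = 2.0 * np.exp(dq_next) - x0        # x_f - x_0 = 2 x_0(i+1) - x_0(i)
    lo = x0 < np.quantile(x0, 0.25)           # small-born quartile
    hi = x0 > np.quantile(x0, 0.75)           # large-born quartile
    edges = np.linspace(added.min(), added.max(), nbins)
    h_lo, _ = np.histogram(added[lo], bins=edges, density=True)
    h_hi, _ = np.histogram(added[hi], bins=edges, density=True)
    return np.sqrt(np.mean((h_lo - h_hi)**2))  # L2 distance between curves

err_adderlike = collapse_error(0.5)
err_sizerlike = collapse_error(0.9)
print(err_adderlike < err_sizerlike)
```

The residual distance at $\lambda=1/2$ is dominated by sampling noise, which is the reason the test requires the sample sizes estimated above.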
Fig.~\ref{fig:adder_l05}B shows that the two error measures become comparable when $N$ is between $10000$ and $20000$, which is around the same order of magnitude as the number of cells measured in ref.~\cite{Taheri-Araghi2015}. Thus, the test presented in Fig.~\ref{fig:adder_l05} might work with the existing sampling levels. \begin{figure}[tbp] \centering \includegraphics[width=0.4\textwidth]{Fig5.pdf} \caption{Detectability of the adder mechanism from simulated models and direct test. Plotted data refer to simulations of models with realistic parameters and sampling of cells. A: Conditional distribution of added size given initial size for different initial sizes (different colors) obtained with the linearized model (Eq.~\eqref{Eq:autodiscreteq3}) with $\lambda = 0.5$. The linearized model reproduces visually the collapse expected for the adder model. The simulations conservatively consider a coefficient of variation of the added size equal to $0.3$ (which is larger than the observed values~\cite{Taheri-Araghi2015}) and a total number of cells $N=50000$. B: Error test on the collapse of the distribution of added size (estimated as the average $L_2$ distance between all pairs of curves) for different values of $N$ and $\lambda$. The horizontal dashed lines represent the expected error in the collapse in the adder model due to fluctuations. For these parameter values, the error in the collapse for the model with $\lambda = 0.5$ starts to be relevant when $N \sim 10000$. This test may be applied to empirical data: in order for an adder to be detectable, the solid line should stay above the dashed line. } \label{fig:adder_l05} \end{figure} \section{Discussion} Our approach provides a map between an autoregressive discrete-time formalism of cell division control and a continuous-time description based on hazard-rate functions, showing the impact on both formalisms of the observed scaling behavior of cell sizes and doubling times.
This map connects the approaches used in refs.~\cite{Osella2014a,Jun2015,Kennard2016} with those of refs.~\cite{Amir2014,Campos2014,Soifer2016}, and leads us to propose a unified framework (with discrete-time Langevin equations) embracing both formalisms to develop and explore effective models of cell division. The framework has the additional advantage of showing how parameter-poor models describing different kinds of cell division control are possible for multiple mechanisms. The use of a discrete-time Langevin formalism for the logarithmic size leads to a simple physical analogy with a fluctuating system and guides the interpretation of the model parameters. The same formalism also enables the applicability of familiar concepts in the statistical physics of fluctuating systems, such as correlation, response, and fluctuation-dissipation relations. We anticipate that such concepts will become useful for future studies of dividing cells in fluctuating environments. An important extension of the present framework should incorporate growth fluctuations. Recent studies~\cite{Harris2016,Iyer-Biswas2014} show clear indications that the assumption of constant growth rate $\alpha$ is an oversimplification, and that size homeostasis needs to be understood by addressing contributions from both growth and cell division. Possibly the most relevant result from recent experimental work is the existence of a mechanism governed by a single size scale. The ``microscopic'' origin of this length scale is a relevant question that is not solved by any of the mechanisms proposed in the literature. By systematically exploring a perturbative expansion of the model, we show how the unified framework defined here can lead to equations similar to the ones introduced in ref.~\cite{Amir2014}, with the advantage of elucidating the direct link with the hazard rate function. The main difference is found in the dependence of the noise term on the growth rate $\alpha$.
In the setting defined here, there is no dependency, while ref.~\cite{Amir2014} assumes one (see Sec.~\ref{sec:interpret}). The importance of this difference is that in the setting defined here the distributions of division times collapse as observed experimentally. Furthermore, the more general framework presented here can be used to compute the next orders of the expansions, and to study hierarchically the mechanisms leading to homeostasis. The perturbative approach also leads to relevant insight on the ability to distinguish different control mechanisms from data. Overall, our results indicate that a linearization of the control function $g(\Delta q)$ should be, for most currently available experimental data sets, sufficient to describe the main observations. Thus, for most practical purposes, the physical analogy with the discrete-time version of harmonic fluctuations for logarithmic size is valid. The bounds on sampling levels that we derive analytically estimate the number of cell divisions that need to be measured in order to evaluate higher-order nonlinear ``anharmonic'' terms, which are necessary to pinpoint precise mechanisms. Importantly, these calculations show that there is an optimal choice of cell sizes to test the deviations. On the one hand, testing sizes that deviate a great deal from the average will show stronger corrections, and make differences between mechanisms more detectable. On the other hand, fewer cells have large fluctuations, reducing the statistical power and increasing the sampling noise of such measurements. The optimal fluctuation value is the one that minimizes the error on the inferred cell-size control mechanism. Importantly, the calculations show that, when fluctuations are rescaled by the variance of the distribution, the optimal value is independent of the cell-size control mechanism.
Thus, we expect that different division control functions should be distinguishable without requiring an ad hoc number of observations. Comparing the detection threshold with the number of observations made in available studies, we find that the number of measured cells should typically be insufficient to draw any strong conclusions beyond the linear approximation. By applying our methods, we show that at the current sampling levels, it might be very hard to distinguish a specific mechanism such as the adder from linear response (compatible with many scenarios), even with sophisticated tests such as the collapse of the conditional distribution of added size. This poses important caveats on the possibility of determining a specific mechanism from specific data sets and on the interpretation of measured trends as ``microscopic'' mechanisms of size control. We propose the method developed in Fig.~\ref{fig:adder_l05}B as an effective way, applicable to empirical data, to test for deviations from the behavior of the linearized model, which should work with sampling levels that can be attained experimentally with existing approaches. We are currently working on extending this approach using Bayesian statistics and producing reliable statistical estimators of the relevant parameters ($\lambda$ and $\gamma$) of the division control function. \section{Acknowledgements} This work was supported by the International Human Frontier Science Program Organization, grant RGY0070/2014. \bibliographystyle{unsrt}
\section{Introduction} Quantum computers can perform certain tasks more efficiently than classical computers \cite{Shor:1997:PAP:264393.264406,Bennett1997}. Furthermore, the results and limitations of realistic quantum computers give us insight into the fundamentals of quantum mechanics. Quantum computation has thus attracted great interest from the research community. Recently IBM has released access to its 5-qubit quantum computer to the scientific community under the moniker ``IBM Quantum Experience''\cite{IBM}. The IBM Quantum Experience provides access to a 5-qubit quantum computer with a limited set of gates, described by IBM as ``the world's first quantum computing platform delivered via the IBM Cloud''. A body of research focuses on properties of 5-qubit systems \cite{Touchette2010,Das2004}, and much of it has recently been released or updated to rely upon results running on IBM's Quantum Experience \cite{Devitt2016,Alsina2016,2016arXiv160508922R,2015arXiv151100267B,2016arXiv160501351T}. Understanding the capabilities of this infrastructure, and developing and debugging algorithms for deployment on it, calls for analyzing and simulating 5-qubit systems in detail. A variety of existing software toolkits are useful in quantum computation study and research, ranging from the general QuTIP \cite{Johansson2012,Johansson2013}, available in Python, to more specialized toolkits available in a variety of scientific computing languages: QUBIT4MATLAB (Matlab)\cite{Toth2008} and QCMPI (Fortran 90)\cite{Tabakin2009} provide rapid evaluation of quantum algorithms, including noise analysis, for a large number of qubits by exploiting parallel computing. The FEYNMAN (Maple) \cite{Radtke2005,Radtke2006,Radtke2007,Radtke2008} program offers interactive simulations on $n$-qubit quantum registers without restrictions other than the available memory and time resources of the computation.
The QDENSITY (\textit{Mathematica})\cite{Julia-Diaz2009} program provides commands to create and analyze quantum circuits. The {\tt libquantum} package (C) provides the ability to simulate a variety of processes based on its implementation of a quantum register\cite{libquantum}, and Qinf (Maxima) allows the manipulation of instances of objects that appear in quantum information theory and quantum entanglement\cite{qinf}. A detailed comparison between quantum simulators is beyond the scope of this work. Those that are available use various computer languages, the majority in C/C++, and have different focuses, ranging from particular algorithms to generalisability or scalability. A more comprehensive list of available tools for work in quantum computation is given on the Quantiki wiki\cite{availablesims}. In this paper, I describe \textbf{Quintuple}, an open-source Python module allowing both simulation of all operations available via IBM's Quantum Experience hardware and programming for a 5-qubit quantum computer at a high level of abstraction\cite{quintuplegit}. \textbf{Quintuple} allows the researcher, educator or student to quickly and repeatedly execute code in a simplified language compatible with execution on the IBM Quantum Experience hardware and/or in pure Python compatible with other Python code and libraries. By focusing \textbf{Quintuple} on the uniquely available IBM Quantum Experience hardware, it can be deployed on the platform without additional configuration. By keeping the implementation to just those elements necessary to perform an ideal simulation of IBM's 5-qubit quantum computer, rather than relying on a much larger, fuller-featured toolkit, and by providing an open-source object-oriented implementation in a widely used high-level language, Python, it is hoped this module will be useful to more novice programmers and/or those less experienced in the intricacies of quantum computation.
The core of \textbf{Quintuple} is only 675 lines of Python code, and \textbf{Quintuple} additionally provides over 40 example programs with expected results, including examples of Grover's Algorithm and the Deutsch-Jozsa algorithm, for execution within \textbf{Quintuple} or on the IBM Quantum Experience. In \textbf{Section \ref{quantuminf}} a brief overview is given of the terminology and mathematics necessary to follow the operation of \textbf{Quintuple}. In \textbf{Section \ref{quintuple}}, the \textbf{Quintuple} code and the APIs to design and test 5-qubit quantum algorithms in simulation and/or on IBM's hardware are introduced. In \textbf{Section \ref{usage}}, through the lens of an algorithm which swaps the state of two qubits, various modes of usage of the APIs presented in \textbf{Section \ref{quintuple}} are presented. \textbf{Section \ref{conclusion}} provides a summary and outlook for potential future work. \section{Overview of Quantum Information} \label{quantuminf} Here I give a brief primer on the quantum information and computation concepts necessary to describe \textbf{Quintuple}'s implementation and to assist in understanding the IBM Quantum Experience. Knowledge of complex conjugation and basic linear algebra fluency, including matrix multiplication, transpose, trace, and tensor products, is assumed, among other mathematical concepts. Explicit exposition of this formalism and any explanation of the \emph{whys} of quantum mechanics is beyond the scope of this limited overview of quantum information. For a detailed overview of the math and quantum mechanics of quantum information, as well as a lucid exposition of the fundamentals of quantum information in detail, an excellent resource is the canonical textbook of Nielsen and Chuang\cite{nielsen2010quantum}. For a further overview of the simulation of $n$-qubit systems, the overview by Radtke is an excellent supplement to this more limited exposition\cite{Radtke2005}.
A qubit is the quantum generalization of a classical bit. Unlike a classical bit, it can take any value corresponding to a linear superposition of its constituents: formally, two orthonormal eigenstates. Our default choice of basis throughout this manuscript is $$ \{ \ket{0}=\begin{pmatrix}1 \\ 0\end{pmatrix},\ket{1}=\begin{pmatrix}0 \\ 1\end{pmatrix} \}.$$ This multi-purpose notation ($\bra{}$ or $\ket{}$), used throughout this manuscript to represent a quantum state, is called bra-ket or Dirac notation and is standard in quantum mechanics. Without getting into a detailed discussion of the mathematics, one can, simplistically, think of the symbol lying between the $\ket{}$ notation as being a label for the state. Whether the notation is $\ket{symbol}$ vs. $\bra{symbol}$ indicates whether it is represented as a column or a row vector respectively, where $\bra{symbol}$ is the conjugate transpose of $\ket{symbol}$ and vice versa. Thus a generic one-qubit state $\ket{\psi}$ is \begin{equation} \ket{\psi}=a\ket{0}+b\ket{1}. \end{equation} The coefficients $a,b$ are complex numbers, and these complex coefficients provide the representation of $\psi$ in the $\{\ket{0},\ket{1}\}$ basis. The probability of finding $\ket{\psi}$ in state $\ket{0}$ is $\abs{a}^2=aa^*$, where $a^*$ is the complex conjugate of $a$; similarly, the probability of finding $\ket{\psi}$ in state $\ket{1}$ is $\abs{b}^2=bb^*$. These two probabilities normalize to one: $\abs{a}^2 + \abs{b}^2=1$. A single qubit state $\ket{\psi}$ can be physically realized by a variety of mechanisms which correspond to a quantum-mechanical two-state system, for example a spin-$1/2$ system or a two-level atom, among many others. The Bloch sphere is a useful way to visualize the state of a single qubit on a unit sphere.
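The generic one-qubit state above can be checked with a minimal \textbf{numpy} illustration (this is a standalone sketch, not part of the \textbf{Quintuple} API; the amplitude values are arbitrary):

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)  # |0>
one = np.array([0, 1], dtype=complex)   # |1>

# |psi> = a|0> + b|1> with illustrative amplitudes a = 3/5, b = 4i/5
a, b = 3 / 5, 4j / 5
psi = a * zero + b * one

p0 = abs(np.vdot(zero, psi)) ** 2  # probability of finding |0>, |a|^2 = a a*
p1 = abs(np.vdot(one, psi)) ** 2   # probability of finding |1>, |b|^2 = b b*
assert np.isclose(p0 + p1, 1.0)    # the two probabilities normalize to one
```

Note that `np.vdot` conjugates its first argument, so it computes the bra-ket inner product directly.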
Formally, in the Bloch sphere representation the qubit state is written as \begin{equation} \ket{\psi} = \cos{\frac{\theta}{2}}\ket{0}+e^{i\phi} \sin{\frac{\theta}{2}}\ket{1}, \end{equation} where $\theta$ and $\phi$ are the polar coordinates describing a vector on the unit sphere. To make use of the power of quantum computation we will in general want more than one qubit. In a classical $n$-bit register we can initialize each bit to 0 or 1. For example, to represent the base-10 number 19 in a classical 5-bit register we can set its elements to $10011$. For $n$ qubits, to create an analogous state, a so-called quantum register, we prepare the state $\ket{10011}=\ket{1} \otimes \ket{0} \otimes \ket{0} \otimes \ket{1} \otimes \ket{1}$. Here $\otimes$ corresponds to the tensor product (also known as the direct or Kronecker product). Generically, an $n$-qubit quantum register can hold any superposition of $n$-qubit states. For an $n$-qubit state there are $2^n$ possible basis states, of which the $n$-qubit state can, in general, be a superposition. For example, for a 2-qubit state we have $2^2=4$ possible states, $\{\ket{00},\ket{01},\ket{10},\ket{11}\}$. For a 3-qubit state we have $2^3=8$ possible states, \begin{equation} \{\ket{000},\ket{001},\ket{010},\ket{011},\ket{100},\ket{101},\ket{110},\ket{111}\}. \end{equation} Numbering the states from $0$ to $2^n-1$, the canonical ordering used throughout this manuscript is: \begin{equation} \label{canonicalordering} \sum_{m=0}^1 \ldots \sum_{j=0}^1 \sum_{i=0}^1 \ket{i j ... m }, \end{equation} where the number of summations corresponds to the total number of qubits. Thus if we incorporate the amplitudes, the complex coefficients of these states, we can compute the probability of finding \begin{equation} \ket{\psi}=\sum_{m=0}^1 \ldots \sum_{j=0}^1 \sum_{i=0}^1 c_{ij\ldots m}\ket{i j ... m }, \end{equation} in state $\ket{i j ...
m }$ as the squared absolute value of $c_{ij\ldots m}$, $|c_{ij\ldots m}|^2=c_{ij\ldots m} c_{ij\ldots m}^*$. If we can represent an $n$-qubit state as the tensor product of the states of individual qubits, \begin{equation} \ket{q_1 q_2 \ldots q_n}=\ket{q_1} \otimes \ket{q_2} \otimes \ldots \otimes \ket{q_n}, \end{equation} the state is called separable. However, due to the nature of superposition, it may be that a multi-qubit state is non-separable and individual qubit states are not well defined independent of other qubits. This non-local correlation phenomenon, known as entanglement, is a necessary resource to achieve the exponential speed-up of quantum compared to classical computation \cite{jozsa2003role}. As such, the concept of quantum registers, necessary to store multi-qubit non-separable states, plays a primary role in quantum computation simulation. We have outlined the analog of the classical $n$-bit register, the $n$-qubit quantum register, for keeping track of quantum data. Here we will do the same with a classical gate and a quantum gate, which evolve classical and quantum states respectively. In classical computation, a classical gate operates on a classical register to evolve its state. In quantum computation, a quantum gate operates on a quantum register to evolve its state. Quantum states can be represented by matrices; the mathematics of the evolution of quantum states can unsurprisingly be represented by matrices as well. To represent quantum gates, these matrices must conform with the postulates of quantum mechanics as they multiply a state to produce an evolved state. Specifically, we know that the evolution of states must conserve probability (preserve norms); we cannot produce a state which is a superposition of states with probability greater than one. Matrices which ensure the conservation of probability when they multiply states are called \textit{unitary}.
Formally, this corresponds to any matrix $U$ which satisfies the property that its conjugate transpose $U^\dagger$ is also its inverse, that is, $U^\dagger U=UU^\dagger=I$, where $I$ is the identity matrix. In quantum computation, a quantum gate corresponds to a unitary matrix, and any unitary matrix corresponds to a valid quantum gate. Since unitary matrices are always invertible, quantum gates, and thus computation, are reversible; any operation we can do we can undo\cite{bennett1973logical}. As a qubit state can be realized physically by a variety of quantum mechanical systems, so can quantum gates be physically realized by a variety of quantum mechanical mechanisms, which must necessarily depend on the system's representation of the qubit. For example, in a system where qubits are represented by ions in a quantum trap, a laser tuned to a particular frequency can induce a unitary transformation, effectively acting as a quantum gate. Gates acting on a single qubit can be applied to a quantum register of an arbitrary qubit number. For example, consider the gate $X$, which flips the qubit it acts on from $\ket{0}$ to $\ket{1}$ or from $\ket{1}$ to $\ket{0}$. If the desired qubit to act on is the 3rd qubit in a 4-qubit quantum register, the appropriate gate is formed via $X_{3 \text{ of } 4}=I \otimes I \otimes X \otimes I $, where $I$ is the $2 \times 2$ identity matrix. In general, to create a gate $G_{m \text{ of } n}$ to operate on the $m$th qubit of a register of $n$ qubits from a gate $G$ that operates on a single qubit, one may use \begin{equation} G_{m \text{ of } n} = \bigotimes_{i=1}^{n} \begin{cases} I & \text{ if } i \neq m \\ G & \text{ if } i=m. \end{cases} \end{equation} Here, $\bigotimes_{i=1}^{n}$ is the analog of $\sum_{i=1}^{n}$, corresponding to the tensor product instead of the summation operation.
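The construction of $G_{m \text{ of } n}$ can be sketched directly with \textbf{numpy} (a generic illustration, not \textbf{Quintuple}'s internal code; the helper name `embed` is hypothetical):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)  # bit-flip gate

def embed(G, m, n):
    """Build the n-qubit gate applying single-qubit G to qubit m (1-based),
    i.e. G_{m of n} = I x ... x G x ... x I built up by Kronecker products."""
    out = np.array([[1]], dtype=complex)
    for i in range(1, n + 1):
        out = np.kron(out, G if i == m else I)
    return out

X3of4 = embed(X, 3, 4)                 # I x I x X x I
ket0000 = np.zeros(16, dtype=complex)
ket0000[0] = 1.0                        # |0000> in the canonical ordering
result = X3of4 @ ket0000                # flips only the 3rd qubit: |0010>
```

In the canonical ordering, $\ket{0010}$ sits at index 2 (binary $0010$), so all the amplitude ends up in that component.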
We can see that the application of a gate on a single qubit in this fashion doesn't generate entanglement, as it never results in the expansion of the size of the quantum register it is acting on. Specific sets of classical gates, for example the NOT and AND gates, can be used to construct all other classical logic gates and thus form a set of universal classical gates. Other such sets exist; in fact the NAND (\textit{not-and}) gate alone is a universal classical gate\cite{nielsen2010quantum}. In quantum computation, to obtain a universal gate set we will need a multi-qubit gate which acts on 2 qubits of an $n$-qubit register. The CNOT gate is one such gate. CNOT is the \textit{2-qubit controlled-not gate}. Its first input is known as the control qubit, the second as the target qubit, and the state of the target qubit is flipped on output if and only if the control qubit is $\ket{1}$. The application of CNOT can under many scenarios generate entanglement. CNOT combined with single qubit gates can approximate arbitrarily well any (unitary) operation on a quantum computer\cite{Barenco1995}. Quantum gates can be combined to form quantum circuits, the analog of classical circuits composed of logic gates connected by wires. The full set of gates that both the IBM Quantum Experience and \textbf{Quintuple} support form a (non-minimal) universal quantum gate set, such that we can combine the gates in a quantum circuit to create any multi-qubit logic gate we desire. To understand the constraints of extracting information from a quantum register, we'll need to understand how measurement functions in quantum mechanics. Measurement in quantum mechanics is something which engages a lot of discussion, but its properties are straightforward to state in mathematics, if not in philosophy.
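As noted above, CNOT can generate entanglement. A minimal \textbf{numpy} sketch (independent of \textbf{Quintuple}'s classes): applying a Hadamard to the control qubit and then CNOT turns the product state $\ket{00}$ into the non-separable Bell state $(\ket{00}+\ket{11})/\sqrt{2}$.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket00 = np.array([1, 0, 0, 0], dtype=complex)   # |00>
bell = CNOT @ np.kron(H, I) @ ket00              # (|00> + |11>)/sqrt(2)

# A 2-qubit state c00|00> + c01|01> + c10|10> + c11|11> is separable
# exactly when c00*c11 - c01*c10 = 0; for the Bell state this is 1/2 != 0,
# so the two qubits are entangled.
det = bell[0] * bell[3] - bell[1] * bell[2]
```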
It is possible to perform a measurement of a single qubit with respect to any basis $\{\ket{a},\ket{b}\}$ (not just the default $\{\ket{0},\ket{1}\}$ basis) so long as this basis is orthonormal, that is, the basis states are normalized and mutually orthogonal, so that the total probability is one. It likewise is possible to measure a multi-qubit system with respect to any orthonormal basis. Earlier, we stated that the probability of finding: \begin{equation} \ket{\psi}=\sum_{m=0}^1 \ldots \sum_{j=0}^1 \sum_{i=0}^1 c_{ij\ldots m}\ket{i j ... m }, \end{equation} in state $\ket{i j ... m }$ is the squared absolute value of $c_{ij\ldots m}$, $\abs{c_{ij\ldots m}}^2=c_{ij\ldots m} c_{ij\ldots m}^*$. When we perform a measurement, we indeed find the system in one of these states $\ket{i j ... m}$ with the appropriate probability $|c_{ij\ldots m}|^2$. After the measurement is performed, the state is collapsed and all further measurements return the same result, state $\ket{i j \ldots m}$ with probability 1. \section{Quantum information tools represented in Quintuple} \label{quintuple} Only those states and gates which are useful for interfacing with IBM's 5-qubit quantum computer are supported by \textbf{Quintuple}. The only external Python module \textbf{Quintuple} relies upon is the \textbf{numpy} module. The core of \textbf{Quintuple} is just 675 lines long. \subsection{States} States are available as static member variables of the \pythoninline{class State}. The following qubit states are available. Standard ($z$) basis (\pythoninline{State.zero_state,State.one_state}): \begin{equation} \ket{0}=\begin{pmatrix} 1 \\ 0\end{pmatrix}, \ket{1}=\begin{pmatrix}0 \\ 1 \end{pmatrix}. \end{equation} Diagonal ($x$) basis (\pythoninline{State.plus_state,State.minus_state}): \begin{equation} \ket{+}= {\frac{1}{\sqrt{2}}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \ket{-}=\frac{1}{\sqrt{2}}\begin{pmatrix}1 \\ -1 \end{pmatrix}.
\end{equation} Circular ($y$) basis (\pythoninline{State.plusi_state,State.minusi_state}): \begin{equation} \ket{\circlearrowright}= {\frac{1}{\sqrt{2}}} \begin{pmatrix} 1 \\ i \end{pmatrix}, \ket{\circlearrowleft}=\frac{1}{\sqrt{2}}\begin{pmatrix}1 \\ -i \end{pmatrix}. \end{equation} The \pythoninline{class State} has a variety of helper methods, including those to transform to the $x$ or $y$ basis, to see if a multi-qubit state is simply separable into individual qubits in the set $\{\ket{0},\ket{1},\ket{+},\ket{-},\ket{\circlearrowright},\ket{\circlearrowleft} \}$, and to extract the $n$th qubit from a separable multi-qubit state. This class implements the measurement method, following the limitations of nature, and supports retrieving a state's representation on the Bloch sphere, which is not possible in nature but feasible in simulation. The class also has a method to create a state from a binary string (e.g. ``01011'' corresponding to $\ket{01011}$) and to return a string from a separable state. For example, we can compute the representation of the state $\ket{10011}$ in the \textbf{Quintuple} module numerically with \begin{python} np.kron(State.one_state,np.kron( State.zero_state,np.kron( State.zero_state,np.kron( State.one_state,State.one_state)))) \end{python} or more concisely by \pythoninline{State.state_from_string("10011")}. \subsection{Gates} A variety of single qubit gates are supported, as is the 2-qubit gate CNOT. Later, the \pythoninline{class QuantumComputer} will use these gates as building blocks to define gates which operate appropriately on quantum registers of up to 5 qubits. In the following gate definitions the Python syntax is given in parentheses. $H$ gate; Hadamard gate (\pythoninline{Gate.H}): \begin{equation} H={\frac{1}{\sqrt{2}}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.
\end{equation} $X,Y,Z$ gates; Pauli gates (\pythoninline{Gate.X,Gate.Y,Gate.Z}): \begin{equation} X=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \end{equation} \begin{equation} Y=\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}. \end{equation} \begin{equation} Z=\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \end{equation} $I$ gate; Identity gate (\pythoninline{Gate.eye}): \begin{equation} I=\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \end{equation} $S$ gate; Phase gate (\pythoninline{Gate.S}): \begin{equation} S=\begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}. \end{equation} $S^\dagger$ gate (\pythoninline{Gate.Sdagger}): \begin{equation} S^\dagger=\begin{pmatrix} 1 & 0 \\ 0 & -i \end{pmatrix}. \end{equation} $T$ gate; $\pi/8$ gate (\pythoninline{Gate.T}): \begin{equation} T=\begin{pmatrix} 1 & 0 \\ 0 & e^{\frac{i \pi}{4}} \end{pmatrix}. \end{equation} $T^\dagger$ gate (\pythoninline{Gate.Tdagger}): \begin{equation} T^\dagger=\begin{pmatrix} 1 & 0 \\ 0 & e^{-\frac{i \pi}{4}} \end{pmatrix}. \end{equation} $CNOT$ gate (\pythoninline{Gate.CNOT2_01}): \begin{equation} CNOT=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}. \end{equation} It can easily be checked that these gates produce the desired behavior. All other combinations of target and control qubits are available within \pythoninline{class Gate} acting on quantum registers of up to 5 qubits. Here, the number appearing after the CNOT indicates the number of qubits in the register the gate is to operate on, the first subscript indicates the control qubit index in the entangled qubit register, and the second subscript indicates the target qubit index, both 0 based. For example \pythoninline{CNOT4_03} is to operate on a 4-qubit register with the 0th qubit corresponding to the control qubit and the 3rd qubit corresponding to the target qubit. 
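It can indeed be checked numerically that the gate matrices above are unitary, along with familiar identities such as $T^2 = S$ and $S^2 = Z$ (an illustrative standalone check, not part of the module):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
S = np.array([[1, 0], [0, 1j]])
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def is_unitary(U):
    """U is unitary iff its conjugate transpose is its inverse."""
    U = np.asarray(U)
    return np.allclose(U.conj().T @ U, np.eye(U.shape[0]))

all_unitary = all(is_unitary(U) for U in (H, X, Y, Z, S, T, CNOT))
```

The identity $T^2 = S$ follows because $(e^{i\pi/4})^2 = e^{i\pi/2} = i$, which is why $T$ is also known as the $\pi/8$ gate's "square root of $S$".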
The \pythoninline{class QuantumComputer} helpfully supports specifying only the target and control qubits when applying the CNOT gate and automatically deploys the correct gate to achieve this based on the internal configuration of its quantum registers. \subsection{Probabilities} Several convenience methods are provided to help compute probabilities and expectation values. For a qubit residing in a quantum register representing an arbitrary number of entangled qubits, the method \pythoninline{Probability.get_probabilities(qubit)} returns an array of probabilities representing the quantum register in the canonical ordering defined in \textbf{Equation \ref{canonicalordering}}. The method \pythoninline{Probability.pretty_print_probabilities} prints each state and its associated probability for easy examination. For a state representing a single qubit, there are several additional methods available within \pythoninline{class Probability} to help calculate the expectation of the state in the standard ($z$), diagonal ($x$), or circular ($y$) bases, respectively. \subsection{QuantumRegister} \label{quantum_register} To represent a possibly non-separable group of distinguishable qubits, one can treat them together as a single quantum register to keep track of their ordering and their entangled state. \textbf{Quintuple} uses the \pythoninline{class QuantumRegister} for this purpose, and the register is managed by the \pythoninline{class QuantumComputer} so that the user can perform operations and measurements without having to follow how the qubits within \pythoninline{class QuantumComputer} are internally arranged. The \pythoninline{QuantumRegister} object can be queried as to the number of qubits it represents, which particular qubits it represents, its state, and whether it is equal to another \pythoninline{QuantumRegister} object. The \pythoninline{class QuantumRegister} has an additional method not provided in nature.
Specifically, a qubit is a superposition of states, and when measured, its state collapses to just one of these states with a probability given by the squared probability amplitude. All further measurements return the same state, as the qubit is no longer in a superposition of states. The \pythoninline{QuantumComputer} supports measurement in the fashion of nature, but, for the convenience of further analysis, it also saves the value of the full state before collapse in the \pythoninline{QuantumRegister} object, which can be retrieved with the method \pythoninline{get_noop()}. \subsection{QuantumRegisterCollection} \label{quantum_register_collection} The \pythoninline{class QuantumRegisterCollection} is an abstraction that assists the \pythoninline{class QuantumComputer} in managing its \pythoninline{QuantumRegister}s. This class returns the register in which a particular qubit resides, manages the merging of two \pythoninline{QuantumRegister}s under the hood via its \pythoninline{entangle_qubits} method, and allows easy querying as to the order of the qubits it represents. This is useful to the \pythoninline{class QuantumComputer}, as it supports the user querying the state of the qubits in any increasing order the user desires. The abstraction of the \pythoninline{QuantumRegisterCollection} allows the \pythoninline{QuantumComputer} to keep the qubits in separate registers for as long as possible, only merging them into a single register when necessary. This means that the matrix operations associated with gate action are kept smaller and that states are kept separated, for clarity, for as long as possible. \subsection{QuantumComputer} The \pythoninline{class QuantumComputer} manages five qubits in an arbitrary grouping of quantum registers and allows the user to apply quantum gates and to measure and extract state information without having to consider how the qubits are internally represented.
At creation or upon reset, the \pythoninline{class QuantumComputer} prepares five qubits named ``q0'', ``q1'', ``q2'', ``q3'', and ``q4'', each having state $\ket{0}$. Its two primary methods are \pythoninline{apply_gate} and \pythoninline{apply_two_qubit_gate_CNOT}, which allow the user to apply the respective one- and two-qubit quantum gates which \textbf{Quintuple} supports. Additionally, the \pythoninline{execute} method allows the user to execute code snippets in a simplified syntax designed to be fully compatible with execution on the IBM Quantum Experience hardware, which compiles to use the appropriate pure Python methods. After the evolution code has been executed, the internal state can be easily queried and compared to expected results. \subsubsection{Applying Gates to Individual Qubits} The method \pythoninline{apply_gate} takes as arguments a gate of the \pythoninline{class Gate} and the name of the qubit to act on. Under the hood, this method acts on this qubit by simply applying the gate if the qubit is the only element of its quantum register, or, if the qubit is a member of a quantum register with more than one element, by creating and applying the corresponding gate to act on that qubit within the register. \subsubsection{Applying Controlled Gates to Two Qubits} The \pythoninline{apply_two_qubit_gate_CNOT} method has a similar syntax, taking as arguments the name of the control and the name of the target qubit. No gate name is needed as this method handles CNOT only. The quantum register(s) containing these two qubits, potentially the same register, are found within the \pythoninline{QuantumComputer}'s \pythoninline{QuantumRegisterCollection}.
If the two quantum registers the method is acting on (containing the control and the target qubit respectively) are distinct and each contain one qubit only, then a combined state corresponding to the tensor product of these two states is created and the default \pythoninline{Gate.CNOT2_01} gate is applied. If, after application, the combined state is fully separable into two individual qubits in the $z$ basis ($\{\ket{0}, \ket{1}\}$), only the target qubit is changed and the two are not entangled. If, however, after the application the combined state is not fully separable in this fashion, the two registers are merged into a single quantum register. If one or both of the quantum registers given contain more than one qubit, then their states are likewise combined via a tensor product as necessary (if they do not already reside in the same quantum register). Then the appropriate CNOT matrix formulation for the combined state is applied to it. The state of the relevant quantum register (the new register if one was created, otherwise the existing register which held both qubits) is set to the output of this calculation. Although \textbf{Quintuple} currently supports only the \pythoninline{CNOT} controlled gate out of the box, additional controlled gates could easily be supported. Indeed, given that the gate set \textbf{Quintuple} supports is universal, further controlled gates can be built out of supported components without modification. \subsubsection{Measurement} The \pythoninline{measure} method of the \pythoninline{class QuantumComputer} performs a probabilistic measurement of the quantum register in which the desired qubit resides. The measurement is performed in the default ($z$) basis and collapses the state.
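A probabilistic $z$-basis measurement with collapse can be sketched as follows (plain numpy, illustrating the mechanism rather than Quintuple's internal code):

```python
import numpy as np

def measure_register(state, rng):
    """Sample a basis state with probability |amplitude|^2 and collapse."""
    probs = np.abs(state) ** 2
    outcome = rng.choice(len(state), p=probs)
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0
    return outcome, collapsed

rng = np.random.default_rng(0)
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
outcome, collapsed = measure_register(plus, rng)
# measuring `collapsed` again always returns the same outcome
```

Repeating the measurement on the pre-collapse state many times recovers the 50/50 statistics of the superposition, which is exactly the statistical check described next.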
Since we are in a simulation, we can perform the same computation repeatedly and verify that the measurement operation statistically converges to the distribution given by the probability amplitudes of the superposition state resulting from the computation. Since we are in a simulation, we also have direct access to these amplitudes. For convenience, before a measurement is performed the superposition state is stored, and it is accessible later via the method \pythoninline{get_noop()} of the \pythoninline{class QuantumRegister}. Nature does not give us this information, but the \textbf{Quintuple} module does. This is useful for testing or later analysis. The \pythoninline{bloch} method of \pythoninline{class QuantumComputer} implements the capability of visualizing a single qubit on the Bloch sphere. If the value of \pythoninline{get_noop()} is set, a measurement has already been performed and the live state has collapsed. \subsubsection{Checking Output} Internally, \pythoninline{class QuantumComputer} may be representing qubits in any combination of \pythoninline{QuantumRegister}s, and within each \pythoninline{QuantumRegister} in any order. To compare to expected outputs, we need to be able to compare the probability amplitudes or qubit states for a collection of qubits in a specified order, or to compare the Bloch coordinates for a given qubit to an expected result. Thus, methods are provided so that the user can specify a qubit or group of qubits in a comma-separated string, along with the expected result in the specified order, and use an equality test to check whether the result matches. The algorithm used to output the entangled state in the desired order is given in \textbf{Appendix \ref{reordering}}. At this time the requested order must be in increasing qubit index order, due to the detailed implementation of the reordering algorithm.
The \pythoninline{probabilites_equal} and \pythoninline{qubit_states_equal} methods function similarly, the former comparing probabilities and the latter amplitudes. If one of the quantum registers directly contains the requested qubits in order, the result is simply computed and returned. Otherwise, an algorithm is run to output an entangled state representing the ordered tensor product of the requested qubits, and the probability or amplitude vector representing this entangled state is compared to that specified by the user. The \pythoninline{bloch_coords_equal} method simply compares the Bloch representation of the desired qubit to that specified, if the qubit happens to be in its own quantum register. If the desired qubit is in a quantum register with other qubits, it attempts to separate it from the quantum register in which it resides. The ``easy'' separation algorithm is simplistic, and only succeeds if the state is a permutation of the tensor product of single-qubit states drawn from the set $\{\ket{0},\ket{1},\ket{+},\ket{-},\ket{\circlearrowright},\ket{\circlearrowleft}\}$. Thus, the separation algorithm returning failure does not imply that the state is fundamentally inseparable. If the desired qubit is not easily separable from the others in its quantum register, the comparison method raises an exception. If it is separable, the method extracts the desired qubit, now in a single-qubit state, and compares the result to that desired. \subsubsection{Execution of Programs in IBM's Syntax} Programs for execution on \textbf{Quintuple}'s \pythoninline{class QuantumComputer} can be written in a concise format compatible with direct execution on the IBM Quantum Experience hardware. It is the language which is printed out to accompany the graphical setup of states, gates, and measurement operations in the IBM Quantum Experience interface. The interface also allows the user to simply copy and paste programs in this language, rather than forcing them through the graphical intermediary.
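A more general separability test than the ``easy'' pattern-match described above is a Schmidt-rank check via the SVD; the following sketch (for the two-qubit case, not Quintuple's implementation) factors a product state and detects an entangled one:

```python
import numpy as np

def split_first_qubit(state):
    """Try to factor a two-qubit state as (first qubit) x (second qubit).
    Returns the two factors, or None if the state is entangled."""
    m = np.asarray(state, dtype=complex).reshape(2, 2)
    u, s, vh = np.linalg.svd(m)
    if s[1] > 1e-12:            # Schmidt rank 2: not separable
        return None
    return u[:, 0] * s[0], vh[0, :]

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)               # entangled
prod = np.kron([1, 0], np.array([1, 1]) / np.sqrt(2))    # |0> x |+>
# split_first_qubit(bell) is None; split_first_qubit(prod) factors it
```

A rank-based check like this succeeds on any separable state, not just permutations of the six named single-qubit states.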
Currently, the following syntax for use in \pythoninline{class QuantumComputer}'s \pythoninline{execute} method encompasses all that a user is able to do on the 5-qubit IBM Quantum Experience hardware: \begin{tabular} {| l | r | } \hline available qubit list & $q[0],q[1],q[2],q[3],q[4]$ \\ 1-qubit gate list & h,t,tdg,s,sdg,x,y,z,id \\ 1-qubit gate action & ``gate q[i];'' \\ 2-qubit CNOT gate list & cx \\ 2-qubit CNOT gate action & ``cx q[control], q[target];'' \\ measurement operation list & measure, bloch \\ measurement operation action & ``operation q[i];'' \\ \hline \end{tabular} \noindent Here $\{$h,t,tdg,s,sdg,x,y,z,id$\}$ correspond to the Python \begin{python} Gate.H, Gate.T, Gate.Tdagger, Gate.S, Gate.Sdagger, Gate.X, Gate.Y, Gate.Z, Gate.eye \end{python} A program in this syntax can be executed easily. Program code can be put in a Python string, or equivalently read in from a file into a string. The code can be executed with the \pythoninline{execute} method, and afterwards the state of the quantum computer can be probed as desired in pure Python. The following \textbf{Section \ref{usage}} contains an explicit example of code in this syntax and its usage. Although the \pythoninline{execute} method takes in a string representing the program code, for testing and keeping track of program output the \pythoninline{class Program} is provided for convenience. This class has the code in its \pythoninline{code} variable, but additionally can store an expected \pythoninline{result_probability} or \pythoninline{bloch_vals}. For perusal, use, elaboration, and testing, over 40 example programs are collected in the \pythoninline{class Programs}. \section{Quintuple Code, Exploration of Modes of Usage} \label{usage} In this section a variety of modes of usage of the \textbf{Quintuple} module are provided. For consistency and comparison, each example mode of usage executes the same algorithm, corresponding to swapping the states of two qubits.
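A minimal sketch of how a compiler from the syntax above into method calls might look (a hypothetical helper, not Quintuple's \pythoninline{execute}; the real \pythoninline{apply_gate} takes a \pythoninline{Gate} constant rather than a name string):

```python
def run_program(qc, code):
    """Dispatch each ';'-separated statement of the IBM-style syntax
    to methods on qc (a stand-in for Quintuple's QuantumComputer)."""
    for stmt in code.split(";"):
        tokens = stmt.replace(",", " ").split()
        if not tokens:
            continue
        op, qubits = tokens[0], ["q" + t.strip("q[]") for t in tokens[1:]]
        if op in ("cx", "cnot"):
            qc.apply_two_qubit_gate_CNOT(qubits[0], qubits[1])
        elif op in ("measure", "bloch"):
            qc.measure(qubits[0])
        else:                       # h, t, tdg, s, sdg, x, y, z, id
            qc.apply_gate(op, qubits[0])

class CallRecorder:
    """Records the method calls run_program makes, for illustration."""
    def __init__(self):
        self.calls = []
    def apply_gate(self, gate, q):
        self.calls.append(("gate", gate, q))
    def apply_two_qubit_gate_CNOT(self, c, t):
        self.calls.append(("cnot", c, t))
    def measure(self, q):
        self.calls.append(("measure", q))

rec = CallRecorder()
run_program(rec, "x q[2]; cx q[1], q[2]; measure q[2];")
# rec.calls == [("gate","x","q2"), ("cnot","q1","q2"), ("measure","q2")]
```

The same string would run unchanged on the IBM interface, which is the point of supporting this syntax.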
As a more detailed overview of the action of this algorithm, here the quantum computer begins with $\ket{q_1}=\ket{0}$, $\ket{q_2}=\ket{0}$. The code then applies the $X$ gate to $\ket{q_2}$, which inverts it to $\ket{q_2}=\ket{1}$. Thus at the initial stage $\ket{q_1 q_2}=\ket{01}$. The algorithm then applies a series of CNOT and H gates such that we end up with $\ket{q_1 q_2}=\ket{10}$, a swapping of the states of the qubits at the initial stage. \subsection{Syntax compatible with IBM Quantum Experience hardware} To prepare for execution, we set the \pythoninline{swap_code} variable to the string containing the program code: \begin{python} x q[2]; cx q[1], q[2]; h q[1]; h q[2]; cx q[1], q[2]; h q[1]; h q[2]; cx q[1], q[2]; measure q[1]; measure q[2]; \end{python} \noindent We can then execute and examine the results with: \begin{python} qc=QuantumComputer() qc.execute(swap_code) Probability.pretty_print_probabilities( qc.qubits.get_quantum_register_containing( "q1").get_state()) \end{python} \noindent which will print, as expected: \begin{python} |psi>=|10> Pr(|10>)=1.000000; \end{python} \subsection{Swap program in pure Python} This same algorithm can be executed in pure Python using the machinery of \pythoninline{class QuantumComputer}: \begin{python} qc=QuantumComputer() qc.apply_gate(Gate.X,"q2") qc.apply_two_qubit_gate_CNOT("q1","q2") qc.apply_gate(Gate.H,"q1") qc.apply_gate(Gate.H,"q2") qc.apply_two_qubit_gate_CNOT("q1","q2") qc.apply_gate(Gate.H,"q1") qc.apply_gate(Gate.H,"q2") qc.apply_two_qubit_gate_CNOT("q1","q2") qc.measure("q1") qc.measure("q2") \end{python} \subsection{Swap in pure Python, without the QuantumComputer machinery} Equivalently, this algorithm can be run using the machinery of the \textbf{Quintuple} module's states and gates, without relying on the abstraction of its \pythoninline{class QuantumComputer}, in the following manner: \begin{python} q1=State.zero_state q2=State.zero_state q2=Gate.X*q2 new_state=Gate.CNOT2_01*np.kron(q1,q2)
H2_0=np.kron(Gate.H,Gate.eye) H2_1=np.kron(Gate.eye,Gate.H) new_state=H2_0*new_state new_state=H2_1*new_state new_state=Gate.CNOT2_01*new_state new_state=H2_0*new_state new_state=H2_1*new_state new_state=Gate.CNOT2_01*new_state \end{python} This manner of working with the module provides the most complete mathematical understanding of the operations that \pythoninline{class QuantumComputer} is abstracting. Any individual state or gate can be printed, and it is clear how entanglement is represented, as this is not done under the hood. This mode of execution also provides the clearest picture of the convenience that \textbf{Quintuple}'s \pythoninline{class QuantumComputer} affords. Explicit execution in this manner requires a more complicated syntax and manual management of quantum registers, and no convenience methods are available. \section{Summary and Outlook} \label{conclusion} \textbf{Quintuple} has been developed to aid the study and research of 5-qubit systems. \textbf{Quintuple} facilitates the development and debugging of quantum algorithms for deployment on IBM's Quantum Experience by providing an out-of-the-box, self-contained, ideal simulator of IBM's 5-qubit hardware and software infrastructure. Using the widely available and open-source language Python and its numerical module \textbf{numpy}, \textbf{Quintuple} provides full support for all operations available on the IBM Quantum Experience hardware. This quantum computer class can be used interactively or scripted, in native Python or using a simplified syntax directly compatible with that used on the IBM Quantum Experience infrastructure. \textbf{Quintuple} has been designed to be flexible enough to be simply extended to support further qubits, gates, syntax, and algorithmic abstractions as the IBM Quantum Experience infrastructure itself expands in functionality. Several extensions of \textbf{Quintuple} are planned.
First, as the IBM Quantum Experience evolves, whether to support additional gates, more qubits, or new abstractions, \textbf{Quintuple} will necessarily need to be updated to keep parity. Some of these updates can and will be done in anticipation, so long as the simplicity of \textbf{Quintuple} is maintained and backwards compatibility with the existing IBM Quantum Experience hardware support is preserved. Second, \textbf{Quintuple} is an ideal quantum computer simulator, but a real quantum computer has a variety of interactions of a quantum register with its environment. Hardware designers attempt to minimize such interactions, but realistically they always exist. These interactions, due to the noise of the environment, introduce a non-unitary component to the evolution of the system, resulting in a loss of information called decoherence. The IBM Quantum Experience hardware is no exception: it too is susceptible to these non-ideal interactions, and it is possible to model them in simulation as well. Doing so will make \textbf{Quintuple} even more useful to researchers designing and implementing algorithms to run on the IBM Quantum Experience, so integrating such modeling is planned for a future update of \textbf{Quintuple}. \section*{Acknowledgments} I am currently on leave from the NSF-AAPF grant 1501208 to conduct observations in Antarctica with the South Pole Telescope. I would like to thank Dr. Casey Handmer and Dr. Jerry M. Chow for helpful comments during the preparation of this manuscript. I acknowledge use of the IBM Quantum Experience for this work. The views expressed are those of the author and do not reflect the official policy or position of IBM or the IBM Quantum Experience team. \begin{appendices} \section{Reordering Algorithm} \label{reordering} The reordering algorithm allows the user to compare an ordered set of qubits to the output of the quantum computation.
Such an algorithm is necessary because, internal to the quantum computer abstraction, qubits can be grouped into arbitrary quantum registers in arbitrary order, while the user desires output in a specified order. This section presents the details of this algorithm. There are some requested configurations which would be impossible to provide without merging quantum registers, so the first step of the reordering algorithm computes and merges quantum registers as needed to make it possible to sort and return the desired configuration. For example, if the user requests the order ``q0'',``q1'',``q3'',``q4'' and internally ``q0'' and ``q4'' are members of a single quantum register while ``q1'' and ``q3'' are members of another, these two quantum registers will have to be merged before sorting. After the first step, we know we have a set of quantum registers that it is possible to reorder into the requested order. However, it could still be the case that we cannot return exactly the requested order. For example, if ``q0'',``q1'',``q3'',``q4'' were again requested but in this case all five qubits internally reside in the same quantum register, it will in general not be possible to separate out ``q2''. This is checked for, and the algorithm throws an exception if sorting is not possible at this stage. Since we are only dealing with a small number of qubits (5), it is possible to use a simplistic sorting algorithm for clarity; in this case bubble sort is chosen. With each step in the sorting algorithm, we must also rearrange the state of the quantum register involved to correspond to the new order. Using a sorting algorithm with simple, well-defined operations, bubble sort with its in-place swaps, makes it easy to apply the necessary matrix operations to the quantum register. The bubble sort algorithm is simple to describe: it steps through a list comparing adjacent items and swapping them as necessary, and repeats this pass until the list is sorted.
It has a worst-case performance of $\mathcal{O}(n^2)$. Since $n=5$ in our case, this is not a big penalty to pay for simplicity, and the nature of quantum computation makes this the least of our worries were \textbf{Quintuple} to be extended to large $n$. The bubble sort algorithm is explicitly coded so that as we swap the qubits to match the desired order, we also rearrange the state of the quantum register involved to correspond to the new order. This is done by computing the permutation matrix corresponding to the rearrangement prescribed by bubble sort, and applying this permutation matrix to the state. This is done with every swap of the qubit list that bubble sort prescribes, meaning that the state is in the corresponding order when the qubit list is sorted. The final step of the reordering algorithm is to return a single state representing the qubits of interest; this is possible as was ensured in the previous step. For example, if ``q0'',``q1'',``q3'',``q4'' are requested, and ``q2'' resides in a separate quantum register from any of these qubits, then ``q2'' is ignored. The result is then easily computed as the ordered tensor product of the quantum registers solely containing ordered qubits of interest. This result can be compared to the expected state supplied by the user.
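The swap-while-sorting step can be sketched concretely (a minimal illustration in plain numpy; the state permutation is done here by re-indexing basis states rather than by an explicit permutation matrix):

```python
import numpy as np

def swap_qubits_in_state(state, n, i, j):
    """Permute amplitudes so that qubits i and j exchange positions
    in an n-qubit state (qubit 0 is the leftmost tensor factor)."""
    new_state = np.zeros_like(state)
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - b)) & 1 for b in range(n)]
        bits[i], bits[j] = bits[j], bits[i]
        new_idx = int("".join(map(str, bits)), 2)
        new_state[new_idx] = state[idx]
    return new_state

def bubble_sort_register(qubits, state):
    """Bubble-sort the qubit name list, permuting the state alongside."""
    n = len(qubits)
    qubits = list(qubits)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            if qubits[i] > qubits[i + 1]:
                qubits[i], qubits[i + 1] = qubits[i + 1], qubits[i]
                state = swap_qubits_in_state(state, n, i, i + 1)
                swapped = True
    return qubits, state
```

For example, a register holding ("q1", "q0") in state $\ket{01}$ sorts to ("q0", "q1") in state $\ket{10}$, the same information in the requested order.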
The pseudo-code for the algorithm is included below: \begin{algorithm} \caption{Reordering}\label{reorder} \begin{algorithmic}[1] \Function{Reorder}{O: requested order} \Algphase{Phase 1 - Merge quantum registers} \Require O is in increasing order \For{$q \in O$} \For{$r \in R \gets \text{ quantum registers}$} \State $rmin \gets \text{smallest qubit in r}$ \State $rmax \gets \text{largest qubit in r}$ \State $S \gets \text{all qubits between (inclusive) } rmin \text{ and } rmax$ \If{$q \not \in r \And q \in S $} \State $r_q \gets \text{the register q belongs to}$ \State $\textsc{merge}(r_q,r)$ \EndIf \EndFor \EndFor \Algphase{Phase 2 - Sort quantum registers} \Ensure Every quantum register has qubits that are either all in $O$ or none are in $O$ \For{$r \in R \gets \text{ quantum registers}$} \State $Q \gets \text{ qubits in }r $ \If{$Q \cap O \not \in \{\emptyset,Q\}$} \State \Return $failure$ \EndIf \If{$Q$ not ordered} \State $n \gets length(Q)$ \State $swapped \gets true$ \While{$swapped \neq false$} \State $swapped \gets false$ \For{$i=0$ to $n-2$} \If{$Q[i]>Q[i+1]$} \State $\textsc{Swap}(r,i,i+1)$ \State $swapped \gets true$ \EndIf \EndFor \EndWhile \EndIf \EndFor \Algphase{Phase 3 - Create combined answer state} \State $answer \gets nil$ \For{$r \in R \gets \text{ quantum registers}$} \State $Q \gets \text{ qubits in }r $ \For{$q \in Q $} \If{$q \in O$} \If{$answer = nil$} \State $answer \gets q$ \Else \State $answer \gets answer \otimes q$ \EndIf \EndIf \EndFor \EndFor \State \Return $answer$ \EndFunction \end{algorithmic} \end{algorithm} \begin{algorithm}[h] \caption{Swap}\label{swap} \begin{algorithmic}[1] \Procedure{Swap}{$r,i,j$} \State $Q \gets \text{ qubits in }r $ \State $state \gets \text{ state of } r $ \Algphase{Phase 1 - Permute the state } \State $n \gets length(Q)$ \State $L\gets \text{ all possible states of } n \text{ qubits in canonical ordering} $ \State $permute \gets Id_{2^n\times 2^n}$ \Comment{$2^n\times 2^n$ identity matrix} \State $swapped
\gets \emptyset $ \For{$c \in L$} \State $newc \gets c$ \State $\textsc{SwapHelper}(newc,i,j)$ \If{$newc \neq c$} \State $i_{\text{per}} \gets \text{ index of } c \text{ in L }$ \State $j_{\text{per}} \gets \text{ index of } newc \text{ in L }$ \State $swap \gets \{i_{\text{per}},j_{\text{per}}\}$ \If{$swap \not \in swapped$} \State $swapped \gets swapped \cup swap $ \State $\textsc{SwapHelper}(permute.rows,i_{\text{per}},j_{\text{per}})$ \EndIf \EndIf \EndFor \State $state \gets permute \cdot state$ \Algphase{Phase 2 - Swap the qubits in the register} \State $\textsc{SwapHelper}(Q,i,j)$ \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Swap Helper}\label{swaphelper} \begin{algorithmic}[1] \Procedure{SwapHelper}{$l,i,j$} \State $tmp \gets l[i]$ \State $l[i] \gets l[j]$ \State $l[j] \gets tmp$ \EndProcedure \end{algorithmic} \end{algorithm} \clearpage \end{appendices} \bibliographystyle{phy-bstyles/cpc}
\section{Additional Experiments: Numerical Counterexample to Oja Convergence with Constant Step Size} \label{sec:counter} \begin{figure}[!ht] \centering \includegraphics[width=0.8\linewidth]{Figs/counter2.pdf} \vspace{-3.0cm} \caption{Counterexample for the convergence of averaged SGD with constant step size.} \label{fig:counter} \end{figure} In this section we present an additional experiment which shows empirically that averaged SGD with constant step size does not always converge for the streaming $k$-PCA problem. We consider $d=2$ and a covariance matrix $H$ with random eigenvectors and two eigenvalues $\frac{1\pm1/\pi}{4}$. The noise distribution of the stream $\{h_n h_n^\top \}_{n\geq0}$ uses a more involved construction. Consider $\alpha_n\sim\mathcal N(-\pi/2,\pi^2/4)$, $\beta_n\sim\mathcal N(\pi/4,\pi^2/16)$ and $\tau_n\sim \mathcal{B}(1/2)$. Then define $\theta_n$ as: \[ \theta_n= \tau_n \alpha_n+(1-\tau_n)\beta_n, \] and the stream $h_n$ to be \[ h_n= \Big[\frac{\cos(\theta_n)}{\sqrt{(1-1/\pi)/2}}, \frac{\sin(\theta_n)}{\sqrt{(1+1/\pi)/2}}\Big]. \] \myfig{counter} compares the performance of averaged SGD with constant step size $\gamma=1$ and with the decreasing step size $\gamma_n=\frac{1}{\sqrt{n}}$. We see that with constant step size, both SGD and averaged SGD fail to converge to the true solution. SGD oscillates around the solution in a ball of radius $\sim \gamma$, and averaged SGD does converge, but not to the correct solution (although it is still contained in a ball of radius $\sim 10^{-4}$ around the correct solution). On the other hand, SGD with decreasing step size behaves just as well as with a Gaussian data stream. SGD converges to the solution at the slow rate $O(1/\sqrt{n})$, while averaged SGD converges at the fast rate $O(1/{n})$. This interesting example shows that constant step size averaged SGD does not converge to the correct solution in all situations.
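The stream above can be generated directly; a sketch follows (the random rotation of $H$'s eigenvectors is omitted, so this works in the eigenbasis of $H$):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_stream(n):
    """Draw n samples h_k from the mixture-of-angles construction."""
    alpha = rng.normal(-np.pi / 2, np.pi / 2, size=n)   # variance pi^2/4
    beta = rng.normal(np.pi / 4, np.pi / 4, size=n)     # variance pi^2/16
    tau = rng.integers(0, 2, size=n)                    # Bernoulli(1/2)
    theta = tau * alpha + (1 - tau) * beta
    h = np.stack([np.cos(theta) / np.sqrt((1 - 1 / np.pi) / 2),
                  np.sin(theta) / np.sqrt((1 + 1 / np.pi) / 2)], axis=1)
    return h

h = sample_stream(100000)
emp_cov = h.T @ h / len(h)   # empirical estimate of E[h h^T]
```

Feeding this stream to constant-step-size averaged SGD reproduces the failure mode shown in \myfig{counter}.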
However, it remains an open problem to investigate the convergence properties of constant step size SGD in the Gaussian case. \section{Proofs in \mysec{application}} \label{sec:appapp} Here we provide further discussion and proofs of results described in \mysec{application}. \subsection{Proofs in \mysec{geostrong}} Here we present the proofs of the slow convergence rate (in both 2nd and 4th moments) for SGD applied to geodesically-smooth and strongly-convex functions. As discussed in \mysec{geostrong}, we will take the retraction $R$ to be the exponential map throughout this section. Before we begin, we recall the following lemma from \citet{moulines2011non}. \begin{lemma}\label{lem:computsum} Let $n,m\in\mathbb{N}$ such that $m<n$ and $\alpha\geq0$. Then, \[ \frac{1}{2(1-\alpha)}[n^{1-\alpha}-m^{1-\alpha}]\leq \sum_{k=m+1}^n k^{-\alpha}\leq \frac{1}{1-\alpha}[n^{1-\alpha}-m^{1-\alpha}]. \] \end{lemma} This follows by simply bounding sums by integrals. With this result we can now show that SGD applied to (locally) geodesically-smooth and strongly-convex functions converges at the ``slow'' rate with an appropriately decaying step size. \begin{proposition}\label{prop:mom2} Let Assumptions \ref{assump:manifold}, \ref{assump:noiseunbiased}, \ref{assump:noiseLip} and \ref{assump:strongconv} hold for the iterates evolving in \eq{grad_desc}. Recalling that $\gamma_n=Cn^{-\alpha}$ where $C>0$ and $\alpha\in[1/2,1)$ we have, \[ \mathbb{E}[d^2(x_{n},x_\star)]\leq \frac{2C \zeta \upsilon^2}{\mu n^\alpha } + O( \exp(-c\mu n^{1-\alpha} )), \] and \[ \mathbb{E}[d^4(x_{n},x_\star)]\leq\frac{4C(3+\zeta)\zeta\upsilon^4}{\mu n^{2\alpha} } + O( \exp(-c\mu n^{1-\alpha} )), \] for some $c > 0$, where $\zeta > 0$ is a constant depending on the geometry of $\mathcal{M}$. \end{proposition} \begin{proof} Throughout we will use $c$ to denote a global, positive constant that may change from line to line.
The quantity $\zeta \equiv \zeta(\kappa, c) = \frac{\sqrt{\abs{\kappa}} c}{\tanh(\sqrt{\abs{\kappa}} c)}$ is a geometric quantity from \citet{zhang2016first}, where $\kappa$ denotes the sectional curvature of the manifold. Note that a bound on the sectional curvature is subsumed by Assumption \ref{assump:manifold}: the curvature is a smooth function on $\mathcal{X}$, and hence bounded. \paragraph{Bound on the second moment.} We first prove the 2nd-moment bound by following the proof of Theorem 2 by \citet{moulines2011non}, but adapting it to the setting of g-strong convexity. Using Corollary 8 (a generalization of the law of cosines to the manifold setting) by \citet{zhang2016first} we have \begin{equation}\label{eq:mom2} d^2(x_{n+1},x_\star) \leq d^2(x_{n},x_\star) +2\gamma_{n+1}\langle \nabla f_{n+1}(x_n),{\mathop { \rm Exp{}}}_{x_n}^{-1}(x_\star)\rangle +\gamma_{n+1}^2 \zeta \Vert \nabla f_{n+1}(x_n) \Vert^2, \end{equation} where $\zeta$ satisfies $\max_{x\in\mathcal{X}} \zeta(\kappa, d(x,x_\star))\leq \zeta$. Taking conditional expectations yields \[ \mathbb{E}[d^2(x_{n+1},x_\star)\vert \mathcal{F}_n]\leq d^2(x_{n},x_\star) +2\gamma_{n+1}\langle \nabla f(x_n),{\mathop { \rm Exp{}}}_{x_n}^{-1}(x_\star)\rangle +\gamma_{n+1}^2 \zeta \mathbb{E}[\Vert \nabla f_{n+1}(x_n) \Vert^2\vert \mathcal{F}_n]. \] Using the definition of g-strong convexity and Assumptions~\ref{assump:noiseLip},\ref{assump:strongconv} we directly get \[ \mathbb{E}[d^2(x_{n+1},x_\star) | \mathcal{F}_n]\leq (1-2\gamma_{n+1}\mu )d^2(x_{n},x_\star)+ \gamma_{n+1}^2 \zeta \upsilon^2. \] Taking the full expectation, and denoting by $\delta_{n}=\mathbb{E}[d^2(x_{n},x_\star)]$, we obtain the recursion, \[ \delta_n\leq (1-2\gamma_{n}\mu )\delta_{n-1}+ \gamma_{n}^2 \zeta \upsilon^2. \] Unrolling the recursion we have, \[ \delta_n\leq \prod_{i=1}^n(1-2\gamma_{i}\mu )\delta_0 + \zeta\upsilon^2 \sum_{i=1}^n \gamma_i^2 \prod_{k=i+1}^n(1-2\mu\gamma_k).
\] Using the elementary inequality $(1-x)\leq \exp(-x)$ for $x\in\mathbb R$, we observe that the first term on the right side decreases exponentially fast. To analyze the second term, we split it into two components around $\lfloor n/2 \rfloor$: \begin{equation} \sum_{i=1}^n \gamma_i^2 \prod_{k=i+1}^n(1-2\mu\gamma_k) = \sum_{i=1}^{\lfloor n/2 \rfloor} \gamma_i^2 \prod_{k=i+1}^n(1-2\mu\gamma_k) +\sum_{i={\lfloor n/2 \rfloor}+1}^n \gamma_i^2 \prod_{k=i+1}^n(1-2\mu\gamma_k). \label{eq:split} \end{equation} For the first term in \eq{split}, using again $(1-x)\leq \exp(-x)$ for $x\in\mathbb R$ \begin{eqnarray*} \sum_{i=1}^{\lfloor n/2 \rfloor} \gamma_i^2 \prod_{k=i+1}^n(1-2\mu\gamma_k) &\leq& \prod_{k=\lfloor n/2 \rfloor+1}^n(1-2\mu\gamma_k) \sum_{i=1}^{\lfloor n/2 \rfloor} \gamma_i^2\\ &\leq& \prod_{k=\lfloor n/2 \rfloor+1}^n\exp(-2\mu\gamma_k) \sum_{i=1}^{\lfloor n/2 \rfloor} \gamma_i^2\\ &\leq& \exp(-2\mu \sum_{k=\lfloor n/2 \rfloor+1}^n\gamma_k) \sum_{i=1}^{\lfloor n/2 \rfloor} \gamma_i^2\\ &\leq& C \exp(-{c\mu }n^{1-\alpha}) n^{1-2\alpha}, \end{eqnarray*} using Lemma \ref{lem:computsum}; this term decreases exponentially fast as $n \to \infty$. For the second term, using $(1-2\mu\gamma_k)\leq(1-\mu\gamma_k)$ for all $k$, \begin{eqnarray*} \sum_{i={\lfloor n/2 \rfloor}+1}^n \gamma_i^2 \prod_{k=i+1}^n(1-\mu\gamma_k)&\leq&\gamma_{\lfloor n/2 \rfloor}\sum_{i={\lfloor n/2 \rfloor}+1}^n \gamma_i \prod_{k=i+1}^n(1-\mu\gamma_k)\\ &=&\gamma_{\lfloor n/2 \rfloor}\sum_{i={\lfloor n/2 \rfloor}+1}^n\frac{ 1-(1-\mu\gamma_i)}{\mu} \prod_{k=i+1}^n(1-\mu\gamma_k) \\ &=&\frac{\gamma_{\lfloor n/2 \rfloor}}{\mu}\sum_{i={\lfloor n/2 \rfloor}+1}^n [ \prod_{k=i+1}^n(1-\mu\gamma_k)-\prod_{k=i}^n(1-\mu\gamma_k)]\\ &\leq&\frac{\gamma_{\lfloor n/2 \rfloor}}{\mu}[1-\prod_{k=\lfloor n/2 \rfloor+2}^n(1-\mu\gamma_k)]\\ &\leq&\frac{\gamma_{\lfloor n/2 \rfloor}}{\mu}\leq \frac{2C}{n^\alpha\mu}. \end{eqnarray*} The bound on the second moment follows from this last inequality. \paragraph{Bound on the fourth moment.} We now prove the bound on the 4th-moment.
We start by expanding the square of \eq{mom2}, \begin{multline*} d^4(x_{n+1},x_\star) \leq d^4(x_{n},x_\star) +4\gamma_{n+1}^2(\langle \nabla f_{n+1}(x_n),{\mathop { \rm Exp{}}}_{x_n}^{-1}(x_\star)\rangle)^2 +\gamma_{n+1}^4 \zeta^2 \Vert \nabla f_{n+1}(x_n) \Vert^4 \\ +4 \gamma_{n+1}\langle \nabla f_{n+1}(x_n),{\mathop { \rm Exp{}}}_{x_n}^{-1}(x_\star)\rangle d^2(x_{n},x_\star) +2 \gamma_{n+1}^2 \zeta \Vert \nabla f_{n+1}(x_n) \Vert^2d^2(x_{n},x_\star) \\+ 4 \gamma_{n+1}^3\langle \nabla f_{n+1}(x_n),{\mathop { \rm Exp{}}}_{x_n}^{-1}(x_\star)\rangle \zeta \Vert \nabla f_{n+1}(x_n) \Vert^2. \end{multline*} Taking conditional expectations and using Cauchy-Schwarz we have, \begin{eqnarray*} \mathbb{E} [d^4(x_{n+1},x_\star) \vert\mathcal F_{n}] &\leq& d^4(x_{n},x_\star) +2(2+\zeta)\gamma_{n+1}^2 \mathbb{E} [ \Vert \nabla f_{n+1}(x_n)\Vert^2 \vert\mathcal F_{n}]d^2(x_{n},x_\star) \\ && +\gamma_{n+1}^4 \zeta^2 \mathbb{E}[ \Vert \nabla f_{n+1}(x_n) \Vert^4 \vert\mathcal F_{n}] +4 \gamma_{n+1}\langle \nabla f(x_n),{\mathop { \rm Exp{}}}_{x_n}^{-1}(x_\star)\rangle d^2(x_{n},x_\star) \\ &&+ 4 \gamma_{n+1}^3 \zeta \mathbb{E}[ \Vert \nabla f_{n+1}(x_n) \Vert^3 \vert\mathcal F_{n}] d(x_{n},x_\star). \end{eqnarray*} Using that $f$ is g-strongly convex (Assumption~\ref{assump:strongconv}), the 4th-moment bound in Assumption~\ref{assump:noiseLip}, and Jensen's inequality we obtain, \begin{multline*} \mathbb{E} [d^4(x_{n+1},x_\star) \vert\mathcal F_{n}]\leq (1-4\gamma_{n+1}\mu)d^4(x_{n},x_\star) +2(2+\zeta)\gamma_{n+1}^2 d^2(x_{n},x_\star) \upsilon^2 \\ +\gamma_{n+1}^4 \zeta^2 \upsilon^4 + 4 \gamma_{n+1}^3 \zeta \upsilon^3 d(x_{n},x_\star).
\end{multline*} Using the upper bound $4 \gamma_{n+1}^3 \zeta \upsilon^3 d(x_{n},x_\star) \leq 2 \gamma_{n+1}^4 \zeta^2 \upsilon^4+2 \gamma_{n+1}^2 \upsilon^2d(x_{n},x_\star) ^2 $, we have, \begin{equation}\label{eq:dada} \mathbb{E} [d^4(x_{n+1},x_\star) \vert\mathcal F_{n}]\leq (1-4\gamma_{n+1}\mu)d^4(x_{n},x_\star) +2(3+\zeta)\upsilon^2\gamma_{n+1}^2 d^2(x_{n},x_\star) +3\gamma_{n+1}^4 \zeta^2 \upsilon^4. \end{equation} Now let us define, $a_n= \mathbb{E} [d^4(x_{n+1},x_\star)]$, $b_n= \mathbb{E} [d^2(x_{n+1},x_\star)]$ and $u_n=a_n+\frac{2(3+\zeta)\upsilon^2}{\mu} \gamma_{n+1}b_n$. Taking the full expectation of \eq{dada}, we can bound $u_{n+1}$ as, \begin{eqnarray*} u_{n+1} &\leq & (1-\gamma_{n+1}\mu) u_n +3\gamma_{n+1}^4 \zeta^2 \upsilon^4 +\frac{2(3+\zeta)\zeta\upsilon^4}{\mu} \gamma_{n+1}^3 +2(3+\zeta)\upsilon^2\gamma_{n+1}^2 b_n\\ && - (1-\gamma_{n+1}\mu) \frac{2(3+\zeta)\upsilon^2}{\mu} \gamma_{n+1} b_n+(1-2\gamma_{n+1}\mu) \frac{2(3+\zeta)\upsilon^2}{\mu} \gamma_{n+1} b_n. \end{eqnarray*} Noting that $2(3+\zeta)\upsilon^2\gamma_{n+1}^2 b_n - (1-\gamma_{n+1}\mu) \frac{2(3+\zeta)\upsilon^2}{\mu} \gamma_{n+1} b_n+(1-2\gamma_{n+1}\mu) \frac{2(3+\zeta)\upsilon^2}{\mu} \gamma_{n+1} b_n=0$, we obtain the simple upper-bound on $u_{n+1}$, \[ u_{n+1} \leq (1-\gamma_{n+1}\mu) u_n +3\gamma_{n+1}^4 \zeta^2 \upsilon^4 +\frac{2(3+\zeta)\zeta\upsilon^4}{\mu} \gamma_{n+1}^3. \] Using $(1-x)\leq \exp( -x)$ for $x\in\mathbb R$, we have, \[ u_{n+1} \leq \exp (-\gamma_{n+1}\mu) u_n +3\gamma_{n+1}^4 \zeta^2 \upsilon^4 +\frac{2(3+\zeta)\zeta\upsilon^4}{\mu} \gamma_{n+1}^3. \] We can unroll this recursion as before, \[ u_{n} \leq \exp (-\mu \sum_{i=1}^n \gamma_{i}) u_0 + \sum_{i=1}^n[3\zeta^2 \upsilon^4\gamma_{i}^4 +\frac{2(3+\zeta)\zeta\upsilon^4}{\mu} \gamma_{i}^3] \prod_{k=i+1}^n\exp(-\mu\gamma_k). 
\] Proceeding exactly as in the proof of the bound on the second moment, we may bound $u_n$ as \[ u_n\leq \frac{2(3+\zeta)\zeta\upsilon^4}{\mu} \gamma_{\lfloor \frac{n}{2}\rfloor}^2 +\text{exponentially small remainder terms}. \] The conclusion follows. \end{proof} \section{Applications} \label{sec:application} We now introduce two applications of our Riemannian iterate-averaging framework. \subsection{Application to Geodesically-Strongly-Convex Functions} \label{sec:geostrong} In this section, we assume that $f$ is globally geodesically convex over $\mathcal{X}$ and take $R \equiv {\mathop { \rm Exp{}}}$, which allows the derivation of global convergence rates. This function class encapsulates interesting problems such as the matrix Karcher mean problem \citep{bini2013computing} which is non-convex in Euclidean space but geodesically strongly convex with an appropriate choice of metric on $\mathcal{M}$. \citet{zhang2016first} show for geodesically-convex $f$, that averaged SGD with step size $\gamma_n \propto \frac{1}{\sqrt{n}}$ achieves the slow $O\big(\frac{1}{\sqrt{n}}\big)$ convergence rate. If in addition, $f$ is geodesically strongly convex on $\mathcal{X}$, they obtain the fast $O(\frac{1}{n})$ rate. However, their result is not \textit{algorithmically robust}, requiring a delicate specification of the step size $\gamma_n \propto \frac{1}{\mu n}$, which is often practically impossible due to a lack of knowledge of $\mu$. Assuming smoothness of $f$, our iterate-averaging framework provides a means of obtaining a robust and global convergence rate. First, we make the following assumption: \begin{assumption} \label{assump:strongconv} The function $f$ is $\mu$-geodesically-strongly-convex on $\mathcal{X}$, for $\mu>0$, and the set $\mathcal{X}$ is geodesically convex. 
\end{assumption} Then using our main result in Theorem \ref{thm:main}, with $\gamma_n \propto \frac{1}{n^{\alpha}}$, we have: \begin{proposition} \label{prop:strongconvrate} Let Assumptions \ref{assump:manifold}, \ref{assump:HessianLip}, \ref{assump:noiseunbiased}, \ref{assump:noiseLip}, and \ref{assump:strongconv} hold for the iterates evolving in \eq{grad_desc} and \eq{ave_grad_desc} and take the retraction $R$ to be the exponential map ${\mathop { \rm Exp{}}}$. Then, \[ \mathbb{E}[\Vert \tilde{\Delta}_n \Vert^2] \leq \frac{1}{n} \tr[\nabla^2 f(x_\star)^{-1} \Sigma \nabla^2 f(x_\star)^{-1}] + O(n^{-2\alpha}) + O(n^{\alpha-2}). \] \end{proposition} We make several remarks. \begin{itemize} \item In order to show the result, we first derive a slow rate of convergence for SGD, by arguing that $\mathbb{E}[d^2(x_{n},x_\star)]\leq \frac{2C \zeta \upsilon^2}{\mu n^\alpha } + O( \exp(-c\mu n^{1-\alpha} ))$ and $ \mathbb{E}[d^4(x_{n},x_\star)]\leq\frac{4C(3+\zeta)\zeta\upsilon^4}{\mu n^{2\alpha} } + O( \exp(-c\mu n^{1-\alpha} ))$ where $c, C> 0$ and $\zeta > 0$ is a geometry-dependent constant (see Proposition \ref{prop:mom2} for more details). The result follows by combining these results and Theorem \ref{thm:main}. \item As in Theorem \ref{thm:main} we also obtain convergence in law and the statistically optimal covariance. \item Importantly, taking the step size to be $\gamma_n \propto \frac{1}{\sqrt{n}}$ provides a single, robust algorithm achieving both the slow $O\big(\frac{1}{\sqrt{n}}\big)$ rate in the absence of strong convexity (by \citet{zhang2016first}) and the fast $O(\frac{1}{n})$ rate in the presence of strong convexity. Thus (Riemannian) averaged SGD automatically adapts to the strong-convexity in the problem without any prior knowledge of its existence (i.e., the value of $\mu$). 
\end{itemize} \subsection{Streaming Principal Component Analysis (PCA)} \label{sec:stream_pca} The framework of geometric optimization is far-reaching, containing even (Euclidean) non-convex problems such as PCA. Recall the classical formulation of streaming $k$-PCA: we are given a stream of i.i.d.~symmetric positive-definite random matrices $H_n \in \mathbb{R}^{d \times d}$ such that $\mathbb{E} H_n\!=\!H$, with eigenvalues $\{ \lambda_i \}_{1 \leq i \leq d}$ sorted in decreasing order, and hope to approximate the subspace of the top~$k$ eigenvectors, $\{ v_i \}_{1 \leq i \leq k}$. Sharp convergence rates for streaming PCA (with $k\!=\!1$) were first obtained by \citet{jain2016streaming} and \citet{shamir16b} using the randomized power method. \citet{shamir2016fast} and \citet{AllenLi2017-streampca} later extended this work to the more general streaming $k$-PCA setting. These results are powerful---particularly because they provide \textit{global} convergence guarantees. For streaming $k$-PCA, a similar dichotomy to the convex setting exists: in the absence of an eigengap ($\lambda_k=\lambda_{k+1}$) one can only attain the slow $O\big(\frac{1}{\sqrt{n}}\big)$ rate, while the fast $O(\frac{1}{n})$ rate is achievable when the eigengap is positive ($\lambda_k>\lambda_{k+1}$). However, as before, a practically burdensome requirement of these fast $O(\frac{1}{n})$ global-convergence guarantees is that the step sizes of the corresponding algorithms depend explicitly on the unknown eigengap\footnote{In this example, the eigengap is analogous to the strong-convexity parameter $\mu$.} of the matrix $H$.
By viewing the $k$-PCA problem as minimizing the Rayleigh quotient, $f(X) = -\frac{1}{2} \tr [X^\top H X]$, over the Grassmann manifold, we show how to apply the Riemannian iterate-averaging framework developed here to derive a fast, \textit{robust} algorithm, \begin{align} X_n = R_{X_{n-1}} \left(\gamma_n H_n X_{n-1}\right) \quad \text{ and } \quad \tilde X_n = R_{\tilde X_{n-1}}\Big(\frac{1}{n} X_{n}X_{n}^\top \tilde X_{n-1}\Big), \label{eq:robust_oja} \end{align} for streaming $k$-PCA. Recall that the Grassmann manifold $\mathcal{G}_{d,k}$ is the set of the $k$-dimensional subspaces of a $d$-dimensional Euclidean space, which we equip with the projection-like, second-order retraction map $R_X(V)=(X+V)[(X+V)^\top(X+V)]^{-1/2}$. Observe that the randomized power method update \citep{OjaKar85}, $ X_n = R_{X_{n-1}}\big( \gamma_n H_n X_{n-1} \big) $, in \eq{robust_oja}, is almost identical to the Riemannian SGD update, $ X_n = R_{X_{n-1}}\big(\gamma_n(I-X_{n-1} X_{n-1}^{\top}) H_n X_{n-1}\big) $, in \eq{grad_desc}. The principal difference between the two is that the randomized power method uses the Euclidean gradient instead of the Riemannian gradient. Similarly, the average $ \tilde X_n = R_{\tilde X_{n-1}}\big(\frac{1}{n} X_{n}X_{n}^\top \tilde X_{n-1}\big)$, considered in \eq{robust_oja}, closely resembles the (Riemannian) streaming average in \eq{ave_grad_desc} (see \myapp{alg_stream}). In fact, we can argue that the randomized power method, Riemannian SGD, and the classic Oja iteration (the linearization of the randomized power method in $\gamma_n$) are equivalent up to $O(\gamma_n^2)$ corrections (see Lemma \ref{lem:equiv_oja}). The average $ \tilde X_n = R_{\tilde X_{n-1}}\big(\frac{1}{n} X_{n}X_{n}^\top \tilde X_{n-1}\big)$ also admits the same linearization as the Riemannian streaming average up to $O(\gamma_n)$ corrections (see Lemma \ref{lem:average_oja}).
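For concreteness, the two-step update in \eq{robust_oja} is only a few lines of code. The following Python sketch is illustrative only (the function names, the choice $\gamma_n = 1/\sqrt{n}$ with unit constant, and the random initialization are ours):

```python
import numpy as np

def retract(X, V):
    # Projection-like retraction R_X(V) = (X + V)[(X + V)^T (X + V)]^{-1/2}.
    Z = X + V
    w, Q = np.linalg.eigh(Z.T @ Z)          # eigendecomposition of the k x k Gram matrix
    return Z @ (Q * w**-0.5) @ Q.T          # multiply by its inverse square root

def averaged_streaming_pca(stream, d, k, seed=0):
    """One pass of the robust iteration: randomized power step with step size
    1/sqrt(n), streaming Riemannian average with step size 1/n."""
    rng = np.random.default_rng(seed)
    X = np.linalg.qr(rng.standard_normal((d, k)))[0]   # random orthonormal start
    X_avg = X.copy()
    for n, H_n in enumerate(stream, start=1):
        X = retract(X, H_n @ X / np.sqrt(n))           # primal power iterate X_n
        X_avg = retract(X_avg, X @ (X.T @ X_avg) / n)  # streaming average step
    return X, X_avg
```

On a stream of rank-one matrices $h_nh_n^\top$ with $\mathbb{E}[h_nh_n^\top] = H$, the averaged iterate tracks the top-$k$ eigenspace of $H$ without any knowledge of the eigengap; in practice the constant in front of $\gamma_n$ matters and would need tuning.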
Using results from \citet{shamir2016fast} and \citet{AllenLi2017-streampca} we can then argue that the randomized power method iterates satisfy a slow rate of convergence under suitable conditions on their initialization. Hence, the present framework is applicable and we can use geometric iterate averaging to obtain a local, robust, accelerated convergence rate. In the following, we will use $\{e_j \}_{1\leq j \leq k}$ to denote the standard basis vectors in $\mathbb{R}^k$. \begin{theorem} \label{thm:oja_main} Let Assumption~\ref{assump:manifold} hold for the set $\mathcal{X} = \{ X : \Vert X_\star^\top X\Vert_F^2 \geq k-\eta \}$, for some constant $0 < \eta <\frac{1}{4}$, where $X_\star$ minimizes $f(X)$ over the $k$-Grassmann manifold. Denote, $\tilde{H}_n = H^{-1/2} H_n H^{-1/2}$, and the 4th-order tensor $C_{ii'jj'} = \mathbb{E} [(v_i^\top \tilde H_n v_j) (v_{i'}^\top \tilde H_n v_{j'})]$. Further assume that $\norm{H_n}_2 \leq 1$ a.s., and that $\lambda_k > \lambda_{k+1}$. Then if $X_n$ and $\tilde{X}_n$ evolve according to \eq{robust_oja}, there exists a positive-definite matrix $C$, such that $\tilde{\Delta}_n = R_{X_{\star}}^{-1}(\tilde{X}_n)$ satisfies: \begin{align} \sqrt{n} \tilde{\Delta}_n \overset{D}{\to} \mathcal{N}(0, C) \quad \text{ with } \quad C = \sum_{j'=1}^k\sum_{i'=k+1}^d \sum_{j=1}^k\sum_{i=k+1}^d C_{ii'jj'} \frac{\sqrt{\lambda_i \lambda_j} \cdot \sqrt{\lambda_{i'} \lambda_{j'}}}{(\lambda_{j}-\lambda_{i}) \cdot (\lambda_{j'}-\lambda_{i'})} (v_i e_j^\top) \otimes (v_{i'} e_{j'}^\top). \notag \end{align} \end{theorem} We make the following observations: \begin{itemize} \item If the 4th-order tensor satisfies\footnote{For example if $H_n = h_n h_n^\top$ for $h_n \sim \mathcal{N}(0, \Sigma)$ -- so $H_n$ is a rank-one stream of Gaussian random vectors -- this condition is satisfied. See the proof of Theorem \ref{thm:oja_main} for more details. 
} $C_{ii'jj'} = \kappa \delta_{ii'}\delta_{jj'}$ for constant $\kappa$, the aforementioned covariance structure simplifies to, \begin{align} C = \kappa \sum_{j=1}^k\sum_{i=k+1}^d\frac{{\lambda_i \lambda_j}}{(\lambda_j-\lambda_i)^2} (v_i e_j^\top) \otimes (v_{i} e_{j}^\top). \notag \end{align} This asymptotic variance matches the result of \citet{reiss2016non}, achieving the same statistical performance as the empirical risk minimizer and matching the lower bound of \citet{CaiMaWu13} obtained for the (Gaussian) spiked covariance model. \item Empirically, even using a constant step size in \eq{robust_oja} appears to yield convergence in a variety of situations; however, we can see a numerical counterexample in \myapp{counter}. We leave it as an open problem to understand the convergence of the iterate-averaged, constant step-size algorithm in the case of Gaussian noise \citep{BouLac85}. \item Assumption~\ref{assump:manifold} could be relaxed using a martingale concentration result showing the iterates $X_n$ are restricted to $\mathcal{X}$ with high probability similar to the work of \citet{shamir16b} and \citet{AllenLi2017-streampca}. \end{itemize} Note that we could also derive an analogous result to Theorem \ref{thm:oja_main} for the (averaged) Riemannian SGD algorithm in \eq{grad_desc} and \eq{ave_grad_desc}. However, we prefer to present the algorithm in \eq{robust_oja} since it is simpler and directly averages the (commonly used) randomized power method. \section{Streaming PCA} \label{sec:stream_pca_app} Given a sequence of i.i.d.~symmetric random matrices $H_n \in \mathbb{R}^{d \times d}$ such that $\mathbb{E} H_n = H$, in the streaming $k$-PCA problem we hope to approximate the subspace of the top $k$ eigenvectors. Let us denote by $\{\lambda_i\}_{1\leq i\leq d}$ the eigenvalues of $H$ sorted in decreasing order. 
Sharp convergence rates and finite sample guarantees for streaming PCA (with $k=1$) were first obtained by \citet{jain2016streaming,shamir16b} using the randomized power method (with and without a positive eigengap $\lambda_1-\lambda_2$). When $\lambda_1>\lambda_2$, \citet{jain2016streaming} showed that, with a proper choice of learning rate $\eta_i \sim {\tilde{O}} \left( \frac{1}{(\lambda_1-\lambda_2)i} \right)$, an $\epsilon$-approximation to the top eigenvector $v_1$ of $H$ could be found in $ {\tilde{O}}(\frac{\lambda_1}{(\lambda_1-\lambda_2)^2} \frac{1}{\epsilon})$ iterations with constant probability. In the absence of an eigengap, \citet{shamir16b} showed a slow rate of convergence $ {\tilde{O}}(\lambda_1/\sqrt{n})$ for the objective function using a step-size choice of $O(1/\sqrt{n})$. \citet{AllenLi2017-streampca,shamir2016fast}\footnote{\citet{shamir2016fast} does not directly address the streaming setting but his result can be extended to the streaming setting, as remarked by \citet{AllenLi2017-streampca}.} later extended these results to the more general streaming $k$-PCA setting (with $k\geq1$). The aforementioned results are quite powerful---because they are \textit{global} convergence results. In particular, they hold for any random initialization and do not require an initialization very close to the optimum. In contrast, our framework only provides local results. However, streaming $k$-PCA still provides an instructive and practically interesting application of our iterate-averaging framework. An important theme in the following analysis is to leverage the underlying Riemannian structure of the $k$-PCA problem as a Grassmann manifold. Throughout this section we will assume that the stream of matrices satisfies the bound $\norm{H_n} \leq 1$ a.s. \subsection{Grassmann Manifolds} \paragraph{Preliminaries:} We begin by reviewing the geometry of the Grassmann manifold and proving several useful auxiliary lemmas.
We denote by $\mathcal{G}_{d,k}$ the Grassmann manifold, which is the set of the $k$-dimensional subspaces of a $d$-dimensional Euclidean space. Recalling that the Stiefel manifold is the submanifold of orthonormal matrices $\{ X\in\mathbb{R}^{d\times k} : X^\top X=I_k\}$, $\mathcal{G}_{d,k}$ can be viewed as the Riemannian quotient manifold of the Stiefel manifold where two matrices are identified as equivalent when their columns span the same subspace. Finally, $\mathcal{G}_{d,k}$ can also be identified with the set of rank $k$ projection matrices $\mathcal{G}_{d,k}=\{ X \in \mathbb{R}^{d\times d} \text{ s.t. } X^\top=X, X^2=X, \tr(X)=k \}$ \citep[see, e.g.,][for further details]{edelman1998geometry,absil2004riemannian}. We will use $\mathbf{X}$ to denote an element of $\mathcal{G}_{d,k}$, and $X$ a corresponding member of the equivalence class associated to $\mathbf{X}$, which belongs to the Stiefel manifold. Further, the tangent space at the point $\mathbf X$ is given by $T_{\mathbf X}\mathcal{G}_{d,k}= \{ Y\in \mathbb{R}^{d\times k}, Y^\top X=0 \}$. In the following we identify $\mathbf X$ and $X$ when it is clear from the context. For our present purposes, we consider the (second-order) retraction map \begin{equation}\label{eq:retraproj} R_X(V)=(X+V)[(X+V)^\top(X+V)]^{-1/2}, \end{equation} which is a projection-like mapping onto $\mathcal{G}_{d,k}$. Note we implicitly extend $R_X$ to all matrices in $\mathbb{R}^{d\times k}$, and do not consider it only defined on the tangent space $T_X\mathcal{G}_{d,k}$. For $V \in T_X\mathcal{G}_{d,k}$, we still have $R_X(V)=(X+V)[I_k+V^\top V]^{-1/2}$. If $X^\top Y$ is invertible, then a short computation shows that $R_X^{-1}(Y)=(I-X X^\top)Y(X^\top Y)^{-1}$ with $\Vert R_X^{-1}(Y)\Vert_F^2=\tr [( X^\top Y Y^\top X)^{-1}-I]$. As we argue next, $\Vert R_X^{-1}(Y)\Vert_F^2$ is in fact locally equivalent to the squared projected Frobenius distance $d_F^2(X,Y)$, where $d_F(X,Y)=2^{-1/2}\Vert X X^\top-Y Y^\top\Vert_F$.
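These closed-form expressions are straightforward to sanity-check numerically. The sketch below is our own illustrative check (dimensions and perturbation size are arbitrary): it verifies that $R_X^{-1}$ inverts $R_X$ up to the choice of subspace representative, checks the norm identity above, and computes $d_F^2$ for comparison:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 7, 3

def retract(X, V):
    # R_X(V) = (X + V)[(X + V)^T (X + V)]^{-1/2}
    Z = X + V
    w, Q = np.linalg.eigh(Z.T @ Z)
    return Z @ (Q * w**-0.5) @ Q.T

def inv_retract(X, Y):
    # R_X^{-1}(Y) = (I - X X^T) Y (X^T Y)^{-1}
    return (Y - X @ (X.T @ Y)) @ np.linalg.inv(X.T @ Y)

X = np.linalg.qr(rng.standard_normal((d, k)))[0]
Y = np.linalg.qr(X + 0.1 * rng.standard_normal((d, k)))[0]   # nearby subspace

V = inv_retract(X, Y)
Z = retract(X, V)
same_subspace = np.allclose(Z @ Z.T, Y @ Y.T, atol=1e-8)     # same point of G_{d,k}

M = X.T @ Y
lhs = np.linalg.norm(V, "fro") ** 2                          # ||R_X^{-1}(Y)||_F^2
rhs = np.trace(np.linalg.inv(M @ M.T) - np.eye(k))           # tr[(X^T Y Y^T X)^{-1} - I]
dF2 = 0.5 * np.linalg.norm(X @ X.T - Y @ Y.T, "fro") ** 2    # d_F(X, Y)^2
```

For such a nearby pair one also observes $d_F^2 \leq \Vert R_X^{-1}(Y)\Vert_F^2 \leq 2d_F^2$, consistent with the local equivalence of these quantities.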
\paragraph{Measuring Distance on $\mathcal{G}_{d,k}$:} It will be useful to have several notions of distance defined on $\mathcal{G}_{d,k}$ between two representative elements $X$ and $Y$. Let $\theta_i$ for $i=1, \hdots, k$ denote the principal angles between the two subspaces spanned by the columns of $X$ and $Y$, i.e., $U \cos(\Theta) V^\top$ is the singular value decomposition (SVD) of $X^\top Y$, where $\Theta$ is the diagonal matrix of principal angles, and $\theta$ is the $k$-vector formed by the $\theta_i$. Our first distance of interest will be the arc length (or geodesic distance): \[ d_A(X, Y) = \norm{\theta}_2 = \norm{{\mathop { \rm Exp{}}}^{-1}_{X}(Y)}_2, \] while the second will be the projected, Frobenius norm: \[ d_F(X, Y) = \norm{\sin \theta}_2 = 2^{-1/2} \norm{X X^\top - Y Y^\top}_{F}. \] The distance $d_F(\cdot, \cdot)$ is induced by embedding $\mathcal{G}_{d,k}$ in Euclidean space $\mathbb{R}^{d \times k}$ and inheriting the corresponding Frobenius norm. Lastly, we will also consider the pseudo-distance induced by the retraction, $\Vert R^{-1}_{X}(Y)\Vert_F$. Conveniently, for small principal angles we can show these quantities are locally equivalent: \begin{lemma} \label{lem:distequiv} Let $\theta$ denote the $k$-vector of principal angles between the subspaces spanned by $X$ and $Y$. If $\norm{\theta}_{\infty} \leq \frac{\pi}{4}$ then: \[ \frac{\pi}{2} \Vert R^{-1}_{X}(Y)\Vert_F \geq \frac{\pi}{2} d_{F}(X, Y) \geq d_{A}(X, Y) \geq \frac{1}{\sqrt{2}} \Vert R^{-1}_{X}(Y)\Vert_F. \] \end{lemma} \begin{proof} In fact, $d_{A}(X, Y) \geq d_{F}(X, Y)$ uniformly over $\theta$, which follows from $\sin x \leq x$ for all $x \geq 0$. Using the elementary inequality $|\frac{\sin x}{x}| \geq \frac{2}{\pi}$ for $x \in [-\pi/2, \pi/2]$ we immediately obtain that $d_{A}(X, Y) \leq \frac{\pi}{2} d_{F}(X, Y)$ if $\norm{\theta}_{\infty} \leq \pi/2$.
A direct computation shows: \[ \Vert R^{-1}_{X}(Y)\Vert^2_F = \tr[ (X^\top YY^\top X)^{-1}-I]= \tr[ \cos(\Theta)^{-2}-I]=\tr[ (I-\sin^2(\Theta))^{-1}-I]. \] Using that for $x\in(0,\frac{1}{2})$, $\frac{1}{1-x}\geq 1+x$ and $\frac{1}{1-x}\leq 1+2x$, we obtain for $\Vert \theta \Vert_\infty<\pi/4$ that, \[ d^2_{F}(X, Y)\leq \Vert R^{-1}_{X}(Y)\Vert^2_F \leq 2d^2_{F}(X, Y),\] concluding the argument. \end{proof} The local equivalence between these quantities will prove useful for relating various algorithms for the streaming $k$-PCA problem. \paragraph{Streaming $k$-PCA in $\mathcal{G}_{d,k}$:} Within the geometric framework we can cast the $k$-PCA problem as minimizing the Rayleigh quotient, $f(X) = -\frac{1}{2} \tr [X^\top H X]$, over $\mathcal{G}_{d,k}$ as \[\min _{X \in \mathcal{G}_{d,k}} -\frac{1}{2} \tr [X^\top H X].\] \citet{edelman1998geometry} show the (Riemannian) gradient of $f$\footnote{Note that on an embedded manifold the Riemannian gradient of a function is given by the projection of its Euclidean gradient onto the tangent space of the manifold at the point.} is given by \[\nabla f (X)=-(I-X X^\top) H X.\] Similarly, the Hessian operator is characterized by the property that if $\Delta\in T_X\mathcal{G}_{d,k}$, then \[\nabla^2 f (X)[\Delta] =\Delta X^\top H X-(I -X X^\top)H\Delta.\] It is worth noting that $\nabla f (X) = 0$ for any $X$ whose columns span an invariant subspace of $H$, but that $\nabla^2 f(X)$ is only positive-definite at the optimum $X_\star$. \subsection{Algorithms for streaming $k$-PCA} \label{sec:alg_stream} We now have enough background to describe several well-known iterative algorithms for the streaming $k$-PCA problem and elucidate their relationship to Riemannian SGD.
\begin{description} \item[Randomized power method \citep{OjaKar85}:] corresponds to SGD on the Rayleigh quotient (with step size $\gamma_n$) over Euclidean space followed by a projection, \[ X_n = ( I_d + \gamma_n H_n ) X_{n-1} \big[ X_{n-1}^\top ( I_d + \gamma_n H_n ) ^{2}X_{n-1} \big]^{-1/2}. \] \item[Oja iteration \citep{Oja82}:] corresponds to a first-order expansion in $\gamma_n$ of the previous randomized power iteration, \[ X_n = X_{n-1} + \gamma_n ( I - X_{n-1} X_{n-1}^\top) H_n X_{n-1}. \] \item[Yang iteration \citep{Yan95}:] corresponds to a symmetrization of the Oja iteration, \[ X_{n} = X_{n-1} + \gamma_n (2 H_n - X_{n-1} X_{n-1}^\top H_n- H_n X_{n-1} X_{n-1}^\top) X_{n-1}. \] In the special case that $H_n=h_n h_n^\top$, the Yang iteration can also be related to the unconstrained stochastic optimization of the function $X\mapsto\mathbb{E} \Vert h_n- X X^\top h_n\Vert_F^2$. \item[(Stochastic) Gradient Descent over $\mathcal{G}_{d,k}$ \citep{bonnabel2013stochastic}:] corresponds to directly optimizing the Rayleigh quotient over $\mathcal{G}_{d,k}$ by equipping SGD with either the exponential map ${\mathop { \rm Exp{}}}$ or the aforementioned retraction $R$, \[ X_n = R_{X_{n-1}}\big(\gamma_n(I-X_{n-1} X_{n-1}^{\top}) H_n X_{n-1}\big). \] \end{description} We are now in a position to show that, for the present problem, Riemannian SGD, the randomized power method, and Oja's iteration are equivalent updates up to $O(\gamma_n^2)$ corrections. First note that the randomized power method (with the aforementioned choice of retraction $R$) can be written as: \begin{align} R_{X}(\gamma_n H_nX), \label{eq:rand_power} \end{align} bearing close resemblance to the Riemannian SGD update, \begin{align} R_{X}(\gamma_n(I-X X^{\top}) H_nX). \label{eq:rie_sgd} \end{align} The principal difference between \eq{rie_sgd} and \eq{rand_power} is that in the randomized power method, the Euclidean gradient is used instead of the Riemannian gradient.
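This near-equivalence is easy to observe numerically. In the sketch below (our own illustrative check; the matrix, base point, and step sizes are arbitrary), the gap between the randomized power step and the Riemannian SGD step, and between the power step and the Oja step, shrinks quadratically as the step size decreases:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3
X0 = np.linalg.qr(rng.standard_normal((d, k)))[0]
A = rng.standard_normal((d, d))
H = A @ A.T / d  # an arbitrary symmetric PSD "sample" matrix

def retract(X, V):
    Z = X + V
    w, Q = np.linalg.eigh(Z.T @ Z)
    return Z @ (Q * w**-0.5) @ Q.T

def power_step(X, g):
    # Randomized power method: retract along the Euclidean gradient H X.
    return retract(X, g * H @ X)

def rsgd_step(X, g):
    # Riemannian SGD: retract along the projected gradient (I - X X^T) H X.
    return retract(X, g * (H @ X - X @ (X.T @ H @ X)))

def oja_step(X, g):
    # Oja iteration: first-order expansion, no retraction.
    return X + g * (H @ X - X @ (X.T @ H @ X))

def gaps(g):
    return (np.linalg.norm(power_step(X0, g) - rsgd_step(X0, g)),
            np.linalg.norm(power_step(X0, g) - oja_step(X0, g)))
```

Shrinking the step size tenfold should shrink both gaps roughly a hundredfold, i.e. the updates differ by $O(\gamma_n^2)$ corrections.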
In the following lemma we argue that both of the updates in \eq{rie_sgd} and \eq{rand_power} can be approximated by the Oja iteration, $X_{n+1}=X_n+\gamma_{n+1}(I-X_n X_n^{\top}) H_{n+1}X_n+O(\gamma_{n+1}^2)$, up to second-order terms, and hence that $\Vert R_{X_\star}^{-1}(X_{n+1}) -R_{X_\star}^{-1}(R_{X_n}(\gamma_{n+1}\nabla f_{n+1}(X_n)))\Vert_F=O(\gamma_{n+1}^2)$. Therefore, a direct modification of Lemma \ref{lem:tangent_rec} shows that the iterates in \eq{rand_power} generated from the randomized power method may also be linearized in the tangent space. \begin{lemma} \label{lem:equiv_oja} Let $X_n = R_{X_{n-1}}(\gamma_n(I-X_{n-1} X_{n-1}^{\top}) H_n X_{n-1})$ denote the Riemannian SGD update (equipped with the second-order retraction $R$), or let $X_n = R_{X_{n-1}}(\gamma_n H_n X_{n-1})$ denote the randomized power update. Then both updates satisfy \[ X_{n}=X_{n-1}+\gamma_{n}(I-X_{n-1} X_{n-1}^{\top}) H_{n}X_{n-1}+O(\gamma_{n}^2), \] and hence are equivalent to the Oja update up to $O(\gamma_n^2)$ terms. \end{lemma} \begin{proof} The computation for both the randomized power method and the Riemannian SGD update (equipped with the retraction $R$) is straightforward. For the randomized power method we have that, \begin{eqnarray*} X_n & = & ( I_d + \gamma_n H_n ) X_{n-1} \big[ X_{n-1}^\top ( I_d + \gamma_n H_n ) ^{2}X_{n-1} \big]^{-1/2} \\ & = & ( I_d + \gamma_n H_n ) X_{n-1} \big[ I_k + 2 \gamma_n X_{n-1}^\top H_n X_{n-1} \big]^{-1/2} + O(\gamma_n^2) \\ & = & ( I_d + \gamma_n H_n ) X_{n-1} \big[ I_k - \gamma_n X_{n-1}^\top H_n X_{n-1}\big] + O(\gamma_n^2)\\ & = & X_{n-1}+\gamma_n [I_d -X_{n-1}X_{n-1}^\top] H_n X_{n-1}+ O(\gamma_n^2).
\end{eqnarray*} An identical computation shows the same result for the Riemannian SGD update, \begin{align} X_n & = \big( I_d + \gamma_n [I_d -X_{n-1}X_{n-1}^\top] H_n \big) X_{n-1} \big[ X_{n-1}^\top( I_d + \gamma_n [I_d -X_{n-1}X_{n-1}^\top] H_n ) ^{2}X_{n-1} \big]^{-1/2} \notag \\ & = X_{n-1}+\gamma_n [I_d -X_{n-1}X_{n-1}^\top] H_n X_{n-1}+ O(\gamma_n^2). \notag \end{align} \end{proof} Since these two algorithms are identical up to $O(\gamma_n^2)$ corrections, we can directly show they will have the same linearization in $T_{X_{\star}} \mathcal{M}$. \begin{lemma} \label{lem:oja_linear} Let $\Delta_n=R_{X_\star}^{-1}(X_{n})$, where $X_n$ is obtained from one iteration of the randomized power method, $X_n=R_{X_{n-1}}(\gamma_n H_n X_{n-1})$. Then $\Delta_n$ obeys, \[ \Delta_n=\Delta_{n-1} -\gamma_n \nabla^2 f(X_\star) \Delta_{n-1}+ \gamma_n (\varepsilon_n+\xi_{n}+e_{n}) +O(\gamma_n^2). \] \end{lemma} \begin{proof} Let $\dot \Delta_n=R_{X_\star}^{-1}(Y_{n})$, where $Y_n$ is obtained from one iteration of Riemannian SGD (with the aforementioned second-order retraction $R$), $Y_n=R_{X_{n-1}}(\gamma_n \nabla f_n(X_{n-1}))$. From Lemma \ref{lem:tangent_rec_3} we have that $\dot \Delta_n$ may be linearized as \[ \dot\Delta_{n}=\Delta_{n-1} -\gamma_n \nabla^2 f(X_\star) \Delta_{n-1}+ \gamma_n (\varepsilon_n+\xi_{n}+e_{n}). \] Defining $\square= X_{n-1}+\gamma_n [I_d -X_{n-1}X_{n-1}^\top] H_n X_{n-1}$ for the Oja update, and using $R^{-1}_{X_\star}(Y)=Y(X_\star^\top Y)^{-1}-X_\star$, we can then show the randomized power method satisfies, \[ R^{-1}_{X_\star}(X_n)= X_n(X_\star^\top X_n)^{-1}-X_\star=[\square +O(\gamma_n^2)]\big(X_\star^\top [\square +O(\gamma_n^2)]\big)^{-1}-X_\star=\square (X_\star^\top \square )^{-1}-X_\star+O(\gamma_n^2). \] This allows us to directly bound the difference between $\dot\Delta_{n}$ and $\Delta_n$ using Lemma \ref{lem:equiv_oja}, \[ \Vert \dot \Delta_n- \Delta_n \Vert_F= \Vert R^{-1}_{X_\star}(Y_n)-R^{-1}_{X_\star}(X_n) \Vert_F = \Vert Y_n(X_\star^\top Y_n)^{-1}-X_n(X_\star^\top X_n)^{-1} \Vert_F = O(\gamma_n^2).
\] Hence the randomized power method iterate $X_n$ obeys the linearization, \[ \Delta_n=\Delta_{n-1} -\gamma_n \nabla^2 f(X_\star) \Delta_{n-1}+ \gamma_n (\varepsilon_n+\xi_{n}+e_{n}) +O(\gamma_n^2), \] and falls within the scope of our framework. \end{proof} \subsection{Algorithms for Streaming Averaging} Just as there are several algorithms to compute the primal iterate $X_n$, there are several reasonable ways to compute the streaming average $\tilde X_n$. Our general framework directly considers $\tilde X_n=R_{\tilde X_{n-1}}[\frac{1}{n}R^{-1}_{\tilde X_{n-1}}(X_n)]$, which leads to the following update rule: \begin{equation}\label{eq:averageojaret} \tilde X_n=R_{\tilde X_{n-1}}\Big[\frac{1}{n}(I -\tilde X_{n-1}\tilde X_{n-1}^\top)X_n[\tilde X_{n-1}^\top X_n]^{-1}\Big]. \end{equation} As described in \mysec{com}, the streaming average is an approximation to a corresponding global Riemannian average (which is intractable to compute). Hence, it is reasonable to consider other global Riemannian averages that the streaming average approximates. For instance, the update rule in \eq{averageojaret} is naturally motivated by the global minimization of $X\mapsto\sum_{i=1}^n\Vert R^{-1}_X(X_i)\Vert^2$. Considering instead the distance $d_F$, and attempting to minimize $X\mapsto\sum_{i=1}^n d^2_F(X,X_i)$, suggests a different averaging scheme. A short computation shows the aforementioned problem can be rewritten as the maximization of the function $\tr [X^\top(\sum_{i=1}^n X_iX_i^\top)X]$ and is therefore precisely equivalent to the $k$-PCA problem. With this in mind, we can directly use the randomized power method to compute the streaming average of the iterates.
This leads to the different update rule: \begin{equation}\label{eq:averageoja} \tilde X_n=R_{\tilde X_{n-1}}\Big[\frac{1}{n}X_nX_n^\top \tilde X_{n-1}\Big], \end{equation} which is exactly one step of the randomized power method applied to computing the first $k$ eigenvectors of the matrix $\frac{1}{n}\sum_{i=1}^nX_iX_i^\top$ with step size $\gamma_n=\frac{1}{n}$. Using similar computations to those in Lemmas \ref{lem:equiv_oja} and \ref{lem:oja_linear}, it can be shown that these iterates are equivalent to those in \eq{averageojaret} since they have the same linearization in $T_{X_\star}\mathcal{G}_{d,k}$. \begin{lemma}\label{lem:average_oja} Let $\tilde X_n=R_{\tilde X_{n-1}}\Big[\frac{1}{n}X_nX_n^\top \tilde X_{n-1}\Big]$ and $\tilde \Delta_n=R_{X_\star}^{-1}(\tilde X_n)$. Then $\tilde \Delta_n$ obeys \[ \tilde \Delta_n=\tilde \Delta_{n-1}+\frac{1}{n}\big[ \Delta_{n}-\tilde \Delta_{n-1} \big]+ \frac{1}{n}O\big( \Vert \Delta_{n}\Vert^2+\Vert \tilde\Delta_{n-1}\Vert^2 \big)+O\big(\tfrac{1}{n^2}\big). \] \end{lemma} \begin{proof} Following a similar approach as in the proof of Lemma \ref{lem:tangent_rec}, expanding the retraction and its normalization term, we can show that \[ \tilde \Delta_n=\tilde \Delta_{n-1}+\frac{1}{n+1}\Big([I-X_\star X_\star^\top]X_nX_n^\top \tilde X_{n-1}-\tilde \Delta_{n-1}\Big)+O\big(\tfrac{1}{n^2}\big). \] Then a direct expansion shows that for all $\Delta = R^{-1}_{X_\star}(X)$, \[ X=R_{X_\star}(\Delta)=(X_\star+\Delta)[I+\Delta^\top\Delta]^{-1/2}=X_\star+\Delta+O(\Vert\Delta\Vert^2), \] so we obtain (using $\Vert\Delta_n^\top \tilde\Delta_{n-1}\Vert=O(\Vert\Delta_n\Vert^2+\Vert \tilde \Delta_{n-1}\Vert^2)$) that \[ [I-X_\star X_\star^\top]X_nX_n^\top \tilde X_{n-1}=\Delta_n+O(\Vert\Delta_n\Vert^2+\Vert \tilde \Delta_{n-1}\Vert^2). \] \end{proof} This manner of iterate averaging is interesting, not only due to its simplicity, but also due to its close connection to the primal, randomized power method.
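The two averaging rules are cheap to compare side by side. In the sketch below (synthetic and illustrative: the points $X_i$ are small random perturbations of a reference point $X_\star$, standing in for late SGD iterates), \eq{averageojaret} and \eq{averageoja} are run on the same sequence; the two averages essentially coincide and both lie much closer to $X_\star$ than a typical $X_i$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, eps, n_pts = 6, 2, 0.05, 200

def retract(X, V):
    Z = X + V
    w, Q = np.linalg.eigh(Z.T @ Z)
    return Z @ (Q * w**-0.5) @ Q.T

def d_F(X, Y):
    # Projected Frobenius distance 2^{-1/2} ||X X^T - Y Y^T||_F.
    return np.linalg.norm(X @ X.T - Y @ Y.T) / np.sqrt(2)

X_star = np.linalg.qr(rng.standard_normal((d, k)))[0]
# Points scattered around X_star (stand-ins for the primal iterates X_n).
points = [retract(X_star, eps * rng.standard_normal((d, k))) for _ in range(n_pts)]

avg_ret = points[0].copy()  # streaming average via the inverse retraction
avg_pow = points[0].copy()  # streaming average via one power-method step
for n, Xn in enumerate(points[1:], start=2):
    G = avg_ret.T @ Xn
    avg_ret = retract(avg_ret, (Xn - avg_ret @ G) @ np.linalg.inv(G) / n)
    avg_pow = retract(avg_pow, Xn @ (Xn.T @ avg_pow) / n)
```

Since the two rules share the same linearization, their outputs differ only at higher order in the dispersion of the points.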
In fact, this averaging method allows us to interpret the aforementioned streaming, averaged PCA algorithm as a preconditioning method. Consider the case where we hope to compute the principal $k$-eigenspace of a poorly conditioned matrix $H$. The aforementioned streaming, averaged PCA algorithm can be interpreted as the composition of two stages: first, running $n$ steps of the randomized power method with step size $\gamma_n\propto1/\sqrt{n}$ to produce a well-conditioned matrix $\frac{1}{n}\sum_{i=1}^nX_iX_i^\top$, and then using the randomized power method with step size $1/n$ to compute the average of the points $X_i$ (which is efficient since the eigengap of $\frac{1}{n}\sum_{i=1}^nX_iX_i^\top$ is large). The intuition for this first step is formalized in the following remark: \begin{remark} Assume the sequence of iterates $\{ X_i \}_{i=0}^{n}$ satisfies $d_F(X_i, X_\star) = {O}(\sqrt{\gamma})$ for all $i=1, \dots, n$. Then the eigengap of the averaged matrix $\tilde{X} = \frac{1}{n} \sum_{i=1}^{n} X_i X_i^\top$ satisfies $\tilde \lambda_{k}-\tilde \lambda_{k+1} \geq 1-kO(\sqrt{\gamma})$ (where we denote by $\{\tilde \lambda_i\}_{1\leq i\leq d}$ the eigenvalues of $\tilde X$ sorted in decreasing order). \end{remark} \begin{proof} We have $X_i X_i^\top = X_\star X_\star^\top + \eta_i$ for $\eta_i$ satisfying $\norm{\eta_i}_{F} \leq O(\sqrt{\gamma})$ by the definition of the distance $d_{F}(\cdot, \cdot)$, so it follows by the triangle inequality that $\tilde{X}_n = X_\star X_\star^{\top} + \eta$ for $\eta$ satisfying $\norm{\eta}_{F} \leq O(\sqrt{\gamma})$. Now using the Weyl inequalities \citep{horn1990matrix} we have that: \[ \lambda_{k}(\tilde{X}_n - \eta + \eta) \geq \lambda_{k}(\tilde{X}_n - \eta) + \lambda_d(\eta), \] where we denote by $\lambda_k(M)$ the $k$-th largest eigenvalue of the matrix $M$.
Moreover $\lambda_{k}(\tilde{X}_n - \eta)=1$ and $|\lambda_d(\eta)| \leq \norm{\eta}_F \leq O(\sqrt{\gamma})$ since the spectral norm is upper bounded by the Frobenius norm, so $\lambda_{k}(\tilde{X}_n) \geq 1-O(\sqrt{\gamma})$. Now, recall that each $X_i \in \mathcal{G}_{d,k}$, so $\Tr[\tilde{X}_n]= \frac{1}{n} \sum_{i=1}^{n} \Tr[X_i X_i^\top] = k$ due to the normalization constraint $X_i^\top X_i = I_{k}$. Since $\tilde{X}_n$ is an average of projection matrices it is positive semi-definite, and its top $k$ eigenvalues are each at least $1-O(\sqrt{\gamma})$; thus we must have that $\tilde \lambda_{k+1} \leq k O(\sqrt{\gamma})$, which implies $\tilde \lambda_k(\tilde{X}_n)-\tilde\lambda_{k+1}(\tilde{X}_n) \geq 1-kO(\sqrt{\gamma})$. \end{proof} For these reasons we prefer to use \eq{averageoja} rather than \eq{averageojaret} in our experiments and our presentation. It is worth noting that since both iterations are equivalent up to $O(\gamma_n)$ corrections, they will (a) have the same theoretical guarantees in our framework and (b) perform similarly in practice. \subsection{Convergence Results} We are now ready to prove Theorem \ref{thm:oja_main} for the streaming $k$-PCA problem; what remains is the tedious task of verifying our various technical assumptions. Since we only seek to derive a local convergence result, we will once again use Assumption \ref{assump:manifold}, which stipulates that the iterates $X_n$ are restricted to $\mathcal{X}$ and that the map $X\mapsto\norm{R_{X_\star}^{-1}(X)}_F^2$ is strongly convex. Here we will take the set $\mathcal{X}=\{Y : d_F(X_\star, Y)\leq \delta \}$, for $\delta>0$. As previously noted, if the retraction $R$ were taken as the exponential map, the map $X\mapsto \norm{{\mathop { \rm Exp{}}}_{X_\star}^{-1}(X)}_F^2$ would always be retraction strongly convex locally in a ball around $X_\star$ whose radius depends on the curvature, as explained by \citet{Afs11}.
However, we can also verify that, even when $R$ is the aforementioned projection-like retraction, it is locally retraction strongly convex, \begin{remark} \label{rmk:strongcvxpca} Let $k=1$ and take $R$ as the second-order retraction defined in \eq{retraproj}. Then there exists a constant $\delta>0$ such that on the set $\mathcal{X} = \{Y : \abs{X_\star^\top Y} \geq 1-\delta \}$, the map $X\mapsto \norm{R_{X_\star}^{-1}(X)}_F^2$ is retraction strongly convex. \end{remark} \begin{proof} For notational convenience, consider the set $\mathcal{Y} = \{ Y : \abs{X_\star^\top Y} \geq 1-\epsilon \}$ for some $\epsilon > 0$. It suffices to show that for $\norm{X_\star}_2=1$, all $Y$ such that $\norm{Y}_2=1$ and $\abs{X_\star^\top Y} \geq 1-\epsilon$, and all $V$ such that $\norm{V}_2=1$, the map $g : t \mapsto \norm{R_{X_\star}^{-1}(R_{Y}(t V))}_F^2$ is convex in $t$. For $k=1$, using the previous formulas for the retraction and its inverse norm, we can explicitly compute the map as $g(t) = \frac{1+t^2}{(X_\star^\top(Y+tV))^2}-1$. We then have that $g''(t) = \frac{2 (3 (X_\star^\top V)^2-2t (X_\star^\top V)(X_\star^\top Y)+ (X_\star^\top Y)^2)}{(t (X_\star^\top V) +(X_\star^\top Y))^4}$. It suffices to show that $3 (X_\star^\top V)^2+ (X_\star^\top Y)^2 > 2t (X_\star^\top V)(X_\star^\top Y)$ for all $t$ such that $R_{Y}(t V) \in \mathcal{Y}$. Since $V \in T_{Y}\mathcal{M}$ we must have that $V^\top Y = 0$, which implies that $(V^\top X_\star)^2 = (V^\top (Y-X_\star))^2 \leq 2-2Y^\top X_{\star} \leq 2 \epsilon$. Hence we have that $2\abs{(X_\star^\top V)(X_\star^\top Y)} \leq 2\sqrt{2\epsilon}$. On the other hand, we have that $3 (X_\star^\top V)^2+ (X_\star^\top Y)^2 \geq (X_\star^\top Y)^2 \geq (1-\epsilon)^2$. Thus the statement is satisfied for all $\abs{t} \leq \frac{(1-\epsilon)^2}{2\sqrt{2\epsilon}}$. Using the local equivalence of distances for PCA (Lemma \ref{lem:distequiv}), the conclusion holds for some constant $\delta>0$.
\end{proof} A similar, but lengthier, linear algebra computation shows the result for $k > 1$. Theorem \ref{thm:oja_main} requires several other assumptions (namely Assumptions \ref{assump:HessianLip} and \ref{assump:noiseLip}) which are (surprisingly) tedious to verify even in the simple case when $f$ is the quadratic Rayleigh quotient. However, we note that for the streaming $k$-PCA problem, Lemma \ref{lem:tangent_rec_2} can also be derived from first principles, circumventing the need to directly check these assumptions. \begin{remark} If $f$ is the Rayleigh quotient, the conclusion of Lemma \ref{lem:tangent_rec_2} follows (without having to directly verify Assumptions \ref{assump:HessianLip} and \ref{assump:noiseLip}) when the stream of matrices satisfies the almost sure bound $\Vert H_n\Vert\leq1$. \end{remark} \begin{proof} As in the proof of Lemma \ref{lem:average_oja}, we use \[ X_n=R_{X_\star}(\Delta_n)=(X_\star+\Delta_n)[I+\Delta_n^\top\Delta_n]^{-1/2}=X_\star+\Delta_n+O(\Vert\Delta_n\Vert^2), \] to simplify $\nabla f_{n+1}(X_n)$. This yields \begin{eqnarray*} \nabla f_{n+1}(X_n) &=&(I-X_nX_n^\top)H_{n+1}X_n\\ &=&(I-(X_\star +\Delta_n)(X_\star+\Delta_n)^\top)H_{n+1}(X_\star +\Delta_n)+O(\Vert\Delta_n\Vert^2)\\ &=&(I-X_\star X_\star^\top)H_{n+1}X_\star+(I-X_\star X_\star^\top)H_{n+1}\Delta_n-\Delta_n X_\star^\top H_{n+1}X_\star\\ &&- X_\star \Delta_n^\top H_{n+1}X_\star+O(\Vert\Delta_n\Vert^2).
\end{eqnarray*} Upon projecting back into $T_{X_\star}\mathcal{M}$, the term $ X_\star \Delta_n^\top H_{n+1}X_\star$ vanishes and we obtain, \begin{eqnarray*} (I-X_\star X_\star^\top)\nabla f_{n+1}(X_n) &=& \nabla^2 f(X_\star)\Delta_n+\nabla f_{n+1}(X_\star)+\xi_{n+1}+O(\Vert\Delta_n\Vert^2), \end{eqnarray*} with $\xi_{n+1}=(I-X_\star X_\star^\top)(H_{n+1}-H)\Delta_n-\Delta_n X_\star^\top (H_{n+1}-H)X_\star$ satisfying $\mathbb{E}[\xi_{n+1}|\mathcal{F}_{n}]=0$ and $\mathbb{E}[\Vert\xi_{n+1}\Vert^2|\mathcal{F}_{n}]=O(\Vert \Delta_n\Vert^2)$, since $\mathbb{E}[\Vert H-H_{n+1}\Vert^2|\mathcal{F}_{n}]$ is bounded (recall that we assume $\norm{H_n} \leq 1$ a.s.). \end{proof} Using results from the work of \citet{AllenLi2017-streampca} and \citet{shamir2016fast}, we can now argue under appropriate conditions that the randomized power method for the streaming $k$-PCA problem will converge in expectation to a neighborhood of $X_\star$. Namely, \begin{lemma}[\cite{AllenLi2017-streampca} and \cite{shamir2016fast}] \label{lem:oja_converge} Let $X_n$ denote the iterates of the randomized power method evolving as in \eq{rand_power}. If Assumption \ref{assump:manifold} holds for $\mathcal{X}$ defined above with $\delta<1/4$ then, \[ \mathbb{E}[d_{F}^2 (X_\star, X_n)] = O(\gamma_n^2). \] \end{lemma} \begin{proof} This is a direct adaptation of Lemma 10 by \citet{shamir2016fast} as explained by \citet[][Section 3]{AllenLi2017-streampca}. \end{proof} Finally, we are able to present the proof of Theorem \ref{thm:oja_main}. Since the assumptions of Lemma \ref{lem:oja_converge} hold, and we have asymptotic equivalence of distances from Lemma \ref{lem:distequiv}, Assumption \ref{assump:slowrate} is satisfied. Further, since Lemmas \ref{lem:oja_linear} and \ref{lem:average_oja} show the linearized process of the randomized power iterates is equivalent to that of Riemannian SGD up to $O(\gamma_n^2)$ corrections, distributional convergence immediately follows from these lemmas and Theorem \ref{thm:main}.
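To make the randomized power iteration concrete, the following minimal sketch runs it on a synthetic rank-one Gaussian stream (a sketch only: all parameter values, the helper name \texttt{polar\_retract}, and the choice of polar-factor retraction are our illustrative assumptions, not the paper's exact configuration in \eq{rand_power}):

```python
import numpy as np

def polar_retract(Y):
    # Map a full-rank d x k matrix to its polar factor, i.e. the closest
    # matrix with orthonormal columns (a second-order retraction).
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
d, k, n_iter = 20, 3, 4000

# Covariance H with a clear eigengap between lambda_k and lambda_{k+1}.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
lam = 1.0 / np.arange(1, d + 1)
lam[:k] += 0.5
H = (Q * lam) @ Q.T
X_star = Q[:, :k]                    # top-k eigenvectors of H

X = polar_retract(rng.standard_normal((d, k)))
d0 = np.linalg.norm(X @ X.T - X_star @ X_star.T)  # initial subspace distance
for n in range(1, n_iter + 1):
    h = (Q * np.sqrt(lam)) @ rng.standard_normal(d)  # h_n ~ N(0, H); H_n = h h^T
    X = polar_retract(X + (1.0 / np.sqrt(n)) * np.outer(h, h @ X))

dist = np.linalg.norm(X @ X.T - X_star @ X_star.T)  # shrinks toward a noise floor
```

With a decaying step size the iterates approach the principal subspace but, without averaging, stall at a noise floor set by the step size; averaging the linearized iterates in the tangent space at the estimate is what recovers the fast $O(1/n)$ rate analyzed above.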
Lastly, we can compute the asymptotic variance. We first compute the inverse of the Hessian of $f$. Let us consider the following basis of $T_{X_\star}\mathcal{G}_{d,k}$, $\{v_i e_j^\top \}_{k<i\leq d;j\leq k}$, where we denote by $v_i$ the eigenvector of $H$ associated with the eigenvalue $\lambda_i$ and $e_j$ the $j$th standard basis vector. Indeed $\{v_i e_j^\top \}_{k<i\leq d;j\leq k}$ is a linearly independent set since $\{v_i e_j^\top \}_{i\leq d;j\leq k}$ is a basis of $\mathbb{R}^{d\times k}$. Moreover, for $k<i\leq d$, $X_\star^\top v_i e_j^\top=0$, so $ v_i e_j^\top\in T_{X_\star}\mathcal{G}_{d,k}$. We conclude from a dimension count that this set is a basis. We now compute the action of the Hessian on this basis \begin{eqnarray*} \nabla^2 f(X_\star)[v_ie_j^\top]&=& v_ie_j^\top X_\star^\top H X_\star - Hv_ie_j^\top\\ &=& v_ie_j^\top \mathop{\rm diag}(\lambda_1,\dots, \lambda_k)-\lambda_i v_ie_j^\top\\ &=&(\lambda_j-\lambda_i) v_ie_j^\top. \end{eqnarray*} Therefore \[ [\nabla^2 f(X_\star)]^{-1}[v_ie_j^\top]= \frac{v_ie_j^\top}{\lambda_j-\lambda_i}. \] We now reparametrize $\tilde H_n= H^{-1/2} H_n H^{-1/2} $ such that $\mathbb{E}[\tilde H_n]= I$. Thus \begin{eqnarray*} \nabla f_n(X_\star) &=& (I-X_\star X_\star ^\top)H_nX_\star \\ &=&(I-X_\star X_\star ^\top)H^{1/2} \tilde H_nH^{1/2} X_\star\\ &=& \sum_{i=k+1}^d \sqrt{\lambda_i} v_iv_i^\top \tilde H_n\sum_{j=1}^k \sqrt{\lambda_j}v_j e_j^\top\\ &=& \sum_{j=1}^k\sum_{i=k+1}^d\sqrt{\lambda_i \lambda_j} [v_i^\top \tilde H_n v_j] v_i e_j^\top. \end{eqnarray*} This yields \[ [\nabla^2 f(X_\star)]^{-1}\nabla f_n(X_\star) = \sum_{j=1}^k\sum_{i=k+1}^d\frac{\sqrt{\lambda_i \lambda_j}}{\lambda_j-\lambda_i}[v_i^\top \tilde H_n v_j]v_ie_j^\top.
\] And the asymptotic covariance becomes, \begin{align} & [\nabla^2 f(X_\star)]^{-1}\mathbb{E}[\nabla f_n(X_\star)\nabla f_n(X_\star)^\top] [\nabla^2 f(X_\star)]^{-1}= \notag \\ & \sum_{j'=1}^k\sum_{i'=k+1}^d \sum_{j=1}^k\sum_{i=k+1}^d\frac{\sqrt{\lambda_i \lambda_j} \cdot \sqrt{\lambda_{i'} \lambda_{j'}}}{(\lambda_{j}-\lambda_{i}) \cdot (\lambda_{j'}-\lambda_{i'})} \mathbb{E} \left[\left(v_i^\top \tilde H_n v_j\right) \left(v_{i'}^\top \tilde H_n v_{j'}\right)\right](v_i e_j^\top) \otimes (v_{i'} e_{j'}^\top). \notag \end{align} It is interesting to note that the tensor structure of the coefficients $C_{ii'jj'}$ below simplifies significantly if $H_n = h_n h_n^\top$ for $h_n \sim \mathcal{N}(0, \Sigma)$, i.e., if $H_n$ is a rank-one stream of Gaussians. Recall that $\tilde{H}_n = H^{-1/2} H_n H^{-1/2} = H^{-1/2} h_n h_n^\top H^{-1/2} = h'_n (h'_n)^\top$, where $h'_n = H^{-1/2} h_n$. So for a rank-one stream, \begin{align} C_{ii'jj'} = \mathbb{E} \left[\left(v_i^\top \tilde H_n v_j\right) \left(v_{i'}^\top \tilde H_n v_{j'}\right)\right] = \mathbb{E} \left[ \langle v_i, h'_n \rangle \langle v_{i'}, h'_n \rangle \langle v_j, h'_n \rangle \langle v_{j'}, h'_n \rangle \right]. \notag \end{align} Since $h'_n \sim \mathcal{N}(0, I_d)$ (recall that $\Sigma = \mathbb{E}[H_n] = H$) and the law of a standard multivariate normal random vector is invariant under orthogonal rotation, the joint distribution of the vector $[\langle v_i, h'_n \rangle, \langle v_{i'}, h'_n \rangle, \langle v_j, h'_n \rangle, \langle v_{j'}, h'_n \rangle] \sim \mathcal{N}(0, I_4)$ for $i \neq i'$ and $j \neq j'$ (note that $i, i' > k \geq j, j'$, so all four indices are then distinct). So, by Isserlis' theorem, the only non-vanishing terms become, \begin{align} C_{ii'jj'} = \delta_{ii'} \delta_{jj'}, \notag \end{align} and the asymptotic covariance reduces to, \begin{align} C = \sum_{j=1}^k\sum_{i=k+1}^d\frac{{\lambda_i \lambda_j}}{(\lambda_j-\lambda_i)^2} (v_i e_j^\top) \otimes (v_{i} e_{j}^\top).
\notag \end{align} This is precisely the same asymptotic variance given by \citet{reiss2016non} and matches the lower bound of \citet{CaiMaWu13} obtained for the (Gaussian) spiked covariance model. However, formally, our result requires the $H_n$ to be a.s. bounded. \section{Conclusions} We have constructed and analyzed a geometric framework on Riemannian manifolds that generalizes the classical Polyak-Ruppert iterate-averaging scheme. This framework is able to accelerate a sequence of slowly-converging iterates to an iterate-averaged sequence with a robust $O(\frac{1}{n})$ rate. We have also presented two applications, to the class of geodesically-strongly-convex optimization problems and to streaming $k$-PCA. Note that our results only apply locally, requiring the iterates to be constrained to lie in a compact set $\mathcal{X}$. Considering a projected variant of our algorithm as in \citet{FlaBac17} is a promising direction for further research that may allow us to remove this restriction. Another interesting direction is to provide a global-convergence result for the iterate-averaged PCA algorithm presented here. \subsection*{Acknowledgements} The authors thank Nicolas Boumal and John Duchi for helpful discussions. Francis Bach acknowledges support from the European Research Council (grant SEQUOIA 724063), and Michael Jordan acknowledges support from the Mathematical Data Science program of the Office of Naval Research under grant number N00014-15-1-2670. \section{Experiments} \label{sec:experiments} Here, we illustrate the practical utility of our results on a synthetic, streaming $k$-PCA problem using the SGD algorithm defined in \eq{robust_oja}. We take $k=10$ and $d=50$. The stream is $H_n = h_n h_n^\top$, where $h_n \in \mathbb{R}^d$ is normally distributed with a covariance matrix $H$ with random eigenvectors, and eigenvalues decaying as $1/i^\alpha+\beta$, for $i = 1, \dots, k$, and $1/(i-1)^\alpha$, for $i = k+1,\dots, d$, where $\alpha,\beta\geq0$.
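The synthetic spectrum just described can be generated as follows (a sketch; the helper name and the particular values $\alpha=1$, $\beta=0.1$ are our illustrative choices):

```python
import numpy as np

def make_covariance(d=50, k=10, alpha=1.0, beta=0.1, seed=0):
    """Covariance in a random eigenbasis, with eigenvalues 1/i^alpha + beta
    for i <= k and 1/(i-1)^alpha for i > k (illustrative alpha, beta)."""
    rng = np.random.default_rng(seed)
    i = np.arange(1, d + 1, dtype=float)
    lam = np.empty(d)
    lam[:k] = 1.0 / i[:k]**alpha + beta
    lam[k:] = 1.0 / (i[k:] - 1.0)**alpha
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return (Q * lam) @ Q.T, lam

H, lam = make_covariance()
# With this spectrum the eigengap is lambda_k - lambda_{k+1} = beta,
# so beta directly controls the conditioning studied in the figures.
gap = lam[9] - lam[10]
```

Note the design choice: $\beta$ isolates the eigengap from the polynomial decay $\alpha$, so well- and poorly-conditioned instances differ only in one knob.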
All results are averaged over ten repetitions. \paragraph{Robustness to Conditioning.} In \myfig{synthetic} we consider two covariance matrices with different conditioning and we compare the behavior of SGD and averaged SGD for different step sizes (constant (cst), proportional to $1/\sqrt{n}$ (-1/2) and $1/n$ (-1)). When the covariance matrix is well-conditioned, with a large eigengap (left plot), we see that SGD converges at a rate which depends on the step size whereas averaged SGD converges at an $O(1/n)$ rate independently of the step-size choice. For poorly conditioned problems (right plot), the convergence rate deteriorates to $1/\sqrt{n}$ for non-averaged SGD with step size $1/\sqrt{n}$, and for averaged SGD with both constant and $1/\sqrt{n}$ step sizes. The $1/n$ step size performs poorly with and without averaging. \begin{figure}[!ht] \vspace{-1cm} \centering \begin{minipage}[c]{.5\linewidth} \includegraphics[width=\linewidth]{Figs/gapgrand} \end{minipage} \hspace*{-10pt} \begin{minipage}[c]{.5\linewidth} \includegraphics[width=\linewidth]{Figs/gappetit} \end{minipage} \vspace{-2.0cm} \caption{Streaming PCA. Left: Well-conditioned problem. Right: Poorly-conditioned problem.} \label{fig:synthetic} \vspace{0cm} \end{figure} \paragraph{Robustness to Incorrect Step-Size.} In \myfig{syntheticrob} we consider a well-conditioned problem and compare the behavior of SGD and averaged SGD with step sizes proportional to $C/\sqrt n$ and $C/n$ to investigate the robustness to the choice of the constant $C$. For both algorithms we take three different constant prefactors $C/5$, $C$ and $5C$. For the step size proportional to $C/\sqrt n$ (left plot), both SGD and averaged SGD are robust to the choice of $C$. For SGD, the iterates eventually converge at a $1/\sqrt{n}$ rate, with a constant offset proportional to $C$. However, averaged SGD enjoys the fast rate $1/n$ for all choices of $C$.
For the step size proportional to $C/n$ (right plot), if $C$ is too small, the rate of convergence is extremely slow for SGD and averaged SGD. \begin{figure}[!ht] \vspace{-1cm} \centering \begin{minipage}[c]{.5\linewidth} \includegraphics[width=\linewidth]{Figs/dftcc} \end{minipage} \hspace*{-10pt} \begin{minipage}[c]{.5\linewidth} \includegraphics[width=\linewidth]{Figs/dfftc} \end{minipage} \vspace{-2.0cm} \caption{Robustness to constant in step size. Left: step size proportional to $n^{\!-\!1/2}$. Right: step size proportional to $n^{\!-\!1}$.} \label{fig:syntheticrob} \end{figure} \section{Introduction} We consider stochastic optimization of a (potentially non-convex) function $f$ defined on a Riemannian manifold $\mathcal{M}$, and accessible only through unbiased estimates of its gradients. The framework is broad---encompassing fundamental problems such as principal components analysis (PCA) \citep{edelman1998geometry}, dictionary learning \citep{SunQinWri17}, low-rank matrix completion \citep{BouAbs11} and tensor factorization \citep{IshAbsVanDeL11}. The classical setting of stochastic approximation in $\mathbb{R}^d$, first appearing in the work of \citet{robbins1951stochastic}, has been thoroughly explored in both the optimization and machine learning communities. A key step in the development of this theory was the discovery of Polyak-Ruppert averaging---a technique in which the iterates are averaged along the optimization path. Such averaging provably reduces the impact of noise on the problem solution and improves convergence rates in certain important settings~\citep{Pol90, ruppert1988efficient}. By contrast, the general setting of stochastic approximation on Riemannian manifolds has been far less studied. There are many open questions regarding achievable rates and the possibility of accelerating these rates with techniques such as averaging. 
The problems are twofold: it is not always clear how to extend a gradient-based algorithm to a setting which lacks the global vector-space structure of Euclidean space, and, equally importantly, classical analysis techniques often rely on the Euclidean structure and do not always carry over. In particular, Polyak-Ruppert averaging relies critically on the Euclidean structure of the configuration space, both in its design and its analysis. We therefore ask: \emph{Can the classical technique of Polyak-Ruppert iterate averaging be adapted to the Riemannian setting?} Moreover, do the advantages of iterate averaging, in terms of rate and robustness, carry over from the Euclidean setting to the Riemannian setting? Indeed, in the traditional setting of Euclidean stochastic optimization, averaged optimization algorithms not only improve convergence rates, but also have the advantage of adapting to the hardness of the problem \citep{moulines2011non}. They provide a single, robust algorithm that achieves optimal rates with and without strong convexity, and they also achieve the statistically optimal asymptotic variance. In the presence of strong convexity, setting the step size to $\gamma_n = \frac{1}{\mu n}$ is sufficient to achieve the optimal $O(\frac{1}{n})$ rate. However, as highlighted by \citet{NemJudLan08} and \citet{moulines2011non}, the convergence of such a scheme is highly sensitive to the choice of constant prefactor $C$ in the step size; an improper choice of $C$ can lead to an arbitrarily slow convergence \emph{rate}. Moreover, since $\mu$ is typically unknown, properly calibrating $C$ is often impossible (unless an explicit regularizer is added to the cost function, which adds an extra hyperparameter).
In this paper, we provide a practical iterate-averaging scheme that enhances the robustness and speed of stochastic, gradient-based optimization algorithms, applicable to a wide range of Riemannian optimization problems---including those that are (Euclidean) non-convex. Principally, our framework extends the classical Polyak-Ruppert iterate-averaging scheme (and its inherent benefits) to the Riemannian setting. Moreover, our results hold in the general stochastic approximation setting and do not rely on any finite-sum structure of the objective. Our main contributions are: \begin{itemize} \item The development of a geometric framework to transform a sequence of slowly converging iterates on $\mathcal{M}$, produced from SGD, to an iterate-averaged sequence with a robust, fast $O(\frac{1}{n})$ rate. \item A general formulation of geometric iterate averaging for a class of locally smooth and geodesically-strongly-convex optimization problems. \item An application of our framework to the (non-convex) problem of streaming PCA, where we show how to transform the slow rate of the randomized power method (with no knowledge of the unknown eigengap) into an algorithm that achieves the optimal rate of convergence and which empirically outperforms existing algorithms. \end{itemize} \subsection{Related Work} \textbf{Stochastic Optimization:} The literature on (Euclidean) stochastic optimization is vast, having been studied through the lens of machine learning \citep{bottou-98x, ShaShaSreSri09}, optimization \citep{NesVia08}, and stochastic approximation \citep{KusYin03}. Polyak-Ruppert averaging first appeared in the works of \citet{Pol90} and \citet{ruppert1988efficient}; \citet{polyak1992acceleration} then provided asymptotic normality results for the distribution of the averaged iterate sequence. \citet{moulines2011non} later generalized these results, providing non-asymptotic guarantees for the rate of convergence of the averaged iterates. 
An important contribution of \citet{moulines2011non} was to present a unified analysis showing that iterate averaging coupled with sufficiently slow learning rates could achieve the optimal convergence in all settings (i.e., with and without strong convexity). \\ \textbf{Riemannian Optimization:} Riemannian optimization was not explored in the machine learning community until relatively recently. \citet{udriste1994convex} and \citet{absil2009optimization} provide comprehensive background on the topic. Most existing work has primarily focused on providing asymptotic convergence guarantees for non-stochastic algorithms \citep[see, e.g.,][who analyze the convergence of Riemannian trust-region and Riemannian L-BFGS methods, respectively]{Absil2007,ring2012optimization}. \citet{bonnabel2013stochastic} provided the first asymptotic convergence proof of stochastic gradient descent (SGD) on Riemannian manifolds while highlighting diverse applications of the Riemannian framework to problems such as PCA. The first global complexity results for first-order Riemannian optimization, utilizing the notion of \emph{functional g-convexity}, were obtained in the foundational work of \citet{zhang2016first}. The finite-sum, stochastic setting has been further investigated by \citet{zhang2016riemannian} and \citet{sato2017riemannian}, who developed Riemannian SVRG methods. However, the potential utility of Polyak-Ruppert averaging in the Riemannian setting has remained unexplored. \section{Preliminaries}\label{sec:background} We recall some important concepts from Riemannian geometry. \citet{do2016differential} provides a more thorough review, with \citet{absil2009optimization} providing a perspective particularly relevant for Riemannian optimization. As a base space we consider a Riemannian manifold $(\mathcal{M}, \mathfrak{g})$---a smooth manifold, equipped with a Riemannian metric $\mathfrak{g}$, that contains a compact, connected subset $\mathcal{X}$.
At all $x \in \mathcal{M}$, the metric $\mathfrak{g}$ induces a natural inner product on the tangent space $T_{x} \mathcal{M}$, denoted by $\langle\cdot,\cdot\rangle$---this inner product induces a norm on each tangent space denoted by $\Vert \cdot \Vert$. The metric $\mathfrak{g}$ also provides a means of measuring the length of a parametrized curve from $\mathbb{R}$ to the manifold; a \emph{geodesic} is a constant speed curve $\gamma : [0,1] \to \mathcal{M}$ that is locally distance-minimizing with respect to the distance~$d$ induced by $\mathfrak{g}$. When considering functions $f : \mathcal{M} \to \mathbb{R}$ we will use $\nabla f(x) \in T_{x} \mathcal{M}$ to denote the \emph{Riemannian gradient} of $f$ at $x \in \mathcal{M}$, and $\nabla^2 f(x) : T_{x} \mathcal{M} \to T_{x} \mathcal{M}$, the \emph{Riemannian Hessian} of $f$ at $x$. When considering functions between manifolds $F : \mathcal{M} \to \mathcal{M}$, we will use $D F(x) : T_{x} \mathcal{M} \to T_{F(x)} \mathcal{M}$ to denote the \emph{differential} of the mapping at $x$ (its linearization) \citep[see][for more formal definitions of these objects]{absil2009optimization}. \begin{figure}[!ht] \vspace{-.5cm} \centering \begin{minipage}[c]{.48\linewidth} \def\svgwidth{2.5in} \input{Figs/retraction.tex} \end{minipage} \begin{minipage}[c]{.48\linewidth} \def\svgwidth{2.5in} \input{Figs/transport.tex} \end{minipage} \vspace{-.5cm} \caption{Left: a tangent vector $v$ in the tangent space of a point $x$ and the corresponding retraction generating a curve pointing in the ``direction'' of the tangent vector $v$. Right: The parallel transport of a different $v$ along the same path.} \label{fig:manifold} \end{figure} The \emph{exponential map} ${\mathop { \rm Exp{}}}_{x}(v) : T_{x} \mathcal{M} \to \mathcal{M}$ maps $v \in T_{x} \mathcal{M}$ to $y \in \mathcal{M}$ such that there is a geodesic with $\gamma(0)=x$, $\gamma(1)=y$, and $\frac{d}{dt}\gamma(0)=v$; although it may not be defined on the whole tangent space.
If there is a unique geodesic connecting $x, y \in \mathcal{X}$, the exponential map will have a well-defined inverse ${\mathop { \rm Exp{}}}_{x}^{-1}(y) : \mathcal{M} \to T_{x}\mathcal{M}$, such that the length of the connecting geodesic is $d(x,y)=\Vert {\mathop { \rm Exp{}}}_{x}^{-1}(y) \Vert$. We also use $R_x : T_{x} \mathcal{M} \to \mathcal{M}$ and $R_x^{-1} : \mathcal{M} \to T_{x}\mathcal{M}$ to denote a \emph{retraction mapping} and its inverse (when well defined), which is an approximation to the exponential map (and its inverse). $R_x$ is often cheaper to compute than the full exponential map ${\mathop { \rm Exp{}}}_x$. Formally, the map $R_x$ is defined as a first-order retraction if $R_x(0) = x$ and $D R_x(0)= \mathrm{id}_{T_{x} \mathcal{M}}$---so locally $R_x(\xi)$ must move in the ``direction'' of $\xi$. The map $R_x$ is a second-order retraction if $R_x$ also satisfies $\frac{D^2}{dt^2} R_x(t \xi)|_{0} = 0$ for all $\xi \in T_{x}\mathcal{M}$, where $\frac{D^2}{dt^2}\gamma = \frac{D}{dt} \dot{\gamma}$ denotes the acceleration vector field \citep[][Sec.~5.4]{absil2009optimization}. This condition ensures $R_x$ satisfies a ``zero-acceleration'' initial condition. Note that locally, for $x$ close to $y$, the retraction satisfies $\norm{R^{-1}_{x}(y)} = d(x, y) + o(d(x, y))$. If we consider the example where the manifold is a sphere (i.e., $\mathcal{M} = S^{d-1}$ with the round metric $\mathfrak{g}$), the exponential map along a vector will generate a curve that is a great circle on the underlying sphere. A nontrivial example of a retraction $R$ on the sphere can be defined by first moving along the tangent vector in the embedded Euclidean space $\mathbb{R}^d$, and then projecting this point to the closest point on the sphere.
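On the sphere both maps have closed forms, so their second-order agreement can be observed numerically (a sketch: since the projection retraction is a second-order retraction, the gap to the exponential map shrinks like $t^3$, i.e., by roughly $2^3 = 8$ when $t$ is halved; the points and dimensions are arbitrary illustrative choices):

```python
import numpy as np

def exp_sphere(x, v):
    # Exponential map on S^{d-1}: follow the great circle leaving x along v.
    nv = np.linalg.norm(v)
    return x if nv < 1e-15 else np.cos(nv) * x + np.sin(nv) * (v / nv)

def retract_proj(x, v):
    # Projection retraction: step in the ambient space, then renormalize.
    y = x + v
    return y / np.linalg.norm(y)

rng = np.random.default_rng(1)
x = rng.standard_normal(5); x /= np.linalg.norm(x)
v = rng.standard_normal(5)
v -= (v @ x) * x; v /= np.linalg.norm(v)        # unit tangent vector at x

err = lambda t: np.linalg.norm(retract_proj(x, t * v) - exp_sphere(x, t * v))
ratio = err(0.1) / err(0.05)                    # ~ 8, i.e., an O(t^3) gap
```

A short expansion confirms this: $R_x(tv) - {\mathop{\rm Exp{}}}_x(tv) = -\tfrac{t^3}{3} v + O(t^4)$ for a unit tangent $v$, so the curves agree to second order.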
We further define the \emph{parallel translation} $\tp{x}{y} : T_x \mathcal{M} \to T_y \mathcal{M}$ as the map transporting a vector $v \in T_x \mathcal{M}$ to $\tp{x}{y} v$, along a path $R_{x}(\xi)$ connecting $x$ to $y = R_{x}(\xi)$, such that the vector stays ``constant'' by satisfying a zero-acceleration condition. This is illustrated in \myfig{manifold}. The map $\tp{x}{y}$ is an isometry. We also consider a different vector transport map $\te{x}{y}:T_x \mathcal{M} \to T_y \mathcal{M}$, which is the differential $DR_{x}(R_x^{-1}(y))$ of the retraction $R$ \citep[][Sec.~8.1]{absil2009optimization}. Following \citet{huang2015broyden}, we will call a function $f$ \emph{retraction convex} on $\mathcal{X}$ (with respect to $R$) if for all $x \in \mathcal{X}$ and all $\eta \in T_{x}\mathcal{M}$ satisfying $\norm{\eta}=1$, $t \mapsto f(R_{x}(t \eta))$ is convex for all $t$ such that $R_{x}(t \eta) \in \mathcal{X}$; similarly $f$ is \emph{retraction strongly convex} on $\mathcal{X}$ if $t \mapsto f(R_{x}(t \eta))$ is strongly convex under the same conditions. If $R_x$ is the exponential map, this reduces to the definition of geodesic convexity \citep[see the work of][for further details]{zhang2016first}. \section{Assumptions} \label{sec:assumptions} We introduce several assumptions on the manifold $\mathcal{M}$, function $f$, and the noise process $\{\nabla f_n\}_{n\geq1}$ that will be relevant throughout the paper. \subsection{Assumptions on $\mathcal{M}$} First, we assume the iterates of the algorithm in \eq{grad_desc} and \eq{ave_grad_desc} remain in $\mathcal X$ where the manifold ``behaves well.'' Formally, \begin{assumption} \label{assump:manifold} For a sequence of iterates $\{ x_n \}_{n \geq 0}$ defined in \eq{grad_desc}, there exists a compact, connected subset $\mathcal X$ such that $x_n \in \mathcal X$ for all $n \geq 0$, and $x_\star \in \mathcal X$.
Furthermore, $\mathcal X$ is totally retractive (with respect to the retraction $R$) and the function $x\mapsto \frac{1}{2} \Vert R_y^{-1}(x)\Vert^2$ is retraction strongly convex on $\mathcal{X}$ for all $y\in\mathcal{X}$. Also, $R$ is a second-order retraction at $x_\star$. \end{assumption} Assumption~\ref{assump:manifold} is restrictive, but standard in prior work on stochastic approximation on manifolds \cite[e.g.,][]{bonnabel2013stochastic, zhang2016riemannian, sato2017riemannian}. As further detailed by \citet{huang2015broyden}, a totally retractive neighborhood $\mathcal{X}$ is such that for all $x \in \mathcal{X}$ there exists $r>0$ such that $\mathcal{X} \subset R_{x} (\mathbb{B}_{r}(0))$ where $R_x$ is a diffeomorphism on $\mathbb{B}_{r}(0)$. A totally retractive neighborhood is analogous to the concept of a totally normal neighborhood \citep[see, e.g.,][Chap. 3, Sec. 3]{do2016differential}. Principally, Assumption~\ref{assump:manifold} ensures that the retraction map (and its respective inverse) are well-defined when applied to the iterates of our algorithm. If $\mathcal{M}$ is a Hadamard manifold, the exponential map (and its inverse) is defined everywhere on~$\mathcal{M}$, although this may not be true globally for a retraction $R$. Similarly, if $\mathcal{M}$ is a compact manifold the first statement of Assumption~\ref{assump:manifold} is always satisfied. Moreover, in the case of the exponential map, $x \mapsto \frac{1}{2} \Vert {\mathop { \rm Exp{}}}_y^{-1}(x)\Vert^2$ is strongly convex in a ball around $y$ whose radius depends on the curvature, as explained by \citet{Afs11} and \citet[Ch. IV, Sec. 2 Lemma 2.9]{Sak96}. For our present purpose, we also assume the retraction $R$ agrees with the Riemannian exponential map up to second order near $x_\star$. This assumption, that $R$ is a second-order retraction, is fairly general and is satisfied by projection-like retraction maps on matrix manifolds \citep[see, e.g.,][]{AbsMal12}.
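On the sphere with the projection retraction, whose inverse has the closed form $R_x^{-1}(y)=y/(x^{\top}y)-x$, the strong-convexity requirement of Assumption \ref{assump:manifold} can be probed numerically along a retraction curve (a sketch; the points, direction, and finite-difference step are illustrative choices):

```python
import numpy as np

def retract(x, v):
    y = x + v
    return y / np.linalg.norm(y)

def inv_retract(x, y):
    # Inverse of the projection retraction on the sphere (needs x.T y > 0).
    return y / (x @ y) - x

rng = np.random.default_rng(2)
d = 6
y = rng.standard_normal(d); y /= np.linalg.norm(y)
x = retract(y, 0.1 * rng.standard_normal(d))            # a nearby point
eta = rng.standard_normal(d)
eta -= (eta @ x) * x; eta /= np.linalg.norm(eta)        # unit tangent at x

# phi(t) = (1/2) || R_y^{-1}( R_x(t * eta) ) ||^2; a positive finite-difference
# second derivative is consistent with local retraction strong convexity of
# the map x -> (1/2) || R_y^{-1}(x) ||^2 near y.
phi = lambda t: 0.5 * np.linalg.norm(inv_retract(y, retract(x, t * eta)))**2
h = 1e-3
second = (phi(h) - 2.0 * phi(0.0) + phi(-h)) / h**2
```

For unit vectors this reduces to the explicit profile $\frac12\big(1/(y^{\top}z)^2-1\big)$, matching the $k=1$ computation carried out for the PCA remark in the appendix.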
\subsection{Assumptions on $f$} We now introduce some regularity assumptions on the function $f$ ensuring sufficient differentiability and strong convexity at $x_\star$: \begin{assumption} \label{assump:strongconvpoint} The function $f$ is twice-continuously differentiable on $\mathcal{X}$. Further, the Hessian of the function $f$ at $x_\star$, $\nabla^2 f(x_\star)$, is positive definite: there exists $\mu>0$ such that, for all $v \in T_{x_\star}\mathcal{M}$, \[ \langle v, \nabla^2 f(x_\star)v\rangle \geq \mu \Vert v\Vert^2. \] \end{assumption} Continuity of the Hessian also ensures local retraction strong convexity in a neighborhood of $x_\star$ \citep[][Prop. 5.5.6]{absil2009optimization}. Moreover, since the function $f$ is twice-continuously differentiable on $\mathcal{X}$ its Hessian is Lipschitz on this compact set. We formalize this as follows: \begin{assumption} \label{assump:HessianLip} There exists $M>0$ such that the Hessian of the function $f$, $\nabla^2 f$, is $M$-Lipschitz at $x_\star$. That is, for all $y \in \mathcal{X}$ and $v \in T_{y}\mathcal{M}$, \[ \Vert \tp{y}{x_\star} \circ \nabla^2 f(y) \circ \tp{x_\star}{y} - \nabla^2 f(x_\star)\Vert_{op} \leq M \Vert R_{x_\star}^{-1}(y) \Vert. \] \end{assumption} Note that $\Vert R_{x_\star}^{-1}(y) \Vert$ is not necessarily symmetric under the exchange of $x_\star$ and $y$. This term could also be replaced with $d(x_\star, y)$, since these expressions will be locally equivalent in a neighborhood of $x_\star$, but this would come at the cost of a less transparent analysis. \subsection{Assumptions on the noise} \label{sec:assumptions_noise} We state several assumptions on the noise process that will be relevant throughout our discussion. Let $(\mathcal{F}_n)_{n \geq 0}$ be an increasing sequence of sigma-fields.
We will assume access to a sequence $\{\nabla f_n\}_{n \geq 1}$ of noisy estimates of the true gradient $\nabla f$ of the function $f$, \begin{assumption} \label{assump:noiseunbiased} The sequence of (random) vector fields $\{ \nabla f_n \}_{n\geq1} : \mathcal M \to T\mathcal M$ is $\mathcal{F}_n$-measurable, square-integrable and unbiased: \[ \forall x \in \mathcal{X}, \ \forall n \geq 1, \ \mathbb{E}[\nabla f_n(x)\vert \mathcal{F}_{n-1}]=\nabla f(x). \] \end{assumption} This general framework subsumes two situations of interest. \begin{itemize} \item Statistical Learning (on Manifolds): minimizing a loss function $\ell: \mathcal M \times \mathcal Z \to \mathbb R$ over $x \in \mathcal X$, given a sequence of i.i.d. observations in $\mathcal Z$, with access only to noisy, unbiased estimates of the gradient $\nabla f_n=\nabla \ell(\cdot,z_n)$ \citep{AswBicTom11}. \item Stochastic Approximation (on Manifolds): minimizing a function $f(x)$ over $x \in \mathcal X$, with access only to the (random) vector field $\nabla f(x) + \epsilon_n(x)$ at each iteration. Here the gradient vector field is perturbed by a square-integrable martingale-difference sequence (for all $x\in\mathcal M $, $\mathbb{E}[\epsilon_n(x) \vert \mathcal{F}_{n-1}]=0$) \citep{bonnabel2013stochastic}. 
\end{itemize} Lastly, we will assume the vector fields $\{\nabla f_n\}_{n \geq 1}$ are individually Lipschitz and have bounded covariance at the optimum $x_\star$: \begin{assumption} \label{assump:noiseLip} There exists $L > 0$ such that for all $x \in \mathcal X$ and $n\geq 1$, the vector field $\nabla f_n$ satisfies \[ \mathbb{E} [\Vert \tp{x}{x_\star} \nabla f_n(x)- \nabla f_n(x_\star)\Vert^2\vert \mathcal{F}_{n-1}]\leq L^2 \ \Vert R_{x_\star}^{-1}(x) \Vert^2 , \] there exists $\tau>0$ such that $ \mathbb{E} [\Vert \nabla f_{n}(x) \Vert^4\vert \mathcal{F}_{n-1}]\leq \tau^4$ for all $x \in \mathcal{X}$, and there exists a symmetric positive-definite matrix $\Sigma$ such that, \[ \mathbb{E}[\nabla f_n(x_\star) \otimes \nabla f_n(x_\star) \vert\mathcal F_{n-1}] = \Sigma \text{ a.s.} \] \end{assumption} These are natural generalizations of standard assumptions in the optimization literature \citep{Fab68} to the setting of Riemannian manifolds\footnote{Assuming bounded gradients does not contradict Assumption~\ref{assump:strongconvpoint}, since we are constrained to the compact set $\mathcal{X}$.}. Note that the assumption $\mathbb{E}[\nabla f_n(x_\star) \otimes \nabla f_n(x_\star) \vert\mathcal F_{n-1}] = \Sigma \text{ a.s.}$ could be slightly relaxed (as detailed in \myapp{conv_rates}), but it allows us to state our main result more cleanly. \section{Appendices} In \myapp{main_proof} we provide the proof of Theorem \ref{thm:main}. In \myapp{app_pfsketch} we prove the relevant lemmas mirroring the proof sketch in \mysec{pfsketch}. In \myapp{appapp} we provide proofs of the results for the application discussed in \mysec{geostrong} about geodesically-strongly-convex optimization. \mysec{stream_pca_app} contains background and proofs of results discussed in \mysec{stream_pca} regarding streaming $k$-PCA.
\mysec{counter} contains further experiments on synthetic PCA showing a counterexample to the convergence of averaged, constant step-size SGD mentioned in \mysec{experiments} in the main text. Throughout this section we will denote a sequence of vectors $X_{n}$ to be $X_{n} = O(f_n)$, for scalar functions $f_n$, if there exists a constant $C>0$, such that $\norm{X_{n}} \leq C f_n$ for all $n \geq 0$ almost surely. \section{Proofs for \mysec{results}} \label{sec:main_proof} Here we provide the proof of Theorem \ref{thm:main}. The first statement follows by combining Theorems \ref{thm:linear}, \ref{thm:asymp_ave}, Lemma \ref{lem:stream_avg_iters} and Slutsky's theorem. The second statement follows by using Theorems \ref{thm:linear}, \ref{thm:nonasymp_ave}, and Lemma \ref{lem:stream_avg_iters_4mom}. \section{Proofs for \mysec{pfsketch}} \label{sec:app_pfsketch} Here we detail the proofs of the results necessary to conclude our main result sketched in \mysec{pfsketch}. \subsection{Proofs in \mysec{pfsketch1}} We begin with the proofs of the geometric lemmas detailed in \mysec{pfsketch1}, showing how to linearize the progress of the SGD iterates $x_n$ in the tangent space of $x_\star$ by considering the evolution of $\Delta_n = R_{x_\star}^{-1}(x_n)$. Note that since by Assumption~\ref{assump:manifold}, for all $n\geq0$, $x_n\in\mathcal{X}$, the vectors $\Delta_n$ all belong to the compact set $R_{x_\star}^{-1}(\mathcal{X})$. In the course of our argument it will be useful to consider the function $F_{x, y}(\eta_x) = R_y^{-1} \circ R_x(\eta_x) : T_x \mathcal{M} \to T_y \mathcal{M}$ (which, crucially, is a map between vector spaces) and further $D R_x(\eta_x) : T_x \mathcal{M} \to T_{R_x(\eta_x)} \mathcal{M}$, the linearization of the retraction map. The first recursion we will study is that of $\Delta_{n+1} = F_{x_n, x_\star}(-\gamma_{n+1} \nabla f_{n+1}(x_n))$: \begin{lemma}\label{lem:tangent_rec} Let Assumption \ref{assump:manifold} hold.
If $\Delta_n = R_{x_\star}^{-1}(x_n)$ for a sequence of iterates evolving as in \eq{grad_desc}, then there exists a constant $C_{\text{manifold}}>0$ depending on $\mathcal{X}$ such that, \[ \Delta_{n+1} = \Delta_n - \gamma_{n+1} [\te{x_\star}{x_n}]^{-1} (\nabla f_{n+1}(x_n)) + \gamma_{n+1} g_n, \] where $\norm{g_n} \leq \gamma_{n+1} C_{\text{manifold}} \norm{\nabla f_{n+1}(x_n)}^2$. \end{lemma} \begin{proof} Using the chain rule for the differential of a mapping on a manifold and the first-order property of the retraction ($D R_x (0_x) = I_{T_x\mathcal{M}}$) we have that: \begin{multline*} D F_{x, y}(0_x) = D(R_y^{-1} \circ R_x)(0_x) = D R_{y}^{-1}(R_x(0_x)) \circ D R_x(0_x) \\= [D R_y(R_{y}^{-1}(R_x(0_x)))]^{-1} \circ I_{T_x \mathcal{M}} = [DR_y(R_{y}^{-1}(x))]^{-1}=[\te{y}{x}]^{-1}, \end{multline*} where the last line follows by the inverse function theorem on the manifold $\mathcal{M}$. Smoothness of the retraction implies the Hessian of $F_{x, y}$ is uniformly bounded in norm on the compact set $F_{x, y}^{-1}(R_{x_\star}^{-1}(\mathcal{X}))$. We use $C_{\text{manifold}}$ to denote a bound on the operator norm of the Hessian of $F_{x, y}$ in this compact set. In the present situation, we have that $\Delta_{n+1} = F_{x_n, x_\star}(-\gamma_{n+1} \nabla f_{n+1}(x_n))$. Since $F_{x_n, x_\star}$ is a function defined on vector spaces, the result follows using a Taylor expansion, $F_{x_n, x_\star}(0)=\Delta_n$, the previous statements regarding the differential of $F_{x_n, x_\star}$, and the uniform bounds on the second-order terms. In particular, the second-order term in the Taylor expansion is upper bounded by $\gamma_{n+1} C_{\text{manifold}} \norm{\nabla f_{n+1}(x_n)}^2$, so the bound on the error term $g_n$ follows. \end{proof} We now further develop this recursion by also considering an asymptotic expansion of the gradient term near the optimum.
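The identity $D F_{x,y}(0_x) = [\te{y}{x}]^{-1}$ established in the proof above can be sanity-checked by finite differences on the sphere with the projection retraction, in the equivalent form $\te{y}{x}\circ D F_{x,y}(0_x)=\mathrm{id}$ (a sketch; the step size and points are arbitrary illustrative choices):

```python
import numpy as np

def retract(x, v):
    y = x + v
    return y / np.linalg.norm(y)

def inv_retract(x, y):
    # Inverse projection retraction on the sphere (needs x.T y > 0).
    return y / (x @ y) - x

rng = np.random.default_rng(3)
d, h = 5, 1e-5
x = rng.standard_normal(d); x /= np.linalg.norm(x)
y = retract(x, 0.3 * rng.standard_normal(d))
u = rng.standard_normal(d)
u -= (u @ x) * x; u /= np.linalg.norm(u)     # unit vector in T_x S^{d-1}

xi = inv_retract(y, x)                       # xi = R_y^{-1}(x), so R_y(xi) = x
# D F_{x,y}(0)[u] by forward differences, with F = R_y^{-1} o R_x.
DF_u = (inv_retract(y, retract(x, h * u)) - xi) / h
# Apply E_{y->x} = D R_y(xi) to DF_u; the composition should return u.
E_DF_u = (retract(y, xi + h * DF_u) - retract(y, xi)) / h
gap = np.linalg.norm(E_DF_u - u)             # O(h) discretization error
```

The chain rule explains the check: $D R_y(\xi)\circ D F_{x,y}(0) = D(R_y\circ R_y^{-1}\circ R_x)(0) = D R_x(0) = \mathrm{id}$, by the first-order retraction property.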
\begin{lemma}\label{lem:tangent_rec_2} Let Assumptions \ref{assump:HessianLip}, \ref{assump:noiseunbiased}, and \ref{assump:noiseLip} hold. If $\Delta_n = R_{x_{\star}}^{-1}(x_n)$ for a sequence of iterates evolving as in \eq{grad_desc}, then there exist sequences $\{ \tilde \xi_n \}_{n \geq 0}$ and $\{ \tilde e_n \}_{n \geq 0}$ such that \[ \tp{x_n}{x_\star} \nabla f_{n+1}(x_n)=\nabla^2 f(x_\star)\Delta_n + \nabla f_{n+1}(x_\star)+\tilde \xi_{n+1}+\tilde e_{n+1}, \] where $\mathbb{E}[\tilde \xi_{n+1}\vert\mathcal F_{n}]=0$, $\mathbb{E}[\Vert \tilde\xi_{n+1}\Vert^2 \vert\mathcal F_{n}]\leq 4 L^2 \Vert \Delta_n\Vert^2$, and $\Vert\tilde e_{n+1}\Vert \leq \frac{M}{2}\Vert \Delta_n\Vert^2$. \end{lemma} \begin{proof} Since $x_\star$ is a stationary point of $f$ we have $\nabla f(x_\star)=0$, and we may decompose: \begin{eqnarray*} \tp{x_n}{x_\star} \nabla f_{n+1}(x_n) &=& \nabla^2 f(x_\star)\Delta_n + \nabla f_{n+1}(x_\star) + [\tp{x_n}{x_\star} \nabla f(x_n) - \nabla f(x_\star) - \nabla^2 f(x_\star)\Delta_n ] \\ && + \, [\tp{x_n}{x_\star} \nabla f_{n+1}(x_n) - \nabla f_{n+1}(x_\star) - \tp{x_n}{x_\star} \nabla f(x_n) + \nabla f(x_\star)]. \end{eqnarray*} Under Assumption \ref{assump:HessianLip}, using the manifold version of Taylor's theorem (see \citet{absil2009optimization} Lemma 7.4.8), the first bracket, $\tilde e_{n+1}= \tp{x_n}{x_{\star}} \nabla f(x_n) - \nabla f(x_\star) - \nabla^2 f(x_\star) \Delta_n$, satisfies \[ \Vert \tilde e_{n+1}\Vert \leq \frac{M}{2}\Vert \Delta_n\Vert^2. \] Denoting the second bracket by $\tilde \xi_{n+1}= [\tp{x_n}{x_\star} \nabla f_{n+1}(x_n) - \nabla f_{n+1}(x_\star) - \tp{x_n}{x_\star} \nabla f(x_n) + \nabla f(x_\star)]$, Assumption \ref{assump:noiseunbiased} directly implies that $\mathbb{E}[\tilde \xi_{n+1}\vert\mathcal F_{n}]=0$.
Finally, using Assumption \ref{assump:noiseLip} and the elementary inequality $\Vert a + b \Vert^2 \leq 2 \Vert a \Vert^2 + 2 \Vert b \Vert^2$ shows that, \begin{eqnarray*} \mathbb{E}[\Vert \tilde\xi_{n+1}\Vert^2 \vert\mathcal F_{n}] &\leq& 2\Vert \tp{x_n}{x_\star} \nabla f(x_n) - \nabla f(x_\star) \Vert^2 + 2 \mathbb{E} \left[\Vert \tp{x_n}{x_\star} \nabla f_{n+1}(x_n) - \nabla f_{n+1}(x_\star) \Vert^2 | \mathcal{F}_n \right] \\ &\leq &4L^2 \Vert \Delta_n \Vert^2. \end{eqnarray*} \end{proof} The last important step to conclude a linear recursion in $\Delta_n$ is to argue that the operator composition $ [\te{x_\star}{x_n}]^{-1}\tp{x_\star}{x_n} : T_{x_\star}\mathcal{M} \to T_{x_\star}\mathcal{M}$ is in fact an isometry (up to second-order terms) since $x_n$ is close to $x_\star$. The following argument crucially uses the fact that $R_{x_\star}$ is a second-order retraction. \begin{lemma} \label{lem:tangent_rec_3} Let Assumption \ref{assump:manifold} hold. Let $\Delta_n = R_{x_{\star}}^{-1}(x_n)$ for a sequence $\{ x_n \}_{n \geq 0}$ evolving as in \eq{grad_desc}. Then there exists a trilinear operator $K(\cdot,\cdot,\cdot)$ such that \[ [\te{x_\star}{x_n}]^{-1}\tp{x_\star}{x_n} = I - K(\Delta_n,\Delta_n, \cdot) + O(\norm{\Delta_n}^3). \] \end{lemma} As noted in the proof, when the exponential map is used as the retraction, the operator $K$ is precisely the Riemann curvature tensor $R_{x_\star}(\Delta_n, \cdot)\Delta_n$ (up to a constant prefactor). \begin{proof} We derive a Taylor expansion for the operator composition $[\tp{x}{y}]^{-1}\te{x}{y}$ when $y$ is close to $x$. Consider the function $G(v)= [\tp{x}{R_x(v)}]^{-1}\te{x}{R_x(v)}: T_x\mathcal{M}\to L(T_x\mathcal{M})$ where $L(T_x\mathcal{M})$ denotes the set of linear maps on the vector space $T_x\mathcal{M}$.
Now, recall that $\tp{x}{R_x(tv)}$ is precisely the parallel translation operator along the curve $\gamma(t)=R_x(tv)$. From Proposition 8.1.2 by \citet{absil2009optimization}, we have that \[ \frac{d}{dt} G(tv)\vert_{t=0} = \frac{d}{dt} [\tp{x}{R_x(tv)}]^{-1}\te{x}{R_x(tv)}\vert_{t=0} = \nabla_{\dot \gamma(0)} D R_x, \] where $\nabla$ denotes the Levi-Civita connection (see also the proof of \citet[Lemma 7.4.7]{absil2009optimization} and \citet[Chapter 2, Exercise 2]{do2016differential}). Using the definition of the covariant derivative $\nabla_{v}$ along a vector $v$, and of the acceleration $\frac{D}{dt}\dot{\gamma}$ of the curve $\gamma$ \citep[Section 5.4]{absil2009optimization}, we have that \[ \nabla_{\dot \gamma(0)} D R_x= \frac{D}{dt} D R_x(\gamma(t))\vert_{t=0} = \frac{D^2}{dt^2} R_x(tv)\vert_{t=0}=0, \] since $R$ is a second-order retraction. Thus, $ \frac{d}{dt} G(tv)\vert_{t=0}=0$. We use $K$ to denote the symmetric trilinear map $d^2G(0)$, where $K(v,v,\cdot)=\frac{1}{2}\frac{d^2}{dt^2} G(tv)\vert_{t=0}$. Thus, since $G$ is smooth and the iterates are restricted to $\mathcal{X}$ by Assumption \ref{assump:manifold}, a Taylor expansion gives, for $v\in R_{x_\star}^{-1}(\mathcal{X})$, $ G(v)=G(0)+K(v,v,\cdot)+O(\Vert v \Vert^3) $. For $x=x_\star$ and $v=\Delta_n$, this yields \[ [\tp{x_\star}{x_n}]^{-1}\te{x_\star}{x_n}= I+K(\Delta_n,\Delta_n,\cdot)+O(\Vert \Delta_n \Vert^3). \] Lastly, as $ [\te{x_\star}{x_n}]^{-1}\tp{x_\star}{x_n} = \left([\tp{x_\star}{x_n}]^{-1} \te{x_\star}{x_n} \right)^{-1} = \left( I + K(\Delta_n,\Delta_n, \cdot) + O(\norm{\Delta_n}^3) \right)^{-1} = I - K(\Delta_n, \Delta_n,\cdot) + O(\norm{\Delta_n}^3)$, the conclusion follows. In the special case where the exponential map is used as the retraction, \citet[][Theorem A.2.9]{waldmann2012geometric} directly relates $K$ to the Riemann curvature tensor. They show $K(v,v,\cdot)=-\frac{1}{6}R_{x_\star}(v,\cdot)v$ for $v\in T_{x_\star}\mathcal{M}$.
However, the result by \citet{waldmann2012geometric} provides the Taylor expansion up to arbitrary order in $\Vert v \Vert$. \end{proof} Assembling Lemmas \ref{lem:tangent_rec}, \ref{lem:tangent_rec_2} and \ref{lem:tangent_rec_3} we obtain the desired linear recursion: \begin{theorem}\label{thm:linear} Let Assumptions \ref{assump:manifold}, \ref{assump:HessianLip}, \ref{assump:noiseunbiased}, and \ref{assump:noiseLip} hold. If $\Delta_n = R_{x_{\star}}^{-1}(x_n)$ for a sequence of iterates evolving as in \eq{grad_desc}, then there exists a martingale-difference sequence $\{ \xi_{n} \}_{n \geq 0}$ satisfying $\mathbb{E}[\xi_{n+1}\vert \mathcal{F}_{n}]=0$, $ \mathbb{E}[\Vert \xi_{n+1}\Vert^2 \vert \mathcal{F}_{n}] = O(\norm{\Delta_n}^2) $, and an error sequence $\{ e_n \}_{n \geq 0}$ satisfying $\mathbb{E}[ \norm{ e_{n+1} } | \mathcal{F}_{n} ] = O(\norm{\Delta_n}^2 + \gamma_{n+1})$ and $\mathbb{E}[ \norm{ e_{n+1} }^2 | \mathcal{F}_n ] = O(\norm{\Delta_n}^4 + \gamma_{n+1}^2)$ such that \[ \Delta_{n+1} = \Delta_n - \gamma_{n+1} \nabla^2 f(x_\star) \Delta_n -\gamma_{n+1} \nabla f_{n+1}(x_\star) -\gamma_{n+1}\xi_{n+1} -\gamma_{n+1} e_{n+1}.
\] \end{theorem} \begin{proof} Combining Lemmas \ref{lem:tangent_rec}, \ref{lem:tangent_rec_2} and \ref{lem:tangent_rec_3}, \begin{eqnarray*} \Delta_{n+1} &=& \Delta_n - \gamma_{n+1} [\te{x_\star}{x_n}]^{-1} (\nabla f_{n+1}(x_n)) + \gamma_{n+1} g_n \\ &=& \Delta_n - \gamma_{n+1} [\tp{x_n}{x_\star}\te{x_\star}{x_n}]^{-1} \tp{x_n}{x_\star}(\nabla f_{n+1}(x_n)) + \gamma_{n+1} g_n \\ &=& \Delta_n - \gamma_{n+1} [I - K(\Delta_n,\Delta_n, \cdot) ] \circ (\nabla^2 f(x_\star)\Delta_n + \nabla f_{n+1}(x_\star)+\tilde\xi_{n+1}+\tilde e_{n+1}) \\ && + \gamma_{n+1} g_n + O( \gamma_{n+1} \Vert \Delta_{n}\Vert^3)\\ &=& \Delta_n - \gamma_{n+1} \nabla^2 f(x_\star)\Delta_n -\gamma_{n+1} \nabla f_{n+1}(x_\star)\\ &&-\gamma_{n+1}\tilde \xi_{n+1} +{\gamma_{n+1}} K(\Delta_n,\Delta_n,\nabla f_{n+1}(x_\star)+\tilde \xi_{n+1} ) \\ &&-\gamma_{n+1}\tilde e_{n+1} +{\gamma_{n+1}} K(\Delta_n,\Delta_n,\nabla^2 f(x_\star) \Delta_n +\tilde e_{n+1}) \\ && + \gamma_{n+1} g_n + O( \gamma_{n+1} \Vert \Delta_{n}\Vert^3). \end{eqnarray*} Let $\xi_{n+1} = \tilde \xi_{n+1} - K(\Delta_n,\Delta_n,\nabla f_{n+1}(x_\star)+\tilde \xi_{n+1} )$. By linearity of the map $K(\Delta_n,\Delta_n,\cdot)$, and since $\nabla f(x_\star)=0$, $\mathbb{E}[\xi_{n+1}\vert \mathcal{F}_{n}]=0$. Moreover by smoothness of the retraction, the tensor $K$ is uniformly bounded in injective norm on the compact set $R_{x_\star}^{-1}(\mathcal{X})$, so $\mathbb{E}[\Vert \xi_{n+1}\Vert^2 \vert \mathcal{F}_{n}] = O(\Vert \Delta_n\Vert^2)$. Let $e_{n+1}=\tilde{e}_{n+1} - K(\Delta_n,\Delta_n,\nabla^2 f(x_\star)\Delta_n+\tilde{e}_{n+1} ) - g_{n} +O( \Vert \Delta_{n}\Vert^3)$. Using Assumptions \ref{assump:manifold}, \ref{assump:noiseLip} and the almost sure upper bound on $\tilde {e}_{n+1}$ we have that this term satisfies \[ \mathbb{E}[\Vert e_{n+1}\Vert ^2 | \mathcal{F}_n] = O \left(\norm{\Delta_n}^4 + \gamma_{n+1}^2 \right), \] and, by the same bounds together with Jensen's inequality, $\mathbb{E}[\Vert e_{n+1}\Vert | \mathcal{F}_n] = O(\norm{\Delta_n}^2 + \gamma_{n+1})$. \end{proof} Note that sharper bounds may be obtained under higher-order assumptions on the moments of the noise.
This would provide a sharp constant for the leading $O(\frac{1}{n})$ asymptotic term when the step-size $\gamma_n=\frac{1}{\sqrt{n}}$ is used. \subsection{Proofs in \mysec{pfsketch2}} \label{sec:conv_rates} Here we provide proofs, in the Euclidean setting, of both asymptotic and non-asymptotic Polyak-Ruppert-type averaging results. We apply these results to the tangent vectors $\Delta \in T_{x_\star}\mathcal{M}$ as described in \mysec{pfsketch1}. \subsubsection{Asymptotic Convergence} Throughout this section, we will consider a general linear recursion perturbed by a remainder term $e_n$ of the form: \begin{align} \Delta_{n}=\Delta_{n-1} -\gamma_n A \Delta_{n-1}+ \gamma_n (\varepsilon_n+\xi_{n}+e_{n}), \label{eq:rec_with_error1} \end{align} for which we will show an asymptotic convergence result under appropriate conditions. Note that we eventually apply these convergence results to iterates $\Delta_n \in T_{x_\star} \mathcal{M}$, which is a finite-dimensional vector space equipped with an inner product; probability measures on such a space possess covariance operators that implicitly depend on the inner product (via the dual map). We make the following assumptions on the structure of the recursion: \begin{assumption} \label{assump:psd} $A$ is a symmetric positive-definite matrix.
\end{assumption} \begin{assumption} \label{assump:noise1} The noise process $\{ \varepsilon_{n} \}$ is a martingale-difference process (with $\mathbb{E}[\varepsilon_n\vert\mathcal F_{n-1}]=0$ and $\sup_n \mathbb{E}[\Vert \varepsilon_n \Vert^2] < \infty$), for which there exists $C>0$ such that $\mathbb{E} [\Vert \varepsilon_n \Vert^4 | \mathcal{F}_{n-1}] \leq C$ for all $n \geq 0$ and a matrix $\Sigma \succ 0$ such that \[\mathbb{E}[\varepsilon_n\varepsilon_n^\top\vert\mathcal F_{n-1}]\overset{P}{\to} \Sigma.\] \end{assumption} \begin{assumption} \label{assump:noise2} The noise process $\{ \xi_{n} \}$ is a martingale-difference process (with $\mathbb{E}[\xi_n\vert\mathcal F_{n-1}]=0$ and $\sup_n \mathbb{E}[\Vert \xi_n \Vert^2] < \infty$), and for sufficiently large $n \geq N$, there exists $K > 0$ such that \[ \mathbb{E}[\Vert \xi_n\Vert^2\vert\mathcal F_{n-1}] \leq K\gamma_n \text{ a.s. } \] with $\gamma_n \to 0$ as $n \to \infty$. \end{assumption} \begin{assumption} \label{assump:remainder} For $n \geq 0$ \[ \mathbb{E}[\Vert e_n \Vert] = O(\gamma_n). \] \end{assumption} \begin{assumption}\label{assump:step_size} $\gamma_n \to 0$, $\frac{\gamma_n-\gamma_{n-1}}{\gamma_n} = o(\gamma_n)$ and $\sum_{j=1}^{\infty} \frac{\gamma_j}{\sqrt{j}} < \infty$. \end{assumption} The first two conditions in Assumption \ref{assump:step_size} require that $\gamma_n$ decrease sufficiently slowly. For example, $\gamma_n = \gamma n^{-\alpha}$ with $ \frac{1}{2} < \alpha < 1$ satisfies these two conditions, but the sequence $\gamma_n = \gamma n^{-1}$ does not. We can now derive the asymptotic convergence rate: \begin{theorem} \label{thm:asymp_ave} Let Assumptions \ref{assump:psd}, \ref{assump:noise1}, \ref{assump:noise2}, \ref{assump:remainder} and \ref{assump:step_size} hold for the perturbed linear recursion in Equation \eqref{eq:rec_with_error1}. Then, \[ \sqrt n \bar{\Delta}_n \overset{D}{\to} \mathcal N (0, A^{-1}\Sigma A^{-1}).
\] \end{theorem} \begin{proof} The argument mirrors the proof of Theorem 2 in \citet{polyak1992acceleration} so we only sketch the primary points. Throughout we will use $C$ to denote an unimportant, global constant that may change line to line. Consider the purely linear recursion of the form: \begin{align} & \Delta^1_{n}=\Delta^1_{n-1} -\gamma_n A \Delta^1_{n-1}+ \gamma_n (\varepsilon_n+\xi_{n}) \label{eq:linear_rec} \\ & \bar{\Delta}^1_{n} = \frac{1}{n} \sum_{i=0}^{n-1} \Delta^1_{i}, \nonumber \end{align} which satisfies $\Delta^1_{0} = \Delta_{0}$, and approximates the perturbed recursion in Equation \eqref{eq:rec_with_error1}, \begin{align} & \Delta_{n}=\Delta_{n-1} -\gamma_n A \Delta_{n-1}+ \gamma_n (\varepsilon_n+\xi_{n}) + \gamma_n e_{n} \label{eq:linear_rec_perturb} \\ & \bar{\Delta}_{n} = \frac{1}{n} \sum_{i=0}^{n-1} \Delta_{i}. \nonumber \end{align} Now, note that we can show that $\lim_{K \to \infty} \limsup_{n} \mathbb{E} \left[\Vert \varepsilon_n \Vert^2 \mathbb{I}[\Vert \varepsilon_n \Vert > K]|\mathcal{F}_{n-1} \right] = 0$ in probability, using our (conditional) 4th-moment bound and the (conditional) Cauchy-Schwarz/Markov inequalities, so the relevant assumption in \citet{polyak1992acceleration} is satisfied. Then as the argument in Part 3 of the proof of Theorem 2 in \citet{polyak1992acceleration} shows, under Assumptions \ref{assump:psd}, \ref{assump:noise1}, \ref{assump:noise2} the conditions of Proposition (a) of Theorem 1 in \citet{polyak1992acceleration} also hold. This implies the linear process satisfies: \[ \sqrt n \bar{\Delta}^1_n \overset{D}{\to} \mathcal N (0, A^{-1}\Sigma A^{-1}). \notag \] We now argue that the processes $\bar{\Delta}^1_n$ and $\bar{\Delta}_n$ are asymptotically equivalent in distribution.
First, since the noise process is coupled between Equations \ref{eq:linear_rec} and \ref{eq:linear_rec_perturb}, the differenced process obeys a simple (perturbed) linear recursion, \[ \Delta_n - \Delta^1_{n} = (I-\gamma_n A)(\Delta_{n-1}-\Delta^1_{n-1}) + \gamma_n e_n. \notag \] Expanding and averaging this recursion (defining $\delta_n = \bar{\Delta}_n-\bar{\Delta}^1_{n}$) gives: \begin{align} & \Delta_n-\Delta^1_{n} = \sum_{j=1}^{n} \Pi_{i=j+1}^{n}(I-\gamma_i A) \gamma_j e_j \implies \delta_n = \frac{1}{n} \sum_{k=1}^{n-1} \sum_{j=1}^{k}[\Pi_{i=j+1}^{k}(I-\gamma_i A)] \gamma_j e_j \notag \\ \implies & \delta_n = \frac{1}{n} \sum_{j=1}^{n-1} \left[ \sum_{k=j}^{n-1} \Pi_{i=j+1}^{k}(I-\gamma_i A) \right] \gamma_j e_j. \notag \end{align} We can rewrite the recursion for this averaged differenced process as: \[ \sqrt{n}\delta_n = \frac{1}{\sqrt{n}} \sum_{j=1}^{n-1}(A^{-1} + w_j^n)e_j, \] defining, \[ w_j^n = \gamma_j \sum_{i=j}^{n-1} \Pi_{k=j+1}^{i} (I-\gamma_k A) - A^{-1}. \] Now if the step-size sequence satisfies the first two conditions of Assumption \ref{assump:step_size}, by Lemmas 1 and 2 in \citet{polyak1992acceleration} we have that $\Vert w_j^n \Vert \leq C$ uniformly. So using Assumption \ref{assump:psd} we obtain that: \[ \sum_{j=1}^{\infty} \frac{1}{\sqrt{j}} \Vert (A^{-1}+w_j^n) e_j\Vert \leq C \sum_{j=1}^{\infty} \frac{1}{\sqrt{j}} \Vert e_j \Vert. \] An application of the Tonelli-Fubini theorem and Assumption \ref{assump:remainder} then shows that \[ \mathbb{E}[\sum_{j=1}^{\infty} \frac{1}{\sqrt{j}} \Vert e_j \Vert] = \sum_{j=1}^{\infty} \frac{1}{\sqrt{j}} \mathbb{E} [\Vert e_j \Vert] \leq C \sum_{j=1}^{\infty} \frac{\gamma_j}{\sqrt{j}} < \infty, \] by choice of the step-size sequence in Assumption \ref{assump:step_size}.
Since $\sum_{j=1}^{\infty} \frac{1}{\sqrt{j}} \Vert e_j \Vert \geq 0$ and has finite expectation it must be that, \[ \sum_{j=1}^{\infty} \frac{1}{\sqrt{j}} \Vert e_j \Vert < \infty \implies \sum_{j=1}^{\infty} \frac{1}{\sqrt{j}}\Vert (A^{-1} + w_j^n)e_j \Vert < \infty. \] An application of the Kronecker lemma then shows that \[ \frac{1}{\sqrt{n}} \sum_{j=1}^{n-1} \Vert (A^{-1} + w_j^n)e_j \Vert \to 0 \implies \sqrt{n} \delta_n \to 0 \text{ a.s.} \] The conclusion of the theorem follows by Slutsky's theorem. \end{proof} \subsubsection{Nonasymptotic Convergence} Throughout this section, we will consider a general linear recursion perturbed by a remainder term $e_n$ of the form: \begin{align} \Delta_{n}=\Delta_{n-1} -\gamma_n A \Delta_{n-1}+ \gamma_n (\varepsilon_n+\xi_{n}+e_{n}). \label{eq:rec_with_error2} \end{align} We make the following assumptions on the structure of the recursion: \begin{assumption} \label{assump:psd2} $A$ is a symmetric positive-definite matrix such that $A\succcurlyeq \mu I$ for $\mu > 0$. \end{assumption} \begin{assumption} \label{assump:noise1na} The noise process $\{ \varepsilon_{n} \}$ is a martingale-difference process (with $\mathbb{E}[\varepsilon_n\vert\mathcal F_{n-1}]=0$ and $\sup_n \mathbb{E}[\Vert \varepsilon_n \Vert^2] < \infty$), and there exists a matrix $\Sigma \succ 0$ such that \[\mathbb{E}[\varepsilon_n\varepsilon_n^\top\vert\mathcal F_{n-1}] \preccurlyeq \Sigma.\] \end{assumption} \begin{assumption} \label{assump:noise2na} The noise process $\{ \xi_{n} \}$ is a martingale-difference process (with $\mathbb{E}[\xi_n\vert\mathcal F_{n-1}]=0$ and $\sup_n \mathbb{E}[\Vert \xi_n \Vert^2] < \infty$), and there exists $K > 0$ such that for $n\geq 0$ \[ \mathbb{E}[\Vert \xi_n\Vert^2\vert\mathcal F_{n-1}] \leq K\gamma_n \text{ a.s. } \] \end{assumption} \begin{assumption} \label{assump:remainderna} There exists $M$ such that for $n \geq 0$, $\mathbb{E}[\Vert e_n \Vert^2] \leq M \gamma_n^2$.
\end{assumption} \begin{assumption} \label{assump:step_sizena} The step-sizes take the form $\gamma_n=\frac{C}{n^\alpha}$ for $C>0$ and $\alpha\in[1/2,1)$. \end{assumption} \begin{assumption} \label{assump:slowrate2} There exists $C' > 0$ such that for $n \geq 0$, we have that \[\sqrt{\mathbb{E}[\norm{\Delta_n}^2]} \leq C' n^{-\alpha/2} = O(\sqrt{\gamma_n}). \] \end{assumption} Using these assumptions we can derive the non-asymptotic convergence rate: \begin{theorem} \label{thm:nonasymp_ave} Let Assumptions \ref{assump:psd2}, \ref{assump:noise1na}, \ref{assump:noise2na}, \ref{assump:remainderna}, \ref{assump:step_sizena} and \ref{assump:slowrate2} hold for the recursion in Equation \eqref{eq:rec_with_error2}. Then, \[ \mathbb{E}[\Vert \bar{\Delta}_n \Vert ^2] \leq \frac{1}{n} \tr [A^{-1} \Sigma A^{-1}] + O(n^{-2\alpha}) + O(n^{\alpha-2}). \] \end{theorem} \begin{proof} The argument mirrors the proof of Theorem 3 in \citet{moulines2011non} so we only sketch the key points. First, since $A$ is invertible due to Assumption \ref{assump:psd2}, from Equation \eqref{eq:rec_with_error2}: \[ \Delta_{n-1}= \frac{A^{-1}(\Delta_{n-1}-\Delta_{n})}{\gamma_{n}} + A^{-1}\varepsilon_{n}+A^{-1}\xi_{n}+A^{-1}e_{n}. \] We now analyze the average of each of the four terms separately. Throughout we will use $C$ to denote an unimportant, numerical constant that may change line to line.
\begin{itemize} \item Summing the first term by parts we obtain, \[ \frac{1}{n}\sum_{k=1}^n\frac{A^{-1} (\Delta_{k-1}-\Delta_{k})}{\gamma_{k}}=\frac{1}{n}\sum_{k=1}^{n-1}A^{-1}\Delta_k \left(\frac{1}{\gamma_{k+1}}-\frac{1}{\gamma_{k}}\right)-\frac{1}{n\gamma_n}A^{-1}\Delta_n+\frac{1}{n\gamma_1}A^{-1}\Delta_0, \] and using Minkowski's inequality (in $L_2$) gives, \[ \sqrt{\mathbb{E} \norm{ \frac{1}{n}\sum_{k=1}^n\frac{A^{-1}(\Delta_{k-1}-\Delta_{k})}{\gamma_{k}} }^2} \leq \frac{1}{n \mu} \sum_{k=1}^{n-1}\sqrt{\mathbb{E} \Vert \Delta_k\Vert^2} \abs{\frac{1}{\gamma_{k+1}}-\frac{1}{\gamma_{k}}} +\frac{\sqrt{\mathbb{E} \Vert \Delta_n\Vert^2}}{n\gamma_n \mu } +\frac{\Vert \Delta_0 \Vert}{n\gamma_1 \mu}. \] Since we choose a sequence of decreasing step-sizes of the form $\gamma_n=\frac{C}{n^{\alpha}}$ for $\alpha \in [\frac{1}{2}, 1)$, an application of the Bernoulli inequality shows that $\vert \gamma_{k+1}^{-1}-\gamma_{k}^{-1}\vert = C^{-1} [(k+1)^\alpha-k^\alpha]\leq C^{-1} \alpha k^{\alpha-1}$. By Assumption \ref{assump:slowrate2}, we have that $\sqrt{\mathbb{E} \Vert \Delta_n\Vert^2}\leq Cn^{-\alpha/2}$ so, \begin{eqnarray*} \sqrt{\mathbb{E} \norm{\frac{1}{n}\sum_{k=1}^n\frac{A^{-1}(\Delta_{k-1}-\Delta_{k})}{\gamma_{k}} }^2} &\leq& \frac{C\alpha }{n \mu } \sum_{k=1}^{n-1} k^{\alpha/2-1}+\frac{C}{\mu} n^{\alpha/2-1}+\frac{C}{n \mu}\Vert \Delta_0 \Vert \\ &\leq& \frac{2C n^{\alpha/2-1}}{ \mu }+\frac{Cn^{\alpha/2-1}}{\mu} +\frac{C\Vert \Delta_0 \Vert}{n \mu}\\ & \leq &\frac{3Cn^{\alpha/2-1}}{\mu} +\frac{C\Vert \Delta_0 \Vert}{n \mu}. \end{eqnarray*} This implies that, \[ \mathbb{E} \norm{ \frac{1}{n}\sum_{k=1}^n\frac{A^{-1}(\Delta_{k-1}-\Delta_{k})}{\gamma_{k}} }^2 = O(n^{\alpha-2}). \] \item Using Assumption \ref{assump:noise1na} and the orthogonality of martingale increments we immediately obtain the leading order term as, \[ \mathbb{E} \Vert A^{-1} \bar \varepsilon_n\Vert ^2\leq\frac{1}{n}\tr [A^{-1} \Sigma A^{-1}].
\] \item Using Assumption \ref{assump:noise2na} and the orthogonality of martingale increments we obtain, \[ \mathbb{E} \Vert A^{-1} \bar \xi_n \Vert^2 \leq \frac{1}{n^2 \mu^2} \sum_{k=1}^n \mathbb{E} \Vert \xi_k \Vert^2 \leq \frac{C}{n^2 \mu^2} \sum_{k=1}^{n} k^{-\alpha} = O(n^{-(\alpha+1)}). \] \item Using the Minkowski inequality (in $L_2$), and Assumption \ref{assump:remainderna}, we have that \[ \mathbb{E} \Vert A^{-1} \bar e_{n}\Vert^2 \leq \left(\frac{1}{n\mu}\sum_{k=1}^n \sqrt{\mathbb{E} \Vert e_k \Vert^2 }\right)^2\leq \frac{M}{(n \mu)^2} \left(\sum_{k=1}^n \gamma_k\right)^2 = O(n^{-2\alpha}). \] \end{itemize} The conclusion follows. \end{proof} \subsubsection{On the Riemannian Center of Mass} \label{sec:com} Note that $\bar{\Delta}_n$ is \textit{not} computable, but has an interesting interpretation as an upper bound on the Riemannian center of mass (or Karcher mean), \[ K_n = \arg \min_{x \in \mathcal{M}} \frac{1}{n} \sum_{i=1}^{n} \norm{R_{x}^{-1}(x_i)}^2 \] of a set of iterates $\{ x_i \}_{i=1}^{n}$ in $\mathcal{M}$. When it exists, computing $K_n$ is itself a nontrivial geometric optimization problem since it does not admit a closed-form solution in general. See \citet[][]{moakher2002means,bini2013computing,hosseini2015matrix} for more background on the Karcher mean problem. If we consider a ``symmetric'' retraction $R$ satisfying for $x,y\in\mathcal{X}$ that $\Vert R_{x}^{-1}(y)\Vert^2=\Vert R_{y}^{-1}(x)\Vert^2$ (which is the case for the exponential map, for example), then the following holds: \begin{lemma} \label{lem:karcher_mean} Let $\{ x_i \}_{i=0}^{n}$ be a sequence of iterates contained in $\mathcal{M}$ and let the retraction $R$ be symmetric, then \[\Vert R_{x_\star}^{-1}(K_n)\Vert^2\leq2\Vert\bar \Delta_n\Vert^2.\] \end{lemma} \begin{proof} The first-order optimality condition requires that $\nabla D(K_n) = 0$, where the manifold gradient of the objective above is given (up to an immaterial constant factor) by $\nabla D(x) = \frac{1}{n} \sum_{i=1}^{n} R_{x}^{-1}(x_i)$.
Thus, $\nabla D(x_\star)=\bar \Delta_n$. By Assumption \ref{assump:manifold}, the function $D$ is $1$-retraction strongly convex. Defining the function $g: t\mapsto D \left(R_{x_\star} (t \frac{R_{x_\star}^{-1}(K_n)}{\Vert R_{x_\star}^{-1}(K_n)\Vert })\right)$, we have at $t_0= \Vert R_{x_\star}^{-1}(K_n)\Vert$ that $g'(t_0)=0$ by the optimality of $K_n$, while $\vert g'(0) \vert \leq \Vert \nabla D(x_\star)\Vert$, so that by strong convexity \[ 2\Vert \nabla D(x_\star)\Vert^2 \geq 2(g'(t_0)-g'(0))^2 \geq {t_0^2} =\Vert R_{x_\star}^{-1}(K_n)\Vert^2. \] \end{proof} Therefore the Riemannian center of mass will enjoy the same convergence rate as $\bar \Delta_n$ itself. \subsection{Proofs in \mysec{pfsketch3}} Finally, we would like to asymptotically understand the evolution of the averaged vector $\tilde{\Delta}_n = R^{-1}_{x_\star}(\tilde{x}_n)$, where $\tilde{x}_n$ is the online, streaming iterate average. From \eq{ave_grad_desc} we see that $\tilde{\Delta}_{n+1} = F_{\tilde{x}_n, x_\star}[\frac{1}{n+1} F^{-1}_{\tilde{x}_n, x_\star}(\Delta_{n+1})] = \tilde{F}(\Delta_{n+1})$, defining $\tilde{F}(\cdot) = F_{\tilde x_n,x_\star}[\frac{1}{n+1} F_{\tilde x_n,x_\star}^{-1}(\cdot)]$. We first start with a lemma controlling $\Vert \tilde{\Delta}_n \Vert$, when $x_n$ locally converges to $x_\star$. \begin{lemma} \label{lem:avg_iters} Let Assumptions \ref{assump:slowrate} and \ref{assump:manifold} hold. Consider sequences of iterates $x_n$ and $\tilde{x}_n$ evolving as in \eq{grad_desc} and \eq{ave_grad_desc}, and define $\tilde {\Delta}_{n} = R_{x_\star}^{-1}(\tilde {x}_{n})$. Then, $\mathbb{E} [\Vert \tilde{\Delta}_n \Vert^2] = O(\gamma_n)$ as well. \end{lemma} \begin{proof} By Assumption \ref{assump:manifold}, the function $x \to \Vert R_{x_\star}^{-1}(x) \Vert^2$ is retraction convex in $x$.
Then, \begin{align} \Vert R_{x_\star}^{-1}(\tilde{x}_n) \Vert^2 = \Vert R_{x_\star}^{-1} \left(R_{\tilde{x}_{n-1}}(\frac{1}{n} R^{-1}_{\tilde{x}_{n-1}}(x_n)) \right) \Vert^2 \leq \frac{1}{n} \Vert R_{x_\star}^{-1} \left(x_{n}\right) \Vert^2 + \frac{n-1}{n} \Vert R_{x_\star}^{-1} \left(\tilde{x}_{n-1}\right) \Vert^2. \notag \end{align} A simple inductive argument then shows that $\Vert R_{x_\star}^{-1}(\tilde{x}_n) \Vert^2 \leq \frac{1}{n} \sum_{i=1}^{n} \Vert R^{-1}_{x_\star}(x_i)\Vert^2$. Using that $\mathbb{E} \Vert \Delta_n \Vert^2 = O(\gamma_n)$ (from Assumption \ref{assump:slowrate}), and taking expectations shows $\mathbb{E}[\Vert \tilde{\Delta}_n \Vert^2] \leq \frac{C}{n} \sum_{i=1}^{n} \gamma_i \leq C \gamma_n$ when we choose a step-size sequence of the form $\gamma_n = \frac{C}{n^{\alpha}}$. \end{proof} Finally, using an asymptotic expansion we can show that $\tilde{\Delta}_n$ and $\bar{\Delta}_n$ approach each other: \begin{lemma} \label{lem:stream_avg_iters} Let Assumptions \ref{assump:slowrate} and \ref{assump:manifold} hold. As before, consider sequences of iterates $x_n$ and $\tilde{x}_n$ evolving as in \eq{grad_desc} and \eq{ave_grad_desc}, and define $\tilde {\Delta}_{n} = R_{x_\star}^{-1}(\tilde {x}_{n})$. Then, \[ \tilde{\Delta}_n=\bar{\Delta}_n + e_n, \] where $\mathbb{E} [\Vert e_n \Vert] = O(\gamma_n)$. \end{lemma} \begin{proof} A similar chain rule computation to Lemma \ref{lem:tangent_rec} shows that $d \tilde{F}(\tilde{\Delta}_n) = \frac{1}{n+1}I_{T_{x_\star}\mathcal{M}}$. Now, in addition to $\tilde {\Delta}_{n+1} = F_{\tilde{x}_n, x_\star}[\frac{1}{n+1} F^{-1}_{\tilde{x}_n, x_\star}(\Delta_{n+1})] = \tilde{F}(\Delta_{n+1})$, we also have that $\tilde{\Delta}_n = \tilde{F}(\tilde{\Delta}_n)$ identically.
As $\tilde{F}(\cdot)$ is a mapping between vector spaces, applying a Taylor expansion to the first expression about $\tilde{\Delta}_n$ gives: \begin{align} \tilde{\Delta}_{n+1} & = \tilde {\Delta}_{n} + \frac{1}{n+1}(\Delta_{n+1}-\tilde{\Delta}_n) + O(\Vert D^2 \tilde{F}(\Delta) \Vert \, \Vert \Delta_{n+1}-\tilde{\Delta}_n \Vert^2), \end{align} for some $\Delta \in R_{x_\star}^{-1}(\mathcal{X})$ on the segment between $\tilde{\Delta}_n$ and $\Delta_{n+1}$. Since $\tilde{F}$ is twice-continuously differentiable and $R_{x_\star}^{-1}(\mathcal{X})$ is compact, direct computation of the Hessian using the chain and Leibniz rules shows \begin{align} \tilde{e}_n = O \left((n+1) \Vert D^2 \tilde{F}(\Delta) \Vert \, \Vert \Delta_{n+1}-\tilde{\Delta}_n \Vert^2 \right) = O\left((n+1) \left(\frac{1}{(n+1)^2} + \frac{1}{n+1} \right) \cdot \Vert \Delta_{n+1} - \tilde{\Delta}_n \Vert^2 \right), \notag \end{align} which implies that \[ \mathbb{E} \Vert \tilde{e}_n \Vert= O({\gamma_n}),\] since both $\mathbb{E} [\Vert \Delta_n \Vert^2] = O(\gamma_n)$ and $\mathbb{E} [\Vert \tilde{\Delta}_n\Vert^2] = O(\gamma_n)$ by Lemma \ref{lem:avg_iters}. Therefore $ (n+1) \tilde{\Delta}_{n+1} = n \tilde{\Delta}_{n}+ \Delta_{n+1}+\tilde e_n = \sum_{k=1}^{n+1}\Delta_k + \sum_{k=0}^{n} \tilde e_k \implies \tilde{\Delta}_{n+1} = \bar{\Delta}_{n+1} + e_{n+1}$ where $e_{n+1}=\frac{\sum_{k=0}^{n} \tilde e_k}{n+1}$, and $\mathbb{E}[\Vert e_{n+1} \Vert]= \mathbb{E}\big[\big\Vert \frac{\sum_{k=0}^{n} \tilde e_k}{n+1} \big\Vert\big] \leq \frac{1}{n+1} \sum_{k=0}^{n} \mathbb{E}[\Vert \tilde e_k \Vert] = O(\gamma_n)$. \end{proof} This result states that the streaming average $\tilde{\Delta}_n = R_{x_\star}^{-1}(\tilde{x}_n)$ is close to the computationally intractable average $\bar{\Delta}_n$, up to an $O(\gamma_n)$ error. We can prove a slightly stronger statement under a 4th-moment assumption on the iterates that follows identically to the above.
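The closeness of the streaming Riemannian average to the tangent-space average asserted in Lemma \ref{lem:stream_avg_iters} is easy to observe numerically. The sketch below (a toy setup of our own choosing, not the paper's experiments) scatters points near a reference $x_\star$ on the unit sphere, forms the streaming average of \eq{ave_grad_desc} with the exponential map as retraction, and compares $\tilde{\Delta}_n = R_{x_\star}^{-1}(\tilde{x}_n)$ to the intractable average $\bar{\Delta}_n$.

```python
import numpy as np

def sphere_exp(x, v):
    nv = np.linalg.norm(v)
    return x.copy() if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * v / nv

def sphere_log(x, y):
    w = y - np.dot(x, y) * x
    nw = np.linalg.norm(w)
    if nw < 1e-12:
        return np.zeros_like(x)
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0)) * w / nw

rng = np.random.default_rng(1)
x_star = np.array([0.0, 0.0, 1.0])

def point_near(radius):
    # random point within `radius` of x_star (a stand-in for an SGD iterate)
    v = rng.normal(size=3); v -= (v @ x_star) * x_star
    v *= radius * rng.uniform(0.2, 1.0) / np.linalg.norm(v)
    return sphere_exp(x_star, v)

xs = [point_near(0.01) for _ in range(200)]

# streaming Riemannian average: x~_n = R_{x~_{n-1}}((1/n) R_{x~_{n-1}}^{-1}(x_n))
x_tilde = xs[0]
for n, xn in enumerate(xs[1:], start=2):
    x_tilde = sphere_exp(x_tilde, sphere_log(x_tilde, xn) / n)

# intractable tangent-space average: bar{Delta}_n = (1/n) sum_i log_{x_star}(x_i)
bar_delta = np.mean([sphere_log(x_star, xi) for xi in xs], axis=0)
tilde_delta = sphere_log(x_star, x_tilde)

# the two averages agree up to terms that are higher order in the spread
assert np.linalg.norm(tilde_delta - bar_delta) < 1e-3
```

Note that the streaming average only ever uses quantities computable at the iterates themselves, while `bar_delta` requires knowledge of $x_\star$.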
\begin{lemma} \label{lem:stream_avg_iters_4mom} Let Assumption \ref{assump:manifold} hold, and assume the 4th-moment bound $\mathbb{E}[\Vert \Delta_n \Vert^4] = O(\gamma_n^2)$. As before, consider sequences of iterates $x_n$ and $\tilde{x}_n$ evolving as in \eq{grad_desc} and \eq{ave_grad_desc}, and define $\tilde {\Delta}_{n} = R_{x_\star}^{-1}(\tilde {x}_{n})$. Then, \[ \mathbb{E} \left[ \Vert \tilde{\Delta}_n-\bar{\Delta}_n \Vert^2 \right] = O(\gamma_n^2). \] \end{lemma} \begin{proof} The proof is almost identical to the proofs of Lemmas \ref{lem:avg_iters} and \ref{lem:stream_avg_iters} so we will be brief. Since the function $x \to x^2$ is convex and nondecreasing on $[0, \infty)$, using Assumption~\ref{assump:manifold}, the composition $x \to \Vert R_{x_\star}^{-1}(x) \Vert^4$ is also retraction-convex in $x$. An identical argument to the proof of Lemma \ref{lem:avg_iters} then shows that $\mathbb{E}[\Vert \Delta_n \Vert^4] = O(\gamma_n^2)$ implies $\mathbb{E}\left[\Vert \tilde{\Delta}_n \Vert^4\right] = O(\gamma_n^2)$. Using that $\mathbb{E}\left[\Vert \tilde{\Delta}_n \Vert^4\right] = O(\gamma_n^2)$, a nearly identical calculation to Lemma \ref{lem:stream_avg_iters} and an application of Minkowski's inequality (in $L_2$) shows that $\sqrt{\mathbb{E} \left[\Vert \bar{\Delta}_n - \tilde{\Delta}_n \Vert^2\right]} = O(\gamma_n)$. The conclusion follows. \end{proof} \section{Proof Sketch} \label{sec:pfsketch} We provide an overview of the arguments that comprise the proof of Theorem \ref{thm:main} (full details are deferred to \myapp{app_pfsketch}). We highlight three key steps. First, since we assume the iterates $x_n$ produced from SGD converge to within $\sim O(\sqrt{\gamma_n})$ of $x_\star$, we can perform a Taylor expansion of the recursion in \eq{grad_desc} to relate the points $x_n$ on the manifold $\mathcal{M}$ to vectors $\Delta_n$ in the tangent space $T_{x_\star}\mathcal{M}$.
This generates a (perturbed) linear recursion governing the evolution of the vectors $\Delta_n \in T_{x_\star} \mathcal{M}$. Recall that as $x_\star$ is unknown, $\Delta_n$ is not accessible, but is primarily a tool for our analysis. Second, we can show a fast $O(\frac{1}{n})$ convergence rate for the averaged vectors $\bar{\Delta}_n \in T_{x_\star} \mathcal{M}$, using techniques from the Euclidean setting. Finally, we once again use a local expansion of \eq{ave_grad_desc} to connect the averaged tangent vectors $\bar \Delta_n$ to the streaming, Riemannian average $\tilde \Delta_n$---transferring the fast rate for the inaccessible vector $\bar{\Delta}_n$ to the computable point $\tilde x_n$. Throughout our analysis we extensively use Assumption~\ref{assump:manifold}, which restricts the iterates $x_n$ to the subset $\mathcal{X}$. \subsection{From $\mathcal{M}$ to $T_{x_\star}\mathcal{M}$ } \label{sec:pfsketch1} We begin by linearizing the progress of the SGD iterates $x_n$ in the tangent space of $x_\star$ by considering the evolution of $\Delta_n = R_{x_\star}^{-1}(x_n)$. \begin{itemize} \item First, as the $\Delta_n$ are all defined in the vector space $T_{x_\star} \mathcal{M}$, Taylor's theorem applied to $R_{x_\star}^{-1} \circ R_{x_n}:T_{x_n} \mathcal{M} \to T_{x_\star} \mathcal{M}$ along with \eq{grad_desc} allows us to conclude that \[ \Delta_{n+1} = \Delta_n - \gamma_{n+1} [\te{x_\star}{x_n}]^{-1} (\nabla f_{n+1}(x_n)) + O(\gamma_{n+1}^2). \] See Lemma \ref{lem:tangent_rec} for more details. 
\item Second, we use the manifold version of Taylor's theorem and appropriate Lipschitz conditions on the gradient to further expand the gradient term $ \tp{x_n}{x_\star} \nabla f_{n+1}(x_n)$ as \[ \tp{x_n}{x_\star} \nabla f_{n+1}(x_n)=\nabla^2 f(x_\star)\Delta_n + \nabla f_{n+1}(x_\star)+ \xi_{n+1}+O(\Vert \Delta_n\Vert^2), \] where the noise term is controlled as $\mathbb{E}[\ \xi_{n+1}\vert\mathcal F_{n}]=0$, and $\mathbb{E}[\Vert \xi_{n+1}\Vert^2 \vert\mathcal F_{n}]=O( \Vert\Delta_n\Vert^2)$. See Lemma \ref{lem:tangent_rec_2} for more details. \item Finally, we argue that the operator $ [\te{x_\star}{x_n}]^{-1}\tp{x_\star}{x_n} : T_{x_\star}\mathcal{M} \to T_{x_\star}\mathcal{M}$ is a local isometry up to second-order terms, \[ [\te{x_\star}{x_n}]^{-1}\tp{x_\star}{x_n} = I + O(\norm{\Delta_n}^2), \] which crucially rests on the fact $R$ is a second-order retraction. See Lemma \ref{lem:tangent_rec_3} for more details. \item Assembling the aforementioned lemmas allows us to derive a (perturbed) linear recursion, governing the tangent vectors $\{ \Delta_n \}_{n \geq 0}$ as \begin{equation} \label{eq:final_proof_sketch} \Delta_{n+1} = \Delta_n - \gamma_{n+1} \nabla^2 f(x_\star) \Delta_n -\gamma_{n+1} \nabla f_{n+1}(x_\star) -\gamma_{n+1}\xi_{n+1} + O(\norm{\Delta_n}^2\gamma_n + \gamma_n^2). \end{equation} See Theorem \ref{thm:linear} for more details. \end{itemize} \subsection{Averaging in $T_{x_\star} \mathcal{M}$} \label{sec:pfsketch2} Our next step is to prove both asymptotic and non-asymptotic convergence rates for a general, perturbed linear recursion (resembling \eq{final_proof_sketch}) of the form, \begin{align} \Delta_{n}=\Delta_{n-1} -\gamma_n \nabla^2 f(x_\star) \Delta_{n-1}+ \gamma_n (\varepsilon_n+\xi_{n}+e_{n}),\label{eq:rec_with_error} \end{align} under appropriate assumptions on the error $\{ e_n \}_{n \geq 0}$ and noise $\{ \varepsilon_n \}_{n \geq 0}$, $\{ \xi_n \}_{n \geq 0}$ sequences detailed in \myapp{conv_rates}. 
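In the Euclidean special case ($\mathcal{M} = \mathbb{R}^d$, $R_x(v) = x + v$), the perturbed linear recursion and the benefit of averaging can be checked directly. The following sketch (illustrative constants of our own choosing) runs the recursion with $\gamma_n = C n^{-1/2}$ and verifies that the averaged iterate $\bar{\Delta}_n$ concentrates at a much smaller scale than the last iterate's $O(\sqrt{\gamma_n})$ fluctuations, consistent with the $\frac{1}{n}\tr[A^{-1}\Sigma A^{-1}]$ leading term.

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.diag([1.0, 4.0])            # stand-in for the Hessian nabla^2 f(x_star)
sigma = 0.1                        # scale of the martingale-difference noise eps_n
n_steps = 20000

delta = np.array([1.0, -1.0])      # Delta_0
running_sum = np.zeros(2)
for n in range(1, n_steps + 1):
    gamma = 0.5 / np.sqrt(n)       # gamma_n = C n^{-alpha}, alpha = 1/2
    eps = sigma * rng.normal(size=2)
    delta = delta - gamma * (A @ delta) + gamma * eps
    running_sum += delta

bar_delta = running_sum / n_steps

# the last iterate fluctuates at scale ~ sqrt(gamma_n) * sigma, while the
# average is driven down toward the O(1/sqrt(n)) central-limit scale
assert np.linalg.norm(bar_delta) < 0.05
```

Here the remainder terms $\xi_n, e_n$ are set to zero; under the stated assumptions they only contribute lower-order terms to the bound.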
Under these assumptions we can derive an asymptotic rate for the average, $\bar{\Delta}_n = \frac{1}{n}\sum_{i=1}^{n} \Delta_i$, under a first-moment condition on $e_n$: \[ \sqrt n \bar{\Delta}_n \overset{D}{\to} \mathcal N (0, \nabla^2 f(x_\star)^{-1}\Sigma \nabla^2 f(x_\star)^{-1}), \] and, under a slightly stronger second-moment condition on $e_n$ we have: \[ \mathbb{E}[\Vert \bar{\Delta}_n \Vert ^2] \leq \frac{1}{n} \tr [\nabla^2 f(x_\star)^{-1} \Sigma \nabla^2 f(x_\star)^{-1}] + O(n^{-2\alpha}) + O(n^{\alpha-2}), \] where $\Sigma$ denotes the asymptotic covariance of the noise $\varepsilon_n$. The proof techniques are similar to those of \citet{polyak1992acceleration} and \citet{moulines2011non} so we do not detail them here. See Theorems \ref{thm:asymp_ave} and \ref{thm:nonasymp_ave} for more details. Note that $\bar{\Delta}_n$ is \textit{not} computable, but does have an interesting interpretation as an upper bound on the Riemannian center-of-mass, $K_n = \arg \min_{x \in \mathcal{M}}\sum_{i=1}^{n} \norm{R_{x}^{-1}(x_i)}^2$, of a set of iterates $\{ x_n \}_{n \geq 0}$ in $\mathcal{M}$ \citep[see \mysec{com} and][for more details]{Afs11}. \subsection{From $T_{x_\star} \mathcal{M}$ back to $\mathcal{M}$} \label{sec:pfsketch3} Using the previous arguments, we can conclude that the averaged vector $\bar{\Delta}_n$ obeys both asymptotic and non-asymptotic Polyak-Ruppert-type results. However, $\bar{\Delta}_n$ is \textit{not} computable. Rather, $\tilde{\Delta}_n = R_{x_\star}^{-1}(\tilde{x}_n)$ corresponds to the computable, Riemannian streaming average $\tilde{x}_n$ defined in \eq{ave_grad_desc}. In order to conclude our result, we argue that $\tilde{\Delta}_n = R_{x_\star}^{-1}(\tilde{x}_n)$ and $\bar{\Delta}_n$ are close up to $O(\gamma_n)$ terms. 
The argument proceeds in two steps: \begin{itemize} \item Using the fact that $x \to \norm{R_{x_\star}^{-1}(x)}^2$ is retraction convex we can conclude that $\mathbb{E}[\norm{\Delta_n}^2] = O(\gamma_n)$ implies that $\mathbb{E}[\Vert \tilde{\Delta}_n\Vert^2] = O(\gamma_n)$ as well. See Lemma \ref{lem:avg_iters} for more details. \item Then, we can locally expand \eq{ave_grad_desc} to find that, \[ \tilde{\Delta}_{n+1} = \tilde{\Delta}_n + \frac{1}{n+1}(\Delta_{n+1}-\tilde{\Delta}_n)+\tilde{e}_n, \] where $\mathbb{E}[\Vert \tilde{e}_n\Vert] = O(\frac{\gamma_n}{n+1})$. Rearranging and summing this recursion shows that $\tilde{\Delta}_n = \bar{\Delta}_n+e_n$ for $\mathbb{E}[\Vert e_n\Vert] = O(\gamma_n)$, showing these terms are close. See Lemma \ref{lem:stream_avg_iters} for details. \end{itemize} \section{Results} \label{sec:results} We consider the optimization of a function $f$ over a compact, connected subset $\mathcal{X} \subset \mathcal{M}$, \[ \min_{x \in \mathcal{X} \subset \mathcal{M}} f(x), \] with access to a (noisy) first-order oracle $\{ \nabla f_n(x) \}_{n \geq 1}$. Given a sequence of iterates $\{x_n\}_{n\geq0}$ in~$\mathcal{M}$ produced from the first-order optimization of $f$, \begin{align} x_{n} = R_{x_{n-1}} \left(-\gamma_n \nabla f_{n} \left(x_{n-1} \right)\right), \label{eq:grad_desc} \end{align} that are converging to a \emph{strict} local minimum of $f$, denoted by $x_\star$, we consider (and analyze the convergence of) a streaming average of iterates: \begin{align} \tilde{x}_{n} = R_{\tilde{x}_{n-1}} \left(\frac{1}{n} R_{\tilde{x}_{n-1}}^{-1}\left(x_{n}\right)\right). \label{eq:ave_grad_desc} \end{align} Here we use $R_x$ to denote a retraction mapping (defined formally in \mysec{background}), which provides a natural means of moving along a vector (such as the gradient) while restricting movement to the manifold. As an example, when $\mathcal{M} = \mathbb{R}^d$ we can take $R_x$ as vector addition by $x$. 
In this setting, \eq{grad_desc} reduces to the standard gradient update $x_n = x_{n-1} - \gamma_n \nabla f_n(x_{n-1})$ and \eq{ave_grad_desc} reduces to the ordinary average $\tilde{x}_n = \tilde{x}_{n-1} + \frac{1}{n}(x_{n} - \tilde{x}_{n-1})$. In the update in \eq{grad_desc}, we will always consider step-size sequences of the form $\gamma_n = \frac{C}{n^\alpha}$ for $C>0$ and $\alpha \in \left(\frac{1}{2}, 1\right)$, which satisfy the usual stochastic approximation step-size rules $\sum_{i=1}^\infty \gamma_i=\infty$ and $\sum_{i=1}^\infty \gamma_i^2<\infty$ \citep[see, e.g.,][]{BenPriMet90}. Intuitively, our main result states that if the iterates $x_n$ converge to $x_\star$ at a slow $O(\gamma_n)$ rate, their streaming Riemannian average will converge to $x_\star$ at the optimal $O(\frac{1}{n})$ rate. This result requires several technical assumptions, which are standard generalizations of those appearing in the Riemannian optimization and stochastic approximation literatures (detailed in \mysec{assumptions}). The critical assumption we make is that all iterates remain bounded in $\mathcal{X}$---where the manifold behaves well and the algorithm is well-defined (Assumption~\ref{assump:manifold}). The notion of slow convergence to an optimum is formalized in the following assumption: \begin{assumption}\label{assump:slowrate} If $\Delta_n = R_{x_\star}^{-1}(x_n)$ for a sequence of iterates evolving in \eq{grad_desc}, then \[ \mathbb{E}[\Vert \Delta_n \Vert^2] = O(\gamma_n). \] \end{assumption} Assumption~\ref{assump:slowrate} can be verified in a variety of optimization problems, and we provide such examples in \mysec{application}. As $x_\star$ is unknown, $\Delta_n$ is not computable, but is primarily a tool for our analysis. Importantly, $\Delta_n$ is a tangent vector in $T_{x_\star} \mathcal{M}$. Note also that the norm $\Vert \Delta_n \Vert$ is locally equivalent to the geodesic distance $d(x_n,x_\star)$ on $\mathcal{M}$ (see \mysec{background}).
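The updates \eq{grad_desc} and \eq{ave_grad_desc} can be illustrated concretely on the unit sphere. The following minimal sketch (not part of our analysis; the toy objective $f(x) = -\langle a, x \rangle$ with minimizer $x_\star = a$, the projective retraction $R_x(v) = (x+v)/\Vert x+v \Vert$, and all constants are illustrative assumptions) runs Riemannian SGD with the streaming Riemannian average:

```python
# Sketch of Riemannian SGD (grad_desc) with a streaming Riemannian average
# (ave_grad_desc) on the unit sphere, for f(x) = -<a, x> minimized at x_* = a.
# Uses the projective retraction R_x(v) = (x+v)/||x+v||; illustrative only.
import math
import random

random.seed(0)
a = [1.0, 0.0, 0.0]                        # the strict minimizer x_*

def normalize(x):
    n = math.sqrt(sum(c * c for c in x))
    return [c / n for c in x]

def retract(x, v):                         # R_x(v) = (x+v)/||x+v||
    return normalize([p + q for p, q in zip(x, v)])

def retract_inv(x, y):                     # R_x^{-1}(y) = y/<x,y> - x, needs <x,y> > 0
    d = sum(p * q for p, q in zip(x, y))
    return [q / d - p for p, q in zip(x, y)]

def noisy_riem_grad(x):
    # Euclidean gradient of -<a, x> plus Gaussian noise, projected onto T_x S^2
    g = [-ai + random.gauss(0.0, 0.05) for ai in a]
    d = sum(p * q for p, q in zip(x, g))
    return [gi - d * xi for gi, xi in zip(g, x)]

x = normalize([0.5, 0.7, 0.5])             # x_0
xt = x                                     # streaming average, tilde{x}_0 = x_0
C, alpha = 0.5, 0.7                        # gamma_n = C / n^alpha, alpha in (1/2, 1)
for n in range(1, 2001):
    gamma = C / n ** alpha
    x = retract(x, [-gamma * gi for gi in noisy_riem_grad(x)])    # eq. (grad_desc)
    xt = retract(xt, [vi / n for vi in retract_inv(xt, x)])       # eq. (ave_grad_desc)
```

Both the final iterate and the streaming average end up close to $a$; the averaged sequence is visibly less noisy across reruns, in line with the variance reduction the theorem formalizes.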
We use $\Sigma$ to denote the covariance of the noisy gradients at the optimum $x_\star$. Formally, our main convergence result regarding Polyak-Ruppert averaging in the manifold setting is as follows (where Assumptions~\ref{assump:manifold} through \ref{assump:noiseLip} will be presented later): \begin{theorem} \label{thm:main} Let Assumptions \ref{assump:slowrate}, \ref{assump:manifold}, \ref{assump:strongconvpoint}, \ref{assump:HessianLip}, \ref{assump:noiseunbiased}, and \ref{assump:noiseLip} hold for the iterates evolving according to \eq{grad_desc} and \eq{ave_grad_desc}. Then $\tilde{\Delta}_n = R_{x_{\star}}^{-1}(\tilde{x}_n)$ satisfies: \begin{align} \sqrt{n} \tilde{\Delta}_n \overset{D}{\to} \mathcal{N}(0, \nabla^2 f(x_\star)^{-1} \Sigma \nabla^2 f(x_\star)^{-1}). \notag \end{align} If we additionally assume a bound on the fourth moment of the iterates---of the form $\mathbb{E}[\Vert \Delta_n \Vert^4] = O(\gamma_n^2)$---then a non-asymptotic result holds: \begin{align} \mathbb{E}[\Vert \tilde{\Delta}_n \Vert^2] \leq \frac{1}{n} \tr[\nabla^2 f(x_\star)^{-1} \Sigma \nabla^2 f(x_\star)^{-1}] + O(n^{-2\alpha}) + O(n^{\alpha-2}). \notag \end{align} \end{theorem} We make several remarks regarding this theorem: \begin{itemize} \item The asymptotic result in Theorem \ref{thm:main} is a generalization of the classical asymptotic result of \citet{polyak1992acceleration}. In particular, the leading term has variance $O(\frac{1}{n})$ \textit{independently} of the step-size choice $\gamma_n$. In the presence of strong convexity, SGD can achieve the $O(\frac{1}{n})$ rate with a carefully chosen step size, $\gamma_n = \frac{C}{\mu n}$ (with $C=1$), where $\mu$ is the strong-convexity constant. However, the result is fragile: too small a value of $C$ can lead to an arbitrarily slow convergence rate, while too large a $C$ can lead to an ``exploding,'' non-convergent sequence \citep{NemJudLan08}. In practice, determining $\mu$ is often as difficult as the problem itself.
\item Theorem \ref{thm:main} implies that the distance (measured in $T_{x_\star} \mathcal{M}$) of the streaming average $\tilde{x}_n$ to the optimum asymptotically saturates the Cram\'{e}r-Rao bound on the manifold $\mathcal{M}$ \citep{smi05, Bou13}---asymptotically achieving the statistically optimal covariance\footnote{Note the estimator $\tilde{\Delta}_n$ is only asymptotically unbiased, and hence the Cram\'{e}r-Rao bound is only meaningful in the asymptotic limit. However, this result can also be understood as saturating the H\'{a}jek-Le Cam local asymptotic minimax lower bound \citep[Ch. 8]{van1998asymptotic}.}. SGD, even with the carefully calibrated step-size choice of $\gamma_n = \frac{1}{\mu n}$, does not achieve this optimal asymptotic variance \citep{NevHas73}. \end{itemize} We exhibit two applications of this general result in \mysec{application}. Next, we introduce the relevant background and assumptions that are necessary to prove our theorem.
\section{Introduction} \label{sec:intro} The nature of dark matter (DM) is an enduring mystery. In many theories, dark matter can interact with some of the particles of the Standard Model (SM), which provides both a DM production mechanism in the early Universe and a way of detecting DM in the Universe today. In many of these scenarios, astrophysical dark matter can annihilate to SM particles, raising the possibility of indirectly detecting DM through searches for these annihilation products. The natural targets for such searches are regions of high dark matter density such as the Galactic Centre or dwarf spheroidal galaxies, and many limits based on these searches have been published. A recent analysis indicates that combining these results for thermal dark matter, with the assumption that the annihilation is a $2\to 2$ $s$-wave process, leads to a lower bound on the DM mass of $m_\chi \gtrsim 20$~GeV~\cite{Leane:2018kjk}. However, that analysis included only visible final states, such as photons, charged particles or other highly detectable SM particles. To comprehensively test the WIMP paradigm, one must also consider those annihilation products which may be harder to detect. This includes states which might be assumed to be largely invisible, such as neutrinos, together with truly invisible states, such as other dark sector particles. Though the latter cannot be excluded, the requirement that dark matter is thermally produced in the early Universe argues for a coupling to the SM. Although neutrinos are typically the most difficult-to-detect SM annihilation product and hence lead to conservative limits~\cite{Beacom:2006tt,Yuksel:2007ac}, we show that they are actually detectable at an interesting sensitivity. For dark matter that annihilates to neutrinos, the lower bound on $m_\chi$ is currently set by measurements of the effective number of neutrinos, $N_{\rm{eff}}$, from the Cosmic Microwave Background and Big Bang Nucleosynthesis.
This leads to a lower bound on the mass of neutrinophilic dark matter of between 3.7 and 9.4~MeV, depending on the degrees of freedom of the DM~\cite{Boehm:2013jpa,Nollett:2013pwa,Nollett:2014lwa,Escudero:2018mvt,Sabti:2019mhn}. Future precision CMB experiments such as the Simons Observatory and CMB-S4 may increase these lower bounds to 10-16~MeV. There are well-defined UV-complete models corresponding to this region of DM parameter space, for instance~\cite{Boehm:2013jpa,Campo:2017nwh,Elor:2018twp,Blennow:2019fhy,Ballett:2019pyw}. DM-neutrino interactions may also have relevance for structure formation in the early Universe~\cite{Boehm:2000gq,Mangano:2006mp,Wilkinson:2014ksa} and can be constrained through cosmological measurements. Neutrino final states are particularly challenging for indirect detection due to their very weak interaction cross-sections. However, limits on dark matter annihilating to neutrinos have been set by neutrino experiments such as Super-Kamiokande (SuperK)~\cite{Frankiewicz:2015zma,Frankiewicz:2017trk}, IceCube~\cite{Aartsen:2015xej} and ANTARES~\cite{Albert:2016emp}, covering masses from 1~GeV up to 100~TeV. Other experiments including HESS~\cite{Abdallah:2016ygi} and Fermi~\cite{Ackermann:2015zua} have presented limits based on searches for final states such as $\mu^+\mu^-$ and $W^+ W^-$, which lead to neutrinos via their decays. We shall see that the neutrino experiments have sensitivity to relic-density-scale annihilation cross sections in a region of parameter space that overlaps with that probed by the CMB and BBN measurements. However, these very different approaches provide important complementarity. In particular, the neutrino experiments are direct in the sense that the annihilation products are actually detected. Importantly, in the case that a signal were to be observed, they would be able to provide strong evidence for a DM origin and a determination of the DM mass.
While limits set by these collaborations start at masses of 1~GeV, a number of groups have re-interpreted data and searches for other phenomena to constrain DM-neutrino interactions below this mass. These include measurements from the BOREXINO solar neutrino observatory~\cite{Bellini:2010gn,Campo:2017nwh,Arguelles:2019ouk}, KamLAND~\cite{Collaboration:2011jza,Arguelles:2019ouk} and Super-Kamiokande~\cite{PalomaresRuiz:2007eu,Campo:2017nwh,Klop:2018ltd,Arguelles:2019ouk}. Other earlier work in this area includes~\cite{Beacom:2006tt,Yuksel:2007ac,Rott:2011fh,Kappl:2011kz,Primulando:2017kxf}. An up-to-date and comprehensive summary of the current limits on DM annihilating to neutrinos over a wide range of masses is~\cite{Arguelles:2019ouk} and a broad discussion of BSM opportunities at future neutrino experiments can be found in ref.~\cite{Arguelles:2019xgp}. Between 10~MeV and 1~GeV the strongest limits on DM annihilation into neutrinos are currently set by recasting a variety of results of the Super-Kamiokande experiment~\cite{Arguelles:2019ouk}. SuperK is a 50~kT water Cherenkov detector at the Kamioka mine site in Japan which, within the next decade, will be superseded by a new large water Cherenkov detector, Hyper-Kamiokande. HyperK will have exceptional sensitivity to light DM annihilating into neutrinos. However, the HyperK Design Report (DR)~\cite{Abe:2018uyc} does not provide projections for DM masses below 1~GeV, even though the detector threshold will be a few MeV. The purpose of this paper is to estimate the sensitivity of the HyperK experiment to light dark matter annihilating in the Galactic Centre. We will focus on annihilation into neutrinos and muons. To do this we use a simulation of the HyperK detector that we describe in detail below. Exploiting the fact that HyperK is built upon the intellectual and technical foundations of SuperK, we first simulated the SuperK detector, validating our simulation using published SuperK results. 
We then scaled up our SuperK simulation to the dimensions and performance efficiency of HyperK, comparing when possible with results in the HyperK DR. We note that the upcoming JUNO and DUNE experiments, using liquid scintillator and liquid argon respectively, will also have excellent sensitivity to light DM that annihilates into final states involving neutrinos~\cite{Klop:2018ltd,Arguelles:2019ouk}. The primary backgrounds for dark matter searches in this mass range are from atmospheric neutrinos. However, for DM masses below 100~MeV there is an added complication due to the presence of the Diffuse Supernova Neutrino Background (DSNB). Measurement of the DSNB is much sought after in order to gain information on the star formation rate in the early Universe; for reviews see~\cite{Beacom:2010kk,Mirizzi:2015eza}. However, its presence has not yet been confirmed~\cite{Bays:2011si,Zhang:2013tua}. It is expected that the addition of small quantities of gadolinium to the water in SuperK and HyperK (which allows neutrons produced in the inverse-beta process to be tagged~\cite{Beacom:2003nk}) will allow the DSNB to be measured~\cite{Horiuchi:2008jz}. In this work, in order to set a conservative limit, we consider the DSNB to be fixed and to contribute to the background. The limits that can be set by HyperK in the mass range 20-80~MeV have previously been estimated in ref.~\cite{Campo:2018dfh} by rescaling and re-interpreting previous SuperK DSNB searches. We find that our projections agree quite well with those results. An important unknown that affects the translation of a limit on the flux to a constraint on the annihilation cross section is the uncertainty associated with the DM halo profile. To estimate the impact of this uncertainty, we present results for three different profiles: the standard NFW~\cite{Navarro:1995iw}, as well as Moore~\cite{Moore:1999gc} and Isothermal~\cite{Bahcall:1980} profiles.
Rather than focusing on a small signal region located at the Galactic Centre, we shall derive conservative all-sky limits that use the DM annihilation signal from the whole halo. While the HyperK angular resolution is relatively poor in the low mass (sub-GeV) energy regime that is our focus here, we note that our conservative limits could be strengthened somewhat by considering a smaller angular region~\cite{Yuksel:2007ac}. We begin by presenting details of our detector simulation and validation procedure in Section~\ref{sec:sims}. In Section~\ref{sec:limits} we then discuss our event selection, statistics and limit-setting procedures to determine projections for the future HyperK reach. The appendix contains ancillary material, including the effects of varying the DM halo model. \section{HyperK Simulation and Validation} \label{sec:sims} In this section we describe our analysis setup and workflow, including the SuperK and HyperK detector simulations. We first discuss the detector geometries and event generation, our implementation of the tracking and smearing of particle momenta and energies, the relevant neutrino interactions and oscillations and, finally, how we categorise events. We then present results validating our workflow and simulations against results from SuperK and the HyperK Design Report. \subsection{Detector Geometry and Event Generation} \label{sec:geometry} The Super-Kamiokande experiment is a large, cylindrical, water Cherenkov detector located in the Mozumi mine in Japan. Hyper-Kamiokande is a next-generation experiment that will be the successor to SuperK and will be located in the nearby Tochibora mine. The SuperK detector is divided into an inner detector (ID) and outer detector (OD). The inner detector has a volume of 32 kilotons of water, and is surrounded by the outer detector, an approximately two-metre-wide cylindrical shell that is used mainly for veto purposes.
For event classification purposes the SuperK collaboration also define a fiducial volume, a sub-region of the inner detector. For early analyses, this was the region of the ID more than two metres from the ID wall, although improved reconstruction techniques allowed an increase in the fiducial volume in more recent analyses~\cite{Jiang:2019xwn}. The wall of the inner detector is instrumented with photo-multiplier tubes to capture the Cherenkov light. We will not individually model these in our simulation but instead parametrise their collective overall response to different kinds of neutrino interactions. The HyperK detector is planned to be constructed in a similar way to SuperK, but on a larger scale. In particular, the size of the fiducial volume will be nearly an order of magnitude larger at HyperK. Our HyperK detector simulation is based on a scaled-up version of our SuperK detector simulation, both of which have been implemented using the ROOT geometry package~\cite{Brun:1997pa}. The detector dimensions for SuperK correspond to SuperK-IV~\cite{Fukuda:2002uc,Abe:2013gga} while those for HyperK are taken from the HyperK DR~\cite{Abe:2018uyc}, as detailed in Table~\ref{tab:detectorparameters}. In our simulations, the properties of compound materials such as stainless steel, concrete and air have been taken from GEANT4~\cite{Brun:1994aa}. \begin{table}[h] \centering \begin{tabular}{|l|c|c|} \hline & SK & HK-1TankHD \\ \hline Depth & 1000 m & 650 m \\ Tank diameter & 39 m & 74 m \\ Tank height & 42 m & 60 m \\ Total volume & 50 kton & 258 kton \\ Fiducial volume & 22.5 kton & 187 kton \\ Outer detector thickness & $\sim$ 2 m & 1-2~m \\ \hline \end{tabular} \caption{Dimensions of the SuperK~\cite{Fukuda:2002uc,Abe:2013gga} and future HyperK~\cite{Abe:2018uyc} water Cherenkov detectors. SuperK has undergone several configuration changes during its lifetime. The parameters in the table refer to SuperK-IV. 
}\label{tab:detectorparameters} \end{table} When neutrinos pass in the vicinity of one of the detectors they can interact with the surrounding rock, the detector material, or the water in the detector. To model these interactions we use the \texttt{GENIE}~3.0.4a~\cite{Andreopoulos:2009rq,Andreopoulos:2015wxa} Monte Carlo package, including the atmospheric neutrino flux and detector geometry drivers\footnote{\texttt{GENIE}~3.0.4 originally had an important bug in the calculation of the CCQE cross-section at low energies. This was remedied in v3.0.6, which appeared while this work was in progress, and in a patched version, v3.0.4a, which we use.}. We have studied two different tunes, G18\_02a\_00\_000 and G18\_10a\_00\_000, and will refer to these as G18\_02a and G18\_10a in what follows. \texttt{GENIE} includes precomputed cross-sections for neutrino interactions with matter for neutrino energies between 10~MeV and 1~TeV. For most masses, atmospheric neutrinos are the dominant background for dark matter searches at SuperK and HyperK. Below neutrino energies of 100~MeV there is also an important contribution from the Diffuse Supernova Neutrino Background (DSNB). While this component has not yet been measured, it is expected that the addition of gadolinium will enable its discovery at SuperK~\cite{Horiuchi:2008jz}. We do not simulate the DSNB in this paper. Instead, we take the HyperK DR projection of the expected DSNB signal for a 10-year experimental running time from Fig.~188 of~\cite{Abe:2018uyc} and subtract off the backgrounds, using the remainder as our DSNB background. Another important background below energies of 10~MeV is from solar neutrinos. Since this energy is less than 17~MeV, where the muon spallation background becomes dominant (see discussion below), we do not consider solar neutrinos further.
We use the atmospheric neutrino fluxes calculated by HKKM~\cite{Honda:2011nf}, hereafter HKKM11, presented as a function of azimuth and zenith angle without oscillations at Kamioka. The HKKM11 fluxes are computed only down to energies of 100~MeV. Below that we use the FLUKA~\cite{Battistoni:2005pd} results, which extend down to 13~MeV. The FLUKA results are angle-averaged. To regain some angular information we make the assumption that the angular dependence of the neutrino flux below 100 MeV is the same as at 100 MeV, and distribute the total flux calculated with FLUKA in the same way as the lowest of the HKKM11 energy bins. We obtain the neutrino energy spectra for annihilating dark matter from \texttt{DarkSUSY}~\cite{Gondolo:2004sc,Bringmann:2018lay}. Since we are exclusively interested in sub-GeV dark matter in this paper we neglect possible corrections from electroweak Bremsstrahlung. We also account for the impact of neutrino oscillations. The neutrino oscillation length is \begin{eqnarray} L_{\rm osc} &=& \frac{4\pi E}{\delta m^2} \\ &=& 2.5 \times 10^6 ~{\rm km}~\frac{(10~{\rm meV})^2}{\delta m^2} \left(\frac{E}{100~{\rm GeV}}\right) \label{eq:osc_solar} \\ &=& 10 ~{\rm km} ~ \frac{(50~{\rm meV})^2}{\delta m^2} \left(\frac{E}{10~{\rm MeV}}\right). \label{eq:osc_atm} \end{eqnarray} The values in Eq.~\ref{eq:osc_solar} correspond to oscillations of high energy (100 GeV) neutrinos driven by the solar mass splitting, $\delta m_{21}^2 \sim 7 \times 10^{-5} {\rm eV}^2$, while those in Eq.~\ref{eq:osc_atm} correspond to oscillations of lower energy (10 MeV) neutrinos, driven by the atmospheric mass splitting $\delta m_{32}^2 \sim 2.5 \times 10^{-3} {\rm eV}^2$, and encompass the oscillation lengths of relevance for us. Hence, for the neutrino energies that we will consider, the oscillation lengths are of $\mathcal{O}(10-10^6)~\rm{km}$. This is to be compared with the distance to the Galactic Centre of 8~kpc $\sim 10^{17}$~km. 
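The oscillation lengths quoted in Eqs.~\ref{eq:osc_solar} and \ref{eq:osc_atm} can be cross-checked by restoring the factors of $\hbar c$ in $L_{\rm osc} = 4\pi E/\delta m^2$. The following back-of-the-envelope sketch (illustrative code, not part of our analysis pipeline) reproduces both benchmark values:

```python
# Cross-check of L_osc = 4*pi*E/(dm^2), restored to physical units with
# hbar*c = 1.9733e-7 eV.m. Illustrative stand-alone sketch.
import math

HBARC_EV_M = 1.9733e-7   # hbar*c in eV.m

def l_osc_km(E_eV, dm2_eV2):
    """Vacuum oscillation length in km for energy E (eV), splitting dm^2 (eV^2)."""
    return 4.0 * math.pi * HBARC_EV_M * E_eV / dm2_eV2 / 1e3

# Solar-splitting benchmark: E = 100 GeV, dm^2 = (10 meV)^2 = 1e-4 eV^2
L_solar = l_osc_km(100e9, (10e-3) ** 2)   # ~2.5e6 km
# Atmospheric-splitting benchmark: E = 10 MeV, dm^2 = (50 meV)^2 = 2.5e-3 eV^2
L_atm = l_osc_km(10e6, (50e-3) ** 2)      # ~10 km
```

Both values are many orders of magnitude below the $\sim 10^{17}$~km distance to the Galactic Centre, consistent with the averaging argument below.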
Even if we were to consider DM annihilation in a very small region near the Galactic Centre, say the inner 1~pc~$\sim 3\times 10^{13}$~km, averaging over the production region should wash out the oscillations. We therefore expect that there will be no oscillatory features in the annihilation flux at Earth. Hence, the final flavour structure of the neutrino flux at Earth can be obtained from that at production using the simple expression \begin{eqnarray} \phi^{\rm final}_{\nu_\alpha}(E) & = \sum\limits_{i,\beta} \phi^{\rm source}_{\nu_\beta} (E) |U_{\beta i}|^2 |U_{\alpha i}|^2, \label{eq:DMneuoscvac} \end{eqnarray} where $\alpha$ labels flavour states, $i$ labels mass states, and $U_{\alpha i}$ are the PMNS matrix elements, \begin{equation} U=\begin{bmatrix} c_{13}c_{12} & c_{13}s_{12} & s_{13}e^{-i\delta} \\ -c_{23}s_{12}-s_{23}s_{13}c_{12}e^{i\delta} & c_{23}c_{12}-s_{23}s_{13}s_{12}e^{i\delta} & s_{23}c_{13} \\ s_{23}s_{12}-c_{23}s_{13}c_{12}e^{i\delta} & -s_{23}c_{12}-c_{23}s_{13}s_{12}e^{i\delta} & c_{23}c_{13} \end{bmatrix}. \end{equation} This conclusion will be valid only in the case of downward going neutrinos arriving from the Galactic Centre. For neutrinos that travel upward through the Earth before detection, we should consider matter effects in the Earth. We calculate the neutrino oscillation probabilities at the detector depth using the \texttt{nuCraft} code~\cite{Wallraff:2014qka}. The \texttt{nuCraft} code numerically solves the Schr\"odinger equation for neutrinos propagating through the Earth, which is treated as a sphere with smoothly varying mass density, instead of a series of shells of constant matter density. The Earth data come from the Preliminary Reference Earth Model~\cite{Dziewonski:1981xy}. We take the oscillation parameters from the PDG~\cite{Tanabashi:2018oca}, assuming a normal ordering of the neutrino masses. The precise values are shown in Table~\ref{tab:oscparams}. 
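The flavour-averaged conversion in Eq.~\ref{eq:DMneuoscvac} is straightforward to evaluate numerically. The sketch below (a hand-rolled illustration, not our \texttt{nuCraft}-based pipeline) builds $U$ from the mixing angles in Table~\ref{tab:oscparams}, taking $\delta = 1.37\pi$ in the PDG convention as an assumption, and computes the averaged probabilities $P(\nu_\beta \to \nu_\alpha) = \sum_i |U_{\beta i}|^2 |U_{\alpha i}|^2$:

```python
# Oscillation-averaged flavour conversion from the PMNS matrix:
# P(nu_beta -> nu_alpha) = sum_i |U_{beta i}|^2 |U_{alpha i}|^2.
# Flavour indices: 0 = e, 1 = mu, 2 = tau. Illustrative sketch only;
# delta = 1.37*pi (PDG convention) is an assumed input.
import cmath
import math

s12, s23, s13 = (math.sqrt(v) for v in (0.307, 0.512, 0.0218))
c12, c23, c13 = (math.sqrt(1.0 - v) for v in (0.307, 0.512, 0.0218))
ed = cmath.exp(1j * 1.37 * math.pi)   # e^{i delta}

U = [
    [c13 * c12, c13 * s12, s13 / ed],
    [-c23 * s12 - s23 * s13 * c12 * ed, c23 * c12 - s23 * s13 * s12 * ed, s23 * c13],
    [s23 * s12 - c23 * s13 * c12 * ed, -s23 * c12 - c23 * s13 * s12 * ed, c23 * c13],
]

def p_avg(beta, alpha):
    """Averaged oscillation probability nu_beta -> nu_alpha."""
    return sum(abs(U[beta][i]) ** 2 * abs(U[alpha][i]) ** 2 for i in range(3))
```

Unitarity of $U$ guarantees $\sum_\alpha P(\nu_\beta \to \nu_\alpha) = 1$ for each source flavour, which provides a quick consistency check on the implementation.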
\begin{table}[h] \centering \begin{tabular}{|c|c||c|c|} \hline Parameter & Value & Parameter & Value \\ \hline $\sinsq{12}$ & $0.307 \pm 0.013$ & $\Deltamsq{21}$ & $(7.53 \pm 0.18) \times 10^{-5} \, \rm eV^2$ \\ $\sinsq{23}$ & $0.512^{+0.019}_{-0.022}$ & $\Deltamsq{32}$ & $(2.444 \pm 0.034) \times 10^{-3} \, \rm eV^2$ \\ $\sinsq{13}$ & $0.0218 \pm 0.0007$ & $\delta$ & $\left(1.37^{+0.18}_{-0.16}\right)\pi$~rad\\ \hline \end{tabular} \caption{Neutrino parameters from~\cite{Tanabashi:2018oca} used to oscillate the neutrino fluxes. } \label{tab:oscparams} \end{table} \subsection{Neutrino Interactions} \label{sec:neutint} Neutrinos interact with matter through charged-current (CC) interactions in a number of different channels that lead to events within the inner detector. At energies above approximately 10~GeV the total cross-section is dominated by deep inelastic scattering (DIS), where the interaction is directly with the quarks and gluons that constitute the nucleus. More important for our study are quasi-elastic scattering (QE) and resonance neutrino production. The first of these is the interaction of a neutrino with a bound nucleon leading to $\nu_l + n \to l^- +p$. This is the dominant interaction mode for neutrino energies below 1~GeV. Between 1 and several GeV the dominant mode is baryonic resonance production and decay, for instance through $\nu_\mu + p \to \mu^- + \Delta^{++} \to \mu^- + \pi^+ + p$. Sub-dominant contributions to the total cross-section come from meson exchange current (MEC) interactions (also referred to as the 2 particle-2 hole effect)~\cite{Martini:2009uj}, and from coherent and diffractive meson production. The MEC process requires the presence of two nucleons, where an electroweak boson from the leptonic current is exchanged by the nucleon pair, and is followed by 2-nucleon emission from the primary vertex. This generates pion-less final states similar to quasi-elastic scattering, and is particularly important for energies below 1~GeV.
Finally, in coherent meson production the target nucleus remains in its ground state, leading to the production of a forward meson, for instance $\nu_\mu + \ce{^{16}O} \to \mu^- + \pi^+ + \ce{^{16}O}$. This is mainly important at low energies and momentum transfers. All these interaction processes are modelled by \texttt{GENIE}. However, there are differences between the models used in the G18\_02a and G18\_10a tunes for the quasi-elastic and meson exchange current interactions. The G18\_02a tune uses the Llewellyn-Smith model~\cite{LlewellynSmith:1971uhs} for the QE interactions and an empirical MEC model, while the G18\_10a tune uses the Fermi-Gas approximation of~\cite{Nieves:2004wx,Nieves:2005,Nieves:2011pp} for both interactions. This leads to important differences in the interaction rates at low energies, particularly for electron neutrinos and anti-neutrinos. In particular, for energies of $\mathcal{O}(\rm{MeV})$, the cross-sections are larger in the G18\_02a tune by nearly a factor of two on oxygen nuclei. On the other hand, below 100~MeV the cross-sections are much larger for the G18\_10a tune. Accordingly, we study the effects of both tunes, with the computed cross sections for $\nu_e$ and $\nu_\mu$ shown in Fig.~\ref{fig:xsec}. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{figs/Xsections_nue.pdf} \includegraphics[width=0.49\textwidth]{figs/Xsections_numu.pdf} \caption{The total charged current cross sections for $\nu_e$ (left) and $\nu_\mu$ (right) from \texttt{GENIE} pre-computed cross sections on hydrogen and oxygen, for tunes G18\_02a and G18\_10a. The main differences are at low energies for electron neutrinos. } \label{fig:xsec} \end{figure} The SuperK and HyperK detectors do not measure the reaction modes of individual events. Instead, the detectors measure final states or topologies, such as final states with $1\mu^-$ and zero pions (denoted by $\{1\mu^-, 0\pi\}$).
For example, quasi-elastic scattering mainly yields $\{1\mu^-, 0\pi\}$ events, with a background of misidentified charged current single pion production where the pion is absorbed or not seen by the detector. The final state produced by a given interaction determines the event category that an event is classified as. This is discussed in further detail in Section~\ref{sec:cat}, following an account of our tracking and smearing procedure in Section~\ref{sec:tracking}. First, however, we discuss the so-called invisible muon and spallation backgrounds. Low energy atmospheric $\nu_\mu$ can interact to produce muons with $E\lesssim 50$~MeV, which is below the threshold for the muon to emit Cherenkov photons. Consequently, when the muon decays, the resulting electron cannot be associated with the decay of a visible muon. These are known as invisible muons. These events form the dominant background in event categories that contain low energy $\nu_e$ and $\bar{\nu}_e$. The shape of the invisible muon background is determined by the Michel spectrum. We do not simulate the invisible muons in our detector simulation, but take the expected HyperK distribution (assuming neutron tagging with 70\% efficiency) between 10 and 50~MeV directly from Fig.~188 of the HyperK Design Report~\cite{Abe:2018uyc}. We then rescale these to correspond to 20 years of exposure. For the tail of the distribution above 50~MeV we use the same shape as in the SuperK DSNB search~\cite{Bays:2011si}. Another important background at low energies is from muon-induced spallation products. These occur when muons from cosmic rays traverse through or near the detector and lead to the formation of radioactive isotopes. The decay products of these unstable isotopes are the dominant background for searches below 16~MeV~\cite{Abe:2018uyc}. 
The nature of these events at SuperK was the subject of a recent series of intensive theoretical studies for water Cherenkov detectors~\cite{Li:2014sea,Li:2015kpa,Li:2015lxa}, followed by SuperK observations~\cite{Super-Kamiokande:2015xra}. In the absence of details of these backgrounds in the HyperK DR, we consider 16~MeV as the lower threshold for our projected searches. Since HyperK is at a shallower site than SuperK, muon fluxes and hence spallation backgrounds are expected to be more important there. Accordingly, a detailed study of these backgrounds and their impact on dark matter searches at HyperK would be of great interest. Neutron tagging provides a means of mitigating both of these backgrounds. One method to achieve this is via the addition of gadolinium to the water in the detector at the 0.1\% level~\cite{Beacom:2003nk}, which will soon be implemented at SuperK. Gadolinium is a neutron absorber; following neutron capture, excited Gd nuclei de-excite through emission of 3-4 $\gamma$-rays with a total energy of $\sim8$~MeV after a characteristic time of 30~$\mu\rm{sec}$. Timing and vertex information can thus be used to tag neutrons, allowing the identification of true inverse beta-decay events, $\bar{\nu}_e + p \to n + e^+$, and providing a means of suppressing the invisible muon and spallation backgrounds. While Gd may also be used in HyperK, improved photosensors and photocathode coverage will enable a much higher efficiency for neutron tagging on {\it hydrogen} than is possible at SuperK. It is expected that the 2.2~MeV photon resulting from neutron capture on hydrogen will be detected in HyperK with an efficiency of order 70\%~\cite{Abe:2018uyc}. We do not model the neutron tagging process in our HyperK simulations, but instead take distributions for the invisible muon spectrum with and without neutron tagging from the HyperK Design Report.
\subsection{Tracking and smearing} \label{sec:tracking} Charged current interactions with neutrino initial states lead to the creation of charged leptons, which propagate in and through the detector volume. Accurately modelling the rate of energy loss of these particles is important for assigning events to the partially-contained or fully-contained event classes. The rate of energy loss (or mass stopping power) for heavy particles in matter is described by the Bethe-Bloch equation, \begin{equation} \left\langle - \frac{dE}{dx} \right\rangle = a(E) + b(E)E \,, \end{equation} where $a(E)$ and $b(E)$ represent the electronic stopping power and losses due to radiative processes, respectively. We use this formula to model the track length of muons in liquid water and ``standard rock" (with $Z=11$ and $A=22$) using an interpolating function based on the energy-dependent parameters from~\cite{Groom:2001kq}\footnote{Available online at \href{http://pdg.lbl.gov/AtomicNuclearProperties/}{pdg.lbl.gov/AtomicNuclearProperties/}}. The muon energy loss rate is dominated by ionisation for muon energies below about 100~GeV, above which radiative and other losses become important. For electrons, we take the radiation lengths from the tables in ref.~\cite{Tsai:1973py} and the critical energy from the ``Passage of particles through matter" section of the PDG~\cite{Tanabashi:2018oca}. The lepton energies and momenta are given for each event by \texttt{GENIE}, as described above. The energy resolution of the HyperK detector is determined by the photocathode coverage. For low energy particles with $E_{\rm kin}\leq30\, {\rm MeV}$, the resolution can be approximated using the number of PMT hits. The number of PMT hits is related to $E_{\rm{kin}}$, and we use an interpolating formula based on the PMT hit distribution given in Fig.~113 of ref.~\cite{Abe:2018uyc}. 
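As a rough illustration of how the track length follows from this equation, the sketch below integrates $dE/(a+bE)$ with constant coefficients appropriate to water. The round numbers used here are illustrative assumptions; the actual simulation uses the energy-dependent PDG tables cited above.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative constant coefficients for water (assumed round values);
# the analysis itself interpolates the energy-dependent tables of Groom et al.
A_ION = 2.0      # MeV cm^2 g^-1, electronic stopping power a(E)
B_RAD = 3.5e-6   # cm^2 g^-1, radiative-loss coefficient b(E)
RHO_WATER = 1.0  # g cm^-3

def muon_range_cm(E_kin_MeV):
    """CSDA track length in water for a muon of kinetic energy E_kin_MeV,
    from R = int_0^E dE' / (a + b E')."""
    r_gcm2, _ = quad(lambda E: 1.0 / (A_ION + B_RAD * E), 0.0, E_kin_MeV)
    return r_gcm2 / RHO_WATER

# With these coefficients a 1 GeV muon travels roughly 5 m in water,
# consistent with ionisation dominating below ~100 GeV.
r_1GeV = muon_range_cm(1000.0)
```

At these energies the radiative term $bE$ is a sub-percent correction, which is why the range is nearly linear in energy.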
We find that an adequate fit can be obtained using the same functional form used for the SuperK resolution in ref.~\cite{PalomaresRuiz:2007eu}. This gives the energy resolution as \begin{equation} \sigma/\, {\rm MeV} = 0.325\sqrt{E_{\rm kin}/\, {\rm MeV}}+0.024\,(E_{\rm kin}/\, {\rm MeV}). \end{equation} A comparison with the Design Report results is shown in the left panel of Fig.~\ref{fig:energyres}. For higher energy leptons with $E_{\rm kin}>30\, {\rm MeV}$ we use the total charge distribution at several electron and muon momenta shown in the right panel of Fig.~\ref{fig:energyres}, which is adapted from Fig.~112 of ref.~\cite{Abe:2018uyc}, again assuming 40\% PMT coverage. In this case we use a linear spline interpolation. For leptons with momentum above 1~GeV, we keep the energy resolution of a lepton with $p_\ell=1\, {\rm GeV}$. The kinetic energy, $E_{\rm kin}$, of each event is then smeared with a Gaussian distribution of mean $E_{\rm kin}$ and width $\sigma(E_{\rm kin})$. Note that we present our distributions in terms of the final state lepton kinetic energy, $E_{\rm kin}$. This is not directly measured by SuperK. Rather, cuts are made on a quantity $E_{\rm vis}$, which is the energy of an electromagnetic shower that yields the same amount of Cherenkov light~\cite{Jiang:2019xwn} in a given event. We show the relation between $E_\nu$ and $E_{\rm kin}$ for electron neutrinos and anti-neutrinos at low energies in the left and right-hand panels of Fig.~\ref{fig:EvsEkin}, respectively. The diffuse magenta dots in the bottom left are from scattering off oxygen, while the more tightly grouped blue dots in the right panel correspond to scattering off hydrogen. No hydrogen scattering is visible for the neutrinos in the left-hand panel since the cross-section is highly suppressed (see Fig.~\ref{fig:xsec} left). 
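The smearing step can be sketched as follows. This is a minimal Python illustration of the low-energy branch only; for $E_{\rm kin}>30$~MeV the width would instead come from the spline-interpolated charge distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma_lowE(E_kin):
    """Resolution (MeV) for E_kin <= 30 MeV, from the fit in the text."""
    return 0.325 * np.sqrt(E_kin) + 0.024 * E_kin

def smear(E_kin):
    """Draw a smeared energy from a Gaussian of mean E_kin and width
    sigma(E_kin), as applied event by event."""
    return rng.normal(E_kin, sigma_lowE(E_kin))

E_true = np.full(200_000, 20.0)   # a sample of 20 MeV leptons
E_obs = smear(E_true)
# The observed spread reproduces sigma_lowE(20) ~ 1.93 MeV
```

Applied to a monochromatic sample, the smeared distribution recovers the input width, which is the behaviour validated against the Design Report in Fig.~\ref{fig:energyres}.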
\begin{figure}[h] \centering \includegraphics[width=0.495\textwidth]{figs/HK_PMThits_eres.pdf} \includegraphics[width=0.49\textwidth]{figs/RMS_total_charge_distrib.pdf} \caption{Left: RMS/mean of the number of PMTs that register a hit. We show data taken from Fig.~113 of the HyperK DR~\cite{Abe:2018uyc} for electrons injected with different kinetic energy values assuming 40\% photocoverage (light blue points and line), together with our energy resolution fit (magenta). Right: RMS/total charge distributions for electrons (light blue) and muons (orange).} \label{fig:energyres} \end{figure} In principle one could simulate the individual PMT responses using a code such as \texttt{WCSim}~\cite{wcsim}, thus allowing calculations and cuts based directly on $E_{\rm{vis}}$; we do not attempt this here. We also do not smear the location of the primary vertex. This makes only a small difference to events that are located near the boundary of the fiducial volume. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{figs/Ekin_Enue_1810a.pdf} \includegraphics[width=0.49\textwidth]{figs/Ekin_Enuebar_1810a.pdf} \caption{$E_{\rm kin}$ vs. neutrino energy $E_{\nu}$ for $\nu_e$ (left) and $\overline{\nu}_e$ (right) events in Fig.~\ref{fig:FCnueFLUKA_Ekin}. Magenta dots correspond to scattering off oxygen, and blue dots to scattering off hydrogen. The neutrino scattering cross-section off hydrogen is suppressed, as shown in Fig.~\ref{fig:xsec}.} \label{fig:EvsEkin} \end{figure} \subsection{Event Categories} \label{sec:cat} Depending on where the interaction takes place, the direction of the resulting final state products, and how they appear in the detector, SuperK and HyperK divide events into a number of different classes. For validating our detector simulation and projecting limits for DM indirect detection at HyperK, we will consider the fully contained (FC) and partially contained (PC) categories, for electron and muon neutrinos and anti-neutrinos. 
Fully contained events are those in which all the energy is deposited in the inner detector. A partially contained event involves a high-energy muon that leaves the inner detector and deposits energy in the outer detector. A third category used in atmospheric neutrino and other DM analyses involves muons created in the rock surrounding the detector and then observed travelling up through the detector volume: these are upward-going muons (Up-$\mu$). While important at higher energies, at the low neutrino energies we are studying the number of upward-going muons is small~\cite{Ashie:2005ik}, and so we omit them from our study. Downward-going muon samples are heavily contaminated by cosmic rays and so are not used. FC and PC events can be subdivided into further categories, based on the properties of the Cherenkov rings, such as their number, energy and particle ID. We do not simulate the Cherenkov radiation, instead keeping all the FC events in a single class (and similarly for the PC events), and distinguishing only between electron and muon neutrinos. Identification of $\mu^{\pm}$ and $e^\pm$ can be performed with high efficiency based on the properties of the associated Cherenkov rings. On the other hand, SuperK and HyperK cannot distinguish the charge of the particle responsible for a given ring. There are likelihood-based methods that enable the construction of event samples enriched in $\nu_e$ and $\overline{\nu}_e$ respectively, but these are far from pure and only apply to Multi-GeV ($E_{\rm{vis}}>1.33$~GeV) events, not the Sub-GeV events we study. Thus, our analysis will be based on three composite event classes: FC$(\nu_e+\overline{\nu}_e)$, FC$(\nu_\mu+\overline{\nu}_\mu)$ and PC$(\nu_\mu+\overline{\nu}_\mu)$. Fully contained events are generated inside the inner detector (ID), with all secondary particles also required to be within the ID. 
The primary vertex must lie within the fiducial volume, which is the region of the ID with a boundary 1.5~m inside the inner wall~\cite{Abe:2018uyc}. On single-ring events we impose a lower energy cut similar to that applied to SK events~\cite{Ashie:2005ik}: $P_\mu>200$~MeV for $\nu_\mu$ and $\overline{\nu}_\mu$ events, along with a cut $E_{\rm kin}>30$~MeV. We do not apply such a cut to $\nu_e$ and $\overline{\nu}_e$ events. Multi-ring events arise when the neutrino interaction gives rise to further charged particles (e.g. $\pi^\pm$) along with the resulting lepton. In ref.~\cite{Ashie:2005ik}, multi-ring events were identified as $\mu$-like when the most energetic ring had $P_\mu > 600$~MeV and visible energy $E_{\rm vis} > 600$~MeV. Multi-ring events are not an important contribution in our study: at low neutrino energies the dominant interaction modes are charged-current quasi-elastic scattering (CCQE) and meson-exchange current (MEC) interactions, both of which lead to single-ring events. Partially contained events are those with a primary vertex within the inner detector but with a high energy muon exiting the ID fiducial volume. These events are further required to have a minimum track length of 2.5~m and muon momentum $P_\mu \geq 700$~MeV. PC events are thus generally associated with higher energy events, and are less important than FC events for deriving limits on light dark matter. We close this section with a comment on triggers. The current SuperK searches for dark matter use the atmospheric neutrino event samples based on the High Energy (HE) trigger. Lower energy searches, such as for the DSNB, use a separate Low Energy (LE) trigger. HyperK will have a similar trigger structure (there is also a Super Low Energy trigger relevant for solar neutrino analyses). This use of the atmospheric neutrino event samples is part of the reason that SuperK has not extended its DM limits to lower masses. 
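The selection logic above can be summarised in a toy classifier. This is a hypothetical helper for illustration only: the real selection additionally involves ring counting, particle ID and the multi-ring criteria quoted from ref.~\cite{Ashie:2005ik}.

```python
def classify_event(flavour, in_fiducial, exits_id,
                   p_mu_MeV=0.0, e_kin_MeV=0.0, track_length_m=0.0):
    """Toy version of the FC/PC selection cuts described in the text.
    Returns the event class, or None if the event is rejected."""
    if not in_fiducial:
        return None                       # primary vertex outside FV
    if flavour == "mu":
        if exits_id:
            # Partially contained: long, energetic exiting muon track
            if track_length_m >= 2.5 and p_mu_MeV >= 700.0:
                return "PC numu"
            return None
        # Fully contained single-ring muon cuts
        if p_mu_MeV > 200.0 and e_kin_MeV > 30.0:
            return "FC numu"
        return None
    if flavour == "e" and not exits_id:
        return "FC nue"                   # no momentum cut for nu_e events
    return None
```

The asymmetry between flavours (momentum cuts for muons, none for electrons) mirrors the text: low-energy $\nu_e$ and $\overline{\nu}_e$ events are kept in order to retain sensitivity to light dark matter.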
The HyperK DR is similar, and in fact the DM Monte-Carlo event sample used there is simply a sample of reweighted atmospheric neutrinos. Setting limits on DM annihilation from low energies up to 1~GeV may require the use of the LE trigger, or combining LE- and HE-based event samples in some way. \subsection{Validation} \label{sec:validation} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs/SK_FC_eventrate.pdf} \caption{FC $\nu_e$ (left) and $\nu_\mu$ (right) event rate for SK. Electron neutrino events selected by reaction mode and muon neutrinos selected by topology. Results are shown for the G18\_02a and G18\_10a {\tt GENIE } tunes, and compared with the expected events for SK I taken from Fig.~1 of ref.~\cite{Ashie:2005ik}, as labelled.\label{fig:FCvalidation} } \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{figs/SK_PC_eventrate.pdf} \caption{PC $\nu_\mu$ event rate for SK. Events selected by topology. Results are shown for the G18\_02a and G18\_10a {\tt GENIE } tunes, and compared with the expected events for SK I taken from Fig.~1 of ref.~\cite{Ashie:2005ik}. \label{fig:PCvalidation} } \end{figure} We have validated our detector simulation and analysis pipeline against SuperK measurements of the atmospheric neutrino spectrum~\cite{Ashie:2005ik}. SuperK has published more recent atmospheric neutrino measurements than ref.~\cite{Ashie:2005ik}, but these incorporate data from multiple running configurations of the experiment, which have substantial differences between them. These include decreased PMT coverage in SK-II (due to the implosion incident in 2001) and the improvements in reconstruction electronics in SK-IV. Rather than attempt to reproduce all of these, we choose instead to validate our simulations against data from SK-I, and then scale our simulation up to the dimensions and performance parameters of HyperK. 
The SuperK measurements took place over three years of solar minimum, one transition year and a single year of solar maximum, for a total of 1489 live-days of data. We compare this with simulations for the flux at solar minimum. In Fig.~\ref{fig:FCvalidation} we show the differential event rate per 1000 days, comparing our results for fully contained electron (left) and muon (right) neutrinos with the data from Fig.~1 of~\cite{Ashie:2005ik}, shown as a solid blue line. We show both the G18\_02a and G18\_10a tunes in cyan and magenta respectively. Fig.~\ref{fig:PCvalidation} is a similar plot for the partially contained muon category. At the lowest energies, $E_\nu \lesssim 100$~MeV, the scattering cross-section for the G18\_02a tune is much smaller than for G18\_10a. However, between 100~MeV and 1~GeV the G18\_02a cross-section is slightly larger, as can be seen in Fig.~\ref{fig:xsec}. We see that our simulation matches the SuperK data quite well. We have examined selecting events in categories either by specific reaction mode or by final-state topology. For $\nu_e$ events we find that selecting by reaction mode gives better agreement with the SK data, while for $\nu_\mu$ we find that selection by topology does. Consequently, we adopt these slightly different criteria for $\nu_e$ and $\nu_\mu$ for the rest of our study. We have also compared the acceptance of our HyperK simulation with that of our SuperK simulation. While the two are broadly similar, there are some differences explained by the differing geometric sizes of the detectors. For instance, we find that more events at higher energy make it into the fully contained categories at HyperK than at SuperK. We use the G18\_10a tune in our HyperK analysis, since it is more up to date and its prescriptions for the CCQE and other cross-sections are more similar to the latest version of \texttt{NEUT}~\cite{Hayato:2002sd,Abe:2017aap,Jiang:2019xwn} used by SuperK and HyperK. 
\begin{figure} \centering \includegraphics[width=\textwidth]{figs/HK_atmo_FCnue_HKKM11_FLUKA_eventrate.pdf} \caption{Combined fully contained $\nu_e$ (left) and $\overline{\nu}_e$ (right) event rates at HyperK for the HKKM11 and FLUKA (below 100 MeV) fluxes, calculated with {\tt GENIE } tune G18\_10a and 20 years of livetime.\label{fig:HKbg} } \end{figure} \begin{figure} \centering \includegraphics[width=0.55\textwidth]{figs/HK_atmo_below100MeV_eventrate_ekin_smeared.pdf} \caption{The expected $\nu_e$ (upper solid line, blue) and $\overline{\nu}_e$ (lower solid line, orange) event rates below 100 MeV (obtained from FLUKA + HKKM11) at HyperK as a function of the final lepton kinetic energy, $E_{\rm kin}$. The coloured regions show the background contributions from invisible muons with (blue) and without (magenta) neutron tagging, assuming a 70\% tagging efficiency. The grey area at the left is dominated by spallation backgrounds.} \label{fig:FCnueFLUKA_Ekin} \end{figure} In Fig.~\ref{fig:HKbg} we show the total atmospheric backgrounds we use for the FC $\nu_e$ (left) and $\overline{\nu}_e$ (right) categories, given by the sum of the HKKM11 and FLUKA fluxes. The grey band at the left hand side of the plots shows where the spallation backgrounds become dominant. Fig.~\ref{fig:FCnueFLUKA_Ekin} shows the low energy region and the invisible muon backgrounds, with and without neutron tagging. Both sets of plots are for 20 years of running time. HKKM11 events were generated for 10 years of solar minimum and 10 years of solar maximum, while FLUKA events are for 20 years of solar average. In our final analysis we sum over these backgrounds since HyperK cannot effectively discriminate between $\nu_e$ and $\overline{\nu}_e$. \section{Dark Matter Search and Projected Limits} \label{sec:limits} We now provide details of our calculation of the dark matter signal from the Galactic Centre. 
The differential flux of neutrinos from dark matter annihilation is given by \begin{equation} \frac{{d\Phi_\nu}_{\Delta\Omega}}{dE_\nu} = \frac{\langle \sigma v \rangle}{8\pi m_{\rm DM}^2} J_{\Delta\Omega} \frac{dN_\nu}{dE_\nu} \, , \label{eq:GCneuflux} \end{equation} where $m_{\rm DM}$ is the dark matter mass, $\langle \sigma v \rangle$ is the thermally averaged annihilation cross-section, $J_{\Delta\Omega}$ is the angle-averaged J-factor defined below, and $\frac{dN_\nu}{dE_\nu}$ is the differential neutrino energy spectrum (obtained from \texttt{DarkSUSY}). This equation holds for the sum of the neutrino and anti-neutrino fluxes, and for Majorana dark matter. For the neutrino final state the differential flux at production is a delta function (although this becomes smeared by detector effects), while for muon final states it is smeared out by the muon decays. The J-factor for DM annihilation in the ($b$, $l$) direction in Galactic coordinates is obtained by integrating the DM density squared over the line of sight $s$~\cite{Gordon:2013vta}, \begin{equation} J(b,l) = \int_0^{s_{\rm max}} \rho^2\left(\sqrt{r_\odot^2-2s \, r_\odot\cos b \cos l+s^2}\right) ds, \end{equation} where $r_\odot=8.5\, {\rm kpc}$ is the distance from the Solar System to the Galactic Centre and \begin{equation} s_{\rm max} = \sqrt{R_{\rm MW}^2-r_\odot^2+r_\odot^2 \cos^2 b \cos^2 l} + r_\odot \cos b \cos l, \end{equation} where $R_{\rm MW}=40\, {\rm kpc}$ is the Galaxy halo size. The J factor averaged over a solid angle $\Delta\Omega$ is then defined as \begin{equation} J_{\Delta\Omega} = \frac{1}{\Delta \Omega}\int_{\Delta \Omega} J(b,l) \, d\Omega, \end{equation} where $d\Omega = \cos b \, db \, dl$. 
To quantify some of the astrophysical uncertainties involved in our limits we will present results for three different dark matter halo profiles, which can all be expressed in the form \begin{equation} \rho(r) = \frac{\rho_0}{\left(\frac{r}{r_s}\right)^\gamma\left[1+\left(\frac{r}{r_s} \right)^\alpha \right]^{(\beta-\gamma)/\alpha}} \, . \label{eq:DMdensprofile} \end{equation} These are the standard Navarro-Frenk-White (NFW) profile~\cite{Navarro:1995iw}, as well as the Moore~\cite{Moore:1999gc}, and Isothermal~\cite{Bahcall:1980} profiles. The appropriate values of $\alpha$, $\beta$ and $\gamma$ for the chosen profiles can be found in Table~\ref{tab:DMprofiles}. The Moore profile is cuspier than NFW and leads to a larger J-factor, and hence a larger DM signal and stronger limits. The Isothermal profile is less cuspy and leads to weaker limits than NFW. The different choices of $r_s$ and $\rho(r_\odot)$ are taken from SuperK and the HyperK Design Report. \begin{table}[h] \centering \begin{tabular}{|l|c|c|c|c|c|} \hline Halo model & $\alpha$ & $\beta$ & $\gamma$ & $r_s[\, {\rm kpc}]$ & $\rho(r_\odot)[\, {\rm GeV}\, {\rm cm}^{-3}]$\\ \hline NFW & 1 & 3 & 1 & 20 & 0.3\\ Moore & 1.5 & 3 & 1.5 & 28 & 0.27\\ Isothermal & 2 & 2 & 0 & 5 & 0.3\\ \hline \end{tabular} \caption{Parameters of eq.~\ref{eq:DMdensprofile} that determine the NFW~\cite{Navarro:1995iw}, Moore~\cite{Moore:1999gc} and Isothermal~\cite{Bahcall:1980} density profiles. } \label{tab:DMprofiles} \end{table} For simplicity we have fixed the location of the GC at the zenith $z=90^\circ$ of the HyperK experiment. We have investigated the effects of changing the GC location relative to the detector orientation but find them to be relatively small (see appendix~\ref{sec:app2}), since HyperK has comparable dimensions in all directions. We also do not include the contribution from extragalactic dark matter, which leads to a diffuse and isotropic signal. 
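Putting the line-of-sight integral and the NFW row of Table~\ref{tab:DMprofiles} together, the J-factor in a given direction can be sketched as follows. This is a minimal Python version under the stated normalisation convention $\rho(r_\odot)=0.3\, {\rm GeV}\, {\rm cm}^{-3}$; it is illustrative rather than the analysis code itself.

```python
import numpy as np
from scipy.integrate import quad

R_SUN = 8.5       # kpc, Sun-GC distance
R_MW = 40.0       # kpc, halo size
KPC_TO_CM = 3.086e21

RS_NFW = 20.0     # kpc, NFW scale radius (Table values)
RHO_SUN = 0.3     # GeV cm^-3, local density

def rho_nfw(r):
    """NFW profile, (alpha, beta, gamma) = (1, 3, 1), normalised so that
    rho(R_SUN) = RHO_SUN."""
    norm = RHO_SUN * (R_SUN / RS_NFW) * (1.0 + R_SUN / RS_NFW) ** 2
    x = r / RS_NFW
    return norm / (x * (1.0 + x) ** 2)

def j_factor(b, l):
    """Line-of-sight integral of rho^2 in direction (b, l) [radians],
    returned in GeV^2 cm^-5."""
    c = np.cos(b) * np.cos(l)
    s_max = np.sqrt(R_MW**2 - R_SUN**2 + R_SUN**2 * c**2) + R_SUN * c
    integrand = lambda s: rho_nfw(np.sqrt(R_SUN**2 - 2*s*R_SUN*c + s**2))**2
    # Help the integrator find the peak near the point of closest approach
    J, _ = quad(integrand, 0.0, s_max, points=[R_SUN * c], limit=200)
    return J * KPC_TO_CM

# Directions closer to the GC give larger J-factors. Avoid (0, 0) exactly:
# for gamma = 1 the rho^2 ~ r^-2 cusp is not integrable along that single
# line of sight, and is only regulated by the solid-angle average.
J_near_gc = j_factor(0.1, 0.0)
```

The same function with the Moore or Isothermal parameters from the table reproduces the ordering described in the text: cuspier profiles give larger J-factors and hence stronger limits.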
This has been used in the past to set conservative limits on the total DM annihilation cross-section~\cite{Beacom:2006tt}. The HyperK DR does not include the extragalactic contribution in setting its limits, and so we follow suit in order to achieve a fair comparison with their results. Unlike the GC contribution, whose spectrum has the form of a delta-function, the extragalactic neutrino spectrum is smeared out by the effects of redshift. The total flux, however, is of similar size to that from the GC~\cite{Klop:2018ltd}. Consequently, one may think of our analysis as a conservative all-sky analysis that uses signal only from DM annihilations in the Galactic Centre, but includes all relevant background contributions. Accordingly, including the extragalactic contribution would lead to slightly better limits, as would binning in $\cos\theta_{\rm GC}$; see ref.~\cite{Arguelles:2019ouk} for further recent discussion. The strictly correct way to perform the coordinate transformation between Galactic and horizon coordinates is to go from Galactic to equatorial coordinates (centred at Earth) and then to horizon coordinates at the detector location. Instead, we use a simplified transformation between the Galactic and horizon coordinate systems: assuming that the GC is located at azimuth angle $0^\circ$ and zenith angle $z_{\rm GC}$, the transformation is obtained by rotating the $z$ axis (north Galactic pole) by $90^\circ-z_{\rm GC}$, \begin{eqnarray} \begin{bmatrix} \cos a \sin z \\ \sin a \sin z\\ \cos z \end{bmatrix}&=& R_y\left(\frac{\pi}{2}-z_{\rm GC}\right) \begin{bmatrix} \cos l \cos b \\ \sin l \cos b\\ \sin b \end{bmatrix} \\ &=& \begin{bmatrix} \sin z_{\rm GC} & 0 & -\cos z_{\rm GC} \\ 0 & 1 & 0 \\ \cos z_{\rm GC} & 0 & \sin z_{\rm GC} \end{bmatrix} \begin{bmatrix} \cos l \cos b \\ \sin l \cos b\\ \sin b \end{bmatrix}. 
\end{eqnarray} The transformation from $(l,b)$ to $(a,z)$ is then given by \begin{eqnarray} \tan a &=& \frac{\sin l}{\sin z_{\rm GC} \cos l- \cos z_{\rm GC} \tan b},\\ \cos z &=& \sin z_{\rm GC} \sin b + \cos z_{\rm GC} \cos b \cos l . \end{eqnarray} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{figs/Jfactor_hor_z90} \includegraphics[width=0.49\textwidth]{figs/Jfactoravg_hor_z90} \caption{Left: The J-factor in horizontal coordinates, assuming that the GC is located at altitude $0^\circ$ and azimuth $a=0^\circ$. Right: Same as left but binned in solid angle $\Delta\Omega$ as described in the text.} \label{fig:Jfactorbinned} \end{figure} We bin the J-factor in the following way. As for the background HKKM11 flux, we have considered 12 bins in azimuth and 20 bins in $\cos z$, \begin{eqnarray} \Delta\Omega_{i,j} &=& \Delta a_i \int_{z_j}^{z_{j+1}} \sin z \, dz, \\ J_{\Delta\Omega_{i,j}} &=& \frac{1}{\Delta \Omega_{i,j}} \int_{a_i}^{a_{i+1}} \int_{z_j}^{z_{j+1}} J(a,z) \, \sin z \, dz \, da. \label{eq:Jfactorbinned} \end{eqnarray} The J-factor binned in solid angle is shown in the right panel of Fig.~\ref{fig:Jfactorbinned}, where we have assumed that the Galactic centre is located at a zenith angle $z_{\rm GC}$ and azimuth $a=0^\circ$. To bin the neutrino flux, we consider 20 bins per decade as for the HKKM11 flux. The number of energy bins we use depends on the DM mass: $E_\nu\in [10\, {\rm MeV},m_{\rm DM}]$ for DM annihilation into muons and $E_\nu\in [10\, {\rm MeV},10^{\lceil\log_{10}(m_{\rm DM}/{\rm MeV})\rceil}\, {\rm MeV}]$ for annihilation into neutrinos, where $\lceil\,\cdot\,\rceil$ denotes the ceiling function, i.e. the upper edge is rounded up to the next decade. Note that in setting limits we combine the signal from all bins, resulting in an all-sky analysis. 
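The rotation above is straightforward to implement and check numerically; a minimal Python version (angles in radians) is:

```python
import numpy as np

def galactic_to_horizon(l, b, z_gc):
    """Map Galactic (l, b) to horizon (azimuth a, zenith z) under the
    simplifying assumption that the GC sits at azimuth 0 and zenith z_gc.
    All angles in radians."""
    # Galactic unit vector
    x = np.cos(l) * np.cos(b)
    y = np.sin(l) * np.cos(b)
    w = np.sin(b)
    # Rotation by pi/2 - z_gc about the y axis, as in the matrix equation
    xh = np.sin(z_gc) * x - np.cos(z_gc) * w
    zh = np.cos(z_gc) * x + np.sin(z_gc) * w
    return np.arctan2(y, xh), np.arccos(zh)

# Consistency check: the GC itself (l = b = 0) maps to (a = 0, z = z_gc)
a_gc, z_out = galactic_to_horizon(0.0, 0.0, np.pi / 3)
```

Dividing $y$ by $x_h$ recovers the closed-form expression for $\tan a$, and $z_h$ is exactly the right-hand side of the $\cos z$ relation, so the vector and closed-form versions agree identically.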
\begin{table}[h] \centering \begin{tabular}{|l|c|c|} \hline & \multicolumn{2}{|c|}{$m_{\rm DM}$ (GeV)} \\ \hline Event category & $\chi\chi\rightarrow \mu^+\mu^-$ & $\chi\chi\rightarrow \nu \overline{\nu}$ \\ \hline FC $\overline{\nu}_e$ & 0.11 - 50 & 0.017 - 50 \\ FC $\nu_e$ & 0.11 - 50 & 0.05 - 50 \\ FC $\nu_\mu$, $\overline{\nu}_\mu$ & 0.3 - 50 & 0.25 - 50 \\ PC $\nu_\mu$, $\overline{\nu}_\mu$ & 2 - 50 & 2 - 50 \\ \hline \end{tabular} \caption{The dark matter mass ranges used for generating signal events in the different event categories. Neutrino and anti-neutrino events are listed separately for the FC $\nu_e$ category and combined for the $\nu_\mu$ categories.} \label{tab:massrange} \end{table} The range of DM masses we use for the different event categories is shown in Table~\ref{tab:massrange}. We take an upper bound of 50~GeV in all cases in order to allow comparison with the HyperK Design Report. The lower bound varies from category to category. For muon final states the DM mass must be at least as large as $m_\mu$ from kinematic considerations. We find this absolute lower bound is only relevant for the FC $\nu_e + \overline{\nu}_e$ category, and that the FC $\nu_\mu$ and PC $\nu_\mu$ classes do not have any acceptance below the DM masses shown in the table. For neutrino final states we choose the lower bound at 17~MeV, below which the experimental backgrounds are dominated by spallation. Again, this is only relevant in the FC $\overline{\nu}_e$ class. As can be seen from Fig.~\ref{fig:EvsEkin}, at low energies $E_{\rm kin} \lesssim 50$~MeV this event class is dominated by $\overline{\nu}_e$ events, since the scattering cross-sections on hydrogen for $\nu_e$ are very suppressed. As expected, the partially contained category is associated with higher energy neutrinos (and hence DM masses). 
\begin{figure} \centering \includegraphics[width=\textwidth]{figs/HK_signal_background_eventrate.pdf} \caption{Expected FC $\nu_e+\overline{\nu}_e$ signal (orange) and background (blue) event rate at HyperK for DM annihilation into neutrinos, $m_{\rm DM}=50\, {\rm MeV}$ (left) and $m_{\rm DM}=100\, {\rm MeV}$ (right), and 20 years of livetime. The total background includes atmospheric neutrinos from HKKM11 and FLUKA (cyan), invisible muons (light blue) and DSNB (magenta). A 70\% neutron tagging efficiency is assumed. The DM signal corresponds to the 90\% CL $\langle \sigma v \rangle$ upper bound shown in every panel. Bins in the spallation region are not considered in the projected limit calculation. \label{fig:signalbkg} } \end{figure} We use the \texttt{Swordfish} package~\cite{Edwards:2017mnf,Edwards:2017kqw} to derive 90\% confidence level (CL) upper bounds on the annihilation cross-section. \texttt{Swordfish} calculates limits based on maximum-likelihood estimation, after reducing the problem at hand to an equivalent single-bin problem characterised by effective numbers of signal and background events. For each DM mass and event class, we bin the all-sky data in $E_{\rm kin}$ between the lower threshold of 16~MeV and $1.2\,m_{\rm DM}$. For masses above 40~MeV we use 20 bins, and below 40~MeV we use 5, to avoid overbinning the data. We include an uncertainty of 10\% in the overall normalisation of the signal and backgrounds. This roughly agrees with the uncertainty in the atmospheric neutrino flux in~\cite{Abe:2017aap}. We include the total background as discussed in the sections above, which includes the HKKM11 and FLUKA atmospheric neutrino fluxes, the invisible muon contribution and the DSNB. There is also considerable uncertainty around the exact normalisation of the DSNB. In our analysis we have simply fixed it to that used in the HyperK Design Report. 
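The statistical machinery can be illustrated with a single-bin Asimov calculation. This is a toy stand-in for the \texttt{Swordfish} computation: it ignores systematic uncertainties and bin-to-bin shape information, and is meant only to show how an expected 90\% CL upper limit follows from background counts.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def excl_significance(s, b):
    """Median significance for excluding an expected signal s on top of a
    background b, using the Asimov dataset n = b in a single Poisson bin."""
    return np.sqrt(2.0 * (s - b * np.log1p(s / b)))

def upper_limit_events(b, cl=0.90):
    """Expected upper limit on the number of signal events at confidence
    level cl (no systematics; toy stand-in for Swordfish)."""
    z = norm.ppf(cl)     # one-sided threshold, ~1.28 at 90% CL
    return brentq(lambda s: excl_significance(s, b) - z,
                  1e-6, 100.0 + 10.0 * b)

# With b = 100 expected background events the limit is ~13 signal events,
# slightly above the Gaussian approximation z * sqrt(b) ~ 12.8.
s90 = upper_limit_events(100.0)
```

An event limit of this kind translates into a cross-section limit through eq.~\ref{eq:GCneuflux}, since the expected signal count scales linearly with $\langle \sigma v \rangle$.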
However, a simultaneous analysis that takes into account both a possible dark matter contribution and the uncertainties on the DSNB would be very interesting. The use of angular information could help here, even though the angular resolution of HyperK is more limited at lower energies. Fig.~\ref{fig:signalbkg} shows the size of the signal corresponding to the 90\% CL limit, together with the backgrounds, for two different choices of dark matter mass. Note that there are contributions to the signal at values of $E_{\rm kin}$ that are much lower than the DM mass. This arises because, for the case of scattering from oxygen, $E_\nu$ and $E_{\rm kin}$ are not tightly correlated (see Fig.~\ref{fig:EvsEkin}). \begin{figure} \centering \includegraphics[scale=0.75]{figs/sigmavnunu_limits.pdf} \caption{Limits derived for $\chi\chi \to \nu\bar{\nu}$ annihilation for the event categories defined in the text for 20 years run time at HyperK. A 70\% neutron tagging efficiency is assumed. The central line in each band shows the limit for an NFW halo profile, with the upper and lower bounds being set by the limits for the Isothermal and Moore profiles respectively. We also show the limits from the SuperK GC dark matter search~\cite{Frankiewicz:2017trk} (solid with dots), and the projected limit from the HyperK DR~\cite{Abe:2018uyc} (dashed), a limit at low masses derived from the SuperK DSNB search~\cite{PalomaresRuiz:2007eu,Campo:2018dfh} (grey solid), projections for the low mass limit achievable in HyperK with Gd enhancement~\cite{Campo:2018dfh} (grey dashed) and the expected $\langle \sigma v \rangle$ for a thermal relic calculated with \texttt{DarkSUSY} (blue dashed).\label{fig:nulimits}} \end{figure} We show our results for the neutrino and muon final states in Figs.~\ref{fig:nulimits} and~\ref{fig:mulimits} respectively. 
The solid black lines with points show current bounds from the SuperK GC search~\cite{Frankiewicz:2017trk} and the dashed black line shows the projected limits from the HyperK DR~\cite{Abe:2018uyc}. In Fig.~\ref{fig:nulimits} we also show a limit at low masses derived from the SuperK DSNB search~\cite{PalomaresRuiz:2007eu,Campo:2018dfh} as a grey solid line, and projections for the low mass limit achievable in HyperK with Gd enhancement~\cite{Campo:2018dfh} as a grey dashed line. Our limits for the FC $\nu_e+\overline{\nu}_e$, FC $\nu_\mu+\overline{\nu}_\mu$ and PC $\nu_\mu+\overline{\nu}_\mu$ categories are shown as purple, light blue and orange regions respectively, with the FC $\nu_e+\overline{\nu}_e$ category giving the strongest limit at low masses. The central line in each region corresponds to the limit for the NFW profile, while the upper and lower margins are the limits for the Isothermal and Moore profiles respectively. Note that both the Isothermal and Moore profiles are unrealistic choices for the halo profile. They serve as useful extremes to define a conservative error band that reflects uncertainty in the halo profile; the true uncertainty will be smaller. \begin{figure} \centering \includegraphics[scale=0.75]{figs/sigmavmumu_limits.pdf} \caption{Limits derived for $\chi\chi \to \mu^+ \mu^-$ annihilation for the event categories defined in the text for 20 years run time at HyperK. The central line in each band shows the limit for an NFW halo profile, with the upper and lower bounds being set by the limits for the Isothermal and Moore profiles respectively. We also show the limits from the SuperK GC dark matter search~\cite{Frankiewicz:2017trk} (solid with dots), and the projected limit from the HyperK Design Report~\cite{Abe:2018uyc} (dashed).\label{fig:mulimits}} \end{figure} All our projections are for a 20 year running time, to facilitate comparison with the HyperK DR. 
In both plots we observe good agreement at high masses with the HyperK DR, indicating that our scaled-up version of the SuperK detector simulator captures the relevant features of HyperK sufficiently well. In Fig.~\ref{fig:nulimits} our results are broadly consistent with the projections of ref.~\cite{Campo:2018dfh}, which assume a 10 year running time. This is also a non-trivial cross-check: their results are derived by re-interpreting the results of a SuperK DSNB search as a constraint on DM annihilation, and then rescaling those limits up to HyperK. The thermal relic annihilation cross-section \cite{Steigman:2012nb} is shown as the blue dashed line in Fig.~\ref{fig:nulimits}. For the case of neutrino final states, we see that our projected sensitivity dips below the thermal annihilation cross-section of $\sim4\times 10^{-26}\, {\rm cm}^3\,{\rm s}^{-1}$ at around 20~MeV, assuming the NFW halo profile. For muon final states the projected limits are approximately an order of magnitude higher than for neutrinos. Note, however, that annihilation to muons is subject to stronger constraints arising from the CMB at low mass and Fermi/AMS at high mass~\cite{Leane:2018kjk}. Finally, in Fig.~\ref{fig:limit_nfw_noGd}, we compare the projected limits with and without neutron tagging, assuming annihilation into neutrinos and an NFW halo profile. Since this mainly affects the invisible muon background at low energies, we only show the FC $\nu_e+\overline{\nu}_e$ event category in this plot. Neutron tagging would have a negligible effect on the limits for annihilation into muons. The impact on the projected sensitivity for annihilation to neutrinos is seen below 70~MeV, where the limit without n-tagging (shown as an orange line) becomes weaker by a factor of about two relative to the n-tagged case. We also show the projected limits with (dashed light blue) and without (dot-dashed light green) n-tagging at HyperK from ref.~\cite{Campo:2018dfh}. 
\begin{figure} \centering \includegraphics[scale=0.75]{figs/sigmavnunu_NFW_noGd_limits.pdf} \caption{The limit on the annihilation cross-section, with and without neutron tagging, assuming a 70\% tagging efficiency. The purple line is identical to the limit from Fig.~\ref{fig:nulimits} for the NFW profile, while the orange line is the corresponding limit without neutron tagging. The dashed light blue and dot-dashed light green lines show the projections from~\cite{Campo:2018dfh} for HyperK with and without neutron tagging respectively. The effects are confined to the low mass regime. \label{fig:limit_nfw_noGd}} \end{figure} \section{Conclusions} \label{sec:conc} We have projected the reach of the Hyper-Kamiokande experiment in searches for light dark matter that annihilates to neutrinos and muons. To generate our results we have used an original detector simulation validated against results from Super-Kamiokande and then scaled up according to parameters in the HyperK Design Report~\cite{Abe:2018uyc}. We have focused on an annihilation signal originating in the centre of the Galaxy (although we technically undertake an all-sky analysis neglecting the extragalactic signal contribution) finding that HyperK should be able to probe thermal annihilation cross-sections for DM masses around 20~MeV for annihilations into neutrinos (for an NFW halo). The sensitivity for muon final states is always at least an order of magnitude above the thermal cross-section. We note that there are substantial uncertainties in these projections depending on the exact form of the DM halo profile, and possible galactic substructure. At low masses a critical background derives from muon-induced spallation products. Based on information in the HyperK DR~\cite{Abe:2018uyc}, we have adopted 17~MeV as the lower threshold for our projections. 
However, neutron tagging would allow these spallation events to be identified~\cite{Li:2014sea,Li:2015kpa,Li:2015lxa}, possibly allowing a search down to neutrino energies of 10~MeV. While current limits from the Cosmic Microwave Background and Big Bang Nucleosynthesis constrain thermal dark matter annihilating into neutrinos only up to masses of $\mathcal{O}(1-10)$~MeV\footnote{The precise number depends on the dark matter spin and other properties.}, CMB Stage-IV experiments should be able to probe masses up to 10--15~MeV~\cite{Escudero:2018mvt,Sabti:2019mhn}. The upcoming JUNO and DUNE experiments are also projected to have sensitivity to thermal cross-sections at low masses~\cite{Klop:2018ltd,Arguelles:2019ouk}. The combination of HyperK, CMB Stage-IV, JUNO and DUNE data may thus allow a comprehensive probe of dark matter annihilating to neutrinos up to masses of several tens of MeV, substantially extending the reach of current experiments. We thus consider it important that HyperK undertakes searches for dark matter over the full range of experimental sensitivity, and in particular down to the lowest energies. The prospects for DM discovery at HyperK would be further enhanced with the presence of a second detector in South Korea~\cite{Abe:2016ero}. Although the main advantages of such a detector would be in neutrino oscillation physics, the limits on the dark matter annihilation cross-section could be expected to improve by an $\mathcal{O}(1)$ factor, assuming identical detectors and exposure times. We note that the larger overburdens at the proposed South Korean sites would be particularly beneficial in searches for light dark matter through reducing the spallation backgrounds at low energies. This paper opens a number of possible directions for future work through improving our simulations and extending the signatures studied. The Hyper-Kamiokande experiment will be an exciting new tool in the quest to understand dark matter, and we plan to return to these issues in the future.
\acknowledgments We thank John Beacom for detailed comments on the manuscript and Jost Migenda and Tommaso Boschi for helpful discussions and comments. NFB, MJD and SR are supported by the Australian Research Council. This research was supported by the Munich Institute for Astro and Particle Physics (MIAPP) which is funded by the Deutsche Forschungsgemeinschaft (DFG) under Germany’s Excellence Strategy EXC-2094 390783311.
\section{Conclusions} In this paper, we described a new method for polling online data sources that uses broad keywords (e.g. cuisine = "American", stars = "3") to extract a corpus that is used to train a TLM such as the GPT. The finetuned model captures sociolinguistic patterns of the polled group that can then be accurately queried using highly targeted prompts such as "no vegetarian options". This unique method of querying a population on content that may not exist explicitly in the ground truth is possible because of TLMs' capacity for memorization (learning repeating patterns), interpolation (creating variations on existing values), and extrapolation (inferring new content from existing content). We demonstrated that using TLMs in this way is actually more reliable and accurate than using ground truth queries that produce sparse results, even if the TLM is not trained on the specific topics of interest. This opens up a tremendous opportunity for textual research where relevant data is missing, scarce, or volatile. \section{Discussion} Polling transformer language models has provided us with a new lens for assessing public attitudes and opinions well beyond dining options. The same technique can be used to examine social, political and public health issues using corpora from a variety of sources. It is dynamic and can be used to answer questions using latent information. Further, it is computationally inexpensive and does not require any costly human-annotated ground truth to train. The strength of the GPT is also a weakness. Because it stochastically generates each new token based on the ones that preceded it, but also on randomness-introducing parameters such as temperature, it can be difficult to make it behave in ways that are both predictable and dynamic.
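The temperature trade-off can be made concrete with a minimal sketch (the function and values below are illustrative, not part of our pipeline): temperature-scaled softmax sampling collapses to a deterministic argmax as temperature approaches zero, and recovers a spread of responses at higher values.

```python
import numpy as np

def sample_token(logits, temperature, rng):
    """Sample a token index from logits using temperature scaling.

    As temperature -> 0 this approaches a deterministic argmax (the
    same output every time); higher temperatures flatten the
    distribution and restore response diversity.
    """
    if temperature == 0.0:
        return int(np.argmax(logits))
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                       # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.5]
greedy = {sample_token(logits, 0.0, rng) for _ in range(10)}   # always argmax
varied = {sample_token(logits, 1.5, rng) for _ in range(200)}  # mixes indices
```

At scale, it is the distribution over many such samples, not any single draw, that carries the polling signal.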
A temperature of zero will produce the same result repeatedly, but then the distribution of responses to the prompt will be lost. The best way to use these models may be to focus on the statistics of large-scale patterns rather than looking at individual responses. Stochasticity ensures that some percentage of texts will be untrustworthy, but at scale such outliers can be identified and handled appropriately. In addition, prompt design is tricky. Small changes in prompts may result in significant changes in results (e.g., \enquote{some vegetarian options} versus \enquote{many vegetarian options}). Limitations of the TLMs themselves may also prevent them from providing accurate information. For example, although humans can link affordances (\textit{I can walk inside my house}) and properties to recover information that is often left unsaid (\textit{the house is larger than me}), TLMs struggle on such tasks~\cite{forbes2019neural}. TLMs are also vulnerable to \textit{negated} and \textit{misprimed} probes. Simply adding \enquote{not} to a probe (e.g. \enquote{The theory of relativity was \textit{not} developed by}) often still generates \enquote{Albert Einstein}. Mispriming, or the addition of unrelated content to the prompt (e.g. prefixing the probe \enquote{Munich is located in} with \enquote{Dinosaurs?}), can produce highly distorted results~\cite{kassner2019negated}. In this paper, we have shown that TLMs such as the GPT can be used as an effective data collection technique to gain a deeper understanding of sample populations. We believe these techniques can also be used to explore social, political and health issues, but it is important to understand their limitations. \section{Ethical Considerations} Large Transformer Language Models' capacity to rapidly generate unethical or dangerous content (e.g. realistic mass-shooter manifestos) is well understood. Beyond the risk of the generation of credible fake content, there are additional risks for social research using TLMs.
The process by which latent information is stored in the model weights is a form of dimensionality reduction that cannot incorporate all of the nuance in the data the model has trained on as it learns linguistic patterns. As a result, it will inevitably fail to capture outlier behaviors in the model weights. Even for patterns which are largely correct, the models are capable of making informational errors, such as the improper attribution demonstrated in our Floober example in Section~\ref{sec:introduction}. The model followed the highly credible linguistic pattern of an academic or Wikipedia description of an animal, complete with a likely animal family, and attributed the discovery to a person, but it was the \textit{wrong} person. This class of error makes the latent information in TLMs valuable for population-scale questions, but potentially dangerous for attributable content. The output of the model reflects generalized linguistic behaviors, and is not attributable to a specific individual. Prompt tuning the model with quotes from a particular individual might produce salacious or unethical content which not only has never been produced by that individual, but includes ideas they may abhor. In fact, persistent biases or stereotypical behaviors often exist within the model's weights~\cite{abid2021persistent,nadeem2020stereoset}. As a result, it would be extremely dangerous to use this sort of latent information to perform predictive actions on individuals based on the output of these models. AI is increasingly being applied to predictive tools for law enforcement, employment screening, and other systems that judge individuals based on an algorithmic assessment~\cite{broadhurst2019artificial,ponce2021ai}. Attempting to leverage the techniques we have demonstrated for a system of that nature would be potentially misleading, possibly dangerous, and certainly unethical.
\subsection{The Floober} Below are two contextual interpolation examples generated by GPT-3 from the same prompt (shown in bold). \textbf{The Floober is a large flightless bird that lives in semi-arid areas in Africa. Like its cousin the Ostrich}, it is capable of running at speeds of up to 60 miles per hour. It uses its long legs and claws to dig in the sand and expose roots and tubers. The Floober also has a sharp beak that can crack and peel bark from trees, allowing the bird to eat the inner bark. The Floober has a keen sense of smell, much like a pig. In fact, it was first located when it was discovered that it was destroying the crops of local farmers by uprooting them. Unfortunately for the farmers, once they had located the Floober, they could not find a way to eliminate the bird without damaging the local ecosystem. It was later discovered that the Floober eats a specialized type of fungus that is prevalent in its habitat, and the farmers eventually learned to live with their new neighbor. \textbf{The Floober is a large flightless bird that lives in semi-arid areas in Africa. Like its cousin the Ostrich}, the Floober is a member of the Struthionidae family. The Floober was discovered in the early 19th century by explorers and naturalists like William Cornwallis Harris, who named them after the sound they made: "Floober". They were first described in 1821 by Thomas Burchell, the discoverer of Burchell's zebra. Other species were later discovered, including the Giant Floober and the Calico Floober. The Giant Floober is known only from fossils and was probably hunted to extinction by humans. Modern science recognizes two species of Floobers, the Eastern Floober, which is found in the savannas and grasslands of central Africa, and the Western Floober, which is found in the savannas and grasslands of southern Africa. \section{Future Work} So far, we have only scratched the surface of probing and understanding the latent knowledge captured in a transformer language model.
Our next work will involve using this technique to poll latent information on Twitter regarding public health issues. This will involve training our models on left-wing, right-wing and other groups participating in the ongoing COVID-19 online discussions. We will also be exploring the effects of negation, mispriming, and other techniques that may distort the latent knowledge captured by these models. \section{Introduction} \label{sec:introduction} Large-scale research involving humans is difficult, and often relies on labor-intensive mechanisms such as polling, where statistically representative populations are surveyed using landline and cellphone interviews, web surveys, and mixed-mode techniques that combine these approaches. Often, participants in a survey may need to be recontacted to update responses as a result of changing events~\cite{fowler2013survey}. As social media has developed, many attempts have been made to determine public opinion by mining data that is available from online providers such as Twitter and Reddit, e.g.~\cite{colleoni2014echo, sloan2015tweets}. However, though social data can be analyzed in a variety of ways, it cannot replace the pollster asking about items that do not explicitly exist in the data. With the emergence of Transformer-based Language Models (TLMs), this may be about to change. These models, such as the Generative Pre-trained Transformer (GPT) series developed by OpenAI, have been trained on millions of high-quality web pages. The model generates text (with words represented as high-dimensional vectors) as a function of an input sentence and its previous hidden states. These hidden states are in turn functions of the input sentence and the previous hidden states, and so on. Since the model is not trained using any hand-crafted rules about language, it effectively learns its own set of rules for generating natural language.
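The autoregressive process just described can be sketched with a toy stand-in, where each new token is chosen as a function of the tokens generated so far. The transition table below plays the role of learned weights and is purely illustrative:

```python
# Toy autoregressive generator: each token is a function of the tokens
# that preceded it, mirroring how a TLM conditions on its own output.
# The bigram table NEXT is a hypothetical stand-in for learned weights.
NEXT = {
    "review": [":"],
    ":": ["Great"],
    "Great": ["food"],
    "food": ["--"],
}

def generate(prompt_tokens, max_len=8):
    tokens = list(prompt_tokens)
    while len(tokens) < max_len:
        candidates = NEXT.get(tokens[-1])
        if not candidates:          # no continuation learned: stop
            break
        tokens.append(candidates[0])
    return tokens

out = generate(["review"])   # ["review", ":", "Great", "food", "--"]
```

A real TLM replaces the lookup table with a learned distribution over an entire vocabulary, conditioned on the whole preceding context rather than just the last token.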
In doing so, it captures semantic and syntactic regularities in language that were previously hidden to traditional statistical methods. As such, these models contain tremendous amounts of sociolinguistic information such as the patterns of speech, word choice, and dialects~\cite{meyerhoff2018introducing} that can be accessed \textit{interactively}. By using carefully constructed probes, the model can be \enquote{polled}, allowing researchers to access latent knowledge contained in the model in ways that are analogous to accessing the knowledge in a population by calling a random sample. Large transformer-based language models such as the GPT encode short-range and long-range statistical relationships between tokens. Tokens represent words, and clusters of tokens represent more nebulous structures that incorporate \textit{concepts} and \textit{beliefs}. Key to understanding how these models behave are the concepts of \textit{interpolation} and \textit{extrapolation}~\cite{rytting2021leveraging}. Consider the following example using GPT-3, where a prompt (in bold face) describing a fictional bird, a \enquote{Floober}, is used to create a contextually consistent description of this imaginary beast: \begin{displayquote} \textit{\textbf{The Floober is a large flightless bird that lives in semi-arid areas in Africa. Like its cousin the Ostrich}, the Floober is a member of the Struthionidae family. The Floober was discovered in the early 19th century by explorers and naturalists like William Cornwallis Harris, who named them after the sound they made: "Floober". They were first described in 1821 by Thomas Burchell, the discoverer of Burchell's zebra.} \end{displayquote} In this example, GPT-3 is able to \textit{interpolate} by arranging tokens in the model's information space related to African zoology based on the attention relationship to the given prompt.
The GPT also \textit{extrapolates} from the prompt cues of \enquote{Floober} and \enquote{Ostrich} by accessing the concept of \textit{Struthionidae}, which includes ostriches. These relationships are encoded as statistical dependencies among tokens, which means that when a token is missing from a query, the model can use its contextual knowledge to predict which other tokens should be included. This does not mean that GPT-3 is foolproof. In this case, it makes a factual error by accessing tokens related to Thomas Burchell (1799--1846)~\footnote{en.wikipedia.org/wiki/Thomas\_Burchell} rather than William John Burchell (1781--1863)\footnote{en.wikipedia.org/wiki/William\_John\_Burchell}, who was the first Westerner to describe the zebra for science. Because of this ability to synthesize responses, language models such as GPTs can provide capabilities for capturing the human opinions and beliefs encoded in the training text in a way that more closely resembles the traditional polling model. Rather than performing training data analysis (e.g., supervised classification), we can \textit{poll} the model's responses to probes. But to do this effectively requires that we develop methods to systematically reveal the relevant information captured in these models. In this paper, we finetune~\cite{sun2019fine} a set of GPT-2 models on Yelp corpora that reflect populations of users with distinctive views. We then use prompt-based queries to probe these models to reveal insights into the biases and opinions of the users. We demonstrate how this approach can produce results more accurately than traditional keyword or keyphrase searches, particularly when data is sparse or missing. In addition to the concepts of interpolation and extrapolation, we introduce the concept of language model \textit{memorization}, where models can be trained to incorporate repeating patterns.
We incorporate this concept by introducing the technique of \textit{meta-wrapping}, which adds information to the training corpora that aids in automatically identifying particular parts of the generated text. We further find a correlation between the point at which the model is trained sufficiently to accurately reproduce these wrappings and the overall accuracy of the model in representing the explicit and latent information that it has been trained on. Lastly, we provide methods for validating transformer language models in each of these contexts. We extensively study our methodology on Yelp data, where we have ground truth in the form of user-submitted stars, and discuss applications in other domains. \section{Methods} For all the research involving finetuning, we used the Huggingface~\cite{wolf2019huggingface} 117M parameter GPT-2 model. This was done for two reasons: \begin{enumerate} \item Increased speed: During the course of this study, we finetuned 48 models. We were able to finetune a model in 2-3 hours using one NVIDIA TITAN RTX. \item Reduced carbon footprint: It is clearly possible to train larger models using more hardware in the same amount of time, but since this was a \textit{comparative} study, there was no need to add the cost and energy of spinning up a multi-GPU cloud instance. \end{enumerate} Our methods focus on understanding the \textit{memorization}, \textit{interpolation}, and \textit{extrapolation} behaviors of these language models. To do this, we made use of the Yelp Open Dataset\footnote{www.yelp.com/dataset}. The Yelp dataset contains reviews of different businesses by customers. It incorporates social-media-like text, locations, business names, and star reviews, which can serve as a form of ground truth for performing sentiment analysis on review text.
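A minimal sketch of how a meta-wrapped training corpus can be assembled from Yelp review records (the helper names are ours; the `text` and `stars` fields follow the Yelp Open Dataset review schema, and the delimiter choice mirrors the wrapping used throughout this paper):

```python
import json

def wrap_review(record):
    """Wrap one Yelp review in the meta-markers the models are trained
    to reproduce: "review: <text>, stars: <n> --"."""
    return f"review: {record['text']}, stars: {record['stars']} --"

def build_corpus(json_lines, limit=50_000):
    """Build a newline-delimited training corpus from JSON-lines
    review records, capped at `limit` wrapped lines."""
    lines = []
    for raw in json_lines:
        rec = json.loads(raw)
        lines.append(wrap_review(rec))
        if len(lines) >= limit:
            break
    return "\n".join(lines)

sample = ['{"text": "Great burgers.", "stars": 5}']
corpus = build_corpus(sample)
# corpus == "review: Great burgers., stars: 5 --"
```

Once a model has learned this wrapping, the same delimiters make the generated output trivially machine-parseable.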
More specifically, we created specific sets of corpora for these GPT behaviors: \begin{itemize} \item \textit{Memorization -- Ratings and votes}: This corpus includes numeric information only, including stars and votes. This data was used to evaluate the \textit{Global} characteristics of the model. \item \textit{Interpolation -- Reviews with stars}: This corpus includes a review and the associated stars. We evaluate the star rating and its relationship to the review text in the ground truth and generated data. This is used to evaluate the \textit{Local} characteristics of the model. \item \textit{Extrapolation -- Masked reviews}: This corpus is built from the same set of reviews as the previous item, but excludes any review that contains the phrase \enquote{vegetarian options}. It is used to compare the model's zero-shot behavior against the ground truth and the model trained on the unmasked data. \end{itemize} For the purposes of our research, we concentrate on reviews of \textit{American} restaurants. At 1,795,036 reviews, this subset is more than three times larger than Italian, the next most common cuisine. This provided us with the widest spectrum of options with respect to sub-queries of ground truth. The overall technique used to create models, then generate and evaluate results, is as follows: \begin{enumerate} \item Download and store the Yelp dataset in a MySQL database. \item Analyze the number of reviews by category. \item Create a corpus, wrapped with meta-information (e.g. Figure~\ref{fig:example_corpora}). \item Fine-tune models using the Huggingface API. \item Evaluate the model on a set of prompts and store the results. Each experiment contains an id, date, description, model name, list of textual probes, seed, and model hyperparameters. \item Calculate sentiment and parts-of-speech analysis on generated text\footnote{github.com/flairNLP/flair}.
We also ran the same sentiment evaluation on a subset of \enquote{ground truth} reviews taken from the Yelp dataset. \item Generate charts by running queries on the database and performing analytics. \end{enumerate} We trained and evaluated three sets of models. The first set was trained exclusively on stars and votes (see the training corpus example in Figure~\ref{fig:example_corpora}). This was used to evaluate the statistical properties of the GPT against well-characterized numeric data. \begin{figure}[t] \centering \fbox{\includegraphics[width=0.9\linewidth]{figures/stars_votes}} \caption[]{\detokenize{Corpus section with meta-wrapping}} \label{fig:example_corpora} \end{figure} The second set was trained using corpora of reviews followed by stars (Figure~\ref{fig:example_corpora2}). These models were used to evaluate how effectively the models learned the relationship of the generated text to the star review. In these corpora, the training and test text were wrapped in meta-information consisting of the text \enquote{review: }, \enquote{, stars: }, and terminated by a \enquote{\texttt{-{}- <CR>}}. The use of this wrapping allowed a rapid evaluation of the level of training of the model (i.e. did it learn the wrapping pattern effectively), and once learned, the meta-wrapping supported easy extraction of the synthetic data using regular expressions. The third set was trained using a masked corpus that did not include the phrase \enquote{vegetarian options}, to compare against the other model and the ground truth. \begin{figure}[t] \centering \fbox{\includegraphics[width=0.9\linewidth]{figures/reviews_stars}} \caption[]{Review/stars corpus section (\detokenize{yelp_review-stars_test_American_6.txt})} \label{fig:example_corpora2} \end{figure} \section{Related Work} \label{sec:related} Since the introduction of the transformer model in 2017, TLMs have become a field of study in themselves.
The transformer uses self-attention, where the model computes its own representation of its input and output~\cite{vaswani2017attention}. So far, significant research has gone into increasing the performance of these models, particularly as these systems scale into the billions of parameters, e.g.~\cite{radford2019language}. Among them, BERT~\cite{devlin2018bert} and GPT~\cite{radford2018improving} are two of the most well known TLMs, used widely to boost the performance of diverse NLP applications. Understanding how and what kind of knowledge is stored in all those parameters is becoming a sub-field in the study of TLMs. For example, \cite{petroni2019language} used probes that present a query to the model as a cloze statement, where the model fills in a blank (e.g. \enquote{Twinkle twinkle \rule{.9cm}{0.15mm} star}). Research is also being done on the creation of effective prompts. Published results show that mining-based and paraphrasing approaches can increase effectiveness in masked BERT prompts over manually created prompts~\cite{jiang2020can}. For example, mined prompts can be produced by mining phrases in the Wikipedia corpus that can be generalized as template questions such as \textit{x was born in y} and \textit{capital of x is y}. These can then be filled in using sets of subject-object pairs. Improvements using this technique can be substantial, up to 60\% over manual prompts. Paraphrasing, or the simplification of a prompt using techniques such as back-translation, can enhance these results further~\cite{jiang2020can}. Using TLMs to evaluate social data is still nascent. A study by \cite{palakodety2020mining} used BERT fine-tuned on YouTube comments to gain insight into community perception of the 2019 Indian election.
They created weekly corpora of comments and constructed a tracking poll based on the prompts \enquote{Vote for MASK} and \enquote{MASK will win}, and then compared the probabilities of the tokens for the parties BJP/CONGRESS and candidates MODI/RAHUL. The results substantially tracked traditional polling. Lastly, we cannot ignore the potential dangers of TLMs. OpenAI has shown that GPT-3 can be \enquote{primed} using \enquote{few-shot learning}~\cite{brown2020language}. In \textit{The radicalization risks of GPT-3 and advanced neural language models}~\cite{mcguffie2020radicalization}, GPT-3 was primed using mass-shooter manifestos, with chilling results. We discuss these and other related issues in the Ethical Considerations section. \section{Results} \label{sec:results} In this section, we describe how the GPT is able to incorporate memorization, interpolation, and extrapolation into its behavior. We find that each one of these contexts provides useful mechanisms for determining the performance of such models. \subsection{Memorization} In this section, we focus on the ability of the GPT to memorize repeating patterns while also reproducing statistically similar data with respect to ground truth. To do this, we generated \textit{meta-wrappers} from the ground truth.
In this case, the meta-wrappers encode the number of stars, useful votes, funny votes and cool votes contained in the Yelp data. Examples are shown in Figure~\ref{fig:example_corpora}. When given an insufficiently large corpus, the model would fail to learn the pattern correctly, resulting in generated strings like: \begin{displayquote} \small \texttt{\detokenize{stars_votes = 0stars_stars_stars_min = 2.0, useful_votes = 0,}} \end{displayquote} However, once the corpus contained more than 50,000 lines, the model learned the pattern perfectly, and there were no more errors (column 'error \%' in Table~\ref{tab:memorization_error_correlation}). \begin{table}[t] \small \centering \begin{tabular}{lrr} \toprule model (lines) & error \% & correlation \%\\ \midrule 6k & 0.24\% & 0.36\%\\ 12k & 0.22\% & 0.62\%\\ 25k & 0.14\% & 0.86\%\\ 50k & 0.00\% & 0.96\%\\ 100k & 0.00\% & 0.99\%\\ 200k & 0.00\% & 0.98\%\\ \bottomrule \end{tabular} \caption{Memorization Error \& Correlation} \label{tab:memorization_error_correlation} \end{table} We also tested the effects of corpus size on the ability of the model to reproduce the statistical properties of the ground truth numeric data\footnote{The vote data is mostly zeros and not as useful as the star information}. We found that increasing the number of lines improved the models' learning of the statistical information, as measured by Pearson's correlation. However, as can be seen in the 'correlation \%' column of Table~\ref{tab:memorization_error_correlation}, the best training occurs at 50k-100k lines, with the 200k-line model overfitting and no longer generalizing~\cite{dietterich1995overfitting}. These results indicate that TLMs can both memorize the structure of data and use that structure to reproduce arbitrary amounts of information that is substantially similar to ground truth.
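The memorization evaluation just described can be sketched as follows: extract star values from generated meta-wrapped lines with a regular expression, count malformed lines as errors (the 'error \%' column), and correlate the resulting star histogram with the ground truth (the regex and data below are illustrative, not our exact pipeline):

```python
import re
import numpy as np

STAR_RE = re.compile(r"stars:\s*(\d)")

def star_histogram(lines):
    """Count 1-5 star occurrences in meta-wrapped lines; lines that do
    not match the learned wrapping are counted as errors."""
    counts = np.zeros(5)
    errors = 0
    for line in lines:
        m = STAR_RE.search(line)
        if m and 1 <= int(m.group(1)) <= 5:
            counts[int(m.group(1)) - 1] += 1
        else:
            errors += 1
    return counts, errors

generated = ["review: Loved it, stars: 5 --",
             "review: Meh, stars: 3 --",
             "stars_votes = 0stars_stars_stars_min"]   # malformed output
ground_truth = ["review: Great, stars: 5 --",
                "review: So-so, stars: 3 --"]

g, g_err = star_histogram(generated)
t, _ = star_histogram(ground_truth)
r = np.corrcoef(g, t)[0, 1]   # Pearson correlation of star distributions
```

The same two numbers, error rate and correlation, are what Table~\ref{tab:memorization_error_correlation} reports as a function of corpus size.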
These memorization properties allow us to evaluate the quality of models by injecting known ground truth into the data using meta-wrapping and evaluating the statistical properties of the results. \subsection{Interpolation} In this section, we explore how finetuned GPT models are able to generate data that appropriately represents the behaviors of the group that provided the corpora. In this case, we trained models on 50k and 100k review corpora from the \enquote{American} cuisine. The arrangement of the corpus used for training is shown in Figure~\ref{fig:example_corpora2}. We trained a model on the 50k corpus for 6 epochs to compare to ground truth data. We then generated 10,000 reviews using the prompt \enquote{review:} and parsed and stored the results. Any review that ran too long to generate a star value was rejected, resulting in a total of 9,228 usable review/star pairs. This model accurately reflected the distribution of stars in the ground truth, with a Pearson's correlation of 99.6\% (Figure~\ref{fig:american_gpt_gt}). \begin{figure}[t] \centering \fbox{\includegraphics[width=0.9\linewidth]{figures/american_gpt_gt}} \caption[]{American Ground Truth and GPT star distribution} \label{fig:american_gpt_gt} \end{figure} An extract from a generated 4-star review is shown below: \begin{displayquote} \textit{\enquote{Service is good, staff is very friendly and helpful. Prices are reasonable and the restaurant is clean. The food was great. I had the veggie burger, which was great.}} \end{displayquote} To determine sentiment for reviews like this, we used the Flair sentiment analysis API~\cite{akbik2019flair} for each review and stored the results (6,926 positive, 2,302 negative). We also did this for 10,000 Yelp reviews selected from the \enquote{American} cuisine (6,624 positive, 3,376 negative). We then calculated the average number of stars for a POSITIVE review and a NEGATIVE review for the generated and ground truth data.
The results of this comparison are shown in Table~\ref{tab:avg_star_sentiment}. \begin{table}[t] \small \centering \begin{tabular}{lrrr} \toprule Avg star rating & GPT & GT & \% difference \\ \midrule NEGATIVE & 2.56 & 2.29 & 5.45\% \\ POSITIVE & 4.45 & 4.44 & 0.25\% \\ \bottomrule \end{tabular} \caption{Star ratings for Sentiment} \label{tab:avg_star_sentiment} \end{table} The generated results are nearly identical to the ground truth, and show how well the GPT is able to generate internally consistent reviews and stars. To see how different this was from the pretrained model, we used the prompt \enquote{What follows is a typical example of a restaurant review of an American-style taken from Yelp's database:}. This was more complex in that there was no meta-wrapped output, so more complicated parsing had to be done. For instance, the GPT would sometimes rate on a 10-point scale, and these scores had to be converted to the 5-point scale. Figure~\ref{fig:pretrained_gpt_vs_gt} shows a bias towards positive (4-star) reviews that is inherent in the pretrained model, while the ground truth is biased towards 5 stars. The correlation here is nowhere near the 99.6\% of the finetuned model, though it is still significant at 47.86\%. The match of sentiment to stars is also still apparent in this data (Table~\ref{tab:pretrained_avg_star_sentiment}), even though it is less pronounced than in the finetuned GPT output. This may be partially accounted for by the ways that ratings had to be parsed and combined.
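The comparison in Table~\ref{tab:avg_star_sentiment} amounts to grouping reviews by their classifier-assigned sentiment label and averaging the stars in each group. A minimal sketch (the data and function names are illustrative; the labels are the POSITIVE/NEGATIVE strings a Flair-style classifier would emit):

```python
def avg_stars_by_sentiment(pairs):
    """pairs: (stars, label) tuples, with label in
    {'POSITIVE', 'NEGATIVE'}; returns mean stars per label."""
    sums, counts = {}, {}
    for stars, label in pairs:
        sums[label] = sums.get(label, 0) + stars
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

# Hypothetical (stars, sentiment) pairs standing in for the 10,000
# generated and ground-truth reviews.
pairs = [(5, "POSITIVE"), (4, "POSITIVE"), (2, "NEGATIVE"), (1, "NEGATIVE")]
averages = avg_stars_by_sentiment(pairs)
# averages == {"POSITIVE": 4.5, "NEGATIVE": 1.5}
```

Running this over the generated and ground-truth sets separately yields the two columns that the table compares.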
\begin{figure}[t] \centering \fbox{\includegraphics[width=0.9\linewidth]{figures/pretrained_gpt_vs_gt}} \caption[]{Pre-trained GPT vs Ground Truth} \label{fig:pretrained_gpt_vs_gt} \end{figure} \begin{table}[t] \small \centering \begin{tabular}{lrrr} \toprule Avg star rating & GPT-pre & GT & \% difference \\ \midrule NEGATIVE & 3.25 & 2.29 & 29.46\% \\ POSITIVE & 4.04 & 4.44 & 9.9\% \\ \bottomrule \end{tabular} \caption{Star ratings for Sentiment (pretrained GPT-2)} \label{tab:pretrained_avg_star_sentiment} \end{table} This bias towards positive reviews in the pre-trained model may have led to some interesting behavior on the part of the finetuned models when we tried to elicit negative (e.g. 1-star, 2-star, etc.) reviews. Although it was possible to produce bad reviews given a sufficiently negative prompt, the effort required to produce a one-star review was perplexing. Figure~\ref{fig:no_veg_options_unbalanced} shows that the prompt \enquote{No vegetarian options} produced substantially negative reviews in the ground truth but generally positive reviews when submitted to the GPT trained on American reviews (Pearson's correlation of -63\%). \begin{figure}[t] \centering \fbox{\includegraphics[width=0.9\linewidth]{figures/no_veg_options_unbalanced}} \caption[]{\enquote{No vegetarian options} Unbalanced} \label{fig:no_veg_options_unbalanced} \end{figure} Figure~\ref{fig:many_veg_options_unbalanced} shows a similar behavior for a generally positive prompt, \enquote{Many vegetarian options}. In the ground truth, there are more 5-star reviews than any other, while in synthetic reviews, the peak is again at 4 stars. This is roughly the same pattern that appears in the pretrained GPT (Figure~\ref{fig:pretrained_gpt_vs_gt}) and the negative review (Figure~\ref{fig:no_veg_options_unbalanced}).
\begin{figure}[t] \centering \fbox{\includegraphics[width=0.9\linewidth]{figures/many_veg_options_unbalanced}} \caption[]{\enquote{Many vegetarian options} Unbalanced} \label{fig:many_veg_options_unbalanced} \end{figure} To create an overwhelmingly one-star review with this model required the prompt \enquote{Everything about this place is terrible. The food is crap. The staff is terrible}. Clearly the model is capable of producing one-star reviews, but it requires more extensive prompt tuning to do so. It appears that although there are many pathways to produce 3-, 4-, and 5-star reviews, there is a smaller \enquote{prompt space} that produces token sequences leading to negative reviews. Remarkably, even when the model is trained on a corpus that is \textit{balanced with respect to stars}, it still produces substantially more positive reviews for the \enquote{No vegetarian options} prompt (Figure~\ref{fig:no_veg_options_balanced}) and fewer 5-star reviews than the ground truth for the positive prompt \enquote{Many vegetarian options}. \begin{figure}[t] \centering \fbox{\includegraphics[width=0.9\linewidth]{figures/no_veg_options_balanced}} \caption[]{\enquote{No vegetarian options} Balanced} \label{fig:no_veg_options_balanced} \end{figure} To generate the appropriate sentiment/star behavior, we had to train 5 models, one for each star rating, for reviews with the \enquote{American} category. Each model was trained on a 50k-review corpus created from the ground truth database as shown in Figure~\ref{fig:example_corpora2}. Each model was prompted with the \detokenize{no/some/several/many vegetarian} options described above. The ratio of positive to negative sentiment for each model was compared to the sentiment ratio of 1-, 2-, 3-, 4-, and 5-star reviews in the ground truth data.
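The per-star corpus construction can be sketched as follows, assuming reviews are held as records with \texttt{text}, \texttt{stars}, and \texttt{categories} fields (an assumed schema, not the exact layout of the Yelp dump):

```python
import random

def build_star_corpora(reviews, category="American", per_star=50_000, seed=0):
    """Split reviews of one category into five corpora, one per star rating.

    `reviews` is an iterable of dicts with "text", "stars", and "categories"
    keys -- an assumed schema for illustration.
    """
    rng = random.Random(seed)
    by_star = {s: [] for s in range(1, 6)}
    for r in reviews:
        if category in r["categories"]:
            by_star[int(r["stars"])].append(r["text"])
    corpora = {}
    for star, texts in by_star.items():
        rng.shuffle(texts)
        corpora[star] = texts[:per_star]   # cap each corpus at 50k reviews
    return corpora
```

Each of the five resulting corpora then finetunes its own model, which is prompted independently.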
As we can see in Figure~\ref{fig:GPT-GT_isolated_positive} and Figure~\ref{fig:GPT-GT_isolated_negative}, these correlations are much stronger (Pearson's correlation of 99.97\%) than any of the previous approaches. \begin{figure}[t] \centering \fbox{\includegraphics[width=0.9\linewidth]{figures/GPT-GT_isolated_positive}} \caption[]{GPT/GT Isolated Star Positive} \label{fig:GPT-GT_isolated_positive} \end{figure} \begin{figure}[t] \centering \fbox{\includegraphics[width=0.9\linewidth]{figures/GPT-GT_isolated_negative}} \caption[]{GPT/GT Isolated Star Negative} \label{fig:GPT-GT_isolated_negative} \end{figure} We believe this works because each star rating represents a distinct linguistic population. On one end of the spectrum are the disgruntled, often using language that focuses on poor service, such as in this extract: \begin{displayquote} \textit{\enquote{We were basically seated at a table by the host, then told quite rudely by the server that we couldn't sit there. Then we proceeded to watch as the host and server fought over whether we could sit there or not.}} \end{displayquote} At the other end of the spectrum is the 5-star group who have had a perfect meal with great service. These reviews are overwhelmingly classified as positive. We can show the relationship of these emotional terms to stars from a different perspective by using the Linguistic Inquiry and Word Count (LIWC) Dictionary~\cite{pennebaker2001linguistic}, which calculates the representation percentages of certain sets of words. One set of terms in the LIWC has to do with affect, ranging from positive (e.g. happy, pretty, good) to negative (e.g. hate, worthless, enemy).
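A toy version of such a word-category count can be sketched as follows; the word lists here are illustrative stand-ins, not the actual LIWC dictionary:

```python
def affect_percentages(text, pos_words, neg_words):
    """Return the share of tokens in each affect category, in percent of all tokens."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    n = len(tokens) or 1
    pos = sum(t in pos_words for t in tokens)
    neg = sum(t in neg_words for t in tokens)
    return 100.0 * pos / n, 100.0 * neg / n

# Illustrative stand-in lexicons (NOT the LIWC word lists)
POS = {"happy", "pretty", "good", "great"}
NEG = {"hate", "worthless", "enemy", "terrible"}
```

Applied to a whole corpus, the same ratio of category hits to total word count yields percentages of the kind reported below.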
We can see in Table~\ref{tab:liwc_affect} how dissimilar the one and five star groups are: \begin{table}[t] \small \centering \begin{tabular}{lrr} \toprule Affect & Pos Emo & Neg Emo \\ \midrule GPT 1 star & 2.869\% & 1.461\% \\ GT 1 star & 2.710\% & 1.936\% \\ GPT 5 star & 7.241\% & 0.358\% \\ GT 5 star & 8.277\% & 0.572\% \\ \bottomrule \end{tabular} \caption{LIWC Affect Terms for GT and GPT reviews} \label{tab:liwc_affect} \end{table} These clusterings and patterns of usage allow the GPT to effectively learn the linguistic behaviors of the population so that it can accurately generate novel text that has the same sentiment patterns. And as we will see in the next section, these models are able to accurately \textit{extrapolate} text in response to prompts that do not appear in the training data, a critical element if we are to be able to use these models for polling and survey purposes. \subsection{Extrapolation} In our ground truth Yelp dataset, some queries result in very few reviews. When looking at only reviews with the keywords \enquote{some vegetarian options} or \enquote{no vegetarian options}, there are only a handful of reviews or, in the most extreme cases, \textbf{no} related reviews. We can see this in the sample from the Yelp data in Tables~\ref{tab:veg_gt_pos_counts} and \ref{tab:veg_gt_neg_counts}.
\begin{table}[t] \small \centering \begin{tabular}{lrrrr} \toprule POSITIVE & no & some & several & many \\ \midrule 1 star & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} \\ 2 star & \textbf{0} & 1 & \textbf{0} & \textbf{0} \\ 3 star & 4 & 8 & 7 & 21 \\ 4 star & 6 & 31 & 29 & 90 \\ 5 star & 6 & 29 & 27 & 100 \\ \bottomrule \end{tabular} \caption{Vegetarian ground truth positive review counts} \label{tab:veg_gt_pos_counts} \end{table} \begin{table}[t] \small \centering \begin{tabular}{lrrrr} \toprule NEGATIVE & no & some & several & many \\ \midrule 1 star & 21 & 1 & 1 & 6 \\ 2 star & 24 & 6 & 8 & 18 \\ 3 star & 13 & 6 & 7 & 31 \\ 4 star & 1 & 2 & 2 & 8 \\ 5 star & \textbf{0} & 2 & \textbf{0} & 3 \\ \bottomrule \end{tabular} \caption{Vegetarian ground truth negative review counts} \label{tab:veg_gt_neg_counts} \end{table} This problem often occurs with datasets where questions may not have been asked, conditions have changed (such as the rapidly evolving information space surrounding COVID-19), or where the structure of the data makes certain responses unlikely. This makes obtaining information about these cases difficult or impossible with traditional methods. Extrapolation can address this problem by letting the model extrapolate from \enquote{adjacent} information to generate relevant, zero-shot data, as we saw in the Floober example in the introduction. To demonstrate this, we trained a new set of isolated star models on 50k corpora from which all reviews containing the phrase \enquote{vegetarian options} had been \textit{removed}, or masked. These models then generated \textit{extrapolated} responses to the \enquote{no/some/several/many} prompts. We then compared their behavior against that of the \textit{interpolating} model, which had been trained on corpora containing \enquote{vegetarian options} reviews, and against a baseline of statistical samples drawn from the known ground truth of 97 three-star reviews in our \enquote{no/some/several/many} set.
We chose baseline sample sizes of 8, 18, and 26 because those were the average numbers of negative, positive, and combined reviews in our samples. Each sample (baseline and GPT) was randomly sampled 1,000 times and averaged for subsequent calculations. Because the GPT is able to produce unlimited reviews, we were able to use a sample size of 1,000 for these synthetic reviews. We derived the l2 distance from the POS/NEG percentages calculated from the known ground truth (40.25\% / 59.74\%) for the GPT and baseline versions, which is shown in Table~\ref{tab:gt_vs_extrap_vs_baseline}. We can clearly see that the baseline(8) has the highest l2 error (20.01\%), while the GPT trained on the unmasked corpus has the lowest. Remarkably, the masked, \textit{extrapolating} GPT model has the second-lowest error, and has less than half the error of the baseline(26) evaluation. \begin{table}[t] \small \centering \begin{tabular}{llrrr} \toprule & Pos \% & Neg \% & Error l2 \\ \midrule Ground Truth & 40.25\% & 59.74\% & 0.00\% \\ GPT & 40.71\% & 59.28\% & 1.89\% \\ GPT (no veg) & 37.58\% & 62.41\% & 3.88\% \\ baseline(26) & 39.38\% & 60.12\% & 9.16\% \\ baseline(18) & 40.55\% & 59.44\% & 11.78\% \\ baseline(8) & 39.87\% & 60.12\% & 20.01\% \\ \bottomrule \end{tabular} \caption{Ground Truth vs. Extrapolation vs. Baseline} \label{tab:gt_vs_extrap_vs_baseline} \end{table} This is important because it demonstrates that the GPT (no veg) model is able to generate text related to vegetarian options \textit{despite being trained on data with no reviews related to vegetarian options}. These results are substantially better than the baseline even when the baseline includes over 25\% of the existing vegetarian samples. The model's ability to generate matching sentiment reviews is based purely on extrapolating from the rest of the reviews it was trained on.
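The repeated-subsampling baseline can be sketched as follows (the helper itself is ours; sample sizes and sentiment labels follow the text):

```python
import math
import random

def l2_error(labels, gt_pos_frac, sample_size, trials=1000, seed=0):
    """Average l2 distance between the POS/NEG split of random subsamples
    and the ground-truth split (gt_pos_frac, 1 - gt_pos_frac)."""
    rng = random.Random(seed)
    err = 0.0
    for _ in range(trials):
        sample = rng.sample(labels, sample_size)
        p = sum(1 for x in sample if x == "POSITIVE") / sample_size
        # l2 distance over the two-component (POS, NEG) vector
        err += math.hypot(p - gt_pos_frac, (1 - p) - (1 - gt_pos_frac))
    return err / trials
```

As expected, smaller subsamples give a larger average error, which mirrors the ordering baseline(8) > baseline(18) > baseline(26) in the table.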
These results mean that we can use language models such as the GPT to effectively learn the linguistic behaviors of the population and generate accurate responses to questions that have never been asked of the original group but are \textit{latent} in the weights of the model. This technique creates a powerful new capability for polling and survey purposes.
\chapter[Glassy features and complex dynamics in ecological systems]{Glassy features and complex dynamics in ecological systems\\ \label{Ch 27}} \author[A. Altieri]{A. Altieri\footnote{ada.altieri@u-paris.fr}} \address{Laboratoire Matière et Systèmes Complexes (MSC), Université Paris Cité\\ CNRS, 75013 Paris, France } \vspace{0.3cm} \begin{abstract} In this chapter, I will review some of the most widely used models in theoretical ecology, along with appealing reformulations and recent results in terms of diversity, stability and functioning of large well-mixed ecological communities. \end{abstract} \body \section{Introduction} \label{sec1} Emergent properties of many-species ecological communities have a variety of applications: for example, the activity of the gut microbiota is believed to be crucial for human health; sustaining natural diversity is essential for services such as food supply, pollination and climate regulation. There is growing awareness that human activity is causing irreversible species extinction and ecosystem simplifications, generally considered a \emph{global biodiversity crisis}. The Earth Microbiome Project\footnote{\url{https://earthmicrobiome.org/}} and the Human Microbiome Project\footnote{\url{https://www.hmpdacc.org/}} are designed in this direction, aiming to identify and characterize all diverse microorganisms and their relationship to ecological stability and disease development. The incredible biodiversity that characterizes natural ecosystems has long attracted ecologists but has more recently also started to gather interest among theoretical physicists. From a theoretical perspective, modeling the interactions between many different components – from bacteria in a microbial community to plant-pollinator interactions in a forest to starling murmurations – can become extremely complicated.
A single, well-established theory allowing one to bridge the gap between empirical data made available from an enormous number of controlled experiments and more sophisticated techniques is nevertheless still missing. In addition to the need for a general criterion that would enable one to discriminate between \emph{niche theory} -- for which each niche is occupied by a single species according to the competitive exclusion principle \cite{Hardin1960} -- and \emph{neutral models} -- in which differences are only attributed to stochasticity -- other crucial questions come to the fore and play an increasingly key role: i) relaxation either to a single fixed point or a multiple fixed point regime; ii) definition of ecosystem diversity, \emph{i.e.} the number of surviving species; iii) typical behavior of fluctuations and of the functional response under the effect of external perturbations; iv) investigation of the interplay between stochastic and deterministic processes and how community diversity and variability are related to them; v) emergence of possible chaotic dynamics and limit cycles to be experimentally measured. In this chapter, we aim to present different statistical physics frameworks that rely on advanced spin-glass techniques, for which Giorgio Parisi has been a pioneer as well as a beacon outlining the right direction in a multitude of complex scenarios. \subsection{More is Different} Theory has long predicted that large complex systems are intrinsically unstable \cite{May1972, May1976}, which is a long-standing puzzle given the complexity observed in Nature. In recent years, there has nevertheless been growing interest in systems composed of an enormous number of species interacting in myriad ways in very complex environments.
Such systems can thus be rephrased through the prism of statistical physics using sophisticated concepts and powerful methods \cite{Faust2012, Fisher2013, Kessler2015, Bunin2017, Altieri2019, Servan2018, Marsland2020, pearce2020stabilization, wu2021understanding}. In a bottom-up approach, the detailed structure of individual interactions and how such coefficients scale with the system size is unknown, since it is particularly difficult to infer in diversity-rich ecosystems. Hence, to tackle the staggering complexity of large ecological communities, one can follow a long tradition rooted in Robert May's seminal works \cite{May1972, May1976} and assume the interaction matrix to be random. May considered a \emph{community matrix} $H$ of size $S \times S$, $S$ being the total number of species in the pool and $H_{ij}$ standing for the effect of species $j$ on $i$ around a feasible fixed point. In this picture, the self-regulation term corresponding to diagonal elements is fixed to $-1$, whereas each off-diagonal element is nonzero with probability $C$ and, when nonzero, drawn from a random distribution with zero mean and variance $\sigma^2$ -- sometimes referred to as the heterogeneity parameter. According to May's conjecture, if $\sigma \sqrt{S C} >1$ the system is inevitably unstable under infinitesimally small perturbations and cannot persist. Hence, as a system becomes more diverse (controlled by the number of species $S$ in the pool), more connected (in terms of the connectivity $C$), and more strongly interacting (tuned by $\sigma$), a transition to instability occurs with a probability of persisting close to zero. In the large $S$ limit, random matrix theory comes into play: the eigenvalues of the community (or Jacobian) matrix are contained inside a circle of radius $\sigma \sqrt{SC}$ centered at $-1$ in the complex plane.
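May's criterion is straightforward to check numerically; the following sketch (with illustrative parameters of our choosing) draws a random community matrix and tests linear stability:

```python
import numpy as np

def community_matrix(S, C, sigma, rng):
    """Random community matrix a la May: self-regulation -1 on the diagonal;
    each off-diagonal entry is nonzero with probability C and then Gaussian
    with zero mean and standard deviation sigma."""
    H = rng.normal(0.0, sigma, (S, S)) * (rng.random((S, S)) < C)
    np.fill_diagonal(H, -1.0)
    return H

def is_stable(H):
    """Linear stability: every eigenvalue must have negative real part."""
    return np.linalg.eigvals(H).real.max() < 0.0

rng = np.random.default_rng(1)
S, C = 400, 0.5
# May's criterion: instability sets in when sigma * sqrt(S * C) > 1
sigma_stable = 0.5 / np.sqrt(S * C)
sigma_unstable = 2.0 / np.sqrt(S * C)
```

For these parameters the eigenvalue disk of radius $0.5$ around $-1$ stays in the left half-plane, while the radius-$2$ disk crosses into the right half-plane.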
Therefore, the system's stability is conditional on the fact that the resulting circle is located in the left half-plane with all eigenvalues having negative real parts. To provide general criteria that could encompass all diversified cases, one can then play with the interaction matrix by changing the strength and mutual sign. A suitable reshuffling of local interactions clearly raises a number of questions on how different combinations of them affect the stability of the overall community and what would be a good trade-off (weak/strong, mutualistic/competitive) to avoid, for instance, destabilization of a prey-predator chain if weak interactions are preponderant \cite{Allesina2012}. \section{High-dimensional MacArthur model at the edge of stability} In the following, we shall focus on mathematical models that offer a suitable platform to understand ecosystems' behavior: giving some input information, predictions on species survival, responses to external perturbations, and the emergence of robust structures can be extracted as an output. We will start with a very influential one, the MacArthur resource-consumer model, originally designed to shape competition among $S$ different species for $N$ non-interacting resources \cite{Macarthur1970}. Notably, if the dynamics describing resource evolution is much faster than the populations' one, the former can be integrated out leading to the generalized Lotka-Volterra equations\cite{Lotka1920, Volterra1927}. The random Lotka-Volterra model will thus represent the second core of this chapter, through which we will figure out how to overcome certain inherent limitations of such a resource-consumer model. By taking advantage of the definition of self-averaging quantities, MacArthur's model has been recently reformulated as a problem of statistical physics of disordered systems and then solved analytically in the limit of an infinite number of species and resources \cite{Tikhonov2017}. 
We will especially use it to probe several underlying connections between the phenomenology of jamming \cite{Altieri2019book} and criticality in large ecosystems. The dynamics of the model is defined by linear differential equations for $n_\mu$ individuals, where the index $\mu=1,..., S$ denotes the different species: \begin{equation} \frac{d n_\mu}{dt} \propto n_\mu \Delta_\mu \ , \label{dn-mu} \end{equation} and $\Delta_\mu$ is the \emph{resource surplus}. As far as one is concerned with equilibrium, the proportionality factor in the dynamical equation above can be safely neglected. The equilibrium condition from Eq. (\ref{dn-mu}) leads to two possibilities: i) $n_\mu >0$ $\&$ $\Delta_\mu=0$ (survival); ii) $n_\mu =0$ $\&$ $\Delta_\mu<0$ (extinction)\footnote{The case $\Delta_\mu >0$ is actually forbidden by the model definition.}. The variables $\Delta_\mu$ depend then on the availabilities of resources $h_i$ (with $i=1,..., N$) and the \emph{metabolic strategies}, $\sigma_{\mu i}$'s, by which species demand and possibly meet their requirement $\chi_\mu$:\begin{equation} \Delta_\mu=\sum_{i=1}^{N} \sigma_{\mu i} h_i -\chi_\mu \ . \label{Delta} \end{equation} For each species $\mu$, the metabolic strategy represents a random binary vector whose components $\sigma_{\mu i}$ are extracted from a distribution that takes values $1$ and $0$ with probabilities $p$ and $1-p$ respectively. The parameter $p$ determines whether the species in the ecosystem are either specialists ($p \ll 1$), each requiring a small number of well-defined metabolites necessary for their survival, or generalists ($p \sim 1$), meaning that many different metabolites can be appropriate for their needs. In turn, individuals $n_\mu$ depend on the availability of resources, $h_i$, according to a feedback loop mechanism, which is essentially modulated by the efficiencies through which species exploit resources. 
By defining a total demand, $T_i=\sum_\mu n_\mu \sigma_{\mu i}$, the availabilities $h_i$ can simply be expressed as a decreasing function of it. For instance, one can consider $h_i= \frac{R_i}{\sum_\mu n_{\mu} \sigma_{\mu i}}$ where $R_i$ is the total supply of resource $i$, whose average is constant whereas its variance, $\delta R^2$, can be varied and used to reproduce the resulting phase diagram. Over the years several mechanisms have been put forward to explain the fact that complex -- and in particular living -- systems tend to be poised at the edge of stability: edge of chaos \cite{Kauffman1991}, self-organized criticality \cite{Bak2013nature}, self-organized instability, scale-free behavior, etc. Here we propose an example that leverages an alternative principle\cite{Altieri2019}. It is based on recasting the MacArthur model in terms of a Constraint Satisfaction Problem (CSP). Hence, in analogy with a standard CSP, above the hyperplane $\vec{h} \cdot \vec{\sigma}_\mu = \chi_\mu$ species are able to survive and multiply; conversely, if $\vec{h} \cdot \vec{\sigma}_{\mu} < \chi_\mu$, the sustainability of the species' pool is no longer guaranteed. All $\vec{h}$ such that $\vec{h} \cdot \vec{\sigma}_\mu < \chi_\mu$ define the so-called \emph{unsustainable region}, for each species $\mu$. One can now re-express the requirement $\chi_\mu$ via a random variable \emph{i.e.} $\chi_\mu=\sum_i \sigma_{\mu i} +\epsilon x_\mu$ \cite{Tikhonov2017}, where the parameter $\epsilon$ plays the role of an infinitesimal cost scatter and $x_\mu$ is a zero-mean and unit-variance Gaussian variable. It has been shown that, in the $\epsilon \rightarrow 0$ limit, the model undergoes a phase transition between two qualitatively different regimes: i) a \emph{shielded phase}; ii) a \emph{vulnerable phase} \cite{Tikhonov2017}. In the shielded phase, $\mathcal{S}$, a collective behavior emerges with no influence of external conditions.
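The feedback between demands, availabilities, and resource surpluses described above can be written down in a few lines; all numbers in this toy configuration are ours, chosen so that both species sit exactly at the survival condition $\Delta_\mu=0$:

```python
import numpy as np

def availabilities(sigma, n, R):
    """h_i = R_i / T_i, with total demand T_i = sum_mu n_mu sigma_{mu i}."""
    return R / (n @ sigma)

def resource_surplus(sigma, h, chi):
    """Delta_mu = sum_i sigma_{mu i} h_i - chi_mu."""
    return sigma @ h - chi

# Toy configuration (all values illustrative): species 1 consumes
# resource 1 only; species 2 consumes both resources.
sigma = np.array([[1.0, 0.0],
                  [1.0, 1.0]])   # metabolic strategies sigma_{mu i}
n = np.array([1.0, 2.0])         # abundances n_mu
R = np.array([3.0, 2.0])         # resource supplies R_i
chi = np.array([1.0, 2.0])       # requirements chi_mu
h = availabilities(sigma, n, R)          # demands [3, 2] -> h = [1, 1]
Delta = resource_surplus(sigma, h, chi)  # [0, 0]: both species survive
```

In this configuration both constraints are saturated, \emph{i.e.} both species are in the survival branch $n_\mu>0$, $\Delta_\mu=0$.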
If the availabilities are set to one in such a way that neither specialists nor generalists are favored, and a sufficiently small perturbation is applied to the system, a feedback mechanism between $h_i$ and $n_\mu$ contributes to adjusting mutual species' abundance and to keeping the availabilities almost unchanged, $\forall i$. The situation is quite different in the \emph{vulnerable phase}, $\mathcal{V}$, where species cannot self-sustain and turn out to be strongly affected by changes and improvements in the immediate environment. To characterize the stability of a general competing system against perturbations in a more rigorous way, one can introduce a Lyapunov function and compute the density of fluctuations in the two phases. The positive or vanishing behavior of such a function, together with its time derivative, provides information on whether the equilibrium is unstable, locally asymptotically stable, or globally asymptotically stable. In this specific case, the Lyapunov function reads \begin{equation} F(\lbrace n_\mu \rbrace)=\sum_{i} R_i \log \left(\sum_\mu n_\mu \sigma_{\mu i} \right) -\sum_\mu n_\mu \chi_\mu \ , \label{lyapunov} \end{equation} which is bounded from above, hence guaranteeing that an equilibrium always exists. By differentiating Eq. (\ref{lyapunov}) to the second order, one eventually obtains \begin{equation} \frac{d ^2 F}{d n_\mu d n_\nu}=-\sum_{i} \sigma_{\mu i} \sigma_{\nu i} \frac{R_i}{(\sum_{\rho} n_{\rho} \sigma_{\rho i} )^2}=-\sum_{i} \sigma_{\mu i} \sigma_{\nu i} \left( \frac{h_i^2}{R_i} \right) \ . \label{Hessian_n} \end{equation} In the $\mathcal{S}$ phase, \emph{i.e.} for $h_i \simeq 1$, this expression leads to a modified Wishart matrix whose eigenvalue distribution is defined by a Marchenko-Pastur law \cite{MarchenkoP} in the limit of a large number of species and resources.
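The Marchenko-Pastur structure of this spectrum is easy to probe numerically; a minimal sketch follows, where the matrix size and the square aspect ratio are our illustrative choices, the latter mirroring the gapless case:

```python
import numpy as np

# Square (aspect ratio q = 1) Wishart matrix: the analogue of the isostatic
# regime, where the Marchenko-Pastur edges are lambda_- = 0 and lambda_+ = 4.
rng = np.random.default_rng(0)
N = 1000
X = rng.normal(size=(N, N)) / np.sqrt(N)   # Gaussian entries, variance 1/N
eigs = np.linalg.eigvalsh(X.T @ X)         # symmetric, positive semidefinite
lam_minus = (np.sqrt(1.0) - 1.0) ** 2      # = 0
lam_plus = (np.sqrt(1.0) + 1.0) ** 2       # = 4
```

Numerically, `eigs.min()` sits close to the lower edge at zero (gapless spectrum) while `eigs.max()` approaches $4$; for an aspect ratio $q<1$ the lower edge $(1-\sqrt{q})^2$ opens a gap, mirroring the $\mathcal{V}$ phase.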
Accordingly, the resulting spectral density reads: \begin{equation} \rho(\lambda)= \frac{1}{2 \pi} \frac{\sqrt{(\lambda-\lambda_{-})(\lambda_{+}-\lambda)}}{\lambda} \ , \end{equation} where the upper and lower edges of the spectrum are $\lambda_{\pm}= (\sqrt{[1]}\pm 1)^2$. The quantity $[1]$ denotes the fraction of active species at criticality or, borrowing the CSP jargon, the fraction of \emph{satiated constraints} for which $\Delta_\mu=0$. In analogy with the so-called SAT/UNSAT transition, we can associate the $\mathcal{V}$ phase to a \emph{hypostatic regime}, with a smaller number of saturated constraints with respect to the total number of variables \cite{Wyart2005, Franz2016}: this case corresponds to a gapped spectral density without any signature of an emerging criticality. Conversely, the $\mathcal{S}$ phase would correspond to an \emph{isostatic regime} -- where the number of saturated constraints equals the overall space dimension -- and a gapless spectrum for the distribution of eigenvalues appears. Because the lower edge of the spectrum, $\lambda_{-}$, tends to zero upon approaching the $\mathcal{V}/\mathcal{S}$ transition line, the eigenvalue density contribution in the $\mathcal{S}$ phase becomes:\begin{equation} \rho(\lambda) \sim \sqrt{\left(4-\lambda\right)/{\lambda}} \ . \end{equation} A vanishing lower edge is in turn related to the appearance of a zero mode in the Hessian matrix of the replicated free energy (so-called \emph{replicon eigenvalue}): this translates into a diverging spin-glass susceptibility \cite{MPV, DeDominicis2006} as further evidence of being close to a critical point. A large response function can be interpreted as the fact that -- rather than being governed by a single leader -- the system tends to self-organize and respond collectively to external perturbations \cite{Mora2011}. It is worth noticing that since the Lyapunov function in Eq.
(\ref{lyapunov}) is convex everywhere, a replica-symmetry-broken regime cannot occur. The most likely scenario taking place here is akin to the phenomenology of a \emph{random linear programming} problem\cite{Franz2016}. Even though replica symmetry continues to hold, a marginally stable regime takes place for some specific values of the control parameters. The advantage of introducing a high-dimensional version of the MacArthur model is that it provides an appealing and easily-defined reference model although, in its current form, it lends itself to describing only competitive interactions. To suitably address a wider spectrum of ecological scenarios, the random version of the Lotka-Volterra model will be presented in the following, accounting for both competitive and cooperative interactions. \section{The generalized random Lotka-Volterra model} A wide range of phenomena in population dynamics, including predation, mutualism, and resource-consumer interactions, can be reasonably well captured by a much simpler reference model: the disordered Lotka-Volterra model, whose typical features emerge upon tuning a few control (universal) parameters. Moreover, it not only reproduces phenomenologically multiple facets of well-mixed ecosystems \cite{Barbier2018} but also turns out to be of great interest in interdisciplinary domains such as genetics, epidemiology\cite{holt1985infectious}, and evolutionary game theory \cite{Galla2013, Sanders2018}, up to the modelling of complex financial markets\cite{Sprott2004, Moran2019}.
The Lotka-Volterra equations describe the evolution of $S$ species subject to random interactions $\alpha_{ij}$\cite{Kessler2015,Bunin2017}: \begin{equation} \frac{d N_i}{dt}= N_i \left[1-N_i -\sum_{j, (j \neq i)} \alpha_{ij} N_j \right] +\sqrt{N_i} \eta_i(t) +\lambda_i \ , \label{dynamical_eqT} \end{equation} where $N_i(t)$ is the relative abundance of species $i$ (with $i=1,...,S$) at time $t$ meaning that the population is normalized with respect to the total number of individuals $N_\text{ind}$ that would be present in the absence of interaction. The elements of the random matrix $\alpha_{ij}$ are independent and identically distributed with mean $\langle \alpha_{ij}\rangle =\mu/S$, variance $\langle \alpha_{ij}^2\rangle_c=\sigma^2/S$ and $ \langle \alpha_{ij} \alpha_{ji} \rangle_c=\gamma \langle \alpha_{ij}^2\rangle_c$, where the subscript $_c$ stands for the connected part of the correlation. The parameter $\gamma$ ranges from $-1$ (completely antisymmetric case to which prey-predator interactions belong) to $1$ (fully symmetric, for which a Lyapunov function can be safely defined). The demographic noise contribution, accounting for deaths, births, and other unpredictable events, is modelled by $\eta_i(t)$, a Gaussian variable with zero mean and variance $\langle \eta_i(t)\eta_j(t')\rangle=2T \delta_{ij} \delta(t-t')$, whose amplitude $T$ is inversely proportional to the total number of individuals $N_\text{ind}$. Such a multiplicative noise term allows us to investigate the effect of demographic stochasticity in a continuous setting \cite{Domokos2004discrete, Rogers2012, weissmann_simulation_2018}: the larger the global population, the smaller the strength $T$ of the demographic noise. Then, to guarantee the probability distribution to be integrable at small abundances, we need to introduce a small but finite immigration rate, which will be assumed to be constant over species, \emph{i.e.} $\lambda_i=\lambda$. 
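A minimal sketch of these dynamics, restricted to the noise-free ($T=0$), fully symmetric ($\gamma=1$) case with illustrative parameters of our choosing, can be integrated directly:

```python
import numpy as np

def simulate_lv(S, mu, sigma, lam=1e-8, dt=1e-2, steps=5_000, seed=0):
    """Euler integration of the noise-free random Lotka-Volterra dynamics
    dN_i/dt = N_i (1 - N_i - sum_j alpha_ij N_j) + lam,
    with symmetric Gaussian couplings of mean mu/S and variance sigma^2/S.
    All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    alpha = np.zeros((S, S))
    iu = np.triu_indices(S, k=1)
    alpha[iu] = rng.normal(mu / S, sigma / np.sqrt(S), iu[0].size)
    alpha += alpha.T                       # gamma = 1: alpha_ij = alpha_ji
    N = rng.uniform(0.1, 1.0, S)
    for _ in range(steps):
        N += dt * (N * (1.0 - N - alpha @ N) + lam)
        N = np.maximum(N, lam)             # reflecting wall at the immigration scale
    return N
```

For weak heterogeneity (e.g. $\mu=4$, $\sigma=0.2$) the dynamics relaxes to the unique fixed point with mean abundance close to $1/(1+\mu)$; increasing $\sigma$ drives the system towards the multiple equilibria regime discussed below.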
This is a smart way to avoid an absorbing boundary at $N_i=0$ due to the introduction of a finite demographic noise, which in turn would push a finite fraction of species to zero\footnote{With no demographic noise and no immigration, a similar model was proposed in the nineties by Biscari and Parisi \cite{Biscari1995} and analyzed by studying the stability of the replica symmetric solution (single fixed point regime).}. In the case of random symmetric interactions, the stochastic process induced by Eq. (\ref{dynamical_eqT}) admits an equilibrium-like stationary distribution \cite{Biroli2018_eco, Altieri2021} with associated Hamiltonian: \begin{equation} H= -\sum_i \left(N_i-\frac{N_i^2}{2}\right)+\sum_{i<j}\alpha_{ij}N_iN_j+ \sum_i [T\ln N_i - \ln \theta(N_i-\lambda)] \ . \label{Hamiltonian_0} \end{equation} The penultimate term is due to the demographic noise\footnote{The parameter $T$ plays the role of the temperature in a statistical mechanics setting. The mapping can be easily established by writing the corresponding Fokker-Planck equation with a white Gaussian noise.} whereas the counterbalancing role of the immigration is formally modelled by the Heaviside function $\theta(x)$, which corresponds to imposing a reflecting wall at $N_i=\lambda$. \subsection{Glassy phases and out-of-equilibrium dynamics} Adding a finite demographic noise not only allows us to get a more general picture but also to properly characterize the resulting phase diagram -- see Fig. 2 -- connecting peculiar properties of each regime to the ones of equilibria. \begin{figure}[h] \center \includegraphics[scale=0.36]{phase_diagram_log.pdf} \caption{Phase diagram showing how the variation of the demographic noise strength, $T$, and the heterogeneity of interactions, $\sigma$, can lead to three different phases.
In particular: i) a single equilibrium phase where the configurational landscape is purely convex; ii) a multiple equilibria regime, which is characterized by a $1$RSB stable solution and an exponential number of locally stable equilibria; iii) a \emph{Gardner phase}, which turns out to be associated with a hierarchical organization of the equilibria in the free-energy landscape. Figure taken from \cite{Altieri2021}.} \end{figure} \begin{figure}[h] \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=0.87\linewidth]{inplot_correlation} \label{fig:test1} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=0.85\linewidth]{1RSB_aging} \label{fig:test2} \end{minipage} \caption{Numerical simulations based on DMFT. Two-time correlator $C(t,t')$ in the single-equilibrium phase (RS, on the left) compared with the same correlator in the multiple equilibria phase (1RSB, on the right) plotted for different $t'$ and $S=500$. The dashed lines correspond to the values of the overlap parameters, which are obtained by the replica method. The inset in the left plot highlights a divergence in the decorrelation time as $T \rightarrow T_\text{$1$RSB}$, the critical temperature associated with an instability of the RS solution. Figures taken from \cite{Altieri2021}.} \end{figure} Then one may wonder how all these outcomes are expected to change when asymmetric interactions are also taken into account and which strategy proves to be the most appropriate in this case. Non-symmetric interactions strongly complicate the analysis since they correspond to plugging non-conservative forces in the dynamics thus violating the Fluctuation-Dissipation theorem (FDT) and bringing the system out of equilibrium. 
Since it is no longer possible to define a Hamiltonian to be minimized and analyzed in terms of harmonic fluctuations around each of its minima, the cavity method \cite{MPV, Mezard2009} and Dynamical Mean-Field Theory \cite{Roy2019numerical, Altieri2020dyn} come into play. The latter, in particular, allows us to map the multi-species problem onto a single-species stochastic process, which eventually involves a time-delayed friction and a colored noise whose features have to be determined self-consistently. In other words, the two-time correlation $C(t,t')$ and response $R(t,t')$ functions are fixed self-consistently, given the probability distribution associated with the stochastic process and the distribution of the random interactions. An analysis similar to the one illustrated for the symmetric case in Fig.~2 can then be performed. Without demographic fluctuations, increasing the variability of the interactions $\sigma$ destabilizes the single-fixed-point regime and eventually results in chaotic phases, as for neural networks and spin-glass models with asymmetric couplings. The introduction of a positive immigration rate stabilizes the chaotic dynamics -- endowing it with an indefinitely long lifetime -- in correspondence with what we referred to as the \emph{multiple equilibria regime} in the purely symmetric case. However, as soon as the immigration rate is set to zero, the chaotic regime is no longer stable \cite{Roy2020, pearce2020stabilization} and is replaced by slower and slower dynamics (\emph{aging}).
\subsection{Non-logistic growth functions and pseudo-gap distributions}
The Lotka-Volterra equations analyzed so far in the large-$S$ limit allow for analytical progress in a very broad class of problems. In particular, by slightly modifying the dynamical Eq.
(\ref{dynamical_eqT}) through the introduction of a higher-order single-species potential, one can also investigate the so-called \emph{Allee effect}\cite{Allee1926}, which describes a positive correlation between the mean individual fitness (or per-capita growth rate) and the population density over some finite interval\cite{Gascoigne2004, Kramer2018}. This positive-feedback mechanism, named after the zoologist Warder C. Allee, essentially relies on the observation that in many species under-crowding, and not only competition, contributes to limiting population growth. The Allee effect is called \emph{strong} if there exists an initial population threshold, in the sense that the species needs a sufficiently large initial population to avoid extinction, whereas it is called \emph{weak} if no such threshold exists. Even in the second case, intra-specific cooperation leads to an initial increase of the growth rate as the population increases (see Fig.~3).
\begin{figure}[h]
\center
\includegraphics[scale=0.28]{Allee_ws.pdf}
\caption{Sketch of the strong Allee effect (in red) compared to a weak Allee effect (in orange). In the former, the finite threshold corresponds to an unstable fixed point (empty black circle); in the latter, no population threshold exists.}
\end{figure}
In the same spirit as before, one can take advantage of a thermodynamic analysis to shed light on the resulting phase diagram upon tuning the strength of the random interactions and the demographic noise. Remarkable differences emerge with respect to the logistic-growth Lotka-Volterra case \cite{Altieri2022}. First, the number of states below the critical transition line is no longer exponential in the system size, nor are the states separated by extensive barriers, exactly as happens for the equilibrium states of mean-field spin glasses (\emph{e.g.} the Sherrington-Kirkpatrick model \cite{MPV}).
Furthermore, as soon as one considers a non-linear functional response of the species abundances, a pseudo-gap appears in the distribution of the local curvatures of the single-species effective potential\footnote{With an exponent $\alpha \ge 1$.}, $P(V^{''}_\text{eff}(N^{*})) \sim \vert V^{''}_\text{eff}(N^{*}) \vert^{\alpha}$, as a clear signature of a marginal low-demographic-noise (low-temperature) phase\cite{Altieri2022}. This outcome nicely generalizes to a complex ecological model the pseudo-gap distribution found for instantaneous local fields in mean-field spin glasses, previously obtained only for discrete degrees of freedom\cite{Palmer1979}.
\section{Conclusions and perspectives}
Throughout the different sections of this short report, I have mostly discussed analytical outcomes made possible by the use of mean-field limits. These pages are therefore intended as a tribute to Giorgio Parisi, a way to thank him for the innovative and insightful techniques I have been learning over the years, which have been successfully applied to such diverse and interdisciplinary contexts. As for future research, an interesting direction would be the investigation of spatially extended models, either in a completely-connected topology where multiple patches (locations in space) are coupled by diffusion or on a sparse network with finite connectivity on each site. On the one hand, this metapopulation scenario, as originally proposed by Levins\cite{Hanski1998, Hanski2000, Etienne2002}, would allow for a more tangible comparison with real data, starting for instance with populations of small mammals and insects\cite{Elmhagen2001}; on the other hand, new appealing phenomena -- such as pattern formation, traveling waves and activity fronts \cite{Curatolo2020, Manna2021} -- are expected to appear. A rigorous theoretical analysis with an increasingly large number of species and, possibly, beyond pairwise interactions is still missing.
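As a purely schematic illustration of this metapopulation direction -- not an analysis carried out in the works reviewed here -- one can couple several copies ("patches") of the deterministic Lotka-Volterra dynamics by diffusion toward the patch average. The all-to-all patch topology, the zero-mean couplings, and every parameter below are assumptions made only to sketch the setting.

```python
import numpy as np

rng = np.random.default_rng(1)

S, M = 100, 4     # species and patches (illustrative sizes)
D = 0.1           # migration (diffusion) rate between patches
sigma = 0.2       # heterogeneity of the interactions
dt, n_steps = 1e-3, 10000

# One shared symmetric interaction matrix, zero mean for simplicity
a = rng.normal(0.0, sigma / np.sqrt(S), size=(S, S))
alpha = (a + a.T) / 2.0
np.fill_diagonal(alpha, 0.0)

# Completely-connected topology: each patch exchanges migrants with all others,
# which amounts to relaxation toward the patch-averaged abundances
N = rng.uniform(0.5, 1.5, size=(M, S))
for _ in range(n_steps):
    local = N * (1.0 - N - N @ alpha.T)                  # within-patch LV dynamics
    migration = D * (N.mean(axis=0, keepdims=True) - N)  # diffusive coupling
    N = np.clip(N + (local + migration) * dt, 0.0, None)

# In this deterministic, fully-connected toy setting, diffusion homogenizes
# the patches: each species' abundance becomes nearly patch-independent
spread = np.abs(N - N.mean(axis=0)).max()
print(spread)
```

With demographic or environmental noise, sparse patch networks, or patch-dependent couplings, this trivial homogenization is expected to compete with the pattern-forming and front-propagating phenomena mentioned above, which is precisely what makes the spatial problem interesting.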
A parallel line of research would concern an in-depth analysis of the role of different kinds of fluctuations -- demographic and environmental ones that might violate Detailed Balance – and their interplay with the deterministic dynamics. Such a classification will drive a better comparison with observational data, in particular for reproducing Species Abundance Distributions (SAD) of large ecological communities as well as for achieving a deeper understanding of the formal expression of functional responses given by local perturbations. This information would be extremely useful in the attempt to recover power-law and log-normal distributions for the species abundances that have not yet been identified in models accounting only for demographic fluctuations and symmetric interactions \cite{Lorenzana2022}. \newpage \section*{Acknowledgments} I thank Matthieu Barbier, Giacomo Gradenigo and Frédéric van Wijland for a critical reading of the draft. \bibliographystyle{ws-rv-van} \chapter[Using World Scientific's Review Volume Document Style]{Using World Scientific's Review Volume\\ Document Style\label{ch1}} \author[F. Author and S. Author]{First Author and Second Author\footnote{Author footnote.}} \address{World Scientific Publishing Co, Production Department,\\ 5 Toh Tuck Link, Singapore 596224 \\ f\_author@wspc.com.sg\footnote{Affiliation footnote.}} \begin{abstract} The abstract should summarize the context, content and conclusions of the paper in less than 200 words. It should not contain any references or displayed equations. Typeset the abstract in 9 pt Times roman with baselineskip of 11 pt, making an indentation of 1.5 pica on the left and right margins. \end{abstract} \body \section{Using Other Packages}\label{sec1} The WSPC class file has already loaded the packages \verb|amsfonts, amsmath,| \verb|amssymb, epsfig, rotating,| and \verb|url| at the startup. Please try to limit your use of additional packages. They frequently introduce incompatibilities. 
This problem is not specific to the WSPC styles, it is a general \LaTeX{} problem. Check this manual whether the required functionality is already provided by the WSPC class file. If you do need third-party packages, send them along with the paper. In general, you should use standard \LaTeX{} commands as much as possible. \verb|Check.tex| is an utility to test for all the files required by World Scientific review volume project are available in your present \LaTeX\ installation. \noindent {\bf Usage:} \verb|latex check.tex| \section{Layout} In order to facilitate our processing of your article, please give easily identifiable structure to the various parts of the text by making use of the usual \LaTeX{} commands or by your own commands defined in the preamble, rather than by using explicit layout commands, such as \verb|\hspace, \vspace, \large, \centering|, etc. Also, do not redefine the page-layout parameters. \section{User Defined Macros} User defined macros should be placed in the preamble of the article, and not at any other place in the document. Such private definitions, i.e. definitions made using the commands \verb|\newcommand, \renewcommand|, \verb|\newenvironment| or \verb|\renewenvironment|, should be used with great care. Sensible, restricted usage of private definitions is encouraged. Large macro packages and definitions that are not used in the article should be avoided. Do not change existing environments, commands and other standard parts of \LaTeX. \subsection{Options and extra packages} The class options: \begin{alphlist}[onethmnum] \item [\tt draft] To draw border line around text area.\\ Default: no border line around text area. \item[\tt onethmnum] To number all theorem-like objects in a single sequence, e.g. Theorem~1, Definition 2, Lemma 3, etc.\\ Default: individual numbering on different theorem-like objects, e.g. Theorem 1, Definition 1, Lemma 1, etc. 
\end{alphlist} Apart from the packages mentioned in Sec~\ref{sec1}, the WSPC class also requires the following inhouse packages for customizing the citations and references. \noindent{\tablefont \begin{tabular}{@{}ll@{}}\\ {\bf Vancouver (numbered)}\\ \quad\verb|\usepackage{ws-rv-van}| & -- Superscript$^1$\\ \quad\verb|\usepackage[square]{ws-rv-van}| & -- Bracketed [1]\\[6pt] {\bf Harvard (author-date)}\\ \quad\verb|\usepackage{ws-rv-har}| & -- (Author, 1994) \end{tabular}\\[6pt]} The contributors are advised to consult the managing editor for the chosen style. You can obtain these files from our web pages at: \url{http://www.wspc.com.sg/style/review_style.shtml} and \url{http://www.icpress.co.uk/authors/stylefiles.shtml#review}. \section{Chapters} Each chapter should normally be in a separate file. The chapter title is typeset by using the \verb|\chapter[#1]{#2}| command, where \verb|[#1]| is an optional short title to be used as a running head if the title is too long and \verb|#2| is the full title of the chapter. The short, edited version of the title appears in the table of contents and running head. The chapter title should be typed in a way such that only the initial character is in upper case and the rest is in lower case. \section{Sectional Units} Sectional units are obtained in the usual way, i.e. with the \LaTeX{} instructions \verb|\section|, \verb|\subsection|, \verb|\subsubsection|, and \verb|\paragraph|. \section{Section} Text... \subsection{Subsection} Text... \subsubsection{Subsubsection} Text... \paragraph{Paragraph} Text... \subparagraph{Subparagraph} Text... \section*{Unnumbered Section} Unnumbered sections can be obtained by \verb|\section*|. \section{Lists of Items}\index{lists} Lists are broadly classified into four major categories that can randomly be used as desired by the author: \begin{alphlist}[(d)] \item Numbered list. \item Lettered list. \item Unnumbered list. \item Bulleted list. 
\end{alphlist} \subsection{Numbered and lettered list}\index{lists!numbered and lettered} \begin{enumerate} \item The \verb|\begin{arabiclist}[]| command is used for the arabic number list (arabic numbers appearing within or without parenthesis), e.g., (1), (2); 1., 2., etc.\index{lists!numbered and lettered!arabic (1, 2, 3...)} \smallskip \item The \verb|\begin{romanlist}[]| command is used for the roman number list (roman numbers appearing within parenthesis), e.g., (i), (ii), etc.\index{lists!numbered and lettered!roman (i, ii, iii...)} \smallskip \item The \verb|\begin{Romanlist}[]| command is used for the capital roman \hbox{number list} (capital roman numbers appearing within parenthesis), e.g., (I), (II), etc.\index{lists!numbered and lettered!Roman (I, II, III...)} \smallskip \item The \verb|\begin{alphlist}[]| command is used for the alphabetical list (alphabetical characters appearing within parenthesis), e.g., (a), (b), etc.\index{lists!numbered and lettered!alphabetical (a, b, c...)} \smallskip \item The \verb|\begin{Alphlist}[]| command is used for the capital alphabetical list (capital alphabetical characters appearing within parenthesis), e.g., (A), (B), etc.\index{lists!numbered and lettered!Alphabetical (A, B, C...)} \end{enumerate} Note: For all the above mentioned lists (with the exception of alphabetic list), it is obligatory to enter the last entry's number in the list within the square bracket, to enable unit alignment. Items numbered with lowercase Roman numerals: \begin{romanlist}[(iii)] \item item one \item item two \begin{alphlist}[(a)] \item item one \item lists within lists can be numbered with lowercase alphabets \end{alphlist} \item item three \item item four. \end{romanlist} \subsection{Bulleted and unnumbered list}\index{lists!bulleted and unnumbered} \begin{enumerate} \item The \verb|\begin{itemlist}| command is used for the bulleted list. 
\smallskip \item The \verb|\begin{unnumlist}| command is used for creating the unnumbered list with the turnovers hangindent by 1\,pica. \end{enumerate} Lists may be laid out with each item marked by a dot: \begin{itemlist} \item item one \item item two \item item three \item item four. \end{itemlist} \subsection{Proofs} The WSPC document styles also provide a predefined proof environment for proofs. The proof environment produces the heading `Proof' with appropriate spacing and punctuation. It also appends a `Q.E.D.' symbol, $\square$, at the end of a proof, e.g., \begin{verbatim} \begin{proof} This is just an example. \end{proof} \end{verbatim} \noindent produces \begin{proof} This is just an example. \end{proof} The proof environment takes an argument in curly braces, which allows you to substitute a different name for the standard `Proof'. If you want to display, `Proof of Lemma', then write \begin{verbatim} \begin{proof}[Proof of Lemma] This is just an example. \end{proof} \end{verbatim} \noindent produces \begin{proof}[Proof of Lemma] This is just an example. \end{proof} \section{Theorems and Definitions}\index{theorems} The WSPC document styles contain a set of pre-defined environments for theorems, definitions, proofs, remarks etc. All theorem-like objects use individual numbering scheme by default. To number them in a single sequence, load the class option \verb|onethmnum| in the preamble., e.g., \verb|\documentclass[onethmnum]{ws-rv10x7}|. 
The following environments are available by default with WSPC document styles: \begin{center} {\tablefont \begin{tabular}{ll} \toprule Environment & Heading\\\colrule \verb|algorithm| & Algorithm\\ \verb|answer| & Answer\\ \verb|assertion| & Assertion\\ \verb|assumption| & Assumption\\ \verb|case| & Case\\ \verb|claim| & Claim\\ \verb|comment| & Comment\\ \verb|condition| & Condition\\ \verb|conjecture| & Conjecture\\ \verb|convention| & Convention\\ \verb|corollary| & Corollary\\ \verb|criterion| & Criterion\\ \verb|definition| & Definition\\ \verb|example| & Example\\ \verb|lemma| & Lemma\\ \verb|notation| & Notation\\ \verb|note| & Note\\ \verb|observation| & Observation\\ \verb|problem| & Problem\\ \verb|proposition| & Proposition\\ \verb|question| & Question\\ \verb|remark| & Remark\\ \verb|solution| & Solution\\ \verb|step| & Step\\ \verb|summary| & Summary\\ \verb|theorem| & Theorem\\\botrule \end{tabular}}\label{theo} \end{center} \begin{verbatim} \begin{theorem}[Longo, 1998] For a given $Q$-system... \[ N = \{x \in N; T x = \gamma (x) T, T x^* = \gamma (x^*) T\}\,, \] and $E_\Xi (\cdot) = T^* \gamma ... \end{theorem} \end{verbatim} \noindent generates \begin{theorem}[Longo, 1998] For a given $Q$-system... \noindent\[ N = \{x \in N; T x = \gamma (x) T, T x^* = \gamma (x^*) T\}\,, \] and $E_\Xi (\cdot) = T^* \gamma (\cdot) T$ gives a conditional expectation onto $N$. \end{theorem} \begin{verbatim} \begin{theorem} We have $\# H^2 (M \supset N) < ... \end{theorem} \end{verbatim} \noindent produces \begin{theorem} We have $\# H^2 (M \supset N) < \infty$ for an inclusion $M \supset N$ of factors of finite index. \end{theorem} \LaTeX{} provides \verb|\newtheorem| to create new theorem environments. 
To add a new theorem-type environments to a chapter, use \begin{verbatim} \newtheorem{example}{Example}[section] \let\Examplefont\upshape \def\Exampleheadfont{\bfseries} \end{verbatim} \section{Mathematical Formulas} \paragraph{Inline:} For in-line formulas use \verb|\( ... \)| or \verb|$ ... $|. Avoid built-up constructions like, fractions and matrices, in in-line formulas. Fractions in inline can be typed with a solidus. e.g. \verb|x+y/z=0|. \index{equations!inline} \paragraph{Display:} For numbered display formulas use the displaymath environment \index{equations!display} \verb|\begin{equation}...| \verb|\end{equation}|. And for unnumbered display formula use \verb|\[ ... \]|. For numbered displayed one line formulas always use the equation environment. Do not use \verb|$$ ... $$|. For example, the input for: \noindent\begin{equation} \mu(n, t) = \frac{\sum\limits^\infty_{i=1}1 (d_i < t, N(d_i) = n)} {\int\limits^t_{\sigma=0}1(N(\sigma)=n)d\sigma}\,. \label{eq1} \end{equation} \noindent is: \begin{verbatim} \begin{equation} \mu(n,t)=\frac{\sum\limits^\infty_{i=1}1(d_i < t, N(d_i) = n)} {\int\limits^t_{\sigma=0}1(N(\sigma)=n)d\sigma}\,.\label{eq1} \end{equation} \end{verbatim} For displayed multi-line formulas use the eqnarray environment. \begin{verbatim} \begin{eqnarray} \zeta\mapsto\hat{\zeta} & = & a\zeta+b\eta\label{eq2}\\ \eta\mapsto\hat{\eta} & = & c\zeta+d\eta\label{eq3} \end{eqnarray} \end{verbatim} \noindent\begin{eqnarray} \zeta\mapsto\hat{\zeta} & = & a\zeta+b\eta\label{eq2}\\ \eta\mapsto\hat{\eta} & = & c\zeta+d\eta\label{eq3} \label{eq2n3} \end{eqnarray} Superscripts and subscripts that are words or abbreviations, as in \( \pi_{\mathrm{low}} \), should be typed as roman letters; this is done as \verb|\( \pi_{\mathrm{low}} \)| instead of \( \pi_{low} \) done by \verb|\( \pi_{low} \)|. For geometric functions, e.g. exp, sin, cos, tan, etc. please use the macros \verb|\sin, \cos, \tan|. These macros gives proper spacing in mathematical formulas. 
It is also possible to use the \AmS-\LaTeX{} package,\cite{ams04} which can be obtained from the \AmS, from various \TeX{} archives. \section{Floats}\index{floats} \subsection{Tables}\index{floats!tables} Put the tables and figures in the text with the table and figure environments, and position them near the first reference of the table or figure in the text. Please avoid long caption texts in figures and tables. Do not put them at the end of the article. \begin{table}[b] \tbl{Sample table caption.} {\begin{tabular}{@{}cccc@{}} \toprule Piston mass$^{\text a}$ & Analytical frequency & \% Error \\ & (Rad/s) & (Rad/s) \\ \colrule 1.000 & \hphantom{0}281.0 & 0.07 \\ 0.010 & 2441.0 & 0.00 \\ 0.001 & 4130.0 & 0.16\\ \botrule \end{tabular} } \begin{tabnote} $^{\text a}$Sample table footnote. \end{tabnote} \label{tbl1} \end{table} \begin{verbatim} \begin{table}[b] \tbl{Sample table caption.} {\begin{tabular}{@{}ccc@{}} \toprule Piston mass$^{\text a}$ & ... \\ & (Rad/s) & (Rad/s) \\ \colrule 1.000 & ...\\ \botrule \end{tabular}} \begin{tabnote} $^{\text a}$Sample table footnote. \end{tabnote}\label{tbl1} \end{table} \end{verbatim} For most tables, the horizontal rules are obtained by: \begin{description}\index{floats!tables!rules} \item[toprule] one rule at the top \item[colrule] one rule separating column heads from data cells \item[botrule] one bottom rule \item[Hline] one thick rule at the top and bottom of the tables with multiple column heads \end{description} To avoid the rules sticking out at either end of the table add \verb|@{}| before the first and after the last descriptors, e.g. {@{}llll@{}}. Please avoid vertical rules in tables. But if you think the vertical rule is must, you can use the standard \LaTeX{} \verb|tabular| environment. By using \verb|\tbl| command in table environment, long captions will be justified to the table width while the short or single line captions are centered. 
If we need the fixed width for the tables, the command is \verb|\begin{tabular*}{#1}{@{}ll@{}}| and \verb|\end{tabular*}|. The argument \verb|#1| takes the value of table width. For example, if we need a table with 25pc width, then the command is \verb|\begin{tabular*}{25pc}{@{\extracolsep| \verb|{\fill}}ll@{}}|. Headings which span for more than one column should be set using \verb|\multicolumn{#1}{#2}{#3}| where \verb|#1| is the number of columns to be spanned, \verb|#2| is the argument for the alignment of the column head which may be either {c} --- for center alignment; {l} --- for left alignment; or {r} --- for right alignment, as desired by the users. Use {c} for column heads as this is the WS style and \verb|#3| is the heading. A simplified alternative version is \verb|\centre{#1}{#2}| where \verb|#1| is the number of columns to be spanned and \verb|#2| the heading. There should be a rule spanning the same columns below the heading. Termed as spanner or bridge rule, it is generated using the command \verb|\cline{n-m}| where \verb|n| is the number of the first spanned column and \verb|m| that of the last spanned column. \verb|\cline| should not be part of a row but follow immediately after a \verb|\\|. If a table contains note(s), as a universal thumb-rule they should appear beneath the table set to its width and seldom at the foot of the page. For the footnotes in the table environment the command is \verb|{\begin{tabnote}<text>\end{tabnote}}|. Appropriate symbols should be included in the body of the table matching their corresponding symbols in the footnotes where the footnotes are to be placed immediately after the \verb|{\begin{tabnote}| command and terminated before \verb|\end{tabnote}}\end{table}| command. The tables are designed to have a uniform style throughout the whole book. We prefer the border lines to be of the style as shown in our sample Tables. For the inner lines of the table, it looks better if they are kept to a minimum. 
\def\p{\phantom{$-$}} \def\pc{\phantom{,}} \def\p0{\phantom{0}} \begin{sidewaystable} \tbl{Positive values of $X_0$ by eliminating $Q_0$ from Eqs.~(15) and (16) for different values of the parameters $f_0$, $\lambda_0$ and $\alpha_0$ in various dimension.} {\begin{tabular}{@{}ccccccccccc@{}} \toprule\\[-6pt] $f_0$ &$\lambda_0$ &$\alpha_0$ &\multicolumn{8}{c}{Positive roots ($X_0$)}\\[3pt] \hline\\[-6pt] && &4D &5D &6D &7D &8D &10D &12D &16D\\[3.5pt] \hline\\[-6pt] \phantom{1}$-0.033$ &0.034 &\phantom{0}0.1\phantom{.01} &6.75507,\p0 &4.32936,\p0 &3.15991,\p0 &2.44524,\p0 &1.92883,\p0 &0.669541, &--- &---\\[3.5pt] &&&1.14476\pc\p0 &1.16321\pc\p0 &1.1879\pc\phantom{00} &1.22434\pc\p0 &1.29065\pc\p0 &0.415056\pc\\[3.5pt] \phantom{1}$-0.1$\phantom{33} &0.333 &\phantom{0}0.2\phantom{.01} &3.15662,\p0 &1.72737,\p0 &--- &--- &--- &--- &--- &---\\[3.5pt] &&&1.24003\pc\p0 &1.48602\pc\p0\\[3.5pt] \phantom{1}$-0.301$ &0.302 &0.001 &2.07773,\p0 &--- &--- &--- &--- &--- &--- &---\\[3.5pt] &&&1.65625\pc\p0\\[3.5pt] \phantom{1}$-0.5$\phantom{01} &0.51\phantom{2} &\phantom{0}0.001 &--- &--- &--- &--- &--- &--- &--- &---\\[3.5pt] $\phantom{1-}$0.1\phantom{01} &0.1\phantom{02} &\phantom{0}2\phantom{.001} &1.667,\phantom{000} &1.1946\phantom{00,} &--- &--- &--- &--- &--- &---\\[3.5pt] &&&0.806578\pc &0.858211\pc\\[3.5pt] $\phantom{1-}$0.1\phantom{01} &0.1\phantom{33} &10\phantom{.001} &0.463679\pc &0.465426\pc &0.466489\pc &0.466499\pc &0.464947\pc &0.45438\pc\p0 &0.429651\pc &0.35278\pc\\[3.5pt] $\phantom{1-}$0.1\phantom{01} &1\phantom{.333} &\phantom{0}0.2\phantom{01} &--- &--- &--- &--- &--- &--- &--- &---\\[3.5pt] $\phantom{-0}$1\phantom{.033} &0.001 &\phantom{0}2\phantom{.001} &0.996033, &0.968869, &0.91379,\p0 &0.848544,&0.783787, &0.669541, &0.577489, &---\\[3.5pt] &&&0.414324\pc &0.41436\pc\p0 &0.414412\pc &0.414489\pc &0.414605\pc &0.415056\pc &0.416214\pc\\[3.5pt] \phantom{10}\phantom{.033} &0.001 &\phantom{0}0.2\phantom{01} &0.316014, &0.309739, &--- &--- &--- &--- &--- 
&---\\[3.5pt] &&&0.275327\pc &0.275856\pc\\[3.5pt] \phantom{10}\phantom{.033} &0.1\phantom{33} &\phantom{0}5\phantom{.001} &0.089435\pc &0.089441\pc &0.089435\pc &0.089409\pc &0.08935\pc\p0 &0.089061\pc &0.088347\pc &0.084352\pc\\[3.5pt] \phantom{10}\phantom{.033} &1\phantom{.333} &\phantom{0}3\phantom{.001} &0.128192\pc &0.128966\pc &0.19718,\p0 &0.169063, &0.142103, &--- &--- &---\\[3.5pt] &&&& &0.41436\pc\p0 &0.414412\pc &0.414489\pc\\[3pt] \Hline \end{tabular}}\label{tbl2} \end{sidewaystable} Landscape tables and figures can be typeset with following environments: \begin{itemize}\index{floats!tables!landscape} \item \verb|sidewaystable| and \item \verb|sidewaysfigure|. \end{itemize} \noindent {\bf Example:} \begin{verbatim} \begin{sidewaystable} \tbl{Positive values of ...} {\begin{tabular}{@{}ccccccccccc@{}} ... \end{tabular}} \label{tbl2} \end{sidewaystable} \end{verbatim} \subsection{Figures}\index{floats!figures} The preferred graphics are tiff and Encapsulated PostScript, eps in short, for any type of graphic. Our \TeX\ installation requires eps, but we can easily convert tiff to eps. Many other formats, e.g. pict (Macintosh), wmf (Windows) and various proprietary formats, are not suitable. Even if we can read such files, there is no guarantee that they will look the same on our systems as on yours. Next adjust the scaling of the figure until it's correctly positioned, and remove the declarations of the lines and any anomalous spacing. If instead you wish to use some other method, then it's most important to leave the right amount of vertical space in the figure declaration to accommodate your figure (i.e.~remove the lines and change the space in the example). 
A figure is obtained with the following commands \begin{verbatim} \begin{figure} \centerline{\includegraphics[width=5.2cm]{rv-fig1}} \caption{Figure caption.} \label{fig1} \end{figure} \end{verbatim} \begin{figure} \centerline{\includegraphics[width=5.2cm]{rv-fig1}} \caption{Figure caption.} \label{fig1} \end{figure} \begin{sidewaysfigure} \begin{center} \includegraphics[width=6.6in]{rv-fig2} \end{center} \caption{The bifurcating response curves of system $\alpha=0.5$, $\beta=1.8$; $\delta=0.2$, $\gamma=0$: (a) $\mu=-1.3$; and (b) $\mu=0.3$.} \label{fig2} \end{sidewaysfigure} Very large figures and tables should be placed on a page by themselves, e.g., \index{floats!figures!landscape} \begin{verbatim} \begin{sidewaysfigure} \begin{center} \includegraphics[width=6.6in]{rv-fig2} \end{center} \caption{The bifurcating response curves of system $\alpha=0.5$, $\beta=1.8$; $\delta=0.2$, $\gamma=0$: (a) $\mu=-1.3$; and (b) $\mu=0.3$.} \label{fig2} \end{sidewaysfigure} \end{verbatim} \begin{figure} \begin{center} \subfigure[\label{fig3a}]{\includegraphics[width=2in]{rv-fig3a}}\qquad \subfigure[\label{fig3b}]{\includegraphics[width=2in]{rv-fig3b}} \caption{Two figures side-by-side. (a) Figure caption for figure 3a. (b) Figure caption for figure 3b. \label{fig3} \end{center} \end{figure} Figures Fig.~\ref{fig3a} and \fref{fig3b} are referred with \verb|Fig.~\ref{fig3a}| and \verb|\fref{fig3b}| commands. Side-by-side figures are obtained with: \begin{verbatim} \begin{figure} \begin{center} \subfigure[\label{fig3a}]{\includegraphics[width=2in]{rv-fig3a}}\qquad \subfigure[\label{fig3b}]{\includegraphics[width=2in]{rv-fig3b}} \caption{Two figures side-by-side. (a) Figure caption for figure 3a. (b) Figure caption for figure 3b. \label{fig3} \end{center} \end{figure} \end{verbatim} \section{Cross-references} Use \verb|\label| and \verb|\ref| for cross-references to equations, figures, tables, sections, subsections, etc., instead of plain numbers. 
Every numbered part to which one wants to refer, should be labelled with the instruction \verb|\label|. For example: \begin{verbatim} \begin{equation} \mu(n, t) = \frac{\sum ... d\sigma}\,. \label{eq1} \end{equation} \end{verbatim} With the instruction \verb|\ref| one can refer to a numbered part that has been labelled: \begin{verbatim} ..., see also Eq. (\ref{eq1}) \end{verbatim} \begin{itemize} \item labels should not be repeated. \item The \verb|\label| instruction should be typed immediately after (or one line below), e.g., \verb|\caption{Sample ... }\label{fig2.1}|. Labels should not be typed inside the argument of a number-generating instruction such as \verb|\section| or \verb|\caption|, \item For chapters, labels should be placed inside \verb|\chapter|, e.g.,\\ \verb|\chapter{Chapter Title\label{ch2}}|. \end{itemize} \begin{table}[t] \begin{center}{\tablefont Some useful shortcut commands for cross-referencing.\\ \begin{tabular}{@{}lll@{}} \toprule Shortcut & Equivalent & Output \\ command & \TeX\ command\\\colrule \multicolumn{3}{@{}l@{}}{In the middle of a sentence:}\\ \verb|\eref{eq1}| & Eq.~(\verb|\ref{eq1}|) & \eref{eq1}\\ \verb|\sref{sec1}| & Sec.~\verb|\ref{sec1}| & \sref{sec1}\\ \verb|\cref{ch1}| & Chap.~\verb|\ref{ch1}| & \cref{ch1}\\ \verb|\fref{fig1}| & Fig.~\verb|\ref{fig1}| & \fref{fig1}\\ \verb|\tref{tbl1}| & Table~\verb|\ref{tbl1}| & \tref{tbl1}\\[3pt] \multicolumn{2}{@{}l}{At the starting of a sentence:}\\ \verb|\Eref{eq1}| & Equation (\verb|\ref{eq1}|) & \Eref{eq1}\\ \verb|\Sref{sec1}| & Section~\verb|\ref{sec1}| & \Sref{sec1}\\ \verb|\Cref{ch1}| & Chapter~\verb|\ref{ch1}| & \Cref{ch1}\\ \verb|\Fref{fig1}| & Figure~\verb|\ref{fig1}| & \Fref{fig1}\\ \verb|\Tref{tbl1}| & Table~\verb|\ref{tbl1}| & \Tref{tbl1}\\\botrule \end{tabular}} \end{center} \end{table} \section{Citations}\label{cit}\index{citation} World Scientific's preferred style for Review Volume is the Vancouver (numbered) system, unless if the text is not very heavily referenced in 
which case the Harvard (author-date) system may be used. \begin{center} {\tablefont \begin{tabular}{@{}ll@{}}\toprule System & Package\\\colrule \multicolumn{2}{@{}l}{Vancouver (numbered)}\\ $\bullet$ Bracketed [1] & \verb|\usepackage[square]{ws-rv-van}|\\ $\bullet$ Superscript$^1$ & \verb|\usepackage{ws-rv-van}|\\ &(Default style)\\[3pt] \multicolumn{2}{@{}l}{Harvard (author-date)}\\ \verb|[Brown (1988)]|&\verb|\usepackage{ws-rv-har}|\\\botrule \end{tabular}} \end{center} Citations in the text use the labels defined in the bibitem declaration, for example, the first paper by Jarlskog\cite{jarl88} is cited using the command \verb|\cite{jarl88}|. The bibitem labels should not be repeated. For multiple citations do not use \verb|\cite{1}\cite{2}|, but use \verb|\cite{1,2}| instead. \subsection{Vancouver Style}\index{citation!numbered} Reference citations in the text are to be numbered consecutively in Arabic numerals, in the order of first appearance. The numbered citations can appear in two ways: \begin{romanlist}[(ii)] \item bracketed \item superscript (default style) \end{romanlist} \subsubsection{Bracketed}\index{citation!numbered!bracketed} References cited in the text are within square brackets, e.g., \begin{arabiclist}[(2)] \item \verb|``One can deduce from Ref.~\cite{benh93} that...''|\\ ``One can deduce from Ref.~[3] that...'' \smallskip \item \verb|``See Refs.~\cite{ams04,bake72, benh93,brow88} and \cite{davi93}|\\ \verb| for more details.''|\\ ``See Refs.~[1--3, 5] and [7] for more details.'' \end{arabiclist} \subsubsection{Superscript}\index{citation!numbered!superscript} References cited in the text appear as superscripts, e.g., \begin{arabiclist}[(2)] \item \verb|``...in the statement.\cite{ams04}''|\\ ``...in the statement.$^1$'' \smallskip \item \verb|``...have proven\cite{bake72} that this equation...''|\\ ``...have proven$^2$ that this equation...'' \end{arabiclist} When the reference forms part of the sentence, it should appear with ``Reference'' 
or ``Ref.'', e.g., \begin{arabiclist}[(2)] \item \verb|``One can deduce from Ref.~\refcite{benh93} that...''|\\ ``One can deduce from Ref.~3 that...'' \smallskip \item \verb|``See Refs.~\refcite{brow88} and \refcite{davi93} for more details.''|\\ ``See Refs.~5 and 7 for more details.'' \end{arabiclist} When superscripted citations are used, there should not be a space before \verb|\cite{key}|, e.g., citation: \verb|see\cite{zipf}|\hskip-60pt\lower8pt\hbox{$\uparrow$}\hskip-4pt\lower16pt\hbox{no character space here} \subsection{Harvard Style}\index{citation!author-date} Citations in the text use the labels defined in the \verb|bibitem| declaration; for example, [Jarlskog (1988)] is cited using the command \verb|\cite{jarl88}|, while \verb|\citet{jarl88}| produces Jarlskog (1988). See Sec.~\ref{secbib} for more details on coding references in the Vancouver and Harvard styles. \section{Footnote} Footnotes are denoted by a Roman letter superscript in the text. Footnotes can be used as \begin{verbatim} ... in total.\footnote{Sample footnote.} \end{verbatim} \noindent {\bf Output:} ... in total.\footnote{Sample footnote.} \section{Acknowledgments} Acknowledgments to funding bodies etc. may be placed in a separate section at the end of the text, before the Appendices. This section should not be numbered, so use \verb|\section*{Acknowledgements}|. \section{Appendix}\index{appendix} Appendices should be used only when absolutely necessary. They should come before the References. \begin{verbatim} \begin{appendix}[Optional Title] \section{Sample Appendix} Text... \subsection{Appendix subsection} Text...
\end{appendix} \end{verbatim} \section{Bibliography}\label{secbib}\index{bibliography} \subsection{\btex\ users} \btex\index{BIBTeX} users should use our bibliography style file: \begin{itemize} \item For Vancouver (numbered) styled references \begin{verbatim} \usepackage{ws-rv-van} \bibliographystyle{ws-rv-van} \end{verbatim} \end{itemize} \chapter[Glassy features and complex dynamics in ecological systems]{Glassy features and complex dynamics in ecological systems\\ \label{Ch 27}} \author[A. Altieri]{A. Altieri\footnote{ada.altieri@u-paris.fr}} \address{Laboratoire Matière et Systèmes Complexes (MSC), Université Paris Cité\\ CNRS, 75013 Paris, France } \vspace{0.3cm} \begin{abstract} In this report, I will review some of the most commonly used models in theoretical ecology, along with appealing reformulations and recent results in terms of the diversity, stability and functioning of large well-mixed ecological communities. \end{abstract} \body \section{Introduction} \label{sec1} Emergent properties of many-species ecological communities have a variety of applications: for example, the activity of the gut microbiota is believed to be crucial for human health; sustaining natural diversity is essential for services such as food supply, pollination and climate regulation. There is growing awareness that human activity is causing irreversible species extinction and ecosystem simplifications, generally considered a \emph{global biodiversity crisis}. The Earth Microbiome Project\footnote{https://earthmicrobiome.org/} and the Human Microbiome Project\footnote{\url{https://www.hmpdacc.org/}} are designed in this direction, aiming to identify and characterize all diverse microorganisms and their relationship to ecological stability and disease development. The incredible biodiversity that characterizes natural ecosystems has attracted ecologists for a long time but has more recently started to gather interest among theoretical physicists as well.
From a theoretical perspective, modeling the interactions between many different components – from bacteria in a microbial community to plant-pollinator interactions in a forest to starling murmurations – can become extremely complicated. A single, well-established theory that bridges the gap between the empirical data made available by an enormous number of controlled experiments and more sophisticated theoretical techniques is nevertheless still missing. In addition to the need for a general criterion that would enable one to discriminate between \emph{niche theory} -- for which each niche is occupied by a single species according to the competitive exclusion principle \cite{Hardin1960} -- and \emph{neutral models} -- in which differences are attributed only to stochasticity -- other crucial questions come to the stage and play an increasingly key role: i) relaxation either to a single fixed point or to a multiple fixed point regime; ii) the definition of ecosystem diversity, \emph{i.e.} the number of surviving species; iii) the typical behavior of fluctuations and functional responses under the effect of external perturbations; iv) the investigation of the interplay between stochastic and deterministic processes and how community diversity and variability are related to them; v) the possible emergence of chaotic dynamics and limit cycles to be measured experimentally. In this chapter, we aim to present different statistical physics frameworks that rely on advanced spin-glass techniques, for which Giorgio Parisi has been a pioneer as well as a beacon outlining the right direction in a multitude of complex scenarios. \subsection{More is Different} Theory has long predicted that large complex systems are intrinsically unstable \cite{May1972, May1976}, which is a long-standing puzzle given the complexity observed in Nature. In recent years there has nevertheless been growing interest in systems composed of an enormous number of species interacting in myriad ways in very complex environments.
Such systems can thus be rephrased through the prism of statistical physics using sophisticated concepts and powerful methods in this direction \cite{Faust2012, Fisher2013, Kessler2015, Bunin2017, Altieri2019, Servan2018, Marsland2020, pearce2020stabilization, wu2021understanding}. In a bottom-up approach, the detailed structure of individual interactions and how such coefficients scale with the system size are unknown, since they are particularly difficult to infer in diversity-rich ecosystems. Hence, to tackle the staggering complexity of large ecological communities, one can follow a long tradition rooted in Robert May's seminal works \cite{May1972, May1976} and assume the interaction matrix to be random. May considered a \emph{community matrix} $H$ of size $S \times S$, $S$ being the total number of species in the pool and $H_{ij}$ standing for the effect of species $j$ on $i$ around a feasible fixed point. In this picture, the self-regulation term corresponding to the diagonal elements is fixed to $-1$, whereas each off-diagonal element is nonzero with probability $C$ -- the connectivity -- and drawn from a random distribution with zero mean and variance $\sigma^2$, sometimes referred to as the heterogeneity parameter. According to May's conjecture, if $\sigma \sqrt{S C} >1$ the system is inevitably unstable under infinitesimally small perturbations and cannot persist. Hence, as a system becomes more diverse (controlled by the number of species $S$ in the pool), more connected (in terms of the connectivity $C$), and more strongly interacting (tuned by $\sigma$), a transition to instability occurs and the probability of persisting drops close to zero. In the large $S$ limit, random matrix theory comes into play: the eigenvalues of the community (or Jacobian) matrix are contained inside a circle of radius $\sigma \sqrt{SC}$ centred at $-1$ in the complex plane.
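May's criterion is straightforward to verify numerically. The following minimal sketch (parameter values are purely illustrative, not taken from the text) samples community matrices on both sides of the transition and compares the rightmost eigenvalue with the predicted spectral edge at $-1+\sigma\sqrt{SC}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def community_matrix(S, C, sigma):
    """May's random community matrix: self-regulation -1 on the diagonal;
    each off-diagonal entry is nonzero with probability C and then drawn
    from a zero-mean Gaussian with standard deviation sigma."""
    H = rng.normal(0.0, sigma, size=(S, S))
    H *= rng.random((S, S)) < C           # connectivity mask
    np.fill_diagonal(H, -1.0)
    return H

S, C = 500, 0.2
for sigma in (0.05, 0.2):                 # complexity sigma*sqrt(S*C) = 0.5, 2.0
    eigs = np.linalg.eigvals(community_matrix(S, C, sigma))
    edge = -1.0 + sigma * np.sqrt(S * C)  # predicted rightmost point of the circle
    print(f"sigma*sqrt(SC) = {sigma * np.sqrt(S * C):.1f}: "
          f"max Re(eig) = {eigs.real.max():+.2f}, predicted edge = {edge:+.2f}")
```

In a run of this kind the rightmost eigenvalue tracks the predicted edge: it stays negative below the threshold and crosses into the right half-plane above it.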
Therefore, the system is stable provided the resulting circle lies in the left half-plane, with all eigenvalues having negative real parts. To provide general criteria that could encompass all diversified cases, one can then play with the interaction matrix by changing the strengths and mutual signs of its entries. A suitable reshuffling of local interactions clearly raises a number of questions on how different combinations of them affect the stability of the overall community and what would be a good trade-off (weak/strong, mutualistic/competitive) to avoid, for instance, the destabilization of a prey-predator chain if weak interactions are preponderant \cite{Allesina2012}. \section{High-dimensional MacArthur model at the edge of stability} In the following, we shall focus on mathematical models that offer a suitable platform to understand ecosystems' behavior: given some input information, predictions on species survival, responses to external perturbations, and the emergence of robust structures can be extracted as an output. We will start with a very influential one, the MacArthur resource-consumer model, originally designed to describe the competition among $S$ different species for $N$ non-interacting resources \cite{Macarthur1970}. Notably, if the dynamics describing resource evolution is much faster than that of the populations, the former can be integrated out, leading to the generalized Lotka-Volterra equations\cite{Lotka1920, Volterra1927}. The random Lotka-Volterra model will thus represent the second core of this chapter, through which we will figure out how to overcome certain inherent limitations of such a resource-consumer model. By taking advantage of the definition of self-averaging quantities, MacArthur's model has recently been reformulated as a problem of statistical physics of disordered systems and then solved analytically in the limit of an infinite number of species and resources \cite{Tikhonov2017}.
We will especially use it to probe several underlying connections between the phenomenology of jamming \cite{Altieri2019book} and criticality in large ecosystems. The dynamics of the model is defined by first-order differential equations for the $n_\mu$ individuals, where the index $\mu=1,..., S$ denotes the different species: \begin{equation} \frac{d n_\mu}{dt} \propto n_\mu \Delta_\mu \ , \label{dn-mu} \end{equation} where $\Delta_\mu$ is the \emph{resource surplus}. As long as one is concerned only with equilibrium, the proportionality factor in the dynamical equation above can be safely neglected. The equilibrium condition from Eq. (\ref{dn-mu}) leads to two possibilities: i) $n_\mu >0$ $\&$ $\Delta_\mu=0$ (survival); ii) $n_\mu =0$ $\&$ $\Delta_\mu<0$ (extinction)\footnote{The case $\Delta_\mu >0$ is actually forbidden by the model definition.}. The variables $\Delta_\mu$ then depend on the availabilities of resources $h_i$ (with $i=1,..., N$) and on the \emph{metabolic strategies}, $\sigma_{\mu i}$, by which species demand and possibly meet their requirement $\chi_\mu$: \begin{equation} \Delta_\mu=\sum_{i=1}^{N} \sigma_{\mu i} h_i -\chi_\mu \ . \label{Delta} \end{equation} For each species $\mu$, the metabolic strategy is a random binary vector whose components $\sigma_{\mu i}$ take the values $1$ and $0$ with probabilities $p$ and $1-p$ respectively. The parameter $p$ determines whether the species in the ecosystem are either specialists ($p \ll 1$), each requiring a small number of well-defined metabolites necessary for their survival, or generalists ($p \sim 1$), meaning that many different metabolites can be appropriate for their needs. In turn, the individuals $n_\mu$ depend on the availability of resources, $h_i$, according to a feedback loop mechanism, which is essentially modulated by the efficiencies through which species exploit resources.
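To make these ingredients concrete, the following minimal sketch (all parameter values are illustrative) draws random metabolic strategies and evaluates the resource surpluses of Eq. (\ref{Delta}) at uniform availabilities, using the cost parametrization $\chi_\mu=\sum_i \sigma_{\mu i}+\epsilon x_\mu$ introduced below:

```python
import numpy as np

rng = np.random.default_rng(1)

S, N, p = 200, 100, 0.3        # species, resources, strategy density (illustrative)
eps = 1e-3                     # small cost scatter

# Metabolic strategies: binary vectors with entries 1 w.p. p, 0 w.p. 1-p.
sigma = (rng.random((S, N)) < p).astype(float)
# Requirements chi_mu = sum_i sigma_mu_i + eps * x_mu, with Gaussian x_mu.
chi = sigma.sum(axis=1) + eps * rng.normal(size=S)

h = np.ones(N)                 # uniform availabilities as a benchmark point
Delta = sigma @ h - chi        # resource surpluses

print("fraction of species with Delta < 0:", np.mean(Delta < 0))
```

At $h_i=1$ the surpluses reduce to $-\epsilon x_\mu$, so roughly half of the species lie on each side of the survival condition; it is the feedback of the abundances on the availabilities, introduced next, that lifts this degeneracy and selects the surviving community.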
By defining the total demand, $T_i=\sum_\mu n_\mu \sigma_{\mu i}$, the availabilities $h_i$ can simply be expressed as a decreasing function of it. For instance, one can consider $h_i= \frac{R_i}{\sum_\mu n_{\mu} \sigma_{\mu i}}$, where $R_i$ is the resource supply, whose average is held constant whereas its variance, $\delta R^2$, can be varied and used to reproduce the resulting phase diagram. Over the years several mechanisms have been put forward to explain the fact that complex -- and in particular living -- systems tend to be poised at the edge of stability: the edge of chaos \cite{Kauffman1991}, self-organized criticality \cite{Bak2013nature}, self-organized instability, scale-free behavior, etc. Here we propose an example that leverages an alternative principle\cite{Altieri2019}. It is based on recasting the MacArthur model in terms of a Constraint Satisfaction Problem (CSP). Hence, in analogy with a standard CSP, above the hyperplane $\vec{h} \cdot \vec{\sigma}_\mu = \chi_\mu$ species are able to survive and multiply; conversely, if $\vec{h} \cdot \vec{\sigma}_{\mu} < \chi_\mu$, the sustainability of the species' pool is no longer guaranteed. All $\vec{h}$ such that $\vec{h} \cdot \vec{\sigma}_{\mu} < \chi_\mu$ define the so-called \emph{unsustainable region} for each species $\mu$. One can now re-express the requirement $\chi_\mu$ via a random variable, \emph{i.e.} $\chi_\mu=\sum_i \sigma_{\mu i} +\epsilon x_\mu$ \cite{Tikhonov2017}, where the parameter $\epsilon$ plays the role of an infinitesimal cost scatter and $x_\mu$ is a zero-mean, unit-variance Gaussian variable. It has been shown that, in the $\epsilon \rightarrow 0$ limit, the model undergoes a phase transition between two qualitatively different regimes: i) a \emph{shielded phase}; ii) a \emph{vulnerable phase} \cite{Tikhonov2017}. In the shielded phase, $\mathcal{S}$, a collective behavior emerges with no influence of external conditions.
If the availabilities are set to one in such a way that neither specialists nor generalists are favored, and a sufficiently small perturbation is applied to the system, a feedback mechanism between $h_i$ and $n_\mu$ contributes to adjusting the species' mutual abundances and to keeping the availabilities almost unchanged, $\forall i$. The situation is quite different in the \emph{vulnerable phase}, $\mathcal{V}$, where species cannot self-sustain and turn out to be strongly affected by changes and improvements in the immediate environment. To characterize the stability of a general competing system against perturbations in a more rigorous way, one can introduce a Lyapunov function and compute the density of fluctuations in the two phases. The positive or vanishing behavior of such a function, together with its time derivative, provides information on whether the equilibrium is unstable, locally asymptotically stable, or globally asymptotically stable. In this specific case, the Lyapunov function reads \begin{equation} F(\lbrace n_\mu \rbrace)=\sum_{i} R_i \log \left(\sum_\mu n_\mu \sigma_{\mu i} \right) -\sum_\mu n_\mu \chi_\mu \ , \label{lyapunov} \end{equation} which is bounded from above, hence guaranteeing that an equilibrium always exists. By differentiating Eq. (\ref{lyapunov}) to the second order, one eventually obtains \begin{equation} \frac{d ^2 F}{d n_\mu d n_\nu}=-\sum_{i} \sigma_{\mu i} \sigma_{\nu i} \frac{R_i}{(\sum_{\rho} n_{\rho} \sigma_{\rho i} )^2}=-\sum_{i} \sigma_{\mu i} \sigma_{\nu i} \left( \frac{h_i^2}{R_i} \right) \ . \label{Hessian_n} \end{equation} In the $\mathcal{S}$ phase, \emph{i.e.} for $h_i \simeq 1$, this expression leads to a modified Wishart matrix whose eigenvalue distribution is given by the Marchenko-Pastur law \cite{MarchenkoP} in the limit of a large number of species and resources.
Accordingly, the resulting spectral density reads: \begin{equation} \rho(\lambda)= \frac{1}{2 \pi} \frac{\sqrt{(\lambda-\lambda_{-})(\lambda_{+}-\lambda)}}{\lambda} \ , \end{equation} where the upper and lower edges of the spectrum are $\lambda_{\pm}= (\sqrt{\phi}\pm 1)^2$. The quantity $\phi$ denotes the fraction of active species or, borrowing the Constraint Satisfaction Problem (CSP) jargon, the fraction of \emph{satiated constraints} for which $\Delta_\mu=0$. In analogy with the so-called SAT/UNSAT transition, we can associate the $\mathcal{V}$ phase with a \emph{hypostatic regime}, with a smaller number of saturated constraints than the total number of variables \cite{Wyart2005, Franz2016}: this case corresponds to a gapped spectral density without any signature of an emerging criticality. Conversely, the $\mathcal{S}$ phase would correspond to an \emph{isostatic regime} -- where the number of saturated constraints equals the overall space dimension -- and a gapless spectrum for the distribution of eigenvalues appears. Because the lower edge of the spectrum, $\lambda_{-}$, tends to zero upon approaching the $\mathcal{V}/\mathcal{S}$ transition line, the eigenvalue density in the $\mathcal{S}$ phase becomes: \begin{equation} \rho(\lambda) \sim \sqrt{\left(4-\lambda\right)/{\lambda}} \ . \end{equation} A vanishing lower edge is in turn related to the appearance of a zero mode in the Hessian matrix of the replicated free energy (the so-called \emph{replicon eigenvalue}): this translates into a diverging spin-glass susceptibility \cite{MPV, DeDominicis2006}, further evidence of being close to a critical point. A large response function can be interpreted as a signature that -- rather than being governed by a single leader -- the system tends to self-organize and respond collectively to external perturbations \cite{Mora2011}. It is worth noticing that since the Lyapunov function in Eq.
(\ref{lyapunov}) is convex everywhere, a replica-symmetry-broken regime cannot occur. The most likely scenario taking place here is akin to the phenomenology of a \emph{random linear programming} problem\cite{Franz2016}. Even though replica symmetry continues to hold, a marginally stable regime emerges for specific values of the control parameters. The advantage of introducing a high-dimensional version of the MacArthur model is that it provides an appealing and easily-defined reference model, albeit one that, in its current form, lends itself to describing only competitive interactions. To suitably address a wider spectrum of ecological scenarios, the random version of the Lotka-Volterra model will be presented in the following, accounting for both the competitive and the cooperative case. \section{The generalized random Lotka-Volterra model} A wide range of phenomena in population dynamics, including predation, mutualism, and resource-consumer interactions, can be reasonably well captured by a much simpler reference model: the disordered Lotka-Volterra model, whose typical features show up upon tuning a few (universal) control parameters. Moreover, it not only reproduces phenomenologically multiple facets of well-mixed ecosystems \cite{Barbier2018} but also turns out to be of great interest in interdisciplinary domains such as genetics, epidemiology\cite{holt1985infectious}, and evolutionary game theory \cite{Galla2013, Sanders2018}, up to the modelling of complex financial markets\cite{Sprott2004, Moran2019}.
The Lotka-Volterra equations describe the evolution of $S$ species subject to random interactions $\alpha_{ij}$\cite{Kessler2015,Bunin2017}: \begin{equation} \frac{d N_i}{dt}= N_i \left[1-N_i -\sum_{j, (j \neq i)} \alpha_{ij} N_j \right] +\sqrt{N_i} \eta_i(t) +\lambda_i \ , \label{dynamical_eqT} \end{equation} where $N_i(t)$ is the relative abundance of species $i$ (with $i=1,...,S$) at time $t$, meaning that the population is normalized with respect to the total number of individuals $N_\text{ind}$ that would be present in the absence of interactions. The elements of the random matrix $\alpha_{ij}$ are independent and identically distributed with mean $\langle \alpha_{ij}\rangle =\mu/S$, variance $\langle \alpha_{ij}^2\rangle_c=\sigma^2/S$ and $ \langle \alpha_{ij} \alpha_{ji} \rangle_c=\gamma \langle \alpha_{ij}^2\rangle_c$, where the subscript $_c$ stands for the connected part of the correlation. The parameter $\gamma$ ranges from $-1$ (the completely antisymmetric case, to which prey-predator interactions belong) to $1$ (the fully symmetric case, for which a Lyapunov function can be safely defined). The demographic noise contribution, accounting for deaths, births, and other unpredictable events, is modelled by $\eta_i(t)$, a Gaussian variable with zero mean and correlations $\langle \eta_i(t)\eta_j(t')\rangle=2T \delta_{ij} \delta(t-t')$, whose amplitude $T$ is inversely proportional to the total number of individuals $N_\text{ind}$. Such a multiplicative noise term allows us to investigate the effect of demographic stochasticity in a continuous setting \cite{Domokos2004discrete, Rogers2012, weissmann_simulation_2018}: the larger the global population, the smaller the strength $T$ of the demographic noise. Then, to guarantee that the probability distribution is integrable at small abundances, we need to introduce a small but finite immigration rate, which will be assumed to be constant over species, \emph{i.e.} $\lambda_i=\lambda$.
This is a smart way to avoid an absorbing boundary at $N_i=0$ due to the introduction of a finite demographic noise, which would otherwise push a finite fraction of species to zero\footnote{With no demographic noise and no immigration, a similar model was proposed in the nineties by Biscari and Parisi \cite{Biscari1995} and analyzed by studying the stability of the replica symmetric solution (single fixed point regime).}. In the case of random symmetric interactions, the stochastic process induced by Eq. (\ref{dynamical_eqT}) admits an equilibrium-like stationary distribution \cite{Biroli2018_eco, Altieri2021} with associated Hamiltonian: \begin{equation} H= -\sum_i \left(N_i-\frac{N_i^2}{2}\right)+\sum_{i<j}\alpha_{ij}N_iN_j+ \sum_i [T\ln N_i - \ln \theta(N_i-\lambda)] \ . \label{Hamiltonian_0} \end{equation} The next-to-last term is due to the demographic noise\footnote{The parameter $T$ plays the role of the temperature in a statistical mechanics setting. The mapping can be easily established by writing the corresponding Fokker-Planck equation with a white Gaussian noise.} whereas the counterbalancing role of the immigration is formally modelled by the Heaviside function $\theta(x)$, which corresponds to imposing a reflecting wall at $N_i=\lambda$. \subsection{Glassy phases and out-of-equilibrium dynamics} Adding a finite demographic noise not only allows us to get a more general picture but also to properly characterize the resulting phase diagram -- see Fig. 2 -- connecting the peculiar properties of each regime to those of its equilibria. \begin{figure}[h] \center \includegraphics[scale=0.36]{phase_diagram_log.pdf} \caption{Phase diagram showing how the variation of the demographic noise strength, $T$, and the heterogeneity of interactions, $\sigma$, can lead to three different phases.
In particular: i) a single equilibrium phase where the configurational landscape is purely convex; ii) a multiple equilibria regime, which is characterized by a $1$RSB stable solution and an exponential number of locally stable equilibria; iii) a \emph{Gardner phase}, which turns out to be associated with a hierarchical organization of the equilibria in the free-energy landscape. Figure taken from \cite{Altieri2021}.} \end{figure} \begin{figure}[h] \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[width=0.87\linewidth]{inplot_correlation} \label{fig:test1} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \includegraphics[width=0.85\linewidth]{1RSB_aging} \label{fig:test2} \end{minipage} \caption{Numerical simulations based on DMFT. Two-time correlator $C(t,t')$ in the single-equilibrium phase (RS, on the left) compared with the same correlator in the multiple equilibria phase (1RSB, on the right) plotted for different $t'$ and $S=500$. The dashed lines correspond to the values of the overlap parameters, which are obtained by the replica method. The inset in the left plot highlights a divergence in the decorrelation time as $T \rightarrow T_\text{$1$RSB}$, the critical temperature associated with an instability of the RS solution. Figures taken from \cite{Altieri2021}.} \end{figure} Then one may wonder how all these outcomes are expected to change when asymmetric interactions are also taken into account and which strategy proves to be the most appropriate in this case. Non-symmetric interactions strongly complicate the analysis since they correspond to plugging non-conservative forces in the dynamics thus violating the Fluctuation-Dissipation theorem (FDT) and bringing the system out of equilibrium. 
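Although the analytical treatment requires the machinery discussed next, the non-symmetric dynamics is easy to probe numerically by direct integration of Eq. (\ref{dynamical_eqT}). The following minimal sketch (illustrative parameter values, $\gamma=0$, no demographic noise, \emph{i.e.} $T=0$) integrates the deterministic equations with a small immigration rate:

```python
import numpy as np

rng = np.random.default_rng(2)

S = 200
mu, sig, lam = 4.0, 0.5, 1e-8          # illustrative values, not from the text
# Couplings with mean mu/S and variance sig^2/S; gamma = 0 means alpha_ij
# and alpha_ji are drawn independently (fully non-symmetric case).
alpha = rng.normal(mu / S, sig / np.sqrt(S), size=(S, S))
np.fill_diagonal(alpha, 0.0)

N = rng.random(S)                      # random initial abundances
dt = 0.05
for _ in range(4000):                  # plain Euler integration, T = 0
    dN = N * (1.0 - N - alpha @ N) + lam
    N = np.maximum(N + dt * dN, 0.0)   # abundances stay non-negative

survivors = np.mean(N > 1e-4)
print(f"fraction of surviving species: {survivors:.2f}")
```

For these weakly heterogeneous parameters the dynamics converges to a unique fixed point with most species surviving; increasing $\sigma$ destabilizes this regime, in line with the discussion in the text.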
Since it is no longer possible to define a Hamiltonian to be minimized and analyzed in terms of harmonic fluctuations around each of the minima, the cavity \cite{MPV, Mezard2009} and Dynamical Mean-Field Theory \cite{Roy2019numerical, Altieri2020dyn} formalisms come into play. The latter method, in particular, allows us to map a multi-variable problem into a single-body stochastic formalism, which eventually involves time-delayed friction and colored noise whose features have to be determined self-consistently. In other words, the two-time correlation $C(t,t')$ and response $R(t,t')$ functions are fixed in a self-consistent way given the probability distribution associated with the stochastic process and the distribution of random interactions. An analysis similar to the one illustrated for the symmetric case in Fig. 2 can then be performed. Without demographic fluctuations, increasing the variability of the interactions $\sigma$ destabilizes the single-fixed-point regime and eventually results in chaotic phases, as for neural networks and spin-glass models in the presence of asymmetric couplings. The introduction of a positive immigration rate leads to the stabilization of the chaotic dynamics -- with an indefinitely long lifetime -- corresponding to what we have referred to as the \emph{multiple equilibria regime} in the purely symmetric case. However, as soon as the immigration rate is set to zero, the chaotic regime is no longer stable \cite{Roy2020, pearce2020stabilization}, being replaced by slower and slower dynamics (\emph{aging}). \subsection{Non-logistic growth functions and pseudo-gap distributions} The Lotka-Volterra equations analyzed in the large-$S$ limit thus far allow us to make analytical advances in a very broad class of problems. In particular, by slightly modifying the dynamical Eq.
(\ref{dynamical_eqT}) through the introduction of a higher-order one-species potential, one can also investigate the so-called \emph{Allee effect}\cite{Allee1926}, which describes a positive correlation between mean individual fitness (or per-capita growth rate) and population density over some finite interval\cite{Gascoigne2004, Kramer2018}. This positive feedback loop mechanism, which inherited the name from the famous zoologist Allee, essentially relies on the observation that in many species under-crowding, and not only competition, contributes to limiting population growth. The Allee effect is called \emph{strong} if there exists an initial population threshold in the sense that the species pool needs a sufficiently large initial population to avoid extinction, whereas it is denoted as \emph{weak} if no threshold exists. Even in this second case, intra-specific cooperation leads to an initial increase in the growth rate as population increases (see Fig. (3)). \begin{figure}[h] \center \includegraphics[scale=0.28]{Allee_ws.pdf} \caption{Sketch of the strong Allee effect (in red) compared to a weak Allee effect (in orange). In the former, the finite threshold corresponds to an unstable fixed point (empty black circle); in the latter, no threshold in the population exists.} \end{figure} In the same spirit as before, one can take advantage of thermodynamic analysis and shed light on the resulting phase diagram by tuning the strength of random interactions and the demographic noise. Remarkable differences emerge with respect to the Lotka-Volterra logistic-growth case \cite{Altieri2022}. First, the number of states below the critical transition line is no longer exponential in the system size nor separated by extensive barriers, exactly as it would happen in equilibrium states of mean-field spin glasses (\emph{i.e.} the Sherrington-Kirkpatrick model \cite{MPV}). 
Furthermore, as soon as one considers a non-linear functional response of the species abundances, a pseudo-gap distribution in the local curvatures of the single-species effective potential appears\footnote{With the exponent $\alpha \ge 1$.}, $P(V^{''}_\text{eff}(N^{*})) \sim \vert V^{''}_\text{eff}(N^{*}) \vert^{\alpha}$, as a clear signature of a marginal low-demographic-noise (low-temperature) phase\cite{Altieri2022}. This outcome nicely generalizes the pseudo-gap distribution that was found for the instantaneous local fields in mean-field spin glasses -- and was previously obtained only in the case of discrete degrees of freedom\cite{Palmer1979} -- to a complex ecological model. \section{Conclusions and perspectives} Throughout the different sections of this short report, I have mostly discussed analytical outcomes made possible by the use of mean-field limits. These pages are therefore intended as a tribute to Giorgio Parisi, a way to thank him for the innovative and insightful techniques I have been learning over the years, and that have been successfully applied to such diverse and interdisciplinary contexts. As for future research, an interesting direction would be the investigation of spatially extended models, either in a completely connected topology where multiple patches (locations in space) are coupled by diffusion or in a sparse network with finite connectivity on each site. On the one hand, this metapopulation scenario, as originally proposed by Levins\cite{Hanski1998, Hanski2000, Etienne2002}, would allow for a more tangible comparison with real data, starting for instance with populations of small mammals and insects\cite{Elmhagen2001}; on the other hand, new appealing phenomena -- such as pattern formation, traveling waves and activity fronts \cite{Curatolo2020, Manna2021} -- are expected to appear. A rigorous theoretical analysis with an increasingly large number of species and, possibly, not only pairwise interactions is still missing.
A parallel line of research would concern an in-depth analysis of the role of different kinds of fluctuations -- demographic and environmental ones that might violate Detailed Balance – and their interplay with the deterministic dynamics. Such a classification will drive a better comparison with observational data, in particular for reproducing Species Abundance Distributions (SAD) of large ecological communities as well as for achieving a deeper understanding of the formal expression of functional responses given by local perturbations. This information would be extremely useful in the attempt to recover power-law and log-normal distributions for the species abundances that have not yet been identified in models accounting only for demographic fluctuations and symmetric interactions \cite{Lorenzana2022}. \newpage \section*{Acknowledgments} I thank Matthieu Barbier, Giacomo Gradenigo and Frédéric van Wijland for a critical reading of the draft. \bibliographystyle{ws-rv-van} \chapter[Using World Scientific's Review Volume Document Style]{Using World Scientific's Review Volume\\ Document Style\label{ch1}} \author[F. Author and S. Author]{First Author and Second Author\footnote{Author footnote.}} \address{World Scientific Publishing Co, Production Department,\\ 5 Toh Tuck Link, Singapore 596224 \\ f\_author@wspc.com.sg\footnote{Affiliation footnote.}} \begin{abstract} The abstract should summarize the context, content and conclusions of the paper in less than 200 words. It should not contain any references or displayed equations. Typeset the abstract in 9 pt Times roman with baselineskip of 11 pt, making an indentation of 1.5 pica on the left and right margins. \end{abstract} \body \section{Using Other Packages}\label{sec1} The WSPC class file has already loaded the packages \verb|amsfonts, amsmath,| \verb|amssymb, epsfig, rotating,| and \verb|url| at the startup. Please try to limit your use of additional packages. They frequently introduce incompatibilities. 
This problem is not specific to the WSPC styles; it is a general \LaTeX{} problem. Check in this manual whether the required functionality is already provided by the WSPC class file. If you do need third-party packages, send them along with the paper. In general, you should use standard \LaTeX{} commands as much as possible. \verb|Check.tex| is a utility to check that all the files required by the World Scientific review volume project are available in your present \LaTeX\ installation. \noindent {\bf Usage:} \verb|latex check.tex| \section{Layout} In order to facilitate our processing of your article, please give an easily identifiable structure to the various parts of the text by making use of the usual \LaTeX{} commands, or by your own commands defined in the preamble, rather than by using explicit layout commands, such as \verb|\hspace, \vspace, \large, \centering|, etc. Also, do not redefine the page-layout parameters. \section{User Defined Macros} User-defined macros should be placed in the preamble of the article, and not at any other place in the document. Such private definitions, i.e. definitions made using the commands \verb|\newcommand, \renewcommand|, \verb|\newenvironment| or \verb|\renewenvironment|, should be used with great care. Sensible, restricted usage of private definitions is encouraged. Large macro packages and definitions that are not used in the article should be avoided. Do not change existing environments, commands and other standard parts of \LaTeX. \subsection{Options and extra packages} The class options are: \begin{alphlist}[onethmnum] \item [\tt draft] To draw a border line around the text area.\\ Default: no border line around the text area. \item[\tt onethmnum] To number all theorem-like objects in a single sequence, e.g. Theorem~1, Definition 2, Lemma 3, etc.\\ Default: individual numbering of different theorem-like objects, e.g. Theorem 1, Definition 1, Lemma 1, etc.
\end{alphlist} Apart from the packages mentioned in Sec~\ref{sec1}, the WSPC class also requires the following inhouse packages for customizing the citations and references. \noindent{\tablefont \begin{tabular}{@{}ll@{}}\\ {\bf Vancouver (numbered)}\\ \quad\verb|\usepackage{ws-rv-van}| & -- Superscript$^1$\\ \quad\verb|\usepackage[square]{ws-rv-van}| & -- Bracketed [1]\\[6pt] {\bf Harvard (author-date)}\\ \quad\verb|\usepackage{ws-rv-har}| & -- (Author, 1994) \end{tabular}\\[6pt]} The contributors are advised to consult the managing editor for the chosen style. You can obtain these files from our web pages at: \url{http://www.wspc.com.sg/style/review_style.shtml} and \url{http://www.icpress.co.uk/authors/stylefiles.shtml#review}. \section{Chapters} Each chapter should normally be in a separate file. The chapter title is typeset by using the \verb|\chapter[#1]{#2}| command, where \verb|[#1]| is an optional short title to be used as a running head if the title is too long and \verb|#2| is the full title of the chapter. The short, edited version of the title appears in the table of contents and running head. The chapter title should be typed in a way such that only the initial character is in upper case and the rest is in lower case. \section{Sectional Units} Sectional units are obtained in the usual way, i.e. with the \LaTeX{} instructions \verb|\section|, \verb|\subsection|, \verb|\subsubsection|, and \verb|\paragraph|. \section{Section} Text... \subsection{Subsection} Text... \subsubsection{Subsubsection} Text... \paragraph{Paragraph} Text... \subparagraph{Subparagraph} Text... \section*{Unnumbered Section} Unnumbered sections can be obtained by \verb|\section*|. \section{Lists of Items}\index{lists} Lists are broadly classified into four major categories that can randomly be used as desired by the author: \begin{alphlist}[(d)] \item Numbered list. \item Lettered list. \item Unnumbered list. \item Bulleted list. 
\end{alphlist} \subsection{Numbered and lettered list}\index{lists!numbered and lettered} \begin{enumerate} \item The \verb|\begin{arabiclist}[]| command is used for the arabic number list (arabic numbers appearing within or without parenthesis), e.g., (1), (2); 1., 2., etc.\index{lists!numbered and lettered!arabic (1, 2, 3...)} \smallskip \item The \verb|\begin{romanlist}[]| command is used for the roman number list (roman numbers appearing within parenthesis), e.g., (i), (ii), etc.\index{lists!numbered and lettered!roman (i, ii, iii...)} \smallskip \item The \verb|\begin{Romanlist}[]| command is used for the capital roman \hbox{number list} (capital roman numbers appearing within parenthesis), e.g., (I), (II), etc.\index{lists!numbered and lettered!Roman (I, II, III...)} \smallskip \item The \verb|\begin{alphlist}[]| command is used for the alphabetical list (alphabetical characters appearing within parenthesis), e.g., (a), (b), etc.\index{lists!numbered and lettered!alphabetical (a, b, c...)} \smallskip \item The \verb|\begin{Alphlist}[]| command is used for the capital alphabetical list (capital alphabetical characters appearing within parenthesis), e.g., (A), (B), etc.\index{lists!numbered and lettered!Alphabetical (A, B, C...)} \end{enumerate} Note: For all the above mentioned lists (with the exception of alphabetic list), it is obligatory to enter the last entry's number in the list within the square bracket, to enable unit alignment. Items numbered with lowercase Roman numerals: \begin{romanlist}[(iii)] \item item one \item item two \begin{alphlist}[(a)] \item item one \item lists within lists can be numbered with lowercase alphabets \end{alphlist} \item item three \item item four. \end{romanlist} \subsection{Bulleted and unnumbered list}\index{lists!bulleted and unnumbered} \begin{enumerate} \item The \verb|\begin{itemlist}| command is used for the bulleted list. 
\smallskip \item The \verb|\begin{unnumlist}| command is used for creating the unnumbered list with the turnovers hangindent by 1\,pica. \end{enumerate} Lists may be laid out with each item marked by a dot: \begin{itemlist} \item item one \item item two \item item three \item item four. \end{itemlist} \subsection{Proofs} The WSPC document styles also provide a predefined proof environment for proofs. The proof environment produces the heading `Proof' with appropriate spacing and punctuation. It also appends a `Q.E.D.' symbol, $\square$, at the end of a proof, e.g., \begin{verbatim} \begin{proof} This is just an example. \end{proof} \end{verbatim} \noindent produces \begin{proof} This is just an example. \end{proof} The proof environment takes an argument in curly braces, which allows you to substitute a different name for the standard `Proof'. If you want to display, `Proof of Lemma', then write \begin{verbatim} \begin{proof}[Proof of Lemma] This is just an example. \end{proof} \end{verbatim} \noindent produces \begin{proof}[Proof of Lemma] This is just an example. \end{proof} \section{Theorems and Definitions}\index{theorems} The WSPC document styles contain a set of pre-defined environments for theorems, definitions, proofs, remarks etc. All theorem-like objects use individual numbering scheme by default. To number them in a single sequence, load the class option \verb|onethmnum| in the preamble., e.g., \verb|\documentclass[onethmnum]{ws-rv10x7}|. 
The following environments are available by default with WSPC document styles: \begin{center} {\tablefont \begin{tabular}{ll} \toprule Environment & Heading\\\colrule \verb|algorithm| & Algorithm\\ \verb|answer| & Answer\\ \verb|assertion| & Assertion\\ \verb|assumption| & Assumption\\ \verb|case| & Case\\ \verb|claim| & Claim\\ \verb|comment| & Comment\\ \verb|condition| & Condition\\ \verb|conjecture| & Conjecture\\ \verb|convention| & Convention\\ \verb|corollary| & Corollary\\ \verb|criterion| & Criterion\\ \verb|definition| & Definition\\ \verb|example| & Example\\ \verb|lemma| & Lemma\\ \verb|notation| & Notation\\ \verb|note| & Note\\ \verb|observation| & Observation\\ \verb|problem| & Problem\\ \verb|proposition| & Proposition\\ \verb|question| & Question\\ \verb|remark| & Remark\\ \verb|solution| & Solution\\ \verb|step| & Step\\ \verb|summary| & Summary\\ \verb|theorem| & Theorem\\\botrule \end{tabular}}\label{theo} \end{center} \begin{verbatim} \begin{theorem}[Longo, 1998] For a given $Q$-system... \[ N = \{x \in N; T x = \gamma (x) T, T x^* = \gamma (x^*) T\}\,, \] and $E_\Xi (\cdot) = T^* \gamma ... \end{theorem} \end{verbatim} \noindent generates \begin{theorem}[Longo, 1998] For a given $Q$-system... \noindent\[ N = \{x \in N; T x = \gamma (x) T, T x^* = \gamma (x^*) T\}\,, \] and $E_\Xi (\cdot) = T^* \gamma (\cdot) T$ gives a conditional expectation onto $N$. \end{theorem} \begin{verbatim} \begin{theorem} We have $\# H^2 (M \supset N) < ... \end{theorem} \end{verbatim} \noindent produces \begin{theorem} We have $\# H^2 (M \supset N) < \infty$ for an inclusion $M \supset N$ of factors of finite index. \end{theorem} \LaTeX{} provides \verb|\newtheorem| to create new theorem environments. 
To add a new theorem-type environments to a chapter, use \begin{verbatim} \newtheorem{example}{Example}[section] \let\Examplefont\upshape \def\Exampleheadfont{\bfseries} \end{verbatim} \section{Mathematical Formulas} \paragraph{Inline:} For in-line formulas use \verb|\( ... \)| or \verb|$ ... $|. Avoid built-up constructions like, fractions and matrices, in in-line formulas. Fractions in inline can be typed with a solidus. e.g. \verb|x+y/z=0|. \index{equations!inline} \paragraph{Display:} For numbered display formulas use the displaymath environment \index{equations!display} \verb|\begin{equation}...| \verb|\end{equation}|. And for unnumbered display formula use \verb|\[ ... \]|. For numbered displayed one line formulas always use the equation environment. Do not use \verb|$$ ... $$|. For example, the input for: \noindent\begin{equation} \mu(n, t) = \frac{\sum\limits^\infty_{i=1}1 (d_i < t, N(d_i) = n)} {\int\limits^t_{\sigma=0}1(N(\sigma)=n)d\sigma}\,. \label{eq1} \end{equation} \noindent is: \begin{verbatim} \begin{equation} \mu(n,t)=\frac{\sum\limits^\infty_{i=1}1(d_i < t, N(d_i) = n)} {\int\limits^t_{\sigma=0}1(N(\sigma)=n)d\sigma}\,.\label{eq1} \end{equation} \end{verbatim} For displayed multi-line formulas use the eqnarray environment. \begin{verbatim} \begin{eqnarray} \zeta\mapsto\hat{\zeta} & = & a\zeta+b\eta\label{eq2}\\ \eta\mapsto\hat{\eta} & = & c\zeta+d\eta\label{eq3} \end{eqnarray} \end{verbatim} \noindent\begin{eqnarray} \zeta\mapsto\hat{\zeta} & = & a\zeta+b\eta\label{eq2}\\ \eta\mapsto\hat{\eta} & = & c\zeta+d\eta\label{eq3} \label{eq2n3} \end{eqnarray} Superscripts and subscripts that are words or abbreviations, as in \( \pi_{\mathrm{low}} \), should be typed as roman letters; this is done as \verb|\( \pi_{\mathrm{low}} \)| instead of \( \pi_{low} \) done by \verb|\( \pi_{low} \)|. For geometric functions, e.g. exp, sin, cos, tan, etc. please use the macros \verb|\sin, \cos, \tan|. These macros gives proper spacing in mathematical formulas. 
It is also possible to use the \AmS-\LaTeX{} package,\cite{ams04} which can be obtained from the \AmS, from various \TeX{} archives. \section{Floats}\index{floats} \subsection{Tables}\index{floats!tables} Put the tables and figures in the text with the table and figure environments, and position them near the first reference of the table or figure in the text. Please avoid long caption texts in figures and tables. Do not put them at the end of the article. \begin{table}[b] \tbl{Sample table caption.} {\begin{tabular}{@{}cccc@{}} \toprule Piston mass$^{\text a}$ & Analytical frequency & \% Error \\ & (Rad/s) & (Rad/s) \\ \colrule 1.000 & \hphantom{0}281.0 & 0.07 \\ 0.010 & 2441.0 & 0.00 \\ 0.001 & 4130.0 & 0.16\\ \botrule \end{tabular} } \begin{tabnote} $^{\text a}$Sample table footnote. \end{tabnote} \label{tbl1} \end{table} \begin{verbatim} \begin{table}[b] \tbl{Sample table caption.} {\begin{tabular}{@{}ccc@{}} \toprule Piston mass$^{\text a}$ & ... \\ & (Rad/s) & (Rad/s) \\ \colrule 1.000 & ...\\ \botrule \end{tabular}} \begin{tabnote} $^{\text a}$Sample table footnote. \end{tabnote}\label{tbl1} \end{table} \end{verbatim} For most tables, the horizontal rules are obtained by: \begin{description}\index{floats!tables!rules} \item[toprule] one rule at the top \item[colrule] one rule separating column heads from data cells \item[botrule] one bottom rule \item[Hline] one thick rule at the top and bottom of the tables with multiple column heads \end{description} To avoid the rules sticking out at either end of the table add \verb|@{}| before the first and after the last descriptors, e.g. {@{}llll@{}}. Please avoid vertical rules in tables. But if you think the vertical rule is must, you can use the standard \LaTeX{} \verb|tabular| environment. By using \verb|\tbl| command in table environment, long captions will be justified to the table width while the short or single line captions are centered. 
If we need the fixed width for the tables, the command is \verb|\begin{tabular*}{#1}{@{}ll@{}}| and \verb|\end{tabular*}|. The argument \verb|#1| takes the value of table width. For example, if we need a table with 25pc width, then the command is \verb|\begin{tabular*}{25pc}{@{\extracolsep| \verb|{\fill}}ll@{}}|. Headings which span for more than one column should be set using \verb|\multicolumn{#1}{#2}{#3}| where \verb|#1| is the number of columns to be spanned, \verb|#2| is the argument for the alignment of the column head which may be either {c} --- for center alignment; {l} --- for left alignment; or {r} --- for right alignment, as desired by the users. Use {c} for column heads as this is the WS style and \verb|#3| is the heading. A simplified alternative version is \verb|\centre{#1}{#2}| where \verb|#1| is the number of columns to be spanned and \verb|#2| the heading. There should be a rule spanning the same columns below the heading. Termed as spanner or bridge rule, it is generated using the command \verb|\cline{n-m}| where \verb|n| is the number of the first spanned column and \verb|m| that of the last spanned column. \verb|\cline| should not be part of a row but follow immediately after a \verb|\\|. If a table contains note(s), as a universal thumb-rule they should appear beneath the table set to its width and seldom at the foot of the page. For the footnotes in the table environment the command is \verb|{\begin{tabnote}<text>\end{tabnote}}|. Appropriate symbols should be included in the body of the table matching their corresponding symbols in the footnotes where the footnotes are to be placed immediately after the \verb|{\begin{tabnote}| command and terminated before \verb|\end{tabnote}}\end{table}| command. The tables are designed to have a uniform style throughout the whole book. We prefer the border lines to be of the style as shown in our sample Tables. For the inner lines of the table, it looks better if they are kept to a minimum. 
\def\p{\phantom{$-$}} \def\pc{\phantom{,}} \def\p0{\phantom{0}} \begin{sidewaystable} \tbl{Positive values of $X_0$ by eliminating $Q_0$ from Eqs.~(15) and (16) for different values of the parameters $f_0$, $\lambda_0$ and $\alpha_0$ in various dimension.} {\begin{tabular}{@{}ccccccccccc@{}} \toprule\\[-6pt] $f_0$ &$\lambda_0$ &$\alpha_0$ &\multicolumn{8}{c}{Positive roots ($X_0$)}\\[3pt] \hline\\[-6pt] && &4D &5D &6D &7D &8D &10D &12D &16D\\[3.5pt] \hline\\[-6pt] \phantom{1}$-0.033$ &0.034 &\phantom{0}0.1\phantom{.01} &6.75507,\p0 &4.32936,\p0 &3.15991,\p0 &2.44524,\p0 &1.92883,\p0 &0.669541, &--- &---\\[3.5pt] &&&1.14476\pc\p0 &1.16321\pc\p0 &1.1879\pc\phantom{00} &1.22434\pc\p0 &1.29065\pc\p0 &0.415056\pc\\[3.5pt] \phantom{1}$-0.1$\phantom{33} &0.333 &\phantom{0}0.2\phantom{.01} &3.15662,\p0 &1.72737,\p0 &--- &--- &--- &--- &--- &---\\[3.5pt] &&&1.24003\pc\p0 &1.48602\pc\p0\\[3.5pt] \phantom{1}$-0.301$ &0.302 &0.001 &2.07773,\p0 &--- &--- &--- &--- &--- &--- &---\\[3.5pt] &&&1.65625\pc\p0\\[3.5pt] \phantom{1}$-0.5$\phantom{01} &0.51\phantom{2} &\phantom{0}0.001 &--- &--- &--- &--- &--- &--- &--- &---\\[3.5pt] $\phantom{1-}$0.1\phantom{01} &0.1\phantom{02} &\phantom{0}2\phantom{.001} &1.667,\phantom{000} &1.1946\phantom{00,} &--- &--- &--- &--- &--- &---\\[3.5pt] &&&0.806578\pc &0.858211\pc\\[3.5pt] $\phantom{1-}$0.1\phantom{01} &0.1\phantom{33} &10\phantom{.001} &0.463679\pc &0.465426\pc &0.466489\pc &0.466499\pc &0.464947\pc &0.45438\pc\p0 &0.429651\pc &0.35278\pc\\[3.5pt] $\phantom{1-}$0.1\phantom{01} &1\phantom{.333} &\phantom{0}0.2\phantom{01} &--- &--- &--- &--- &--- &--- &--- &---\\[3.5pt] $\phantom{-0}$1\phantom{.033} &0.001 &\phantom{0}2\phantom{.001} &0.996033, &0.968869, &0.91379,\p0 &0.848544,&0.783787, &0.669541, &0.577489, &---\\[3.5pt] &&&0.414324\pc &0.41436\pc\p0 &0.414412\pc &0.414489\pc &0.414605\pc &0.415056\pc &0.416214\pc\\[3.5pt] \phantom{10}\phantom{.033} &0.001 &\phantom{0}0.2\phantom{01} &0.316014, &0.309739, &--- &--- &--- &--- &--- 
&---\\[3.5pt] &&&0.275327\pc &0.275856\pc\\[3.5pt] \phantom{10}\phantom{.033} &0.1\phantom{33} &\phantom{0}5\phantom{.001} &0.089435\pc &0.089441\pc &0.089435\pc &0.089409\pc &0.08935\pc\p0 &0.089061\pc &0.088347\pc &0.084352\pc\\[3.5pt] \phantom{10}\phantom{.033} &1\phantom{.333} &\phantom{0}3\phantom{.001} &0.128192\pc &0.128966\pc &0.19718,\p0 &0.169063, &0.142103, &--- &--- &---\\[3.5pt] &&&& &0.41436\pc\p0 &0.414412\pc &0.414489\pc\\[3pt] \Hline \end{tabular}}\label{tbl2} \end{sidewaystable} Landscape tables and figures can be typeset with following environments: \begin{itemize}\index{floats!tables!landscape} \item \verb|sidewaystable| and \item \verb|sidewaysfigure|. \end{itemize} \noindent {\bf Example:} \begin{verbatim} \begin{sidewaystable} \tbl{Positive values of ...} {\begin{tabular}{@{}ccccccccccc@{}} ... \end{tabular}} \label{tbl2} \end{sidewaystable} \end{verbatim} \subsection{Figures}\index{floats!figures} The preferred graphics are tiff and Encapsulated PostScript, eps in short, for any type of graphic. Our \TeX\ installation requires eps, but we can easily convert tiff to eps. Many other formats, e.g. pict (Macintosh), wmf (Windows) and various proprietary formats, are not suitable. Even if we can read such files, there is no guarantee that they will look the same on our systems as on yours. Next adjust the scaling of the figure until it's correctly positioned, and remove the declarations of the lines and any anomalous spacing. If instead you wish to use some other method, then it's most important to leave the right amount of vertical space in the figure declaration to accommodate your figure (i.e.~remove the lines and change the space in the example). 
A figure is obtained with the following commands \begin{verbatim} \begin{figure} \centerline{\includegraphics[width=5.2cm]{rv-fig1}} \caption{Figure caption.} \label{fig1} \end{figure} \end{verbatim} \begin{figure} \centerline{\includegraphics[width=5.2cm]{rv-fig1}} \caption{Figure caption.} \label{fig1} \end{figure} \begin{sidewaysfigure} \begin{center} \includegraphics[width=6.6in]{rv-fig2} \end{center} \caption{The bifurcating response curves of system $\alpha=0.5$, $\beta=1.8$; $\delta=0.2$, $\gamma=0$: (a) $\mu=-1.3$; and (b) $\mu=0.3$.} \label{fig2} \end{sidewaysfigure} Very large figures and tables should be placed on a page by themselves, e.g., \index{floats!figures!landscape} \begin{verbatim} \begin{sidewaysfigure} \begin{center} \includegraphics[width=6.6in]{rv-fig2} \end{center} \caption{The bifurcating response curves of system $\alpha=0.5$, $\beta=1.8$; $\delta=0.2$, $\gamma=0$: (a) $\mu=-1.3$; and (b) $\mu=0.3$.} \label{fig2} \end{sidewaysfigure} \end{verbatim} \begin{figure} \begin{center} \subfigure[\label{fig3a}]{\includegraphics[width=2in]{rv-fig3a}}\qquad \subfigure[\label{fig3b}]{\includegraphics[width=2in]{rv-fig3b}} \caption{Two figures side-by-side. (a) Figure caption for figure 3a. (b) Figure caption for figure 3b. \label{fig3} \end{center} \end{figure} Figures Fig.~\ref{fig3a} and \fref{fig3b} are referred with \verb|Fig.~\ref{fig3a}| and \verb|\fref{fig3b}| commands. Side-by-side figures are obtained with: \begin{verbatim} \begin{figure} \begin{center} \subfigure[\label{fig3a}]{\includegraphics[width=2in]{rv-fig3a}}\qquad \subfigure[\label{fig3b}]{\includegraphics[width=2in]{rv-fig3b}} \caption{Two figures side-by-side. (a) Figure caption for figure 3a. (b) Figure caption for figure 3b. \label{fig3} \end{center} \end{figure} \end{verbatim} \section{Cross-references} Use \verb|\label| and \verb|\ref| for cross-references to equations, figures, tables, sections, subsections, etc., instead of plain numbers. 
Every numbered part to which one wants to refer, should be labelled with the instruction \verb|\label|. For example: \begin{verbatim} \begin{equation} \mu(n, t) = \frac{\sum ... d\sigma}\,. \label{eq1} \end{equation} \end{verbatim} With the instruction \verb|\ref| one can refer to a numbered part that has been labelled: \begin{verbatim} ..., see also Eq. (\ref{eq1}) \end{verbatim} \begin{itemize} \item labels should not be repeated. \item The \verb|\label| instruction should be typed immediately after (or one line below), e.g., \verb|\caption{Sample ... }\label{fig2.1}|. Labels should not be typed inside the argument of a number-generating instruction such as \verb|\section| or \verb|\caption|, \item For chapters, labels should be placed inside \verb|\chapter|, e.g.,\\ \verb|\chapter{Chapter Title\label{ch2}}|. \end{itemize} \begin{table}[t] \begin{center}{\tablefont Some useful shortcut commands for cross-referencing.\\ \begin{tabular}{@{}lll@{}} \toprule Shortcut & Equivalent & Output \\ command & \TeX\ command\\\colrule \multicolumn{3}{@{}l@{}}{In the middle of a sentence:}\\ \verb|\eref{eq1}| & Eq.~(\verb|\ref{eq1}|) & \eref{eq1}\\ \verb|\sref{sec1}| & Sec.~\verb|\ref{sec1}| & \sref{sec1}\\ \verb|\cref{ch1}| & Chap.~\verb|\ref{ch1}| & \cref{ch1}\\ \verb|\fref{fig1}| & Fig.~\verb|\ref{fig1}| & \fref{fig1}\\ \verb|\tref{tbl1}| & Table~\verb|\ref{tbl1}| & \tref{tbl1}\\[3pt] \multicolumn{2}{@{}l}{At the starting of a sentence:}\\ \verb|\Eref{eq1}| & Equation (\verb|\ref{eq1}|) & \Eref{eq1}\\ \verb|\Sref{sec1}| & Section~\verb|\ref{sec1}| & \Sref{sec1}\\ \verb|\Cref{ch1}| & Chapter~\verb|\ref{ch1}| & \Cref{ch1}\\ \verb|\Fref{fig1}| & Figure~\verb|\ref{fig1}| & \Fref{fig1}\\ \verb|\Tref{tbl1}| & Table~\verb|\ref{tbl1}| & \Tref{tbl1}\\\botrule \end{tabular}} \end{center} \end{table} \section{Citations}\label{cit}\index{citation} World Scientific's preferred style for Review Volume is the Vancouver (numbered) system, unless if the text is not very heavily referenced in 
which case the Harvard (author-date) system may be used. \begin{center} {\tablefont \begin{tabular}{@{}ll@{}}\toprule System & Package\\\colrule \multicolumn{2}{@{}l}{Vancouver (numbered)}\\ $\bullet$ Bracketed [1] & \verb|\usepackage[square]{ws-rv-van}|\\ $\bullet$ Superscript$^1$ & \verb|\usepackage{ws-rv-van}|\\ &(Default style)\\[3pt] \multicolumn{2}{@{}l}{Harvard (author-date)}\\ \verb|[Brown (1988)]|&\verb|\usepackage{ws-rv-har}|\\\botrule \end{tabular}} \end{center} Citations in the text use the labels defined in the bibitem declaration, for example, the first paper by Jarlskog\cite{jarl88} is cited using the command \verb|\cite{jarl88}|. The bibitem labels should not be repeated. For multiple citations do not use \verb|\cite{1}\cite{2}|, but use \verb|\cite{1,2}| instead. \subsection{Vancouver Style}\index{citation!numbered} Reference citations in the text are to be numbered consecutively in Arabic numerals, in the order of first appearance. The numbered citations can appear in two ways: \begin{romanlist}[(ii)] \item bracketed \item superscript (default style) \end{romanlist} \subsubsection{Bracketed}\index{citation!numbered!bracketed} References cited in the text are within square brackets, e.g., \begin{arabiclist}[(2)] \item \verb|``One can deduce from Ref.~\cite{benh93} that...''|\\ ``One can deduce from Ref.~[3] that...'' \smallskip \item \verb|``See Refs.~\cite{ams04,bake72, benh93,brow88} and \cite{davi93}|\\ \verb| for more details.''|\\ ``See Refs.~[1--3, 5] and [7] for more details.'' \end{arabiclist} \subsubsection{Superscript}\index{citation!numbered!superscript} References cited in the text appear as superscripts, e.g., \begin{arabiclist}[(2)] \item \verb|``...in the statement.\cite{ams04}''|\\ ``...in the statement.$^1$'' \smallskip \item \verb|``...have proven\cite{bake72} that this equation...''|\\ ``...have proven$^2$ that this equation...'' \end{arabiclist} When the reference forms part of the sentence, it should appear with ``Reference'' 
or ``Ref.'', e.g., \begin{arabiclist}[(2)] \item \verb|``One can deduce from Ref.~\refcite{benh93} that...''|\\ ``One can deduce from Ref.~3 that...'' \smallskip \item \verb|``See Refs.~\refcite{brow88} and \refcite{davi93} for more details.''|\\ ``See Refs.~5 and 7 for more details.'' \end{arabiclist} When superscripted citations are used, there should not be a space before \verb|\cite{key}|, e.g., citation: \verb|see\cite{zipf}|\hskip-60pt\lower8pt\hbox{$\uparrow$}\hskip-4pt\lower16pt\hbox{no character space here} \subsection{Harvard Style}\index{citation!author-date} Citations in the text use the labels defined in the \verb|bibitem| declaration, for example, [Jarlskog (1988)] is cited using the command \verb|\cite{jarl88}|. While \verb|\citet {jarl88}| produces Jarlskog (1988). See Sec.~\ref{secbib} for more details on coding references in Vancouver and Harvard styles. \section{Footnote} Footnotes are denoted by a Roman letter superscript in the text. Footnotes can be used as \begin{verbatim} ... total.\footnote{Sample footnote.} \end{verbatim} \noindent {\bf Output:} ... in total.\footnote{Sample footnote.} \section{Acknowledgments} Acknowledgments to funding bodies etc. may be placed in a separate section at the end of the text, before the Appendices. This should not be numbered so use \verb|\section*{Acknowledgements}|. \section{Appendix}\index{appendix} Appendices should be used only when absolutely necessary. They should come before the References. \begin{verbatim} \begin{appendix}[Optional Title] \section{Sample Appendix} Text... \subsection{Appendix subsection} Text... \end{appendix} \end{verbatim} \section{Bibliography}\label{secbib}\index{bibliography} \subsection{\btex\ users} \btex\index{BIBTeX} users should use our bibliography style file: \begin{itemize} \item For Vancouver (numbered) styled references \begin{verbatim} \usepackage{ws-rv-van} \bibliographystyle{ws-rv-van}
1406.5138
\section{Introduction} For a non-zero Laurent polynomial $P(z) \in \mathbb{C}[z, z^{-1}]$, the $k$-higher Mahler measure of $P$ is defined \cite{R2} as \[ m_k(P) = \int _{0}^{1} \log^k \left|P\left(e^{2\pi it}\right)\right| \,\mathrm{d} {t}. \] For $k=1$ this coincides with the classical (log) Mahler measure, defined as \[ m(P) = \log|a| + \sum_{j=1}^{n} \log \left(\max \{1,|r_j|\} \right), \,\,\,\mbox{for} \,\,\, P(z)=a \prod_{j=1}^{n} (z-r_j), \] since by Jensen's formula $m(P)=m_1(P)$ \cite{book}. While the classical Mahler measure has been studied extensively, the higher Mahler measure was introduced and studied only recently, by Kurokawa, Lalín and Ochiai \cite{R2} and Akatsuka \cite{R1}. It is difficult to evaluate $k$-higher Mahler measures of polynomials beyond the few specific examples computed in \cite{R1} and \cite{R2}, but it is comparatively easy to find their limiting values. In \cite{sinha} Lalín and Sinha answered Lehmer's question \cite{book} for higher Mahler measure by finding non-trivial lower bounds for $m_k$ on $\mathbb{Z}[z]$ for $k \geq 2.$ In \cite{B} it was shown, using Akatsuka's zeta function \cite{R1}, that for $|r|=1$, $|m_k(z+r)|/k! \to 1/\pi$ as $k \to \infty.$ In this paper we generalize this result by computing the same limit for an arbitrary Laurent polynomial $P(z) \in \mathbb{C}[z, z^{-1}]$, using a different technique. \begin{thm} \label{thm:main} Let $P(z) \in \mathbb{C}\left[z, z^{-1}\right]$ be a Laurent polynomial, possibly with repeated roots. Let $z_1,\dots, z_n$ be the distinct roots of $P$. Then \[ \lim_{k \to \infty} \frac{\left| m_k(P) \right|}{k!} = \frac{1}{\pi} \, \sum_{z_j \in S^1} \frac{1}{\left| P'(z_j) \right|}, \] where $S^1$ is the complex unit circle $|z|=1$, and the right-hand side is taken as $\infty$ if $P'(z_j)=0$ for some $z_j\in S^1$, i.e., if $P$ has a repeated root on $S^1$. 
\end{thm} \section{Proof of the theorem} We first prove several lemmas which essentially show that the integrand may be linearly approximated near the roots of $P$ on $S^1$. \begin{lem} \label{lem1} Let $P(z) \in \mathbb{C}\left[z, z^{-1}\right] $ be a Laurent polynomial and $A \subseteq [0,1]$ be a closed set such that $P\left(e^{2 \pi i t}\right) \neq 0$ for all $t \in A.$ Then \[ \lim_{k\to\infty} \frac{1}{k!} \int_A \log^k \left| P\left(e^{2 \pi i t}\right) \right| \d{t}=0. \] \end{lem} \begin{proof} Since $A$ is a closed, hence compact, subset of $[0,1]$ and $P\left(e^{2\pi it}\right)$ is continuous and non-vanishing on $A$, there exist constants $b$ and $B$ such that $0 < b \leq \left| P\left(e^{2\pi it}\right) \right| \leq B$ on $A$. Then for every $t \in A$ and every positive integer $k$ we have $\left|\log^k \left| P\left(e^{2\pi it}\right) \right|\right| \leq M^k$, where $M=\max \{|\log b|, |\log B|\}$, and therefore \[ \frac{1}{k!}\left|\int _A\log^k \left| P\left(e^{2\pi it}\right) \right| \d{t}\right| \leq \frac{\mu A \, M^k}{k!}, \] where $\mu A$ is the Lebesgue measure of $A$. The result follows by letting $k$ tend to infinity. \end{proof} \begin{lem} \label{lem:lapprx} Let $P(z) \in \mathbb{C}\left[z, z^{-1}\right] $ be a Laurent polynomial with a root of order one at $z_0=e^{2\pi it_0},$ and $P'(z)$ be its derivative with respect to $z$. Then for each $\varepsilon\in(0,1)$ there exists $\delta >0$ such that $|t-t_0| < \delta$ implies \[ \left| 2 \pi (1-\varepsilon)(t-t_0) P' \left( e^{2 \pi i t_0} \right) \right| \leq \left| P\left( e^{2 \pi i t} \right) \right| \leq \left| 2 \pi (1+\varepsilon)(t-t_0) P' \left( e^{2 \pi i t_0} \right) \right|. \] \end{lem} \begin{proof} Set $f(t)=P\left(e^{2 \pi i t}\right).$ Then $f'(t_0)=2 \pi i e^{2 \pi i t_0} P'\left(e^{2 \pi i t_0}\right)\ne 0$, so that $\left|f'(t_0)\right| = 2 \pi \left|P'\left(e^{2 \pi i t_0}\right)\right|$, and \[ f'(t_0)=\lim_{t \to t_0} \frac{f(t)-f(t_0)}{t-t_0}. 
\] Since $f'(t_0)\ne 0$, it follows that for each $\varepsilon\in(0,1)$ there exists $\delta > 0$ such that $0<|t-t_0|<\delta$ implies \[ 1-\varepsilon < \left| \frac{f(t)-f(t_0)}{(t-t_0)} \cdot \frac{1}{f'(t_0)} \right| < 1+\varepsilon, \] which proves the lemma since $f(t_0)=P(z_0)=0.$ \end{proof} \begin{lem} \label{lem3} Let $c \neq 0,$ and $t_0 \in \mathbb{R}.$ Then for all $\varepsilon > 0,$ \[ \lim_{k \to \infty} \frac{1}{k!} \left|\,\, \int_{t_0 - \varepsilon}^{t_0 + \varepsilon} \log^k|c(t-t_0)| \d{t} \right| = \frac{2}{|c|}. \] \end{lem} \begin{proof} For $k \geq 1$ and $x>0,$ it follows from integration by parts and induction that \[ \int_{0}^{x} \log^k u \d{u} = x \log ^k x +x \sum_{j=1}^{k} \frac{(-1)^j \, k! \, \log^{k-j} \, x}{(k-j)!}. \] Using the even symmetry of the integrand and substituting $u=|c(t-t_0)|$, we have \[ \frac{1}{k!} \left|\,\,\int_{t_0 - \varepsilon}^{t_0 + \varepsilon} \log^k|c(t-t_0)| \d{t} \right| = \frac{2}{|c|\,k!} \left|\,\int_{0}^{|c\varepsilon|} \log^k u \,\d{u} \right|, \] and it follows that \begin{eqnarray*} \lim_{k \to \infty} \frac{1}{k!} \left|\,\,\int_{t_0 - \varepsilon}^{t_0 + \varepsilon} \log^k|c(t-t_0)| \d{t} \right| &=& \lim_{k \to \infty} \frac{2}{|c|\,k!} \left|\,\int_{0}^{|c\varepsilon|} \log^k u \d{u} \right|\\ &=& 2 \varepsilon \lim_{k \to \infty} \! \left| \frac{\log^k |c \varepsilon|}{k!} + \sum_{j=1}^{k} \frac{(-1)^j \log^{k-j} |c \varepsilon|}{(k-j)!} \right| \\ &=& 2 \varepsilon \left| \sum_{n=0}^{\infty} \frac{(-1)^{n} \log^{n} |c \varepsilon|}{n!} \right| \\ &=& 2 \varepsilon e^{- \log |c \varepsilon|} = 2/|c|. 
\end{eqnarray*} \end{proof} \begin{lem} \label{lem4} Let $P(z) \in \mathbb{C}\left[z, z^{-1}\right] $ be a Laurent polynomial with a root of order one at $z_0=e^{2 \pi i t_0}.$ Then for all sufficiently small $\delta > 0,$ \[ \lim_{k \to \infty} \frac{1}{k!} \left|\,\, \int_{t_0-\delta}^{t_0+\delta} \log^k \left| P \left( e^{2 \pi i t} \right) \right| \d{t} \right| =\frac{1}{\pi \left| P'\left( e^{2\pi it_0}\right) \right|}. \] \end{lem} \begin{proof} First notice that since $z_0$ has order one, it cannot be a root of $P'(z).$ Now let $\varepsilon \in (0,1).$ By Lemma \ref{lem:lapprx} there is a $\delta > 0$ such that $|t-t_0| < \delta$ implies \[ \left| 2\pi(1-\varepsilon)(t-t_0) P'\left( e^{2\pi it_0}\right)\right| \leq \left|P\left(e^{2\pi it}\right)\right| \leq \left|2\pi(1+\varepsilon)(t-t_0) P'\left(e^{2\pi it_0}\right)\right| \leq 1. \] Setting $c=2\pi(1-\varepsilon)P'\left(e^{2\pi it_0}\right)$ and $d=2\pi(1+\varepsilon)P'\left(e^{2\pi it_0}\right)$ it follows that for $0< |t-t_0|<\delta$, \[ \log |c(t-t_0)| \leq \log\left|P\left(e^{2\pi it}\right)\right| \leq \log|d(t-t_0)| \leq 0, \] and hence \[ \left|\log^k|c(t-t_0)|\right| \geq \left|\log^k\left|P\left(e^{2\pi it}\right)\right|\right| \geq \left|\log^k|d(t-t_0)| \right| \geq 0, \] for all $k \in \mathbb{N}.$ Therefore, \[ \int \limits_{t_0-\delta}^{t_0+\delta} \left|\log^k|c(t-t_0)|\right| \d{t} \geq \! \! \! \int \limits_{t_0-\delta}^{t_0+\delta} \left|\log^k\left|P\left(e^{2\pi it}\right)\right|\right|\d{t} \geq \! \! \! \int \limits_{t_0-\delta}^{t_0+\delta} \left|\log^k|d(t-t_0)| \right| \d{t} \geq 0. \] But on $(t_0-\delta,t_0+\delta),$ for each fixed $k$, either all three functions $\log^k |c(t-t_0)|$, $\log^k \left|P\left(e^{2\pi it}\right)\right|$ and $\log^k |d(t-t_0)|$ are negative (if $k$ is odd), or positive (if $k$ is even). 
So the integrals of their absolute values are equal to the absolute values of their integrals and therefore we have \[ \left|\,\,\int \limits_{t_0-\delta}^{t_0+\delta} \log^k |c(t-t_0)| \d{t} \right| \geq \left|\,\,\int \limits_{t_0-\delta}^{t_0+\delta} \log^k \left|P\left(e^{2\pi it}\right)\right| \d{t}\right| \geq \left|\,\,\int \limits_{t_0-\delta}^{t_0+\delta} \log^k |d(t-t_0)| \d{t} \right|. \] By Lemma \ref{lem3} it follows that \[ \frac{2}{|c|} \geq \lim_{k \to \infty} \frac{1}{k!} \left|\,\, \int_{t_0-\delta}^{t_0+\delta} \log^k \left|P\left(e^{2\pi it}\right)\right| \d{t} \right| \geq \frac{2}{|d|}. \] Since $c=2 \pi (1-\varepsilon)P' \left( e^{2 \pi i t_0} \right)$ and $d=2 \pi (1+\varepsilon)P' \left( e^{2 \pi i t_0} \right)$, these bounds are $1/\big(\pi(1-\varepsilon)\left|P'\left(e^{2\pi it_0}\right)\right|\big)$ and $1/\big(\pi(1+\varepsilon)\left|P'\left(e^{2\pi it_0}\right)\right|\big)$, and since $\varepsilon \in (0,1)$ was arbitrary, we are done. \end{proof} With these lemmas, we now proceed to prove the main theorem. \begin{proof}[Proof of Theorem \ref{thm:main}] First notice that \[ \frac{m_k(P)}{k!} = \frac{1}{k!} \int_{0}^{1} \log^k \left|P\left(e^{2\pi it}\right)\right|\d{t}. \] If $P(z)$ does not have any roots on $S^1$ then choosing $A=[0,1]$ and applying Lemma \ref{lem1} we see that $ \left| m_k(P) \right|/k! \to 0$ as $k \to \infty$ and the theorem holds in this case. Now let $t_1,\dots, t_m\in[0,1]$ be such that $e^{2\pi it_1},\dots, e^{2\pi it_m}$ are the distinct roots of $P$ on $S^1$. Let $\delta>0$ be sufficiently small that $\left|P\left(e^{2\pi it}\right)\right|<1$ for all $t$ in each interval $(t_j-\delta, t_j+\delta)$, $j=1,\dots, m$, and that these intervals are disjoint, and define \[ A=[0,1] \smallsetminus \bigcup _{j=1}^{m} (t_j -\delta, t_j + \delta). 
\] Using Lemma \ref{lem1}, and the fact that $\log|P(e^{2\pi it})|<0$ on $[0,1]\setminus A$, we find that \begin{eqnarray} \lim_{k\to\infty}\frac{|m_k(P)|}{k!} &=& \lim_{k\to\infty} \frac{1}{k!} \left| \int_A \log^k|P(e^{2\pi it})|\d{t} + \int_{[0,1]\setminus A} \log^k|P(e^{2\pi it})|\d{t} \right| \nonumber \\ &=& \lim_{k\to\infty} \sum_{j=1}^m \frac{1}{k!} \left| \int_{t_j-\delta}^{t_j+\delta} \log^k|P(e^{2\pi it})|\d{t} \right|. \label{eqn:decomp} \end{eqnarray} If $P$ has no repeated roots on $S^1$, by Lemma \ref{lem4}, this final sum is equal to $\pi^{-1}\sum_{j=1}^m |P'(e^{2\pi it_j})|^{-1}$, and so the theorem is proven in this case. Finally, if $P$ has a repeated root on $S^1$, we may assume without loss of generality that $P(z_1)=P'(z_1)=0$ where $z_1=e^{2 \pi i t_1}$. With $f(t)=P(e^{2\pi it})$, we have that $f(t_1)=f'(t_1)=0$. Then for each $\varepsilon\in(0,1)$ there is a $\delta_\varepsilon \in (0,1)$ such that \[ \left| \frac{f(t)}{t-t_1}\right| = \left| \frac{f(t)-f(t_1)}{t-t_1}\right| \le \varepsilon, \hspace{12pt}\mbox{ for all } 0<|t-t_1|<\delta_\varepsilon. \] It follows that $\log|f(t)|\le \log|\varepsilon(t-t_1)|<0$ for all $0<|t-t_1|<\delta_\varepsilon$, and so \[ \left| \log^k|f(t)|\right| \ge \left| \log^k|\varepsilon(t-t_1)|\right|, \hspace{12pt}\mbox{ for all } 0<|t-t_1|<\delta_\varepsilon. \] We may assume that $\delta_\varepsilon < \delta$, and, using \eqref{eqn:decomp} and Lemma \ref{lem3}, deduce that \begin{eqnarray*} \lim_{k\to\infty}\frac{|m_k(P)|}{k!} &\ge & \lim_{k\to\infty} \frac{1}{k!} \left| \int_{t_1-\delta}^{t_1+\delta} \log^k|P(e^{2\pi it})|\d{t} \right| \\ & = & \lim_{k\to\infty} \frac{1}{k!} \int_{t_1-\delta}^{t_1+\delta} \left| \log^k|P(e^{2\pi it})| \right| \d{t} \\ &\ge & \lim_{k\to\infty} \frac{1}{k!} \int_{t_1-\delta_\varepsilon}^{t_1+\delta_\varepsilon} \left| \log^k|\varepsilon(t-t_1)| \right| \d{t} \\ &=& \frac{2}{|\varepsilon|}. \end{eqnarray*} Since $\varepsilon\in(0,1)$ was arbitrary, the limit in question diverges to $\infty$ and the theorem is proven.
\end{proof} \bibliographystyle{elsarticle-num}
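Although the statement of Lemma \ref{lem3} is not reproduced above, the proof uses it in the form $\lim_{k\to\infty}\frac{1}{k!}\int_{t_0-\delta}^{t_0+\delta}\left|\log^k|c(t-t_0)|\right|\d{t} = \frac{2}{|c|}$. For real $c>0$ with $0 < c\delta < 1$, the substitution $u=-\log(c|t-t_0|)$ turns this quantity into an incomplete-gamma expression with a closed form for integer $k$, which gives a quick numerical sanity check (our own sketch, not part of the paper; the values of $c$ and $\delta$ are arbitrary):

```python
import math

def lemma3_ratio(c: float, delta: float, k: int) -> float:
    """(1/k!) * integral over (t0-d, t0+d) of |log^k|c(t-t0)|| dt, in closed form.

    Valid for 0 < c*delta < 1: substituting u = -log(c|t - t0|) gives
    (2/c) * Gamma(k+1, x0)/k! with x0 = -log(c*delta), and for integer k,
    Gamma(k+1, x0)/k! = e^{-x0} * sum_{j=0}^{k} x0^j / j!.
    """
    assert 0 < c * delta < 1
    x0 = -math.log(c * delta)
    partial = sum(x0**j / math.factorial(j) for j in range(k + 1))
    return (2.0 / c) * math.exp(-x0) * partial

c, delta = 3.0, 0.1
values = [lemma3_ratio(c, delta, k) for k in (5, 20, 80)]
# the values increase toward the predicted limit 2/c = 2/3
```

The partial sums increase to $e^{x_0}$, so the ratio increases to $2/c$, matching the limit used from Lemma \ref{lem3}.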
\section{Introduction} Polynomial programming is a class of mathematical programming that seeks to minimize a polynomial objective function subject to polynomial constraints. These are optimization problems of the form \begin{align} \min_{x \in \mathbb{R}^n} \ f_0(x) \quad \text{subject to} \quad F(x) = 0, \tag{Opt} \label{eq:opt} \end{align} where $F(x) = \{f_1(x),\ldots,f_m(x)\}$ and $f_i(x) \in \mathbb{R}[x_1,\ldots,x_n]$ are polynomials. Throughout this paper we use the standard multi-index notation for polynomials. Namely, we denote \[f(x) = \sum_{\alpha \in \mathcal{A}} c_\alpha x^\alpha\] where $\mathcal{A} \subset \mathbb{N}^n$ is the monomial support of $f$ and for $\alpha \in \mathbb{N}^n$, $x^\alpha := x_1^{\alpha_1}\cdots x_n^{\alpha_n}$. Polynomial programs have broad modelling power and therefore have naturally arisen in many applications including signal processing, combinatorial optimization, power systems engineering, and more \cite{tan2001the,poljak1995a,molzahn2019a}. In general, these problems are NP-hard to solve \cite{vavasis1990quadratic} but there exist many solution techniques and heuristics to tackle \eqref{eq:opt}. Some popular examples include the moment/SOS hierarchy \cite{parrilo2003semidefinite,lasserre2000global,wang2021tssos,invariants2022lindberg}, local methods \cite{boyd2004convex,potra2000interior} and the method of Lagrange multipliers \cite{bertsekas2014constrained}. This work proposes solving \eqref{eq:opt} by using the method of Lagrange multipliers along with techniques from numerical algebraic geometry. The method of Lagrange multipliers works by lifting a constrained optimization problem to a higher-dimensional space and then considering an unconstrained optimization problem there. Given a problem of the form \eqref{eq:opt} we define its \emph{Lagrangian} as \[L(x,\lambda) = f_0(x) - \sum_{j=1}^m \lambda_j f_j(x).
\] The corresponding \emph{Lagrange system} is then defined as $\mathcal{L}_{f_0,F} = \{\ell_1,\ldots,\ell_n,f_1,\ldots,f_m\}$ where \[ \ell_i = \frac{\partial}{\partial x_i} L = \frac{\partial}{\partial x_i} (f_0 - \sum_{j=1}^m \lambda_j f_j).\] The main idea behind using Lagrange multipliers is that smooth critical points of \eqref{eq:opt} are zeroes of $\mathcal{L}_{f_0,F}$. Therefore, if we find all $(x,\lambda) \in \mathbb{R}^{n+m}$ that satisfy $\mathcal{L}_{f_0,F}(x,\lambda) = 0$, we will find all smooth local critical points, and therefore (so long as the variety of $F(x) = 0$ is smooth) the global optimum. For fixed $f_0,F$ the number of complex solutions to $\mathcal{L}_{f_0,F}= 0$ is called the \emph{algebraic degree} of \eqref{eq:opt}. For generic $f_0,F$ a formula for the algebraic degree is given in \cite[Theorem 2.2]{MR2507133} as \begin{align} d_1 \cdots d_m S_{n-m}(d_0-1,d_1-1,\ldots,d_m-1) \label{eq: alg deg} \end{align} where $d_i = \deg(f_i)$ and \[S_{r}(n_1, \ldots,n_k) = \sum_{i_1+\ldots+i_k =r} n_1^{i_1}\cdots n_k^{i_k}.\] The algebraic degree has also been defined and studied for other classes of convex optimization problems in \cite{MR2496496} and \cite{MR2546336}. When $f_0$ is the Euclidean distance function, i.e., $f_0 = \lVert x - u \rVert_2^2$ for some $u \in \mathbb{R}^n$, then the number of complex critical points to \eqref{eq:opt} is called the \emph{ED degree} of $F$. The ED degree was first defined in \cite{DraismaTheEDD}. Since then, other work has bounded the ED degree of a variety \cite{MR3451425}, studied the ED degree for real algebraic groups \cite{baaijensRealAlgGroups}, Fermat hypersurfaces \cite{leeFermat}, orthogonally invariant matrices \cite{drusvyatskiyOrthogonally}, smooth complex projective varieties \cite{aluffiEDComplex}, the multiview variety \cite{maximMultiview} and when $F$ consists of a single polynomial \cite{breiding2020euclidean}.
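As a minimal illustration of the Lagrange-system construction (our own SymPy sketch; the objective and constraint are hypothetical examples, not from this paper), consider minimizing $f_0 = x_1 + x_2$ over the unit circle $f_1 = x_1^2 + x_2^2 - 1 = 0$. Here $n = 2$, $m = 1$, $d_0 = 1$, $d_1 = 2$, so \eqref{eq: alg deg} predicts $d_1\, S_{1}(0,1) = 2$ critical points:

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')

f0 = x1 + x2                 # linear objective, d0 = 1
f1 = x1**2 + x2**2 - 1       # single constraint (unit circle), d1 = 2

# Lagrangian L = f0 - lam*f1 and Lagrange system {dL/dx1, dL/dx2, f1}
L = f0 - lam * f1
lagrange_system = [sp.diff(L, x1), sp.diff(L, x2), f1]

crit = sp.solve(lagrange_system, [x1, x2, lam], dict=True)
# two smooth critical points, +/-(sqrt(2)/2, sqrt(2)/2),
# matching the algebraic degree d1 * S_{n-m}(d0-1, d1-1) = 2
```

Both critical points lie on the constraint variety, and their number agrees with the generic count from \eqref{eq: alg deg}.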
Similarly, when $f_0$ is the likelihood function, then the number of complex critical points of \eqref{eq:opt} is called the \emph{ML degree}. The ML degree was first defined in \cite{MR2230921, HostenSolving} and since then the relationship between ML degrees and Euler characteristics \cite{MR3103064}, Euler obstruction functions~\cite{MR3686780} and toric geometry \cite{MR3907355, MR4103774, lindberg2021the} has been extensively studied. Further, the ML degree of various statistical models has also been considered \cite{MR2988436,manivel,MR4219257,MR4196404}. More recently, the algebraic degree of \eqref{eq:opt} has been considered when $f_0,\ldots,f_m$ are defined by sparse polynomials. In this case, the algebraic degree may be less than the bound given in \eqref{eq: alg deg}. The authors in \cite{ourpaper} showed that in some situations, the algebraic degree is equal to the mixed volume of the corresponding Lagrange system. One corollary of this result, as well as the analogous results for the ML degree and Euclidean distance degree in \cite{ourpaper, lindberg2021the,breiding2020euclidean}, is that if $f_0,F$ have generic coefficients, then \emph{polyhedral homotopy} algorithms are optimal for solving the corresponding Lagrange system in the sense that exactly one path is tracked for each complex solution of $\mathcal{L}_{f_0,F} =0$. A downside of polyhedral homotopy algorithms is that there is a bottleneck associated with computing a start system. The work in this paper makes progress in this regard by explicitly designing a polyhedral homotopy algorithm for \eqref{eq:opt} when $m = 1$, circumventing the standard bottleneck. We see this paper as the first step and inspiration for an exciting new line of research, namely explicitly constructing optimal homotopy algorithms for specific parameterized polynomial systems of equations. The results of this paper are organized as follows.
In \Cref{sec:poly homotopy}, we review the main idea behind polyhedral homotopy. In \Cref{sec:hyper} we explicitly construct a polyhedral homotopy algorithm for the case when there exists a single constraint. In \Cref{sec: hyp refined} we generalize this result to when this constraint is sparse. We present numerical results which show that our algorithm outperforms existing polyhedral homotopy solvers in \Cref{sec: numerics} and explicitly compute the algebraic degree of a certain multiaffine polynomial program in \Cref{sec: multi}. \section{Polyhedral homotopy continuation} \label{sec:poly homotopy} Homotopy continuation algorithms are a broad class of numerical algorithms used for finding all isolated solutions to a square system of polynomial equations. Specifically, suppose we have a square system of polynomial equations \[F(x) = \{f_1(x),\ldots,f_n(x) \} = 0\] where $f_i \in \mathbb{R}[x_1,\ldots,x_n]$ and the number of complex solutions to $F(x) = 0$ is finite. Homotopy continuation works by tracking solutions from an `easy' system of polynomial equations (called the \emph{start system}) to the desired one (called the \emph{target system}). This is done by constructing a \emph{homotopy}, \[H(t;x) : [0,1] \times \mathbb{C} ^n \longrightarrow \mathbb{C} ^n,\] such that \begin{enumerate} \item $H(0 ;x) = G(x)$ and $H(1;x) = F(x)$, \item the solutions to $G(x) = 0$ are isolated and easy to find, \item $H$ has no singularities along the path $t \in [0,1)$, and \item $H$ is sufficient for $F$. \end{enumerate} Here we call a homotopy $H$ \emph{sufficient} for $F = H(1; x)$ if, by solving the ODE initial value problems $\frac{\partial H}{\partial t} + \frac{\partial H}{\partial x} \dot{x} = 0$ with initial values $\{ x \ : \ G(x) = 0\}$, all isolated solutions of $F(x) = 0$ can be obtained.
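To make the path-tracking step concrete, the following sketch (ours; the one-variable start and target systems are hypothetical examples) tracks the two roots of $G(x) = x^2 - 1$ to the roots of $F(x) = x^2 + 2x - 3$ along $H(t;x) = (1-t)G(x) + tF(x)$, using an Euler predictor and one Newton corrector per step. In this example the paths stay real and nonsingular, so no generic constant is needed:

```python
def H(t, x):      return x*x + 2*t*x - 1 - 2*t   # (1-t)(x^2-1) + t(x^2+2x-3)
def dH_dx(t, x):  return 2*x + 2*t
def dH_dt(t, x):  return 2*x - 2

def track(x, steps=200):
    """Euler predictor + one Newton corrector per step, for t from 0 to 1."""
    dt = 1.0 / steps
    t = 0.0
    for _ in range(steps):
        x += dt * (-dH_dt(t, x) / dH_dx(t, x))   # predictor: dx/dt = -H_t / H_x
        t += dt
        x -= H(t, x) / dH_dx(t, x)               # corrector: one Newton step
    return x

endpoints = sorted(track(x0) for x0 in (1.0, -1.0))
# the target roots of x^2 + 2x - 3 are -3 and 1
```

Each start root of $G$ flows to a distinct root of $F$; for singular or complex paths one would instead track in $\mathbb{C}$ with a generic constant, as described next.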
One example of a homotopy, known as a \textit{straight line homotopy}, is defined as a convex combination of the start and target systems: \begin{align*} H(t;x) &= \gamma (1-t)G(x) + t F(x) \end{align*} where $\gamma \in \mathbb{C}$ is a generic constant. Choosing generic $\gamma$ ensures $H(t;x)$ is non-singular for $t \in [0,1)$. Path tracking is typically done using standard predictor-corrector methods. For more information, see \cite{bertini-book, Sturmfels-CBMS}. The main question when employing homotopy continuation techniques is how to select such an `easy' start system. If the target system roughly achieves the \emph{Bezout bound} then a \emph{total degree} start system is suitable. An example of this is \[G(x) = \{x_1^{d_1} -1,\ldots,x_n^{d_n} -1 \} \] where $\deg(f_i) = d_i$. Often in applications, the target system is defined by sparse polynomial equations. In this case, the Bezout bound can be a strict upper bound on the total number of complex solutions so using a total degree start system leads to wasted computation. A celebrated result, known as the \emph{BKK bound}, gives an upper bound on the number of complex solutions in the torus to a sparse polynomial system. In order to state the BKK bound, we need a few preliminary definitions but recommend \cite{mvoltext} for more details. Given a polynomial $f = \sum_{\alpha \in \mathcal{A}} c_\alpha x^\alpha \in \mathbb{C}[x_1,\ldots,x_n]$ the \emph{Newton polytope} of $f$ is \[ {\mathrm{Newt}}(f) = {\mathrm{Conv}} \{\alpha \ : \alpha \in \mathcal{A}\}. \] Given convex polytopes $P_1,\dots,P_n \subset \mathbb{R}^n$, consider the Minkowski sum $\mu_1 P_1 +\cdots +\mu_n P_n.$ A classic result shows that \[Q(\mu_1,\dots,\mu_n) = {\mathrm{Vol}}(\mu_1 P_1 +\cdots +\mu_n P_n)\] is a homogeneous degree $n$ polynomial in $\mu_1,\dots, \mu_n$. The \emph{mixed volume} of $P_1,\dots,P_n$ is the coefficient of $\mu_1 \cdots \mu_n$ of $Q$. We denote it as ${\mathrm{MVol}}(P_1,\ldots, P_n)$.
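When $n = 2$, $Q$ is a homogeneous quadratic, so the definition above reduces to ${\mathrm{MVol}}(P_1,P_2) = {\mathrm{Vol}}(P_1+P_2) - {\mathrm{Vol}}(P_1) - {\mathrm{Vol}}(P_2)$. A small stdlib sketch (ours; the example polytopes are hypothetical) checks this in the plane:

```python
from itertools import product

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def hull(points):
    """Andrew's monotone chain convex hull of a 2D point set."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, list(reversed(pts)))):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

def area(poly):
    """Shoelace formula (absolute value)."""
    n = len(poly)
    s = sum(poly[i][0]*poly[(i+1) % n][1] - poly[(i+1) % n][0]*poly[i][1]
            for i in range(n))
    return abs(s) / 2

def mixed_volume_2d(P, Q):
    """MVol(P,Q) = Vol(P+Q) - Vol(P) - Vol(Q), the coefficient of mu1*mu2 in Q."""
    mink = hull([(p[0]+q[0], p[1]+q[1]) for p, q in product(P, Q)])
    return area(mink) - area(hull(P)) - area(hull(Q))

square   = [(0,0),(1,0),(1,1),(0,1)]   # Newt of a generic poly with support {1,x,y,xy}
triangle = [(0,0),(1,0),(0,1)]         # Newt of a generic affine-linear poly

mv_squares = mixed_volume_2d(square, square)            # 2
mv_square_triangle = mixed_volume_2d(square, triangle)  # 2
```

For two unit squares this gives $4 - 1 - 1 = 2$, the number of torus solutions of two generic polynomials with support $\{1,x,y,xy\}$, in line with Theorem~\ref{thm:BKK} below.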
\begin{theorem}[BKK Bound \cite{bernshtein1979the,khovanskii1978newton,kouchnirenko1976polyedres}]\label{thm:BKK} Let $F = \{ f_1,\dots,f_n \}$ be a sparse polynomial system in $\mathbb{C}[x_1,\dots,x_n]$ and let $P_1,\dots,P_n$ be their respective Newton polytopes. The number of isolated $\mathbb{C}^*$-solutions to $F=0$ is bounded above by ${\mathrm{MVol}}(P_1,\ldots,P_n)$. Moreover, if the coefficients of $F$ are general, then this bound is achieved with equality. \end{theorem} If the BKK bound is much less than the Bezout bound, a \emph{polyhedral} start system is a better choice since using a total degree start system will lead to wasted computation tracking homotopy paths that diverge to infinity. The downside of polyhedral homotopy is that the start system is more difficult to construct. This is not surprising since computing the mixed volume is $\#$P-hard \cite{Khachiyan1993}. Even so, there is an algorithm that computes this start system \cite{huber1995a}. We briefly outline the idea behind polyhedral homotopy here but give \cite{huber1995a} as a more complete reference. Recall that $F = \{f_1,\ldots,f_n\}$, where $f_i = \sum_{\alpha \in \mathcal{A}_i}c_\alpha x^\alpha \in \mathbb{C}[x_1,\ldots,x_n]$. For each monomial, $\alpha \in \mathcal{A}_i$, we consider a \emph{lifting}, $w(\alpha)$, and the corresponding lifted system $F^w(x,t) = (f_1^w(x,t),\ldots,f_n^w(x,t))$ where \begin{align} f_i^w(x,t) = \sum_{\alpha \in \mathcal{A}_i} c_\alpha x^\alpha t^{w(\alpha)}. \label{eq:lifted poly} \end{align} Solutions to $F^w(x,t) = 0$ are algebraic functions in the parameter $t$. Such solutions can be written as \[ x(t) = (x_1(t),\ldots,x_n(t)). \] In a neighborhood of $t = 0$, each solution can be written as $x(t) = (x_1(t),\ldots,x_n(t))$ where \[ x_i(t) = y_i t^{u_i} \ + \quad \text{higher order terms in } t \] where $y_i\neq 0$ is a constant and $u_i \in \mathbb{Q}$.
Substituting this into \eqref{eq:lifted poly} we have \[ f_i^w(x(t),t) = \sum_{\alpha \in \mathcal{A}_i} c_\alpha y^\alpha t^{u^T \alpha + w(\alpha)} \ + \quad \text{higher order terms in } t. \] By \cite[Lemma 3.1]{huber1995a}, we wish to find $u \in \mathbb{R}^n$ such that, for each $i$, the minimum \[ \min_{\alpha \in \mathcal{A}_i} \ \{ u^T \alpha + w(\alpha) \} \] is achieved twice. For each solution $u$, the vector $(u,1)$ is an inner normal to one of the lower facets of the \emph{Cayley polytope} of $F$. Furthermore, each such solution $u$ then induces a binomial polynomial system $\mathcal{B}_u$, which can be solved using Smith normal forms, as well as a homotopy to track solutions from $\mathcal{B}_u(x) =0$ to $F(x)=0$. The sum of the number of solutions to $\mathcal{B}_u(x) = 0$ over all solutions $u$ is equal to the BKK bound of $F(x)$. Therefore, if the coefficients of $F$ are generic with respect to its monomial support, then polyhedral homotopy will track one homotopy path for each solution to $F(x) = 0$. We illustrate this on a small example. \begin{example}\label{ex:1} Consider the system of one polynomial equation in one unknown \[ f(x) = x^3-x^2+2x-1=0. \] We wish to solve this polynomial system using homotopy continuation and a polyhedral start system. To do this we consider a lifted system of $f$ which we obtain by weighting each monomial of $f$ by some power of $t$: \[ f_t = t^{\omega_3} x^3 - t^{\omega_2}x^2 +2t^{\omega_1}x - t^{\omega_0}. \] Now suppose we choose weighting $(\omega_0, \omega_1,\omega_2, \omega_3) = (0,3,1,2)$ so \[ f_t = t^2x^3 - tx^2 + 2xt^3 - t^0. \] A figure of this lifting is given in Figure~\ref{fig:ex1}. Solutions to $f_t = 0$ lie in the field of Puiseux series of $t$ and are of the form \[x(t) = \hat{x} t^a + \text{ higher order terms in } t\] where $a \in \mathbb{Q}$ and $\hat{x} \in \mathbb{C}^*$. For $x(t)$ to be a root of $f_t$, the lowest terms in $t$ must cancel out.
Substituting $x(t) = \hat{x} t^a$ into $f_t$, we have \begin{align} f_t(x(t)) = \hat{x}^3 t^{3a+2} - \hat{x}^2t^{2a+1} + 2 \hat{x} t^{a+3} - t^0. \label{eq_ft} \end{align} To have cancellation of the lowest terms, we must have the minimum exponent in $t$ achieved twice. In other words, the minimum \begin{align} \min_a \ \{3a+2,\ 2a+1,\ a+3,\ 0\} \label{eq: trop ex} \end{align} must be achieved twice. There are six options: \begin{enumerate} \item $3a+2=2a+1 <a+3,0$ \item $3a+2=a+3<2a+1,0$ \item $3a+2 = 0<2a+1,a+3$ \item $2a+1 = a+3<3a+2,0$ \item $2a+1 = 0<3a+2,a+3$ \item $a+3 = 0 < 3a+2, 2a+1$ \end{enumerate} The only feasible solutions are the first and fifth where $a = -1$ and $a = -\frac{1}{2}$, respectively. For the first case, we substitute $a = -1$ into \eqref{eq_ft} giving \[ \hat{x}^3 t^{-1} - \hat{x}^2 t^{-1} + 2 \hat{x}t^2 -1. \] Multiplying through by $t$, we get \[h_1(\hat{x},t) = \hat{x}^3 - \hat{x}^2 + 2 \hat{x}t^3 - t. \] When $t = 0$ we have $h_1(\hat{x},0) = \hat{x}^3 - \hat{x}^2$ which has a unique $\mathbb{C}^*$ solution, $\hat{x} = 1$. Similarly, we consider when $a = -\frac{1}{2}$ and substitute this value of $a$ into \eqref{eq_ft} to get \[ h_2(\hat{x},t) = \hat{x}^3 t^{\frac{1}{2}} - \hat{x}^2 + 2 \hat{x} t^{\frac{5}{2}} - 1. \] When $t = 0$ we have $h_2(\hat{x},0) = - \hat{x}^2 -1$ which has two $\mathbb{C}^*$ solutions, $\hat{x} = \pm \sqrt{-1}$. Therefore, to find all three solutions to $f(x) = 0$, we track the solution $\hat{x} = 1$ using the homotopy $h_1(\hat{x},t)$ from $t = 0$ to $t = 1$ and the solutions $\hat{x} = \pm \sqrt{-1}$ using the homotopy $h_2(\hat{x},t)$ from $t = 0$ to $t = 1$. A graphical depiction of the homotopy $h_1$ is shown in Figure~\ref{fig:homotopyex}. \begin{figure} \centering \includegraphics[width = 0.3\textwidth]{homotopy_ex.png} \caption{The homotopy $h_1(\hat{x},t)$ from Example~\ref{ex:1}.
The red point is the starting point induced by the binomial system $\hat{x}^3 - \hat{x}^2 = 0$ while the green point is the target solution, namely a zero of $f(x) = 0$. } \label{fig:homotopyex} \end{figure} \begin{figure} \centering \includegraphics[width = 0.3\textwidth]{poly_lift.pdf} \caption{The polyhedral lift from Example~\ref{ex:1}} \label{fig:ex1} \end{figure} Finally, one can observe in \Cref{fig:ex1} that the lifted polytope of ${\mathrm{Newt}}(f)$ has two lower facets, $\mathcal{F}_1 = {\mathrm{Conv}}\{(0,0),(2,1)\}$ and $\mathcal{F}_2 = {\mathrm{Conv}}\{(2,1),(3,2)\}$. $\mathcal{F}_1$ has inner normal given by $(-\frac{1}{2},1)$ while $\mathcal{F}_2$ has inner normal given by $(-1,1)$. These are precisely the solutions to \eqref{eq: trop ex}. \end{example} The main bottleneck with employing polyhedral homotopy algorithms is finding the binomial start systems and corresponding homotopies. Example~\ref{ex:1} shows how finding these start systems is equivalent to solving a tropical system for a fixed lifting. The main contribution of this paper is to find these binomial start systems for polynomial systems arising as the Lagrange systems of polynomial optimization programs. \section{General hypersurface}\label{sec:hyper} \label{section:general hypersurface} We consider \eqref{eq:opt} when $m = 1$ and $\deg(f_0) = 1$. Specifically, we consider a polynomial optimization problem of the form \begin{align} \min_{x \in \mathbb{R}^n} \ u^T x \quad \text{s.t.} \quad f(x) = 0 \label{eq: hyp1} \end{align} where $u \in \mathbb{R}^n$ and $f(x)$ is a general degree $d\geq 2$ polynomial. We wish to design a homotopy algorithm to find all critical points to \eqref{eq: hyp1}. We first consider the Lagrange system $\mathcal{L}_{u,f} = \{\ell_1,\ldots, \ell_n, f\}$ of \eqref{eq: hyp1} where \begin{equation} \begin{aligned} \ell_i = u_i - \lambda \frac{\partial}{\partial x_i} f(x). 
\end{aligned} \end{equation} If $f$ is a generic degree $d$ polynomial and $u \in \mathbb{R}^n$ is generic, then by \cite{ourpaper}, the number of critical points to \eqref{eq: hyp1} is the same as that of \begin{align} \min_{x \in \mathbb{R}^n} \ u^T x \quad \text{s.t.} \quad \hat{f}(x) = 0 \label{eq: hyp2} \end{align} where $\hat{f} = c_0 + \sum_{i=1}^n c_i x_i^d$ and $c_i$ is generic for $0 \leq i \leq n$. The Lagrange system of \eqref{eq: hyp2} is $\mathcal{L}_{u,\hat{f}} = \{\hat{\ell}_1,\ldots,\hat{\ell}_n, \hat{f}\}$ where for $i \in [n]$ \begin{equation} \begin{aligned}\label{eq: lag hyp} \hat{\ell}_i &= u_i - d \lambda c_i x_i^{d-1}. \end{aligned} \end{equation} Observe that by \cite{ourpaper}, not only are the algebraic degrees of $(u,f)$ and $(u, \hat{f})$ the same, but the BKK bound of $\mathcal{L}_{u,\hat{f}}$ is the same as that of $\mathcal{L}_{u,f}$. The Lagrange system $\mathcal{L}_{u,\hat{f}}$ is sparser than $\mathcal{L}_{u,f}$ and in fact a binomial start system $G$ for $\mathcal{L}_{u,\hat{f}}$ can be constructed efficiently. The following lemma shows that this is desirable since start systems for $\mathcal{L}_{u,\hat{f}}$ are start systems for $\mathcal{L}_{u,f}$ as well. We first need an observation about the existence of straight line homotopies. \begin{proposition} \label{prop: straight-line homotopies} Let $F(x; p) : \mathbb{C} ^n \times \mathbb{C} ^k \longrightarrow \mathbb{C} ^n$ denote a family of polynomial systems depending polynomially on parameters $p \in \mathbb{C} ^k$, and let $F(x; p_1)$ be a fixed member of that family. Then there is a nonempty set $U \subseteq \mathbb{C} ^k$, open and dense in the Euclidean topology, such that for every parameter $p_0$ in $U$ the straight line homotopy \[ H(t;x) = (1-t) F(x; p_0) + t F(x; p_1) \] is sufficient for $F(x; p_1)$.
\end{proposition} \begin{proof} By the \emph{Parameter Continuation Theorem} of Morgan and Sommese \cite{sommese2005numerical} there exists a proper algebraic subvariety $\Sigma \subset \mathbb{C} ^k$ with the following property: Let $\rho: [0, 1] \rightarrow \mathbb{C} ^k$ be any smooth path and $H(t, x) = F(x,\rho(t))$ the corresponding homotopy. If $$\rho([0,1)) \cap \Sigma = \emptyset ,$$ then as $t~\rightarrow~1$, the limits of the solution paths $x(t)$ satisfying $H(t, x(t)) = 0$ include all the isolated solutions to $F(x; \rho(1)) = 0$. In particular, $H(t, x)$ is a sufficient homotopy. From now on we identify the complex affine space $\mathbb{C} ^k$ with real affine space $\mathbb{R}^{2k}$ and denote by $\overline{\Sigma}$ the closure of $\Sigma$ in real projective space $\P_{\mathbb{R}}^{2k}$. Consider the projection $\pi : \P_{\mathbb{R}}^{2k} \dashrightarrow \P_{\mathbb{R}}^{2k-1}$ away from the point $p_1$. Since the codimension of $\overline{\Sigma}$, considered as a manifold, is at least two, the image $\pi \left( \overline{\Sigma} \right)$ has codimension at least one in $\P_{\mathbb{R}}^{2k-1}$. In particular, the image $\pi(p_0)$ of a generic element $p_0$ is not contained in $\pi \left( \overline{\Sigma} \right)$. Since the image $\rho( [0,1) )$ of the straight path $$ \rho(t) = (1-t) p_0 + t p_1 $$ between $p_0$ and $p_1$ is contained in the fiber $\pi^{-1}(\pi(p_0))$, it does not intersect $\Sigma$. Consequently, the straight line homotopy associated to $\rho$ is sufficient. \end{proof} \begin{lemma} \label{lem:hom} Let $G$ be a zero-dimensional binomial system of polynomials with exactly $BKK(\mathcal{L}_{u,\hat{f}})$ solutions. There is a sufficient homotopy connecting $G$ to $\mathcal{L}_{u,f}$. \end{lemma} \begin{proof} Let $ F(x; c) $ denote the family of polynomial systems with monomial support contained in the support of $\mathcal{L}_{u,f}$.
In particular, the coefficient vector $c$ has one entry for each monomial of each polynomial of $\mathcal{L}_{u,f}$. We denote by $F(x; c_0)$ a generic member of this family. The desired homotopy will be constructed explicitly as a composition. We start by connecting $F(x; c_0)$ to both $\mathcal{L}_{u,f}$ and $G$ with a straight line homotopy, which by Proposition \ref{prop: straight-line homotopies} is a sufficient homotopy in both cases. We denote the straight line homotopy from $F(x; c_0)$ to $G$ by $H$. It now suffices to prove that $H$ does not merge any solutions of $F(x; c_0)$, allowing us to define the inverted homotopy $H^*$ by setting $H^*(t,x) = H(1-t,x)$ for $t \in (0,1)$ and $H^*(0,x) = G(x)$. Since tracking the roots of $F(x; c_0)$ to the roots of $G$ along the sufficient homotopy $H$ defines a surjective map, it is enough to prove that $F(x; c_0)$ and $G$ have the same number of solutions. By the results of Bernstein and Kushnirenko \cite{bernshtein1979the,kouchnirenko1976polyedres}, the number of solutions of $F(x; c_0)$ is equal to the BKK bound of $\mathcal{L}_{u,f}$. In \cite{ourpaper} the authors prove that the polynomial system $\mathcal{L}_{u,f}$ achieves this bound. Furthermore, as we noted at the beginning of Section \ref{section:general hypersurface}, $\mathcal{L}_{u,f}$ and $\mathcal{L}_{u,\hat{f}}$ have the same number of solutions: \begin{equation} \label{eq: inequalities one} \# \{ \mathcal{L}_{u,\hat{f}} = 0 \} = \# \{ \mathcal{L}_{u,f} = 0 \} = BKK(\mathcal{L}_{u,\hat{f}}). \end{equation} At the same time the number of solutions to $G$ is equal to the BKK bound of $\mathcal{L}_{u,\hat{f}}$, which is upper bounded by the BKK bound of $\mathcal{L}_{u,f}$ by inclusion of Newton polytopes: \begin{equation} \label{eq: inequalities two} \# \{ \mathcal{L}_{u,\hat{f}} = 0 \} \leq \# \{ G = 0 \} \leq BKK(\mathcal{L}_{u,\hat{f}}).
\end{equation} Together, inequalities \eqref{eq: inequalities one} and \eqref{eq: inequalities two} imply that $\mathcal{L}_{u,\hat{f}}$ and $G$ have the same root count. \end{proof} We now give the main result of this section. \begin{theorem}\label{thm:hyp} For any $d \geq 2$ consider the Lagrange system of \eqref{eq: hyp1}. Then for generic $u$ and $f$ there are $d(d-1)^{n-1}$ complex solutions to the corresponding Lagrange system. Moreover, all of these solutions can be found via the homotopy \[H(x, \lambda ; t) = (1-t)B(x, \lambda) + t \gamma \mathcal{L}_{u,f}(x, \lambda) \] where \begin{align}\label{eq:bin hyp} B(x,\lambda) &= \begin{cases} u_1 - d \lambda c_1 x_1^{d-1} = 0 \\ \ \vdots \\ u_n - d \lambda c_n x_n^{d-1} = 0\\ \ c_0 + c_1 x_1^d = 0, \end{cases} \end{align} $\gamma \in \mathbb{C}$ is a generic constant and $\mathcal{L}_{u,f}(x, \lambda) $ is the Lagrange system of \eqref{eq: hyp1}. \end{theorem} \begin{proof} In order to design a polyhedral homotopy algorithm as described in \cite{huber1995a}, in the following we construct a binomial start system $B$ of $\mathcal{L}_{u, \hat{f}}$ by solving a tropical system. By the proof of Lemma~\ref{lem:hom} we then obtain a homotopy from $B$ to $\mathcal{L}_{u, f}$. Note that, by genericity of $f$, this homotopy can be chosen to be a straight line homotopy. By Lemma~\ref{lem:hom} it suffices to design a polyhedral homotopy algorithm as described in \cite{huber1995a} for $\mathcal{L}_{u, \hat{f}}$. In order to define this algorithm, we need to first find a binomial start system of $\mathcal{L}_{u, \hat{f}}$ which can be done by solving a tropical system. Let $a_i$ be the tropical variable corresponding to $x_i$ and $b$ the tropical variable corresponding to $\lambda$. 
Then for a given lifting $\omega \in \mathbb{R}^{3n+1}$, the corresponding tropical system that we want to solve is \begin{equation}\label{eq:trop general} \begin{aligned} &\min_{a\in\mathbb{Q}^n, b \in \mathbb{Q}} \ \{\omega_{1,1}, (d-1)a_1 + b + \omega_{1,2}\} \\ &\ \ \vdots \\ &\min_{a\in\mathbb{Q}^n, b \in \mathbb{Q}} \ \{\omega_{n,1}, (d-1)a_n + b + \omega_{n,2}\} \\ &\min_{a\in\mathbb{Q}^n, b \in \mathbb{Q}} \ \{\omega_{n+1,1}, d a_1 + \omega_{n+1,2},\ldots,da_n + \omega_{n+1,n+1}\} \end{aligned} \end{equation} We consider a specific lifting that induces a unique solution to \eqref{eq:trop general}, giving a homotopy from one binomial start system to the desired target system \eqref{eq: lag hyp}. Consider the particular lifting \begin{equation}\label{eq:lift hyp} \begin{aligned} \omega_{ij} = \begin{cases} 0 & \text{if} \quad 1 \leq i \leq n+1, \ j = 1 \\ 1-d & \text{if} \quad 1 \leq i \leq n, \ j = 2 \\ -d & \text{if}\quad (i,j) = (n+1,2) \\ 1-d &\text{else} \end{cases} \end{aligned} \end{equation} This gives the following tropical system: \begin{equation}\label{eq:trop_hyp} \begin{aligned} &\min_{a\in\mathbb{Q}^n, b \in \mathbb{Q}} \ \{0, (d-1)a_1 + b + 1-d\} \\ &\ \qquad \vdots \\ &\min_{a\in\mathbb{Q}^n, b \in \mathbb{Q}} \ \{0, (d-1)a_n + b + 1-d\} \\ &\min_{a\in\mathbb{Q}^n, b \in \mathbb{Q}} \ \{0, d a_1 -d , d a_2 + 1-d, \ldots,d a_n + 1-d\} \end{aligned} \end{equation} We claim there is a unique solution to \eqref{eq:trop_hyp} given by $a_i = 1$ for $i \in [n]$ and $b = 0$. First, observe that the first $n$ equations of \eqref{eq:trop_hyp} force $(d-1) a_i + b + 1 - d = 0$ for $i \in [n]$. This gives $a_i = \frac{d-1 -b}{d-1}$. Substituting this into the final equation and simplifying we have that \[\min_{a\in\mathbb{Q}^n, b \in \mathbb{Q}} \ \{0, \frac{bd}{1-d} , \frac{bd}{1-d}+1, \ldots,\frac{bd}{1-d}+1\} \] must have its minimum attained twice. If $b > 0$ then $\frac{bd}{1-d} < 0$ is the unique minimum, and if $b < 0$ then $0$ is the unique minimum, so the only solution is $b = 0$, where the minimum is achieved at the first two terms.
Back substituting then gives that $a_i = \frac{d-1}{d-1} = 1$ for $i \in [n]$. The binomial start system $B(x, \lambda)$ defined in \eqref{eq:bin hyp} then follows immediately from the solution to this tropical system. \end{proof} Observe that Bezout's Theorem gives an upper bound of $d^{n+1}$ on the number of solutions to \eqref{eq: lag hyp}, while the binomial system \eqref{eq:bin hyp} has only $d(d-1)^{n-1}$ solutions. This gives another proof of the bound given in \cite{nie2009algebraic} for hypersurfaces and highlights the benefit of using a polyhedral start system over a total degree start system. Finally, we wish to remark that the homotopy defined in \Cref{thm:hyp} will work for finding all smooth critical points for the optimization of a linear function over any hypersurface, $f$, so long as ${\mathrm{Newt}}(f)$ is contained in ${\mathrm{Conv}}\{0,de_1,\ldots,d e_n\}$. When ${\mathrm{Newt}}(f)$ is a strict subset of ${\mathrm{Conv}}\{0,de_1,\ldots,d e_n\}$, then the algebraic degree can be less than $d(d-1)^{n-1}$, meaning this homotopy may lead to wasted computation in tracking divergent paths. \section{Refined hypersurface}\label{sec: hyp refined} We now refine the hypersurface case discussed in the previous section. Instead of assuming $f(x)$ is a generic degree $d$ polynomial, we assume ${\mathrm{Newt}}(f) = {\mathrm{Conv}} \{ 0, d_1 e_1, \ldots, d_n e_n \}$. As above, to design an optimal binomial start system we first consider only the monomials corresponding to vertices of ${\mathrm{Newt}}(f)$. In this case, we consider $f = c_0 + \sum_{i=1}^n c_i x_i^{d_i}$ where $c_i$ are generic constants. In this case, the Lagrange system corresponding to \eqref{eq: hyp1} is $\mathcal{L}_{u,f} = \{\ell_1,\ldots, \ell_n, f\}$ where for $i \in [n]$ \[ \ell_i = u_i - d_i c_i \lambda x_i^{d_i - 1}.
\] \begin{theorem}\label{thm:hyp refined} Consider \eqref{eq: hyp1} where $u$ is generic and \[{\mathrm{Newt}}(f) = {\mathrm{Conv}}\{0,d_1e_1,\ldots,d_n e_n \}\] where $1 \leq d_1 \leq d_2 \leq \cdots \leq d_n$ and the non-zero coefficients of $f$ are generic. The algebraic degree of \eqref{eq: hyp1} is \[d_1 \cdot (d_2-1) \cdots (d_n - 1).\] Moreover, all solutions of $\mathcal{L}_{u,f}(x) = 0$ can be found via the homotopy $H(x, \lambda ; t) = (1-t)B(x, \lambda) + \gamma t \mathcal{L}_{u,f}(x, \lambda) $ where \begin{align}\label{eq:bin hyp 2} B(x,\lambda) &= \begin{cases} u_1 - d_1 \lambda c_1 x_1^{d_1-1} = 0 \\ \ \vdots \\ u_n - d_n \lambda c_n x_n^{d_n-1} = 0\\ \ c_0 + c_1 x_1^{d_1} = 0, \end{cases} \end{align} $\gamma \in \mathbb{C}$ is generic. \end{theorem} \begin{proof} As before, we design a polyhedral homotopy algorithm as described in \cite{huber1995a} for $\mathcal{L}_{u, f}$. Let $a_i$ be the tropical variable corresponding to $x_i$ and $b$ the tropical variable corresponding to $\lambda$. Then for a given lifting $\omega \in \mathbb{R}^{3n+1}$, the corresponding tropical system that we want to solve is \begin{equation}\label{eq:trop general 2} \begin{aligned} &\min_{a\in\mathbb{Q}^n, b \in \mathbb{Q}} \ \{\omega_{1,1}, (d_1-1)a_1 + b + \omega_{1,2}\} \\ &\ \ \vdots \\ &\min_{a\in\mathbb{Q}^n, b \in \mathbb{Q}} \ \{\omega_{n,1}, (d_n-1)a_n + b + \omega_{n,2}\} \\ &\min_{a\in\mathbb{Q}^n, b \in \mathbb{Q}} \ \{\omega_{n+1,1}, d_1 a_1 + \omega_{n+1,2},\ldots,d_na_n + \omega_{n+1,n+1}\} \end{aligned} \end{equation} We consider a specific lifting that induces a unique solution to \eqref{eq:trop general 2}, giving a homotopy from one binomial start system to the desired target system.
Consider the particular lifting \begin{equation}\label{eq:lift hyp 2} \begin{aligned} \omega_{ij} = \begin{cases} 0 & \text{if} \quad 1 \leq i \leq n+1, \ j = 1 \\ 1-d_i & \text{if} \quad 1 \leq i \leq n, \ j = 2 \\ -d_1 & \text{if}\quad (i,j) = (n+1,2) \\ 1-d_{j-1} &\text{if}\quad i = n+1, \ 3 \leq j \leq n+1 \end{cases} \end{aligned} \end{equation} This gives the following tropical system: \begin{equation}\label{eq:trop_hyp 2} \begin{aligned} &\min_{a\in\mathbb{Q}^n, b \in \mathbb{Q}} \ \{0, (d_1-1)a_1 + b + 1-d_1\} \\ &\ \qquad \vdots \\ &\min_{a\in\mathbb{Q}^n, b \in \mathbb{Q}} \ \{0, (d_n-1)a_n + b + 1-d_n\} \\ &\min_{a\in\mathbb{Q}^n, b \in \mathbb{Q}} \ \{0, d_1 a_1 -d_1 , d_2 a_2 + 1-d_2, \ldots,d_na_n + 1-d_n\} \end{aligned} \end{equation} We claim there is a unique solution to \eqref{eq:trop_hyp 2} given by $a_i = 1$ for $i \in [n]$ and $b = 0$. First, observe that the first $n$ equations of \eqref{eq:trop_hyp 2} force $(d_i-1) a_i + b + 1 - d_i = 0$ for $i \in [n]$. This gives $a_i = \frac{d_i-1 -b}{d_i-1}$. Substituting this into the final equation and simplifying, we have that \begin{align} \min_{a\in\mathbb{Q}^n, b \in \mathbb{Q}} \ \{0, \frac{bd_1}{1-d_1} , \frac{bd_2}{1-d_2}+1, \ldots,\frac{bd_n}{1-d_n}+1\} \label{eq:last trop} \end{align} must have its minimum attained twice. It is clear that there is a solution when $b = 0$, where the minimum is achieved at the first two terms. Back substituting then gives that $a_i = \frac{d_i-1}{d_i-1} = 1$ for $i \in [n]$. The binomial start system $B(x, \lambda)$ defined in \eqref{eq:bin hyp 2} then follows immediately from the solution to this tropical system. It remains to show that there are no other solutions to \eqref{eq:last trop}.
There are three cases to rule out: \begin{enumerate} \item the minimum of \eqref{eq:last trop} is attained at $0$ and $\frac{b d_i}{1-d_i} +1$ for some $2 \leq i \leq n$; \item the minimum of \eqref{eq:last trop} is attained at $\frac{b d_i}{1-d_i} +1$ and $\frac{b d_1}{1-d_1}$ for some $2 \leq i \leq n$; and \item the minimum of \eqref{eq:last trop} is attained at $\frac{b d_i}{1-d_i} +1$ and $\frac{b d_j}{1-d_j} +1$ for $i \neq j$, $2 \leq i,j \leq n$. \end{enumerate} For the first case, observe that if $0 = \frac{b d_i}{1-d_i} + 1$ for some $2 \leq i \leq n$, then $b = \frac{d_i -1}{d_i}$. This then implies that $\frac{b d_1}{1-d_1} = \frac{d_1}{1-d_1} \cdot \frac{d_i - 1}{d_i}<0$, so the minimum is not attained at $0$, a contradiction. To rule out case $(2)$, consider when $\frac{b d_i}{1-d_i} + 1 = \frac{bd_1}{1-d_1}$. If $d_i = d_1$ then there is no solution, so suppose $d_i > d_1$. In this case, $b = \frac{(d_1 -1)(d_i -1)}{d_1 - d_i}<0$ and $\frac{d_1b}{1-d_1} = \frac{d_1(d_i-1)}{d_i-d_1}>0$, so the minimum would be attained at $0$ instead. Finally, if $\frac{b d_i}{1-d_i} +1 = \frac{b d_j}{1-d_j} +1$, this implies that $b = 0$, in which case the minimum is attained at $0$ and $\frac{bd_1}{1-d_1}$. \end{proof} As a corollary, we now have families of hypersurfaces with algebraic degree one and zero. \begin{corollary}\label{cor:alg deg 1} Consider the Lagrange system of \eqref{eq: hyp1} where $u$ and $f$ are generic and ${\mathrm{Newt}}(f) = {\mathrm{Conv}} \{0, e_1, 2e_2,\ldots,2e_n \}$. Then the algebraic degree of $(u,f)$ is one. \end{corollary} We remark that this is the first instance the authors are aware of giving a partial classification of polynomial programs with algebraic degree one. This is in contrast to the ML degree, where \cite{MR3103064} classifies very affine varieties with ML degree one. It is an interesting open question to give a complete classification of polynomial programs with algebraic degree one.
\begin{example} Consider the optimization problem \begin{align}\label{ex: opt} \min_{x_1,x_2 \in \mathbb{R}} \ u_1 x_1 + u_2 x_2 \quad \text{s.t.} \quad c_0 + c_1 x_1 + c_2 x_2 +c_3 x_2^2 = 0 \end{align} where $u_1,u_2,c_0,c_1,c_2,c_3 \in \mathbb{R}$ are real-valued parameters. By \Cref{cor:alg deg 1}, \eqref{ex: opt} has algebraic degree one, meaning the Lagrange system \begin{align*} u_1 - \lambda c_1 &=0 \\ u_2 - \lambda(c_2 + 2 c_3 x_2) &=0 \\ c_0 + c_1 x_1 + c_2 x_2 +c_3 x_2^2 &= 0 \end{align*} has exactly one solution. This solution can then be expressed as a rational function of the problem data $u_1,u_2,c_0,c_1,c_2,c_3$. In this case, the unique solution is \[x_1 = \frac{c_2^2 u_1^2 - 4c_0c_3u_1^2 - c_1^2u_2^2}{4c_1c_3u_1^2}, \quad x_2 = \frac{ c_1 u_2 - c_2u_1}{2c_3u_1}, \quad \lambda = \frac{u_1}{c_1}. \] \end{example} Similarly, \Cref{thm:hyp refined} also gives a family of polynomial programs with algebraic degree zero. \begin{corollary} Consider the Lagrange system of \eqref{eq: hyp1} where $u$ and $f$ are generic and ${\mathrm{Newt}}(f) = {\mathrm{Conv}} \{0, e_1,\ldots,e_k,2e_{k+1},\ldots,2e_n \}$ for some $2 \leq k \leq n$. Then the algebraic degree of $(u,f)$ is zero.
\end{corollary} \section{Numerical results}\label{sec: numerics} \begin{table} \centering \begin{tabular}{c|c|c|c|c|c|c|c|c} $n$& $20$ & $30$ & $40$ & $50$ & $60$ & $70$ & $80$ & $90$ \\ Polyhedral & $0.14$ & $0.51$ & $1.01$ & $2.30$ & $4.49$ & NA & NA & NA \\ $H$ & $0.07$ & $0.20$ & $0.35$ & $0.87$ & $1.65$ & $2.54$ & $3.78$ & $6.45$ \\ \end{tabular} \caption{Average time (sec) to find all critical points to \eqref{eq:opt} when $d = 2$ using standard polyhedral homotopy versus the homotopy, $H$, outlined in Theorem~\ref{thm:hyp}.} \label{tab1} \end{table} \begin{table} \centering \begin{tabular}{c|c|c|c|c|c|c|c} $n$& $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ \\ Polyhedral & $0.29$ & $0.93$ & $3.06$ & $9.79$ & $27.42$ & $88.37$ & $556.92$ \\ $H$ & $0.21$ & $0.68$ & $2.29$ & $7.35$ & $20.35$ & $70.02$ & $395.64$\\ \end{tabular} \caption{Average time (sec) to find all critical points to \eqref{eq:opt} when $d = 3$ using standard polyhedral homotopy versus the homotopy, $H$, outlined in Theorem~\ref{thm:hyp}.} \label{tab2} \end{table} \begin{table} \centering \begin{tabular}{c|c|c|c|c|c|c|c} $n$& $3$ & $4$ &$5$ & $6$ & $7$ & $8$ & $9$ \\ Polyhedral & $0.03$ & $0.17$ & $1.16$ & $7.04$ & $40.11$ & $228.48$ & $1225.78$ \\ $H$ & $0.03$ & $0.15$ & $0.83$ & $5.15$ & $34.79$ & $181.11$ & $1027.64$ \\ \end{tabular} \caption{Average time (sec) to find all critical points to \eqref{eq:opt} when $d = 4$ using standard polyhedral homotopy versus the homotopy, $H$, outlined in Theorem~\ref{thm:hyp}.} \label{tab3} \end{table} We implement the homotopy in Theorem~\ref{thm:hyp} with start system \eqref{eq:bin hyp} using the path tracking function in \texttt{HomotopyContinuation.jl}. We compare our implementation of the homotopy outlined in Theorem~\ref{thm:hyp} against the polyhedral one in \texttt{HomotopyContinuation.jl} and give the time it takes to run each homotopy algorithm in Table~\ref{tab1}, Table~\ref{tab2} and Table~\ref{tab3}.
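For concreteness, the solutions of the binomial start system \eqref{eq:bin hyp 2} can be written down in closed form, which is what makes the start system essentially free to construct: $x_1$ ranges over the $d_1$-th roots of $-c_0/c_1$, $\lambda$ is then fixed by the first equation, and each remaining $x_i$ ranges over the $(d_i-1)$-th roots of $u_i/(d_i \lambda c_i)$. A minimal Python sketch (with random complex coefficients standing in for generic data; not the \texttt{HomotopyContinuation.jl} implementation) enumerates them and checks the count $d_1(d_2-1)\cdots(d_n-1)$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
d = [2, 3, 3]                    # degrees d_1 <= ... <= d_n; expect 2 * 2 * 2 = 8 solutions
n = len(d)
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)          # generic objective u_1..u_n
c = rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)  # generic coefficients c_0..c_n

def kth_roots(w, k):
    """All k-th complex roots of w."""
    r, phi = abs(w) ** (1.0 / k), np.angle(w)
    return [r * np.exp(1j * (phi + 2 * np.pi * j) / k) for j in range(k)]

solutions = []
for x1 in kth_roots(-c[0] / c[1], d[0]):                  # d_1 choices for x_1
    lam = u[0] / (d[0] * c[1] * x1 ** (d[0] - 1))         # lambda fixed by the first equation
    rest = [kth_roots(u[i] / (d[i] * lam * c[i + 1]), d[i] - 1) for i in range(1, n)]
    for tail in itertools.product(*rest):                 # prod_{i >= 2} (d_i - 1) choices
        solutions.append((np.array([x1, *tail]), lam))

expected = d[0] * int(np.prod([di - 1 for di in d[1:]]))
assert len(solutions) == expected
for x, lam in solutions:                                  # every tuple solves the start system
    res = [u[i] - d[i] * lam * c[i + 1] * x[i] ** (d[i] - 1) for i in range(n)]
    res.append(c[0] + c[1] * x[0] ** d[0])
    assert max(abs(v) for v in res) < 1e-8
print(len(solutions))  # → 8
```

For $d = (2,3,3)$ this prints $8 = 2\cdot 2\cdot 2$, matching \Cref{thm:hyp refined}; these tuples are exactly the start points from which the homotopy paths are tracked.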
The computations are all run using a 2018 MacBook Pro with a 2.3 GHz Quad-Core Intel Core i5. In all cases, our homotopy algorithm is much faster than the standard off-the-shelf software. When the hypersurface is of degree two, there are only two complex critical points. Despite this, standard polyhedral homotopy was unable to compute a start system when $n \geq 70$. In contrast, our specialized algorithm was able to find both critical points in a few seconds. We note that in this case, the Bezout bound of the corresponding polynomial system is $2^{n+1}$ where $n$ is the number of variables. When $n = 70$, the Bezout bound is $\approx 2.36 \times 10^{21}$, so it is unreasonable to expect that a total degree homotopy would work in this case. Similarly, in Table~\ref{tab2} and Table~\ref{tab3} we see that when the degree of the hypersurface is three or four, our algorithm increasingly outperforms the state-of-the-art polyhedral homotopy software as the number of variables increases.

\section{Multiaffine optimization}\label{sec: multi}

In this final section, we compute the algebraic degree of the following optimization problem: \begin{align} \min_{x \in \mathbb{R}^n} \ g(x) \quad \text{subject to} \quad f(x) = 0 ,\label{eq:multiaffine} \end{align} where both $g$ and $f$ are multiaffine, meaning ${\mathrm{Newt}}(f) = {\mathrm{Newt}}(g) = {\mathrm{Conv}}(\{0,1\}^n)$. \begin{theorem} The algebraic degree of \eqref{eq:multiaffine} is $!(n+1)$, i.e., the number of derangements of $\{0,1,\dots,n\}$. \end{theorem} \begin{proof} By \cite{ourpaper,rose2022multi}, the Lagrange system corresponding to the optimization problem \eqref{eq:multiaffine} is BKK exact. Hence the algebraic degree of \eqref{eq:multiaffine} is equal to the normalized mixed volume of the Newton polytopes of the Lagrange system $\mathcal{L}_{g,f} = \{\ell_1,\ldots, \ell_n, f\}$. We denote this value as ${\mathrm{MVol}}(\mathcal{L}_{g,f})$.
Let us denote by $I_j$ the unit interval ${\mathrm{Conv}}(0, e_j)$ in the $j$-th coordinate direction in $\mathbb{R}^{n+1}$. Then the Newton polytope ${\mathrm{Newt}}(\ell_i)$ of $\ell_i$ is given by the Minkowski sum $$ {\mathrm{Newt}}(\ell_i)= I_0+ I_1 +\ldots + \hat I_i + \ldots + I_n = - I_i + \sum_{j=0}^n I_j. $$ By definition, the mixed volume of the Newton polytopes of the Lagrange system $\mathcal{L}_{g,f} = \{\ell_1,\ldots, \ell_n, f\}$ is the coefficient of the monomial $\lambda_0\lambda_1\cdots\lambda_n$ in the polynomial expansion of \begin{equation*} \begin{aligned} &\mathrm{Vol}(\lambda_0{\mathrm{Newt}}(f)+\lambda_1{\mathrm{Newt}}(\ell_1)+\ldots+\lambda_n{\mathrm{Newt}}(\ell_n)) = \\ & \mathrm{Vol}\left((\Lambda - \lambda_0)\cdot I_0 + (\Lambda - \lambda_1)\cdot I_1 + \ldots + (\Lambda - \lambda_n)\cdot I_n\right), \end{aligned} \end{equation*} where $\Lambda=\sum_{i=0}^n \lambda_i$. A direct computation using the multilinearity of the mixed volume shows that \begin{equation*} \begin{aligned} &\mathrm{Vol}\left((\Lambda - \lambda_0)\cdot I_0 + (\Lambda - \lambda_1)\cdot I_1 + \ldots + (\Lambda - \lambda_n)\cdot I_n\right) =\\ & \lambda_0\lambda_1\ldots\lambda_n \sum_{K\subset \{0,\ldots,n\}} (-1)^{|K|}\cdot(n+1-|K|)! + \text{ other terms.} \end{aligned} \end{equation*} In total, we get the following expression for the mixed volume of the Lagrange system and hence for the algebraic degree of \eqref{eq:multiaffine}: \begin{equation*} \begin{aligned} {\mathrm{MVol}}(\mathcal{L}_{g,f}) = & \sum_{k=0}^{n+1}(n+1-k)!\cdot(-1)^k\cdot\binom{n+1}{k} \\ = & \sum_{t=0}^{n+1} t!\cdot(-1)^{n+1-t}\cdot\binom{n+1}{t} = !(n+1). \end{aligned} \end{equation*} \end{proof}

\section{Conclusion}

In this paper we presented a homotopy continuation algorithm for finding all complex critical points of a class of polynomial optimization problems. For generic problem parameters, our algorithm is optimal in the sense that it tracks one path for each complex critical point.
The main benefit of our work is that we explicitly construct a start system, circumventing the standard bottleneck associated with polyhedral homotopy algorithms. This advantage was evident in our numerical results, which showed that our algorithm was always faster than off-the-shelf homotopy continuation methods and was able to find all complex critical points when other methods failed. Finally, we concluded by giving an explicit formula for the algebraic degree of a multiaffine polynomial optimization problem. \bibliographystyle{plain}
\section{Introduction}\label{sec:introduction}} Robust fitting of geometric models to data contaminated with both noise and outliers is a well-studied problem with many applications in computer vision \cite{Fischler1981, Delong2012, Elhamifar2013, Haifeng2003}. Visual data often contain multiple underlying structures and there are pseudo-outliers (measurements representing structures other than the structure of interest \cite{Stewart1997}) as well as gross-outliers (produced by errors in the data generation process). Fitting models to this combination of data involves solving a highly complex multi-model fitting problem. The above multi-model fitting problem can be viewed as a combination of two sub-problems: \textit{data labeling} and \textit{model estimation}. Although solving one of the sub-problems, when the solution to the other is given, is straightforward, solving both problems simultaneously remains a challenge. Traditional approaches to multi-model fitting were based on a fit-and-remove strategy: apply a high-breakdown robust estimator (e.g. RANSAC \cite{Fischler1981}, least k-th order residual) to generate a model estimate and remove its inliers to prevent the estimator from converging to the same structure again. However, this approach is not optimal as errors made in the initial stages tend to make the subsequent steps unreliable (e.g. small structures can be absorbed by models that are created by accidental alignment of outliers with several structures) \cite{Zuliani2005}. To address this issue, energy minimization methods have been proposed. They are based on optimizing a cost function consisting of a combination of data fidelity and model complexity (number of model instances) terms \cite{Boykov2001}. In this approach, the cost function is optimized to simultaneously recover the number of structures and their data association. Commonly, such cost functions are optimized using discrete optimization methods (metric labeling \cite{Delong2012}).
They start from a large number of proposal hypotheses and gradually converge to the true models. The outcome of those methods depends on the appropriate balance between the two terms in the cost function (controlled by an input parameter) as well as the quality of the initial hypotheses. The method proposed in this paper is primarily designed to avoid the use of parameters that are difficult to tune. Sensitivity to parameters introduced to combine terms with different dimensions is also an issue in the application of several other subspace learning and clustering methods. For instance, Robust-PCA \cite{candes2011robust} splits the data matrix into a low-rank matrix and a sparse error matrix. The aim is to minimize the cost function (which is a norm of the error matrix) while it is regularized by the rank of the representation matrix. In factorization methods such as \cite{cabral2013unifying} the low-rank representation is obtained by learning a dictionary and coefficients for each data point. The effect of regularization is included using a parameter. These parameters often depend on noise scales, the complexity of structures, and even the number of underlying structures and their data points. As such, these variables vary between datasets, which limits the application of those methods. Another approach to multi-model fitting is to pose the problem as a clustering problem \cite{Agarwal2005}, \cite{Govindu2005}. In this approach, the idea is that a pure sample (members of the same structure) of the observed data from a cluster can be represented by a linear combination of other data points from the same cluster. The relations of all points to each sampled subset can then encode the relations between data points. For example, Sparse Subspace Clustering (SSC) \cite{Elhamifar2013} tries to find a sparse block-diagonal matrix that relates data points in each cluster.
The optimization task in this work is to minimize the error as well as the $L_1$ norm of this latent sparse matrix. In contrast, the regularization term in LRR \cite{LRRliu2013robust} uses the nuclear norm of this sparse matrix. Our proposed method is computationally faster than these methods and does not need the regularization parameter introduced in both cases. Recently, \cite{LRRDetliu2016deterministic} gave a deterministic analysis of LRR and suggested that the regularization parameter can be estimated by looking at the number of data points. Although this improves the speed and accuracy of those methods, it remains unclear what would happen when the number of data points is very high (similar to the datasets studied in this work). We should also note that methods such as LRSR \cite{LRSRNCwang2016lrsr} and CLUSTEN \cite{TIP16kim2016robust}, with more constraints for the regularization and therefore more parameters, have also been proposed. A similar strategy is also taken to solve the problem of Global Dimension Minimization in \cite{poling2014new}, which is used to estimate the fundamental matrix for the problem of two-view motion segmentation. The method is somewhat more accurate than LRR and SSC but it is computationally expensive. Another widely used clustering method is Spectral Clustering \cite{Ng2002}. The main idea is to search for possible relations between data points and form a graph that encodes the relations obtained by this search. Spectral clustering, based on the eigen-analysis of a pairwise similarity graph, finds a partitioning of the similarity graph such that the data points between different clusters have very low similarities and the data points within a cluster have high similarities. A simple measure of similarity between a pair of points lying in a vector space is the Euclidean distance.
However, such measures based on just two points will not work when the problem is to identify data points that are explained by a known structure with multiple degrees of freedom. For instance, in a 2D line fitting problem, any two points will perfectly fit a line irrespective of their underlying structure, hence a similarity cannot be derived using just two points. In such cases an effective similarity measure can be devised using higher order affinities (e.g. for a 2D line fitting problem, the least-squares error between three or more points provides a suitable affinity measure indicating how well those points approximate a line \cite{Agarwal2005}). There are several methods to represent higher order affinities using either a hyper-graph or a higher order tensor. Since spectral clustering cannot be applied directly to those higher order representations, they are commonly projected to a graph (discussed further in \secref{background}). It is also known that the number of elements in a higher order affinity tensor (or the number of edges in a hyper-graph) increases exponentially with the order of the affinities ($h$), which is directly related to the complexity of the model ($p$). Hence, for complex models it would not be computationally feasible (in terms of memory utilization or computation time) to generate the full affinity tensor (or hyper-graph) even for a moderately sized dataset. The commonly used method to overcome this problem is to use a sampled version of the full tensor (or hyper-graph) obtained using random sampling \cite{Govindu2005}, \cite{Agarwal2005}. The information content of the projected graph depends heavily on the quality of the samples used \cite{Chen2009}, \cite{Ochs2012}, \cite{Purkait2014} and we analyze this behavior in \secref{background}.
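The scale of this blowup is easy to quantify: the full order-$h$ affinity tensor has on the order of $\binom{N}{h}$ distinct entries, while the chance that a uniformly drawn $h$-tuple comes entirely from a single structure shrinks geometrically with $h$. A back-of-the-envelope Python sketch (the point counts below are hypothetical, not taken from our experiments) illustrates both effects:

```python
from math import comb

N, inliers = 500, 100   # hypothetical: 500 points, 100 of them on one structure

def all_inlier_prob(h):
    """Probability that h points drawn without replacement all lie on the structure."""
    p = 1.0
    for j in range(h):
        p *= (inliers - j) / (N - j)
    return p

# h = p + 1 grows with model complexity (e.g. h = 3 for 2D lines).
for h in (3, 5, 9):
    print(h, comb(N, h), round(all_inlier_prob(h), 8))
```

Already at $h = 9$ the full tensor has more than $10^{18}$ entries, and an all-inlier random tuple becomes vanishingly rare, which is why only a sampled subset of columns is ever materialized in practice.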
In this paper, we propose an efficient sampling method called cost-based sampling (CBS) to obtain a highly accurate approximation of the full graph required to solve multi-structural model fitting problems in computer vision. The proposed method is based on the observation that the usefulness of a graph for segmentation improves as the distribution of hypotheses (used to build the graph) approaches the actual parameter distribution for the given data. The approach is similar to the one proposed in \cite{MoGCVPRli2015subspace}, where a mixture of Gaussians is used to find the structures in the parameter space. The search is initialized by a few Gaussians and the parameters of the mixture are obtained through Expectation-Maximization steps. The grouping strategy is based on the above-mentioned optimization approach and similarly involves the use of a regularization parameter that is difficult to tune. When the number of Gaussians is too low, which amounts to seeking a few perfect samples, the noise cannot be characterized properly and some structures may be missed. Increasing the number of Gaussians is computationally expensive for the EM part. This is where our approach is most effective. Our proposed method benefits from a fast greedy optimization method to generate many samples and makes use of the inherent robustness of Spectral Clustering to occasional samples that may not be perfect. The underlying assumption in this approach is that the parameter distribution can reveal the underlying structures and that the generation of many good samples is the key to properly constructing the distribution for successful clustering. This basic approach can be implemented with different choices of cost functions and optimization methods. The choice of the optimization method mostly determines the speed and the choice of the cost function affects the accuracy.
For example, LBF \cite{LBFzhang2012hybrid} attempts to improve the generated samples of the cost function (chosen to be the $\beta$-number of the residuals of a model) by guiding the samples and increasing their size. Its optimization method is slower than our proposed method (which uses the derivatives of the cost function), and its chosen cost function is very steep around the structures, which makes the initialization of the method very difficult and can lead to missed structures. The recipe to overcome these shortcomings is based on using extra constraints, such as spatial contiguity, to ensure the purity of samples before increasing their sizes. In this paper, we approximate this actual parameter distribution using the k-th order cost function, which in turn enables us to generate samples using a greedy algorithm that incorporates a faster optimization method. The advantage of the proposed method is that it only uses information present in the data with respect to a putative model and does not require any additional assumptions such as spatial smoothness. The main contribution of this paper is the introduction of a fast and accurate data segmentation method based on an effective combination of the accuracy of a new sampling method with the speed of a good clustering method. The paper presents a reformulation of these methods in a way that makes them complementary. The proposed sampler is ensured to visit all structures in the data (with high probability) and guides each sample towards the closest structure. This is achieved by focusing on the distribution of putative models in parameter space and by providing samples with the highest likelihoods from each structure. The choice of the maximum likelihood method plays an important role in the speed of the sampler while the accuracy is still preserved. Furthermore, compared to other techniques, the proposed method incorporates fewer sensitive parameters that are difficult to tune.
In particular, we compare the proposed method with ones using a scale parameter to combine two unrelated cost functions. Such a parameter is often data-dependent and difficult to tune for a general solution. The rest of this paper is organized as follows. \secref{background} discusses the use of clustering techniques for robust model fitting and the need for better sampling methods. \secref{method} describes the proposed method in detail and \secref{experimentalanalysis} presents experimental results involving real data, and comparisons with state-of-the-art model-fitting techniques. Additional discussion regarding the merits and shortcomings of the method is presented in \secref{Discussion} followed by a conclusion in \secref{conclusion}. \section{Background} \label{sec:background} Consider the problem of clustering data points $ X = \left[ x_i \right]^N_{i=1}; x_i \in \mathbb{R}^d$ assuming that there are underlying models (structures) $\Theta = \left[ \theta^{(j)}\right]_{j=1}^m; \theta^{(j)} \in \mathbb{R}^p$ that relate some of those points together. Here $N$ is the number of data points and $m$ is the number of structures in the dataset, with the zeroth structure assigned to outliers. Clustering a data-set in such a way that elements of the same group have higher similarity than elements in different groups is a well-studied problem with attractive solutions like spectral clustering. Spectral clustering operates on a pairwise undirected graph with an affinity matrix $G$ that contains affinities between pairs of points in the dataset. As explained earlier, for model fitting applications, only affinities of order higher than pairwise reveal a useful similarity measure, and spectral clustering cannot be applied directly to higher order affinities. Agarwal \mbox{\emph{et al.\ }} \cite{Agarwal2005} introduced an algorithm where the higher order affinities (in multi-structural multi-model fitting problems) were represented as a hyper-graph.
They proposed a two-step approach to partition a hyper-graph with affinities of order $h=p+1$ ($p$ is the number of parameters of the model). In the first step, the hyper-graph was approximated with a weighted graph using a clique averaging technique. The resulting graph was then segmented using spectral clustering. Constructing the hyper-graph with all possible $(p+1)$-point edges is very expensive to implement. As such, they used a sampled version of the hyper-graph constructed by random sampling. Govindu \cite{Govindu2005} posed the same problem in a tensor-theoretic approach where the higher order affinities were represented as an $h$-dimensional tensor $\mathcal{P}$. Using the relationship between the higher order SVD (HOSVD) of the $h$-mode representation and the eigenvalue decomposition, \cite{Govindu2005} showed that the super-symmetric tensor $\mathcal{P}$ (the similarity does not depend on the ordering of points in the $h$-tuple) can be decomposed into a pairwise affinity matrix using $G = PP^\top$. Here $P$ is the flattened matrix representation\footnote{The flattened matrix ($P_d$) along dimension $d$ is a matrix with each column obtained by varying the index along dimension $d$ while holding all other dimensions fixed.} of $\mathcal{P}$ along any dimension. The size of the matrix $P$ is still very large. For example, the size of $P$ for a similarity tensor constructed using $h$-tuples from a dataset containing $N$ data points is $N \times N^{h-1}$. As with the hyper-graphs, to make the computation tractable, Govindu \cite{Govindu2005} suggested using a sampled version of the flattened matrix ($H \approx P$). Each column of $H$ was obtained using the residuals to a model ($\theta$) estimated using randomly picked $h-1$ data points. In the remainder of the text we adopt this tensor-theoretic approach. The sampling strategy used to construct the sample matrix $H$ critically affects the clustering and thus the overall performance of the model fitting solution.
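To fix ideas, the sampled matrix $H$ and the graph $G = HH^\top$ can be assembled in a few lines. The Python sketch below (synthetic 2D line data, random 3-tuples, and an assumed normalization constant $\sigma$; a sketch of the random-sampling baseline, not the authors' implementation) builds one such graph:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: two noisy 2D lines plus gross outliers.
t = rng.uniform(-1, 1, 50)
line1 = np.c_[t, 0.8 * t + 0.2] + rng.normal(0, 0.02, (50, 2))
line2 = np.c_[t, -0.5 * t - 0.1] + rng.normal(0, 0.02, (50, 2))
outliers = rng.uniform(-1, 1, (20, 2))
X = np.vstack([line1, line2, outliers])            # N x 2 data matrix
N, n_H, sigma = len(X), 300, 0.05                  # sigma: assumed normalization constant

def fit_line(pts):
    """Total-least-squares line through pts, returned as (unit normal, offset)."""
    normal = np.linalg.svd(pts - pts.mean(0))[2][-1]
    return normal, -normal @ pts.mean(0)

# Column l of H: affinities exp(-r^2 / 2 sigma^2) of all N points to the
# line hypothesis fitted on a randomly drawn 3-tuple.
H = np.empty((N, n_H))
for l in range(n_H):
    normal, offset = fit_line(X[rng.choice(N, 3, replace=False)])
    r = np.abs(X @ normal + offset)                # point-to-line residuals
    H[:, l] = np.exp(-r**2 / (2 * sigma**2))

G = H @ H.T                                        # pairwise affinity graph
assert G.shape == (N, N)
# With points ordered line1 / line2 / outliers, the within-structure block
# should carry more affinity mass than the structure-to-outlier block.
assert G[:50, :50].mean() > G[:50, 100:].mean()
```

Each column of $H$ is one hypothesis's contribution, so $G$ accumulates the per-hypothesis terms $G^{(l)}$; how informative the resulting block structure is depends entirely on how the tuples are sampled, which is the subject of the next subsection.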
\subsection{Why is the sampling distribution important?} In the tensor-theoretic approach, the pairwise affinity matrix $G$ is constructed by multiplying the matrix $H$ with its transpose, where $H(i,l) = e^{-r^2_{\theta_l}(i) /2\sigma^2}$, $r^2_{\theta_l}(i)$ is the squared residual of point $i$ to model $\theta_l$ (obtained by fitting to a tuple $\tau_l$) and $\sigma$ is a normalization constant. \begin{equation} G_{[N \times N]} = HH^\top = \sum_{l=1}^{n_H} \underbrace{\left[H^{(l)}{H^{(l)}}^\top \right]}_{G^{(l)}_{[N \times N]}} \label{equ:graphCont} \end{equation} where $H^{(l)}$ is the $l^{th}$ column of $H$ corresponding to the hypothesis $\theta_l$, $G^{(l)}$ is the contribution of hypothesis $\theta_l$ to the overall affinity matrix ($G$) and $n_H$ is the total number of hypotheses. When a model hypothesis $\theta_l$ is close to an underlying structure in the data (Hypothesis A in \figref{lineEx:subfig1}), the inlier points of that structure have relatively small residuals and the resulting $G^{(l)}$ (\figref{lineEx:subfig2}) has high affinities between the inliers and low affinity values for all other point pairs (outlier-outlier, outlier-inlier). On the other hand, when a model hypothesis $\theta_l$ is far (in parameter space) from any underlying structure, the presumption is that the resulting residuals would be large, leading to $G^{(l)} \approx \textbf{0}_{[N \times N]}$. However, as seen in \figref{lineEx:subfig1} (for Hypothesis B), this is not always the case in model fitting. It is highly likely that there exist some data points that give small residuals even for such a hypothesis (far from any underlying model), leading to high $H(i,l)$ values. The resulting $G^{(l)}$ (\figref{lineEx:subfig3}) would have high affinities between some unrelated points, which can be seen as noise in the overall graph.
The effect of these bad hypotheses can be amplified by the fact that the normalization factor $\sigma$ is often overestimated (using robust statistical methods) when the hypothesis $\theta_l$ is far (in parameter space) from any underlying structure. It is important to note that if none of the hypotheses (used in constructing the graph) are close to an underlying structure, then the overall graph would not have higher affinities between the data points in that structure and the clustering methods would not be able to segment that structure. The above example shows that the sampling process influences the level of noise in the graph. While spectral clustering can tolerate some level of noise, it has been proved that this noise level is related to the size of the smallest cluster we want to recover (tolerable noise level goes up rapidly with the size of the smallest cluster) \cite{Balakrishnan2011}. As model fitting often involves recovering small structures, it is highly important to limit the noise level in the affinity matrix. \begin{figure} \centering \subfloat[Data] { \includegraphics[width=3.25in]{LnoiseEx_data.pdf} \label{fig:lineEx:subfig1} } \subfloat[$G^{(A)}$]{ \includegraphics[width=1.6in]{LnoiseEx_Ha.pdf} \label{fig:lineEx:subfig2} } \subfloat[$G^{(B)}$]{ \includegraphics[width=1.6in]{LnoiseEx_Hb.pdf} \label{fig:lineEx:subfig3} } \caption{An example line fitting scenario on a synthetic dataset containing two lines and some outliers. The lines A and B show two model hypotheses, while the shaded areas around the lines indicate the corresponding $\sigma$ values. (b) and (c) show the contributions of hypotheses A and B to the overall graph respectively. The data points are sorted according to their model affiliation, where the first 50 data points belong to line one followed by line two (50 points) and the outliers (20 points).
The dashed lines indicate the cluster boundaries.} \label{fig:lineEx} \end{figure} For any two data points $x_i,x_j$ we can write: \begin{equation} G(i,j) = \frac{1}{n_H} \sum_{l=1}^{n_H} \underbrace{e^{-\frac{\left (r^2_{\theta_l}(i) + r^2_{\theta_l}(j)\right )}{2\sigma^2}}}_{g_{ij}(\theta_l)} \xrightarrow[n_H\to\infty]{} \int P_\theta \cdot g_{ij}(\theta)~d\theta \label{equ:gij} \end{equation} For any model fitting problem with $p > 2$ there exist infinitely many models $\theta$ for which $g_{ij}(\theta) \rightarrow 1$. This implies that for any two points, $G(i,j)$ (according to \eqnref{gij}) can be maximized or minimized by choosing $P_\theta$ accordingly. For a graph to have the block-diagonal structure suitable for clustering, $G(i,j)$ needs to be large for $x_i \wedge x_j \in \theta_t$ and small otherwise. If hypotheses are selected from a Gaussian mixture distribution with sharp peaks around the underlying model parameters and low density elsewhere, with $\theta_t$ representing the true underlying structures, we have: \begin{equation} P_\theta = \sum_{t=1}^{m} \phi_t ~\mathcal{N} (\theta_t, \Sigma_t ). \end{equation} The edge weights approach the following values when $\Sigma_t \to \textbf{0}$: \begin{equation} G(i,j) \to \left\{\begin{matrix} \phi_t & i \wedge j \in \theta_t\\ 0 & i \wedge j \notin \theta_t \end{matrix}\right. \end{equation} This results in a graph that has a block-diagonal structure suitable for clustering. Of course, generating sample hypotheses from this distribution is not possible because it is unknown until the problem is solved. This point is further illustrated using a simple model fitting experiment on a synthetic dataset containing four lines. Each line contains 100 data points with additive Gaussian noise $\mathcal{N}(0, 0.02^2)$, while 50 gross outliers were also added to those lines.
First, $500$ hypotheses were generated using uniform sampling, random sampling (using 5-tuples) and the sampling scheme proposed in this paper (CBS). These hypotheses were then used to generate the three graphs shown in \figref{examleGraph}. As the data is arranged based on structure membership, a properly constructed graph should show a block diagonal structure with high similarities between points in the same structure and low similarities for data from different structures. The figure shows that while the CBS method has resulted in a graph favorable for clustering, the other two sampling strategies have produced graphs with little information. The corresponding hypothesis distributions (\figref{examleGraph} (e-g)) show that only CBS has generated a large number of hypotheses close to the underlying structures. \begin{figure*}[!t] \centering \subfloat[Data]{ \includegraphics[width=1.5in] {LfitEx_data.pdf} \label{fig:EstimationErrorCDF:subfig1} } \subfloat[Cost-based sampling]{ \includegraphics[width=1.5in] {LfitEx_mCBS.pdf} \label{fig:EstimationErrorCDF:subfig2} } \subfloat[Uniform sampling]{ \includegraphics[width=1.5in] {LfitEx_mU.pdf} \label{fig:EstimationErrorCDF:subfig3} } \subfloat[Random sampling]{ \includegraphics[width=1.5in] {LfitEx_mR.pdf} \label{fig:EstimationErrorCDF:subfig4} } \subfloat[Cost-based sampling]{ \includegraphics[width=1.5in] {LfitEx_pCBS.pdf} \label{fig:EstimationErrorCDF:subfig5} } \subfloat[Uniform sampling]{ \includegraphics[width=1.5in] {LfitEx_pU.pdf} \label{fig:EstimationErrorCDF:subfig6} } \subfloat[Random sampling]{ \includegraphics[width=1.5in] {LfitEx_pR.pdf} \label{fig:EstimationErrorCDF:subfig7} } \caption{The synthetic dataset containing four line structures is shown in (a) while the graphs produced by the cost-based sampling, uniform sampling (-10,10) and random sampling methods are shown in (b-d) respectively. The respective hypothesis distributions are shown in (e-g).
While the CBS method has resulted in a graph favorable for clustering, the other two sampling strategies have produced graphs with little information.} \label{fig:examleGraph} \end{figure*} Govindu \cite{Govindu2005} randomly sampled $h-1$ data points (for affinities of order $h$) and calculated a column of $H$ by computing the affinity of each point in the dataset to those samples. It is well known that the probability of obtaining a clean sample, leading to a hypothesis close to a true structure in data, decreases exponentially with the size of the tuple \cite{Agarwal2005}. Hence it becomes increasingly unlikely to obtain a good graph for models with a high number of parameters using random sampling. There are several techniques in the literature that try to tackle the clustering problem by tapping into available information regarding the likelihood distribution of good hypotheses. For instance, spectral curvature clustering \cite{Chen2009}, which is an algorithm designed for affine subspace clustering, employs an iterative sampling mechanism that increases the chance of finding good hypotheses. In this scheme, a randomly generated affinity matrix ($H$) is used to build a graph, which is partitioned using the spectral clustering method to generate an initial segmentation of the dataset. Data points within each segment of this clustering are then sampled to generate a new set of columns of $H$. This process is repeated several times to improve the final clustering results. Similarly, Ochs and Brox \cite{Ochs2012} used higher order affinities in a hyper-graph setting for motion segmentation of video sequences. In their method, the affinity matrix is obtained using a sampling strategy that is partly random and partly deterministic. The higher order affinities are based on 3-tuples generated by choosing two points randomly. The third points are then chosen as a mixture of the $12$ nearest neighbor points and $30$ random points.
The previous guided sampling approaches generate the columns of the affinity matrix using minimal-size tuples. Purkait \mbox{\emph{et al.\ }} \cite{Purkait2014} advocated the use of larger tuples and showed that if those tuples are selected correctly, the hypothesis distribution would be closer to the true model parameters than with smaller tuples. However, selecting larger all-inlier (correct) tuples using random sampling is highly unlikely. Purkait \mbox{\emph{et al.\ }} \cite{Purkait2014} suggested using Random Cluster Models (RCM) \cite{Swendsen1987} to improve the sampling efficiency. RCM is based on selecting the tuples iteratively in such a way that at every iteration, the samples are selected using the segmentation obtained by enforcing spatial smoothness on the results of the previous iteration. This approach is particularly advantageous where the application satisfies the spatial smoothness requirement. Our proposed approach for constructing the affinity matrix, without relying on the existence of spatial smoothness, is explained in the next section. \section{Proposed Method} \label{sec:method} This section describes a new approach to the multi-structural model fitting problem. Similar to \cite{Agarwal2005}, \cite{Govindu2005}, we approach multi-structural fitting as a clustering problem with the intention of applying spectral clustering. In this approach, the pairwise affinity matrix $G$ for spectral clustering is obtained by projecting the higher order affinity tensor ($\mathcal{P}$) via multiplying an approximated flattened matrix $H$ with its transpose. For affinities of order $h$, each column of $H$ is obtained by sampling $h-1$ data points and calculating the affinity of each point to those sampled points. The affinity of a data point $i$ to an $h-1$ tuple is calculated as $e^{-r^2_{\theta_l} (i)/(2\sigma^2)}$ where $\theta_l$ is the model fitted to the $h-1$ tuple and $\sigma$ is the normalization factor.
For the sake of clarity, in the remainder of this text, an $h-1$ tuple ($\tau_l$) used to generate a column of $H$ is referred to as an edge while its respective model ($\theta_l$) is called a hypothesis. As discussed in \secref{background}, the way we sample the edges affects the information content of the resulting graph and our ultimate goal is to sample edges in such a way that the distribution of their associated hypotheses resembles the true distribution of the model parameters. While the true distribution of the model parameters for a given dataset $p(\theta~|~X)$ is unknown until the problem is solved, using Bayes' theorem it can be written as follows: \begin{equation} p(\theta|X) \propto p(X|\theta) p(\theta) \end{equation} where $p(X|\theta)$ is the likelihood of observing data $X$ under the model $\theta$ and $p(\theta)$ is the prior distribution of $\theta$. Given that the prior is uninformative (i.e. any parameter vector is equally likely), the posterior is largely determined by the data (the posterior is data-driven) and can be approximated by: $p(\theta|X) \propto p(X|\theta)$. A robust objective function is often used in multi-structural model fitting applications to quantify the likelihood of the existence of a structure in data \cite{Stewart1997}. On that basis, we would argue that such a function can be a good approximation of the likelihood of the model parameters. For example, the sample consensus objective function as employed in RANSAC is expected to have a peak in places (in the parameter space) where a true structure is present and low values where there are no structures. It should be noted here that when there are structures of different sizes, the sample consensus function assigns higher values to larger structures (hence it is biased towards large structures).
In this work, we select the cost function of the least k-th order statistics (LkOS) estimator as the objective function, as it has been shown to perform stably with a high breakdown point \cite{Rousseeuw2005} in various applications and it is not biased towards large structures (LkOS is biased towards structures with low variance, which is a desirable property). A modified version of the LkOS cost function used in \cite{Bab-Hadiashar2008} is as follows: \begin{equation} C({\theta}) = \sum_{j=0}^{p-1}r_{i_{k-j,\theta}}^2(\theta) \label{equ:kthSortedCostFunction} \end{equation} where $r_{i}^2(\theta)$ is the $i$-th sorted squared residual with respect to model $\theta$ and $i_{k, \theta}$ is the index of the $k$-th sorted squared residual with respect to model $\theta$. Here $k$ refers to the minimum acceptable size of a structure in a given application and its value should be significantly larger than the dimension of the parameter space ($k \gg p$). Because the above cost function is designed to have minima around the underlying structures, the model parameters likelihood function can be expressed as: \begin{equation} P_\theta \propto p(X|\theta) \approx \frac{1}{Z} e^{-C({\theta})} . \label{equ:SamplingDist} \end{equation} The above function is highly non-linear and its evaluation over the entire parameter space, required for calculating the normalizing constant $Z$, would not be feasible. The common approach for sampling from a distribution that can only be evaluated up to a proportional constant at specified points is to use the Markov Chain Monte Carlo (MCMC) method (e.g. by using the Metropolis-Hastings algorithm). However, such algorithms need a good proposal distribution to be effective, and simple proposals like a random walk would be inefficient and may not traverse the full parameter space \cite{Andrieu2003}.
In particular, setting up a random walk proposal requires information regarding the span of the model parameters, which is unknown until the problem is solved. \subsection{Sampling edges using the robust cost function } Using derivatives of the order statistics function in \eqnoref{kthSortedCostFunction}, a greedy iterative sampling strategy was proposed in \cite{Bab-Hadiashar2008} that is intentionally biased towards generating data samples from a structure in the data. This sampling strategy was then used to generate putative model hypotheses for different size tuples in conjunction with the fit and remove strategy to recover multiple structures in data \cite{Tennakoon2015}, \cite{Bab-Hadiashar2008}. Because the fit and remove strategy is susceptible to errors in the initial stages, the sampling had to be reinitialized (randomly) several times to reduce the probability of error propagation in the sequential fit and remove stages. In this paper, we propose a modified version of this iterative update procedure (recalled in Algorithm~\ref{alg:HMSS}) to generate model estimates (edges) that are close to the peaks of the true parameter density function $p(\theta|X)$. Each edge used in constructing the $H$ matrix of the proposed method is obtained as follows: Initially, an $h$-tuple ($h = p+2$) is picked according to the inclusion weights $W$ (explained later). Using this tuple as the starting point, the following update is run until convergence. A model hypothesis is generated using the selected tuple, and the residuals from each data point to this hypothesis are calculated. These residuals are then sorted and the $h$ points around the $k$-th sorted index are selected as the updated tuple for the next iteration.
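The update step just described can be sketched as follows. This is illustrative code for 2-D line fitting in which the `fit` and `residuals` helpers are caller-supplied, and the convergence test is simplified to parameter stability rather than the stopping criterion used in the paper:

```python
import numpy as np

def guided_update(points, tuple_idx, k, fit, residuals, max_iter=50):
    """One edge search: repeatedly refit to the h points clustered
    around the k-th sorted residual of the current hypothesis."""
    h = len(tuple_idx)
    theta = fit(points[tuple_idx])
    for _ in range(max_iter):
        order = np.argsort(residuals(points, theta) ** 2)
        tuple_idx = order[k - h:k]            # h points around index k
        new_theta = fit(points[tuple_idx])
        if np.allclose(new_theta, theta):     # simplified convergence test
            break
        theta = new_theta
    return theta, tuple_idx
```

Starting from a clean tuple on one structure, the sorted-residual update keeps drawing points from that structure, since its members occupy the lowest ranks.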
In practice, the above update step has the following property: If the current $h$-tuple is a clean sample (all inliers) from a structure in data, there is a high probability that the next sample will also be from the same structure, as there should be at least $k$ points agreeing with each true structure. On the other hand, if the current hypothesis is not supported by $k$ points (not a structure in data), the next hypothesis would be some distance away in the parameter space. It has been shown that residuals of a data structure with respect to an arbitrary hypothesis have a high probability of clustering together in the sorted residual space \cite{Toldo2008}, \cite{Haifeng2003}. As the next sample is selected from the sorted residual space, the probability of hitting a clean sample would then be higher than selecting it randomly. Following \cite{Tennakoon2015}, we use the following criterion to decide whether the update procedure has converged to a structure in data: \begin{equation} \begin{split} F_{stop} = \left ( \frac{1}{h}\sum_{j=k-h+1}^{k}\underbrace{{r_{i_{j,\theta_{(l-1)}}}^2(\theta_l)}}_\text{(a)} < r_{i_{k,\theta_l}}^2(\theta_l) \right ) \wedge \\ \left ( \frac{1}{h}\sum_{j=k-h+1}^{k}\underbrace{{r_{i_{j,\theta_{(l-2)}}}^2(\theta_l)}}_\text{(b)} < r_{i_{k,\theta_l}}^2(\theta_l) \right ). \end{split} \label{equ:stopCriterion} \end{equation} Here $(a)$ and $(b)$ are the squared residuals of the edge points in iterations $l-1$ and $l-2$ with respect to the current parameters $\theta_l$. This criterion checks the data points associated with the two previous samples to see if the average residuals of those points (with respect to the current parameters) are still lower than the inclusion threshold associated with having $k$ points (assuming that a structure has at least $k$ points implies that data points with residuals less than $ r_{i_{k,\theta_l}}^2(\theta_l)$ are all inliers).
This indicates that the samples selected in the last three iterations are likely to be from the same structure hence the algorithm has converged. \renewcommand{\algorithmicrequire}{\textbf{Inputs:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \newcommand{\LINEIF}[2]{% \STATE\algorithmicif\ {#1}\ \algorithmicthen\ {#2} \algorithmicend\ \algorithmicif% } \begin{algorithm}[!t] \caption{Step-by-step algorithm of sample generation ($runCBS\_SG$)} \label{alg:HMSS} \begin{algorithmic} [1] \REQUIRE Data Points (${X} \in \left [ x_i \right ]_{i=1}^N$), minimum cluster size ($k$), $T$, inclusion weights ($W$) \ENSURE Final data indexes $I_{l}$, Scale $\sigma$ \STATE $l_{max} \gets 50$, $h \gets p+2$, $l \gets 0$ \STATE Select a $h$-tuple ($I_{0}$) from the data points according to weights $W$. \STATE Generate model hypothesis $\theta_0$ using the $h$-tuple $I_{0}$. \REPEAT \STATE $[r^2(\theta_l), i_{\theta_l}] = $SortedRes(${X}, \theta_l)$. \STATE $I_{l+1} \gets [x_{i_{\theta_l}(j)}]_{j=k-h+1}^{k}$ \STATE $\theta_{l+1} \gets$ LeastSquareFit$\left ( I_{l+1} \right )$ \STATE Evaluate the stopping criterion ($F_{stop}$) \LINEIF{$F_{stop}$}{break} \UNTIL{$( l{++} > l_{max}) $} \STATE $\sigma \gets$ MSSE($X,\theta_{l}, k , T$) \end{algorithmic} \end{algorithm} \subsection{Sub-sampling data} Although the above update procedure has a high probability of generating an edge that results in a hypothesis close to a peak in $p(\theta|X)$, there is no guarantee that all the structures present in the data will be visited given that the update step is reinitialized from random locations. If some of the structures were not visited by the sampling procedure, the resulting graph would not contain the information required to identify those structures. To ensure that the algorithm would visit all the structures in data, we propose to use a data sub-sampling strategy. 
Each run of the update procedure in Algorithm~\ref{alg:HMSS} is executed only on a subset of data selected based on an inclusion weight ($W$). The inclusion weight, which is initialized to one, is designed in such a way that at every iteration, it will give higher importance to data points that are not modeled by the hypotheses used in the previous iterations. This will progressively increase the chance of unmodeled data being included in the sampling process. This idea is similar to Bagging predictors \cite{Breiman1996} with boosting \cite{Freund1996},\cite{Freund1997} in machine learning. In Bagging predictors, multiple subsets of data formed by bootstrap replicates of the dataset are used to estimate the models, which are then aggregated to get the final model. Boosting improves the bagging process by giving importance to misclassified data points in successive classifiers. The overall edge generation procedure is as follows: A data subset of size $N_s$ is sampled from data using the inclusion weights $W$ without replacement ($W$ is normalized in the $sampleData(\cdot)$ function). This sub-sample is then used in the update procedure in Algorithm~\ref{alg:HMSS}, which produces an edge. Next, the inclusion weights $W$ of the inliers to the above hypothesis are decreased while the inclusion weights of the remaining points are increased. This process is repeated for a fixed number of iterations. The complete steps of the proposed method (CBS) are listed in Algorithm~\ref{alg:PM}. The scale of noise plays a crucial role in the success of segmentation methods. In spectral clustering based model fitting methods, the scale is used to convert the residuals to an affinity measure. While most competing algorithms require this as an input parameter \cite{Purkait2014}, \cite{Pham2014}, the proposed method estimates the scale of noise from the given data. In this implementation, we selected the MSSE \cite{Bab-Hadiashar1999} to estimate the scale of noise.
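The inclusion-weight update described above (the corresponding steps of Algorithm~\ref{alg:PM}) can be sketched as follows; the names here are illustrative, and the cap value $w$ of the algorithm is passed in as `cap`:

```python
import numpy as np

def update_weights(W, inlier_idx, cap):
    """Boost every point, then penalize the inliers of the latest
    hypothesis; runaway weights are reset and the vector renormalized."""
    W = W * 2.0                       # W <- W x 2
    W[inlier_idx] = W[inlier_idx] / 4.0
    W[W > cap] = 1.0 / len(W)         # reset weights exceeding the cap
    return W / W.sum()                # W <- W / sum(W)
```

After one update, a point outside the fitted structure is four times as likely to be drawn in the next sub-sample as an inlier to it, which is what progressively steers the sampler towards unmodeled structures.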
The MSSE algorithm requires a constant threshold $T$ as an input. This threshold defines the inclusion percentage of inliers. Assuming a normal distribution for noise, it is usually set to 2.5, i.e. $T=2.5$ will include about 99\% of normally distributed inliers. Desirable properties of this estimator for dealing with small structures were discussed in \cite{Hoseinnezhad2010}. \begin{algorithm}[!t] \caption{Step-by-step algorithm of proposed model-fitting methods} \label{alg:PM} \begin{algorithmic} [1] \REQUIRE Data Points ($X_{d \times N} \gets \left [ x_i \right ]_{i=1}^N$), minimum size of structure ($k$), Number of structures ($n_c$), number of hypotheses ($n_H$), $T \gets [2.0 \sim 3.5]$ \STATE $W \gets [\frac{1}{N} \dots \frac{1}{N}]_{1 \times N}$; $N_s \gets N/n_c$; $w \gets \frac{20}{N}$ \REPEAT \STATE Sample $N_s$ data points from $X$ based on inclusion weights $W$; $[X_s, W_s] \gets sampleData(X, W, S_f)$. \STATE $[I_s, \sigma] \gets runCBS\_SG(X_s, k, T, W_s)$ \STATE Calculate residuals ($r_{I_s}^2$) to all data points from the h-tuple $I_s$. \STATE $H(:,i) \gets exp(-r_{I_s}^2/2\sigma_i^2)$ \STATE Calculate inliers $C_{inl}$ using $r_{I_s},\sigma_i$. \STATE $W \gets W \times 2$ \STATE $W(C_{inl}) \gets W(C_{inl}) \div 4$ \STATE $W(W>w) \gets {1}/{N}$ \STATE $W \gets {W}/{sum(W)}$ \UNTIL{$i{++} > n_H$} \STATE $G \gets H H^\top$ \STATE $[labels] \gets spectralClustering(G, n_c)$ \end{algorithmic} \end{algorithm} \section{Experimental Results} \label{sec:experimentalanalysis} We have evaluated the performance of the proposed method for multi-object motion segmentation on several well-known datasets. The results of the proposed cost-based sampling method (CBS) were then compared with state-of-the-art robust multi-model fitting methods.
The selected methods either use higher order affinities (Spectral Curvature Clustering (SCC) \cite{Chen2009}, HOSC \cite{Purkait2014} and OB \cite{Ochs2012}) or are based on energy minimization (RCMSA \cite{Pham2014}, PEARL \cite{Boykov2001} and QP-MF \cite{Jin2011}). The accuracy of all methods was evaluated using the commonly used clustering error (CE) measure given in \cite{Purkait2014}: \begin{equation} CE = \min_{\Gamma} \frac{\sum_{i=1}^{N} \delta \left( L^*(i) \neq L_r^\Gamma (i) \right) }{N} \times 100 \label{equ:custeringError} \end{equation} where $L^*(i)$ is the true label of point $i$, $L_r(i)$ is the label obtained via the method under evaluation and $\Gamma$ is a permutation of the labels. The function $\delta(\cdot)$ returns one when the input condition is true and zero otherwise. The proposed CBS algorithm was coded in MATLAB (the code is publicly available at https://github.com/RuwanT/model-fitting-cbs) and the results for the competing methods were generated using the code provided by the authors of those works. The experiments were run on a Dell Precision M3800 laptop with an Intel i7-4712HQ processor. \subsection{Analysis of the proposed method} In this section we investigate the significance of each part of the proposed algorithm and the effect of its parameters on its accuracy. This analysis was conducted using a two-view motion segmentation problem (see \secref{twoviewMS} for more details). We used the ``\textit{posters-checkerboard}'' sequence from the RAS dataset \cite{Rao2010} to evaluate the significance of the main components of the CBS method. This sequence contains three rigidly moving objects with 100, 99 and 81 point matches respectively, plus 99 outlier points. In the first experiment the matrix $H$ was generated with edges obtained by pure random sampling (RDM), by the CBS method without the sub-sampling strategy, i.e. with lines 3, 7-10 removed from Algorithm~\ref{alg:PM} (CBS-nSS), and by the complete proposed method (CBS).
For each sampling method the number of hypotheses ($n_H$) was varied and the mean clustering error and run time were recorded (averaged over 100 runs per each $n_H$). \figref{posterCheckerboard:subfig5} shows the variation of mean clustering error with the sampling time (computing time). The results show that for this problem, accurate identification of the models could not be achieved with pure random sampling even when a large number of edges was sampled. It also shows that the sub-sampling strategy of the proposed CBS method significantly contributes towards accurate and efficient identification of the underlying models in data. Next, we used the same image sequence to study the variation in accuracy of the proposed method with the value of parameter $k$. This parameter defines the minimal acceptable size for a structure (in number of points) in a given application. Here we vary the value of $k$ from 10 to 80 (CBS uses edges of size 10 and the smallest structure in this sequence has only 81 points, hence any value outside this range is not realistic). The number of hypotheses was set to 100 for both sampling methods. Results plotted in \figref{posterCheckerboard:subfig6} show that for CBS-nSS and CBS the clustering error reduces steeply up to around $k=20$. In CBS-nSS the CE remains relatively unchanged after that, while in CBS the clustering error starts to increase when $k$ goes beyond $40$. This behavior can be explained as follows: The CBS method estimates the scale of noise from data and the analysis of \cite{Hoseinnezhad2010} showed that the estimation of the noise scale from data requires \textit{at least} $20$ data points to limit the effects of finite sample bias. As such, the CBS method would not have high accuracy when $k<20$. In addition, the data sub-sampling in CBS reduces the number of points available to each run of the sample generator, hence the increased clustering error for large $k$ values.
Using large values for $k$ is also not desirable because smaller structures would be ignored. Next, we compared the proposed hypothesis generation process against several well-known sampling methods for robust model fitting (e.g. MultiGS \cite{Tat-Jun2012} and Lo-RANSAC~\cite{Chum2003}). These methods are designed to bias the sampling process towards selecting points from a structure in data. For completeness we have also included pure spatial sampling (generating hypotheses using points close in space, picked via a KD-tree) and random sampling. Similar to the proposed method, the hypotheses from these sampling methods were used to generate a graph, which was then cut to perform the clustering. \figref{posterCheckerboard:subfig7} shows that the CBS method is capable of generating highly accurate clusterings faster than the other sampling methods. It should be noted here that while we have only presented the results for one two-view motion segmentation case, similar trends were observed across all other problems tested in this paper.
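For reference, the clustering error of \eqnref{custeringError} used throughout these comparisons can be sketched as a brute-force minimization over label permutations (practical only for a small number of clusters; `clustering_error` is our illustrative name):

```python
import itertools
import numpy as np

def clustering_error(true_labels, pred_labels):
    """CE in percent: mismatches under the best label permutation."""
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    labels = np.unique(pred_labels)
    best = len(true_labels)
    for perm in itertools.permutations(labels):
        mapping = dict(zip(labels, perm))
        remapped = np.array([mapping[l] for l in pred_labels])
        best = min(best, int(np.sum(remapped != true_labels)))
    return 100.0 * best / len(true_labels)
```

For larger numbers of clusters, the permutation search is usually replaced with an optimal assignment (Hungarian algorithm), but the brute-force form matches the definition directly.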
\begin{figure*}[!t] \centering \subfloat[Ground truth]{ \includegraphics[height=1.25in,width=1.5in] {posterCheckerBoard_GT.pdf} \label{fig:posterCheckerboard:subfig1} } \subfloat[Random]{ \includegraphics[height=1.25in,width=1.5in] {posterCheckerBoard_R.pdf} \label{fig:posterCheckerboard:subfig2} } \subfloat[CBS-nSS]{ \includegraphics[height=1.25in,width=1.5in] {posterCheckerBoard_CBSnSS.pdf} \label{fig:posterCheckerboard:subfig3} } \subfloat[CBS]{ \includegraphics[height=1.25in,width=1.5in] {posterCheckerBoard_CBS.pdf} \label{fig:posterCheckerboard:subfig4} } \subfloat[]{ \includegraphics[height=2.0in,width=2.2in] {posterCheckerBoard_res.pdf} \label{fig:posterCheckerboard:subfig5} } \subfloat[]{ \includegraphics[height=1.9in,width=2.2in] {posterCheckerBoard_k.pdf} \label{fig:posterCheckerboard:subfig6} } \subfloat[]{ \includegraphics[height=2.0in,width=2.2in] {posterCheckerBoard_sampling.pdf} \label{fig:posterCheckerboard:subfig7} } \caption{The results on the ``\textit{posters-checkerboard}'' sequence. \ref{fig:posterCheckerboard:subfig1} shows the ground truth clustering while \ref{fig:posterCheckerboard:subfig2} - \ref{fig:posterCheckerboard:subfig4} show the clustering obtained with RDM, CBS-nSS and CBS at 1s. \ref{fig:posterCheckerboard:subfig5} and \ref{fig:posterCheckerboard:subfig6} show the variation of clustering error with time and with the value of parameter $k$ respectively, while \ref{fig:posterCheckerboard:subfig7} shows the variation of clustering error with time for different sampling methods (best viewed in color).} \label{fig:posterCheckerboard} \end{figure*} \subsection{Two-view motion segmentation} \label{sec:twoviewMS} Two-view motion segmentation is the task of identifying the point correspondences of each object in two views of a dynamic scene that contains multiple independently moving objects.
Provided that the point matches between the two views are given as $[X_1, X_2]$ where $X_i = (x, y, 1)^\top$ is the coordinate of a point in view $i$, each motion can be modeled using a fundamental matrix $F \in \mathcal{R}^{3 \times 3}$ as \cite{Torr1997}: \begin{equation} X_1^\top F X_2 = 0 \end{equation} The distance from a given model to a point pair can be measured using the Sampson distance \cite{Hartley2003}. We tested the performance of the CBS method on the Adelaide-RMF dataset \cite{HoiSim2011}, which contains key-point matches (obtained using SIFT) of dynamic scenes together with the ground truth clustering. The clustering error and the computational time of the CBS method on each sequence, together with those of the competing methods (PEARL, FLOSS, RCMSA and QP-MF), are given in \tabref{fundamentalRes}. The results show that in comparison to the competing methods, the proposed method has achieved comparable or better accuracy over all sequences. Moreover, on average the computation time of the proposed method is around four times less than that of QP-MF and around twice that of RCMSA, whose computational bottlenecks are implemented in C (MATLAB MEX), whereas our method is implemented as a simple MATLAB script. One would expect significant improvements in terms of speed from a C language implementation. In these experiments the parameter $k$ of the proposed method was set to $k = \min(0.1 \times N , 20)$. The number of samples in QP-MF was set to 200 (determined through trial and error: no significant improvement in accuracy was observed when the number of samples was increased beyond 200 for a test sequence). \begin{table*}[htbp] \centering \caption{Two-view motion segmentation results on the Adelaide-RMF dataset.
The Median CE values of PEARL and FLOSS \cite{Lazic2009} reported in \cite{Pham2014} are used here.} \begin{tabular}{|r|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{3}[4]{*}{}} & \multirow{2}[2]{*}{PEARL} & \multirow{2}[2]{*}{FLOSS} & \multicolumn{2}{c|}{\multirow{2}[2]{*}{\textit{QP-MF}}} & \multicolumn{2}{c|}{\multirow{2}[2]{*}{\textit{RCMSA}}} & \multicolumn{2}{c|}{\multirow{2}[2]{*}{\textit{CBS}}} \\ \multicolumn{1}{|c|}{} & & & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} \\ \cline{2-9} \multicolumn{1}{|c|}{} & \textit{Median CE} & \textit{Median CE} & \textit{Median CE} & \textit{Time} & \textit{Median CE} & \textit{Time} & \textit{Median CE} & \textit{Time} \\ \hline \textit{biscuitbookbox} & {8.11} & 11.58 & 5.02 & 4.78 & 7.72 & 0.56 & \textbf{0.00} & 0.95 \\ \hline \textit{boardgame} & {16.85} & 17.92 & 17.38 & 4.49 & 12.09 & 0.50 & \textbf{11.28} & 0.99 \\ \hline \textit{breadcartoychips} & {12.24} & 15.82 & 8.65 & 4.52 & 9.97 & 0.64 & \textbf{5.63} & 0.93 \\ \hline \textit{breadcubechips} & {9.57} & 11.74 & 3.04 & 4.47 & 9.78 & 0.54 & \textbf{0.87} & 0.85 \\ \hline \textit{breadtoycar} & {10.24} & 11.75 & 6.33 & 4.20 & 8.73 & 0.44 & \textbf{3.96} & 0.75 \\ \hline \textit{carchipscube} & {10.30} & 16.97 & 17.27 & 3.59 & 4.85 & 0.42 & \textbf{2.44} & 0.65 \\ \hline \textit{cubebreadtoychips} & {9.02} & 11.31 & 2.14 & 5.07 & 8.87 & 0.71 & \textbf{1.91} & 1.13 \\ \hline \textit{dinobooks} & {19.17} & 20.28 & 17.92 & 5.20 & 17.50 & 0.73 & \textbf{12.98} & 1.25 \\ \hline \textit{toycubecar} & {12.00} & 13.75 & 14.50 & 3.71 & \textbf{11.00} & 0.38 & 19.19 & 0.70 \\ \hline \end{tabular}% \label{tab:fundamentalRes}% \end{table*}% \subsection{3D-motion segmentation of rigid bodies} The objective of 3D motion segmentation is to identify multiple moving objects using point trajectories through a video sequence.
If the projections (to the image plane) of $N$ points tracked through $F$ frames are available, $ \left [ x_{f\alpha} \right ]_{\alpha =1 \dots N}^{f=1 \dots F}: x_{f\alpha} \in \mathcal{R}^2$, then \cite{Sugaya2004} has shown that the point trajectories $ P_\alpha = \left [ x_{1\alpha}, y_{1\alpha},x_{2\alpha}, \dots x_{F\alpha}, y_{F\alpha} \right ]^\top \in \mathcal{R}^{2F} $ that belong to a single rigidly moving object are contained within a subspace of rank $\leq 4$ under the affine camera projection model. Hence, the problem of 3D motion segmentation can be reduced to a subspace clustering problem. One of the characteristics of subspace segmentation is that the dimension of the subspaces may vary between two and four, depending on the nature of the motions. This means that the model we are estimating is not fixed. The proposed method, which was not specifically developed to solve this problem (unlike some competing techniques \cite{Elhamifar2013}), is not capable of identifying the dimension of a given motion's subspace and requires this information as an input. In our implementation, we have used the eigenvalues of the sampled data points to select a dimension $d$ of the model such that $2 \leq d \leq 4$. We utilized the commonly used ``checkerboard'' image sequences in the Hopkins 155 dataset \cite{Tron2007} to evaluate the CBS algorithm. This dataset contains trajectory information of 104 video sequences that are categorized into two main groups depending on the number of motions in each sequence (two or three motions). The clustering error (mean and median) and the computation time for CBS together with competing higher order affinity based methods are shown in \tabref{hopkins155}. The results show that CBS has achieved clustering accuracies comparable to those achieved by competing methods while being significantly faster than those methods (especially on the 3-motion sequences).
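The subspace estimation described above can be sketched as follows. The singular-value-gap heuristic here is one plausible reading of the eigenvalue-based dimension selection, not the exact rule of our MATLAB implementation, and the function names are illustrative:

```python
import numpy as np

def fit_motion_subspace(P, min_dim=2, max_dim=4):
    """Fit a rank-d subspace (min_dim <= d <= max_dim) to trajectory
    vectors stacked as columns of P, choosing d at the largest gap
    in the singular-value spectrum."""
    U, s, _ = np.linalg.svd(P, full_matrices=False)
    ratios = s[:-1] / np.maximum(s[1:], 1e-12)  # gap after each dimension
    d = int(np.argmax(ratios[min_dim - 1:max_dim]) + min_dim)
    return U[:, :d], d

def subspace_residuals(P, basis):
    """Orthogonal distance from each trajectory (column) to the subspace."""
    return np.linalg.norm(P - basis @ (basis.T @ P), axis=0)
```

These residuals play the role of $r_{\theta_l}(i)$ when the model $\theta_l$ is an affine-motion subspace rather than a fundamental matrix.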
For completeness we have also included the results for some energy minimization (PEARL \cite{Boykov2001}, QP-MF \cite{Jin2011}) and fit \& remove (RANSAC, HMSS \cite{Tennakoon2015}) based methods as reported in \cite{Jin2011}. To gain a better understanding of the methods that have good accuracy across all sequences, we have plotted the cumulative distributions of the errors per sequence in \figref{hopkinHist:subfig1} (two motion sequences) and \figref{hopkinHist:subfig2} (three motion sequences). For algorithms with random elements, the mean error across 100 runs is used. To provide a qualitative measure of performance, the final segmentation results of several sequences in the Hopkins 155 dataset, where CBS was both successful and unsuccessful, are shown in \figref{hopkinQuality}. The sequences contained in the Hopkins 155 dataset are outlier-free. In order to test robustness to outliers, we added synthetically generated outlier trajectories to each three-motion sequence of the Hopkins 155 dataset\footnote{The MATLAB code provided by http://www.vision.jhu.edu/data/hopkins155/ was used.}. The clustering results of the CBS method together with those obtained by the best performing method (SCC) are plotted in \figref{hopkinHist:subfig3}. The results show that CBS was able to achieve high accuracy in the presence of outliers on a higher number of sequences. It should be noted here that the SSC algorithm is not designed to handle outliers and therefore was not included in this analysis.
\begin{table*}[htbp] \centering \caption{Comparative performance in terms of accuracy and speed using Hopkins 155 checkerboard sequences.} \begin{tabular}{|r|r|r|r|r|r|r|r|r|} \hline \multicolumn{1}{|c|}{\multirow{2}[4]{*}{}} & \multicolumn{1}{c|}{RANSAC} & \multicolumn{1}{c|}{PEARL} & \multicolumn{1}{c|}{QP-MF} & \multicolumn{1}{c|}{HMSS} & \multicolumn{1}{c|}{SSC\textsuperscript{*}} & \multicolumn{1}{c|}{SCC} & \multicolumn{1}{c|}{HOSC} & \multicolumn{1}{c|}{CBS} \\ \cline{2-9} \multicolumn{1}{|c|}{} & \multicolumn{8}{c|}{\textit{2 Motion Sequences}} \\ \hline Mean & \multicolumn{1}{c|}{6.52} & \multicolumn{1}{c|}{5.28} & \multicolumn{1}{c|}{9.98} & \multicolumn{1}{c|}{3.98} & \multicolumn{1}{c|}{2.23} & \multicolumn{1}{c|}{\textbf{1.40}} & \multicolumn{1}{c|}{5.28} & \multicolumn{1}{c|}{1.60} \\ \hline Median & \multicolumn{1}{c|}{1.75} & \multicolumn{1}{c|}{1.83} & \multicolumn{1}{c|}{1.38} & \multicolumn{1}{c|}{0.00} & \multicolumn{1}{c|}{0.00} & \multicolumn{1}{c|}{\textbf{0.04}} & \multicolumn{1}{c|}{0.02} & \multicolumn{1}{c|}{0.10} \\ \hline Time & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{0.65} & \multicolumn{1}{c|}{0.66} & \multicolumn{1}{c|}{1.27} & \multicolumn{1}{c|}{\textbf{0.48}} \\ \hline & \multicolumn{8}{c|}{\textit{3 Motion Sequences}} \\ \hline Mean & \multicolumn{1}{c|}{25.78} & \multicolumn{1}{c|}{21.38} & \multicolumn{1}{c|}{15.61} & \multicolumn{1}{c|}{11.06} & \multicolumn{1}{c|}{5.77} & \multicolumn{1}{c|}{5.74} & \multicolumn{1}{c|}{7.38} & \multicolumn{1}{c|}{\textbf{4.98}} \\ \hline Median & \multicolumn{1}{c|}{26.01} & \multicolumn{1}{c|}{21.14} & \multicolumn{1}{c|}{8.82} & \multicolumn{1}{c|}{1.20} & \multicolumn{1}{c|}{\textbf{0.95}} & \multicolumn{1}{c|}{1.48} & \multicolumn{1}{c|}{1.53} & \multicolumn{1}{c|}{1.04} \\ \hline Time & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & 
\multicolumn{1}{c|}{1.47} & \multicolumn{1}{c|}{1.29} & \multicolumn{1}{c|}{2.00} & \multicolumn{1}{c|}{\textbf{0.55}} \\ \hline \multicolumn{9}{l}{\textsuperscript{*}\footnotesize{The results for SSC are generated using the faster ADMM \cite{Elhamifar2013} implementation provided}}\\ \multicolumn{9}{l}{\footnotesize{at http://vision.jhu.edu/ without any modifications. The SSC CSX implementation \cite{Elhamifar2009} }}\\ \multicolumn{9}{l}{\footnotesize{is more accurate but has significantly higher computational cost.}} \\ \end{tabular}% \label{tab:hopkins155}% \end{table*}% \begin{figure*}[!t] \centering \subfloat[Two motion sequences]{ \includegraphics[height=2.0in,width=2.25in] {Hopkings_2motionHist.pdf} \label{fig:hopkinHist:subfig1} } \subfloat[Three motion sequences]{ \includegraphics[height=2.0in,width=2.25in] {Hopkings_3motionHist.pdf} \label{fig:hopkinHist:subfig2} } \subfloat[Three motion with Outliers]{ \includegraphics[height=2.0in,width=2.25in] {Hopkings_3motionHist_out.pdf} \label{fig:hopkinHist:subfig3} } \caption{Cumulative distributions of the clustering errors (CE) per sequence of the Hopkins dataset. \figref{hopkinHist:subfig1} Two motion sequences, \figref{hopkinHist:subfig2} Three motion sequences and \figref{hopkinHist:subfig3} Three motion sequences with added synthetic outliers. 
} \label{fig:hopkinHist} \end{figure*} \begin{figure*}[!t] \centering \subfloat[three-cars]{ \includegraphics[height=1.5in,width=1.6in] {HresG_three-cars.pdf} \label{fig:hopkinQuality:subfig1} } \subfloat[articulated]{ \includegraphics[height=1.5in,width=1.6in] {HresG_articulated.pdf} \label{fig:hopkinQuality:subfig2} } \subfloat[cars5]{ \includegraphics[height=1.5in,width=1.6in] {HresG_cars5.pdf} \label{fig:hopkinQuality:subfig3} } \subfloat[cars2-07]{ \includegraphics[height=1.5in,width=1.6in] {HresG_cars2_07.pdf} \label{fig:hopkinQuality:subfig4} } \subfloat[1RT2RTCRT-B]{ \includegraphics[height=1.5in,width=1.6in] {HresB_1RT2RTCRT_B.pdf} \label{fig:hopkinQuality:subfig5} } \subfloat[2RT3RCR]{ \includegraphics[height=1.5in,width=1.6in] {HresB_2RT3RCR.pdf} \label{fig:hopkinQuality:subfig6} } \subfloat[cars9]{ \includegraphics[height=1.5in,width=1.6in] {HresB_cars9.pdf} \label{fig:hopkinQuality:subfig7} } \subfloat[cars2B]{ \includegraphics[height=1.5in,width=1.6in] {HresB_cars2B.pdf} \label{fig:hopkinQuality:subfig8} } \caption{Clustering results obtained using the proposed method on several example sequences from the Hopkins dataset. The top row shows cases where the proposed method was successful, whereas the bottom row shows cases where it failed to identify all the clusters correctly (best viewed in colour). } \label{fig:hopkinQuality} \end{figure*} \subsection{Long-term analysis of moving objects in video} The point trajectories of the ``Hopkins 155'' dataset used in the above analysis are hand-tuned (i.e., the point trajectories of each sequence have been cleaned by a human so that they contain neither gross outliers nor incomplete trajectories). Recently, the more realistic ``Berkeley Motion Segmentation Dataset'' (BMS-26) was introduced in \cite{Ochs2014}, \cite{Brox2010} for the long-term analysis of moving objects in video. 
This dataset consists of point trajectories obtained by running a state-of-the-art feature point tracker (large displacement optical flow \cite{Brox2011}) directly on 26 videos, without any further post-processing. These feature trajectories therefore contain noise and outliers and, most importantly, include incomplete trajectories. Incomplete trajectories are trajectories that do not run for the whole duration of the video: they can appear in any frame and disappear on or before the last frame, and are mainly caused by occlusion and disocclusion. The traditional approach of using two views to segment objects is susceptible to short-term variations (e.g., a human standing still for a short time can be merged with the background). Hence, Brox and Malik \cite{Brox2010} proposed long-term video analysis, where a similarity between two point trajectories was used to build a graph that was then segmented using spectral clustering. Such pairwise affinities only model translations and do not account for scaling and rotation. Ochs and Brox \cite{Ochs2012} used affinities defined on higher-order tuples, which results in a hyper-graph. Using a nonlinear projection, this hyper-graph was then converted to an ordinary graph, which was segmented using spectral clustering. In this analysis we use the approach proposed by Ochs and Brox \cite{Ochs2012}, where the motion of an object is modeled using a special similarity transform $\mathcal{T} \in$ SSim(2), with parameters scaling ($s$), rotation ($\alpha$) and translation ($v$). The distance from a trajectory ($c_i(t) \to c_i(t')$) to the model $\mathcal{T}_t$ is calculated using the $L_2$-distance $d_{\mathcal{T}_t, i} = \left \| \mathcal{T}_t c_i(t) - c_i(t') \right \|$. A motion hypothesis $\mathcal{T}_t$ at time $t$ can be obtained using two or more point trajectories that exist in the interval $[t,t']$. In our implementation we used edges of size $h=p+2=4$ to generate hypotheses. 
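Assuming the usual parameterization $\mathcal{T}(x) = s\,R(\alpha)\,x + v$, the residual of one trajectory step with respect to an SSim(2) hypothesis can be sketched as follows (function and argument names are ours):

```python
import numpy as np

def ssim2_residual(s, alpha, v, p_t, p_tp):
    """L2 distance || T p_t - p_tp || for the special similarity
    transform T(x) = s * R(alpha) x + v, applied to one trajectory
    step p_t (at time t) -> p_tp (at time t')."""
    R = np.array([[np.cos(alpha), -np.sin(alpha)],
                  [np.sin(alpha),  np.cos(alpha)]])
    pred = s * R @ np.asarray(p_t, dtype=float) + np.asarray(v, dtype=float)
    return float(np.linalg.norm(pred - np.asarray(p_tp, dtype=float)))
```

The four free parameters $(s, \alpha, v_x, v_y)$ explain why two coexisting trajectories suffice to instantiate a hypothesis.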
It should be noted that this distance measure is only valid if the trajectories used to generate the hypothesis and the trajectory to which the distance is calculated all coexist in time. Hence, a distance of infinity is assigned to all points that do not exist in the time interval $[t,t']$. This behavior complicates the weight update of the proposed method, as some trajectories can now be identified as outliers even though they belong to the same object. To overcome this, we uniformly sample small windows (of size 7 frames) and limit the weight updates to that window alone. Another important feature of this dataset is that most sequences have a large number of frames and data points (e.g., the sequence ``tennis'', even with 8 times down-scaling \cite{Ochs2012}, includes more than 450 frames and 40,000 data points). Storing a graph of that size is challenging, especially on a PC. Hence, in cases where the number of frames is large, we divide the video into a few large windows (e.g., 100 frames) and solve the problem in each large window independently. We then calculate the mutual distance between the structures found in different windows and cluster them using k-means to obtain the desired number of structures. The number of clusters is a parameter, selected such that it results in reasonable accuracy with the least over-segmentation. Once the clustering was obtained, it was evaluated using the method provided along with the dataset (man-made masks on specific frames of the videos). We compare our results with \cite{Ochs2012}, \cite{Purkait2014}, which are based on higher-order affinities. The results given in \tabref{longtermVid} show that our method achieves similar accuracies with significant improvements in computation time. The computation time is related to the number of hyper-edges used: OB used $N^2 \times (30+12)$ hyper-edges in their implementation, whereas HOSC used ${2N}/{5} + N$. 
In contrast, our method uses fewer hyper-edges ($N/10$), selected using the $k$-th order cost function. The results show that, if the edges are selected appropriately, similar accuracies can be achieved, and a lower number of edges means a lower computation time. We also note that, while the two competing methods \cite{Ochs2012},\cite{Purkait2014} use spatial contiguity in selecting the edges to construct the affinity graph, the proposed method has not used any such additional information. \begin{table*} \caption{Motion segmentation results on the Berkeley Motion Segmentation Dataset (BMS-26). } \begin{center} \footnotesize \begin{tabular}{ccccccc} \hline & Density & Overall error & Average error& Over-segmentation rate & Extracted objects & Total Time(s)\\ \hline OB & 1.03\% & 5.68\% & 24.74\% & 1.48 & 30 & 434545\\ HOSC & 1.03\% & 8.05\% & 27.84\% & 2.1 & 22 & 11966\\ CBS & 1.03\% & 7.80\% & 22.60\% & 2.08 & 22 & 7875\\ \hline \end{tabular} \end{center} \label{tab:longtermVid} \end{table*} \section{Discussion} \label{sec:Discussion} The proposed method requires, as an input, the value of $k$, which defines the minimal acceptable size of a structure in a given application. Any robust model fitting method needs to establish the minimal acceptable structure size (either explicitly or implicitly), or else it may produce a trivial solution. For example, if we were given a set of 2D points and asked to identify lines in the data without any additional constraint, there would be no basis to exclude the trivial solution, because any two points form a perfect line. Hence, in order to find a meaningful solution there must be some additional constraint, such as the minimal acceptable size of a structure. The proposed method estimates the scale of noise from the data, and the analysis of \cite{Hoseinnezhad2010} showed that estimating the noise scale from data requires at least around 20 data points to limit the effects of finite-sample bias. 
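For illustration, one common robust scale estimator is the normalized median absolute deviation; this is only a sketch of the idea, not necessarily the estimator analysed in \cite{Hoseinnezhad2010}:

```python
import numpy as np

def noise_scale(residuals):
    """Robust scale estimate via the normalized median absolute
    deviation (MAD).  The 1.4826 factor makes it consistent for
    Gaussian noise; with fewer than roughly 20 samples the
    finite-sample bias of such estimators becomes significant."""
    r = np.asarray(residuals, dtype=float)
    return 1.4826 * np.median(np.abs(r - np.median(r)))
```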
This leads to a lower bound on $k$ of around $20$. Similar to competing clustering based methods (e.g., SCC \cite{Chen2009}, SSC \cite{Elhamifar2013}), the proposed method also requires prior knowledge of the number of clusters. This is one of its limitations. The problem of identifying the number of structures and the scale of noise simultaneously remains an active research area; remaining outliers can always be seen as members of a model with a large noise value. Zelnik-Manor and Perona \cite{Zelnik-Manor2004} proposed a method to automatically estimate the number of clusters in a graph using eigenvector analysis. Since our focus in this paper is on efficiently generating the graph (not on how to cluster it), we have not included this in the evaluations. Some model fitting methods based on energy minimization \cite{Boykov2001} are devised to estimate the number of structures given the scale of noise. They achieve this by adding a model complexity term to the cost function that penalizes additional structures in a given solution. However, these methods require an additional parameter that balances the data fidelity cost with the model complexity (the number of structures in \cite{Purkait2014}). Our experiments on \cite{Purkait2014} showed that the output of these methods was heavily dependent on this parameter and required hand-tuning on each image (of \tabref{fundamentalRes}) to generate reliable results. The proposed method uses a data sub-sampling strategy based on a set of inclusion weights to bias the algorithm towards producing edges from different structures. These inclusion weights are iteratively calculated using the inlier/outlier dichotomy for each edge. However, if additional information about the problem, such as spatial contiguity, is available, it can be used to improve the sub-sampling. 
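A minimal sketch of such a spatially guided sub-sample (a plain nearest-neighbour scan in NumPy; a k-d tree would avoid the linear scan, and all names here are ours):

```python
import numpy as np

def spatial_subsample(points, n_s, rng):
    """Pick a random seed point and return the indices of the n_s
    points closest to it (the seed itself included), to be used as a
    spatially contiguous data sub-sample."""
    points = np.asarray(points, dtype=float)
    seed = rng.integers(len(points))
    d = np.linalg.norm(points - points[seed], axis=1)
    return np.argsort(d)[:n_s]
```

Because points of one structure tend to be spatially close, such a sub-sample raises the chance that all sampled points come from a single structure.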
For example, in two-view motion segmentation the Euclidean distance between points can be used to construct a k-d tree, which can then be used to perform the sampling directly (i.e., select an initial point randomly and include the $N_s$ points closest to that point as the data sub-sample). It is important to note that we have not used any such additional information in the performance evaluations of this paper. \section{Conclusion} \label{sec:conclusion} In this paper we proposed an efficient sampling method to obtain a highly accurate approximation of the full graph required to solve multi-structural model fitting problems in computer vision. The proposed method is based on the observation that the usefulness of a graph for segmentation improves as the distribution of the hypotheses (used to build the graph) approaches the actual parameter distribution for the given data. We approximate this parameter distribution using the $k$-th order statistics cost function, and the samples are generated using a greedy algorithm coupled with a data sub-sampling strategy. The performance of the algorithm, in terms of accuracy and computational efficiency, was evaluated on several instances of the multi-object motion segmentation problem and compared with state-of-the-art model fitting techniques. The comparisons show that the proposed method is both highly accurate and computationally efficient. \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi This research was partly supported under the Australian Research Council (ARC) Linkage Projects funding scheme. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
hep-ph/9505349
\section{Introduction} Double-beta ($\beta\beta$) decay is an extremely rare process in which two nuclear neutrons simultaneously convert into two protons and two electrons. Within the Standard Electroweak Model (SM) this decay occurs at second order in the charged-current weak interactions, and is accompanied by the emission of two antineutrinos, giving rise to a characteristic ($\bb_{2\nu}$) electron spectrum. Despite the extremely long half-lives involved --- typically $10^{20}$ yr or more --- heroic efforts\cite{expreview} over the past ten years have been rewarded by its experimental detection. Because it is such a rare process, $\beta\beta$ decay experiments also furnish a unique window onto whatever new physics may replace the SM at energies very much higher than those that are directly accessible in today's accelerators. This is because, in some circumstances, the effects of new interactions on these experiments can compete with those of run-of-the-mill SM decays. In order to be detectable in $\beta\beta$ experiments, new physics must have either or both of the following properties: \begin{enumerate} \item It must violate a selection rule --- {\it e.g.}\/: electron-number ($L_e$) conservation --- which is satisfied by the SM contribution; \item It must contain new particle states that are light enough to be produced in $\beta\beta$ decay. Since $Q \sim 1$ MeV is typical of the energy release in these decays, any such new particles must be much lighter than this scale. \end{enumerate} The purpose of this article is to outline the features of types of new physics which satisfy the second of these properties, producing $\beta\beta$ decays in which new light particles are emitted. The title refers to `scalar-emitting' decay modes because, with the occasional exception\cite{vectors}, this new light particle is usually taken to be spinless. 
A spinless particle can be particularly well-motivated if it is a (possibly pseudo-) Nambu-Goldstone boson (NGB), since in that case its small mass is naturally understood. In particular, the focus is on comparatively recent work\cite{CMM,icnapp,multimajoron} which shows that models of this sort generally have very different features --- including qualitatively different experimental signatures, such as electron spectra --- than are usually assumed in the analyses of current $\beta\beta$ experiments. \section{The Electron Energy Spectrum} The experimental quantity that is used to distinguish exotic decays from the ordinary SM events in $\beta\beta$ experiments is the shape of the decay rate as a function of the energies, $\varepsilon_1$ and $\varepsilon_2$, of the two emitted electrons. For instance, if no new light particles are produced, then $L_e$-violating new physics can be identified if it produces decays which are `neutrinoless' ($\bb_{0\nu}$) in the sense that only electrons emerge from the decaying nucleus. In this case the decay rate vanishes unless the sum of the two electron energies, $\varepsilon = \varepsilon_1 + \varepsilon_2$, equals the released energy, $Q$. Decays into electrons plus light scalars, on the other hand, predict a continuous decay distribution throughout the entire kinematically allowed interval, $2 m_e \le \varepsilon \le Q$, that is distinguishable from both the SM $\bb_{2\nu}$ distribution, and the neutrinoless $\bb_{0\nu}$ contribution at the endpoint, $\varepsilon = Q$. Indeed, the prediction of the scalar-mediated ($\bb_{\varphi}$) decay in models like the Gelmini-Roncadelli (GR) model of lepton-number breaking\cite{GR,GGN} helped to motivate the original $\beta\beta$ experiments. \subsection{The Spectral Index} The different spectra that are possible in the various decays can be characterized (except for $\bb_{0\nu}$) in terms of a single integer\cite{CMM}, or `spectral index', $n$. 
This is because the decay distribution quite generally can be written in the following form: \begin{equation} \label{spectrum} {d \Gamma \over d\varepsilon_1 d\varepsilon_2} = \Gamma_0 \; (Q - \varepsilon_1 - \varepsilon_2)^n \; \left[ p_1 \varepsilon_1 F(\varepsilon_1) \right] \; \left[ p_2 \varepsilon_2 F(\varepsilon_2) \right] , \end{equation} where $F(\varepsilon_i)$ is the Fermi function which describes the distortion of the decay distribution due to the electrostatic field of the nucleus, and $p_i = |\bfc{p}_i|$, for $i=1,2$, represent the magnitudes of the three-momenta of the electrons. In writing eq.~(\ref{spectrum}) we neglect the mass, $\mu$, of the light scalar (or scalars) that are emitted. Should $\mu$ not be negligible in comparison with $Q$, then the factor $(Q - \varepsilon_1 - \varepsilon_2)^n$ should be replaced by $[(Q - \varepsilon_1 - \varepsilon_2)^2 - \mu^2]^{n/2}$. Of course, if there are several types of possible decay, the total spectrum is a sum of terms like eq.~(\ref{spectrum}). A plot of the spectral shape which follows from eq.~(\ref{spectrum}) for various choices of $n$ is given in Fig.~1. The spectral shape is determined by $n$ because the normalization, $\Gamma_0$, does not depend, to a very good approximation, on the two electron energies, $\varepsilon_1$ and $\varepsilon_2$. This is because the most important quantity which sets the scale for contributions to $\Gamma_0$ is the typical momentum, $p_{\ss{N}} \sim 60$ MeV, of the decaying neutrons. Since the sizes of the light particle momenta are set by the net energy release, $Q \sim 1$ MeV, they are negligible in their contribution to $\Gamma_0$, and in this approximation the spectral shape becomes determined purely by phase space and the Fermi functions. For example, the phase space of the two emitted neutrinos (plus the electrons) of $\bb_{2\nu}$ in the SM implies the corresponding spectral index is $n_{\scrs SM} = 5$. 
Using the phase space of a single scalar instead of two neutrinos similarly gives the result $n_{\scrs GR} = 1$ for the spectrum predicted for $\bb_{\varphi}$ decay by the GR model. With one early exception\cite{twoscalar}, this same spectrum is also predicted by all of the alternative models for scalar-emitting decays which have been proposed ever since the original GR paper. Of course, Nature need not be limited to the two choices $n=5$ and $n=1$, and in general other values for $n$ might be expected to be possible signals for $\beta\beta$ experiments. It turns out that this naive expectation is true, and that models exist which predict both\cite{twoscalar,CMM,multimajoron} $n=3$ and\cite{multimajoron} $n=7$.\footnote{As will become clear later, the cases $n=5, 9,...$ can also be obtained by combining the features which produce the $n=1, 3$ and $7$ decays.} Furthermore, some of these models --- particularly those for which $n=3$ --- can produce observable signals in $\beta\beta$ experiments while preserving agreement with all other constraints, such as those arising from precision electroweak measurements on the $Z$ resonance. \vspace{0.08in} \begin{center} \let\picnaturalsize=N \ifx\nopictures Y\else{\ifx\epsfloaded Y\else\input epsf \fi \let\epsfloaded=Y \centerline{\ifx\picnaturalsize N\epsfxsize 2.3in\fi \epsfbox{multispectrum.ps}}}\fi $\hbox{}$ \vspace{1.7cm} \end{center} \begin{quote} {\footnotesize The $\beta\beta$ spectrum as a function of the two electrons' total kinetic energy for various choices of the `spectral index' $n$. $n=1$ corresponds to the dotted line, $n=3$ is the dashed line, $n=5$ is the solid line and $n=7$ is the dash-dotted line. 
All four curves have been arbitrarily assigned the same maximal value for purposes of comparison.} \end{quote} \subsection{How to Generate $n \ne 1$} The remainder of this article is intended to briefly outline the properties of models with each of the values $n=1,3$ and $7$. Before doing so in detail, however, it is worth identifying the two general features which go into the prediction of the index $n$ for any model\cite{icnapp}. These are: \begin{enumerate} \item {\sl Phase Space:} As was noted above, the phase space that is associated with a decay into two electrons and a scalar in $\bb_{\varphi}$ decay implies an index $n=1$, as for the GR model. The phase space of each additional scalar that appears in the final state similarly increases $n$ by 2, so that in the absence of other contributions to $n$, a two-scalar decay ($\bb_{\varphi\varphi}$) should have: $n=3$, a three-scalar decay: $n=5$, and so on. \item {\sl Nambu-Goldstone Bosons:} As is well known, if a scalar is a NGB, then its couplings all must vanish in the limit of zero energy and momentum. This generally implies a suppression of the contribution of such a scalar in any low-energy process, and in particular for $\beta\beta$ decay. Since this suppression implies that the emission amplitude for each NGB is proportional to a factor of the NGB momentum, every such particle in the $\beta\beta$ final state should also increase $n$ by 2, in addition to the requirements of phase space. \footnote{ See, however, section 2.3 below for a qualification to this statement.} \end{enumerate} It is immediately clear how to generate scalar-emitting $\beta\beta$ decays for which $n \ne 1$. If two scalars whose emission amplitude is not derivatively suppressed are produced in $\bb_{\varphi\varphi}$ decay, then the index for the decay should be $n=3$. If $N$ such scalars are emitted then $n=2N-1$. 
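These two rules can be combined into a single counting formula. Each emitted scalar contributes a factor $\omega \, d\omega$ from its invariant phase space, $d^3p/[2\omega (2\pi)^3]$, and each NGB contributes an additional factor of $\omega^2$ to the squared amplitude, so that schematically
\begin{equation}
{d \Gamma \over d\varepsilon_1 d\varepsilon_2} \propto \int \prod_{i=1}^{N_s} \omega_i^{1+2\delta_i} \, d\omega_i \; \delta\Bigl( \sum_{i=1}^{N_s} \omega_i - (Q - \varepsilon_1 - \varepsilon_2) \Bigr) \propto (Q - \varepsilon_1 - \varepsilon_2)^n ,
\end{equation}
with $\delta_i = 1$ if scalar $i$ is a derivatively coupled NGB and $\delta_i = 0$ otherwise. For $N_s$ scalars, of which $N_{\scrs NGB}$ are NGBs, this gives $n = 2N_s - 1 + 2N_{\scrs NGB}$, reproducing the cases $n=1$, $3$ and $7$ discussed here; the two neutrinos of $\bb_{2\nu}$, whose spinor normalizations supply one further power of $\omega_i$ each, similarly give $n=5$.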
Alternatively, if a single scalar is emitted in a $\bb_{\varphi}$ event, but this scalar has the derivatively-suppressed couplings of a NGB, then we again expect $n=3$. Similarly, if two such derivatively-suppressed scalars are emitted, then the index for the corresponding decay is $n=7$. These arguments are borne out by detailed calculations.\cite{multimajoron} \subsection{A Puzzle with the GR Model} A puzzle remains as to how the original GR model itself, and its many successors over the years, fit into the above counting scheme. After all, the light scalar which is emitted in $\bb_{\varphi}$ decay in the GR model {\em is} a NGB: It is the NGB for the spontaneous breaking of lepton number. Yet even so, the spectral index which is predicted for this decay is $n_{\scrs GR}=1$, rather than $n=3$ as the previous counting would have predicted. The resolution of this puzzle is instructive because it reveals an additional criterion for increasing the spectral index of a model. The resolution to this puzzle\cite{CMM} goes as follows. It is useful to think in terms of variables for which the NGB of the GR model is explicitly derivatively coupled. (These variables may be obtained from the standard ones by performing a field-dependent lepton-number rotation.) In these new variables the NGB couples directly to the electron, as well as to the various neutrinos of the model. The graphs which then dominate $\bb_{\varphi}$ decay at low energies turn out to be those for which the NGB of the final state is emitted from the external electron lines, rather than from the neutrino lines. But the emission of a massless boson with a vector coupling by an external electron line introduces a potential singularity into the amplitude at low momenta. Indeed, if the emitted particle had been a photon rather than a NGB, such graphs would really be infrared divergent. 
For NGB emission, however, the infrared singularity of these graphs is compensated by the derivative coupling of the massless scalar, giving a nonzero and finite result in the zero-momentum limit. In practice, then, in order to obtain a real suppression of $\bb_{\varphi}$ or $\bb_{\varphi\varphi}$ decay because of the NGB nature of the emitted scalar, it is also necessary to forbid the emission of the scalar from the external electrons in the decay. As is shown below, this is an automatic feature of many models once they are required to not produce experimentally unacceptable rates for $\bb_{0\nu}$ decay in a natural way. \section{Models: Preliminaries} The remainder of the article is devoted to outlining the properties of the various kinds of scalar-emitting decays which the $\beta\beta$ experiments can see. The present section sketches those features which are required as preliminaries to model building, and the properties of some explicit models are briefly discussed in the section immediately following. Some of the more general conclusions that can be drawn from a comparison of these models are finally summarized in section 5. \subsection{The Naturalness Problem} The purpose of this section is to argue that a serious fine-tuning problem exists for virtually {\em all} of the models that have been proposed to date which predict $n=1$ for the single-scalar $\bb_{\varphi}$ decays, or which predict $n=3$ for the two-scalar $\bb_{\varphi\varphi}$ decays. This fine tuning arises from a naturalness issue which has strong implications for any model which purports to predict a detectably large scalar-emitting $\beta\beta$ decay rate. This naturalness issue hinges on the following two questions which any such model must address: \begin{enumerate} \item {\sl Masses:} The first question is: How can an elementary scalar have a mass that is smaller than $Q \sim 1$ MeV, and so be light enough to permit its production in $\beta\beta$ decay? 
Being some five orders of magnitude lighter than the electroweak scale, such a small scalar mass introduces a fine-tuning, or `hierarchy', problem unless there is a mechanism which can protect it from virtual effects at the weak scale. \item $\bb_{0\nu}$ {\sl Decay:} The second question concerns the size that is predicted for $\bb_{0\nu}$ decay. A model which breaks $L_e$ generically also predicts a nonzero $\bb_{0\nu}$ decay rate, and this rate must not be larger than the current experimental limit. Since this rate is often related to the rate for the scalar-emitting $\beta\beta$ decay, which is by assumption detectable, it can be difficult to suppress the one reaction without suppressing both. This makes the resulting constraint quite powerful. \end{enumerate} There are several ways in which the various existing models handle these issues. Two mechanisms have been proposed which can permit such a naturally small scalar mass. Although (somewhat surprisingly) supersymmetry can be used to do the job\cite{oscar}, the resulting models are quite complicated and fairly contrived. The much simpler alternative is to simply follow the original workers\cite{GR,CMP}, and to require that the light scalar be the NGB for an exact or approximate global symmetry. The second question becomes important if the symmetry for which the scalar is the NGB is electron number itself, as is the case for virtually all proposed models which predict $n=1$ as the index for the scalar-emitting $\beta\beta$ decays. This includes essentially all models that have been considered until very recently, and is one of the main motivations for taking the models with $n\ne 1$ as important alternatives. The problem arises because the same vacuum expectation value ({\it v.e.v.}\/), $v$, which breaks $L_e$ typically also generates a Majorana mass for the electron neutrino whose size is $m_{\nu_e\nu_e} \sim g_{\rm eff} \, v$, where $g_{\rm eff}$ is the relevant scalar-neutrino Yukawa coupling constant. 
Such a Majorana mass gives rise to $\bb_{0\nu}$ decays which would have been seen if they exceeded the current experimental limit, which is $|m_{\nu_e\nu_e}| \roughlydown< 1$ eV. But this limit cannot be satisfied simply by making $g_{\rm eff}$ very small, since if the $\bb_{\varphi}$ decay itself is to be observably large, then $g_{\rm eff}$ cannot be made smaller than\cite{expgeff,kai,heid} $\sim 10^{-4}$. From this lower limit for $g_{\rm eff}$ we learn that the $L_e$-breaking {\it v.e.v.}\/\ must satisfy $v \roughlydown< 10$ keV. But this upper bound requires the appearance in the scalar potential of a scale that is more than 5 orders of magnitude smaller than the electroweak scale. As such, it poses precisely the same type of fine-tuning problem that would have occurred if it were the scalar mass itself that was to be fine-tuned. In either case we are led to a mass scale in the scalar potential that is at largest several hundred keV or so. It is tempting to argue that a hierarchy of order $v/M_{\scriptscriptstyle W} \sim 100 \, \hbox{keV}/100 \, \hbox{GeV} \sim 10^{-6}$ is not so small if the $\varphi - \nu$ coupling is really $g_{\rm eff} \sim 10^{-4}$, since in this case radiative corrections would be $\delta v \sim (g_{\rm eff}/4 \pi) \sim 10^{-5}$. Although seductive, this argument turns out to be wrong. The coupling, $g_{\rm eff}$, which appears in $\bb_{\varphi}$ decay is not simply a Yukawa coupling, $g$, of the Lagrangian, but is really an effective coupling in which a Yukawa coupling is multiplied by a small mixing angle: $g_{\rm eff} \sim g \sin^2\theta$. The mixing angle arises because the precision data at LEP preclude the direct coupling of any new light scalar to the electroweak sector. Scalar-emitting $\beta\beta$ decay must therefore be induced through the mixing of an electroweak eigenstate with another state which couples to the light scalar. 
This mixing can occur either in the neutrino sector, giving sterile-neutrino mediated decays, or, for example, in the scalar sector. In either case, the mixing angles typically cannot be too big without running into conflict with other experiments. In sterile-neutrino models, for example, weak interaction bounds imply that $\sin\theta$ cannot be larger than 10\% or so. Now comes the main point. Because of its suppression by powers of $\sin\theta$, $g_{\rm eff}$ can only be as large as $10^{-4}$ if the underlying Yukawa coupling is considerably bigger. For example, $\sin\theta \roughlydown< 10 \%$ implies $g \roughlydown> 10^{-2}$. But it is $g$ and not $g_{\rm eff}$ which controls the radiative corrections to the scalar potential, so $\delta v$ is of order $(g / 4 \pi)$ rather than $(g_{\rm eff}/4 \pi)$ as was argued above. As a result, the estimate using $g_{\rm eff}$ underestimates the correction to $v$ by several orders of magnitude. More realistic estimates\cite{CMM,oscar} show that these corrections can only be sufficiently small in restrictive corners of parameter space, for which new degrees of freedom (like sterile neutrinos) are quite light, and so are strongly constrained phenomenologically. To be sure, the $\bb_{0\nu}$ decay rate could be made naturally small if $L_e$ were conserved, and so if $v$ were to vanish. But in this case the corresponding NGB disappears and the problem with the scalar mass must be solved in some other way. These considerations point to a natural way out of this dilemma. Both the scalar mass and the $\bb_{0\nu}$ decay rate can be naturally zero if: (a) the scalar is a NGB, and (b) $L_e$ is {\em unbroken}. It follows that the light scalar must then be a NGB for some symmetry other than that responsible for electron-number conservation. 
These two conditions are sufficient in themselves to imply a spectral index for the associated scalar-emitting $\beta\beta$ processes which is respectively $n=3$ for $\bb_{\varphi}$, and $n=7$ for $\bb_{\varphi\varphi}$ decay. This is easily seen since these decays can only proceed if the emitted scalar itself carries nonzero electron number, which, together with $L_e$ and electric-charge conservation, precludes the possibility of scalar emission from the external electron lines. \subsection{$\Gamma_0$ and Nuclear Form Factors} Before turning to representative models for each type of decay, it is useful to pause to record expressions for the normalization of the various $\beta\beta$ decay rates in a way which facilitates the comparison of different models. These expressions determine the kinds of couplings that are necessary to obtain observable scalar-emitting $\beta\beta$ decay rates. We write the $\beta\beta$ decay rate in the form given in eq.~(\ref{spectrum}), with the normalization given by \begin{equation} \label{genericrate} \Gamma_0(\beta\beta_i) = {(G_{\scrs F}\cos\theta_{\scriptscriptstyle C})^4 \over 8(2\pi)^5} \; \left| \Sc{A}(\beta\beta_i) \right|^2 , \end{equation} where $G_{\scrs F}$ is the Fermi constant, $\theta_{\scriptscriptstyle C}$ the Cabibbo angle, and $\Sc{A}$ an amplitude which depends on the decay process being computed ($\beta\beta_i$, with $i = 2\nu$, $\varphi$ or $\varphi\varphi$), on the couplings of the model, and on some soon-to-be-identified nuclear matrix elements.
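For orientation, the numerical size of the normalization prefactor in eq.~(\ref{genericrate}) is easily evaluated. The following sketch assumes standard values for $G_{\scrs F}$ and $\cos\theta_{\scriptscriptstyle C}$, which are not quoted in the text:

```python
import math

# Numerical size of the normalization prefactor in the rate formula above.
# The values of G_F and cos(theta_C) are standard inputs assumed here.
G_F = 1.166e-5        # Fermi constant, GeV^-2
cos_thetaC = 0.974    # cosine of the Cabibbo angle

prefactor = (G_F * cos_thetaC)**4 / (8 * (2 * math.pi)**5)
print(f"(G_F cos_thetaC)^4 / (8 (2 pi)^5) = {prefactor:.2e} GeV^-8")
```

The remaining model dependence resides entirely in the amplitude $|\Sc{A}|^2$.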
$\Sc{A}$ can be written, for $0^+ \to 0^+$ transitions, as a convolution: \begin{equation} \label{convolution} \Sc{A} = \int {d^4\ell \over (2\pi)^4} \; L^{\mu\nu}(\ell) W_{\mu\nu}(\ell), \end{equation} where $L^{\mu\nu}$ depends on the detailed properties of the leptons in the model, and $W_{\mu\nu}(\ell)$ contains the nuclear matrix elements that are relevant to the decay: \begin{equation} \label{matrixelement} W_{\mu\nu}(\ell) \equiv (2 \pi)^3 \, \sqrt{ { E E'\over M M'}} \; \int d^4x \;\langle N'|T^* \left[ J_\mu(x) J_\nu(0) \right] | N\rangle \; e^{i\ell x} . \end{equation} Here $J_\mu = \bar{u} \gamma_\mu (1+\gamma_5) d$ is the weak charged current that causes transitions from neutrons to protons, and $|N \rangle$ and $|N' \rangle$ represent the initial and final $0^+$ nuclei in the decay. $E$ and $M$ are the energy and mass of the initial nucleus, $N$, while $E'$ and $M'$ are the corresponding properties for the final nucleus, $N'$. The symmetries of the problem ensure that the most general possible form for $W_{\mu\nu}$ is \cite{CMM}: \begin{eqnarray} \label{formfactors} W_{\mu\nu}(\ell) &=& w_1 \; \eta_{\mu\nu} + w_2 \; u_\mu u_\nu +w_3 \; \ell_\mu \ell_\nu + w_4 \; (\ell_\mu u_\nu + \ell_\nu u_\mu) \nonumber\\ &&+ w_5 \; (\ell_\mu u_\nu - \ell_\nu u_\mu) + iw_6 \; \epsilon_{\mu\nu\sigma\rho} u^\sigma \ell^\rho , \end{eqnarray} where $u_\mu$ is the four-velocity of the initial and final nucleus, and the six Lorentz-invariant form factors, $w_a = w_a(u\cdot \ell, \ell^2), a = 1,\dots,6$, are functions of the two independent invariants that can be constructed from $\ell_\mu$ and $u_\mu$. These form factors can be related\cite{CMM} to the nuclear matrix elements as they are quoted in the literature\cite{DoiTomoda,Haxton,Rosen,KK}. 
For example, in many situations (such as $\bb_{2\nu}$, $\bb_{0\nu}$ and some kinds of $\bb_{\varphi}$ and $\bb_{\varphi\varphi}$ decays) $L^{\mu\nu} \propto \eta^{\mu\nu}$ and so the decay rate only depends on the combination ${W^\mu}_\mu$. This is simply the difference between the Fermi and Gamow-Teller form factors, which are defined by $w_{\scrs F} \equiv W_{00}$ and $w_{\scrs GT} \equiv \sum_{k=1}^3 W_{kk}$. These form factors, when computed using the closure and nonrelativistic impulse approximations in a model of the nucleus, become \begin{eqnarray} \label{connection} w_{\scrs F} &=& {2i \epsilon g_{\scrs V}^2\over \ell_0^2 - \epsilon^2 + i\varepsilon} \; \langle\!\langle N'|\sum_{nm}e^{-i\bfc{l}\cdot{\bf r}_{\!nm}}\tau^+_n \tau^+_m |N \rangle\!\rangle;\nonumber\\ w_{\scrs GT} &= &{2i \epsilon g_{\scrs A}^2\over \ell_0^2 - \epsilon^2 + i\varepsilon} \; \langle\!\langle N'|\sum_{nm} e^{-i\bfc{l} \cdot {\bf r}_{\!nm}} \tau^+_n\tau^+_m \; \vec\sigma_n \! \cdot\! \vec\sigma_m |N \rangle\!\rangle . \end{eqnarray} Here $\epsilon \equiv \ol{E} - M$ is the average excitation energy of the intermediate nuclear state, ${\bf r}_{\!nm}$ is the separation in position between the two decaying nucleons, $\bfc{l}$ is the spatial component of $\ell^\mu$ in the nuclear rest frame, and $g_{\scrs V} \roughlyup- 1$ and $g_{\scrs A} \roughlyup- 1.25$ are the vector and axial couplings of the nucleon to the weak currents. Finally, $\langle\!\langle N'| \Sc{O} |N \rangle\!\rangle$ represents a reduced matrix element from which the nuclear centre-of-mass motion has been extracted, and $\tau^+_n$ (or $\vec\sigma_n$) are the raising operators for nuclear isospin (or the nuclear spin operators) acting on the $n$'th nucleon. \section{Models: Specific Examples} We now turn to the details of representative models for which $n=1,3$ and 7. 
Although much of what follows applies more generally, we choose, for simplicity and for the purposes of comparison, models in which the corresponding $\beta\beta$ decay proceeds due to the mixing of the electron neutrino, $\nu_e$, with a collection of sterile neutrinos, $N_i$. The reader who is not interested in the specific properties of these models can skip to the next, concluding, section in which their general features are compared. \subsection{The Case $n=1$} The simplest models to build are those for which the spectral index takes its traditional value, $n=1$. Theories of this sort, when constructed using sterile neutrinos, are similar to the original singlet-majoron models\cite{CMP}. They, as well as other variants with $n=1$ which do not rely on sterile neutrinos to produce $\bb_{\varphi}$ decays\cite{BSV}, have recently received renewed attention. Suppose the neutrino mass eigenstates are denoted generically by $\nu_i$, and their overlap with the electron-neutrino flavour eigenstate is called $V_{ei}$. Suppose also that $\varphi$ represents the light scalar of the model. Then, taking the general Yukawa coupling lagrangian between these particles to be \begin{equation} \label{yukawa} \Sc{L}_{\varphi\nu\nu} = - \frac{1}{2}\; \bar{\nu}_i (a_{ij} \gamma_{\scrs L} + b_{ij} \gamma_\rht) \; \nu_j \; \varphi^* + c.c. \, , \end{equation} and evaluating the Feynman graph of Fig. 2, gives an amplitude,\cite{CMM} $\Sc{A}$, of the form of eq.~(\ref{convolution}), with\footnote{Our conventions here use $\eta^{\mu\nu} = \hbox{diag}(-+++)$.} \begin{equation} \label{leptonterm} L^{\mu\nu}(\ell) = 4\sqrt{2} \sum_{ij} V_{ei} V_{ej} \left[ { (a_{ij} m_i m_j - \ell^2 b_{ij}) \; \eta^{\mu\nu}\over (\ell^2 + m_i^2 - i\varepsilon ) (\ell^2 + m_j^2 - i\varepsilon) } \right].
\end{equation} \vspace{0.08in} \begin{center} \input epsf \epsfxsize=2.3in \centerline{\epsfbox{onescalar.ps}} \medskip {\bf Figure 2}\\ \medskip \end{center} \begin{quote} {\footnotesize The Feynman graph which is responsible for $\bb_{\varphi}$ decay in models for which $\beta\beta$ decay can arise because of sterile-neutrino exchange. } \end{quote} The remaining question is whether an observably large $\bb_{\varphi}$ decay rate can be obtained from these expressions without coming into conflict with any existing phenomenological bounds. This can be addressed only by constructing an explicit model in which all of the relevant observables can be computed as functions of a common set of underlying parameters. We therefore next present a representative model of this kind of theory. An explicit sterile-neutrino model which produces $\bb_{\varphi}$ decays with $n=1$ is simple to construct\cite{CMM}. Add to the SM two electroweak-singlet, left-handed neutrinos $s_{\scriptscriptstyle \pm}$, which carry $\pm 1$ unit of an unbroken global lepton number symmetry. Add also a singlet scalar field with lepton number $-2$. The most general renormalizable couplings of these new particles, consistent with lepton number conservation and the SM gauge symmetries, are \begin{equation} \label{ommrenmodel} \Sc{L} = - \lambda \bar{L}H \gamma_\rht s_- - M \bar{s}_+ \gamma_\rht s_- -\frac{g_+}{2} \bar{s}_+ \gamma_\rht s_+ \varphi - \frac{g_- }{2} \bar{s}_- \gamma_\rht s_- \varphi^* + c.c. \end{equation} Here $L$ and $H$ respectively denote the usual SM lepton- and Higgs doublets, and $\gamma_{\scrs L}$ and $\gamma_\rht$ denote the usual projections onto left- and right-handed spinors.
Majorana spinors are used throughout to represent the neutrinos, so the spinor conjugate used above employs the charge-conjugation matrix, $C$, according to $\ol{\nu}_i = \nu_i^\ss{T} \, C^{-1}$. As discussed previously, the experimental absence of $\bb_{0\nu}$ decay requires the expectation value of the scalar field, $\Avg{\varphi}$, not to be larger than $\sim 100$ keV. Rather than fine-tuning such a small scale, it is simpler to take $\Avg{\varphi} = 0$, so that lepton number is not spontaneously broken. Then the fine-tuning simply becomes the requirement that the scalar mass be $\roughlydown< 10$ keV. In this case, the spectrum contains three massless neutrinos, $\nu_e'$, $\nu_\mu$ and $\nu_\tau$, together with a massive Dirac neutrino, $\nu_h$, which enters the electron-type charged-current weak interactions through mixing. The model can have a detectable\cite{CMM} $\bb_{\varphi}$ decay rate, provided that the masses and mixings are chosen judiciously. Because the scale of the factor $W_{\mu\nu}$ in eq.~(\ref{convolution}) is set by the nucleon momentum, $p_{\scriptscriptstyle N} \sim 60$ MeV, the $\beta\beta$ decay rate tends to become suppressed if all neutrino masses are much larger than, or much smaller than, this scale. As a result, a successful choice of parameters, which can also avoid bounds from other laboratory experiments, puts the heavy neutrino mass eigenstate at several hundred MeV, with a fairly large (somewhat less than 10\%) mixing with $\nu_e$. More of the qualitative features and problems that arise with these models are outlined in section 5, below. Those interested in the details of the analysis are referred to the recent literature.\cite{CMM,BSV} \subsection{The Case $n=3$: Two-scalar Decays} There are two possibilities for producing $n=3$ decays: two light scalars could be emitted without a NGB suppression of the emission amplitude, or one light NGB-suppressed scalar could be emitted.
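The preference for sterile-neutrino masses near $p_{\scriptscriptstyle N}$ can be illustrated with a toy scaling; the single-propagator factor $m \, p_{\scriptscriptstyle N}/(p_{\scriptscriptstyle N}^2 + m^2)$ used below is an assumed caricature of the exchange amplitude, not the full nuclear matrix element:

```python
# Toy illustration of why sterile-neutrino masses near the nucleon momentum
# scale are optimal.  The scaling m*p/(p**2 + m**2) is an assumed caricature
# of a single neutrino propagator at exchange momentum l ~ p_N; it falls as
# m/p_N for light states and as p_N/m for heavy ones, as described in the text.
p_N = 60.0  # MeV, typical nucleon momentum scale inside the nucleus

def propagator_factor(m):
    return m * p_N / (p_N**2 + m**2)

for m in (0.1, 1.0, 60.0, 1000.0, 100000.0):  # masses in MeV
    print(f"m = {m:8.1f} MeV -> relative factor {propagator_factor(m):.4f}")
```

The factor is maximized at $m = p_{\scriptscriptstyle N}$ and falls off in both directions, which is why viable models cluster in the 100 MeV--1 GeV range.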
An example of the two-scalar decay is given here, even though the two-scalar emission models involve the same fine-tuning problems as do the $n=1$ models just described. The alternative models, for which $n=3$ arises in single-scalar decays, are the subject of the next subsection. Consider in this case a theory of two types of left-handed neutrinos, $\nu_i$ and $N_a$, which respectively carry lepton number $L_e(\nu_i) = +1$ and $L_e(N_a) = 0$, and which are coupled to the light scalar, $\varphi$, which has lepton number $L_e(\varphi) = +1$. The most general renormalizable and $L_e$-conserving Yukawa couplings involving these fields are: \begin{equation} \label{yukawaints} \Sc{L}_{\rm yuk} = - \, \ol{\nu}_i \left( A_{ia} \gamma_{\scrs L} + B_{ia} \gamma_\rht \right) N_a \; \varphi + {h.c.} , \end{equation} where $A_{ia}$ and $B_{ia}$ represent arbitrary Yukawa-coupling matrices. These neutrinos are endowed with a set of generic lepton-number-conserving masses, $m_{\nu_i}$ and $m_{\ss{N}_a}$. For the $L_e = 0$ neutrinos, $N_a$, this is accomplished by simply introducing a general Majorana-mass term. For the $L_e = 1$ neutrinos, however, an additional collection of $L_e = 1$ right-handed neutrinos is required, with which $L_e$-invariant Dirac masses may be formed. This type of theory produces $\bb_{\varphi\varphi}$ decay due to the Feynman graph of Fig.~3. Two scalars must be emitted in this decay because of $L_e$ conservation.
Evaluating this graph gives a result of the form of eqs.~(\ref{spectrum}), (\ref{genericrate}) and (\ref{convolution}), with\cite{multimajoron}: \begin{equation} \label{mmatrixelement} L^{\mu\nu}(\ell) = \left( { 2 \over 3 \pi^2} \right)^{\frac{1}{2}} \sum_{ija} \left[ {V_{e\nu_i} V_{e\nu_j} \Sc{N}_{ija} \; \eta^{\mu\nu} \over (\ell^2 + m_{\nu_i}^2 -i \epsilon) \, (\ell^2 + m_{\nu_j}^2 -i \epsilon) \, (\ell^2 + m_{\ss{N}_a}^2 -i \epsilon) } \right] , \end{equation} where the factor, $\Sc{N}_{ija}$, denotes: \begin{equation} \label{numerator} \Sc{N}_{ija} \equiv (-\ell^2) \Bigl[ A_{ia} B_{ja} m_{\nu_i} + A_{ja} B_{ia} m_{\nu_j} + B_{ia} B_{ja} m_{\ss{N}_a} \Bigr] + A_{ia} A_{ja} m_{\nu_i} m_{\nu_j} m_{\ss{N}_a} . \end{equation} \vspace{0.08in} \begin{center} \input epsf \epsfxsize=2.3in \centerline{\epsfbox{twoscalar.ps}} \medskip {\bf Figure 3}\\ \medskip \end{center} \begin{quote} {\footnotesize The Feynman graph which is responsible for $\bb_{\varphi\varphi}$ decay in models for which $\beta\beta$ decay arises because of sterile-neutrino exchange. } \end{quote} It is just possible to obtain a detectably large decay rate in this kind of model without running into conflict with other laboratory bounds. Because of the comparatively soft electron spectrum (since $n=3$), the total integrated decay rate tends to be suppressed compared to $n=1$ models by an additional two powers of the small ratio $Q/p_{\scriptscriptstyle N} \sim 10^{-1}$. The additional phase space also introduces suppression through dimensionless factors of $1/2\pi$. As a result the total rate tends to be much smaller than in a comparable model for which $n=1$.
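A crude numerical estimate of the combined suppression runs as follows; the single power of $1/(2\pi)^2$ for the extra particle's phase space is schematic and assumed purely for illustration:

```python
import math

# Crude estimate of the extra suppression of n=3 two-scalar decays relative
# to n=1 single-scalar decays.  Q/p_N ~ 0.1 is the ratio quoted in the text;
# one factor of 1/(2 pi)^2 for the extra particle's phase space is assumed
# schematically for illustration only.
Q_over_pN = 0.1
endpoint_factor = Q_over_pN**2           # two extra powers of Q/p_N
phase_space_factor = 1 / (2 * math.pi)**2
suppression = endpoint_factor * phase_space_factor
print(f"combined extra suppression ~ {suppression:.1e}")
```

The result, a few times $10^{-4}$, shows why these rates sit several orders of magnitude below comparable $n=1$ predictions.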
This makes it more difficult to obtain observable decays, and a sufficiently large rate generally requires sterile neutrinos to lie near the 100 MeV scale which optimizes the $\beta\beta$ decay rate. \subsection{The Case $n=3$: Single-scalar Decays} We next turn to models which predict only single-scalar $\bb_{\varphi}$, but for which the spectrum nevertheless has $n=3$. This is achieved by constructing the model so that the emitted scalar is a NGB which carries two units of conserved electron number. The rate for $\bb_{\varphi}$ decay is given by evaluating the result for the Feynman graph of Fig.~2 using the generic Yukawa coupling of eq.~(\ref{yukawa}). By virtue of the light scalar being a NGB carrying unbroken electron number, one finds that the leading result in powers of the lepton momenta, as given by eq.~(\ref{leptonterm}), vanishes identically. It is therefore necessary to work to next order in these momenta, which raises the resulting spectral index from $n=1$ to $n=3$. The resulting amplitude then satisfies $L^{\mu\nu} = - L^{\nu\mu}$, with\cite{CMM}: \begin{eqnarray} \label{bbcmamp} L^{0m} &=& - 4\sqrt{2} \sum_{ij} V_{ei} V_{ej} b_{ij} \left[ { \ell^m \over (\ell^2 + m_i^2 - i\varepsilon ) (\ell^2 + m_j^2 - i\varepsilon)} \right] , \nonumber\\ L^{mn} &=& - 4\sqrt{2} \sum_{ij} V_{ei} V_{ej} b_{ij} \left[ {\epsilon^{mnr} \ell_r \over (\ell^2 + m_i^2 - i\varepsilon ) (\ell^2 + m_j^2 - i\varepsilon)} \right] . \end{eqnarray} An example of a renormalizable model for which this is the decay formula is given\cite{CMM} by requiring the theory to have a nonabelian flavour symmetry, $G = SU_{\scriptscriptstyle F}(2) \times U_{\scriptscriptstyle L'}(1)$, which is broken down to unbroken electron number. To implement this symmetry-breaking pattern, introduce an electroweak-singlet scalar field, $\Phi_i$, which transforms under $G$ like $({\bf 2}, -1)$.
Also introduce the electroweak-singlet left-handed neutrino fields, $N \sim ({\bf 2}, 0)$ and $s_{\scriptscriptstyle\pm} \sim ({\bf 1}, \pm 1)$. The most general renormalizable lagrangian involving the new fields which respects all of the symmetries is \begin{equation} \label{cmmlagrangian} \Sc{L} = - \lambda \bar{L}H\gamma_\rht s_- - M \bar s_+ \gamma_\rht s_- - g_+ \, (\bar{N} \gamma_{\scrs L} s_+) \; \Phi - g_- \, (\bar{N} \gamma_{\scrs L} s_-) \; \tw{\Phi} + c.c. \end{equation} $\tw{\Phi} = i \tau_2 \Phi^*$ represents the conjugate $SU_{\scriptscriptstyle F}(2)$ doublet, where $\tau_2$ is the second Pauli matrix acting on flavour indices. The scalar potential is then chosen to ensure that $\Phi$ gets a VEV, which we assume to take the form $\Avg{ \Phi} = { 0 \choose v}$. The soft $n=3$ spectrum suppresses the integrated decay rate, giving a detectable rate only when all couplings and masses are optimal. This once again places the sterile neutrinos in the mass range of several hundred MeV, with significant mixing with the electron neutrino. \subsection{The Case $n=7$} As a final example, consider the case with spectral index $n=7$. This decay is produced by ensuring that the light scalar is a NGB, and that it carries only one unit of electron number so that the decay rate is derivatively suppressed, and two scalars must be emitted in the decay. The $\bb_{\varphi\varphi}$ decay rate is found by evaluating the Feynman graph of Fig.~3. Once again, the symmetry properties make it necessary to work to higher order in the lepton momenta than was necessary for the $n=3$ $\bb_{\varphi\varphi}$ decay. It turns out that the expression for the resulting amplitude is quite cumbersome when given in terms of the couplings of eq.~(\ref{yukawaints}), and so we use instead variables for which the derivative-coupling nature of the Goldstone bosons is manifest from the outset.
The trilinear coupling to neutrinos of a Goldstone boson carrying $L_e = 1$ then becomes: \begin{equation} \label{derivyukawa} \Sc{L}_{\rm gb} = -i \; \ol{\nu}_i \gamma^\mu ( X_{ia} \gamma_{\scrs L} + Y_{ia} \gamma_\rht) N_a \; \partial_\mu \varphi + {h.c.}, \end{equation} where the coefficients $X_{ia}$ and $Y_{ia}$ are coupling matrices that can be computed in any specific model.\cite{CMM,multimajoron} Using these interactions to evaluate the amplitude given by the diagram of Fig.~3 gives, for the special case\cite{multimajoron} $Y_{ia} = 0$: \begin{equation} \label{newmatrixelement} L^{\mu\nu}(\ell) = \left({ 4 \over 105 \pi^2} \right)^{\frac{1}{2}} \; \sum_{ija} \; \left[ {V_{e\nu_i} V_{e\nu_j} \tw{\Sc{N}}_{ija}\eta^{\mu\nu} \over (\ell^2 + m_{\nu_i}^2 -i \epsilon) \, (\ell^2 + m_{\nu_j}^2 -i \epsilon) \, (\ell^2 + m_{\ss{N}_a}^2 -i \epsilon) } \right] \; . \end{equation} $\tw{\Sc{N}}_{ija}$ represents the following expression: \begin{equation} \label{newnumerator} \tw{\Sc{N}}_{ija} \equiv (-\ell^2) X_{ia} X_{ja} m_{\ss{N}_a} . \end{equation} It is fairly unlikely that this decay will be found in $\beta\beta$ experiments in the foreseeable future, even if Nature should actually contain $n=7$ $\bb_{\varphi\varphi}$ decays. The very high spectral index, $n=7$, raises two problems for detecting this kind of decay. First, since the electron spectrum is {\em softer} than that of the SM $\bb_{2\nu}$ decay, the decay electrons tend to come out with low energies, and so would be difficult to distinguish from background. Second, the soft spectrum greatly suppresses the integrated total decay rate, making it very difficult to get a detectable decay in a model which also satisfies all other laboratory bounds.
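The softening of the spectrum with growing $n$ can be quantified with a short numerical sketch; only the $(Q-K)^n$ factor is kept here, the smooth lepton phase-space prefactor being dropped purely for simplicity:

```python
# Numerical sketch of how the spectral index n softens the summed electron
# energy spectrum.  K is the summed electron kinetic energy in units of Q;
# only the (Q - K)^n factor is kept (the smooth lepton phase-space prefactor
# is dropped, an assumption made purely for simplicity).
def spectrum_moments(n, steps=100_000):
    dk = 1.0 / steps
    ks = [(i + 0.5) * dk for i in range(steps)]
    w = [(1.0 - k)**n for k in ks]
    norm = sum(w)
    mean_k = sum(k * wi for k, wi in zip(ks, w)) / norm
    hard_fraction = sum(wi for k, wi in zip(ks, w) if k > 0.5) / norm
    return mean_k, hard_fraction

for n in (1, 3, 7):
    mean_k, hard = spectrum_moments(n)
    print(f"n={n}: <K>/Q = {mean_k:.3f}, fraction with K > Q/2 = {hard:.4f}")
```

In this simplified shape the mean summed energy is $Q/(n+2)$ and the fraction of events in the hard half of the spectrum is $(1/2)^{n+1}$, so an $n=7$ signal is buried at low energies where the background is largest.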
Even though models which produce this spectrum have been constructed\cite{multimajoron}, none have been found which are both phenomenologically viable and which predict a detectable $\bb_{\varphi\varphi}$ decay rate. \section{Models: General Features and Conclusions} A comparison of models in these four classes leads to a number of reasonably robust conclusions. \begin{enumerate} \item The models for which $n=1$ and those which predict $n=3$ $\bb_{\varphi\varphi}$ decays illustrate in detail the general fine-tuning problem that was argued in section 2 to be endemic to these kinds of $\bb_{\varphi}$ decay. The requirements that a light scalar exist, and that $\bb_{0\nu}$ not be predicted at an unacceptably large rate, together force a fine-tuning of the scalar potential to ensure either an extremely small scalar mass, or an equally small $L_e$-breaking {\it v.e.v.}\/. Although supersymmetric models along these lines have been constructed\cite{oscar} which are technically natural, they are also quite contrived and complicated. \item Models with softer electron spectra give smaller integrated decay rates, since the total decay rate becomes suppressed by higher powers of the small endpoint energy $Q$. This implies that theories predicting $n=1$ decays have an easier time producing observably large decay rates than do models for which $n$ is larger. As a result, these models tend to offer the largest latitude to accommodate other laboratory limits and phenomenological constraints. To produce acceptably large scalar-emitting $\beta\beta$ decay rates, models for which $n=3$ must typically have all of the relevant dimensionless couplings be $O(1)$, have the mixings of the relevant sterile neutrino be as large as are phenomenologically allowed ($\roughlydown< 10\%$), and have the participating new neutrino states have masses in the optimal mass range of a few hundred MeV.
This mass range is preferred since it is comparable to the typical momenta, $p_{\scriptscriptstyle N}$, of the decaying nucleons within the nucleus, and so does not lead to suppressions of the form $M/p_{\scriptscriptstyle N}$ or $p_{\scriptscriptstyle N}/M$. The resulting models therefore can work, but do not leave a great deal of freedom to accommodate other limits. Of the $n=3$ models, those which emit only a single scalar tend to predict larger decay rates than the two-scalar decays because they are not suppressed by additional small phase-space factors. Models for which $n=7$ are the worst case, and have decay rates that are sufficiently suppressed by powers of $Q/p_{\scriptscriptstyle N}$ that they are unlikely to be experimentally detectable for the foreseeable future. \item Models with electron spectra as soft as the $n=7$ decays are also harder to detect for another reason, besides the size of their total decay rate. Most of the background in $\beta\beta$ decay experiments occurs for electron energies which are comparatively soft, and so it is the soft electrons which are hard to dig out of this background. This makes it all the more unlikely that these decays will turn up in the experimental data. \item Because the contribution of sterile neutrinos to scalar-emitting $\beta\beta$ decays becomes suppressed when their masses are much larger or much smaller than around 100 MeV, and since couplings and mixings tend to have to be large in order to produce an observable decay, the experimental discovery of $\bb_{\varphi}$ or $\bb_{\varphi\varphi}$ decay would strongly suggest the existence of new particles in this mass range. The signals for such particles could be: anomalous bumps in $K\to e \nu$ or $\pi \to e \nu$ decays; violations of weak universality in leptonic $\pi$ decays; possible Zenlike monojet events at LEP, {\it etc.}\/.
Signals in beam dump experiments would not necessarily be expected, since the heavy neutrinos would dominantly decay invisibly into ordinary neutrinos and the light scalars. \item All models which produce scalar-emitting $\beta\beta$ decays necessarily have at least one very light scalar which is significantly coupled to the electron neutrino. As a result, all of these models generically run into trouble with Big Bang nucleosynthesis. The scalars tend not to decouple from the ordinary neutrinos, and so tend to violate the constraints on the gravitating degrees of freedom which can exist at the epoch around $T \sim 1$ MeV. This bound can be evaded for comparatively special values of the masses and couplings of the particles involved.\cite{loophole,CMM,multimajoron} The loophole arises since there can be an interval during which the neutrinos are no longer in chemical equilibrium with the protons and neutrons, but where the $n/p$ ratio has still not frozen out. During this interval the annihilation of sterile neutrinos can raise the electron neutrino abundance, which in turn acts to deplete the neutron abundance. A sterile neutrino which annihilates in this way effectively counts as a {\em negative} number of neutrino species, and so can counteract the positive contribution of the light scalars. Of course, the details of the loophole are less interesting than the simple fact that the loophole exists. Although nucleosynthesis considerations generically disfavour models with additional light scalars, these bounds would certainly have to be re-evaluated should a scalar-emitting $\beta\beta$ decay mode be observed. \item The nuclear matrix elements that are required to evaluate the $\beta\beta$ decay rates in essentially all of the models are the same as those which have long been studied\cite{Rosen,Haxton,KK} within the context of $\bb_{0\nu}$ and $\bb_{\varphi}$ decay in the GR model.
This can be seen since $L^{\mu\nu} \propto \eta^{\mu\nu}$, which implies that the decay rate depends only on the nuclear form factor combination: ${W^\mu}_\mu = w_{\scrs GT} - w_{\scrs F}$. This is precisely the same combination as appears in $\bb_{2\nu}$ and $\bb_{0\nu}$ decays, and so it is well studied in the literature. The exceptions to this rule are those models which predict $n=3$ single-scalar $\bb_{\varphi}$ decay, which instead depend on the antisymmetric part of $W_{\mu\nu}$. The corresponding matrix elements are less well studied, with a corresponding greater uncertainty in the predicted decay rates. (Explicit formulae for these matrix elements in terms of nucleon operators are given in the literature\cite{CMM}.) \item All of the models given here predict the same decay distribution as a function of the opening angle of the two electrons, so this observable cannot be used to distinguish one from another (as it can be used to distinguish decays mediated through right-handed currents). The only exceptions to this statement among the models considered are those for which the final electrons are emitted from an $L_e = 2$ scalar having electric charge $Q=-2$. Unfortunately the bounds on the masses of such scalars make the resulting $\beta\beta$ decay rate undetectable\cite{multimajoron}. \end{enumerate} To summarize, $\beta\beta$ experiments can legitimately expect to see scalar-emitting decays even though the original models which proposed this decay mode have since been ruled out by constraints such as those coming from LEP. If such decays are seen, they are most likely to point to new physics with properties which are quite different from what would have been expected from these traditional models. Although decay spectra with $n=1,3,5,7,\ldots$ are all possible, the basic combinations are $n=1,3$ and 7. The $n=7$ spectrum is very unlikely to be seen, however, due to the strong suppression of this decay rate by powers of the small energy release, $Q$.
Two-scalar $n=3$ decays also tend to be suppressed compared to single-scalar decays having the same spectrum due to the presence of additional small phase-space factors. Furthermore, naturalness considerations (driven by the strong bound on the occurrence of the $\bb_{0\nu}$ decay mode) disfavour those models which predict $n=1$ $\bb_{\varphi}$ decays, or $n=3$ $\bb_{\varphi\varphi}$ decays. Thus, from a theoretical perspective the $n=3$ $\bb_{\varphi}$ decays are the most natural. If such decays are seen we may also expect some new developments in nucleosynthesis and in precision experiments such as those which constrain lepton universality. May we live to see such exciting times! \section{Acknowledgements} I would like to thank the workshop's organizers for providing such a splendid setting for the workshop, and for their kind invitation to speak. It is also a pleasure to acknowledge my collaborators in the research described here: Peter Bamert, Jim Cline and Rabi Mohapatra. Our funds have been provided by linear combinations of N.S.E.R.C.\ of Canada, les Fonds F.C.A.R.\ du Qu\'ebec, the Swiss National Foundation and the U.S. National Science Foundation. \def\pr#1{\it Phys.~Rev.~{\bf #1}} \def\np#1{\it Nucl.~Phys.~{\bf #1}} \def\pl#1{\it Phys.~Lett.~{\bf #1}} \def\prc#1#2#3{{\it Phys.~Rev.~}{\bf C#1} (19#2) #3} \def\prd#1#2#3{{\it Phys.~Rev.~}{\bf D#1} (19#2) #3} \def\prl#1#2#3{{\it Phys. Rev. Lett.} {\bf #1} (19#2) #3} \def\plb#1#2#3{{\it Phys. Lett.} {\bf B#1} (19#2) #3} \def\npb#1#2#3{{\it Nuc. Phys.} {\bf B#1} (19#2) #3} \def{\it et.al. \/}{{\it et.al. \/}} \bibliographystyle{unsrt}
hep-th/9505033
\section{Introduction} The relation between the two simple string theory models in two dimensions, the critical $U(1)$ gauged WZW $SL(2,R)$ model $^{[1-4]}$, and the noncritical string theory of a one-dimensional matter field coupled to the Liouville field, has attracted considerable attention $^{[5-14]}$. In Ref.[1] it was argued that as it is not possible to remove one of the parameters of the two-dimensional black hole in favour of the Liouville field in all the regions of the black hole geometry, the theory cannot be regarded as a non-critical string theory of $ c=1$ matter coupled to gravity. In agreement with this result, Distler and Nelson$^{[6]}$ studied the BRST cohomology of the black hole and found that there are more discrete states in the black hole than in the Liouville theory. Also, by looking at the behaviour of the states of the black hole near the horizon, it was found $^{[12]}$ that there are only a few states that do not diverge near the horizon and are therefore physical, the $W_\infty$ states not being among them. On the other hand, using the free field realization of $SL(2,R)/U(1)$ and the true BRST charge in the black hole, it was shown that as far as the energy-momentum tensor is concerned, the model is identical to 2d gravity $^{[8]}$. It was then claimed that the extra states that appear in the $SL(2,R)/U(1)$ model are BRST exact, and therefore the spectra of the two theories are the same. It has been argued $^{[13]}$ that there are null states in the black hole which lead to even more discrete states than in Ref.[6]. Yet in a different approach, the number of discrete states of the black hole comes out even smaller than that of 2-d gravity $^{[12]}$. On the other hand, in the context of matrix models, it has been shown that the 2-dimensional black hole theory and the 2-dimensional gravity theory are closely related $^{[14]}$. Therefore the relation between the two theories warrants further investigation.
In this work we will study the relation by a different method, i.e., by gauging $SL(2,R)$ by a nilpotent subgroup and looking at the behaviour of the black hole theory ${SL(2,R)/U(1)}$ when the ${U(1)}$ tends towards this nilpotent subgroup, which we call ${E(1)}$. When the ${U(1)}$ subgroup of the ${SL(2,R)/U(1)}$ black hole is substituted with the subgroup ${E(1)}$, it is found $^{[15,16]}$ that the resulting theory is nothing but the one-dimensional Liouville field theory with zero cosmological constant $^{[18]}$. The same result can of course be obtained by boosting the original ${U(1)}$ subgroup and letting the boost parameter go to infinity. This reduction of the degrees of freedom from two to one has been studied and is understood to be a consequence of an enlarged symmetry $^{[17]}$. In the effort to understand the details of the effect of the boost on the black hole theory, the effective Lagrangian of the boosted theory was studied in the limit of large but finite boost parameters, and found to resemble the corresponding quantity in the theory of $c=1$ conformal matter coupled to the Liouville field$^{[15,16]}$. In this paper we have pursued this line of investigation in more detail and have found that not only the action and the tachyon of the black hole tend to those of 2-d gravity, but also the discrete states of the black hole tend to the discrete states of 2-d gravity. The identification of the quantum numbers of the former theory with the momenta of the latter$^{[6]}$ will then appear as a natural consequence of the boost transformation. The transformation also explains the disappearance of the extra discrete states which occur in the black hole but not in 2-d gravity. In section 2 we will review the results of Ref.[16] for E(1) gauging and discuss the free field representation of the nilpotent gauged WZW model of $SL(2,R)$ and show that the stress tensor of this model is the same as in the Liouville theory.
In section 3 we will study the limit of the primary fields of $SL(2,R)/U_t(1)$ as $t \rightarrow \infty$, where $t$ is the boost parameter, and show that the primary fields in the regions V and III of Ref.[1] lead to the vertex operators of the $c=1$ theory coupled to Liouville. In section 4 we study the operator aspects of the boost transformation, and in section 5 we will take up the discrete states. \section{Gauging SL(2,R) by its Nilpotent Subgroup} Let us take the following parametrization for elements $g \in SL(2,R)$: \begin{equation} g= \left( \begin{array}{ll} a&u\\-v&b \end{array} \right),\ \ \ \ \ \ \ \ ab+uv=1. \end{equation} We consider the nilpotent subgroup $E(1)$ of $SL(2,R)$ generated by $\sigma = \sigma_3+i \sigma_2$ and use the axial gauge freedom ($g \rightarrow hgh$) to fix the gauge by the following condition: \begin{equation} a+b=0, \end{equation} which is valid in region V. Then, using the parameters $$x={1 \over 2} (u-v)$$ \begin{equation} e^{\varphi '} = {1 \over 4} (u+v-2a), \end{equation} we find the following effective action $^{[16]}$ \begin{equation} I_{eff} = \frac {k}{4 \pi} \int d^2 \sigma \sqrt {h} h^{ij} \partial_i \varphi' \partial_j \varphi' +\frac {1}{4 \pi}\int d^2 \sigma \sqrt {h} R^{(2)}\varphi' . \end{equation} In the above equation $h^{ij}$ is the two dimensional metric and $R^{(2)}$ is the curvature of the world-sheet. This one-dimensional action is nothing but the Liouville action. We will now show this equivalence of $SL(2,R)/E(1)$ and Liouville theory at the level of the stress tensor.
As is well known, if one uses the Gauss decomposition to represent the group elements of $SL(2,R)$, the following representations for the currents of $SL(2,R)_k$ in terms of free fields $\beta$, $\gamma$ and $\phi$ can be obtained $^{[19]}$: \pagebreak $$ J_+ = \beta $$ \begin{equation} J_- = \beta \gamma^2 +\sqrt {2k'} \gamma \partial \phi +k \partial \gamma \ \ \ , \ \ \ \ k'=k-2 \end{equation} $$J_3= -\beta \gamma - \sqrt {k'\over 2} \partial \phi $$ where $\beta$ and $\gamma$ are the commuting ghost fields with dimensions $h=1,0$ and with OPE's $\beta (z) \gamma (w) \sim {1\over z-w}$ and $\phi (z) \phi (w) \sim -\ln (z-w)$. Then, using the Sugawara construction, the stress tensor of $SL(2,R)_k$ becomes: \begin{equation} T _{SL(2,R)}(z) = \beta \partial \gamma -{1\over 2} (\partial \phi)^2 -{1\over \sqrt {2k'}} \partial ^2 \phi \end{equation} Now we want to gauge away the nilpotent subgroup of $SL(2,R)$, i.e. $J_+$, by using the BRST method. As $J_+ (z) J_+ (w)$ is regular, there is no need to introduce a gauge field (auxiliary field) for constructing the BRST charge ($Q_+$), and hence there is no need to introduce ghosts to fix the gauge field. In this way we arrive at the following expression for the BRST charge of the nilpotent subgroup of $SL(2,R)$: \begin{equation} Q_+ = \oint dz J_+ (z) \end{equation} which satisfies $Q_+^2=0$. As we do not introduce the gauge field, it makes no contribution to $T(z)$, and therefore \begin{equation} T_{SL(2,R)/ E(1)}=\beta \partial \gamma -{1\over 2} (\partial \phi)^2 -{1\over \sqrt {2k'}} \partial ^2\phi \end{equation} But there are terms in Eq.(8) which are BRST exact and must be subtracted from the stress tensor.
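As a consistency check (ours, not part of the original derivation), the free-field content of Eq.(6) reproduces the Sugawara central charge $c=3k/(k-2)$ of $SL(2,R)_k$: the bosonic $\beta\gamma$ system of weights $(1,0)$ contributes $c=2$, while a boson with background-charge term $a\,\partial^2\phi$ and $\phi(z)\phi(w)\sim -\ln(z-w)$ contributes $c=1+12a^2$ in these conventions. A minimal SymPy verification:

```python
import sympy as sp

k = sp.symbols('k', positive=True)
kp = k - 2                          # k' = k - 2

c_bg = 2                            # bosonic beta-gamma system of weights (1, 0)
a = 1/sp.sqrt(2*kp)                 # coefficient of the d^2 phi term in Eq.(6)
c_phi = 1 + 12*a**2                 # linear-dilaton boson: c = 1 + 12 a^2

# total free-field central charge equals the Sugawara value 3k/(k-2)
assert sp.simplify(c_bg + c_phi - 3*k/(k - 2)) == 0

# at the level k = 9/4 the phi sector alone carries c = 25,
# the Liouville central charge of the c = 1 noncritical string
assert c_phi.subs(k, sp.Rational(9, 4)) == 25
```

At $k=9/4$ the coset value $3k/(k-2)-1=26$ then splits as $25$ (Liouville) plus $1$ (matter).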
It can be easily shown that: $$\beta \partial \gamma =\partial (\gamma \beta) - \gamma \partial \beta$$ $$= \partial (\gamma \beta) -\{Q_+, {1\over 2} \partial \beta \gamma^2\}$$ $$= \{Q_+, \partial ({1\over 2} \beta \gamma^2)\} - \{Q_+, {1\over 2} \partial \beta \gamma^2\}$$ Thus, up to a BRST exact term and at the level $k={9\over 4}$, we find that: \begin{equation} T_{SL(2,R)/E(1)} = -{1\over 2}(\partial \phi)^2 -\sqrt {2} \partial ^2 \phi \end{equation} The above is exactly the stress tensor of the Liouville theory at zero cosmological constant. The same result can be obtained if we consider the stress tensor of $SL(2,R)/U_t(1)$ and look at its behaviour as $t \rightarrow \infty$. \section{Primary Fields} The primary fields of the WZW model are defined via its matrix elements. In the case of $SL(2,R)$ gauged by $\sigma_3$, the vertex operator in the region I is $^{[3]}$: \begin{equation} V_\lambda ^\omega = <\lambda ,\omega|g(y,\tau)|\lambda, -\omega> \end{equation} where $\lambda$ defines the spin of the $SL(2,R)$ representation, $\omega$ is the eigenvalue of $\sigma_3$, and $y$ and $\tau$ are defined by: \begin{equation} y=uv ,\ \ \ \ \ \ \ \ e^{2\tau}= -{u \over v}. \end{equation} In this region the suitable gauge condition is $a-b=0$.
There are four different vertex operators in the region I which have different behaviours near the horizon and at infinity, among which one, denoted $U_\lambda ^\omega$, can be naturally extended to the region III$^{[3]}$: \begin{equation} U_\lambda ^\omega= e^{-2i\omega\tau} F_\omega ^ \lambda (y)\\= e^{-2i\omega\tau} (-y)^{-i\omega} B(\nu_+, \bar\nu_-)\, F(\nu_+, \bar\nu_-,1-2 i\omega,y) \end{equation} where $B(\alpha, \beta )=\Gamma ( \alpha ) \Gamma ( \beta ) / \Gamma(\alpha + \beta )$, $F$ is the hypergeometric function $_2F_1$, and \begin{equation} \nu_\pm = {1\over 2} -i(\lambda \pm \omega) \end{equation} However, it is convenient to work with the boosted group element $g$ in Eq.(10), rather than with the states corresponding to $U_t(1)$. We therefore have, \begin{equation} U_\lambda ^\omega (t)=<\lambda,\omega|g(y_{-t}, \tau_{-t})|\lambda , -\omega> \end{equation} where \begin{equation} g_{-t}=e^{{t\over 2} \sigma _1}g e^{-{t \over 2 }\sigma _1} \end{equation} If the eigenvalues of $\sigma=\sigma_3+i\sigma _2$ are denoted by $\chi$, $$\sigma|\lambda ,\chi >=\chi |\lambda ,\chi>$$ then as $\sigma^t_3 \stackrel {t \rightarrow \infty} \longrightarrow e^t \sigma$, the states $|\lambda ,\omega>_t$ must tend to $|\lambda ,\chi>$ as $t\rightarrow \infty$. As a result, \begin{equation} \omega=\chi e^t \end{equation} and \begin{equation} \nu_ \pm= {1\over 2}-i(\lambda \pm \chi e^t) \end{equation} To find the limit of the vertex operators, we express the hypergeometric function in terms of the associated Legendre function of the second kind $Q_\mu ^\nu(z)$ $^{[20]}$, and obtain, $$U_\lambda ^\chi (t)\stackrel {t\rightarrow \infty}\longrightarrow e^{-4i\chi {u+v \over u-v}}Q_ \nu ^0(1+{8\over(u-v)^2})$$ which, when $|\nu| \rightarrow \infty$, reduces to: \begin{equation} U_\lambda ^\chi (t)\rightarrow e^{-4i\chi {u+v\over u-v}} \sqrt {\pi \over 2}(w^2-1)^{-1\over 4}(w-\sqrt {w^2-1})^{-{i\over 2}\chi e^t} \end{equation} where $w=1+{8\over (u-v)^2}$.
As expected, there is no connection to 2-d gravity in the region I. The vertex operators in the region V are$^{[3]}$: \begin{equation} W_\omega ^\lambda (y,\tau)=e^{-2i\omega\tau} y^{-i\omega} F(\nu_+ , \bar \nu_- , 1 , 1-y) \end{equation} where $y=uv$ and $\tau= {1\over 2}\ln ({u/v})$, and the gauge condition is $a+b=0$. As this function is not singular at $y=1$, it can be trivially continued across the singularity to region III. In the same way as discussed in the previous section, the corresponding vertex operator of $SL(2,R)/U_t(1)$ in this region can be recovered from Eq.(19) by simply transforming $y \rightarrow y_{-t}$, $\tau \rightarrow \tau_{-t}$ and taking the gauge condition $a_{-t}+b_{-t}=0$. After some algebra, using a similar set of identities as in the case of region I, we obtain, \begin{equation} W_\chi^\lambda(t) \stackrel {t\rightarrow \infty} \longrightarrow {1\over \pi}e^{-2ix\chi e^{-\varphi'}}e^{\pi \chi e^t} e^{-(t+\varphi')}K_{2i\lambda}(2\chi e^{-\varphi'})\end{equation} Fortunately the dependence of $W_\chi ^\lambda$ on $t$ is such that we can absorb it consistently in $\varphi'$, and therefore if we define $\varphi=\varphi' +t$ and $xe^{-\varphi}=X$ and use Eq.(16), we finally obtain, \begin{equation} W_\omega^\lambda (t) \stackrel {t\rightarrow \infty}\longrightarrow {e^{\pi \omega}\over \pi}e^{-2i\omega X} e^{-\varphi}K_{2i\lambda}(2 \omega e^{-\varphi}) \end{equation} Eq.(21) is exactly the vertex operator of $c=1$ matter coupled to 2-d gravity with non-zero cosmological constant $^{[21]}$. This equivalence is even clearer when the vertex operator is considered on-shell, that is when $\lambda =\pm {\omega / 3}$ $^{[3]}$. Eq.(20) also shows that the eigenvalue of $\sigma$ plays the role of the cosmological constant, $\chi =\sqrt {\mu}$ $^{[3]}$.
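As a numerical aside (ours), two properties of the Bessel function in Eq.(21) can be checked with mpmath: $K_{2i\lambda}(x)$ is real for real $x>0$ (via the integral representation $K_{i\mu}(x)=\int_0^\infty e^{-x\cosh t}\cos(\mu t)\,dt$), so the vertex operator has a sensible phase, and its small-argument behaviour $K_\nu(z)\simeq{1\over2}\bigl[\Gamma(\nu)(z/2)^{-\nu}+\Gamma(-\nu)(z/2)^{\nu}\bigr]$ is what produces the $Ae^{2i\lambda\varphi}+c.c.$ structure in the $\chi\rightarrow 0$ limit considered below.

```python
import mpmath as mp

mp.mp.dps = 30
lam = mp.mpf('0.7')
nu = 2j*lam

# K_{2 i lam}(x) is real for real x > 0; compare with the integral representation
x = mp.mpf('1.3')
val = mp.besselk(nu, x)
integral = mp.quad(lambda t: mp.exp(-x*mp.cosh(t))*mp.cos(2*lam*t), [0, mp.inf])
assert abs(mp.im(val)) < mp.mpf('1e-20')
assert abs(integral - mp.re(val)) < mp.mpf('1e-20')

# small-argument behaviour: K_nu(z) ~ (Gamma(nu)(z/2)^(-nu) + Gamma(-nu)(z/2)^nu)/2
z = mp.mpf('1e-6')
exact = mp.besselk(nu, z)
approx = (mp.gamma(nu)*(z/2)**(-nu) + mp.gamma(-nu)*(z/2)**nu)/2
assert abs(exact - approx) < mp.mpf('1e-8')
```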
When $\chi \rightarrow 0$, we obtain \begin{equation} W_\omega^\lambda (t) \stackrel {t\rightarrow \infty} \longrightarrow e^{-(\varphi'+t)} e^{-ix\chi e^{-\varphi'}} \{Ae^{2i\lambda(\varphi' +t)} +c.c.\} \end{equation} where $$A={\Gamma (2i\lambda) \over \Gamma ({1\over2}+i\lambda +i\omega)\Gamma ({1\over 2}+i\lambda - i\omega)}$$ As in the previous case, if we define $\varphi =\varphi' + t$ and $X =xe^{-\varphi}$, we arrive at the following expression for the primary fields \begin{equation} W _\omega^\lambda \rightarrow e^{-\varphi} e^{-2i\omega X } \{ Ae^{2i\lambda \varphi}+A^* e^{-2i\lambda \varphi}\} \end{equation} However, this is nothing but the vertex operator of $c=1$ matter plus Liouville at zero cosmological constant, of course after applying the on-shell condition. The scattering matrix is $A/A^*$. Therefore $\chi$ plays exactly the role of the cosmological constant, and the boosted black hole in region V is equivalent to 2-d gravity. At $t=\infty$, where $\chi =\omega e^{-t}=0$, Eq.(22) becomes: \begin{equation} W_\omega^\lambda (t=\infty) = Ae^{2i(\lambda -1)\varphi} +c.c. \end{equation} which is the expression for the vertex operator of pure gravity, the Liouville theory, as expected: as $t \rightarrow \infty$, $\sigma _3^t \rightarrow \sigma$ and we expect the theory to reduce to $SL(2,R)/E(1)$, which is the Liouville theory. \section{Limit of the Operators} In the region V, where the gauge condition is $a+b=0$, the effective action is$^{[1]}$: \begin{equation} I_{eff}=-{k \over 4\pi}\int {\partial u \bar\partial v + \partial v \bar\partial u \over 1-uv}d^2z+{1\over 4\pi}\int d^2\sigma \sqrt {h}R^{(2)} \ln(1-uv).
\end{equation} If we boost this action and keep the next to leading order terms in $\epsilon= e^{-t}$, we find: $$ I_{eff} = \frac {k}{4 \pi} \int d^2 \sigma \sqrt {h} h^{ij} [ \partial_i \varphi \partial_j \varphi -(\partial_i X \partial_j X + X \partial_i \varphi \partial_j X)]$$ \begin{equation} +\frac {1}{2 \pi}\int d^2 \sigma \sqrt {h} R^{(2)}(\varphi - {1 \over 4}X^2), \end{equation} where $X=xe^{-\varphi}$ and $\varphi=\varphi'+t$. This resembles the theory of a matter field $X$ coupled to a Liouville field $\varphi$, including an interaction term $^{[5]}$, which we may ignore if we assume that $X$ is small, because of its $\epsilon$ dependence, and that $\partial X$ is comparable to $\partial \varphi$. Note that if we redefine the Liouville field as \begin{equation} \Phi=\varphi - {1 \over 4} X^2 ,\end{equation} we obtain \begin{equation} I_{eff} = \frac {k}{4 \pi} \int d^2 \sigma \sqrt {h} h^{ij} ( \partial_i \Phi \partial_j \Phi -\partial_i X \partial_j X) +\frac {1}{2 \pi}\int d^2 \sigma \sqrt {h} R^{(2)}\Phi, \end{equation} which indeed is the standard 2-d gravity action. Observe that the correction to $\varphi$ is only second order in $\epsilon$ and will not affect the results for the tachyon vertex operators of the previous section. To go further we need to know the operator properties of the fields in the boosted theory. To investigate the OPE's of these fields, it is necessary to consider the free field representation of $SL(2,R)$.
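The redefinition in Eq.(27) can be verified symbolically: the kinetic terms of Eq.(26) and Eq.(28) differ only by ${1\over4}X^2(\partial X)^2$, which is higher order in $\epsilon$ since $X\sim\epsilon$. A minimal check (ours), treating the derivatives as commuting symbols:

```python
import sympy as sp

X, dphi, dX = sp.symbols('X dphi dX')   # X, and the derivatives d_i phi, d_i X

dPhi = dphi - X*dX/2                    # d_i Phi for Phi = phi - X**2/4

# kinetic term of Eq.(26): (dphi)^2 - (dX)^2 - X dphi dX
kinetic_26 = dphi**2 - dX**2 - X*dphi*dX
# kinetic term of Eq.(28): (dPhi)^2 - (dX)^2
kinetic_28 = sp.expand(dPhi**2 - dX**2)

# the mismatch is exactly X^2 (dX)^2 / 4, higher order in epsilon since X ~ epsilon
assert sp.expand(kinetic_26 - kinetic_28 + X**2*dX**2/4) == 0
```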
As is known, if we use the Gauss decomposition to parametrize the $SL(2,R)$ group elements, that is: \begin{equation} g=e^{\gamma \sigma_+}e^{\phi'\sigma_3}e^{\chi \sigma_-}= \left( \begin{array}{ll} e^{\phi'}+\chi\gamma e^{-\phi'}&\gamma e^{-\phi'}\\ \chi e^{-\phi'} & e^{-\phi'} \end{array} \right) \end{equation} where $\sigma_\pm={1 \over 2}(\sigma_1\pm i \sigma_2)$, then the currents $J=J_i\sigma_i=-k(\partial g)g^{-1}$ reduce to the representation (5) with the definition of $\phi$ and $\beta$ as: $$\phi=-k\sqrt{2\over k'}\phi'$$ \begin{equation} \beta=-k\partial \chi e^{-2\phi'} . \end{equation} Boosting the representation (29) and imposing the gauge condition results in the following relations for the leading terms of $\beta_{-t}$, $\gamma_{-t}$ and $\phi_{-t}$: $$\beta_{-t}=k(x\partial \varphi - \partial x)e^\varphi=-k\partial X e^{2 \varphi}$$ \begin{equation} \gamma_{-t}=1+xe^{-\varphi}=1+X \end{equation} $$\phi_{-t}=k\sqrt{2\over k'}\varphi$$ Now the OPE's of the above fields are: $$\beta_{-t}(z) \gamma_{-t}(w) \sim {1\over z-w} \ \ \ \ \ \ , \ \ \ \ \ \phi_{-t}(z) \phi_{-t}(w) \sim -\ln(z-w) $$ \begin{equation} \beta_{-t}(z) \phi_{-t}(w) \sim \gamma_{-t}(z) \phi_{-t}(w) \sim {\rm regular}\end{equation} Therefore, using Eq.(31), the following OPE's are dictated for the $X$ and $\varphi$ fields: $$\varphi(z)\varphi(w) \sim -{k'\over 2k^2}\ln(z-w)$$ \begin{equation} \varphi(z) X(w) \sim {\rm regular} \end{equation} $$\langle X(z)X(w)\rangle =-{1\over 2k}\ln(z-w).$$ The above equations show that the interpretation of $\varphi$ and $X$ as the Liouville and the $c=1$ bosonic fields, which commute with each other, is permitted. \section{Discrete States} The discrete states of the $SL(2,R)/U(1)$ black hole were found in reference [6] using the parafermionic modules $V_{j,m}$ built on the states $U_{j,m}$ of the discrete irreducible representations of $SL(2,R)$, where $j$ and $m$ are the usual angular momentum labels of $SL(2,R)$.
It was found that aside from the propagating tachyon states with $$m=\pm{3(j+1)/2},$$ there are three sets of discrete states labeled $D$, $C$ and $\hat{D}$ with $$m=\pm {3\over 8}(2s-4r-1) \ \ \ \ , \ \ \ \ j={1 \over 8}(2s+4r-5)$$ for the $\hat{D}$ states; $$m=\pm{3\over 4}(s-2r+1) \ \ \ \ , \ \ \ \ \ j={1\over 4}(s+2r-3)$$ for the $D$ states; and $$m={3\over 2}(s-r) \ \ \ \ , \ \ \ \ j={1\over 2}(s+r-1)$$ for the $C$ states, where $s$ and $r$ are positive integers. From the form of the discrete states of the 2-d gravity theory, $$p_x=\sqrt{2}(p-q) \ \ \ \ , \ \ \ \ p_\varphi=\sqrt{2}(p+q-1),$$ with $p$ and $q$ positive integers, it was suggested that the sets $D$ and $C$ correspond to the 2-d gravity discrete states with the identification \begin{equation} p_x=2\sqrt{2}m/3 \ \ \ \ , \ \ \ \ \ p_\varphi=2\sqrt{2}j;\end{equation} and that the $\hat{D}$ did not correspond to any 2-d gravity states. These extra states will of course not appear if a free field representation of $SL(2,R)/U(1)$ is used, as the operator structure and the energy momentum tensor of the theories are not distinguishable in this representation. However, once the Kac-Moody Verma modules are used, the extra states are inescapable. Below, we will see that performing the boost transformation on the states of the black hole removes the $\hat{D}$ states and reduces the spectrum to that of 2-d gravity. To begin with, recall that the states $\hat{D}$ and $D$ are related by the screening operators, $$S^{\pm}=\oint dze^{\sqrt{k'/2}\phi^1 \pm i \sqrt{k/2}\phi^2} ,$$ in terms of the free fields $\phi^{1}$ and $\phi^{2}$. In fact $\hat{D}={\rm ker}(S)$ and $D=V/\hat{D}$, and the modules $V$ are generated on the $SL(2,R)$ base representation operators, $$U_{j,m}=e^{j\sqrt{2/k'}\phi^{1}+m\sqrt{2/k}(i\phi^{2}+\phi^{3})},$$ where $\phi^{3}$ is the $U(1)$ free field. We now apply our boosting map on these operators and obtain, $$U_{j,m} \rightarrow const.
e^{-2j\frac{k}{k'}\varphi' - 2(j+m)X},$$ $$S^+ \rightarrow const. \oint dze^{-k\varphi'-2(k-1)X},$$ \begin{equation} S^- \rightarrow const. \oint dze^{-k\varphi' -2x}.\end{equation} From these equations, as we have interpreted $\varphi$ to be the Liouville field, and considering the normalisation of $\varphi$ in Eq.(33), we see that \begin{equation} p_{\varphi}=2 \sqrt {2}j,\end{equation} as suggested in Ref.[6]. The corresponding relation for $p_x$, \begin{equation} p_{x}= 2\sqrt {2}(m+j)/3 ,\end{equation} has the coefficient suggested in Ref.[6], with $m+j$ substituted for $m$. Next, taking the OPE's of the screening operators with the Verma module operators, we see that in the approximation we are considering, the screening operators fail to be well defined, indicating that the states $\hat{D}$ are removed from the spectrum. To confirm this result we have calculated the commutator of the limit of the screening operators with that of the Virasoro operator $L_{0}$; it too fails to vanish in the first order approximation. We conclude that in the large boost limit of the black hole theory, the extra discrete states disappear and the spectrum becomes that of 2-d gravity. {\bf Acknowledgements} We would like to thank A. Morozov for valuable discussions on the free field representation of GWZW models. \begin{center} {\large References \\} \end{center} \begin{enumerate} \bibitem{} E. Witten, Phys. Rev. D44 (1991) 314. \bibitem{} G. Mandal, A. M. Sengupta and S. R. Wadia, Mod. Phys. Lett. A6 (1991) 1685. \bibitem{} R. Dijkgraaf, H. Verlinde and E. Verlinde, Nucl. Phys. B371 (1992) 269. \bibitem{} A. A. Tseytlin, Nucl. Phys. B399 (1993) 601; I. Bars, Phys. Lett. B293 (1992) 315. \bibitem{} E. J. Martinec and S. L. Shatashvili, Nucl. Phys. B368 (1992) 338;\\ M. Bershadsky and D. Kutasov, Phys. Lett. B266 (1991) 345. \bibitem{} J. Distler and P. Nelson, Nucl. Phys. B374 (1992) 123. \bibitem{} S. Mukhi and C. Vafa, Nucl. Phys. B407 (1993) 667. \bibitem{} T. Eguchi, H.
Kanno and S. K. Yang, Phys. Lett. B298 (1993) 73. \bibitem{} T. Eguchi, Phys. Lett. B316 (1993) 74. \bibitem{} M. Ishikawa and M. Kato, Phys. Lett. B302 (1993) 209. \bibitem{} S. Chaudhuri and J. D. Lykken, Nucl. Phys. B396 (1993) 270; \\ K. Becker and M. Becker, Interactions in the $SL(2,R)/U(1)$ black hole background, CERN-TH.6976/93. \bibitem{} N. Marcus and Y. Oz, hep-th/9305003. \bibitem{} K. Itoh, H. Kunitomo, N. Ohta and M. Sakaguchi, BRST analysis of physical states in the two-dimensional black hole, OS-GE 28-93. \bibitem{} S. R. Wadia, hep-th/9503125. \bibitem{} F. Ardalan, ``2D black holes and 2D gravity'', in Low Dimensional Topology and Quantum Field Theory, Ed. H. Osborn (Plenum Press, 1993), p. 177. \bibitem{} M. Alimohammadi, F. Ardalan and H. Arfaei, Int. J. Mod. Phys. A10 (1995) 115. \bibitem{} F. Ardalan and M. Ghezelbash, Mod. Phys. Lett. A9 (1994) 3749. \bibitem{} P. Forgacs, A. Wipf, J. Balog, L. Feher and L. O'Raifeartaigh, Phys. Lett. B227 (1989) 214; \\ L. O'Raifeartaigh, P. Ruelle and I. Tsutsui, Phys. Lett. B258 (1991) 359; \\ A. Alekseev and S. Shatashvili, Nucl. Phys. B323 (1989) 719; \\ M. Bershadsky and H. Ooguri, Commun. Math. Phys. 126 (1989) 49; \\ G. T. Horowitz and A. A. Tseytlin, Phys. Rev. D50 (1994) 5204; \\ C. Klimcik, Null gauged WZNW theories and Toda-like $\sigma$-models, hep-th/9501091. \bibitem{} A. Gerasimov, A. Morozov and M. Olshanetsky, Int. J. Mod. Phys. A5 (1990) 2495. \bibitem{} A. Erd\'elyi, W. Magnus, F. Oberhettinger and F. G. Tricomi, Higher Transcendental Functions (Bateman Manuscript Project), McGraw-Hill (1953). \bibitem{} G. Moore, N. Seiberg and M. Staudacher, From Loops to States in 2D Quantum Gravity, Rutgers preprint RU-91-11 (March 1991). \end{enumerate} \end{document}
cond-mat/9505018
\section{Introduction} There are many naturally occurring examples of uniaxially modulated structures. The ferrimagnetic states of the rare earths [\onlinecite{JJ}] include several phases where the wavevector lies along the $\hat{\rm c}$-axis and can be of a long period commensurate or incommensurate with the underlying lattice. Modulated atomic ordering has been observed in metallic alloys such as TiAl$_3$ and a relationship established between the wavelength of the modulated phases and the temperature[\onlinecite{alloys}]. Polytypism describes the phenomenon whereby a compound can have modulated structural order of different periods [\onlinecite{politypism}]. A well-known example is SiC where the `ABC' stacking sequence of the close-packed layers can correspond to many varied and often very long wavelengths. These systems have been usefully modeled in terms of arrays of interacting domain walls [\onlinecite{MEFXS}]. When the wall energy is small, wall--wall interactions become important in determining the wall spacing and small changes in the external parameters can lead to many different modulated phases becoming stable. A model which has proved very useful for understanding this process is the axial next-nearest neighbor Ising or ANNNI model which is an Ising system with first- and second-neighbor competing interactions along one lattice direction [\onlinecite{RJE}]. At zero temperature the ANNNI model has a multiphase point where an infinite number of phases are degenerate corresponding to zero domain wall energy. At low temperature entropic fluctuations cause domain wall interactions which stabilize a sequence of modulated structures [\onlinecite{BAK}-\onlinecite{VG}]. Our aim in this paper is to investigate whether quantum fluctuations can play a similar r\^{o}le. 
That quantum fluctuations can remove ground state degeneracies not required by symmetry was pointed out by Shender [\onlinecite{EFS}] and termed ``ground state selection'' by Henley [\onlinecite{CLH}]. We find that quantum fluctuations do indeed remove the infinite degeneracy of the multiphase point of the ANNNI model. A sequence of first order transitions is stabilized in a way qualitatively similar to the finite temperature behavior but involving a different sequence of phases. However, for long-period phases entropic and quantum fluctuations behave in a subtly different way. Our analysis focuses on the domain wall interactions and we calculate in turn the wall energy, two-wall interactions and three-wall interactions [\onlinecite{MEFXS}]. This is done by an analysis of the structure of perturbation theory around the multiphase point of the ANNNI model: all orders of perturbation theory are important. The calculation is described in Secs. 3 and 4 and corrections pertinent to the long-period phases are treated in Sec. 5. To illustrate the essence of the phenomenon, we start, in Sec. 2, by focussing on a simpler problem concerning the unbinding of a single interface. In this model, which is effectively a one-wall version of the ANNNI problem, spins at opposite sides of the system are fixed to be antiparallel. When the magnetic field, $h$, is nonzero, the domain wall separating up spins from down spins is bound to one of the surfaces. For $h=0$ and for an Ising model, there is a multiphase degeneracy, because the interface energy is independent of its distance from the surface. However, when the Ising model is replaced by a very anisotropic Heisenberg model, then, as we show, quantum fluctuations induce a surface--interface repulsion resulting in the interface's unbinding through a series of first order layering transitions [\onlinecite{GO}]. This calculation is similar in spirit, but much simpler than that considered in the rest of the paper. 
\section{Interface Unbinding Transition} Our first aim is to show how quantum fluctuations can affect the unbinding transition of an interface from a surface. Accordingly, we consider the Hamiltonian \begin{equation} {\cal H} = - {J \over S^2} \sum_{i=1}^{N-1} {\bf S}_{i} \cdot {\bf S}_{i+1} + {h \over S} \sum_{i=2}^{N-1}S_{i}^z - {D \over S^2} \sum_{i=1}^N ([S_{i}^z]^2 -S^2) -{H \over S} (S_1^z-S_N^z) , \label{HAMIL} \end{equation} where $i$ are the sites of a one-dimensional lattice of length $N$ and ${\bf S}_i$ is a quantum spin of magnitude $S$ at site $i$. In Eq. (\ref{HAMIL}) we introduced factors of $S$ to simplify the classical spin ($S \rightarrow \infty$) limit. Although the results are described for one dimension, they hold for any dimension because of the translational invariance of the interface parallel to the surface (walls are flat in two or more dimensions for an Ising model at sufficiently low temperature). The final term is chosen to impose the boundary conditions such that there is an interface in the system. The interface will be defined as being in position $k$ when it lies between sites $k$ and $k+1$. We shall restrict ourselves to the limits of zero temperature, $H=\infty$ and $N=\infty$. For $D=\infty$, $S_i^z=\sigma_i S$ where $\sigma_i=\pm 1$ and the Hamiltonian~(\ref{HAMIL}) reduces to an Ising model in a magnetic field whose Hamiltonian ${\cal H}_I$ is given by \begin{equation} \label{HSUBI} {\cal H}_I = -J \sum_{i=1}^{N-1} \sigma_i \sigma_{i+1} + h \sum_{i=2}^{N-1} \sigma_i - H [ \sigma_1 - \sigma_N ] \ . \end{equation} For $h>0$ the interface lies at $k=1$; for $h<0$ it unbinds to $k=\infty$. $h=0$ is a multiphase point where every interface position has the same energy. For classical spins, $S=\infty$, the ground state and hence the multiphase point are maintained as the spin anisotropy is decreased from $D=\infty$. 
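The multiphase degeneracy of ${\cal H}_I$ at $h=0$, and the binding of the interface for $h>0$, can be seen by direct enumeration. A small script (ours; the $H$-term is implemented by simply fixing the boundary spins):

```python
def ising_energy(spins, J=1.0, h=0.0):
    """Energy of Eq.(2) with fixed boundary spins sigma_1 = +1, sigma_N = -1
    (the H-term is treated as a hard boundary condition)."""
    E = -J * sum(s1*s2 for s1, s2 in zip(spins, spins[1:]))
    E += h * sum(spins[1:-1])          # the field acts on bulk spins only
    return E

N = 10

def config(k, N=N):
    """Interface at position k: spins 1..k up, the rest down."""
    return [1]*k + [-1]*(N - k)

# at h = 0 every interface position has the same energy (multiphase degeneracy)
E0 = [ising_energy(config(k)) for k in range(1, N)]
assert all(abs(e - E0[0]) < 1e-12 for e in E0)

# for h > 0 the energy grows linearly with k, binding the interface at k = 1
Eh = [ising_energy(config(k), h=0.1) for k in range(1, N)]
assert min(range(len(Eh)), key=Eh.__getitem__) == 0
```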
Our aim here is to study the way in which this degeneracy is lifted by quantum fluctuations when $D \gg J$ and $S$ is large but finite. We find that the interface unbinds through an infinite sequence of first order transitions as $h \rightarrow 0^{+}$, as illustrated schematically in Fig. \ref{fig:pd}. The existence of the transitions follows from considering the structure of degenerate perturbation theory around the multiphase point. To start the analysis we write the Hamiltonian~(\ref{HAMIL}) in bosonic form using the Dyson-Maleev transformation [\onlinecite{DM,SVM}] \begin{eqnarray} S_i^z & = & \sigma_i ( S - a_i^+ a_i) \nonumber \\ S_i^+ & = & \sqrt{2S} \left( \delta_{\sigma_i,1} \left[ 1 - {a_i^+ a_i \over 2S} \right] a_i + \delta_{\sigma_i,-1} a_i^+ \left[ 1-{a_i^+ a_i\over 2S} \right] \right) \nonumber \\ S_i^- & = & \sqrt{2S} \left( \delta_{\sigma_i,1} a_i^+ + \delta_{\sigma_i,-1} a_i \right) \ , \end{eqnarray} where $\delta_{a,b}$ is unity if $a=b$ and is zero otherwise, $a_i^+$ ($a_i$) creates (destroys) a spin excitation at site $i$, and $\sigma_i$ specifies the sign of the $i$th spin. The resulting Hamiltonian is \begin{equation} \label{HAM} {\cal H} ( \{ \sigma_i \} ) = {\cal H}_I + {\cal H}_0 + V_{||} + V_{\not{\parallel}} +V_4 \ , \end{equation} where ${\cal H}_I$ is given in Eq. (\ref{HSUBI}), \begin{equation} {\cal H}_0 = \sum_{i=2}^{N-1} \Biggl[ 2D + J \sigma_{i} ( \sigma_{i-1} + \sigma_{i+1} ) - h \sigma_{i} \Biggr] S^{-1} a_{i}^+ a_{i}, \label{eqn:h0} \end{equation} $V_4$ represents the four operator terms proportional to $1/S^2$, and $V_{||}$ ($V_{\not{\parallel}}$) is the interaction between spins which are parallel (antiparallel) \begin{equation} V_{||} =-\sum_{i=2;i \ne k}^{N-1}JS^{-1} (a_{i}^+ a_{i+1} + a_{i+1}^+ a_{i} ), \label{eqn:vpar} \end{equation} \begin{equation} V_{\not{\parallel}} = -J S^{-1}(a_{k}^+ a_{k+1}^+ + a_{k+1} a_{k} ).
\label{eqn:vnotpar} \end{equation} We work to lowest order in $1/S$ and therefore neglect terms like $V_4$ which are higher order than quadratic in the boson operators. To understand the structure of the phase diagram near the multiphase point it is most convenient to calculate the energy difference $\Delta E_{k}=E_{k}-E_{k-1}$ where $E_k$ is the energy of the system with the interface at position $k$ [\onlinecite{DuxY}]. In particular, contributions to $E_{k}$ which are independent of $k$ do not affect the location of the interface and need not be considered. The energies $E_k$ will be calculated at $h=0$ using standard perturbation techniques [\onlinecite{Messiah}] \begin{equation} E_k = {}_k\!\langle 0| (V_{||} + V_{\not{\parallel}}) | 0 \rangle_k - {}_k\!\langle 0| (V_{||} + V_{\not{\parallel}}) {Q_0 \over {\cal H}_0 - E_0} (V_{||} + V_{\not{\parallel}}) | 0 \rangle_k + \dots \label{pert} \end{equation} where the vector $| 0 \rangle_k$ corresponds to the configuration with the interface at position $k$ and no excitation present and $Q_0= 1 - | 0 \rangle_k\,{}_k\!\langle 0|$. All the vectors $| 0 \rangle_k$ are eigenstates of ${\cal H}_0$ with the same eigenvalue $E_0$. However, the perturbative term $(V_{||} + V_{\not{\parallel}})$ conserves $\sum_i S_i^z$ and thus it can never cause a transition between two different ground states. Therefore we may use non-degenerate perturbation theory to check whether the excitations can lift the degeneracy of the interface states. Contributions to the energies $E_k$ arise from spin deviations at the interface created by $V_{\not{\parallel}}$ which are propagated away from and then back to the interface by $V_{||}$ and subsequently destroyed by $V_{\not{\parallel}}$. However only such processes which are $k$-dependent are of interest to us. 
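Although the Dyson-Maleev transformation is not unitary, it preserves the spin algebra exactly. This can be checked with truncated boson matrices (our sanity check, written for an up spin, $\sigma_i=+1$; the truncation only corrupts the highest boson states, so the assertions exclude the boundary of the matrix):

```python
import numpy as np

S, M = 2, 12                                  # spin magnitude and boson-space truncation
a = np.diag(np.sqrt(np.arange(1, M)), k=1)    # annihilation operator
ad = a.T                                      # creation operator
n = ad @ a                                    # number operator

I = np.eye(M)
# Dyson-Maleev representation for an up spin (sigma_i = +1):
Sz = S*I - n
Sp = np.sqrt(2*S) * (I - n/(2*S)) @ a
Sm = np.sqrt(2*S) * ad

# the spin algebra holds away from the truncation boundary
B = slice(0, M - 2)
assert np.allclose((Sp @ Sm - Sm @ Sp)[B, B], 2*Sz[B, B])   # [S+, S-] = 2 S^z
assert np.allclose((Sz @ Sp - Sp @ Sz)[B, B], Sp[B, B])     # [S^z, S+] = +S+
assert np.allclose((Sz @ Sm - Sm @ Sz)[B, B], -Sm[B, B])    # [S^z, S-] = -S-
```

The down-spin ($\sigma_i=-1$) case is analogous, with the roles of $a$ and $a^+$ exchanged.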
The lowest order term which contributes to $\Delta E_{k}$ corresponds to an excitation which is created at the interface at position $k$ and propagates to the surface and back before being destroyed. This graph is illustrated in Fig. \ref{fig:dek}. (This process contributes to $E_{k}$, but does not occur for $E_{k-1}$.) It has a contribution which follows immediately from $(2k)^{\rm th}$ order perturbation theory as \begin{equation} \Delta E_k = - \frac{J^{2k}}{S(4D)^{2k-1}} + {\cal O}\bigl( {1 \over D^{2k}} \bigr) \end{equation} \noindent where the terms in $J$ and $h$ in the denominator contribute only to higher order in $1/D$. $\Delta E_k$ is negative, corresponding to a repulsive interaction between the interface and the surface, and hence as $h \rightarrow 0^{+}$ the interface unbinds through a series of first order phase transitions with boundaries between the phases at \begin{equation} h_{k:k-1}=\frac{J^{2k}}{S(4D)^{2k-1}}. \end{equation} One feature of this calculation which is seen again for the ANNNI model is the fact that the interface energy (here $\Delta E_k$) involves the $(2k)$th power of the coupling constant, and not just the $k$th power, as one might imagine for a classical system [\onlinecite{DuxY}]. The point is that the quantum fluctuation has to propagate from the interface to the surface \emph{and} back. As we will see later, this difference leads to a crucial distinction between the way quantum fluctuations and classical fluctuations lift the multiphase degeneracy for the ANNNI model. \section{The ANNNI Model} A similar formalism is now used to approach a more complicated problem: the effect of quantum fluctuations where the multiphase point is a point of infinite degeneracy for bulk rather than interface phases. We take as our example the axial next-nearest-neighbor Ising or ANNNI model [\onlinecite{RJE}].
Rather than a single wall interacting with a surface the phase structure is now controlled by an infinite number of interacting walls and we shall follow Fisher and Szpilka [\onlinecite{MEFXS}] in analyzing the phase structure in terms of the interactions between the walls. A brief account of this work has been published elsewhere [\onlinecite{HMY}]. The Hamiltonian we consider is \begin{equation} {\cal H} = - {J_0 \over S^2} \sum_{i \langle jj'\rangle} {\bf S}_{i,j} \cdot {\bf S}_{i,j'} - {J_1 \over S^2} \sum_{i,j} {\bf S}_{i,j} \cdot {\bf S}_{i+1,j} + {J_2 \over S^2} \sum_{i,j} {\bf S}_{i,j} \cdot {\bf S}_{i+2,j} - {D \over S^2} \sum_{i,j} ([S_{i,j}^z]^2 -S^2) , \label{AHAMIL} \end{equation} where $i$ labels the planes of a cubic lattice perpendicular to the $z$-direction and $j$ the position within the plane. Also $\langle jj'\rangle$ indicates a sum over pairs of nearest neighbors in the same plane and ${\bf S}_{i,j}$ is a quantum spin of magnitude $S$ at site $(i,j)$. For $D= \infty$, only the states $S_{i}^z=\sigma_iS$, where $\sigma_i = \pm 1$ are relevant and ${\cal H}$ reduces to the ANNNI model [\onlinecite{RJE}]. \begin{equation} {\cal H}_A = - J_0 \sum_{i \langle jj'\rangle} \sigma_{i,j} \sigma_{i,j'} - J_1 \sum_{i,j} \sigma_{i,j} \sigma_{i+1,j} + J_2 \sum_{i,j} \sigma_{i,j} \sigma_{i+2,j} . \end{equation} The ground state of the ANNNI model is ferromagnetic for $\kappa \equiv J_2/J_1 < 1/2$ and an antiphase structure with layers ordering in the sequence $\{ \sigma_i \} = \{ \dots 1, 1, -1 , -1, 1,1,-1, -1\dots \}$ for $\kappa > 1/2$. $\kappa=1/2$ is a multiphase point[\onlinecite{BAK}, \onlinecite{MEFWS}], where the ground state is infinitely degenerate with all possible configurations of ferromagnetic and antiphase orderings having equal energy. For classical spins, $S=\infty$, the ground state (and therefore a multiphase point) is maintained as $D$ is reduced from infinity. 
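The degeneracy at $\kappa=1/2$ is easily confirmed numerically: every periodic sequence of bands of width $\geq 2$ has the same axial energy per spin, while the degeneracy is lifted away from $\kappa=1/2$. A small enumeration (ours; the in-plane $J_0$ term is a common constant for fully ordered layers and is omitted):

```python
def annni_energy_per_spin(pattern, J1=1.0, J2=0.5):
    """Axial energy per layer of a periodic layer sequence of the ANNNI model."""
    L = len(pattern)
    e_nn  = -J1 * sum(pattern[i]*pattern[(i+1) % L] for i in range(L)) / L
    e_nnn = +J2 * sum(pattern[i]*pattern[(i+2) % L] for i in range(L)) / L
    return e_nn + e_nnn

ferro     = [1]*8                          # ferromagnet
antiphase = [1, 1, -1, -1]                 # <2>
three     = [1, 1, 1, -1, -1, -1]          # <3>
mixed     = [1, 1, -1, -1, -1]             # <2,3>

# at kappa = J2/J1 = 1/2 all band sequences with bands of width >= 2 are degenerate
energies = [annni_energy_per_spin(p) for p in (ferro, antiphase, three, mixed)]
assert all(abs(e - energies[0]) < 1e-12 for e in energies)

# away from kappa = 1/2 the degeneracy is lifted (ferromagnet wins for kappa < 1/2)
assert annni_energy_per_spin(ferro, J2=0.4) < annni_energy_per_spin(antiphase, J2=0.4)
```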
To describe how the degeneracy is broken at the multiphase point when $S$ is large, but not infinite, we define a notation similar to that of Fisher and Selke [\onlinecite{MEFWS}] using $\langle n_1, n_2, \dots n_m \rangle$ to denote a state consisting of domains of parallel spins with alternate orientation whose widths repeat periodically the sequence $\{n_1, n_2, \dots n_m\}$. As in the previous section we use the Dyson-Maleev [\onlinecite{DM,SVM}] transformation to recast the Hamiltonian~(\ref{AHAMIL}) into bosonic form (working to lowest order in $1/S$) with the result \begin{equation} \label{AHAM} {\cal H} ( \{ \sigma_i \} ) = E_0 + {\cal H}_0 + V_{||} + V_{\not{\parallel}} \ , \end{equation} where $E_0 \equiv {\cal H}_A$, \begin{eqnarray} {\cal H}_0 & = & \sum_{i,j} \Biggl[ 2\tilde{D} + J_1 \sigma_{i,j} ( \sigma_{i-1,j} + \sigma_{i+1,j} ) - J_2 \sigma_{i,j} ( \sigma_{i-2,j} + \sigma_{i+2,j} ) \Biggr] S^{-1} a_{i,j}^+ a_{i,j} \nonumber \\ & \equiv & \sum_{i,j} E_{i,j} S^{-1} a_{i,j}^+ a_{i,j} \ , \label{eqn:calh0} \end{eqnarray} with $\tilde{D}=D+2 J_{0}$, and $V_{||}$ ($V_{\not{\parallel}}$) contains the interactions between spins which are parallel (antiparallel): \begin{equation} V_{||} = {1 \over S} \sum_{i,j} \Biggl[ - J_1 X(i,i+1;j) (a_{i,j}^+ a_{i+1,j} + a_{i+1,j}^+ a_{i,j} ) + J_2 X(i,i+2;j) (a_{i,j}^+ a_{i+2,j} + a_{i+2,j}^+ a_{i,j} ) \Biggr] \label{eqn:vparallel} \end{equation} \begin{equation} V_{\not{\parallel}} = {1 \over S} \sum_{i,j} \Biggl[ - J_1 Y(i,i+1;j) (a_{i,j}^+ a_{i+1,j}^+ + a_{i+1,j} a_{i,j} ) + J_2 Y(i,i+2;j) (a_{i,j}^+ a_{i+2,j}^+ + a_{i+2,j} a_{i,j} ) \Biggr] \ , \label{eqn:vnotparallel} \end{equation} where $X(i,i';j)$ [$Y(i,i';j)$] is unity if spins $(i,j)$ and $(i',j)$ are parallel [antiparallel] and is zero otherwise. We do not consider quantum fluctuations within a plane, since the phase diagram is determined by the interplanar quantum couplings.
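The on-site energies $E_{i,j}$ of Eq.~(\ref{eqn:calh0}) can be tabulated directly for a given layer sequence. The sketch below (illustrative couplings $J_1 = 1$, $J_2 = 1/2$, $\tilde D = 10$) reproduces the energy denominators used in the diagrammatic expansion: a site adjacent to a wall costs $2\tilde D$, a site one layer further in costs $2\tilde D + 2J_1$, and a bulk site costs $2\tilde D + 2J_1 - 2J_2$:

```python
def onsite_energy(sigma, i, J1=1.0, J2=0.5, Dt=10.0):
    """E_{i,j} of Eq. (calh0): cost of a single excitation on layer i of
    the periodic layer sequence sigma (the in-plane index j is suppressed).
    Dt stands for D-tilde = D + 2*J0; parameter values are illustrative."""
    N = len(sigma)
    s = lambda k: sigma[k % N]
    return (2 * Dt + J1 * s(i) * (s(i - 1) + s(i + 1))
            - J2 * s(i) * (s(i - 2) + s(i + 2)))

sigma = [1] * 6 + [-1] * 6            # two wide domains, walls at 5|6 and 11|0
assert onsite_energy(sigma, 5) == 2 * 10.0                      # next to a wall
assert onsite_energy(sigma, 4) == 2 * 10.0 + 2 * 1.0            # one layer in
assert onsite_energy(sigma, 2) == 2 * 10.0 + 2 * 1.0 - 2 * 0.5  # bulk site
```

Only sites within two layers of a wall feel its presence through $E_{i,j}$, which is the mechanism exploited in the pair-interaction calculation below.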
Moreover we shall work to leading order in $1/S$ , in which case four-operator terms can be neglected [\onlinecite{NEGLECT}]. Also we will continue to use non-degenerate perturbation theory, since the perturbative term $(V_{||} + V_{\not{\parallel}})$ cannot connect states in which the wall is at different locations, since such states have different values of $\sum_iS_i^z$. The structure of the phase diagram will be constructed by considering in turn $E_w$, the energy of an isolated wall; $V_2(n)$, the interaction energy of two walls separated by $n$ sites; and generally $V_k(n_1 , n_2 , \dots n_{k-1})$, the interaction energy of $k$ walls with successive separations $n_1$, $n_2$, ... $n_{k-1}$ [\onlinecite{MEFXS}]. In terms of these quantities one may write the total energy of the system when there are $n_w$ walls at positions $m_i$ as \begin{eqnarray} E = && E_0 + n_w E_w + \sum_i V_2(m_{i+1}-m_i) + \sum_i V_3(m_{i+2}-m_{i+1},m_{i+1}-m_i)\nonumber \\ & + & \sum_i V_4(m_{i+3}-m_{i+2},m_{i+2}-m_{i+1},m_{i+1}-m_i) + \dots , \end{eqnarray} where $E_0$ is the energy with no walls present. The scheme of Ref. [\onlinecite{RBG}] for calculating the general wall potentials $V_k$ is illustrated in Fig. \ref{fig:2w}. Let all spins to the left of the first wall have $\sigma_i=\sigma$ and those to the right of the last wall have $\sigma_i=\eta$ for $k$ even and $\sigma_i=-\eta$ for $k$ odd. The energy of such a configuration is denoted $E_k(\sigma, \eta)$. If $\sigma=-1$ ($\eta=-1$) the left (right) wall is absent. Thus the energy ascribed to the existence of $k$ walls is given by[\onlinecite{RBG}] \begin{equation} V_k(n_1, n_2, \dots n_{k-1}) = \sum_{\sigma , \eta = \pm 1} \sigma \eta E_k (\sigma , \eta) \ . \label{CONN} \end{equation} Contributions to $E_k$ which are independent of $\sigma$ or $\eta$ do not influence $V_k$. 
$E_{k}(\sigma,\eta)$ is calculated by developing the energy in powers of the perturbations $V_{\not{\parallel}}$ which allows creation (and annihilation) of a pair of excitations straddling a wall and $V_{||}$ which allows the excitations to hop within domains. We consider contributions to the wall energy and to two- and three-wall interactions in turn. \subsection{Wall energy} Contributions to the wall energy to second order in perturbation theory arise from excitations which are created at a wall and then immediately destroyed as shown in Fig. \ref{fig:2pt}. These effectively count the number of walls and therefore lead to a renormalization of the wall energy of \begin{equation} E_{w}=2J_{1}-4J_{2}-\frac{J_1^2-2J_2^2}{4 \tilde{D} S}+{\cal O} \left( \frac{J^3}{\tilde{D}^2 S} \right) \label{eqn:Ewall} \end{equation} but since we work to leading order in $S^{-1}$, the $S^{-1}$ correction to $E_w$ will not affect the results for $V_k$. \subsection{Pair interactions} The lowest order contributions to $V_2(n)$ are obtained by creating an excitation at, say, the left wall using $V_{\not{\parallel}}$ and then using $V_{||}$ for it to hop to the right wall and back. Because we assume the existence of the left wall, this contribution implicitly includes a factor $\delta_{\sigma,1}$. Now we look for the lowest-order (in $J/D$) contribution which also has a dependence on $\eta$. In analogy with the unbinding problem, we might consider processes in which the excitation hops beyond the wall. Since such a term can not occur when the wall is actually present, it will carry a factor $\delta_{\eta,-1}$. For $n$ odd, we illustrate this process in Fig. \ref{fig:2w5}, and see that it gives a contribution to $V_2(n)$ of order $J_2^{n+1}/D^n$. As we shall see, there is actually a slightly different process which comes in at one order lower in $J/D$. To sense the presence of the right-hand wall, note that $E_{i,j}$ in Eq. 
(\ref{eqn:calh0}) will depend on $\eta$ if the $i$ is within two sites of the wall. Therefore it is only necessary to hop to within two sites of the right wall, as shown in Fig. \ref{fig:z}, for an energy denominator $({\cal H}_0 -E_0)$ in the series expansion (\ref{pert}) to depend on $\eta$. This process is of lower order in $J/D$ because it takes two interactions to hop back and forth but only one to sense the potential via an energy denominator. Accordingly, in contrast to the interface unbinding considered in Sec. 2, it is necessary to retain the terms in the $J$'s in the energy denominators to obtain the leading order contribution to $V_2(n)$. We consider separately $n$ odd and $n$ even. \\ \underline{$n$ odd}\\ To lowest order the processes which contribute are those shown in Fig. \ref{fig:2wnoe}a. For a domain of $n$ spins with $\sigma_i=-1$, $(n-1)^{\rm th}$ order perturbation theory gives \begin{eqnarray} \label{odd} E_2(\sigma,\eta) = && 2\delta_{\sigma,1} J_2^{n-1} S^{-1}(-1)^{n-2}\{4 \tilde{D}+2J_1\}^{-2}\{4 \tilde{D} +2J_1-2J_2\}^{-(n-5)} \nonumber \\ && \times \{ 4 \tilde D +2J_1 - J_2 (1-\eta) \}^{-1} \ . \end{eqnarray} In writing this result we dropped all lower-order terms because they do not depend on both $\sigma$ and $\eta$. Here and below, the dependence on $\sigma$ is contained in the factor $\delta_{\sigma,1}$ because we assume the existence of the left-hand wall. The energy denominators are constructed as follows. The left-hand excitation has energy $2 \tilde D$ since it is next to a wall. The right-hand excitation has the energy according to its position as illustrated in Fig. \ref{fig:z}. The prefactor of 2 arises because the initial excitation can be near either wall and the overall factor $(-1)^{n-2}$ arises from the $(-1)$ associated with each energy denominator. 
Adding the contributions from~(\ref{odd}) appropriately weighted as in~(\ref{CONN}) gives \begin{eqnarray} V_{2}(n)&=& {2 J_2^{n-1} S^{-1} (-1)^{n-2} \over \{ 4 \tilde{D} + 2 J_1\}^2 \{ 4 \tilde{D} + 2 J_1 - 2 J_2 \}^{n-5}} \biggl \{ {1 \over 4 \tilde{D} + 2 J_1} - {1 \over 4 \tilde{D} +2 J_1 -2 J_2} \biggr\} \label{oddd} \\ & & \nonumber \\ & & \hskip 5.0cm =4J_2^{n} S^{-1}/ (4\tilde{D})^{n-1} + { \cal O}(1/ \tilde{D}^n),\;\;\;\;\;\;\;\mbox{$n$ odd.} \label{V2odd} \end{eqnarray} \noindent Note that there is no term ${\cal O}(1/\tilde{D}^{n-2})$. This is because to this order the energy denominators are independent of the $J$'s. Hence to this order $E_k(\sigma,\eta)$ is independent of $\eta$ and the sum in Eq.~(\ref{CONN}) is zero. Similarly terms from $n^{th}$ order perturbation theory (in which one $J_2$ hop is replaced by two $J_1$ hops) do not contribute ${\cal O}(1/\tilde{D}^{n-1})$. \noindent \underline{$n$ even}\\ For even $n$ several diagrams contribute to leading order, i.~e., at $n$th order perturbation theory. These are shown in Fig. \ref{fig:2wnoe}b. As an example we give the contributions to the energy from the diagram (b)(iii). Again we drop all terms which do not depend on both $\sigma$ and $\eta$. Thus \begin{eqnarray} E_2^{({\rm iii})}(\sigma , \eta) = && 2 (-1)^{n-1} \delta_{\sigma ,1} \left( {n-2 \over 2 } \right) J_1^2 J_2^{n-2} S^{-1} (4 \tilde D)^{-1} (4 \tilde D + 2 J_1)^{-1} \nonumber \\ && (4 \tilde D + 2J_1 - 2J_2)^{-(n-4)} [ 4 \tilde D + 2J_1 - J_2 (1- \eta) ]^{-1} \ , \label{even} \end{eqnarray} where the superscript iii indicates a contribution from diagram iii of Fig. \ref{fig:2wnoe}, the prefactor 2 comes from including the contribution of the mirror image diagram, the prefactor $(-1)^{n-1}$ is the sign of $n$th order perturbation theory, the factor $(n-2)/2$ is the number of places the single ($J_1$) hop can be put, and $\delta_{\sigma,1}$ indicates that this contribution assumes the existence of the left-hand wall. 
To leading order in $\tilde D$, the $\eta$-dependence is contained in \begin{eqnarray} E_2^{({\rm iii})} (\sigma , \eta) & = & (-1)^{n-1}(n-2) \delta_{\sigma,1} J_1^2 J_2^{n-2}S^{-1} (4 \tilde D)^{-(n-2)} (4 \tilde D + \eta de_2 / d \eta )^{-1} \nonumber \\ & \approx & (-1)^n(n-2) \eta \delta_{\sigma,1} J_1^2 J_2^{n-2}S^{-1} (4 \tilde D)^{-n} ( d e_2 /d \eta) \ . \end{eqnarray} Using $de_2/d \eta = J_2$, we get \begin{equation} V_2^{({\rm iii})} = 2 (n-2)J_1^2 J_2^{n-1}S^{-1} (4 \tilde D)^{-n} \ . \end{equation} We treat the other diagrams of Fig. \ref{fig:2wnoe} similarly. Dropping terms which do not depend on both $\sigma$ and $\eta$ and working to lowest order in $(\tilde D)^{-1}$, we get \begin{eqnarray} E_2 ( \sigma , \eta ) &=& \eta \delta_{\sigma ,1} J_2^{n-2} S^{-1} (4 \tilde D)^{-n} \Biggl[ 2 J_1^2 (de_2/d \eta) + {1 \over 2} J_1^2 (n-2)^2 (de_2/d \eta) \nonumber \\ && \ \ + (n-2) J_1^2 (de_2/d \eta) + (n-2) J_1^2 (de_2/d \eta) + 2J_2^2 (de_1/d \eta) + 2J_2^2 (de_2/d \eta) \Biggr] \ , \end{eqnarray} where the contributions are from each diagram of Fig. \ref{fig:2wnoe}, written in the order in which they appear in the figure. Thus for $n$ even we have \begin{eqnarray} V_2(n) & = & S^{-1} (4 \tilde D)^{-n} J_2^{n-2} \Biggl[ 4J_1^2 J_2 + J_1^2J_2 (n-2)^2 + 4 (n-2)J_1^2J_2 + 4 J_2^2(J_2-J_1) + 4J_2^3 \Biggr] \nonumber \\ &=&\frac{J_2^{n-1}}{(4 \tilde{D})^n S}(n^2 J_1^2 -4J_1J_2+8J_2^2), \;\;\;\;\;\; \mbox{$n$ even} , \label{V2even} \end{eqnarray} where we used $de_2/d\eta=J_2$ and $de_1/d\eta=J_2-J_1$. Fisher and Szpilka [\onlinecite{MEFXS}] have shown that the phase sequences can be determined graphically by constructing the lower convex envelope of $V_2(n)$ versus $n$. The points $[n,V_2(n)]$ which lie on the envelope correspond to stable phases. The pair interactions defined by the expressions (\ref{V2odd}) and (\ref{V2even}) already correspond to a convex function for $n \ll (\tilde{D}/J)^{1/2}$.
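Both the $1/\tilde D$ expansion leading to Eq.~(\ref{V2odd}) and the convexity of the resulting pair potential can be checked numerically. The sketch below (illustrative values $J_1 = 2J_2 = 1$ and arbitrary $\tilde D$) compares the unexpanded odd-$n$ expression of Eq.~(\ref{oddd}) with its leading-order form, and then verifies that the second difference of $V_2(n)$ is positive for small $n$:

```python
def V2_odd_full(n, J1, J2, Dt, S=1.0):
    """Eq. (oddd): the odd-n pair potential before expanding in 1/Dt."""
    A = 4 * Dt + 2 * J1
    B = 4 * Dt + 2 * J1 - 2 * J2
    return (2 * J2**(n - 1) * (-1)**(n - 2) / (S * A**2 * B**(n - 5))
            * (1.0 / A - 1.0 / B))

def V2(n, J1=1.0, J2=0.5, Dt=50.0, S=1.0):
    """Leading-order pair potential, Eqs. (V2odd) and (V2even)."""
    if n % 2:
        return 4 * J2**n / (S * (4 * Dt)**(n - 1))
    return (J2**(n - 1) * (n**2 * J1**2 - 4 * J1 * J2 + 8 * J2**2)
            / ((4 * Dt)**n * S))

# (i) the full odd-n expression approaches the leading-order one as Dt grows
for Dt in (1e2, 1e4):
    assert abs(V2_odd_full(7, 1.0, 0.5, Dt) / V2(7, Dt=Dt) - 1) < 10 / Dt

# (ii) convexity: every point [n, V2(n)] with n << sqrt(Dt/J) lies on the
# lower convex envelope, so each small-n phase <n> appears in the sequence
assert all(V2(n - 1) - 2 * V2(n) + V2(n + 1) > 0 for n in range(3, 9))
```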
Hence, in this regime, we expect within the two-wall approximation a sequence of phases $\langle 2 \rangle, \langle 3 \rangle, \langle 4 \rangle, \ldots$ as shown schematically in Fig. \ref{fig:pda}. The widths of the phases $\langle n \rangle$ can be estimated using the fact that each phase is stable over an interval $\Delta E_w = n [V_2(n-1) - 2 V_2(n) + V_2(n+1) ]$ [\onlinecite{MEFXS}]. Therefore, using (\ref{eqn:Ewall}) we can say that the width $\Delta(J_2/J_1)$ occupied by the phase $\langle n \rangle$ in Fig. \ref{fig:pda} is ${\cal O}((J_2/D)^{n-1})$ for $n$ odd and ${\cal O}((J_2/D)^{n-2})$ for $n$ even. This sequence of layering through unit steps $\langle n \rangle \to \langle n+1 \rangle$ will not be obeyed for large $n$, i.e., for $n \sim (\tilde{D}/J)^{1/2}$, because then $V_2(n)$ will suffer from strong even-odd oscillations. Moreover, for large $n$, the entropy of more complicated perturbations may dominate the physics. A discussion of this is given in Sec. 5. Here we go on to consider the effect of three-wall interactions which can split the phase boundaries $\langle n \rangle : \langle n+1 \rangle$ where there is still a multiphase degeneracy of all states comprising domains of length $n$ and $n+1$. \section{Three-wall interactions} Three-wall interactions are needed to analyze the stability of the $\langle n \rangle : \langle n+1 \rangle$ phase boundary to mixed phases of $\langle n \rangle$ and $\langle n+1 \rangle$. The condition that the boundary be stable is [\onlinecite{MEFXS}] \begin{equation} \label{FEQ} F(n,n+1) \equiv V_3(n,n)-2V_3(n,n+1)+V_3(n+1,n+1)<0. \label{eqn:F} \end{equation} \noindent Consider first the calculation of $F(2n-1,2n)$. The diagrams which contribute in leading order to $V_3(2n-1,2n-1)$ and $V_3(2n,2n-1)$ are shown in Figs. \ref{fig:f1}a and \ref{fig:f1}b, respectively. To leading order in $1 / \tilde{D}$, $V_3(n+1,n+1)$ does not contribute to $F(n,n+1)$.
Figure \ref{fig:f1} aims to emphasize the positions of the initial excitation and the closest approaches to the neighboring domain walls. One must also consider the position of the first neighbor hops in B and C and the sequence of the hops when calculating the contribution of the diagrams. An explicit calculation of the contributions of the relevant diagrams would be extremely tedious. However, what concerns us here is the sign of $F(2n-1,2n)$. If $N_i$ is the contribution to $F$ of diagrams of type $i$ in Fig. \ref{fig:f1}, \begin{equation} F(2n-1,2n)=2 N_{\rm A} + 2 N_{\rm B} + 2 N_{\rm C} - 2 N_{\rm D}, \end{equation} where the factors of $2$ multiplying $N_{\rm A}$, $N_{\rm B}$ and $N_{\rm C}$ account for the mirror image diagrams and that multiplying $N_{\rm D}$ occurs because of the 2 in Eq. (\ref{FEQ}). We shall now show that $F(2n-1,2n)<0$. Consider a diagram in which the hops occur in the same order in A, B, C and D and the $J_1$ hops in B and C are, say, nearest the outer walls. The matrix elements $m_i$ of all types of diagram carry a negative common factor (the sign arising because we are considering even-order perturbation theory) and their ratios are $m_{\rm A}/m_{\rm D}=1$ and $m_{\rm A}/m_{\rm B}=m_{\rm A}/m_{\rm C}=J_2^2/J_1^2$. We must also expand the difference in the energy denominators in a way analogous to the step between equations (\ref{oddd}) and (\ref{V2odd}), but here to second order in $J/\tilde{D}$.
Using (\ref{CONN}), the contribution of each diagram to the appropriate $N_i$ may be written \begin{eqnarray} & & \sum_{\sigma,\eta} \sigma \eta \biggl[{m_i\over (4\tilde{D})^{4n-5}S}\left(f_1+{f_2+f_3\sigma +f_4 \eta \over (4\tilde{D})} +{f_5+f_6\sigma+f_7\eta+f_8\sigma^2+f_9\eta^2+f_{10}\sigma\eta \over (4\tilde{D})^2}+\ldots \right) \biggr] \nonumber \\ & & \nonumber \\ & & \hskip 7.0cm = {4 m_i f_{10} \over (4\tilde{D})^{4n-3}S} + {\cal O}({1 \over (4\tilde{D})^{4n-2}}) \label{eqn:fs} \end{eqnarray} \noindent where the coefficients $f$ depend only on $J_1$ and $J_2$. When the sum is taken only the term $f_{10}$ multiplying $\sigma \eta$ survives. For diagrams of type A, $f_{10}$ is $J_2(J_2-J_1)$, while for B, C and D it is $J_2^2$. Therefore these diagrams give a contribution to $F$ proportional to \begin{equation} -J_2^2J_1(2 J_1-J_2) <0 \ . \end{equation} \noindent The contributions to $F$ of the other diagrams in B and C (which correspond to a different position of the first neighbor hop) is proportional to $-J_1^2 J_2^2$. Hence $F(2n-1,2n)<0$ and the $\langle 2n-1 \rangle: \langle 2n \rangle$ boundaries are stable. A similar argument holds for $F(2n,2n+1)$ for $n>1$. The relevant diagrams are shown in Fig. \ref{fig:f2}. They contribute \begin{equation} F(2n,2n+1)=2 N_{\rm A} + 2 N_{\rm B} + 2 N_{\rm C} + 2 N_{\rm D} +2 N_{\rm E} -2 N_{\rm F}. \end{equation} Using the same argument as above \begin{equation} N_{\rm A} + N_{\rm B} - N_{\rm F} \propto -J_2^2J_1(J_1-J_2)<0. \end{equation} $N_{\rm C}$, $N_{\rm D}$, $N_{\rm E}$, and the other orderings of $N_{\rm B}$ are negative and hence $F(2n,2n+1)<0$. Thus the phase boundaries $\langle 2n \rangle : \langle 2n+1 \rangle$ are first order for $n > 1$. For the $\langle 2 \rangle : \langle 3 \rangle $ boundary different diagrams contribute to $F(2,3)$. Indeed the second order expansion of the energy denominators [as in Eq. (\ref{eqn:fs})] gives a zero contribution. 
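The bookkeeping of Eq.~(\ref{eqn:fs}) can be mimicked with arbitrary coefficients $f_1, \dots, f_{10}$: applying the weighted sum of Eq.~(\ref{CONN}) to the double expansion in $\sigma$, $\eta$, and $1/(4\tilde D)$ annihilates every term except the $\sigma\eta$ cross term $f_{10}$. A short sketch (the coefficient values are arbitrary):

```python
from itertools import product

f = {k: 0.1 * (k + 1) for k in range(1, 11)}  # arbitrary coefficients f_1..f_10

def series(s, e, Dt4):
    """The bracketed expansion of Eq. (fs); Dt4 stands for 4*D-tilde."""
    return (f[1] + (f[2] + f[3] * s + f[4] * e) / Dt4
            + (f[5] + f[6] * s + f[7] * e + f[8] * s**2 + f[9] * e**2
               + f[10] * s * e) / Dt4**2)

Dt4 = 40.0
total = sum(s * e * series(s, e, Dt4) for s, e in product((1, -1), repeat=2))
# Only 4*f_10/(4*D-tilde)^2 survives the sigma-eta sum.
assert abs(total - 4 * f[10] / Dt4**2) < 1e-12
```

The sign of $F$ is therefore governed entirely by the $f_{10}$ coefficients of the individual diagrams.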
Accordingly, the calculation of $F(2,3)$ requires going to higher order in $(J_2/\tilde D)$. This calculation is carried out in detail in Appendix A and shows that the $\langle 2 \rangle : \langle 3 \rangle$ boundary is also stable. \section{Large $ \lowercase{n}$ analysis} For small $n$, we have seen that the leading contribution to $V_2(n)$ is of order $D(J_2/D)^x/S$, where the value of $x$ corresponds to the minimum number of steps needed to go from near one wall to near the other one and back: $x = 2 [n/2]+1$, where $[x]$ is the integer part of $x$. As $n$ increases, the contributions from longer paths, although individually less important, can become dominant because of their greater entropy. To allow for this possibility we now carry out perturbation theory in terms of the exact eigenstates for one excitation in each block of parallel spins. In this formulation, the unperturbed Hamiltonian is the sum of the Hamiltonians of each domain of parallel spins when all interactions with neighboring domains are removed. Thus from equations (\ref{eqn:calh0}) and (\ref{eqn:vparallel}) the unperturbed Hamiltonian for a block of parallel spins from sites $I$ to $J$ inclusive can be written \begin{equation} {\cal H}_0^{(I,J)} = {\sum_{i,j}}^\prime J_{ij} S^{-1} ( a_i^+ - a_j^+)(a_i - a_j) + \sum_i 2 \tilde DS^{-1} a_i^+ a_i \ , \end{equation} where $J_{i,j}= J_1 \delta_{j,i+1} -J_2 \delta_{j,i+2}$ and the prime on the summation indicates that the sum is restricted so that both indices are actually in the block. The matrix representation of ${\cal H}_0^{(I,J)}$ is given explicitly in Appendix B. 
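For a finite block the matrix of ${\cal H}_0^{(I,J)}$ is easily built and diagonalized numerically. The sketch below (illustrative couplings, not tied to the appendix calculation) expands the quadratic form and confirms that the excitation energies cluster around $2\tilde D$ while the eigenvectors form a complete orthonormal set, the property exploited later in the nonpropagation approximation:

```python
import numpy as np

def block_matrix(n, J1=1.0, J2=0.5, Dt=10.0):
    """Single-particle matrix of H_0^{(1,n)}: the quadratic form
    sum' J_ij (a_i^+ - a_j^+)(a_i - a_j) + 2*Dt sum a_i^+ a_i with
    J_ij = J1*delta_{j,i+1} - J2*delta_{j,i+2} (the overall 1/S dropped).
    Parameter values are illustrative."""
    M = 2 * Dt * np.eye(n)
    for i in range(n):
        for j, Jij in ((i + 1, J1), (i + 2, -J2)):
            if j < n:
                M[i, i] += Jij
                M[j, j] += Jij
                M[i, j] -= Jij
                M[j, i] -= Jij
    return M

eps, phi = np.linalg.eigh(block_matrix(8))
# The excitation energies epsilon_alpha sit near 2*Dt ...
assert np.all(np.abs(eps - 2 * 10.0) < 5.0)
# ... and the eigenvectors phi_alpha(i) satisfy the completeness relation
assert np.allclose(phi @ phi.T, np.eye(8))
```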
It follows from (\ref{eqn:h0}) and (\ref{eqn:vnotparallel}) that the perturbation $V^{(s)}$, associated with a wall between sites $s$ and $s+1$ is \begin{equation} \label{VWALL} V^{(s)} = W^{(s)} + X^{(s)}+Y^{(s)} \end{equation} where \begin{eqnarray} W^{(s)} & = & J_2 S^{-1} \Bigl( a_{s-1}^+ a_{s-1} + a_s^+ a_s + a_{s+1}^+ a_{s+1} + a_{s+2}^+ a_{s+2} \Bigr) - J_1S^{-1} \Bigl( a_s^+ a_s + a_{s+1}^+ a_{s+1} \Bigr) \nonumber \\ &\equiv& S^{-1} \sum_k W_k^{(s)} a_k^+ a_k \ , \label{eqn:ws} \end{eqnarray} \begin{equation} X^{(s)} = J_2S^{-1} \Bigl( a_{s+1}^+ a_{s-1}^+ + a_{s+2}^+ a_s^+ \Bigr) - J_1 S^{-1} \Bigl( a_{s+1}^+ a_s^+ \Bigr) \equiv \sum_{i<j} S^{-1} X_{ij}^{(s)} a_i^+ a_j^+ \ , \end{equation} and $Y^{(s)}=\Bigl( X^{(s)} \Bigr)^+$. In this formulation it is not natural to calculate $V_2(n)$ directly. Instead one calculates the total energy of given configurations from which $V_2(n)$ is easily deduced. We start by calculating the total energy of a configuration with a single wall between sites $0$ and $1$. This gives the wall energy as \begin{equation} \label{WALL1} E_1^{(2)} = \langle 0 | Y^{(0)} {Q_0 \over {\cal E}} X^{(0)} | 0 \rangle = S^{-2} \sum_{ijkl} X^{(0)}_{kl} X^{(0)}_{ij} \langle 0| a_k a_l {Q_0 \over {\cal E}} a_i^+ a_j^+ | 0 \rangle \label{eqn:e12} \end{equation} where ${\cal E} = E_0 - {\cal H}_0$, with $E_0$ the ground state energy, defined to be zero in this context. Here we have introduced the notation that the subscript on $E$ specifies the number of walls, the superscript the order in perturbation theory, and the arguments (if any) the separations between walls. To evaluate (\ref{eqn:e12}) and similar expressions we now introduce the exact eigenstates for a single excitation on either side of the wall when interactions across the wall are ignored. 
For a block of parallel spins occupying sites $I$ through $J$, inclusive, these single-particle eigenstates satisfy \begin{equation} \label{EIGEQ} \sum_j \Bigl( {\cal H}_0^{(I,J)} \Bigr) _{ij} \phi_\alpha^{(I,J)}(j) = S^{-1} \epsilon_\alpha^{(I,J)} \phi_\alpha^{(I,J)} (i) \ . \end{equation} Later we write $\epsilon^{(I,J)} \rightarrow \epsilon^{(J-I+1)}$. To evaluate Eq. (\ref{WALL1}) in terms of the exact eigenstates notice that $a_i^+ a_j^+$ connects the ground state to a state in which the semi-infinite chain to the right of the wall is in an excited state which we label $\beta$ and the semi-infinite chain to the left of the wall is in an excited state $\alpha$. Thus we have \begin{equation} E_1^{(2)} = - S^{-1} \sum_{ijkl} \sum_{\alpha \beta} X_{ij}^{(0)} X_{kl}^{(0)} { \phi_\alpha^{(-\infty,0)} (i) \phi_\alpha^{(-\infty,0)} (k) \phi_\beta^{(1,\infty)}(j) \phi_\beta^{(1,\infty)}(l) \over \epsilon_\alpha^{(\infty)} + \epsilon_\beta^{(\infty)} } \ . \end{equation} This process is illustrated in Fig. \ref{fig:e12}. We now construct the energy of a system with only two walls, one between sites $0$ and $1$, the other between sites $n$ and $n+1$. The contribution to the total energy of this configuration from second-order perturbation theory, denoted $E_2^{(2)}(n)$ comes from an expression similar to Eq. (\ref{WALL1}) but which here involves one semi-infinite chain and one block of length $n$, \begin{equation} E_2^{(2)} (n) = - 2 S^{-1} \sum_{ijkl} \sum_{\alpha \beta} X_{ij}^{(0)} X_{kl}^{(0)} { \phi_\alpha^{(-\infty,0)} (i) \phi_\alpha^{(-\infty,0)} (k) \phi_\beta^{(1,n)}(j) \phi_\beta^{(1,n)}(l) \over \epsilon_\alpha^{(\infty)} + \epsilon_\beta^{(n)} } \ . \end{equation} Here and below we include a factor of 2 because the process could be initiated at either of the two walls. Note that as $n \rightarrow \infty$, $E_2^{(2)} \rightarrow 2 E_1^{(2)}$, as expected. 
We will also need the contribution to the energy of this configuration from third-order perturbation theory. The only process at this order is shown in Fig. \ref{fig:2w3pt} and it gives a contribution \begin{equation} \label{PERT} E_2^{(3)}(n) = 2 \langle 0 | Y^{(0)} { Q_0 \over {\cal E}} W^{(n)} { Q_0 \over {\cal E}} X^{(0)} | 0 \rangle = 2 S^{-3} \sum_{ijklm} X_{lm}^{(0)} W_k^{(n)} X_{ij}^{(0)} \langle 0 | a_l a_m { Q_0 \over {\cal E}} a_k^+ a_k { Q_0 \over {\cal E}} a_i^+ a_j^+ | 0 \rangle \ . \end{equation} In Eq. (\ref{PERT}) we see that $X^{(0)}$ creates one excited eigenstate in the semi-infinite chain to the left of the walls and also an excited eigenstate in the down-spin block of length $n$. That type of reasoning allows us to rewrite Eq. (\ref{PERT}) as \begin{equation} E_2^{(3)} (n) = 2 S^{-1} \sum_{ijklm} \sum_{\alpha \beta \gamma} X_{ij}^{(0)} W_k^{(n)} X_{lm}^{(0)} { \phi_\alpha^{(-\infty,0)} (i) \phi_\beta^{(1,n)}(j) \phi_\beta^{(1,n)} (k) \phi_\gamma^{(1,n)}(k) \phi_\alpha^{(-\infty,0)} (l) \phi_\gamma^{(1,n)}(m) \over \Bigl( \epsilon_\alpha^{(\infty)} + \epsilon_\beta^{(n)} \Bigr) \Bigl( \epsilon_\alpha^{(\infty)} + \epsilon_\gamma^{(n)} \Bigr) } \ . \end{equation} Then, up to third-order perturbation contributions, the wall potential $V_2(n)$ we wish to obtain is given by \begin{equation} V_2 (n) = E_2^{(2)}(n) - 2 E_1^{(2)} + E_2^{(3)}(n) \ . \label{eqn:v_2_n} \end{equation} To interpret these expressions it is convenient to express them in terms of the Green's function, defined by \begin{equation} G_{ij}^{(1,n)}(E) = \sum_\alpha {\phi_\alpha^{(1,n)} (i) \phi_\alpha^{(1,n)} (j) \over E + \epsilon_\alpha^{(n)}} \ . 
\end{equation} Thus \begin{equation} E_1^{(2)} = - S^{-1} \sum_{ijkl} \sum_\alpha X_{ij}^{(0)} X_{kl}^{(0)} \phi_\alpha^{(-\infty,0)} (i) \phi_\alpha^{(-\infty,0)} (k) G_{jl}^{(1,\infty)} (\epsilon_\alpha^{(\infty)}), \label{eqn:exx1} \end{equation} \begin{equation} E_2^{(2)} (n) = - 2 S^{-1} \sum_{ijkl} \sum_\alpha X_{ij}^{(0)} X_{kl}^{(0)} \phi_\alpha^{(-\infty,0)} (i) \phi_\alpha^{(-\infty,0)} (k) G_{jl}^{(1,n)} (\epsilon_\alpha^{(\infty)}), \end{equation} and \begin{equation} E_2^{(3)} (n) = 2 S^{-3} \sum_{ijklm} \sum_\alpha X_{ij}^{(0)} W_k^{(n)} X_{lm}^{(0)} \phi_\alpha^{(-\infty,0)} (i) \phi_\alpha^{(-\infty,0)} (l) G_{jk}^{(1,n)} ( \epsilon_\alpha^{(\infty)}) G_{km}^{(1,n)} ( \epsilon_\alpha^{(\infty)}) \ . \label{eqn:exx2} \end{equation} To obtain $V_2(n)$ we will have to determine $\delta G \equiv G^{(1,n)}-G^{(1,\infty)}$. To evaluate this quantity we need to identify the perturbation which, when added to the unperturbed Hamiltonian describing two independent blocks of spins, $(1,n)$ and $(n +1 , \infty)$, gives the unperturbed Hamiltonian ${\cal H}_0^{(1,\infty)}$. This perturbation $V$ can be written as \begin{equation} \label{VV} V = -W^{(n)}+ Z^{(n)} \ , \end{equation} where $Z^{(n)}$ describes hopping across the wall which is needed to make the semi-infinite chain from a finite block of parallel spins. We now use some results of standard perturbation theory for a Green's function, as given, for instance, in Ref. \onlinecite{RMP}. For this expansion we work to lowest order in the wall perturbation, $V$ of Eq. (\ref{VV}). We choose ${\cal H}_0$ to be the Hamiltonian for a block of $n$ parallel spins and treat $V$ perturbatively. In first-order perturbation theory for $V$, it is not necessary to keep $Z^{(n)}$ (and consequently ${\cal H}_0^{(n+1,\infty)}$) because it moves an excitation to the right of the right wall which cannot be hopped back to the $(1,n)$ block without going to higher-order perturbation theory. 
So correct to first order in perturbation theory we have \begin{eqnarray} G^{(1,\infty)}_{ij} & = & \Biggl[ \Bigl( E + S {\cal H}_0^{(1,n)} -S W^{(n)} \Bigr)^{-1} \Biggr]_{ij} \nonumber \\ &=& \Biggl[ \Bigl( E + S {\cal H}_0^{(1,n)} \Bigr)^{-1} \Biggr]_{ij} + \sum_k \Biggl[ \Bigl( E + S {\cal H}_0^{(1,n)} \Bigr)^{-1} \Biggr]_{ik} W^{(n)}_k \Biggl[ \Bigl( E + S {\cal H}_0^{(1,n)} \Bigr)^{-1} \Biggr]_{kj} \nonumber \\ &=& G^{(1,n)}_{ij}+ \sum_k G^{(1,n)}_{ik}W^{(n)}_k G^{(1,n)}_{kj} \ , \label{eqn:gr} \end{eqnarray} \noindent where $W_k^{(n)}$ is defined in Eq. (\ref{eqn:ws}). Thus using Eqs. (\ref{eqn:v_2_n}), (\ref{eqn:exx1})-(\ref{eqn:exx2}) and (\ref{eqn:gr}) we have the result \begin{equation} \label{V2NEQ} V_2(n) = 4S^{-1} \sum_{ijklm} \sum_\alpha X_{ij}^{(0)} W_k^{(n)} X_{lm}^{(0)} \phi_\alpha^{(-\infty,0)}(i) \phi_\alpha^{(-\infty,0)}(l) G_{jk}^{(1,n)} ( \epsilon_\alpha^{(\infty)}) G_{km}^{(1,n)} ( \epsilon_\alpha^{(\infty)}) \ . \label{eqn:v2xwx} \end{equation} Evaluating this when $J_1=2J_2$ we obtain [writing $G$ for $G^{(1,n)}(\epsilon_\alpha^\infty )$ and $\phi$ for $\phi^{(-\infty,0)}$] \begin{eqnarray} V_2(n)&=&{4J_2^3 \over S} \sum_\alpha \Biggl\{ \phi_\alpha (0)^2 \Biggl[ \Biggl( G_{2,n-1} - 2G_{1,n-1}\Biggr)^2 -\Biggl( G_{2,n} - 2G_{1,n}\Biggr)^2 \Biggr] \nonumber \\ && \ + 2\phi_\alpha(-1)\phi_\alpha(0) \Biggl[ \Biggl( G_{2,n-1}-2G_{1,n-1} \Biggr) G_{1,n-1} - \Biggl( G_{2,n} - 2G_{1,n}\Biggr) G_{1,n} \Biggr] \nonumber \\ && \ + \phi_\alpha(-1)^2 \Biggl( G_{1,n-1}^2 - G_{1,n}^2 \Biggr) \Biggr\} \\ &=& \label{V2GEN} {4J_2^3 \over S} \sum_\alpha \Biggl\{ \Biggl[ \phi_\alpha(0) \Biggl( G_{2,n-1} -2 G_{1,n-1} \Biggr) + \phi_\alpha(-1) G_{1,n-1} \Biggr]^2 \nonumber \\ && \ - \Biggl[ \phi_\alpha(0) \Biggl( G_{2,n} -2 G_{1,n} \Biggr) + \phi_\alpha(-1) G_{1,n} \Biggr]^2 \Biggr\} \ . \label{eqn:Vngreen} \end{eqnarray} We will evaluate this with successively more accurate approximations for large $n$.
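The first-order resolvent expansion of Eq.~(\ref{eqn:gr}) is a standard Dyson-series truncation, and its accuracy is easy to check in a toy setting. In the sketch below all matrices are small arbitrary stand-ins (not the actual block Hamiltonian); the residual of the first-order formula is smaller than the first-order correction itself by a factor of order $W/E$:

```python
import numpy as np

n, E = 6, 20.0
# Arbitrary symmetric stand-in for S*H_0^{(1,n)} ...
H = np.diag(np.full(n, 1.0)) + 0.3 * (np.diag(np.ones(n - 1), 1)
                                      + np.diag(np.ones(n - 1), -1))
# ... and a weak diagonal wall perturbation S*W^(n) localized near the wall
W = np.zeros((n, n)); W[-1, -1] = 0.05

G_n = np.linalg.inv(E * np.eye(n) + H)        # G^{(1,n)}
G_inf = np.linalg.inv(E * np.eye(n) + H - W)  # resolvent including the wall term
G_dyson = G_n + G_n @ W @ G_n                 # first-order formula, Eq. (gr)

# the residual is second order in W
assert np.max(np.abs(G_inf - G_dyson)) < 0.05 * np.max(np.abs(G_inf - G_n))
```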
For small $n$ it is certainly correct to replace $\epsilon_\alpha^{(\infty)}$ by $2D'\equiv 2\tilde D + 2J_1-2J_2= 2 \tilde D + 2 J_2$, since corrections will be proportional to $J/D'$ with a bounded coefficient. For the moment we continue to use this approximation even for large $n$. With this approximation, the sum over $\alpha$ in Eq. (\ref{eqn:v2xwx}) yields \begin{equation} \label{COMPLETE} \sum_\alpha \phi_\alpha^{(-\infty,0)}(i) \phi_\alpha^{(-\infty,0)}(l) = \delta_{i,l} \ . \end{equation} We refer to this as the nonpropagation approximation, since it amounts to setting the off-diagonal elements of ${\cal H}_0^{(-\infty,0)}$ to zero, forcing $i$ and $l$ to coincide. Within this approximation and writing $G$ for $G^{(1,n)}(2D')$, we find that \begin{equation} \label{V2EQ} V_2(n) = V_A + V_B \end{equation} where \begin{equation} \label{VAEQ} V_A = {4J_2^3 \over S} \biggl( G_{2,n-1} - 2G_{1,n-1} \biggr)^2 \ , \end{equation} \begin{equation} \label{VBEQ} V_B = {4J_2^3 \over S} \biggl( 4 G_{1,n} G_{1,n-1} - 5 G_{1,n}^2 \biggr) \ . \end{equation} In Appendix B we give an essentially exact evaluation of the required $G$'s, apart from an overall scale factor which we only obtain approximately. This evaluation leads to the result \begin{eqnarray} \label{V2RES} V_2 (n) &=& { 16 D' \over S \lambda^n } \Biggl( \sin^2 [ n \delta + 4\delta^3 ] - (3/8) (J_2/D')^3 \Biggr) \ , \ \ \ \ n \ {\rm even} \ ; \nonumber \\ &=& { 16 D' \over S \lambda^n } \Biggl( \cos^2 [ n \delta + 4\delta^3 ] - (3/8) (J_2/D')^3 \Biggr) \ , \ \ \ \ n \ {\rm odd} \ ; \end{eqnarray} where, to leading order in $J_2/(4D')$, $\lambda^{-1} = \delta^2 = J_2/(4D')$. Here $V_A$ gives rise to the term involving the square of the trigonometric function and, when $V_A=0$, \begin{equation} V_B = - { 6 D' \over S \lambda^n } \left( {J_2 \over D' } \right)^3 \ .
\end{equation} For small $n$ these expressions reduce to our previous results (\ref{V2odd}) and (\ref{V2even}) at leading order in $J_2/D'$, in which case $V_B$ is irrelevant. We now discuss the interpretation of these results. For the moment let us ignore completely the term $V_B$. When $V_2(n)$ is nonmonotonic, as we found here, an elegant graphical construction which yields the phase diagram was suggested by Szpilka and Fisher [\onlinecite{MEFXS}]. This proceeds by drawing the lower convex envelope of the points $V_2(n)$ versus $n$. Points on the convex envelope are the allowed stable phases (assuming no further bifurcation due to $V_3$). For this construction it is important to distinguish the case when $V_2(n)$ becomes negative. If this occurs, then there will be a first-order transition from $n_0$ to $n=\infty$ where $n_0$ is the value of $n$ for which $V_2(n)$ attains its most negative value. On the other hand, if $V_2(n)$ is positive for all $n$, then one has an infinite devil's staircase, with no bound on the allowed values of $n$. Accordingly, it is obviously important to ascertain whether or not $V_2(n)$ is positive definite. Eq. (\ref{V2RES}) suggests that $V_2(n)$ can become negative when $(n \delta + 4 \delta^3 )/(2\pi)$ is sufficiently close to an integer. However the approximations inherent in its derivation may alter this conclusion. \subsection{Effect of Allowing Propagation of the Left Excitation} To determine whether or not an unending devil's staircase actually exists in the phase diagram, it is necessary to assess the validity of the nonpropagation approximation. We now avoid the approximate treatment of Eq. (\ref{V2NEQ}) in which we replaced $\epsilon_\alpha^{(\infty)}$ by $2D'$. We write Eq. 
(\ref{V2NEQ}) as \begin{equation} V_2(n) = 4S^{-1} \sum_{ijklm} \sum_{\beta \gamma} X_{ij}^{(0)} W_k^{(n)} X_{lm}^{(0)} \phi_\beta^{(1,n)}(j) \phi_\beta^{(1,n)}(k) \phi_\gamma^{(1,n)}(k) \phi_\gamma^{(1,n)}(m) Y \ , \end{equation} where \begin{eqnarray} \label{YY} Y & \equiv & \sum_\alpha { \phi_\alpha^{(-\infty,0)} (i) \phi_\alpha^{(-\infty,0)} (l) \over [\epsilon_\alpha^{(\infty)} + \epsilon_\beta ^{(n)}] [\epsilon_\alpha^{(\infty)} + \epsilon_\gamma ^{(n)}]} \nonumber \\ &=& \sum_\alpha { \phi_\alpha^{(-\infty,0)} (i) \phi_\alpha^{(-\infty,0)} (l) \over [2D' + \epsilon_\beta ^{(n)}] [2D' + \epsilon_\gamma ^{(n)}]} \Biggl( 1 - {\epsilon_\alpha^{(-\infty,0)} -2D' \over 2D' + \epsilon_\beta^{(n)} } - {\epsilon_\alpha^{(-\infty,0)} -2D' \over 2D' + \epsilon_\gamma^{(n)} } \dots \Biggr) \nonumber \\ &\equiv& Y_0 + \sum_\alpha \phi_\alpha^{(-\infty,0)} (i) \phi_\alpha^{(-\infty,0)} (l) \Biggl( \epsilon_\alpha^{(-\infty,0)} - 2D' \Biggr) \Biggl[ { 1 \over 2} { \partial \over \partial D' } \Biggl( {1 \over 2 D' + \epsilon_\beta^{(n)} } {1 \over 2D' + \epsilon_\gamma^{(n)} } \Biggr) \Biggr] \nonumber \\ & \equiv & Y_0 + \delta Y \ . \end{eqnarray} Keeping only the term $Y_0$ leads to the nonpropagation approximation, and thence to Eq. (\ref{COMPLETE}) and the results of Eq. (\ref{V2RES}). We now analyze the effect of $\delta Y$. For that purpose we use the fact that the eigenfunctions satisfy Eq. (\ref{EIGEQ}). For sites $i$ near the wall (i.e., $i=0$ and $i=-1$), Eq. (\ref{EIGEQ}) yields [omitting the cumbersome superscripts $(-\infty,0)$] \begin{equation} \Bigl( \epsilon_\alpha -2 D' \Bigr) \phi_\alpha(0) = (J_2-J_1) \phi_\alpha (0) - J_1 \phi_\alpha (-1) + J_2 \phi_\alpha (-2) \end{equation} \begin{equation} \Bigl( \epsilon_\alpha -2 D' \Bigr) \phi_\alpha(-1) = J_2 \phi_\alpha (-1) - J_1 \phi_\alpha (0) -J_1 \phi_\alpha(-2) + J_2 \phi_\alpha (-3) \ . \end{equation} Using these equations and also Eq.
(\ref{COMPLETE}), we get \begin{eqnarray} \delta Y = & \Biggl[ & (J_2-J_1) \delta_{i,0} \delta_{l,0} + J_2 \delta_{i,-1} \delta_{l,-1} - J_1 \delta_{i,0} \delta_{l,-1} - J_1 \delta_{i,-1} \delta_{l,0} \Biggr] \nonumber \\ &\times & \Biggl[ { 1 \over 2} { \partial \over \partial D' } \Biggl( {1 \over 2 D' + \epsilon_\beta^{(n)} } {1 \over 2D' + \epsilon_\gamma^{(n)} } \Biggr) \Biggr] \ . \end{eqnarray} $\delta Y$ leads to contributions with $i\not= l$ shown in Fig. \ref{fig:dv}, and also with $i=l$ which are not shown but are similar to those of Fig. \ref{fig:2wnoe}. If $G$ denotes $G^{(1,n)}(2D')$, then we have, from Eq. (\ref{eqn:v2xwx}) a correction to the two-wall interaction of \begin{eqnarray} \label{DV} \delta V_2(n) & = & {2 S^2} { \partial \over \partial D'} \sum_{ijklm} X_{i,j}^{(0)} W_k^{(n)} X_{l,m}^{(0)} G_{j,k} G_{k,m} \nonumber \\ && \ \ \ \times \Biggl[ (J_2-J_1) \delta_{i,0} \delta_{l,0} + J_2 \delta_{i,-1} \delta_{l,-1} - J_1 \delta_{i,0} \delta_{l,-1} - J_1 \delta_{i,-1} \delta_{l,0} \Biggr] \nonumber \\ &=& {2 \over S} {\partial \over \partial D'} \Biggl\{ 4J_1^2J_2 \Bigl( J_2 G_{1,n-1}^2 + (J_2-J_1) G_{1,n}^2 \Bigr) \nonumber \\ && \ \ - 4J_1 J_2^2 \Bigl( J_2 G_{2,n-1}G_{1,n-1} + (J_2-J_1)G_{1,n} G_{2,n} \Bigr) + 2J_2^3 \Biggl( J_2 G_{1,n-1}^2 + (J_2-J_1) G_{1,n}^2 \Biggr) \nonumber \\ && \ \ + 2 (J_2-J_1)J_2^2 \Biggl( J_2 G_{2,n-1}^2 + (J_2-J_1)G_{2,n}^2 \Biggr) + 2 (J_2-J_1)J_1^2 \Biggl( J_2 G_{1,n-1}^2 + (J_2-J_1)G_{1,n}^2 \Biggr) \nonumber \\ && \ \ - 4(J_2-J_1)J_1J_2 \Biggl( J_2 G_{1,n-1} G_{2,n-1} + (J_2-J_1) G_{1,n} G_{2,n} \Biggr) \Biggr\} \ . \end{eqnarray} We simplify this by setting $J_1=2J_2$. Then \begin{equation} \delta V_2(n) = {2J_2^4 \over S} {\partial \over \partial D'} \Biggl( -2 G_{2,n-1}^2 + 12 G_{1,n-1}^2 - 10G_{1,n}^2 \Biggr) \ . \end{equation} The dominant contribution comes from the first term. To evaluate this expression, it is necessary to develop an expression for $G_{2,n-1}$. Using Eq. 
(\ref{GEQS}) of Appendix B, we write \begin{equation} \Delta_n (-1)^{n+1} G_{2,n-1} = C^2 d_{n-3} = J_2^{n-1} (C/J_2)^2 y^{n-3} Q_{n-3} = J_2^{n-1} y^{n+1} Q_{n-3} \ , \end{equation} where $y = \sqrt {4D'/J_2}$ and we set $C=4D'$. The calculations for $n$ odd and $n$ even are similar. Here we do them only for $n$ even, in which case \begin{equation} G_{2,n-1}^2 = { J_2^{2n-2} \over \Delta_n^2 } y^{2n+2} \sin^2 (n \delta - 2 \delta ) \ . \end{equation} To take the derivative note that $G_{2,n-1}^2 \sim D'^{-n-1} \sin^2 (n \delta - 2 \delta)$ and $\delta \sim D'^{-1/2}$. Thus we have \begin{equation} {d G_{2,n-1}^2 \over dD' } = { J_2^{2n-2} \over \Delta_n^2 } y^{2n+2} \Biggl( -{(n+1) \over D'} \sin^2 (n \delta - 2 \delta) - 2 \sin (n \delta - 2 \delta ) \cos (n \delta - 2 \delta) {(n-2) \delta \over 2 D'} \Biggr) \ . \end{equation} Then we obtain, for $n \gg 1$ and $n \delta/\pi$ an integer, \begin{equation} {d G_{2,n-1}^2 \over dD'} = - 2 {J_2^{2n-2} y^{2n+2} \over \Delta_n^2 } {n \delta^2 \over D^\prime} \ , \end{equation} so that \begin{equation} \delta V_2(n) = {8J_2^2 y^2 \over D'S} n \delta^2 {J_2^{2n} y^{2n} \over \Delta_n^2} = {8J_2^2 y^2 \over D'S} n \delta^2 \left( {y \over \lambda} \right)^{2n} \approx {8J_2^2 y n \delta \over D'S \lambda^n } \ . \end{equation} Since $n \delta \geq 1$ when $V_A=0$, this term is larger than $V_B$ by at least $(D'y/J_2) \sim (4D'/J_2)^{3/2}$. Since this term is positive, we see that allowing for the left-hand excitation to propagate leads to a correction to $V_A$ which is much more important than $V_B$. In other words, this more accurate evaluation gives $V_2(n)=V_A + \delta V_2(n)$. This result relies on the validity of the expansion in Eq. (\ref{YY}), the precise condition for which is not obvious. However, when $n$ gets sufficiently large, this expansion breaks down and the considerations in the next subsection become necessary. \subsection{Large $n$ Limit with Propagation} The expansion that we have used in Eq. 
(\ref{YY}) implicitly assumes that the Green's function has a weak dependence on energy. That is true as long as $n$ is small enough. But when $n$ becomes arbitrarily large, there must exist a regime in which the right-hand side of Eq. (\ref{V2GEN}) is dominated by the largest term in the sum over $\alpha$. If it were correct to keep only a single value of $\alpha$, then it would be possible to fix $D'$ so that the first square bracket in Eq. (\ref{V2GEN}) would vanish and $V_2(n)$ would be negative. This reasoning is not correct, however, as the analysis in Appendix C shows. Even for large $n$ the sum over $\alpha$ has a width in $\alpha$ of order $\sqrt{1/n}$, which prevents $(G_{2,n-1}-2G_{2,n})^2$ from being fixed to be precisely zero. We therefore conclude that for the one-dimensional system of walls, $V_2(n)$ does remain positive for $n \rightarrow \infty$. It seems unlikely that in a crossover between the regimes we have considered $V_2(n)$ would become negative. So we conclude that for the one-dimensional problem $V_2(n)$ remains positive and there is no cutoff in the devil's staircase for the phase diagram. \subsection{Large $n$ Limit For Three-Dimensional Systems} In the discussion up to now, we have treated the three-dimensional system as if it were a one-dimensional system in which planar walls separate up-spin segments from down-spin segments. Here we give a brief argument which suggests that these one-dimensional results continue to hold for the three-dimensional system. One way to phrase the argument is to note that when $D'$ is large compared to the $J$'s, we are far from criticality. The correlation length (of order $| \ln (J/D^\prime)|^{-1}$) is very short. Thus, entropic effects of longer paths are strongly cut off by the correlation length. Here we indicate the nature of a formal argument of this type. To analyze the three-dimensional case, we consider only the dominant term, illustrated in Fig. \ref{fig:3d}.
It gives rise to the contribution \begin{equation} \delta V_2(n) = {4J_2^3 \over S} \sum_{r_\perp, s_\perp} \sum_{\alpha, q_\perp} \phi_{\alpha, q_\perp} (0;0) \phi_{\alpha, q_\perp} (0;r_\perp+s_\perp) G_{2,n-1}(r_\perp ; \epsilon_{\alpha, q_\perp}) G_{2,n-1}(s_\perp ; \epsilon_{\alpha,q_\perp }) \ , \end{equation} where we omit the superscripts. The subscripts on $\phi$ are the quantum number, $\alpha$, associated with the coordinate perpendicular to the wall and the wavevector $q_\perp$ associated with the transverse coordinates. The arguments of $\phi$ are the coordinate perpendicular to the wall and the vector displacement in the plane of the wall. The arguments of $G$ are the displacement in the plane of the wall and the energy. Considering only the dependence of $\phi$ and $\epsilon$ on wavevector we obtain \begin{equation} \delta V_2(n) \sim \sum_{r_\perp, s_\perp , q_\perp} \exp [ i {\bf q_\perp} \cdot {(\bf r_\perp + s_\perp)} ] G_{2,n-1}(r_\perp;\epsilon_{q_\perp}) G_{2,n-1}(s_\perp; \epsilon_{q_\perp}) \ . \end{equation} In terms of Fourier transformed variables for coordinates in the plane of the wall (indicated by overbars), we have \begin{equation} \label{V23D} \delta V_2(n) \sim \sum_{q_\perp} \Bigl[ \bar G_{2,n-1}(q_\perp;\epsilon_{q_\perp})\Bigr]^2 \ . \end{equation} But this is again the type of expression analyzed in Appendix C. So we conclude that for the three-dimensional system $V_2(n)$ is also always positive. \section{Discussion} The aim in this paper has been to demonstrate how quantum fluctuations can lead to interactions between domain walls and hence stabilize long-period phases in the vicinity of a multiphase point where the intrinsic wall energy is small. We first considered a Heisenberg model with strong uniaxial spin anisotropy $D$ and an interface pinned to a surface by a bulk magnetic field $h$. 
A perturbation expansion in $D^{-1}$ was used to show that the wall-interface interaction is repulsive and hence that the interface unbinds from the surface through an infinite number of layering transitions as $h$ passes through 0. The bulk of the paper was devoted to describing the behavior of the Heisenberg model with first- and second-neighbor competing interactions and uniaxial anisotropy $D$ near the ANNNI model limit $D=\infty$. This model has a multiphase point for sufficiently large $D$ which is split by quantum fluctuations to give a sequence of long-period commensurate phases $\langle 2 \rangle$, $\langle 3 \rangle$, $\langle 4 \rangle$, ... $\langle n \rangle$ ... . The phase sequence could be established for $n$ not too large by a calculation of two-wall and three-wall interactions using perturbation theory with $D^{-1}$ as a small parameter. A discussion of correction terms important for large $n$ was given, from which we concluded that, unlike for the ANNNI model, the sequence of phases is infinite. The reason this model is different in this regard from the ANNNI model is an inherently quantum one: for one wall to indirectly interact with another an excitation has to propagate from one wall to the other AND return. Thus the interaction in the quantum case is proportional to the square of an oscillatory Green's function whereas in the ANNNI model the analogous function appears linearly. As a consequence of this oscillation, the phases come in the sequence $n \rightarrow n+1$ or $n \rightarrow n+2$, depending on the value of $n$. In the latter case, we did not explicitly investigate the stability of the phase diagram, but a cursory analysis leads us to believe that the function $F(n,n+2)$ analogous to that in Eq. (\ref{FEQ}) is negative. Similar behavior is observed in both the interface model [\onlinecite{DuxY}] and the ANNNI model [\onlinecite{MEFXS},\onlinecite{MEFWS}] for finite temperatures.
Here thermal fluctuations replace quantum fluctuations in mediating the domain wall interactions. Although long-period phases are stabilized in the ANNNI case, the qualitative form of the phase sequence is very different to that discussed in this paper. The stable phases are $\langle 2^k 3 \rangle$, $k=0,1,2,\ ...\ k_{\rm max}$ with $ k_{\rm max} \to \infty$ as $T \to 0$. Mixed phases $\langle 2^k 3 2^{k-1} 3 \rangle$ also appear [\onlinecite{MEFXS}]. A third mechanism that can split the degeneracy at both a bulk [\onlinecite{SY}] and an interface [\onlinecite{MY}] multiphase point is the softening of the spins themselves, i.e. a non-infinite spin anisotropy. This does not occur for the ANNNI model where there is a finite energy barrier for the spins to move from their positions at $D=\infty$. However, for a similar model with 6-fold anisotropy an infinite number of phases become stable near the multiphase point as $D$ is reduced from infinity [\onlinecite{SY}]. \acknowledgments JMY is supported by an EPSRC Advanced Fellowship and CM by an EPSRC Studentship and the Fondazione ``A. della Riccia'', Firenze. ABH acknowledges the receipt of an EPSRC Visiting Fellowship. Work done at Tel Aviv was also supported in part by the USIEF and the US-Israel BSF. \vskip 2.2 cm
\section{Coalescent theory in population genetics} Population genetics is the study of the evolution of the genotypes in a population of living beings, under various evolutionary pressures such as mutation, selection or genetic drift. Initiated in the first half of the twentieth century with the work of the British statistician Ronald Fisher and the American geneticist Sewall Wright, it has seen the emergence of a backward-in-time model called the \textbf{coalescent}, the first developments of which are due to the British mathematician John Kingman in the 1980s (\cite{kingman1982coalescent}). Coalescent theory consists in sampling individuals -- more precisely, loci of individuals' genomes -- in the present population, and tracing their genealogies back in time, until successive common ancestors, of two or more lineages, are obtained. The instants, backward in time, at which such common ancestors appear are called \textbf{coalescence times}, and are modeled as random variables with values in $\mathbb{N}$ or in $\mathbb{R}^+$. The mathematical objects of interest are then the joint distributions of the various coalescence times of a family tree, which make it possible to express the observable quantities in the genomes of a present population as functions of those distributions. These functions depend on genetic parameters (such as the mutation, recombination and selection rates) and demographic parameters (such as the sizes and numbers of sub-populations, and the migration rates between sub-populations). Observations of genetic sequences can then be used to infer genetic or demographic parameters using various statistical methods, and technological advances over the last two decades have made it possible to acquire huge masses of data, which can be used to refine existing models and develop new ones.
\subsection*{Wright-Fisher model and Kingman coalescent} More precisely, the Wright-Fisher model describes the evolution of a population of $2N$ individuals (the individuals can be genes or loci) under the following assumptions: in each generation, each individual independently generates a number of descendants following a Poisson distribution with the same constant parameter, the offspring completely replacing their parents, all conditioned on the population size remaining constant. The process can be described in an equivalent way backward in time: each individual of a given generation chooses its parent uniformly at random in the previous generation. An illustration of the process is given in Figure \ref{fig:WrightFisher}\footnote{Figure from volume \cite{hein2004gene}.}. \begin{figure}[!htb] \centering% \subfloat[ ]{\includegraphics[scale=0.21]{wright-fisher-a.png}}~\quad~ \subfloat[ ]{\includegraphics[scale=0.21]{wright-fisher-b.png}}~\quad~ \subfloat[ ]{\includegraphics[scale=0.206]{wright-fisher-c.png}} \caption{A realization of the Wright-Fisher process over $16$ generations with a population size of $2N=10$. Panel \textbf{(a)} presents the evolution, each row being a generation; the individuals have been rearranged in panel \textbf{(b)} in order to highlight the family tree; and for panel \textbf{(c)} three individuals have been chosen in the last generation, their respective lineages being shown in bold.
We see that the first coalescence between individuals $1$ and $2$ takes place two generations in the past, and that the last coalescence, to the most recent common ancestor of individuals 1, 2 and 3, takes place nine generations in the past.}\label{fig:WrightFisher}% \end{figure} If we now consider a pair of individuals in the last generation, and if we denote by $T_2$ the waiting time for the coalescence of the two lineages (going back in time), we have $$ \mathbb{P}(T_2>i)=\left(1-\frac{1}{2N}\right)^i, $$ and if we suppose $N$ to be large, by changing the time scale, we obtain the usual approximation of the geometric distribution by the exponential distribution $$ \mathbb{P}(T_2>\lfloor 2Nt\rfloor)=\left(1-\frac{1}{2N}\right)^{\lfloor 2Nt\rfloor}\sim \mathrm{e}^{-t}. $$ This generalizes to the first renormalized coalescence time, still denoted $T_k$, for $k$ individuals sampled in the last generation: $$ \mathbb{P}(T_k>t)\sim \mathrm{e}^{-\binom{k}{2}t}, $$ and one thus obtains the coalescence tree of Kingman's coalescent, in which the successive coalescence times are independent and exponentially distributed with parameters given by the binomial coefficients $\binom{k}{2}$. \subsection*{Demographic complications of the coalescence model} The coalescent defined in this way is only valid for a so-called panmictic population, i.e. one without geographical structure (each individual chooses its parent at random in the whole population), and of constant size. We will see how to generalize the coalescent to the case of a population of changing size, and to the case of a structured population.
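The distributional statements above can be checked with a few lines of code. The following sketch (function names are ours) computes the expected time to the most recent common ancestor of $k$ lineages under Kingman's coalescent, and draws it by Monte Carlo as a sum of independent exponential variables with rates $\binom{j}{2}$:

```python
import math
import random

def expected_tmrca(k):
    """Expected time to the most recent common ancestor of k lineages
    under Kingman's coalescent, in units of 2N generations: the sum of
    E[T_j] = 1 / binom(j, 2) for j = k, k-1, ..., 2."""
    return sum(1.0 / math.comb(j, 2) for j in range(2, k + 1))

def sample_tmrca(k, rng):
    """One Monte Carlo draw of T_MRCA: a sum of independent exponential
    coalescence times with rates binom(j, 2)."""
    return sum(rng.expovariate(math.comb(j, 2)) for j in range(k, 1, -1))

# The telescoping sum 2 * sum_j (1/(j-1) - 1/j) gives the closed form 2 * (1 - 1/k):
assert abs(expected_tmrca(10) - 2 * (1 - 1 / 10)) < 1e-12
```

Note that $\mathbb{E}[T_{\mathrm{MRCA}}]=2(1-1/k)$ is bounded by $2$ whatever the sample size: most of the depth of the tree is due to the last coalescence time $T_2$.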
\subsubsection*{Population size change} If we consider that the size of the population can vary, writing $N(i)$ for the size of the population at generation $i$ in the past, and considering the quantity \begin{equation}\label{lambda} \lambda(t)=\frac{N(\lfloor 2Nt \rfloor)}{N(0)}, \end{equation} the relative size of the population in the past with the same temporal renormalization as in the previous section, it can be shown (see e.g. \cite{Tavare2004}, section 2.4) that under reasonable conditions on the variation of $\lambda$ in the neighbourhood of infinity, the coalescence time $T_k$ of $k$ individuals sampled in the present satisfies \begin{equation}\label{PTkch} \mathbb{P}(T_k>t)=\exp\left( -\binom{k}{2}\int_{0}^{t} \frac{\mathrm{d}\tau}{\lambda(\tau)} \right). \end{equation} But unlike in the panmictic case, the successive $T_k$ are no longer independent, which makes the global study of the tree more difficult. \subsubsection*{Structured population} Dropping the panmixia assumption opens the door to multiple ways of modeling population structure. Classically, the global population is considered to be made up of subpopulations (called islands, or demes), each of which is panmictic, between which migration events may occur, with rates that may depend on each pair of islands. The demographic parameters of the model are thus: the number of islands $n$, the respective renormalized sizes $(s_i)_{i=1 \dots n}$ of the $n$ islands (again assumed constant in time), and the renormalized migration rates (to take into account the scaling already described, which allows us to go to continuous time by assuming that the sizes of each population are sufficiently large) $(M_{ij})_{i \neq j}$ between islands $i$ and $j$. The description of the coalescent tree thus becomes much more complex, but the information can be summarized, as Hilde Wilkinson-Herbots showed in 1994 in her landmark thesis work (\cite{Herbots1994}).
If we denote by $\alpha=(\alpha_1, \dots, \alpha_n)$ the configuration where $\alpha_i$ represents the number of lineages present in island $i$, then the coalescence process can be described by the infinitesimal generator $Q$ such that \begin{equation}\label{Q} Q(n_{\alpha},n_{\beta})=\left\{\begin{array}{cl} \alpha_i\frac{M_{ij}}{2} & \text{if }\beta=\alpha-\epsilon^i+\epsilon^j \quad (i\neq j) \\ \frac{1}{s_i}\frac{\alpha_i(\alpha_i-1)}{2} & \text{if }\beta=\alpha-\epsilon^i\\ -\sum_i \left(\alpha_i\frac{M_i}{2}+\frac{1}{s_i}\frac{\alpha_i(\alpha_i-1)}{2}\right) & \text{if } \beta=\alpha \\ 0 & \text{otherwise},\end{array}\right. \end{equation} where $\epsilon^i$ is the vector of size $n$ whose components are all zero except the $i$-th, which is $1$, where $M_i=\sum_{j\neq i} M_{ij}$ is the total migration rate from island $i$, and where $n_\alpha$ and $n_\beta$ are the respective numbers of the $\alpha$ and $\beta$ configurations, once a prior ordering of all possible configurations has been chosen. Note that $\frac{1}{s_i}\frac{\alpha_i(\alpha_i-1)}{2}$ represents a coalescence rate and $\alpha_i\frac{M_{ij}}{2}$ a migration rate. \subsection*{Genetic parameters, estimations and inferences} From a genetic point of view, all these models are assumed to be neutral, i.e. they do not take into account the possible influence of selection on the reproductive capacity of each individual. However, it is possible to easily incorporate the phenomena of mutation and recombination into these models, because they can be considered as events independent of the genealogical process. The classical assumptions are that each mutation or recombination event affects a different part of the genome (the so-called \textit{infinite sites model}), and that the mutation and recombination rates are constant both in time and along the genetic sequences.
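For two sampled lineages in the fully symmetric case (all islands of size $s$, $M_{ij}=M/(n-1)$, so that each lineage migrates at total rate $M/2$), the configurations $\alpha$ of the generator \eqref{Q} collapse by symmetry into three lumped states. A minimal sketch, with a function name of our own choosing:

```python
def island_generator_two_lineages(n, M, s=1.0):
    """Lumped generator of the structured coalescent of Eq. (Q) for two
    lineages in the symmetric n-island model: every island has size s,
    M_ij = M / (n - 1), so each lineage migrates at total rate M / 2.
    States: [same island, different islands, coalesced]."""
    out_same = M + 1.0 / s          # two lineages migrating (M) or coalescing (1/s)
    gather = M / (n - 1)            # rate at which separated lineages rejoin
    return [
        [-out_same, M, 1.0 / s],    # from "same island"
        [gather, -gather, 0.0],     # from "different islands"
        [0.0, 0.0, 0.0],            # "coalesced" is absorbing
    ]

Q_sym = island_generator_two_lineages(n=5, M=1.0)
# Rows of an infinitesimal generator sum to zero:
assert all(abs(sum(row)) < 1e-12 for row in Q_sym)
```

This three-state chain is the smallest instance of \eqref{Q}; the general case simply enumerates all configurations $\alpha$ instead of lumping them.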
\subsubsection*{Mutation and genetic diversity} \label{sfs} Mutation events are distributed on the genealogical tree according to a Poisson process, and we can link genetic diversity data -- observing for example the number of alleles of a given gene, or their distribution, and more generally quantifying polymorphism -- with the configuration of the tree (topology, branch lengths) associated with the chosen model. By choosing a model, we can estimate the mutation parameter; conversely, assuming the mutation parameter known, we can estimate the branch lengths of the tree, and thus obtain information on the distributions of the coalescence times. Among the best known estimators, let us mention Watterson's Theta (\cite{watterson1975number}) and Tajima's D (\cite{tajima1989statistical}). \subsubsection*{Recombination and Sequential Markovian Coalescence} The phenomenon of recombination is much more difficult to incorporate into these models than mutation, since it requires sexual reproduction, and at each recombination event the resulting genome is derived not from one parent but from two, thus exponentially increasing the number of ancestors involved for each lineage of individuals sampled in the present population. The ancestral recombination graph (ARG, see \cite{griffiths1996ancestral}) requires a computational treatment that very quickly becomes prohibitive as the sample size increases. The work of McVean and Cardin (\cite{mcvean2005approximating}) made it possible, under a novel hypothesis of Markovian dependence \textit{along the genome} (hence the term \textit{sequential}), to greatly restrict the space to be explored by statistical inference methods.
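To return to the mutation-based estimators mentioned above, Watterson's estimator has a particularly simple form: under the infinite-sites model, $\mathbb{E}[S]=\theta\sum_{i=1}^{n-1}1/i$ for the number $S$ of segregating sites in a sample of $n$ sequences, hence $\hat\theta_W=S/a_{n-1}$. A minimal sketch (function name ours):

```python
def watterson_theta(num_segregating_sites, sample_size):
    """Watterson's estimator of theta: the number of segregating sites S
    divided by the harmonic number a_{n-1} = sum_{i=1}^{n-1} 1/i, since
    E[S] = theta * a_{n-1} under the infinite-sites model."""
    a = sum(1.0 / i for i in range(1, sample_size))
    return num_segregating_sites / a

# 10 segregating sites among 5 sequences: a_4 = 25/12, so theta_W = 4.8
assert abs(watterson_theta(10, 5) - 4.8) < 1e-12
```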
Several demographic parameter inference software packages then emerged, including the famous PSMC (for \textit{Pairwise Sequentially Markovian Coalescent}, \cite{Li2011}), which has been widely used since 2011, and which makes it possible to estimate the variation of the population size (denoted $\lambda(t)$ in equation \eqref{lambda}) using the genetic data of a single diploid individual (fully sequenced genome), see for example Figure \ref{fig:Estimation_PSMC}. \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{./PSMC1.png} \caption{Demographic inference obtained from human DNA from individuals of different populations (\cite{Li2011}). On the $x$-axis, the number of years in the past. On the $y$-axis, the renormalized size of the population, assumed to be panmictic.} \label{fig:Estimation_PSMC} \end{figure} \section{Consideration of population structure, central role of the IICR} Some work in the first decade of this century (\cite{wakeley2001coalescent}, \cite{chikhi2001estimation}, \cite{chikhi2010confounding}) highlighted the effect that population structure can have on statistics assuming panmixia, notably the detection of false bottleneck signals in some cases. After a preliminary study comparing a simple island model with a size-change model (\cite{mazet2015demographic}), we obtained a first result, which has become a cornerstone of our subsequent research and which we present here. \subsubsection*{The IICR and the change in size} We have highlighted in \cite{Mazet2016} the following result.
Whatever the chosen demographic model, the coalescence time $T_2$ of the lineages of two individuals sampled in the present population is a random variable with values in $\mathbb{R}^+$, which can thus be viewed as a lifetime with density $f_{T_2}$, and which therefore admits a ``failure rate'', here an \textbf{(instantaneous) coalescence rate} equal to $$ \mu(t)=\frac{f_{T_2}(t)}{\mathbb{P}(T_2>t)}. $$ The density of $T_2$ can thus always be written $$ f_{T_2}(t)=\mu(t)\exp\left( -\int_0^t \mu(\tau) \mathrm{d}\tau \right), $$ hence \begin{equation}\label{PT2gen} \mathbb{P}(T_2>t)=\exp\left( -\int_0^t \mu(\tau) \mathrm{d}\tau \right). \end{equation} If we now compare equation \eqref{PT2gen} with the particular case $k=2$ of equation \eqref{PTkch}, we see that in the panmictic case the size change $\lambda(t)$ is exactly equal to $\frac{1}{\mu(t)}$, the Inverse of the Instantaneous Coalescence Rate, denoted by the acronym \textbf{IICR}. Two important consequences can be drawn from this observation: \begin{enumerate} \item The $T_2$ distribution alone cannot be informative about the demographic model, since whatever the model, there is always a panmictic model that provides exactly this $T_2$ distribution: it suffices to choose the inverse of the coalescence rate as the size change. \item What software such as PSMC infers, from the data of a single diploid genome, however long it may be, is the IICR associated with the demographic model, which is usually \textbf{not the change in size} of the population when the latter is structured. \end{enumerate} Taking the second consequence further, as a proof of concept we built a constant-size demographic model based solely on a symmetric island model, with the number of islands also constant, where only the migration parameter is allowed to vary.
As we can see in Figure \ref{fig:hum_niles}, extracted from \cite{Mazet2016}, the PSMC output on data simulated under this model is very similar to that on real human data. \begin{figure}[ht] \centering \includegraphics[height=7cm]{./hum_niles.png} \caption{In red, the PSMC of real data from a human (CHN.A in Figure \ref{fig:Estimation_PSMC}). In green, the PSMC of 10 simulations of the same island model, of constant size, with three changes of migration rate represented by the vertical dotted lines. The blue vertical lines represent identified periods that could be linked with those changes. } \label{fig:hum_niles} \end{figure} There is obviously no question of claiming that the human population is structured in symmetrical islands and that its population size has remained constant over the course of evolution, but this example prompts us to question the interpretation of the IICR, which is the object inferred by the PSMC, and shows that further investigation is necessary before drawing any conclusions about the demographic history of a population. \subsubsection*{Influence of the sample size} \label{Tk} Following the first consequence drawn above, we explored theoretically what additional information the data of a third lineage could bring. We showed that, in the simple case of a population structured in islands, adding the information of the $T_3$ distribution to the $T_2$ distribution is enough to distinguish this model from the panmictic model having the same $T_2$ distribution, and thus the same IICR (\cite{Grusea2018}). This result provides theoretical evidence that a sample size strictly greater than two is sufficient to distinguish an island-structured model from a panmictic model, but initial attempts to put this into practice have not yet been successful, because the required level of precision is often drowned in the noise of real data.
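To make the link between a structured model and its IICR concrete, the following sketch (function name and crude numerical scheme are ours; \cite{rodriguez2018iicr} gives exact computations) integrates the two transient lumped states of the symmetric $n$-island model for a pair of lineages (in the same island or in different islands) and returns $\mathbb{P}(T_2>t)/f_{T_2}(t)$:

```python
def island_iicr(t, n, M, s=1.0, same_island_sample=True, steps=20000):
    """IICR(t) = P(T2 > t) / f_T2(t) for two lineages in the symmetric
    n-island model (island size s, M_ij = M / (n - 1)), integrating the
    master equation over the two transient states (same island /
    different islands) with a fixed-step 4th-order Runge-Kutta scheme."""
    def deriv(p):
        ps, pd = p
        return (-(M + 1.0 / s) * ps + M / (n - 1) * pd,
                M * ps - M / (n - 1) * pd)

    p = (1.0, 0.0) if same_island_sample else (0.0, 1.0)
    h = t / steps
    for _ in range(steps):
        k1 = deriv(p)
        k2 = deriv((p[0] + h / 2 * k1[0], p[1] + h / 2 * k1[1]))
        k3 = deriv((p[0] + h / 2 * k2[0], p[1] + h / 2 * k2[1]))
        k4 = deriv((p[0] + h * k3[0], p[1] + h * k3[1]))
        p = (p[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             p[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    survival = p[0] + p[1]   # P(T2 > t)
    density = p[0] / s       # f_T2(t): coalescence flux out of "same island"
    return survival / density
```

For strong migration the model is close to a panmictic population of total size $ns$ and the computed IICR approaches $ns$, whereas for weak migration the IICR varies over time even though every size in the model is constant, illustrating the false size-change signals discussed above.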
\subsubsection*{Structure and IICR: sampling strategy} Exploratory work was then carried out (\cite{Chikhi2018}), using simulated data, to find out what signatures different types of structured models leave on the IICR, and thus indirectly (or directly when dealing with software of the same type as PSMC) on the false signals of size changes that these models generate. As an illustration, we present in Figure \ref{fig:3iles}, extracted from this paper, the simulated IICRs in a model with three islands and asymmetric migration rates, sampling a diploid individual in each of the islands. We see that not only can the structure of the model give false signals of size change for software assuming panmixia, but the IICR is also \textbf{dependent on the sampling location}, for the same demographic history. This finding also deserves to be explored further for use in model selection. \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{./figure_3iles_Lounes.png} \caption{The size of each island is constant. However, the IICRs of each pair of sequences are time-varying functions, and these functions may not even be monotonic. Furthermore, they differ depending on the island from which the pair is sampled.} \label{fig:3iles} \end{figure} \subsubsection*{The IICR as a model validation} The IICR can also be used as a \textbf{summary statistic} of a given model, for validation purposes. In the same paper \cite{Chikhi2018} we thus tested a number of models proposed in the literature for human evolution, for \textit{Homo sapiens} as well as for \textit{Homo neandertalensis}. Simulating the IICR of some of these models allowed us to discard them, as the IICR produced differed radically from that estimated by PSMC on human data. Figure \ref{fig:lounes} illustrates this situation.
\begin{figure}[ht] \centering \includegraphics[height=6cm]{./fig_psmc_lounes.png} \caption{The PSMCs of real modern human and Neanderthal data (the last two in the order of the legend), set against the PSMCs of simulated data from different proposed models (the first six), which are thus radically different. And for the record, the panmictic model suggested by \cite{Li2011} and the fictional structured model proposed by \cite{Mazet2016}. For a more detailed explanation, see \cite{Chikhi2018}.} \label{fig:lounes} \end{figure} \subsubsection*{IICR and structured coalescent} On the theoretical side, work has been undertaken to compute, as precisely as we like, the IICR of any structured model (\cite{rodriguez2018iicr}). The modeling initiated by Herbots provides a set of infinitesimal generators of Markov processes (see formula \eqref{Q}), and it is possible to exploit the semi-group property of the exponentials of these matrices. Indeed, changes in some parameters of the structured models, such as island sizes or migration rates, leave the state space of the process unchanged, so the matrices can be time-dependent as piecewise constant functions. For example, if we suppose that at a date $T$ in the past some of the parameters $M_{ij}$ or $s_i$ change, and if we denote by $Q_0$ the generator for times $0\leq t\leq T$ and by $Q_1$ the one corresponding to times $t>T$, the transition semi-group of the Markov chain can be written as follows: \[ P_{t} = \begin{cases} \mathrm{e}^{tQ_{0}}, & \text{ if } t\leq T \\ \mathrm{e}^{TQ_{0}}\mathrm{e}^{(t-T)Q_{1}},\ \ & \text{ otherwise}. \end{cases} \] In particular, the distribution of $T_2^{\alpha}$, the coalescence time of two lineages starting from an initial configuration $\alpha$, is deduced from \[ \mathbb{P}(T_{2}^\alpha \leq t)=P_t(n_{\alpha}, n_c), \] where $n_\alpha$ is the number of the state corresponding to $\alpha$, and $n_c$ the number of the coalescence state.
Its density is then equal to $f_{T_{2}^\alpha}(t) = P'_{t} (n_{\alpha}, n_c),$ where \[ P'_{t} = \begin{cases} \mathrm{e}^{tQ_{0}}Q_{0}, & \text{ if } t< T \\ \mathrm{e}^{TQ_{0}}\mathrm{e}^{(t-T)Q_1}Q_1,\ \ & \text{ otherwise}. \end{cases} \] These formulas make it possible to determine numerically the theoretical IICRs of a large number of structured models, such as the continent-islands model in Figure \ref{fig:ilescont}, with possible changes in demographic parameters, such as subpopulation sizes or migration rates. Also as a proof of concept, an extended fictional model of human evolution was proposed, integrating Neanderthals alongside modern humans in the same constant-size structured model, with only the migration coefficients allowed to change. The simulated PSMCs are presented in Figure \ref{fig:sapiens_neand}. \begin{figure}[ht] \centering \includegraphics[height=6cm]{./fig_ilescont.png} \caption{Theoretical IICR of the structured model with a continent of size $1$, three islands of sizes $\frac{1}{20}$, and migration rates proportional to sizes between islands and the continent (no migration between islands).
We find, as in the simulations in \cite{Chikhi2018}, the importance of sampling location, as well as the obvious false signals of population size changes that software such as PSMC could infer, even though the demographic model here is constant over time.} \label{fig:ilescont} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{./fig_sapiens_neand.png} \caption{Superposition of the PSMCs of real Neanderthal and Sapiens data, with the theoretical IICRs of the proposed structured model (left) and the PSMCs of the simulated data from this structured model (right).} \label{fig:sapiens_neand} \end{figure} \section{Inference of parameters in a structured model} \subsection*{IICR of $T_2$ for a structured model} The theoretical possibility (presented in \cite{rodriguez2018iicr}) of numerically computing the IICR of two sampled lineages in any structured model, as a function of demographic parameters such as the number of islands, the successive island sizes, the successive migration rates, and the times at which parameters like sizes or migration rates change, opens the way to estimating these parameters from IICRs inferred from real data, e.g. via the ubiquitous PSMC. The challenges of complexity and computation time, even in the simplest case of the symmetric island model, have been overcome thanks to the work of Armando Arredondo, as part of his PhD thesis (\cite{ArredondoThesis}), with the design and realization of a software package for inferring such parameters, called SNIF (Structured Non-stationary Inferential Framework), presented in \cite{arredondo2021inferring}. Testing this software on simulated data revealed a first problem of identifiability between subpopulation sizes and migration rates. Second, to obtain an acceptable level of precision in the estimation of the migration rates, the number of different values over time (called the number of ``components'') should not be too large, generally not more than 5 or 6.
On the other hand, the estimated number of subpopulations is extremely reliable. A synthesis of these first results can be seen in Figure \ref{fig:SNIF}. \begin{figure}[!ht]\centering% \includegraphics{fig_main_validation-exact}% \caption{Scatter plots of simulated and inferred parameters. $n$ is the number of islands, $t_i$ the $i$-th change of value of the migration parameter, and $M_i$ the $i$-th value of the latter. Panel \textbf{(a)} corresponds to scenarios with $c=3$~components, and \textbf{(b)} to scenarios with $c=6$~components. The different sub-panels represent the simulated (horizontal axis) versus inferred (vertical axis) parameter values for all the parameters (or a representative selection of parameters in the case of panel \textbf{(b)}) of $L=400$ unscaled simulated scenarios.}\label{fig:SNIF}% \end{figure} An application to human data also yields a symmetric-island model that explains surprisingly well the curve produced by the PSMC, see Figure \ref{fig:SNIF_PSMC}. \begin{figure}[!ht]\centering% \includegraphics{fig_main_application-humans-iicr-and-cg}% \caption{Results of performing demographic inference on three representative human PSMC curves. Panel \textbf{(a)} shows the various IICR plots inferred for the different populations, numbers of components $c$ and weight parameters $\omega$ used, together with the target IICR curves (or PSMC plots) on which these estimations are based. Panel \textbf{(b)} shows the connectivity graphs for the same set of inferred scenarios. As a reference point, the connectivity graph of the scenario proposed in \cite{rodriguez2018iicr} is also shown. The vertical axes represent migration rates ($M$).}\label{fig:SNIF_PSMC}% \end{figure} This method has already been used to contribute to the study of the evolution of two species of mouse lemurs, Malagasy lemuriform primates (\textit{Microcebus murinus} and \textit{Microcebus ravelobensis}). The results are published in \cite{teixeira2021}.
Data from other mammal species are currently being analysed with this software. \subsection*{Increase of the sample size} In order to increase the precision of the estimation of demographic parameters, it is natural to want to increase the size of the statistical sample. There are two possible approaches, detailed below. \subsubsection*{Computation of the IICR$_k$} The IICR of $T_k$ (noted here IICR$_k$) for $k>2$ is theoretically easily computable thanks to the infinitesimal generator of equation \eqref{Q} and the extensions presented in section \ref{Tk}. Indeed, the IICR$_k$ of the first coalescence time $T_k$ among $k$ lineages can be defined in the same way as the IICR (which is in fact the IICR$_2$): $$ \text{IICR}_k(t)=\frac{\mathbb{P}(T_k>t)}{f_{T_k}(t)}. $$ While we know that in the panmictic case we have $$ \forall k\geq 2, \forall t>0, \qquad \text{IICR}_k(t)=\frac{1}{\binom{k}{2}}\text{IICR}_2(t), $$ this is not the case for a structured model (as we formally showed for the symmetric island model in \cite{Grusea2018}). There already exist powerful methods to estimate the IICR$_k$ from real genomic data of sample size $k$, notably the extensions of the PSMC called MSMC (for \textit{Multiple Sequentially Markov Coalescent}, see \cite{Schiffels2013}) and its more recent version MSMC2 (see \cite{schiffels2020msmc}). The practical problem is that the larger the sample size, the shorter the first coalescence times, and thus the fewer the genomic traces left in the data: the number of mutation and recombination events decreases very quickly and falls below the threshold required for satisfactory statistical estimation. \subsubsection*{Site Frequency Spectrum} A less informative but widely used summary of a sample of size $k$ is the distribution of allele frequencies across sites, generally called the SFS (for \textit{Site Frequency Spectrum}).
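As a concrete illustration of what the SFS records, here is a minimal sketch (the genotype matrix, function name and data are ours, for illustration only): for each $i$ from $1$ to $k-1$, it counts the sites at which the derived allele is carried by exactly $i$ of the $k$ sampled sequences.

```python
import numpy as np

def site_frequency_spectrum(genotypes):
    """Unfolded SFS of a 0/1 genotype matrix (rows = k sampled sequences,
    columns = sites): entry i-1 counts the sites where the derived allele
    appears in exactly i of the k sequences, for i = 1 .. k-1."""
    k = genotypes.shape[0]
    counts = genotypes.sum(axis=0)                 # derived-allele count per site
    return np.bincount(counts, minlength=k + 1)[1:k]

G = np.array([[1, 0, 1, 1],
              [0, 0, 1, 1],
              [0, 1, 0, 1]])   # k = 3 sequences, 4 sites
# Two singletons and one doubleton; the count-3 site is fixed in the sample
# and therefore does not contribute to the (unfolded) SFS.
print(site_frequency_spectrum(G))
```

Entries corresponding to sites fixed in the sample (count $0$ or $k$) are discarded, since such sites are not segregating within the sample.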
The average SFS is known for a panmictic model (see for example \cite{griffiths1998age}), but the calculation becomes combinatorially very complex for a general structured model. In the case of the island model, Armando Arredondo has recently completed the theoretical derivation of the average SFS for any value of $k$, and established its computational feasibility for sample sizes $k \leq 26$ with current computing capabilities (\cite{ArredondoThesis}, chapter 3). It now remains to implement this algorithm in the inference software. \section{IICR and consideration of selection} All the models we have discussed so far are so-called neutral models, i.e. they do not take into account the selection pressure that individuals undergo through parts of their genomes, increasing (positive selection) or decreasing (negative selection) their reproductive capacity on average. One way to model selection on genomic sequences is to assume that the portions of the genome under selection have an effective size different from that of the neutral regions (see for example \cite{charlesworth2009effective}, \cite{gossmann2011quantifying} or \cite{jimenez2016heterogeneity}). This makes the coalescence rate, which is linked to reproductive capacity, vary along the genome. Since the IICR is directly related to the coalescence rate, it is natural to explore how modeling selection through variation of the effective size along the genome influences the IICR (\cite{boitard2022heterogeneity}).
A theoretical calculation shows, under the simple hypothesis of a panmictic population, that if we assume the existence of $K$ classes of the genome with respective effective sizes $\lambda_i=\frac{1}{\mu_i}$ for $i=1,\dots,K$, each corresponding to a proportion $a_i$ of the genome (with $\sum_{i=1}^K a_i=1$), then the IICR is $$ \text{IICR}(t)=\frac{\sum\limits_{i=1}^K a_i\mathrm{e}^{-\mu_i t}}{\sum\limits_{i=1}^K a_i\mu_i\mathrm{e}^{-\mu_i t}}, $$ and a basic calculation indicates that for all values of $K$, $a_i$ and $\lambda_i$, this IICR is always increasing on $\mathbb{R}^+$, with $$ \text{IICR}(0)=\frac{1}{\sum\limits_{i=1}^K a_i\mu_i} \quad \text{and} \quad \lim_{t \to +\infty}\text{IICR}(t)=\max_{i=1 \dots K}\lambda_i. $$ Thus, under these assumptions, the largest effective size present in the genome, even in a very small proportion, has a non-negligible influence on the growth of the IICR as a function of time from the present to the past, thereby inducing a false signal of abrupt population decrease in the more or less distant past, as can be seen in Figure \ref{fig:IICR_sel_pan}. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{./IICR_sel_pan.png} \caption{Example of IICR for $K=3$, $\lambda_1=0.1$, $\lambda_2=1$ and $\lambda_3=3$. On the left we set $a_3=0.01$ and on the right $a_1=0.5$.
The value $\lambda_3$ determines the limit, and $a_3$ the speed of convergence, with more or less pronounced transient plateaus depending on the other values.} \label{fig:IICR_sel_pan} \end{figure} Finally, more generally, if the population is not panmictic, denoting by $f_i(t)$ the density of the coalescence time $T^i_2$ and by $a_i$ the proportion of the $i$-th of the $K$ classes of the genome, we have $$ \text{IICR}(t)=\frac{\sum_{i=1}^K a_i \mathbb{P}(T^i_2>t)}{\sum_{i=1}^K a_i f_i (t)}, $$ and thanks to our previous work on island-structured models, we can superimpose structure and selection and numerically calculate the corresponding IICRs (see Figure \ref{fig:IICR_sel_str}). We can then see that, on the one hand, even if we find the same monotonic pattern as in the panmictic case, the structure partly hides the growth towards the limit value. On the other hand, for some (small but realistic) values of the migration rate, we can see, in an intermediate zone, a reversal of the monotonicity of the IICR as a function of the proportion of the large class. Indeed, the smaller the proportion of the large-size class, the lower the IICR in general, except in those zones. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{./IICR_sel_str.png} \caption{Example of IICR for a symmetric 10-island model, migration rate $M$, and a genome with $K=2$ effective size classes $\lambda_1=0.1$ and $\lambda_2=1$ of relative proportions $a_1$ and $a_2$.} \label{fig:IICR_sel_str} \end{figure} These results on the links between the IICR and the modeling of selection through effective sizes should again caution the researcher against jumping to conclusions about the demographic history of a population.
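The mixture formula above is straightforward to evaluate numerically. The following sketch is our own illustration (not code from the cited works), with an assumed split $a_1=a_2$ for the left panel of Figure \ref{fig:IICR_sel_pan}; it reproduces the qualitative behaviour: the IICR rises monotonically from $1/\sum_i a_i\mu_i$ towards $\max_i \lambda_i$.

```python
import numpy as np

def iicr_mixture(t, a, lam):
    """IICR of a panmictic population whose genome is split into K classes,
    class i having proportion a[i] and effective size lam[i] (so mu_i = 1/lam[i])."""
    a = np.asarray(a, dtype=float)
    mu = 1.0 / np.asarray(lam, dtype=float)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    surv = (a * np.exp(-np.outer(t, mu))).sum(axis=1)       # P(T2 > t)
    dens = (a * mu * np.exp(-np.outer(t, mu))).sum(axis=1)  # f_{T2}(t)
    return surv / dens

# K = 3 classes: lambda = (0.1, 1, 3), a3 = 0.01, the rest split evenly (our choice)
a, lam = [0.495, 0.495, 0.01], [0.1, 1.0, 3.0]
ts = np.linspace(0.0, 20.0, 401)
vals = iicr_mixture(ts, a, lam)
print(vals[0], vals[-1])  # starts at 1/sum(a_i mu_i), climbs towards max(lam) = 3
```

Even though the class of size $\lambda_3=3$ occupies only 1\% of the genome, it alone dictates the ancient-past plateau of the IICR.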
\section{Conclusion and prospects} In summary, the IICR of $T_2$, despite its intrinsic limitations (it describes the distribution of a variable that is not directly observable, and it is based on a sample of size 2), proves to be an extremely fertile modeling object. The matrix expression of this function of time\footnote{It should be noted that the mathematical objects studied in our work are part of a general formalism coming from the theory of phase-type distributions, see for example \cite{hobolth2019phase}.} facilitates precise numerical calculations (\cite{rodriguez2018iicr}) and makes powerful inference methods accessible, such as the recently developed SNIF program (\cite{arredondo2021inferring}). The IICR also sheds new light on a concept with which it is naturally associated, that of \textit{effective size}, which is the source of an abundant literature (see, for example, \cite{charlesworth2009effective} for a complete review) and is subject to many different, sometimes contradictory, interpretations. The starting point is the direct correspondence between the IICR and the population size under panmixia. But a first level of added complexity, the introduction of population structure, quickly leads to erroneous conclusions about size changes. We can thus observe variations in the effective size that do not correspond to those of the real size, or even go in the opposite direction (\cite{rodriguez2018iicr}, \cite{Chikhi2018}). Similarly, it is useful to know how to detect possible effective-size variations induced by the introduction of genomic regions under selection, positive or negative (\cite{boitard2022heterogeneity}). Another crucial contribution of the IICR is to highlight the importance of the sampling strategy (\cite{Chikhi2018}), the exploitation of which should allow new methods for model selection.
Among the other more immediate prospects, we note the continuation of the theoretical study of the IICR for models slightly more elaborate than the symmetric island model (first of all the asymmetric island model, the continent-islands model, or even the one- or two-dimensional stepping-stone model), with the objective of highlighting the influence of the values of the demographic parameters on the variations of the IICR, via the eigenvalues of the associated infinitesimal generator. Finally, in order to increase the predictive and explanatory capacities of our models by enlarging the sample size, we plan to extend the existing inference method (\cite{arredondo2021inferring}) by incorporating, on the one hand, the average SFS of an island model and, on the other hand, the IICR of $T_k$. \newpage \bibliographystyle{apalike} \section{Coalescent theory in population genetics} Population genetics is the study of the evolution of genotypes in a population of living beings, under various evolutionary pressures such as mutation, selection or genetic drift. Initiated in the first half of the twentieth century with the work of the British statistician Ronald Fisher and the American geneticist Sewall Wright, it saw the emergence of a backward-in-time model called the \textbf{coalescent}, the first developments of which are due to the British mathematician John Kingman in the 1980s (\cite{kingman1982coalescent}). Coalescent theory consists in sampling individuals (more precisely, loci of individuals' genomes) in the present population, and tracing their genealogies back in time until successive common ancestors of two or more lineages are reached. The instants, backward in time, at which such common ancestors appear are called \textbf{coalescence times}, and are modeled as random variables with values in $\mathbb{N}$ or in $\mathbb{R}^+$.
The mathematical objects of interest are then the joint distributions of the various coalescence times of a genealogical tree, which make it possible to express the observable quantities in the genomes of a present population as functions of those distributions. These functions depend on genetic parameters (such as mutation rate, recombination rate, selection rate) and demographic parameters (such as the sizes and numbers of subpopulations, and the migration rates between subpopulations). Observations of genetic sequences can then be used to infer genetic or demographic parameters using various statistical methods, and technological advances over the last two decades have made it possible to acquire huge masses of data, which can be used to refine existing models and develop new ones. \subsection*{Wright-Fisher model and Kingman coalescent} More precisely, the Wright-Fisher model describes the evolution of a population of $2N$ individuals (the individuals can be genes or loci) under the following assumptions: in each generation, each individual independently generates a number of descendants following a Poisson distribution with the same constant parameter, the offspring completely replacing their parents, all conditioned on the population size remaining constant. The process can be described in an equivalent way backward in time: each individual of a given generation chooses its parent uniformly at random in the previous generation. An illustration of the process is given in Figure \ref{fig:WrightFisher}\footnote{Figure from volume \cite{hein2004gene}.}. \begin{figure}[!htb] \centering% \subfloat[ ]{\includegraphics[scale=0.21]{wright-fisher-a.png}}~\quad~ \subfloat[ ]{\includegraphics[scale=0.21]{wright-fisher-b.png}}~\quad~ \subfloat[ ]{\includegraphics[scale=0.206]{wright-fisher-c.png}} \caption{A realization of the Wright-Fisher process over $16$ generations with a population size of $2N=10$.
Panel \textbf{(a)} presents the evolution, each row being a generation; in panel \textbf{(b)} the individuals have been rearranged to highlight the family tree; and for panel \textbf{(c)} three individuals have been chosen in the last generation, with their respective lineages shown in bold. We see that the first coalescence between individuals $1$ and $2$ takes place two generations in the past, and that the last coalescence, to the most recent common ancestor of individuals 1, 2 and 3, takes place nine generations in the past.}\label{fig:WrightFisher}% \end{figure} If we now consider a pair of individuals in the last generation, and denote by $T_2$ the waiting time for the coalescence of the two lineages (going back in time), we have $$ \mathbb{P}(T_2>i)=\left(1-\frac{1}{2N}\right)^i, $$ and if we suppose $N$ to be large, by changing the time scale we obtain the usual approximation of the geometric distribution by the exponential distribution: $$ \mathbb{P}(T_2>\lfloor 2Nt\rfloor)=\left(1-\frac{1}{2N}\right)^{\lfloor 2Nt\rfloor}\sim \mathrm{e}^{-t}. $$ We can generalize to the first renormalized coalescence time, still denoted $T_k$, for $k$ individuals sampled in the last generation: $$ \mathbb{P}(T_k>t)\sim \mathrm{e}^{-\binom{k}{2}t}, $$ and one thus obtains the coalescence tree of Kingman's coalescent, in which the successive coalescence times are independent and follow exponential distributions with parameters given by the binomial coefficients $\binom{j}{2}$, $j=k,\dots,2$.
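The geometric law of $T_2$ can be checked by directly simulating the backward Wright-Fisher process. The sketch below is an illustration we add here (function and variable names are ours): each of the two lineages picks its parent uniformly among the $2N$ individuals of the previous generation, and coalescence occurs when they pick the same one.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def pairwise_coalescence_time(two_N, rng):
    """Generations back until two sampled lineages choose the same parent,
    each lineage picking its parent uniformly among the 2N individuals."""
    t = 0
    while True:
        t += 1
        if rng.integers(two_N) == rng.integers(two_N):
            return t

two_N = 100
t2 = np.array([pairwise_coalescence_time(two_N, rng) for _ in range(10000)])
print(t2.mean())  # close to E[T2] = 2N = 100, since T2 ~ Geometric(1/2N)
```

Rescaling time by $2N$, the empirical survival function of `t2 / two_N` approaches the exponential curve $\mathrm{e}^{-t}$ as $N$ grows, as stated above.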
\subsubsection*{Population size change} If we consider that the size of the population can vary, writing $N(i)$ for the size of the population at generation $i$ in the past, and considering the quantity \begin{equation}\label{lambda} \lambda(t)=\frac{N(\lfloor 2Nt \rfloor)}{N(0)}, \end{equation} the relative size of the population in the past with the same temporal renormalization as in the previous section, it can be shown (see e.g. \cite{Tavare2004}, section 2.4) that under reasonable conditions on the variation of $\lambda$ in the neighbourhood of infinity, the coalescence time $T_k$ of $k$ individuals sampled in the present satisfies \begin{equation}\label{PTkch} \mathbb{P}(T_k>t)=\exp\left( -\binom{k}{2}\int_{0}^{t} \frac{\mathrm{d}\tau}{\lambda(\tau)} \right). \end{equation} But unlike in the panmictic case, the successive $T_k$ are no longer independent, which makes the global study of the tree more difficult. \subsubsection*{Structured population} Dropping the panmictic assumption opens the door to multiple ways of modeling population structure. Classically, the global population is considered to be made up of subpopulations (called islands, or demes), each of which is panmictic, between which migration events may occur, with rates that may depend on each pair of islands. The demographic parameters of the model are thus: the number of islands $n$, the respective renormalized sizes $(s_i)_{i=1 \dots n}$ of the $n$ islands (again assumed constant in time), and the renormalized migration rates $(M_{ij})_{i \neq j}$ between islands $i$ and $j$ (the renormalization takes into account the scaling already described, which allows passing to continuous time by assuming that the size of each subpopulation is sufficiently large). The description of the coalescent tree thus becomes much more complex, but the information can be summarized, as Hilde Herbots-Wilkinson showed in 1994 in her landmark thesis work (\cite{Herbots1994}).
If we denote by $\alpha=(\alpha_1, \dots, \alpha_n)$ the configuration where $\alpha_i$ represents the number of lineages present in island $i$, then the coalescence process can be described by the infinitesimal generator $Q$ such that \begin{equation}\label{Q} Q(n_{\alpha},n_{\beta})=\left\{\begin{array}{cl} \alpha_i\frac{M_{ij}}{2} & \text{if }\beta=\alpha-\epsilon^i+\epsilon^j \quad (i\neq j) \\ \frac{1}{s_i}\frac{\alpha_i(\alpha_i-1)}{2} & \text{if }\beta=\alpha-\epsilon^i\\ -\sum_i \left(\alpha_i\frac{M_i}{2}+\frac{1}{s_i}\frac{\alpha_i(\alpha_i-1)}{2}\right) & \text{if } \beta=\alpha \\ 0 & \text{otherwise},\end{array}\right. \end{equation} where $M_i=\sum_{j\neq i}M_{ij}$, where $\epsilon^i$ is the vector of size $n$ whose components are all zero except the $i$-th, which is $1$, and where $n_\alpha$ and $n_\beta$ are the respective indices of the configurations $\alpha$ and $\beta$, a prior ordering of all possible configurations having been chosen. Note that $\frac{1}{s_i}\frac{\alpha_i(\alpha_i-1)}{2}$ represents a coalescence rate and $\alpha_i\frac{M_{ij}}{2}$ a migration rate. \subsection*{Genetic parameters, estimations and inferences} From a genetic point of view, all these models are assumed to be neutral, i.e. they do not take into account the possible influence of selection on the reproductive capacity of each individual. However, it is possible to easily incorporate the phenomena of mutation and recombination into these models, because they can be considered as events independent of the genealogical process. The classical assumptions are that each mutation or recombination event affects a different part of the genome (the so-called \textit{infinite sites model}), and that the mutation and recombination rates are constant both in time and along the genetic sequences.
\subsubsection*{Mutation and genetic diversity} \label{sfs} Mutation events are distributed on the genealogical tree according to a Poisson process, and we can link genetic diversity data (for example the number of alleles of a given gene, their distribution, and more generally any quantification of polymorphism) with the configuration of the tree (topology, branch lengths) associated with the chosen model. By choosing a model, we can estimate the mutation parameter; conversely, by assuming the mutation parameter to be known, we can estimate the branch lengths of the tree, and thus obtain information on the distributions of the coalescence times. Among the best-known estimators, let us mention Watterson's theta (\cite{watterson1975number}) and Tajima's D (\cite{tajima1989statistical}). \subsubsection*{Recombination and Sequential Markovian Coalescence} The phenomenon of recombination is much more difficult to incorporate into these models than mutation, since it requires sexual reproduction, and at each recombination event the resulting genome is derived not from one parent but from two, thus exponentially increasing the number of ancestors involved for each lineage sampled in the present population. The ancestral recombination graph (ARG, see \cite{griffiths1996ancestral}) requires a computational treatment that very quickly becomes prohibitive as the sample size increases. The work of McVean and Cardin (\cite{mcvean2005approximating}) made it possible, under an original hypothesis of Markovian dependence \textit{along the genome} (hence the term \textit{sequential}), to greatly restrict the space to be explored by statistical inference methods.
Several demographic parameter inference software packages then emerged, including the famous PSMC (for \textit{Pairwise Sequentially Markovian Coalescent}, \cite{Li2011}), which has been widely used since 2011 and estimates the variation of the population size (denoted $\lambda(t)$ in equation \eqref{lambda}) using the genetic data of a single diploid individual (a fully sequenced genome); see for example Figure \ref{fig:Estimation_PSMC}. \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{./PSMC1.png} \caption{Demographic inference obtained from human DNA from individuals of different populations (\cite{Li2011}). On the $x$-axis, the number of years in the past. On the $y$-axis, the renormalized size of the population, assumed to be panmictic.} \label{fig:Estimation_PSMC} \end{figure} \section{Consideration of population structure, central role of the IICR} Work in the first decade of this century (\cite{wakeley2001coalescent}, \cite{chikhi2001estimation}, \cite{chikhi2010confounding}) highlighted the effect that a structured population can have on statistics assuming panmixia, notably the detection of false bottleneck signals in some cases. After a preliminary study analyzing a simple case comparing an island model with a size-change model (\cite{mazet2015demographic}), we obtained a first result, which has become a nodal point of our subsequent research and which we present here. \subsubsection*{The IICR and the change in size} We highlighted the following result in \cite{Mazet2016}.
Whatever the chosen demographic model, the coalescence time $T_2$ of the lineages of two individuals sampled in the present population is a random variable with values in $\mathbb{R}^+$; it can thus be viewed as a lifetime with density $f_{T_2}$, and therefore admits a ``failure rate'', which here translates into an \textbf{(instantaneous) rate of coalescence} equal to $$ \mu(t)=\frac{f_{T_2}(t)}{\mathbb{P}(T_2>t)}. $$ The density of $T_2$ can thus always be written $$ f_{T_2}(t)=\mu(t)\exp\left( -\int_0^t \mu(\tau) \mathrm{d}\tau \right), $$ hence \begin{equation}\label{PT2gen} \mathbb{P}(T_2>t)=\exp\left( -\int_0^t \mu(\tau) \mathrm{d}\tau \right). \end{equation} If we now compare equation \eqref{PT2gen} with the particular case $k=2$ of equation \eqref{PTkch}, we see that in the panmictic case the size change $\lambda(t)$ is exactly equal to $\frac{1}{\mu(t)}$, that is, to the inverse of the instantaneous coalescence rate, denoted by the acronym \textbf{IICR}. Two important consequences can be drawn from this observation: \begin{enumerate} \item The $T_2$ distribution alone cannot identify the demographic model, since whatever the model, there is always a panmictic model that produces exactly this $T_2$ distribution: it suffices to choose the inverse of the coalescence rate as the size-change function. \item What software like PSMC infers, from the data of a single diploid genome however long it may be, is the IICR associated with the demographic model, which is usually \textbf{not the change in size} of the population when it is structured. \end{enumerate} Taking the second consequence further, as a proof of concept we built a constant-size demographic model based solely on a symmetric island model, with the number of islands also constant, in which only the migration parameter is allowed to vary.
As we can see in Figure \ref{fig:hum_niles}, extracted from \cite{Mazet2016}, the PSMC output on data simulated under this model is very similar to that obtained on real human data. \begin{figure}[ht] \centering \includegraphics[height=7cm]{./hum_niles.png} \caption{In red, the PSMC of real data from a human (CHN.A in Figure \ref{fig:Estimation_PSMC}). In green, the PSMCs of 10 simulations of the same island model, of constant size, with three changes of migration rate represented by the vertical dotted lines. The blue vertical lines represent identified periods that could be linked with those changes.} \label{fig:hum_niles} \end{figure} There is obviously no question of claiming that the human population is structured in symmetric islands and that its size has remained constant over the course of evolution, but this example prompts us to question the interpretation of the IICR, which is the object inferred by the PSMC, and shows that further investigation is necessary before drawing any conclusions about the demographic history of a population. \subsubsection*{Influence of the sample size} \label{Tk} Following the first consequence drawn above, we explored theoretically what additional information a third lineage could bring. We showed that in the simple case of a population structured in islands, adding the information of the $T_3$ distribution to that of the $T_2$ distribution is enough to distinguish this model from the panmictic model having the same $T_2$ distribution, and thus the same IICR (\cite{Grusea2018}). This result provides theoretical evidence that a sample size strictly greater than two is sufficient to distinguish an island-structured model from a panmictic model, but initial attempts to put this into practice have not yet been successful, because the precision required is often drowned in the noise of real data.
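The key fact exploited above, namely that a symmetric island model of constant size has a non-constant IICR, can be checked numerically. The sketch below is our own illustration (assuming SciPy is available): for two lineages the state space reduces to three states (same island, different islands, coalesced), and the IICR follows from the transition matrix $P_t=\mathrm{e}^{tQ}$.

```python
import numpy as np
from scipy.linalg import expm

def island_iicr(t, n, M):
    """IICR of a symmetric n-island model (islands of size 1, scaled migration
    rate M) for two lineages sampled in the same island. For two lineages the
    state space reduces to: 0 = same island, 1 = different islands, 2 = coalesced."""
    Q = np.array([
        [-(M + 1.0),     M,              1.0],  # migrate apart / coalesce
        [M / (n - 1.0),  -M / (n - 1.0), 0.0],  # migrate back together
        [0.0,            0.0,            0.0],  # coalescence is absorbing
    ])
    P = expm(t * Q)
    cdf = P[0, 2]          # P(T_2 <= t)
    pdf = (P @ Q)[0, 2]    # density f_{T_2}(t)
    return (1.0 - cdf) / pdf

# Every island keeps size 1 forever, yet the IICR is far from constant:
print([round(island_iicr(t, n=10, M=1.0), 2) for t in (0.0, 1.0, 20.0)])
```

Read as a population size by panmixia-assuming software, this increasing curve would be interpreted as a population decline, although no size change ever occurs in the model.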
\subsubsection*{Structure and IICR: sampling strategy} Exploratory work was then done (\cite{Chikhi2018}), using simulated data, to find out what signatures different types of structured models leave on the IICR, and thus indirectly (or directly when dealing with software of the same type as PSMC) what false signals of size change these models generate. As an illustration, we present in Figure \ref{fig:3iles}, extracted from this paper, the simulated IICRs in a model with three islands and asymmetric migration rates, obtained by sampling a diploid individual in each of the islands. We see that not only can the structure of the model give false signals of size change to software assuming panmixia, but the IICR is also \textbf{dependent on the sampling location}, for the same demographic history. This finding also deserves to be further explored for use in model selection. \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{./figure_3iles_Lounes.png} \caption{The size of each island is constant. However, the IICRs of each pair of sequences are time-varying functions, and these functions may even be non-monotonic. Furthermore, they differ depending on the island from which the pair is sampled.} \label{fig:3iles} \end{figure} \subsubsection*{The IICR as a model validation} The IICR can also be used as a \textbf{summary statistic} of a given model, for validation purposes. In the same paper (\cite{Chikhi2018}) we tested a number of models proposed in the literature for human evolution, for \textit{Homo sapiens} as well as for \textit{Homo neanderthalensis}. Simulating the IICR of some of these models allowed us to discard them, as the IICR produced differed radically from that estimated by PSMC on human data. Figure \ref{fig:lounes} illustrates this situation.
\begin{figure}[ht] \centering \includegraphics[height=6cm]{./fig_psmc_lounes.png} \caption{The PSMCs of real modern human and Neanderthal data (the last two in the order of the legend), set against the PSMCs of simulated data from different proposed models (the first six), which are thus radically different. Also shown, for the record, are the panmictic model suggested by \cite{Li2011} and the fictional structured model proposed by \cite{Mazet2016}. For a more detailed explanation, see \cite{Chikhi2018}.} \label{fig:lounes} \end{figure} \subsubsection*{IICR and structured coalescent} On the theoretical side, work has been undertaken to calculate, to arbitrary precision, the IICR of any structured model (\cite{rodriguez2018iicr}). The modeling initiated by Herbots provides a set of infinitesimal generators of Markov processes (see formula \eqref{Q}), and it is possible to exploit the semigroup property of the exponentials of these matrices. Indeed, changes in some parameters of the structured models, such as island sizes or migration rates, leave the state space of the process unchanged, so the generator can depend on time as a piecewise-constant function. For example, if we suppose that at a date $T$ in the past some of the parameters $M_{ij}$ or $s_i$ change, and if we denote by $Q_0$ the generator for times $0\leq t\leq T$ and by $Q_1$ the one corresponding to times $t>T$, the transition semigroup of the Markov chain can be written as follows: \[ P_{t} = \begin{cases} \mathrm{e}^{tQ_{0}}, & \text{ if } t\leq T \\ \mathrm{e}^{TQ_{0}}\mathrm{e}^{(t-T)Q_{1}},\ \ & \text{ otherwise}. \end{cases} \] In particular, the distribution of $T_k^{\alpha}$, the first coalescence time of lineages starting from a configuration $\alpha$ (with $k=2$ for the IICR), is deduced from \[ \mathbb{P}(T_{k}^\alpha \leq t)=P_t(n_{\alpha}, n_c), \] where $n_\alpha$ is the index of the state corresponding to $\alpha$, and $n_c$ the index of the coalescence state.
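A minimal numerical sketch of this piecewise semigroup (our own illustration, assuming SciPy; for two lineages in a symmetric $n$-island model the state space is just: same island, different islands, coalesced):

```python
import numpy as np
from scipy.linalg import expm

def two_lineage_Q(n, M):
    """Generator for two lineages in a symmetric n-island model (islands of
    size 1); states: 0 = same island, 1 = different islands, 2 = coalesced."""
    return np.array([
        [-(M + 1.0),     M,              1.0],
        [M / (n - 1.0),  -M / (n - 1.0), 0.0],
        [0.0,            0.0,            0.0],
    ])

def iicr_with_change(t, n, M0, M1, T):
    """IICR when the migration rate switches from M0 to M1 at time T in the
    past, using P_t = e^{tQ0} for t <= T and e^{TQ0} e^{(t-T)Q1} otherwise."""
    Q0, Q1 = two_lineage_Q(n, M0), two_lineage_Q(n, M1)
    if t <= T:
        P, Q = expm(t * Q0), Q0
    else:
        P, Q = expm(T * Q0) @ expm((t - T) * Q1), Q1
    cdf = P[0, 2]          # P(T_2 <= t), both lineages sampled in one island
    pdf = (P @ Q)[0, 2]    # density f_{T_2}(t)
    return (1.0 - cdf) / pdf

# A drop in migration at T = 1 reshapes the IICR at older times:
print(iicr_with_change(0.5, 10, 2.0, 0.2, 1.0),
      iicr_with_change(5.0, 10, 2.0, 0.2, 1.0))
```

When $M_0=M_1$ the two branches agree, by the semigroup property $\mathrm{e}^{TQ}\mathrm{e}^{(t-T)Q}=\mathrm{e}^{tQ}$; further parameter changes simply add more matrix factors.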
Its density is then equal to $f_{T_{k}^\alpha}(t) = P'_{t} (n_{\alpha}, n_c),$ where \[ P'_{t} = \begin{cases} \mathrm{e}^{tQ_{0}}Q_{0}, & \text{ if } t< T \\ \mathrm{e}^{TQ_{0}}\mathrm{e}^{(t-T)Q_1}Q_1,\ \ & \text{ otherwise}. \end{cases} \] These explanations allow to numerically determine the theoretical IICRs of a large number of structured models, such as the continent-islands model in Figure \ref{fig:ilescont}, with possible changes in demographic parameters, such as subpopulation sizes or migration rates. Also as a proof of concept, an extended fictional model of human evolution was proposed, integrating Neanderthals alongside modern humans in the same constant size structured model, with only the migration coefficients allowed to change. The simulated PSMCs are presented in Figure \ref{fig:sapiens_neand}. \begin{figure}[ht] \centering \includegraphics[height=6cm]{./fig_ilescont.png} \caption{Theoretical IICR of the structured model with a continent of size $1$, three islands of sizes $\frac{1}{20}$, and migration rates proportional to sizes between islands and the continent (no migration between islands). 
We recover, as in the simulations of \cite{Chikhi2018}, the importance of the sampling location, as well as clear false signals of population size change that software such as the PSMC would infer, even though the demographic model is constant over time.} \label{fig:ilescont} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{./fig_sapiens_neand.png} \caption{Superposition of the PSMCs of real Neanderthal and Sapiens data with the theoretical IICRs of the proposed structured model (left) and the PSMCs of the data simulated from this structured model (right).} \label{fig:sapiens_neand} \end{figure} \section{Inference of parameters in a structured model} \subsection*{IICR of $T_2$ for a structured model} The possibility (presented in \cite{rodriguez2018iicr}) of numerically computing the IICR of two sampled lineages in any structured model, as a function of demographic parameters such as the number of islands, the successive island sizes, the successive migration rates, and the times at which sizes or migration rates change, opens the way to estimating these parameters from IICRs inferred from real data, e.g. via the ubiquitous PSMC. The challenges of complexity and computation time, even in the simplest case of the symmetric island model, have been overcome thanks to the work of Armando Arredondo, as part of his PhD thesis (\cite{ArredondoThesis}), with the design and implementation of software for inferring such parameters, called SNIF (Structured Non-stationary Inferential Framework), presented in \cite{arredondo2021inferring}. Testing this software on simulated data revealed a first problem of identifiability between subpopulation sizes and migration rates. Second, to obtain an acceptable level of precision in the estimation of the migration rates, the number of different values they take over time (called the number of ``components'') should not be too large, generally not more than 5 or 6.
On the other hand, the estimated number of subpopulations is extremely reliable. A synthesis of these first results can be seen in Figure \ref{fig:SNIF}. \begin{figure}[!ht]\centering% \includegraphics{fig_main_validation-exact}% \caption{Scatter plots of simulated and inferred parameters. $n$ is the number of islands, $t_i$ the $i$-th change of value of the migration parameter, and $M_i$ the $i$-th value of the latter. Panel \textbf{(a)} corresponds to scenarios with $c=3$~components, and \textbf{(b)} to scenarios with $c=6$~components. The different sub-panels represent the simulated (horizontal axis) versus inferred (vertical axis) parameter values for all the parameters (or a representative selection of parameters in the case of panel \textbf{(b)}) of $L=400$ unscaled simulated scenarios.}\label{fig:SNIF}% \end{figure} An application to human data also yields a symmetric island model that explains surprisingly well the curve produced by the PSMC, see Figure \ref{fig:SNIF_PSMC}. \begin{figure}[!ht]\centering% \includegraphics{fig_main_application-humans-iicr-and-cg}% \caption{Results of performing demographic inference on three representative human PSMC curves. Panel \textbf{(a)} shows the various IICR plots inferred for the different populations, numbers of components $c$ and weight parameters $\omega$ used, together with the target IICR curves (or PSMC plots) on which these estimations are based. Panel \textbf{(b)} shows the connectivity graphs for the same set of inferred scenarios. As a reference point, the connectivity graph of the scenario proposed in \cite{rodriguez2018iicr} is also shown. The vertical axes represent migration rates ($M$).}\label{fig:SNIF_PSMC}% \end{figure} This method has already been used to contribute to the study of the evolution of mouse lemur species, Malagasy lemuriform primates (\textit{Microcebus murinus} and \textit{Microcebus ravelobensis}). The results are published in \cite{teixeira2021}.
Data from other mammal species are currently being analysed with this software. \subsection*{Increase of the sample size} In order to increase the precision of the estimation of the demographic parameters, it is natural to want to increase the size of the statistical sample. There are two possible ways to do this, detailed below. \subsubsection*{Computation of the IICR$_k$} The IICR of $T_k$ (denoted here IICR$_k$) for $k>2$ is easily computable in theory, thanks to the infinitesimal generator of equation \eqref{Q} and the extensions presented in section \ref{Tk}. Indeed, the IICR$_k$ associated with the first coalescence time $T_k$ of $k$ lineages can be defined in the same way as the IICR (which is in fact the IICR$_2$): $$ \text{IICR}_k(t)=\frac{\mathbb{P}(T_k>t)}{f_{T_k}(t)}. $$ While we know that in the panmictic case we have $$ \forall k\geq 2, \forall t>0, \qquad \text{IICR}_k(t)=\frac{1}{\binom{k}{2}}\text{IICR}_2(t), $$ this is not the case for a structured model (as we formally showed for the symmetric island model in \cite{Grusea2018}). There already exist powerful methods to estimate the IICR$_k$ from real genomic data of sample size $k$, notably the extensions of the PSMC called MSMC (for \textit{Multiple Sequentially Markov Coalescent}, see \cite{Schiffels2013}) and its more recent version MSMC2 (see \cite{schiffels2020msmc}). The practical problem comes from the fact that the larger the sample size, the shorter the first coalescence times, and thus the fewer genomic traces they leave in the data: the number of mutation and recombination events decreases very quickly and falls below the threshold required for satisfactory statistical estimation. \subsubsection*{Sites Frequency Spectrum} A less informative summary of a sample of size $k$, although widely used in population genetics, is the distribution of allele frequencies across sites, generally called the SFS (for \textit{Site Frequency Spectrum}).
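Returning to the binomial scaling of the IICR$_k$ above: in the panmictic constant-size case, $T_k$ is exponential with rate $\binom{k}{2}$ (in coalescent time units), and the relation can be checked directly (a small illustrative sketch):

```python
import math

def iicr_exponential(rate, t):
    """IICR of an exponential coalescence time: P(T > t) / f(t) = 1/rate."""
    survival = math.exp(-rate * t)
    density = rate * math.exp(-rate * t)
    return survival / density

k, t = 5, 1.7
rate_k = math.comb(k, 2)           # coalescence rate of the first of k lineages
iicr_2 = iicr_exponential(1, t)    # = 1 for the constant-size panmictic model
iicr_k = iicr_exponential(rate_k, t)
# iicr_k == iicr_2 / comb(k, 2), independently of t
```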
The average SFS is known for a panmictic model (see for example \cite{griffiths1998age}), but its calculation becomes combinatorially very complex for an arbitrary structured model. In the case of the island model, Armando Arredondo has recently completed the theoretical derivation of the average SFS for any value of $k$, and shown that it is computationally feasible for sample sizes $k \leq 26$ with current computing capabilities (\cite{ArredondoThesis}, chapter 3). It now remains to implement this algorithm in the inference software. \section{IICR and consideration of selection} All the models we have discussed so far are so-called neutral models, i.e. they do not take into account the selection pressure that individuals undergo through parts of their genomes, pressure which on average either increases their reproductive capacity (positive selection) or decreases it (negative selection). One way to model selection on genomic sequences is to assume that the portions of the genome under selection have an effective size different from that of the neutral regions (see for example \cite{charlesworth2009effective}, \cite{gossmann2011quantifying} or \cite{jimenez2016heterogeneity}). This makes the coalescence rate variable along the genome, since this rate is linked to reproductive capacity. Since the IICR is directly related to the coalescence rate, it is natural to explore how modeling selection through variability of the effective size along the genome influences the IICR (\cite{boitard2022heterogeneity}).
A theoretical calculation shows, under the simple hypothesis of a panmictic population, that if we assume the existence of $K$ zones of the genome with respective effective sizes $\lambda_i=\frac{1}{\mu_i}$ for $i=1\dots K$, each corresponding to a proportion $a_i$ of the genome (with $\sum_{i=1}^K a_i=1$), then the IICR is $$ \text{IICR}(t)=\frac{\sum\limits_{i=1}^K a_i\mathrm{e}^{-\mu_i t}}{\sum\limits_{i=1}^K a_i\mu_i\mathrm{e}^{-\mu_i t}}, $$ and a basic calculation indicates that for all values of $K$, $a_i$ and $\lambda_i$, this IICR is always increasing on $\mathbb{R}^+$, with $$ \text{IICR}(0)=\frac{1}{\sum\limits_{i=1}^K a_i\mu_i} \quad \text{and} \quad \lim_{t \to +\infty}\text{IICR}(t)=\max_{i=1 \dots K}\lambda_i. $$ Thus, under these assumptions, the largest effective size present in the genome, even in a very small proportion, has a non-negligible influence on the growth of the IICR as a function of time from the present to the past, inducing a false signal of an abrupt population decline in the more or less distant past, as can be seen in Figure \ref{fig:IICR_sel_pan}. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{./IICR_sel_pan.png} \caption{Example of IICR for $K=3$, $\lambda_1=0.1$, $\lambda_2=1$ and $\lambda_3=3$. On the left we set $a_3=0.01$ and on the right $a_1=0.5$.
The value $\lambda_3$ determines the limit, and $a_3$ the speed of convergence, with more or less pronounced transient plateaus depending on the other values.} \label{fig:IICR_sel_pan} \end{figure} More generally, if the population is not panmictic, denoting by $f_i(t)$ the density of the coalescence time $T^i_2$ and by $a_i$ the proportion of the $i$-th of the $K$ classes of the genome, we have $$ \text{IICR}(t)=\frac{\sum_{i=1}^K a_i \mathbb{P}(T^i_2>t)}{\sum_{i=1}^K a_i f_i (t)}, $$ and thanks to our previous work on island-structured models, we can combine structure and selection and numerically calculate the corresponding IICRs (see Figure \ref{fig:IICR_sel_str}). We then see that, on the one hand, even if we recover the same monotonic pattern as in the panmictic case, the structure partly hides the growth towards the limit value. On the other hand, for some (small but realistic) values of the migration rate, we observe, in an intermediate zone, a reversal of the monotonicity of the IICR as a function of the proportion of the large class: in general, the smaller the proportion of the large size class, the lower the IICR, except in those zones. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{./IICR_sel_str.png} \caption{Example of IICR for a symmetric 10-island model, migration rate $M$, and a genome with $K=2$ effective size classes $\lambda_1=0.1$ and $\lambda_2=1$ of relative proportions $a_1$ and $a_2$.} \label{fig:IICR_sel_str} \end{figure} These results on the links between modeling selection through the effective size and the IICR should again caution the researcher against jumping to conclusions about the demographic history of a population.
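A short sketch of the panmictic mixture formula, checking the two limit values numerically. The $\lambda_i$ and $a_3$ are those of the left panel of Figure \ref{fig:IICR_sel_pan}; splitting the remaining proportion equally between the first two classes is our own assumption:

```python
import numpy as np

def iicr_mixture(t, a, lam):
    """IICR of a panmictic genome with K effective-size classes:
    sum_i a_i exp(-mu_i t) / sum_i a_i mu_i exp(-mu_i t), with mu_i = 1/lambda_i."""
    a, mu = np.asarray(a, float), 1.0 / np.asarray(lam, float)
    survival = np.sum(a * np.exp(-mu * t))
    density = np.sum(a * mu * np.exp(-mu * t))
    return survival / density

a = [0.495, 0.495, 0.01]           # proportions a_i, summing to 1
lam = [0.1, 1.0, 3.0]              # effective sizes lambda_i
iicr0 = iicr_mixture(0.0, a, lam)       # = 1 / sum(a_i * mu_i)
iicr_far = iicr_mixture(50.0, a, lam)   # -> max(lambda_i) = 3
```

The IICR is increasing in $t$, so even the 1\% class of size 3 eventually dominates the curve.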
\section{Conclusion and prospects} In summary, the IICR of $T_2$, despite its intrinsic limitations, on the one hand because it is the distribution of a variable which is not directly observable, and on the other hand because it is based on a sample of size 2, proves to be an extremely fertile modeling object. The matrix formulation of this function of time\footnote{It should be noted that the mathematical objects studied in our work are part of a general formalism coming from the theory of phase-type distributions, see for example \cite{hobolth2019phase}.} facilitates precise numerical calculations (\cite{rodriguez2018iicr}) and makes powerful inference methods accessible, such as the recently developed SNIF program (\cite{arredondo2021inferring}). The IICR also sheds new light on a concept with which it is naturally associated, that of \textit{effective size}, which is the source of an abundant literature (see, for example, \cite{charlesworth2009effective}, for a complete review) and is subject to many different, sometimes contradictory, interpretations. The starting point is the direct correspondence between the IICR and the population size under panmixia. But a first level of added complexity, the introduction of population structure, quickly leads to erroneous conclusions about size changes. We can thus observe variations in the effective size that do not correspond to those of the real size, or even go in the opposite direction (\cite{rodriguez2018iicr}, \cite{Chikhi2018}). Similarly, it is useful to know how to detect effective size variations induced by the introduction of genomic regions under selection, positive or negative (\cite{boitard2022heterogeneity}). Another crucial contribution of the IICR is to highlight the importance of the sampling strategy (\cite{Chikhi2018}), the exploitation of which should allow new methods for model selection.
Among the other more immediate prospects, we note the continuation of the theoretical study of the IICR for models somewhat more elaborate than the symmetric island model (first of all the asymmetric island model and the continent-islands model, and then the one- or two-dimensional stepping-stone model), with the objective of highlighting the influence of the values of the demographic parameters on the variations of the IICR, via the eigenvalues of the associated infinitesimal generator. Finally, in order to increase the predictive and explanatory capacities of our models by enlarging the sample size, we plan to extend the existing inference method (\cite{arredondo2021inferring}) by incorporating, on the one hand, the average SFS of an island model and, on the other hand, the IICR of $T_k$. \newpage \bibliographystyle{apalike}
\section{Introduction} The rich cluster RXJ1347-1145 ($z=0.45$) is the most X-ray luminous galaxy cluster known \citep{schindler95,schindler97,allen02} and has been the object of extensive study at radio, millimeter, submillimeter, optical and X-ray wavelengths \citep{kitayama04,komatsu01,Gitti07,allen02,schindler97,pointecouteau99,ota08, cohen02,bradac08,miranda08}. Discovered in the ROSAT All-Sky Survey, RXJ1347-1145 was originally thought to be a dynamically old, relaxed system \citep{schindler95,schindler97} based on its smooth, strongly-peaked X-ray morphology--- a prototypical relaxed ``cooling-flow'' cluster. However, high-resolution observations ($13''$ FWHM, smoothed to $\sim 19''$ in the presented map) of the Sunyaev-Zel'dovich effect (SZE) at 150 GHz, made with the NOBA 7-bolometer system on the 45-meter Nobeyama telescope \citep{kitayama04,komatsu01}, indicate a strong enhancement of the SZE $20'' \, (170 \, {\rm kpc})$ to the south-east of the peak of the X-ray emission. Hints of this asymmetry had been seen in earlier, lower resolution measurements with the Diabolo $2.1$~mm photometer on the IRAM 30-m \citep{pointecouteau99}. The enhancement has been interpreted as being due to hot ($T_e > 20 \, {\rm keV}$) gas, which is more difficult to detect in X-rays than cooler gas owing to the lower responsivities of imaging X-ray telescopes such as Chandra and XMM at energies above $\sim 10 \, {\rm keV}$. In contrast, the SZE intensity is proportional to $T_e$ up to arbitrarily high temperatures, aside from relativistic corrections which are weak at 90 GHz, so such hot gas stands out. The feature is consistent with the presence of a large substructure of gas in the intra-cluster medium (ICM) shock-heated by a merger, as is seen in the ``Bullet Cluster'' 1E0657-56 \citep{markevitch02}; this interpretation has been supported by more recent observations \citep[e.g.][]{allen02,ota08}.
{\it Thus, rather than being an example of a hydrostatic, relaxed system, high-resolution SZE observations suggest that the observed properties of the ICM in RXJ1347-1145 are strongly affected by an ongoing merger.} This is a striking cautionary tale for ongoing blind SZE surveys \citep{carlstrom02}, for which useful X-ray data will be difficult or impossible to obtain for many high-$z$ systems, as well as a sign that our current understanding of nearby, well-studied X-ray clusters may be dramatically incomplete. Reports \citep{komatsu01,pointecouteau01,kitayama04} of a strong enhancement of the SZE away from the cluster center are based on relatively low-resolution images compared to the size of the offsets and features involved. SZE images at lower frequencies also show substantial offsets between the peaks of the X-ray and SZE emission; for instance, the 21 GHz SZE peak \citep{komatsu01} is $\sim 20''$ to the SE of the X-ray peak, and the 30 GHz SZE peak \citep{reese02} is $\sim 13''$ to the SE of the X-ray peak. The situation is further complicated by the presence of a radio source in the center of the cluster. We have sought to test these claims, and to begin to untangle the astrophysics of this interesting system, with higher resolution imaging at a complementary frequency. In this paper we present the highest angular resolution image of the SZE yet made. We observed RXJ1347-1145 with the MUSTANG 90 GHz bolometer array on the Robert C. Byrd Green Bank Telescope (GBT). At the redshift of the cluster (and assuming $\Omega_{M}= 0.3,\Omega_{tot}=1, h=0.73$) the GBT+MUSTANG $9''$ beam corresponds to a projected length of 54 kpc. The observations are described in \S~\ref{sec:obs} and the data reduction in \S~\ref{sec:reduc}. Our interpretation and conclusions are presented in \S~\ref{sec:concl}. \section{Instrument \& Observations} \label{sec:obs} MUSTANG is a 64 pixel TES bolometer array built for use on the 100-m GBT \citep{gbtref}.
MUSTANG uses reimaging optics with a pixel spacing of $0.63 f\lambda$, operates in a bandpass of 81--99~GHz, and is cooled by a pulse tube cooler and a Helium-4/Helium-3 closed cycle refrigerator. Technical details about MUSTANG can be found in \citet{dicker2006,dicker2008}; more detailed information about MUSTANG, the observing strategy, and the data analysis algorithms is provided in \citet{agn} and \citet{orion}, which present the results of other early MUSTANG observations. Further information can be obtained at the MUSTANG web site\footnote{{\tt http://www.gb.nrao.edu/mustang/}}. The observations we present were collected in two runs, one on 21 Feb. 2009 and one on 25 Feb. 2009, each approximately four hours in duration including time spent setting up the receiver and collecting calibration observations. For both runs the sky was clear with $\sim 6 \, {\rm mm}$ of precipitable water vapor, corresponding to $\sim 20$~K zenith atmospheric loading at $3.3 \, {\rm mm}$. Both were night-time sessions, important because during the day variable gradients in the temperature of the telescope structure significantly degrade its 90 GHz performance. The telescope focus was determined by collecting small maps of a bright calibrator source at a range of focus settings; every 30--40 minutes throughout the session the beam was checked on the calibrator. Typically the required focus corrections are stable to a few millimeters over several hours once residual daytime thermal gradients have decayed. Once the focus was established, the in-focus and out-of-focus beam maps were used to solve for primary aperture wavefront phase errors using the ``Out-of-Focus'' (OOF) holography technique described by \citet{bojanoof}. The solutions were applied to the active surface. This procedure improves the beam shape and increases the telescope peak forward gain, typically by $\sim 30\%$.
This approach is effective at correcting phase errors on scales of 20 meters or larger on the dish surface, but is not sufficiently sensitive to solve for smaller scales. Therefore there are residual uncorrected wavefront errors that result in sidelobes out to $\sim 40''$ from the main beam. Deep beam maps were collected on the brightest 90 GHz sources on several occasions, principally in test runs on 24/25 March 2009. The repeatability of the GBT 90 GHz beam after application of the OOF solutions was found to be good. The analysis of the beam map data is discussed in \S~\ref{sec:beam}. Maps of RXJ1347-1145 (J2000 coordinates $13h47m30.5s$, $-11^{\circ}45'09''$) were collected with a variety of scan patterns designed to simultaneously maximize the on-source time and the speed at which the telescope moved when crossing the source. The effects of atmospheric and instrumental fluctuations, which become larger on longer timescales, are reduced by faster scan speeds. The primary mapping strategies were: a) a ``daisy'' or ``spirograph'' scan in which the source of interest is frequently recrossed; and b) a ``billiard ball'' scan, which moves at an approximately constant speed and has more uniform coverage over a square region of interest. The nominal region of interest in this case was $5'\times 5'$, centered on RXJ1347-1145. The size of the maps is sufficiently small that, except under the most exceptionally stable conditions, instrument and random atmosphere drifts dominate the constant atmosphere (${\rm sec(za)}$) term and any possible ground pickup. The total integration time on source was $3.4 \, {\rm h}$. The asteroid Ceres was observed on both nights and used as the primary flux calibrator assuming $T_B = 148 \, {\rm K}$ (T. Mueller, private comm.). We assign a 15\% uncertainty to this calibration. We checked the Ceres calibration on nights when other sources (Saturn, CRL2688) were visible and found consistent results to within the stated uncertainty.
Using these observations and the lab-measured receiver optical efficiency of $\eta_{opt,rx} = 50 \pm 10 \%$ we compute an overall aperture efficiency of $\eta_{aperture,gbt} = 20\%$, corresponding to a Ruze-equivalent surface RMS of $315 {\, \rm \mu m}$. This result is consistent with recent traditional holographic measurements of the GBT surface. Since the observations presented here were taken, the surface has been reset based on further holography maps and now has a surface RMS, weighted by the MUSTANG illumination pattern, of $\sim 250 {\, \rm \mu m}$. \section{Data Reduction} \label{sec:reduc} \subsection{Beam Characterization} \label{sec:beam} Imaging diffuse, extended structure requires a good understanding of the instrument and telescope beam response on the sky. To achieve this we collected numerous beam maps throughout our observing runs, including several deep beam maps on bright ($5 \, {\rm Jy}$ or more) sources. After applying the Out-of-Focus holography corrections to the aperture, the beam results were repeatable; Figure~\ref{fig:beams} shows the radial beam profile from maps of a bright source (3C279) collected on two occasions. We find a significant error beam concentrated around the main lobe which increases the beam volume from $87 \, {\rm arcsec^2}$ (for the core component only) to $145 \, {\rm arcsec^2}$. We attribute this error beam to residual medium and small scale phase errors on the primary aperture. The beam shape and volume are taken into account when comparing to model predictions. By way of comparison, Figure~\ref{fig:beams} also shows the profile of the beam determined from the radio source in the center of RXJ1347-1145. Since the SZ map has been smoothed, the apparent beam is slightly broader but, allowing for this, still consistent with the beam determined on 3C279. \begin{figure}[h!] \includegraphics[width = 3.25in, height=2.5in]{mustangbeamsMar10.pdf} \caption{MUSTANG+GBT beams determined from observations of 3C279.
The green dash triple-dot line shows a double-Gaussian fit to the observed beam. The purple dashed line is the cumulative fractional beam volume. We attribute the excess power in the wings of the observed beam to residual medium and small scale phase errors on the dish. The data points (diamonds with error bars) show a complementary determination of the beam from the 5 mJy radio source in the center of RXJ1347-1145. The beam in this case is slightly wider due to the smoothing ($4''$ FWHM) applied to the final map, which is accounted for in the analysis.} \label{fig:beams} \end{figure} \subsection{Imaging} A number of systematic effects must be taken into account in the time domain data before forming the image: \begin{enumerate} \item The responsivities of individual detectors are measured using an internal calibration lamp that is pulsed periodically. Optically non-responsive detectors ($10$--$15$ out of 64) are flagged for removal from subsequent analysis. Typical detector responsivities are stable to $2$--$3\%$ over the course of several hours. \item Common mode systematic signals are subtracted from the data. These are caused by atmospheric and instrumental (thermal) fluctuations. The pulse tube cooler, which provides the 3~K base temperature of the receiver, induces a $1.4$~Hz signal due to small emission fluctuations of the 3~K optics. The pulse-tube signal is removed by fitting and subtracting a $1.41$~Hz sine wave. The remaining common mode signal is represented by a template formed by a weighted average of data from good pixels; this template is low-pass filtered and subtracted from the data, with a fitted amplitude per detector. The low-pass filter cut-off frequency (typically $0.1 \, {\rm Hz}$) is determined by the stability of the data in question. This procedure helps to preserve large-scale structure in the maps. \item Slow residual per-pixel drifts are removed using low-order orthogonal polynomials.
\item Individual detector weights are computed from the residual detector timestreams after the above steps. Since the noise level of the detectors varies considerably, this is an important step. Best results are obtained by retaining only the top $\sim 80\%$ of responsive detectors. \item The remaining calibrated detector timestreams are inspected visually on a per-scan (typically 5 minute) basis. Scans whose timestreams contain obvious, poorly-removed systematic signals are removed. This results in flagging $28\%$ of scans. The SNR in an individual detector timestream is sufficiently low that this does not bias our map. \end{enumerate} Following these calibration steps the detector timestream data are gridded onto a $2''$ pixellization in Right Ascension and Declination using a cloud-in-cell gridding kernel. To check our results we have implemented three, mostly independent, analysis pipelines. The results in this paper are based on the straightforward but flexible single-pass pipeline written in IDL described above. There is also an iterative, single-dish CLEAN based approach implemented in the OBIT package \citep{obit}, and an optimal-SNR method in which the time domain data are decomposed into noise (covariance) eigenvectors, their temporal power spectra computed, and a maximum likelihood map constructed from the noise-weighted eigenvectors. Results obtained with these algorithms were consistent. The first two approaches are described in more detail in \citet{orion}. Our final map, smoothed by a $4''$ FWHM Gaussian and gridded on $0''.5$ pixels, is shown in Figure~\ref{fig:finalmap}, along with the difference between the maps of the two individual nights. It shows a strong, clear SZ decrement, well separated from the central point source and consistent with the level expected from the \citet[][hereafter K04]{kitayama04} 150 GHz measurement. The right hand panel shows the image formed by differencing the images of the two individual nights.
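The cloud-in-cell gridding step mentioned above distributes each time sample over the four nearest map pixels with bilinear weights; a minimal sketch (the function and array names are illustrative, not those of the actual IDL pipeline):

```python
import numpy as np

def cic_grid(x, y, values, nx, ny):
    """Grid irregular samples onto an (ny, nx) map with cloud-in-cell
    (bilinear) weights. x, y are sample positions in pixel units."""
    grid = np.zeros((ny, nx))
    wsum = np.zeros((ny, nx))
    i0, j0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = x - i0, y - j0
    # Each sample contributes to its four surrounding pixels.
    for di, dj, w in [(0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                      (0, 1, (1 - fx) * fy),       (1, 1, fx * fy)]:
        np.add.at(grid, (j0 + dj, i0 + di), w * values)
        np.add.at(wsum, (j0 + dj, i0 + di), w)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(wsum > 0, grid / wsum, 0.0)

# A sample exactly on a pixel center lands entirely in that pixel:
m = cic_grid(np.array([2.0]), np.array([3.0]), np.array([5.0]), 8, 8)
```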
By computing the RMS in a fiducial region in the center of the difference image (and scaling down by a factor of $2$ to account for the differencing and the shorter integration times) we estimate a map-center image noise of $\sim 0.3 \, {\rm mJy/bm}$ (rms). The noise level in regions of the map outside the fiducial region is corrected for exposure time variations assuming Gaussian, random noise with a white power spectrum. The enhancement of the SZE to the south-east of the X-ray peak, originally detected by Komatsu et al. at $4.2\sigma$ significance, is confirmed by our measurement at $5.4 \sigma$ (the peak SNR per beam) with a factor of $\sim 2$ greater angular resolution. A detailed assessment of the impact of residual noise fluctuations is presented in \S~\ref{sec:sims}. Work is underway to develop analysis techniques which account for correlated noise in a way that permits quantitative model fitting. \begin{figure*}[h!] \includegraphics[width=6in]{Threemap_1347.pdf} \caption{MUSTANG image of the SZE in RXJ1347-1145 (left); the same, with the point source subtracted as described in the text (center); and the individual nights imaged separately and differenced, on the same color scale (right). The noise in the center of the map is $\sim 0.3 \, {\rm mJy/bm}$; contours in the left panel correspond to SNR of 1 to 5 in $1\sigma$ increments and account for variations in integration time across the map, so are not directly proportional to the image. Color scale units are mJy/bm. The MUSTANG beam ($10''$ FWHM after smoothing) is shown in the lower right of each panel.} \label{fig:finalmap} \end{figure*} \subsection{Simulations} \label{sec:sims} It is difficult to measure diffuse, extended structure such as the SZE, particularly in the presence of potentially contaminating systematic signals such as time-varying atmospheric fluctuations.
To assess the impact of residual, unmodelled noise fluctuations in the maps, we have undertaken an extensive suite of simulations which replace the raw detector timestream data with simulated data. As a source for the simulated data we used real detector timestreams from observations of a blank patch of sky collected for another project. The phase of these timestreams with respect to the telescope trajectory on RXJ1347-1145 was randomly shifted to create different instances of noisy cluster observations. We added simulated astronomical signals as described below in order to determine how well the (known) input signals are recovered in the maps. To assess the spatial fidelity of our reconstructed images, random white-noise skies were generated on $2''$ grids and subsequently smoothed by a $9''$ (FWHM) Gaussian. These sky maps served as input to generate fake timestreams, which were then processed by the same processing scripts used to produce the image in Figure~\ref{fig:finalmap}. The ratio of the absolute magnitude of the Fourier transform of the reconstructed sky map to the absolute magnitude of the Fourier transform of the input sky map measures the fidelity of our image reconstructions as a function of angular scale. The results of repeating this 100 times, with different white-noise skies and noise instances, are shown in Figure~\ref{fig:sim}. We find that our pipeline faithfully recovers structures on scales up to $60''$, with reasonable response but some loss of amplitude on larger scales, up to $120''$. The loss of structure on small angular scales is an effect of our relatively coarse pixellization.
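The fidelity metric described above, the ratio of Fourier amplitudes as a function of angular scale, can be sketched as follows; the azimuthal binning and function names are illustrative assumptions, not the pipeline's actual implementation:

```python
import numpy as np

def transfer_function(input_map, output_map, nbins=20):
    """Azimuthally averaged ratio |FFT(output)| / |FFT(input)| versus
    spatial frequency, as a per-scale measure of map fidelity."""
    fin = np.abs(np.fft.fft2(input_map))
    fout = np.abs(np.fft.fft2(output_map))
    ny, nx = input_map.shape
    ky, kx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing="ij")
    k = np.hypot(kx, ky)
    bins = np.linspace(0.0, k.max(), nbins + 1)
    which = np.digitize(k.ravel(), bins) - 1
    num = np.bincount(which, weights=fout.ravel(), minlength=nbins)
    den = np.bincount(which, weights=fin.ravel(), minlength=nbins)
    return np.where(den > 0, num / np.maximum(den, 1e-30), 0.0)[:nbins]

# Sanity check: a pipeline that does nothing has unit transfer on all scales.
sky = np.random.default_rng(0).normal(size=(64, 64))
tf = transfer_function(sky, sky.copy())
```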
Simulations were carried out at signal-to-noise ratios similar to those in our final map, although varying the signal-to-noise ratio by over a factor of 5 produced no significant change in our transfer function. The common-mode subtraction, essential to removing atmospheric and instrumental systematic signals, can also introduce negative bowls around bright point sources, which could mimic the SZE in cases such as RXJ1347-1145. To determine the magnitude of this systematic we have followed a similar approach. Instead of white-noise skies, the input signal consists of a single unresolved source with a flux density of 5~mJy at the location of the radio source seen in RXJ1347-1145. The resulting negative bowl in the reconstructed images has a mean peak spurious decrement $\sim 2\%$ of the point source peak brightness, in comparison with the $\sim 50\%$ decrement seen in our real data. Additionally, the iterative pipeline (OBIT) is much less susceptible to such artifacts and shows consistent results. We conclude that this is not a significant contribution to our result. \subsection{Image Domain Noise Estimate} We divide the data set in half and subtract the individual night images to obtain a difference map. The RMS of this difference map in the central 85 by 93 arcseconds, divided by two to correct for the differencing and the reduced integration time in each individual night image, gives an image noise level of $0.3 \, {\rm mJy/bm}$. A histogram of the pixel values in this region of the difference image is shown in Figure \ref{fig:hist}. \begin{figure}[h!] \includegraphics[width = 3.25in, height=2.5in]{Hists2.pdf} \caption{Histogram of pixel values in the difference image in Figure~\ref{fig:finalmap}. The histogram is well described by a Gaussian distribution with $1 \sigma = 0.3 \, {\rm mJy/bm}$.} \label{fig:hist} \end{figure} We obtain a complementary estimate of the noise from a region of the final SZ map well away from the cluster.
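The jackknife noise estimate described above (RMS of the difference of the two nightly maps, scaled down by a factor of two: one $\sqrt{2}$ for the differencing and one for the halved integration time) can be sketched with synthetic maps standing in for the real nightly images:

```python
import numpy as np

def difference_noise(map_night1, map_night2):
    """Per-pixel map noise of the combined map from a two-way jackknife.
    Differencing adds the two noise terms in quadrature (sqrt(2)), and each
    half-map has sqrt(2) more noise than the full map: a factor 2 in total."""
    return np.std(map_night1 - map_night2) / 2.0

# Two synthetic "nightly" maps: identical signal, independent noise of rms 0.6.
# The combined-map noise should come out near 0.6 / sqrt(2) ~ 0.42; any common
# astronomical signal cancels in the difference.
rng = np.random.default_rng(1)
signal = rng.normal(size=(85, 93))
n1 = signal + rng.normal(scale=0.6, size=(85, 93))
n2 = signal + rng.normal(scale=0.6, size=(85, 93))
sigma = difference_noise(n1, n2)
```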
Correcting for the difference in integration times in these regions of the map, this result is consistent to within 8\%. Using this noise figure, the peak SNR per beam in the map---on the SZE decrement SE of the cluster core---is $5.4\sigma$. More aggressive filtering of the map results in an even higher detection significance for the SE enhancement by reducing the low spatial frequency tail of the noise power spectrum (Figure~\ref{fig:sim}). \subsection{The Effects, and Subtraction, of the Central Radio Source} Our final map has sufficient angular resolution to distinguish the central radio source from the structures of interest. In particular it is clear that, as seen in earlier analyses of the SZE in this cluster \citep{pointecouteau99}, there is a strong azimuthal variation in the intensity of the SZE at a radius of $\sim 20''$ from the X-ray centroid, which also coincides with the radio source. To produce a source-subtracted image we fit and subtract an azimuthally symmetric, double-Gaussian beam (as determined from 3C279 in \S~\ref{sec:beam}). The reason for assuming azimuthal symmetry is that the hour angle sampling of the 3C279 data is considerably more limited than that of the RXJ1347-1145 data; therefore the 3C279 data will not provide a good measurement of the effective two-dimensional beam, only of its average radial profile. Furthermore, the SNR on the point source in RXJ1347-1145 is insufficient to measure significant departures from azimuthal symmetry. The average radial profile of the central source in RXJ1347-1145 is shown in Figure~\ref{fig:beams} out to $r=15''$, where the signal becomes too weak to measure above thermal noise and variations in the SZE. \subsection{Effect of Background Anisotropies} The angular scales reconstructed in our map ($\sim 1'.5$ and smaller) correspond to spherical harmonic multipoles of $\ell = 7200$ and higher.
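The multipole quoted above follows from the flat-sky correspondence $\ell \simeq \pi/\theta$ for an angular scale $\theta$ in radians; a quick numerical check of the conversion:

```python
import numpy as np

arcmin = np.pi / (180.0 * 60.0)   # radians per arcminute
theta = 1.5 * arcmin              # largest scale reconstructed in the map
ell = np.pi / theta               # flat-sky correspondence ell ~ pi / theta
# ell = 7200, the multipole quoted in the text
```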
On these scales intrinsic CMB anisotropies are strongly suppressed by photon diffusive damping at the last scattering surface and do not contribute measurably to our result at the sensitivity level we have achieved. \begin{figure}[h!] \epsscale{1.15} \includegraphics[width = 3.5in, height=3in]{transfer_functionCropped.pdf} \caption{Map noise characteristics. {\it Solid curve}: The ratio of the absolute magnitude of the Fourier transform of the output map to the absolute magnitude of the Fourier transform of the input map from simulations. All structure on scales smaller than $1'$ was recovered well, although there is a fall-off towards high spatial frequencies due to pixellization effects. {\it Diamonds}: The absolute magnitude of the Fourier transform of the (reconstructed simulated) cluster map, normalized to a peak value of unity. {\it Plus marks}: Absolute magnitude of the Fourier transform of signal-free simulated maps, with the same normalization as the cluster data. The dashed line shows the size of the MUSTANG instantaneous field of view.} \label{fig:sim} \end{figure} \begin{figure}[h!] \includegraphics[width = 3.5in, height=3.5in]{Nobeyama_compare2.pdf} \caption{Comparison of NOBA and MUSTANG maps of RX J1347-1145. The color scale shows the MUSTANG data with $5''$ pixels, smoothed to match the published NOBA map resolution. The contours show the NOBA map at intervals of $1.16\times 10^{-4}$ in $y$ starting from $5\times 10^{-5}$. Three labeled features are discussed in the text: {\it 1}, the hot shock south-east of the cluster core; {\it 2}, an enhancement in integrated pressure to the east; and {\it 3}, a compact decrement observed at $\sim 3 \sigma$ by NOBA that is absent from the MUSTANG image.} \label{fig:Noba} \end{figure} \begin{figure}[h!] \includegraphics[width = 3.25in, height=3in]{rxjModelThickWithDataAndBeam.pdf} \caption{MUSTANG SZE image of RXJ1347-1145 with contours (thin green lines) at $-1.5$, $-1.0$, and $-0.5$ mJy/bm.
The bold white contours, at the same surface brightness levels, show the model SZ signal discussed in the text.} \label{fig:szmodel} \end{figure} \begin{figure*} \includegraphics[width=6in]{Composite_and_xray_Oct13_2009.pdf} \caption{{\bf Left:} False color composite image of RXJ1347-1145. Red/blue: MUSTANG SZE. Green: Archival HST/ACS image taken through the F814W filter; and white contours: Surface mass density $\kappa$ from the weak + strong lensing analysis of \cite{bradac08}. Contours are linearly spaced in intervals of $\Delta\kappa = 0.1$ beginning at $\kappa = 1.0$. Several features are labeled: {\it A} indicates the central BCG, which is a radio source; {\it B} indicates the BCG of the secondary cluster; and {\it 3} (also labeled in the right-hand panel) indicates the location of the discrepancy between NOBA and MUSTANG, discussed in the text and Figure~\ref{fig:Noba}. The diamond, cross and box mark the locations of the peaks in X-ray surface brightness, surface mass density and SZE decrement respectively. {\bf Right:} Contours of the MUSTANG decrement SNR ($1\sigma$ to $5 \sigma$ in $1\sigma$ increments) superposed on the Chandra count-rate image smoothed to $10''$ resolution.} \label{fig:composite} \end{figure*} \section{Interpretation \& Conclusions} \label{sec:concl} \subsection{Comparison with Previous SZE Observations} Figure~\ref{fig:Noba} presents a direct comparison of the MUSTANG and NOBA results in units of main-beam averaged Compton $y$ parameter. For a more accurate comparison, we downgrade the resolution and pixel scale of the MUSTANG map to match those of NOBA ($13''$ FWHM on a $5''$ pixel grid). The overall agreement between the maps is excellent, in particular as regards the amplitude and morphology of the local enhancement of the SZE south-east of the cluster core. The largest discrepancy is south-west of the cluster, where NOBA shows a $3\sigma$ compact decrement which is absent from the MUSTANG data.
Considering the low and uniform X-ray surface brightness in the vicinity of this discrepancy (see Figure~\ref{fig:composite}) and the higher angular resolution and lower noise of the MUSTANG data, it is likely that this feature is a spurious artifact in the NOBA map. Both datasets also show a ridge extending north from the shock front on the eastern side of the cluster. In the 150 GHz map the feature is of marginal significance ($1-2\sigma$); interestingly, it is clearly visible in the 350 GHz SZE increment map, but K04 dismiss it due to the possibility of confusing dust emission from the nearby galaxies. \subsection{Empirical Model of the SZE in RXJ1347-1145} We construct a simple empirical model for the cluster SZE assuming the isothermal $\beta$-model of \citet{schindler97}, normalized by the SZE measurements of \citet{reese02} and \citet{kitayama04}, to describe the bulk cluster emission. We add a 5 mJy point source in the cluster core, coincident with the peak of the $\beta$-model, and two Gaussian components in integrated pressure, one south-east and one almost directly east of the cluster center. In comparing to our 90 GHz data, we use the relativistic correction of \citet{sazonov98}, assuming $kT = 25 \, {\rm keV}$ (which reduces the amplitude of the decrement by 15\%) for the Gaussian components and $kT = 10 \, {\rm keV}$ for the bulk component. The parameters chosen (two Gaussian widths for each component, a position, a peak surface brightness, and a position angle) are shown in Table~\ref{tbl:szmodel}. The resulting sky image is convolved with our PSF (\S~\ref{sec:beam}) and transfer function (\S~\ref{sec:sims}). We find that this provides a good match to the data (Figure~\ref{fig:szmodel}). The peak comptonization at $10''$ Gaussian resolution is $3.9 \times 10^{-4}$ on the eastern ridge and $6.0 \times 10^{-4}$ on the region identified as a shock by Komatsu et al.
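The ingredients of this model can be sketched in a few lines of code. This is an illustration only, using the Table~\ref{tbl:szmodel} parameters in Comptonization units: the relativistic corrections, point source, PSF, and transfer-function convolution applied in the analysis are omitted, and the mapping of the table's (north, east) offsets onto grid coordinates is our assumption.

```python
import numpy as np

def beta_model(e, n, y0=1.0e-3, theta_c=10.0, beta=0.60):
    """Isothermal beta-model Comptonization profile (Table 1 bulk component)."""
    r2 = (e**2 + n**2) / theta_c**2
    return y0 * (1.0 + r2) ** (0.5 - 1.5 * beta)

def elliptical_gaussian(e, n, amp, e0, n0, sig1, sig2, pa_deg):
    """Elliptical Gaussian Comptonization component with position angle pa_deg."""
    pa = np.radians(pa_deg)
    u = (e - e0) * np.cos(pa) + (n - n0) * np.sin(pa)
    v = -(e - e0) * np.sin(pa) + (n - n0) * np.cos(pa)
    return amp * np.exp(-0.5 * (u**2 / sig1**2 + v**2 / sig2**2))

# grid of (east, north) offsets in arcsec from the X-ray peak
g = np.arange(-60.0, 60.0, 2.0)
e, n = np.meshgrid(g, g)

y_map = (beta_model(e, n)
         + elliptical_gaussian(e, n, 1.6e-3, 14.0, -14.0, 8.0, 2.0, 45.0)   # "shock"
         + elliptical_gaussian(e, n, 1.0e-3, 14.0, 10.0, 8.0, 2.0, -15.0))  # "ridge"
```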
When convolved to $19''$ FWHM (NOBA) resolution, we find $\Delta y = 3.9 \times 10^{-4}$, close to their observed value $\Delta y = 4.1 \times 10^{-4}$. The intent of this static, phenomenological model is simply to provide a description of the observed high angular-resolution SZE and a direct comparison of the NOBA and MUSTANG results. Work is underway that will allow the best-fit physical model to be determined quantitatively by simultaneously fitting datasets at multiple wavelengths using a Markov Chain Monte Carlo. This work is beyond the scope of this paper and will be presented in a follow-up publication. \begin{table*} \begin{center} \begin{tabular}{llll} Component & Amplitude & Offset & Notes \\ & [$y/10^{-3}$] & [$''$] & \\ \hline $\beta$-model & $1.0$ & $0,0$ & $\theta_c = 10''$, $\beta=0.60$ \\ Shock & $1.6$ & $-14, 14$ & $\sigma_1=8''$, $\sigma_2=2''$, ${\rm P.A.}=45^{\circ}$ \\ Ridge & $1.0$ & $10, 14$ & $\sigma_1=8''$, $\sigma_2=2''$, ${\rm P.A.}=-15^{\circ}$ \\ \hline \end{tabular} \caption{Parameters of the empirical SZE model. Offsets are given as (north, east) of the peak X-ray position.} \label{tbl:szmodel} \end{center} \end{table*} \subsection{Multi-wavelength Phenomenology} Our data show an SZE decrement with an overall significance of $5.4 \sigma$. At the center of the cluster, coincident with the peak of X-ray emission and the brightest cluster galaxy (BCG), there is an unresolved $5 \,{\rm mJy}$ radio source. This flux density is consistent with the 90 GHz flux density presented in \citet{pointecouteau01}, as well as with a power law extrapolation of $1.4$ GHz and 30 GHz measurements \citep{NVSS,reese02}. A strong, localized SZE decrement can be seen $20''$ to the south-east of the center of X-ray emission, clearly separated from the cluster center. Our data also indicate a high-pressure ridge immediately to the east of the cluster center.
K04 tentatively attribute the south-east enhancement to a substructure of gas $240 \pm 183 \, {\rm kpc}$ in length along the line of sight, at a density (assumed uniform) of $(1.4 \pm 0.59) \times 10^{-2} \, {\rm cm^{-3}}$ and with a temperature $T_e = 28.5 \pm 7.3 \, {\rm keV}$. Recent X-ray spectral measurements \citep{ota08} with SUZAKU also indicate the presence of hot gas in the south-east region ($T_e = 25.1^{+6.1}_{-4.5} \ ^{+6.9}_{-9.5} \, {\rm keV}$ with statistical and systematic errors, respectively, at the 90\% confidence level). \citet{allen02} have reported that the slight enhancement of softer X-ray emission in this region seen by Chandra is consistent with the presence of a small substructure of hot, shocked gas. \citet{kitayama04} attribute the hot gas to an ongoing merger in the plane of the sky. The merger hypothesis is supported by optical data, in particular the presence of a second massive elliptical $\sim 20''$ directly to the east of the BCG that coincides with the center of X-ray emission (and with the radio point source). Furthermore, the density and temperature of the hot substructure indicate that it is substantially overpressured compared to the surrounding ICM. Assuming a sound speed of $1600 \, {\rm km/sec}$, this overpressured region should relax into the surrounding ICM on a timescale of $\sim 0.1 \, {\rm Gyr}$, again arguing for an ongoing merger. Our data support this merger scenario. To put them in context, Figure~\ref{fig:composite} shows a composite image with archival Chandra and HST data, and the weak + strong lensing mass map of \citet{bradac08}. We propose that the data are best explained by a merger occurring in or near the plane of the sky. The left-hand (``B'') cluster, having fallen in from the south-{\it west}, has just passed closest approach and is hooking around to the north-west. As the clusters merge, a shock forms, heating the gas in the wake of the passage.
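The $\sim 0.1\,{\rm Gyr}$ relaxation time quoted above is a simple sound-crossing estimate; a back-of-envelope check, taking the K04 line-of-sight extent of $240\,{\rm kpc}$ as the region size (an assumed input, not a fit):

```python
# Sound-crossing estimate for the overpressured substructure: time for a
# region of ~240 kpc extent to relax at the ~1600 km/s ICM sound speed.
KM_PER_KPC = 3.086e16
SEC_PER_GYR = 3.156e16

size_kpc = 240.0       # line-of-sight extent estimated by K04
sound_speed = 1600.0   # km/s, as assumed in the text

t_gyr = size_kpc * KM_PER_KPC / sound_speed / SEC_PER_GYR
# t_gyr comes out near 0.15, of order the ~0.1 Gyr quoted in the text
```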
As argued by \citet{kitayama04}, and seen in simulations \citep{takizawa99}, the clusters must have masses within a factor of 2 or 3 of equality and a substantial ($\sim 4000 \, {\rm km/sec}$) relative velocity in order to produce the high observed plasma temperatures, $T_e > 20 \, {\rm keV}$. This merger geometry is consistent with the lack of structure in the line-of-sight velocities of the cluster member galaxies \citep{cohen02}. The primary (right-hand, ``A'') cluster contains significant cold and cooling gas in its core (a ``cooling flow''). Such gas is seen to be quite robust in simulated major cluster mergers \citep{gomez02,poole08}. Even in cases where the cooling flow is finally disrupted by the encounter, \citet{gomez02} find a delay of $1-2$ Gyr between the initial core encounter and the collapse of the cooling flow. The existence of a strong cooling flow, therefore, does not argue against a major merger in this case. More detailed simulations could shed further light on this interesting system. \subsection{Broad Implications} Since calibrating SZ observable--mass relationships is vital to understanding the implications of ongoing SZE surveys, it is important to understand the mechanism by which a substantial portion of the ICM can be heated so dramatically, and how this energy is distributed through the ICM over time. Observations of cold fronts in other clusters \citep[e.g.][]{v01} have shown that energy transport processes in the ICM are substantially inhibited, perhaps by magnetic fields. It is distinctly possible, then, that once heated by shocks, very hot phases would persist. We have estimated the magnitude of the bias in an arcminute-resolution Compton $y$ measurement that is introduced by a hot gas phase by convolving the two Gaussian components of the SZE model in Table~\ref{tbl:szmodel} with a $1'$ FWHM Gaussian beam, typical of SZ survey telescopes such as ACT \citep{actref,actclus} or SPT \citep{sptref,sptcat}.
Compared with the bulk emission component, also convolved with a $1'$ beam, the small-scale features are a 10\% effect. While relatively modest, this is a systematic bias in the Compton $y$ parameter which, if not properly accounted for, would result in a $20\%$ overestimate in distances (underestimate in $H_0$) derived from a comparison of SZE and X-ray data that did not allow for the presence of the hot gas component. To assess the impact on the {\it scatter} in $M-y$, a larger sample of high-resolution SZE measurements is needed. A full calculation would also need to take into consideration effects such as detection apertures and the spatial filtering due to imaging algorithms, some of which would increase the importance of the effect and some of which would decrease it. This is one of only a few clusters that have been observed at sub-arcminute resolution in the SZE \citep[see also][]{nord09}, so it is possible that many clusters exhibit similar behavior. Such events, if their enhancement of the SZE brightness is transient, could also bias surveys towards detecting kinematically disturbed systems near the survey detection limits. The astrophysics that has been revealed by high resolution X-ray observations, and is beginning to be revealed by high resolution SZE data, is interesting in its own right. The SZE observations require large-aperture millimeter telescopes, which have hitherto been lacking, but with both large single dishes and ALMA coming online, exciting observations will be forthcoming. There is substantial room for improvement: since the observations we report here the GBT surface has improved from $320 {\, \rm \mu m}$ RMS to $250 {\, \rm \mu m}$ RMS, which will yield more than a factor of $1.5$ improvement in sensitivity. The array used in these observations, while state of the art, has not yet achieved sky photon noise limited performance; further progress is being made in this direction.
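The quoted sensitivity gain from the improved GBT surface follows from the Ruze scaling of aperture efficiency with surface RMS. A sketch, under the assumptions that $\lambda \approx 3.3$ mm (90 GHz) and that point-source sensitivity scales with aperture efficiency:

```python
import numpy as np

def ruze_efficiency(surface_rms_um, wavelength_um):
    """Aperture efficiency relative to a perfect surface (Ruze formula)."""
    return np.exp(-(4.0 * np.pi * surface_rms_um / wavelength_um) ** 2)

wavelength_um = 3300.0   # assumed ~90 GHz observing wavelength in microns
gain = ruze_efficiency(250.0, wavelength_um) / ruze_efficiency(320.0, wavelength_um)
# gain ≈ 1.8, exceeding the factor 1.5 quoted in the text
```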
Considering these facts, and that the results presented here were acquired in a short period of allocated telescope time (8h), this new high-resolution probe of the ICM has a bright future. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. We thank Eiichiro Komatsu for providing the NOBA SZ map; Marusa Bradac for providing her total mass map; Masao Sako, Ming Sun, Maxim Markevitch, Tony Mroczkowski and Erik Reese for helpful discussions; and Rachel Rosen and an anonymous referee for comments on the manuscript.
\section{Introduction} Let $\Sigma$ be a finite alphabet. We use notation like $$ x_{[i,k]} = {(x_j)}_{i\le j \le k}, \qquad x \in \Sigma^{\Bbb Z},\, \, i,k \in {\Bbb Z}, \, \, i \le k, $$ and we denote by $ x_{[i,k]} $ also the word that is carried by the block $ x_{[i,k]}. $ The length of a word $a$ is denoted by $\ell(a)$. On the shift space ${\Sigma}^{\Bbb Z}$ there acts the shift by $$ x \longrightarrow {(x_{i+1})}_{i \in {\Bbb Z}}, \qquad x = {(x_i)}_{i \in {\Bbb Z}} \in \Sigma^{\Bbb Z}. $$ A closed shift-invariant subset of $\Sigma^{\Bbb Z}$ is called a subshift. For an introduction to the theory of subshifts see \cite{Ki, LM}. A word is called admissible for a subshift if it appears in a point of the subshift. We denote the language of admissible words of a subshift $X\subset \Sigma^{\Bbb Z}$ by ${\mathcal L}(X)$ and set ${\mathcal L}_n(X) = \{ a \in {\mathcal L}(X)\mid \ell(a) = n \}, n \in {\Bbb N}. $ A subshift $X\subset \Sigma^{\Bbb Z}$ is uniquely determined by ${\mathcal L}(X)$. For a subshift $X\subset \Sigma^{\Bbb Z}$ and for $I_{-}, I_+ \in {\Bbb Z}, I_{-} < I_{+}$, one has a topological conjugacy $$ x \longrightarrow (x_{[i+I_{-}, i+I_{+}]})_{i \in {\Bbb Z}}, \qquad (x \in X) $$ of $X$ onto the higher block system $X^{\langle[I_{-},I_{+}]\rangle}$ of $X$. Among the first examples of subshifts are the topological Markov shifts. Using a matrix $(A(\sigma,\sigma'))_{\sigma, \sigma' \in \Sigma},$ $$ A(\sigma,\sigma') \in \{0,1 \},\qquad \sigma, \sigma' \in \Sigma, $$ that has in every row and every column at least one entry that is equal to $1$ as a transition matrix one obtains a topological Markov shift $tM(\Sigma,A)$ by setting $$ tM(\Sigma,A) = \{ {(\sigma_i)}_{i \in {\Bbb Z}} \in \Sigma^{\Bbb Z} \mid A(\sigma_i, \sigma_{i+1}) =1, i \in \Bbb Z \}. 
$$ For $n >1$ the $n$-block system $(\Sigma^{\Bbb Z})^{\langle[1,n]\rangle}$ of the shift on $\Sigma^{\Bbb Z}$ is a topological Markov shift with a transition matrix $A^{(n)}$ that is given by $$ A^{(n)}(a, a')= \begin{cases} 1 & \text{if } a_{(1,n]} = a'_{[1,n)},\\ 0 & \text{if } a_{(1,n]} \ne a'_{[1,n)} \end{cases}, \qquad a,a' \in {\Sigma}^n. $$ A subshift $ X \subset {\Sigma}^{\Bbb Z}$ is said to be of finite type if there is a finite set ${\frak F}$ of words in the alphabet $\Sigma$ such that $(\sigma_i)_{i \in {\Bbb Z}} \in X$ precisely if no word in ${\frak F}$ appears in $(\sigma_i)_{i \in {\Bbb Z}}$. A subshift is topologically conjugate to a subshift of finite type if and only if it is of finite type \cite{Ki, LM}. We formulate this theorem equivalently as: \begin{theorem} Let $X \subset \Sigma^{\Bbb Z}$ be a subshift that is topologically conjugate to a topological Markov shift. Then there exists an $n_\circ \in {\Bbb N}$ such that \begin{equation*} X^{{\langle [1,n]\rangle}} = tM({\mathcal L}_n(X),(A^{(n)}(a,a'))_{a,a' \in {\mathcal L}_n(X)}), \qquad n \ge n_\circ. \end{equation*} \end{theorem} The coded system \cite{BH} of a formal language ${\mathcal C}$ in a finite alphabet $\Sigma$ is the subshift that is obtained as the closure of the set of points in $\Sigma^{\Bbb Z}$ that carry bi-infinite concatenations of words in ${\mathcal C}$. Here ${\mathcal C}$ can always be chosen to be a prefix code. The property of being coded is an invariant of topological conjugacy. We denote the coded system of a code ${\mathcal C}$ by $sc({\mathcal C})$. More generally, a Markov code (see \cite{Ke}) is given by a formal language ${\mathcal C}$ of words in a finite alphabet $\Sigma$ together with a finite index set $\Gamma$, mappings $s: {\mathcal C} \longrightarrow \Gamma, t: {\mathcal C} \longrightarrow \Gamma $ and a transition matrix $(A(\gamma, \gamma'))_{\gamma,\gamma' \in \Gamma}, A(\gamma, \gamma')\in \{0,1\}, \gamma,\gamma' \in \Gamma.
$ From a Markov code $({\mathcal C},s,t)$ one obtains the Markov coded system $scM({\mathcal C})$ as the subshift that is the closure of the set of points $x \in \Sigma^{\Bbb Z}$ such that there are indices $i_k \in {\Bbb Z}, k \in {\Bbb Z}$, $i_k < i_{k+1}, k \in {\Bbb Z}$ such that $x_{[i_k,i_{k+1})} \in {\mathcal C}, k \in {\Bbb Z}$, and such that $$ A(t(x_{[i_{k-1},i_{k})}), s(x_{[i_k,i_{k+1})}) ) = 1, \qquad k \in {\Bbb Z}. $$ With the alphabet $\{ a_n \mid 1 \le n \le N \} \cup\{\alpha_-, \alpha_+\}, N \in {\Bbb N}$, consider the codes \begin{align*} { {\mathcal C}^{(N)}_{reset} } = & \{ \alpha_{-}^k \alpha_{+}^m a_n \mid 1 \le n \le N, m, k \in {\Bbb N}, m \le k \}\\ \intertext{ and with the alphabet $\{ b_n \mid 1 \le n \le N \} \cup\{\alpha_-, \alpha_+\}, N \in {\Bbb N}$, consider the codes} { {\mathcal C}^{(N)}_{counter} } & = \{ \alpha_{-}^k \alpha_{+}^k b_n \mid 1 \le n \le N, k \in {\Bbb N} \}. \end{align*} The coded systems ${ sc({\mathcal C}^{(N)}_{reset}) }$, $ { sc({\mathcal C}^{(N)}_{counter}) } $ and ${ sc({\mathcal C}^{(N)}_{reset}) }\cup{ sc({\mathcal C}^{(N)}_{counter}) }$ serve us as prototypes for a class of subshifts that we will call standard one-counter shifts. (Compare here \cite[Example 1, p. 561]{Bl}, \cite[Example II, p. 449]{Ke}, \cite[Example 6.1, p. 896]{KM2002}). We arrive at a description of this class of subshifts by observing the behavior of ${ sc({\mathcal C}^{(N)}_{reset}) }$, ${ sc({\mathcal C}^{(N)}_{counter}) } $ and of ${ sc({\mathcal C}^{(N)}_{reset}) }\cup{ sc({\mathcal C}^{(N)}_{counter}) }$ and by abstracting the essential structural properties that these coded systems are to share with the standard one-counter shifts. ${ sc({\mathcal C}^{(N)}_{reset}) }$ and ${ sc({\mathcal C}^{(N)}_{reset}) }\cup{ sc({\mathcal C}^{(N)}_{counter}) }$ are prototypes of what we will call standard one-counter shifts with reset. 
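To make the difference between the two prototype codes concrete, here is a small enumeration, encoding $\alpha_-$ as "-", $\alpha_+$ as "+", and the letters $a_n$, $b_n$ as short strings. This is an illustration of the definitions only, truncated at a finite run length.

```python
def reset_words(N, kmax):
    """Words alpha_-^k alpha_+^m a_n with 1 <= m <= k, truncated at run length kmax."""
    return {"-" * k + "+" * m + f"a{n}"
            for k in range(1, kmax + 1)
            for m in range(1, k + 1)
            for n in range(1, N + 1)}

def counter_words(N, kmax):
    """Words alpha_-^k alpha_+^k b_n: the two runs must have equal length."""
    return {"-" * k + "+" * k + f"b{n}"
            for k in range(1, kmax + 1)
            for n in range(1, N + 1)}

reset = reset_words(2, 4)
counter = counter_words(2, 4)
# the counter code is the "balanced" slice of the reset pattern: every counter
# word, with b relabelled as a, lies in the reset code, but not conversely
balanced = {w.replace("b", "a") for w in counter}
```

For instance, `"--+a1"` (two $\alpha_-$, one $\alpha_+$) is admissible for the reset code, while the counter code forces the runs to balance, so `"--+b1"` is not a counter word but `"-+b1"` is.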
To every standard one-counter shift $X \subset {\Sigma}^{\Bbb Z}$ there is associated a unique Markov code ${\mathcal C}^{(X)}$ such that $X = scM({\mathcal C}^{(X)})$ and such that a version of Theorem 1.1 holds. A formal language is called a one-counter language if it is recognized by a pushdown automaton with one stack symbol \cite{F,FMR,HU}. The Markov code ${\mathcal C}^{(X)}$ that is associated to a standard one-counter shift $X \subset {\Sigma}^{\Bbb Z}$ is a one-counter language. Given a subshift $X \subset {\Sigma}^{\Bbb Z}$, a word $v \in {\mathcal L}(X)$ is called synchronizing if for all $u, w \in {\mathcal L}(X)$ such that $ uv, vw \in {\mathcal L}(X)$ one also has $ u v w \in {\mathcal L}(X)$. A topologically transitive subshift is called synchronizing if it has a synchronizing word. Before turning in Section 3 to the standard one-counter shifts we introduce in Section 2 auxiliary notions for synchronizing subshifts. We introduce strongly synchronizing subshifts as the subshifts in which synchronizing symbols appear uniformly close to synchronizing words, and we introduce sufficiently synchronizing subshifts as the subshifts that have a strongly synchronizing higher block system. $\lambda$-graph systems (as introduced in \cite{Ma1999c}) are labeled directed graphs that are equipped with a shift-like map $\iota$. A $\lambda$-graph system ${\frak L}$ gives rise to a $C^*$-algebra ${\mathcal O}_{\frak L}$. To a subshift $X$ there is invariantly associated a future $\lambda$-graph system ${}^X\!{\frak L}$ that is based on the future equivalences of the pasts in $X_{(-\infty,0]}$ (as in \cite{KM2002}) and there is invariantly associated a past $\lambda$-graph system ${\frak L}^X$ that is based on the past equivalences of the futures in $X_{[0,\infty)}$ (as in \cite{Ma1999c}).
The future and the past $\lambda$-graph systems of a subshift are time symmetric to one another: the future $\lambda$-graph system of a subshift is identical to the past $\lambda$-graph system of its inverse and vice versa. For a standard one-counter shift $X$ we will see that ${\mathcal O}_{{}^X\!{\frak L}}$ is simple if and only if $X$ has reset and that ${\mathcal O}_{{\frak L}^X}$ is not simple. Since the stable isomorphism class of ${\mathcal O}_{{\frak L}^X}$ is an invariant of flow equivalence \cite{Ma2001b}, a standard one-counter shift with reset is not flow equivalent to its inverse. For the one-counter shifts ${ sc({\mathcal C}^{(N)}_{reset}) }, { sc(({\mathcal C}^{(N)}_{reset})^{rev}) }$, we will show that \begin{align*} K_0( { {\mathcal O}_{sc(({\mathcal C}^{(N)}_{reset})^{rev}) } } ) & \cong K_0( { {\mathcal O}_{sc({\mathcal C}^{(N)}_{reset}) } } ) \cong {\Bbb Z}/N {\Bbb Z} \oplus {\Bbb Z},\\ K_1( { {\mathcal O}_{sc(({\mathcal C}^{(N)}_{reset})^{rev}) } } ) & \cong K_1( { {\mathcal O}_{sc({\mathcal C}^{(N)}_{reset}) } } ) \cong 0. \end{align*} The one-counter code ${ {\mathcal C}^{(N)}_{counter} }$ is equal to its reversal $({ {\mathcal C}^{(N)}_{counter} })^{rev}$. The K-groups of the associated $C^*$-algebra have been computed in \cite{KM4} as \begin{align*} K_0({ {\mathcal O}_{sc({\mathcal C}^{(N)}_{counter}) } } ) & \cong {\Bbb Z}/N {\Bbb Z} \oplus {\Bbb Z}^2,\\ K_1({ {\mathcal O}_{sc({\mathcal C}^{(N)}_{counter}) } } ) & \cong {\Bbb Z}. \end{align*} For another computation of K-groups of one-counter shifts see \cite{Ma2,Ma16}. \medskip For a subshift $X \subset \Sigma^{\Bbb Z}$ we set \begin{equation*} X_{[i,k]} = \{ x_{[i,k]} \mid x \in X\}, \quad \qquad i,k \in {\Bbb Z}, \quad i \le k. \end{equation*} We set also \begin{align*} \Gamma_k^+(a) & = \{b \in X_{(n,n+k]} \mid (a,b) \in X_{[m,n+k]} \},\quad k \in {\Bbb N},\\ \Gamma_\infty^+(a) & = \{x^+ \in X_{(n,\infty)} \mid (a,x^+) \in X_{[m,\infty)} \}, \quad n, m \in {\Bbb Z}, \, m < n, \, a \in X_{[m,n]}.
\end{align*} $\Gamma^-$ has the time symmetric meaning. We recall that, given subshifts $X \subset \Sigma^{\Bbb Z}$, $\tilde{X} \subset \tilde{\Sigma}^{\Bbb Z}$, and a topological conjugacy $\tilde{\varphi}:\tilde{X} \longrightarrow X$, there is for some $L \in { {\Bbb Z}_+ }$ a block mapping $\tilde{\varPhi}:\tilde{X}_{[-L,L]} \longrightarrow \Sigma$, such that $$ \tilde{\varphi}(\tilde{x}) = (\tilde{\varPhi}(\tilde{x}_{[i-L,i+L]}))_{i \in {\Bbb Z}}, \qquad \tilde{x} \in \tilde{X}. $$ We set $$ \tilde{\varPhi}(\tilde{a}) = (\tilde{\varPhi}(\tilde{a}_{[j-L,j+L]}))_{i+L \le j \le k-L}, \qquad \tilde{a} \in \tilde{X}_{[i,k]}, \quad i, k \in {\Bbb Z}, \quad k-i >2L. $$ We use similar notation for words. \section{Strong synchronization} The first lemma is well known. We include the proof for completeness. \begin{lemma} Let $ \tilde{X} \subset \tilde{\Sigma}^{\Bbb Z}, X \subset \Sigma^{\Bbb Z} $ be subshifts and let $\varphi:\tilde{X} \longrightarrow X$ be a topological conjugacy. Let $\tilde{L}, L \in { {\Bbb Z}_+ }$ be such that $[-\tilde{L}, \tilde{L}]$ is a coding window for $\varphi$ and $[-L, L]$ is a coding window for $\varphi^{-1}$. Let $\tilde{x} \in \tilde{X}, x = \varphi(\tilde{x}) $ and $I_{-}, I_{+} \in {\Bbb Z}, I_{-} \le I_{+}.$ Let $x_{[I_{-}, I_{+}]}$ be synchronizing. Then $\tilde{x}_{[I_{-}-\tilde{L}-L, I_{+}+ \tilde{L} + L]}$ is synchronizing. 
\end{lemma} \begin{proof} Let \begin{align*} \tilde{y}^{-} & \in \Gamma^{-}_{\infty}(\tilde{x}_{[I_{-}-\tilde{L}-L, I_{+}+ \tilde{L} + L]}), \\ \tilde{y}^{+} & \in \Gamma^{+}_{\infty}(\tilde{x}_{[I_{-}-\tilde{L}-L, I_{+}+ \tilde{L} + L]}), \end{align*} and let $y^{-} \in \Gamma^{-}_{\infty}(x_{[I_{-}, I_{+}]}), y^{+} \in \Gamma^{+}_{\infty}(x_{[I_{-}, I_{+}]}), $ be given by \begin{align*} \widetilde{\varPhi}(\tilde{y}^{-}, \tilde{x}_{[I_{-}-\tilde{L}-L, I_{+}+ \tilde{L} + L]}) & = (y^{-}, x_{[I_{-}, I_{+}]}),\\ \widetilde{\varPhi}( \tilde{x}_{[I_{-}-\tilde{L}-L, I_{+}+ \tilde{L} + L]},\tilde{y}^{+}) & = ( x_{[I_{-}, I_{+}]},y^{+}). \end{align*} One has $(y^{-}, x_{[I_{-}, I_{+}]}, y^{+}) \in X$ and $$ \varphi^{-1}(y^{-}, x_{[I_{-}, I_{+}]}, y^{+}) =(\tilde{y}^{-}, \tilde{x}_{[I_{-}-\tilde{L}-L, I_{+}+ \tilde{L} + L]},\tilde{y}^{+}) $$ and the lemma follows. \end{proof} For a subshift $X \subset {\Sigma}^{\Bbb Z}$, we denote the set of its synchronizing symbols by ${ \Sigma_{synchro}(X) }$. \begin{lemma} Let $ \tilde{X} \subset \tilde{\Sigma}^{\Bbb Z}, X \subset \Sigma^{\Bbb Z} $ be subshifts and let a topological conjugacy $\varphi:\tilde{X} \longrightarrow X$ be given by a one-block map $\tilde{\varPhi}:\tilde{\Sigma}\longrightarrow \Sigma.$ Let $ L \in { {\Bbb Z}_+ }$ be such that $\varphi^{-1}$ has coding window $[-L, L]$ and set $\hat{\varPhi}(\tilde{x}_{[-L,L]}) =\tilde{\varPhi}(\tilde{x}_0), \tilde{x}_{[-L,L]} \in \tilde{X}_{[-L, L]}.$ Then $$ \hat{\varPhi}^{-1}({ \Sigma_{synchro}(X) }) \subset \Sigma_{synchro}(\tilde{X}^{\langle[-L,L]\rangle}). $$ \end{lemma} \begin{proof} Apply Lemma 2.1. 
\end{proof} We say that a synchronizing subshift $X \subset \Sigma^{\Bbb Z}$ is strongly synchronizing if there exists a $Q \in { {\Bbb Z}_+ }$ such that the following holds: if $ x \in X$ and $I_{-}, I_{+} \in {\Bbb Z}, I_{-} < I_{+}$ are such that $x_{[I_{-}, I_{+}]}$ is synchronizing, then there exists an index $i,$ $I_{-} - Q \le i \le I_{+} + Q$, such that $x_{i}$ is a synchronizing symbol. The higher block systems of a strongly synchronizing subshift are also strongly synchronizing. We say that a subshift $X \subset \Sigma^{\Bbb Z}$ is sufficiently synchronizing if it has strongly synchronizing higher block systems. \begin{prop} Sufficient synchronization is an invariant of the topological conjugacy of subshifts. \end{prop} \begin{proof} To prove the proposition it is, by Lemma 2.2, enough to consider the case of subshifts $ X \subset \Sigma^{\Bbb Z}, \tilde{X} \subset \tilde{\Sigma}^{\Bbb Z} $ and of a topological conjugacy $\varphi:X \longrightarrow \tilde{X}$ that is given by a one-block map $\varPhi:\Sigma \longrightarrow \tilde{\Sigma}$ such that \begin{equation} \varPhi^{-1}(\tilde{\Sigma}_{synchro}(\tilde{X})) \subset { \Sigma_{synchro}(X) } \label{eqn:1} \end{equation} with $\tilde{X}$ strongly synchronizing, and to show that $X$ is strongly synchronizing. Let $L \in { {\Bbb Z}_+ }$ be such that $\varphi^{-1}$ has the coding window $[-L,L]$ and let $\tilde{Q}\in { {\Bbb Z}_+ }$ be such that for $ \tilde{x} \in \tilde{X}, \tilde{I}_{-}, \tilde{I}_{+} \in {\Bbb Z}, \tilde{I}_{-} < \tilde{I}_{+} $ such that $\tilde{x}_{[\tilde{I}_{-}, \tilde{I}_{+}]}$ is synchronizing, one has an $\tilde{i},$ $\tilde{I}_{-} - \tilde{Q} \le \tilde{i} \le \tilde{I}_{+} + \tilde{Q}$, such that $\tilde{x}_{\tilde{i}}$ is a synchronizing symbol. Then one has for $ x \in X$, $I_{-}, I_{+} \in {\Bbb Z}, I_{-} < I_{+}$ such that $x_{[I_{-}, I_{+}]}$ is synchronizing, by Lemma 2.1 that $\varphi(x)_{[I_{-},I_{+}]}$ is synchronizing.
It follows that there exists an $i\in {\Bbb Z},$ $I_{-} -L -\tilde{Q} \le i \le I_{+} + L + \tilde{Q}$, such that $\varPhi(x_{i})$ is a synchronizing symbol. By (\ref{eqn:1}), $x_{i}$ is then synchronizing. \end{proof} \section{Standard one-counter shifts} {\bf 3 a. The structure of standard one-counter shifts} Let $X \subset \Sigma^{\Bbb Z}$ be a topologically transitive subshift. We call a pair $((\alpha_{-})_{i\in {\Bbb Z}}, (\alpha_{+})_{i\in {\Bbb Z}}) $ of fixed points of $X$ a characteristic pair if it is the unique pair of fixed points that satisfies the following conditions $(a), (b)$ and $(c^-)$, and a condition $(c^+)$ that is symmetric to condition $(c^-)$: $(a)$ $X$ has a unique orbit $O_X$ that contains all points that are left asymptotic to $(\alpha_{-})_{i\in {\Bbb Z}}$ and right asymptotic to $(\alpha_{+})_{i\in {\Bbb Z}}$ and that do not contain a synchronizing word. $(b)$ $X$ has a point that is left asymptotic to $(\alpha_{+})_{i\in {\Bbb Z}}$ and right asymptotic to $(\alpha_{-})_{i\in {\Bbb Z}}$ and that contains a synchronizing word. $(c^-)$ There exists a $K \in {\Bbb N}$ such that the following holds: If $ x \in X$ and $I_{-}, I_{+} \in {\Bbb Z}, I_{-} \le I_{+},$ are such that $x$ is right asymptotic to $(\alpha_-)_{i\in {\Bbb Z}}$, and $x_{[I_{-}, I_{+}]}$ is synchronizing, and $x_{(I_{+}, I_{+}+k]}$ is not synchronizing, $ k \in {\Bbb N}$, then there exists an index $i,$ $I_{-} < i \le I_{+} + K$, such that $x_{j} = \alpha_{-}, j \ge i$. \begin{prop} Let $X \subset \Sigma^{\Bbb Z}$ be a topologically transitive subshift with a characteristic pair $((\alpha_{-})_{i\in {\Bbb Z}},(\alpha_{+})_{i\in {\Bbb Z}}) $ of fixed points, and let $\tilde{\varphi}$ be a topological conjugacy of a subshift $\tilde{X} \subset {\tilde \Sigma}^{\Bbb Z}$ onto $X$. Then $({\tilde{\varphi}}^{-1}((\alpha_{-})_{i\in {\Bbb Z}}), {\tilde{\varphi}}^{-1}((\alpha_{+})_{i\in {\Bbb Z}})) $ is a characteristic pair of fixed points of $\tilde{X}$.
\end{prop} \begin{proof} Conditions $(a), (b), (c^-), (c^+)$ being satisfied by $((\alpha_{-})_{i\in {\Bbb Z}},(\alpha_{+})_{i\in {\Bbb Z}})$, the proposition follows by means of Lemma 2.1. \end{proof} We introduce notation that we use for a synchronizing subshift $X \subset {\Sigma}^{\Bbb Z}$ that has a characteristic pair $((\alpha_{-})_{i\in {\Bbb Z}},(\alpha_{+})_{i\in {\Bbb Z}}) $ of fixed points. For $\sigma_{-} \in { \Sigma_{synchro}(X) }$ we denote by ${\mathcal D}(\sigma_{-},\alpha_{-})$ the set of words $d^- \in {\mathcal L}(X)$ that do not contain a synchronizing symbol, and that do not end with $\alpha_-$, such that \begin{equation*} \sigma_{-} d^- \in \bigcap_{k \in {\Bbb N}} \Gamma^-(\alpha_-^k), \end{equation*} and for $\sigma_{-}^+ \in { \Sigma_{synchro}(X) }$ we denote by ${\mathcal D}(\sigma_{-}^+,\alpha_{+})$ the set of words $d_+^- \in {\mathcal L}(X)$ that do not contain a synchronizing symbol, and that do not end with $\alpha_+$, such that \begin{equation*} \sigma_{-}^+ d_+^- \in \bigcap_{k \in {\Bbb N}} \Gamma^-(\alpha_+^k). \end{equation*} We set \begin{align*} \Sigma_{-}(X) & = \{ \sigma_{-} \in { \Sigma_{synchro}(X) } \mid {\mathcal D}(\sigma_{-},\alpha_{-}) \ne \emptyset \},\\ \Sigma_{-}^+(X) & = \{ \sigma_{-}^+ \in { \Sigma_{synchro}(X) } \mid {\mathcal D}(\sigma_{-}^+,\alpha_{+}) \ne \emptyset \}. \end{align*} ${\mathcal D}(\alpha_{+},\sigma_{+}), {\mathcal D}(\alpha_{-},\sigma_{+}^-)$ and $\Sigma_{+}(X),\Sigma_{+}^-(X)$ have the symmetric meaning. \begin{lemma} For a strongly synchronizing subshift $X \subset \Sigma^{\Bbb Z}$ that has a characteristic pair $((\alpha_{-})_{i\in {\Bbb Z}}, (\alpha_{+})_{i\in {\Bbb Z}}) $ of fixed points, the sets $\Sigma_{-}(X)$ and $\Sigma_{+}(X)$ are not empty, and the sets ${\mathcal D}(\sigma_{-},\alpha_{-}), \, \sigma_{-}\in \Sigma_{-}(X),$ and ${\mathcal D}(\alpha_{+},\sigma_{+}), \, \sigma_{+}\in \Sigma_{+}(X),$ are finite.
\end{lemma} \begin{proof} We show that $\Sigma_{-}(X)$ is not empty, and that the sets ${\mathcal D}(\sigma_{-},\alpha_{-}), \, \sigma_{-}\in \Sigma_{-}(X),$ are finite. By condition $(b)$ there exists an $x \in X$ that contains a synchronizing word and that is right asymptotic to $(\alpha_{-})_{i \in {\Bbb Z}}$. The assumption that $X$ is strongly synchronizing implies that $x$ contains a synchronizing symbol. Let $i \in {\Bbb Z}$ be such that $x_i \in { \Sigma_{synchro}(X) }, x_{i+K} \not\in { \Sigma_{synchro}(X) }, K \in {\Bbb N}$. If here $x_{i+K} = \alpha_{-}, K \in {\Bbb N}$, then the empty word is in ${\mathcal D}(\sigma_-,\alpha_-)$ where $\sigma_- = x_i$. Otherwise let $j > i$ be given by $x_j \ne \alpha_-, x_{j+K} = \alpha_-, K \in {\Bbb N}$, and one has $x_{(i,j]} \in {\mathcal D}(\sigma_-,\alpha_-).$ The finiteness of ${\mathcal D}(\sigma_{-},\alpha_{-}), \, \sigma_{-}\in \Sigma_{-}(X),$ follows from condition $(c^-)$. \end{proof} Let $X \subset \Sigma^{\Bbb Z}$ be a subshift with a characteristic pair $((\alpha_{-})_{i\in {\Bbb Z}}, (\alpha_{+})_{i\in {\Bbb Z}}) $ of fixed points. Let $x \in O_X$. If for some $i_\circ \in {\Bbb Z}$, \begin{align*} x_i & = \alpha_-, \qquad i \le i_\circ,\\ x_i & = \alpha_+, \qquad i > i_\circ, \end{align*} then set $c_X$ equal to the empty word. Otherwise determine $i_-, i_+ \in {\Bbb Z}, i_- < i_+$, by \begin{align*} x_i & = \alpha_-, \qquad i < i_-,\\ x_{i_-}& \ne \alpha_-, \\ x_{i_+}& \ne \alpha_+,\\ x_i & = \alpha_+, \qquad i > i_+, \end{align*} and set $c_X$ equal to the word $x_{[i_-,i_+]}$. We also set \begin{align*} \Omega^+(X) & = \{ d^+ \sigma_+ \mid \sigma_+ \in \Sigma_+(X), d^+ \in {\mathcal D}(\alpha_+,\sigma_+)\},\\ \Omega^-(X) & = \{ d_-^+ \sigma_+^- \mid \sigma_+^- \in \Sigma_+^-(X), d_-^+ \in {\mathcal D}(\alpha_-,\sigma_+^-)\}.
\end{align*} We denote by $\OPR$ the set of $ d^+ \sigma_+ \in \Omega^+(X)$ such that there is a $D \in { {\Bbb Z}_+ }$ such that \begin{equation*} \alpha_-^{k_-} c_X \alpha_+^{k_+ +D} d^+ \sigma_+ \in {\mathcal L}(X), \qquad k_-, k_+ \in {\Bbb N}, \end{equation*} the smallest such $D$ to be denoted by $D(d^+\sigma_+)$. We say that $X$ has reset if $\OPR \not= \emptyset.$ We set $$ \OPC = \Omega^+(X) \backslash \OPR. $$ We set $$ { \Omega^-_{reset}(X) } = \{c_X \alpha_+^{k_+ +D(d^+\sigma_+)} d^+ \sigma_+ \mid d^+ \sigma_+ \in \OPR, k_+ \in \Bbb N\}, $$ and we say that $X$ satisfies the reset condition if $\Omega^-(X) \backslash { \Omega^-_{reset}(X) }$ is a finite set. For a strongly synchronizing subshift $X \subset \Sigma^{\Bbb Z}$ that has a synchronizing symbol denote by ${\mathcal C}(X)$ the set of admissible words of $X$ that begin with a synchronizing symbol, that have no other synchronizing symbol, and that can be followed by a synchronizing symbol. For $c \in {\mathcal C}(X)$ set $t(c)$ equal to the set of synchronizing symbols that can follow $c$, and set $s(c)$ equal to the singleton set that contains the first symbol of $c$. With the set of subsets of $\Sigma$ as index set and with a transition matrix $A$ whose positive entries are given by \begin{equation*} A({\Sigma_\circ}, \{ \sigma\}) = 1, \qquad \Sigma_\circ \in \{ t(c) \mid c \in {\mathcal C}(X) \},\quad \sigma \in \Sigma_\circ, \quad \sigma \in { \Sigma_{synchro}(X) }, \end{equation*} ${\mathcal C}(X)$ is a Markov code and \begin{equation*} X = scM({\mathcal C}(X)).
\end{equation*} Given a strongly synchronizing subshift $X \subset \Sigma^{\Bbb Z}$ with a characteristic pair $((\alpha_{-})_{i\in {\Bbb Z}}, (\alpha_{+})_{i\in {\Bbb Z}})$ of fixed points such that $\Sigma_-^+(X) = \emptyset$, that satisfies the reset condition, we set \begin{equation} \begin{split} {\mathcal C}_-^{(X)}(\sigma_+^-) = \{ \sigma_- d^- \alpha_-^{k_-} d_-^+ \mid & \sigma_- \in \Sigma_-(X), d^- \in {\mathcal D}(\sigma_-,\alpha_-), \\ d_-^+\sigma_+^- \in & \Omega^-(X)\backslash\Omega_{reset}^-(X), k_- \in {\Bbb N} \}, \qquad \sigma_+^- \in \Sigma_+^-(X), \end{split}\label{eqn:41} \end{equation} \begin{align} {\mathcal C}_-^{(X)} & = \bigcup_{\sigma_+^- \in \Sigma_+^-(X)} {\mathcal C}_-^{(X)}(\sigma_+^-), \label{eqn:42} \\ t(c) & = \{\sigma_+^- \in \Sigma_+^-(X) \mid c \in {\mathcal C}_-^{(X)}(\sigma_+^-) \},\qquad c \in {\mathcal C}_-^{(X)}, \label{eqn:43} \end{align} and, given $M_-, M_+ \in { {\Bbb Z}_+ }$ and mappings \begin{align*} \sigma_-d^- \longrightarrow & D_-(\sigma_- d^-)\in { {\Bbb Z}_+ }, \qquad \sigma_- \in \Sigma_-(X), d^-\in {\mathcal D}(\sigma_-,\alpha_-),\\ d^+ \sigma_+ \longrightarrow & D_+(d^+ \sigma_+)\in { {\Bbb Z}_+ }, \qquad d^+ \sigma_+\in \OPR, \end{align*} we set \begin{gather} \begin{split} & {\mathcal C}_{reset}^{(X)}(D_-, M_-, M_+, D_+; \sigma_+) \\ = & \{ \sigma_- d^- \alpha_-^{k_-} c_X \alpha_+^{k_+ + D(d^+\sigma_+)} d^+ \mid \sigma_-\in \Sigma_-(X), d^- \in {\mathcal D}(\sigma_-,\alpha_-), d^+\sigma_+ \in \OPR, \\ & k_{-}, k_{+} \in {\Bbb N}, D_-(\sigma_-d^-) + k_- + M_- \ge M_+ + k_+ + D_+(d^+\sigma_+) \},\quad \sigma_+\in \Sigma_+(X), \end{split}\label{eqn:44} \\ {\mathcal C}_{reset}^{(X)}(D_-, M_-, M_+, D_+) = \bigcup_{\sigma_+ \in \Sigma_+(X)} {\mathcal C}_{reset}^{(X)}(D_-,M_-,M_+, D_+; \sigma_+), \label{eqn:45} \\ \begin{split} t(c) & = \{\sigma_+ \in \Sigma_+(X) \mid c \in {\mathcal C}_{reset}^{(X)}(D_-, M_-,M_+,D_+;\sigma_+) \},\\ & \qquad c \in {\mathcal C}_{reset}^{(X)}(D_-, M_-,M_+,D_+).
\end{split} \label{eqn:46} \end{gather} and given $J_-, J_+ \in { {\Bbb Z}_+ }$, and mappings \begin{align*} \sigma_-d^- \longrightarrow & \Delta_-(\sigma_- d^-)\subset { {\Bbb Z}_+ }, \qquad \sigma_- \in \Sigma_-(X), d^-\in {\mathcal D}(\sigma_-,\alpha_-),\\ d^+\sigma_+ \longrightarrow & \Delta_+(d^+ \sigma_+)\subset { {\Bbb Z}_+ }, \qquad d^+\sigma_+ \in \OPC, \end{align*} we set \begin{gather} \begin{split} & {\mathcal C}_{counter}^{(X)}(\Delta_-, J_-, J_+, \Delta_+; \sigma_+) \\ = & \{ \sigma_- d^- \alpha_-^{k_-} c_X \alpha_+^{k_+} d^+ \mid \sigma_- \in \Sigma_-(X), d^- \in {\mathcal D}(\sigma_-,\alpha_-), \\ & d^+\sigma_+ \in \OPC, k_{-}, k_{+} \in {\Bbb N}, \\ & ( \Delta_-(\sigma_- d^-) + k_- + J_- ) \cap (J_+ + k_+ + \Delta_+(d^+\sigma_+)) \ne \emptyset \}, \quad \sigma_+ \in \Sigma_+(X), \end{split} \label{eqn:47}\\ {\mathcal C}_{counter}^{(X)}(\Delta_-,J_-,J_+, \Delta_+) = \bigcup_{\sigma_+ \in \Sigma_+(X)} {\mathcal C}_{counter}^{(X)}(\Delta_-, J_-, J_+,\Delta_+; \sigma_+), \label{eqn:48} \\ \begin{split} t(c) = \{\sigma_+ \in \Sigma_+(X) \mid & c \in {\mathcal C}_{counter}^{(X)}(\Delta_-, J_-, J_+,\Delta_+; \sigma_+) \},\\ & c \in {\mathcal C}_{counter}^{(X)}(\Delta_-,J_-,J_+, \Delta_+). \label{eqn:49} \end{split} \end{gather} By (3.1)--(3.9) there is defined a Markov code $$ {\mathcal C}_-^{(X)} \cup {\mathcal C}_{reset}^{(X)}(D_-, M_-,M_+,D_+)\cup {\mathcal C}_{counter}^{(X)}(\Delta_-,J_-,J_+, \Delta_+).
$$ We define a standard one-counter shift as a strongly synchronizing subshift $X \subset {\Sigma}^{\Bbb Z}$ that has a characteristic pair of fixed points, such that $\Sigma_-^+(X)$ is empty, such that $X$ satisfies the reset condition, and such that there exist $ I \in { {\Bbb Z}_+ }$ and parameters $D_-, M_-,M_+,D_+$, $\Delta_-, J_-, J_+, \Delta_+$ such that \begin{equation} \begin{split} &\{ c \in {\mathcal C}(X) \mid \ell(c) > I \} \\ =& \{ c \in {\mathcal C}_{-}^{(X)} \cup {\mathcal C}_{reset}^{(X)} (D_-, M_-, M_+,D_+) \cup {\mathcal C}_{counter}^{(X)} (\Delta_-, J_-, J_+, \Delta_+) \mid \ell(c) > I \}, \end{split} \label{eqn:511} \end{equation} where the equality is understood as an equality of Markov codes. If (3.10) holds then we say that $I, D_-, M_-, M_+, D_+, \Delta_-, J_-, J_+, \Delta_+$ are parameters of the standard one-counter shift $X \subset {\Sigma}^{\Bbb Z}$. The parameters $ \Delta_-, J_-, J_+, \Delta_+ $ can be missing, and in the case that $X$ has no reset the parameters $D_-, M_-, M_+, D_+$ are missing. For a standard one-counter shift $X \subset \Sigma^{\Bbb Z}$ denote the smallest $I \in { {\Bbb Z}_+ }$ such that (3.10) holds by $I_X$, and denote by $D_-(X), M_-(X), M_+(X), D_+(X)$, $\Delta_-(X)$, $J_-(X)$, $J_+(X)$, $\Delta_+(X)$ the uniquely determined parameters for $X$ that satisfy the normalization conditions \begin{align*} \min{(M_-,M_+)} =& \min{(J_-,J_+)} = \min_{\sigma_-\in \Sigma_-(X), d^-\in {\mathcal D}(\sigma_-,\alpha_-)} D_-(\sigma_-d^-)\\ =& \min_{d^+\sigma_+ \in {\OPR}} D_+(d^+\sigma_+) =\min \bigcup_{\sigma_- \in \Sigma_-(X), d^- \in {\mathcal D}(\sigma_-,\alpha_-)} \Delta_-(\sigma_-d^-) \\ =&\min \bigcup_{d^+\sigma_+ \in \OPC }\Delta_+(d^+\sigma_+) = 0. \end{align*} E.g.
for $scM({\mathcal C}^{(N)}_{reset}\cup{\mathcal C}^{(N)}_{counter}) $ the normalized parameters are given by $I = M_- = M_+ = J_- = J_+ = 0$, the range of $D_-$ and $D_+$ being $\{0 \}$, and the range of $\Delta_-$ and $\Delta_+$ being $\{0 \}$. We associate with a standard one-counter shift $X \subset \Sigma^{\Bbb Z}$ the Markov code \begin{align*} {\mathcal C}^{(X)} = & \{ c \in {\mathcal C}(X) \mid \ell(c) \le I_X \} \\ & \bigcup \{ c \in {\mathcal C}^{(X)}_{-} \cup {\mathcal C}^{(X)}_{reset}(D_-(X), M_-(X), M_+(X),D_+(X))\\ &\quad \cup {\mathcal C}^{(X)}_{counter}(\Delta_-(X), J_-(X), J_+(X),\Delta_+(X)) \mid \ell(c) > I_X \}, \end{align*} and find that $X$ has a distinguished presentation as the Markov coded system of ${\mathcal C}^{(X)}$. There is a development that can be considered, at least partially, as the converse. One takes as a starting point a finite alphabet $\Sigma$, a proper subset $\Sigma_{synchro}$ of $\Sigma$ and symbols $\alpha_-, \alpha_+ \not\in \Sigma$. One also has to provide for some $I \in { {\Bbb Z}_+ }$ a Markov code all of whose words begin with a symbol in $\Sigma_{synchro}$, with the remaining symbols in $\Sigma \backslash \Sigma_{synchro}$, and that have length less than or equal to $I$, and one has to provide the other components, $\Sigma_-, \Sigma_+,$ $D_-, M_-, M_+, D_+$, $\Delta_-, J_-, J_+, \Delta_+ $, that are needed for the construction of a Markov code ${\mathcal C}$ according to rules that imitate the content of (3.1)--(3.9). An additional requirement, for which there is a test, is that the symbols in $\Sigma_{synchro}$ are the only synchronizing symbols of $scM({\mathcal C})$. One arrives in this way at a standard one-counter shift $scM({\mathcal C})$ such that ${\mathcal C}(scM({\mathcal C})) = {\mathcal C},$ and one observes a perfect reciprocity between a class of Markov codes and the class of standard one-counter shifts. \medskip {\bf 3 b.
Behavior under topological conjugacy} In this subsection we assume that we are given a strongly synchronizing subshift $X \subset \Sigma^{\Bbb Z}$ with a characteristic pair $(({\alpha}_-)_{i \in {\Bbb Z}}, ({\alpha}_+)_{i \in {\Bbb Z}}) $ of fixed points and a subshift $\tilde{X} \subset \tilde{\Sigma}^{\Bbb Z}$ together with a topological conjugacy $\tilde{\varphi}:\tilde{X} \longrightarrow X$ that is given by a one-block map $\tilde{\varPhi}: \tilde{\Sigma} \longrightarrow \Sigma$ such that \begin{equation} \tilde{\varPhi}^{-1}({ \Sigma_{synchro}(X) }) \subset \tilde{\Sigma}_{synchro}(\tilde{X}). \end{equation} We introduce notation that we use in this situation. We set $ (\tilde{\alpha}_-)_{i \in {\Bbb Z}} = \tilde{\varphi}^{-1}( (\alpha_-)_{i \in {\Bbb Z}}) $, $ (\tilde{\alpha}_+)_{i \in {\Bbb Z}} = \tilde{\varphi}^{-1}( (\alpha_+)_{i \in {\Bbb Z}}). $ $[-L,L]$ will denote a coding window of $\tilde{\varphi}^{-1}$, and $\varPhi$ will be a block map $\varPhi: {\mathcal L}_{2L + 1}(X) \longrightarrow \tilde{\Sigma}$ that gives $\tilde{\varphi}^{-1}$. $Q \in {\Bbb N}$ will be chosen such that for a synchronizing word $a$ of $X$ and for $a^- \in \Gamma_Q^-(a), a^+ \in \Gamma_Q^+(a)$ the word $a^- a a^+$ contains a synchronizing symbol. For $\tilde{\sigma}_- \in \tilde{\Sigma}_-(\tilde{X}), \tilde{b}^-\in \Gamma_{Q + L}^-(\tilde{\sigma}_-),$ we can set by Lemma 2.1 and by (3.11) $$ \tilde{\varPhi}(\tilde{b}^-\tilde{\sigma}_-) = b^-(\tilde{b}^-\tilde{\sigma}_-) \sigma_-(\tilde{b}^-\tilde{\sigma}_-) a^-(\tilde{b}^-\tilde{\sigma}_-), $$ where the words $b^-(\tilde{b}^-\tilde{\sigma}_-) $ and $a^-(\tilde{b}^-\tilde{\sigma}_-) $ and the symbol $\sigma_-(\tilde{b}^-\tilde{\sigma}_-) $ are uniquely determined by $\tilde{b}^-\tilde{\sigma}_-$ under the condition that $\sigma_-(\tilde{b}^-\tilde{\sigma}_-) $ is synchronizing and that $a^-(\tilde{b}^-\tilde{\sigma}_-) $ does not contain a synchronizing symbol.
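For orientation, and merely restating the decomposition just introduced, the image word splits at its last synchronizing symbol:
$$
\tilde{\varPhi}(\tilde{b}^-\tilde{\sigma}_-) \; = \; \underbrace{b^-(\tilde{b}^-\tilde{\sigma}_-)}_{\text{initial segment}} \;\, \underbrace{\sigma_-(\tilde{b}^-\tilde{\sigma}_-)}_{\text{last synchronizing symbol}} \;\, \underbrace{a^-(\tilde{b}^-\tilde{\sigma}_-)}_{\text{contains no synchronizing symbol}}.
$$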
We set $$ I_-(\tilde{b}^-\tilde{\sigma}_-) = \ell(a^-(\tilde{b}^-\tilde{\sigma}_-)). $$ Denoting by $d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-)$ the longest prefix of $a^-(\tilde{b}^-\tilde{\sigma}_-)\tilde{\varPhi}(\tilde{d}^-)\alpha_-$ that is in ${\mathcal D}(\sigma_-(\tilde{b}^-\tilde{\sigma}_-),\alpha_-)$, we have a mapping $$ \Psi_{\tilde{b}^-\tilde{\sigma}_-}: \tilde{d}^- \longrightarrow d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-), \qquad \tilde{d}^- \in {\mathcal D}(\tilde{\sigma}_-,\tilde{\alpha}_-). $$ Denote by ${\mathcal D}_{\tilde{b}^-\tilde{\sigma}_-}(\alpha_-)$ the set of $d^- \in {\mathcal D}(\sigma_-(\tilde{b}^-\tilde{\sigma}_-),\alpha_-) $ such that the prefix of length $Q + L +1$ of the word $ b^-(\tilde{b}^-\tilde{\sigma}_-)\sigma_-(\tilde{b}^-\tilde{\sigma}_-)d^-\alpha_-^{Q+L} $ is equal to $\tilde{\varPhi}(\tilde{b}^-\tilde{\sigma}_-)$ and such that the prefix of length $Q + 2L + 1$ of the word $ \varPhi(b^-(\tilde{b}^-\tilde{\sigma}_-)\sigma_-(\tilde{b}^-\tilde{\sigma}_-)d^-\alpha_-^{Q+2L}) $ is a suffix of $\tilde{b}^-\tilde{\sigma}_-$. We use corresponding symbols with a time-symmetric meaning. We define $ H_-, H_+ \in { {\Bbb Z}_+ }$ by \begin{equation} \tilde{\varPhi}(c_{\tilde{X}}) = \alpha_-^{H_-} c_X \alpha_+^{H_+}. \end{equation} \begin{lemma} For $\tilde{\sigma}_- \in \tilde{\Sigma}_-(\tilde{X}), \tilde{b}^- \in \Gamma_{Q+L}^-(\tilde{\sigma}_-), $ the mapping $\Psi_{\tilde{b}^-\tilde{\sigma}_-}$ is a bijection of ${\mathcal D}(\tilde{\sigma}_-, \tilde{\alpha}_-)$ onto ${\mathcal D}_{\tilde{b}^-\tilde{\sigma}_-}(\alpha_-)$.
\end{lemma} \begin{proof} By construction $$ \Psi_{\tilde{b}^-\tilde{\sigma}_-} ({\mathcal D}(\tilde{\sigma}_-, \tilde{\alpha}_-)) \subset {\mathcal D}_{\tilde{b}^-\tilde{\sigma}_-}(\alpha_-), $$ and one confirms that the inverse of $ \Psi_{\tilde{b}^-\tilde{\sigma}_-} $ is given by the mapping that assigns to a $d^- \in {\mathcal D}_{\tilde{b}^-\tilde{\sigma}_-}(\alpha_-) $ the word that is obtained by removing the prefix of length $Q+1$ from the longest prefix of the word $ \varPhi(b^-(\tilde{b}^-\tilde{\sigma}_-)\sigma_-(\tilde{b}^-\tilde{\sigma}_-)d^-\alpha_-^{2L+1}) $ that does not end in $\tilde{\alpha}_-$. \end{proof} \begin{lemma} Let $\Sigma_-^+(X) = \emptyset$. Then also $\tilde{\Sigma}_-^+(\tilde{X}) = \emptyset$. \end{lemma} \begin{proof} If there were a $ \tilde{\sigma}^+_-\in \tilde{\Sigma}_-^+(\tilde{X}) $ and a $ \tilde{d}_+^- \in {\mathcal D}(\tilde{\sigma}_-^+, \tilde{\alpha}_+), $ then one would have for a $\tilde{b}^- \in \Gamma^-_{Q+L}(\tilde{\sigma}_-^+)$ that $$ \Psi_{\tilde{b}^-\tilde{\sigma}_-^+}(\tilde{d}_+^-) \in {\mathcal D}(\sigma_-(\tilde{b}^-\tilde{\sigma}_-^+), \alpha_+), $$ contradicting $\Sigma_-^+(X) = \emptyset$. \end{proof} \begin{lemma} For $\tilde{d}^+\tilde{\sigma}_+ \in \Omega_{reset}^+(\tilde{X}), \tilde{b}^+ \in \Gamma_{L+Q}^+(\tilde{\sigma}_+), $ one has $$ d^+(\tilde{d}^+\tilde{\sigma}_+ \tilde{b}^+) \sigma_+(\tilde{\sigma}_+\tilde{b}^+) \in \Omega^+_{reset}(X). $$ \end{lemma} \begin{proof} One has $$ D_+(d^+(\tilde{d}^+\tilde{\sigma}_+\tilde{b}^+) \sigma_+(\tilde{\sigma}_+\tilde{b}^+)) \le D_+(\tilde{d}^+\tilde{\sigma}_+) + I_+(\tilde{\sigma}_+\tilde{b}^+) + \ell(\tilde{d}^+) - \ell(d^+(\tilde{d}^+\tilde{\sigma}_+\tilde{b}^+)).
$$ \end{proof} \begin{lemma} Let $\tilde{d}_-^+\tilde{\sigma}_+^- \in \Omega^-(\tilde{X}), \tilde{b}^+ \in \Gamma_{L+Q}^+(\tilde{\sigma}_+^-), $ and let \begin{align} d^+\sigma_+^-(\tilde{\sigma}_+^-\tilde{b}^+) & \in \Omega^+_{reset}(X) \\ \intertext{and} k_+ & > 2L \end{align} be such that \begin{equation} d_-^+(\tilde{d}_-^+\tilde{\sigma}_+^- \tilde{b}^+) = c_X \alpha_+^{k_+ +D_+(d^+\sigma_+^-(\tilde{\sigma}_+^-\tilde{b}^+))}d^+. \end{equation} Then $$ \tilde{d}_-^+\tilde{\sigma}_+^- \in \Omega^-_{reset}(\tilde{X}). $$ \end{lemma} \begin{proof} By (3.13) $$ \alpha_-^{L + l_- + H_-}c_X \alpha_+^{l_+ + k_+ + D_+(d^+\sigma_+^-(\tilde{\sigma}_+^-\tilde{b}^+))} d^+ \sigma_+^-(\tilde{\sigma}_+^-\tilde{b}^+) \in {\mathcal L}(X), \quad l_-, l_+ \in {\Bbb N}, $$ and by (3.14) and (3.15) there is a $\tilde{k}_+ \in {\Bbb N}$ such that the word $$ \varPhi( \alpha_-^{L + l_- + H_-}c_X \alpha_+^{l_+ + k_+ + D_+(d^+\sigma_+^-(\tilde{\sigma}_+^-\tilde{b}^+))} d^+ \sigma_+^-(\tilde{\sigma}_+^-\tilde{b}^+) b^+(\tilde{\sigma}_+^-\tilde{b}^+)) $$ contains the word $$ \tilde{\alpha}_-^{l_-} c_{\tilde{X}} \tilde{\alpha}_+^{l_+ + \tilde{k}_+} \tilde{d}^+_-\tilde{\sigma}^-_+ $$ as a subword for $l_-, l_+ \in {\Bbb N}.$ \end{proof} \begin{lemma} For $\tilde{d}^+\tilde{\sigma}_+ \in \Omega^+(\tilde{X}), \tilde{b}^+ \in \Gamma_{L+Q}^+(\tilde{\sigma}_+), $ one has $$ \tilde{d}^+\tilde{\sigma}_+ \in \Omega^+_{reset}(\tilde{X}) $$ if and only if $$ d^+(\tilde{d}^+\tilde{\sigma}_+ \tilde{b}^+) \sigma_+(\tilde{\sigma}_+\tilde{b}^+) \in \Omega^+_{reset}(X). $$ \end{lemma} \begin{proof} This follows from Lemma 3.5 and Lemma 3.6. \end{proof} \begin{lemma} Let $X$ satisfy the reset condition. Then $\tilde{X}$ also satisfies the reset condition. \end{lemma} \begin{proof} It follows from Lemma 3.6 that there is a bound on the length of the words in $ \Omega^-(\tilde{X}) \backslash \Omega^-_{reset}(\tilde{X}). $ \end{proof} We note that the converse of Lemma 3.8 also holds.
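In summary, and merely restating Lemmas 3.3 through 3.8, the one-block conjugacy transports the combinatorial data of $\tilde{X}$ into that of $X$ as follows: for $\tilde{\sigma}_- \in \tilde{\Sigma}_-(\tilde{X}), \tilde{b}^- \in \Gamma^-_{Q+L}(\tilde{\sigma}_-),$ and $\tilde{d}^+\tilde{\sigma}_+ \in \Omega^+(\tilde{X}), \tilde{b}^+ \in \Gamma^+_{L+Q}(\tilde{\sigma}_+),$
\begin{gather*}
\Psi_{\tilde{b}^-\tilde{\sigma}_-}: {\mathcal D}(\tilde{\sigma}_-,\tilde{\alpha}_-) \longrightarrow {\mathcal D}_{\tilde{b}^-\tilde{\sigma}_-}(\alpha_-) \ \text{ is a bijection (Lemma 3.3)}, \\
\tilde{d}^+\tilde{\sigma}_+ \in \Omega^+_{reset}(\tilde{X}) \ \text{ if and only if } \ d^+(\tilde{d}^+\tilde{\sigma}_+\tilde{b}^+)\sigma_+(\tilde{\sigma}_+\tilde{b}^+) \in \Omega^+_{reset}(X) \ \text{ (Lemma 3.7)},
\end{gather*}
and the emptiness of $\Sigma_-^+$ (Lemma 3.4) and the reset condition (Lemma 3.8 and its converse) are inherited by $\tilde{X}$ from $X$.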
\begin{prop} $\tilde{X}$ has reset if and only if $X$ has reset. \end{prop} \begin{proof} Let $d^+\sigma_+ \in \Omega^+(X)$. To obtain $\tilde{\sigma}_+ \in \tilde{\Sigma}_+(\tilde{X})$ and $\tilde{b}^+ \in \Gamma_{L+Q}^+(\tilde{\sigma}_+)$ such that $d^+\sigma_+ \in {\mathcal D}_{\tilde{\sigma}_+\tilde{b}^+}(\alpha_+)$, let $a^+ \in \Gamma_{L+Q}^+(\sigma_+)$ and let $\tilde{\sigma}_+\tilde{b}^+$ be equal to the first subword of length $Q + L + 1$ of $$ \varPhi(\alpha_+^{2L+1} d^+ \sigma_+ a^+) $$ that begins with a synchronizing symbol. Apply Lemma 3.3 and Lemma 3.7. \end{proof} \begin{lemma} Let $X$ be a standard one-counter shift. Then $\tilde{X}$ is also a standard one-counter shift. \end{lemma} \begin{proof} By Lemma 3.4 $\tilde{\Sigma}_-^+(\tilde{X})$ is empty and by Lemma 3.8 $\tilde{X}$ satisfies the reset condition. Let $I, D_-, M_-, M_+, D_+$, $\Delta_-, J_-, J_+, \Delta_+$ be parameters for $X$. Let \begin{equation} \begin{split} \tilde{I} > & I + 2Q + 6L + M_- + M_+ + J_- + J_+ + \ell(c_X) \\ & + 2\max \{ \ell(\sigma_- d^-) \mid \sigma_- \in \Sigma_-(X), d^-\in {\mathcal D}(\sigma_-,\alpha_-) \} \\ & + 2\max \{ \ell(d^+ \sigma_+) \mid d^+ \sigma_+\in \Omega^+(X) \} \\ & + \max \{ \ell(d_-^+ \sigma_+^-) \mid d_-^+\sigma_+^- \in \Omega^-(X) \backslash \Omega^-_{reset}(X) \} \\ & + \max \bigcup_{\sigma_- \in \Sigma_-(X), d^-\in {\mathcal D}(\sigma_-,\alpha_-)} \Delta_-(\sigma_-d^-) \\ & + \max \bigcup_{ d^+\sigma_+ \in \Omega^+_{counter}(X)} \Delta_+(d^+\sigma_+), \end{split} \end{equation} \begin{align} \tilde{M}_- & = M_- + H_-, \qquad \tilde{M}_+ = M_+ + H_+, \\ \tilde{J}_- & = J_- + H_-, \qquad \tilde{J}_+ = J_+ + H_+, \end{align} \begin{gather} \begin{split} {\tilde D}_-(\tilde{\sigma}_-\tilde{d}^-) & = \max_{\tilde{b}^- \in \Gamma^-_{Q+L}(\tilde{\sigma}_-)} \{ D_-(\sigma_-(\tilde{b}^-\tilde{\sigma}_-) d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-)) \\ & \qquad -\ell(d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-)) + \ell(\tilde{d}^-) + I_-(\tilde{b}^-\tilde{\sigma}_-)\},\\ &
\qquad \qquad \tilde{\sigma}_- \in \tilde{\Sigma}_-(\tilde{X}), \tilde{d}^- \in {\mathcal D}(\tilde{\sigma}_-,\tilde{\alpha}_-), \end{split} \\ \begin{split} \tilde{D}_+(\tilde{d}^+\tilde{\sigma}_+) & = \min_{\tilde{b}^+ \in \Gamma^+_{L+Q}(\tilde{\sigma}_+)} \{ I_+(\tilde{\sigma}_+\tilde{b}^+)+ \ell(\tilde{d}^+) -\ell(d^+(\tilde{d}^+\tilde{\sigma}_+\tilde{b}^+)) \\ & \qquad + D_+(d^+(\tilde{d}^+\tilde{\sigma}_+\tilde{b}^+)\sigma_+(\tilde{\sigma}_+\tilde{b}^+)) \},\\ & \qquad \qquad \tilde{d}^+\tilde{\sigma}_+ \in \Omega^+_{reset}(\tilde{X}), \end{split} \end{gather} and \begin{gather} \begin{split} \tilde{\Delta}_-(\tilde{\sigma}_-\tilde{d}^-) & = \bigcup_{\tilde{b}^- \in \Gamma^-_{Q +L}(\tilde{\sigma}_-)} \bigl( \Delta_-(\sigma_-(\tilde{b}^-\tilde{\sigma}_-) d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-)) + \ell({\tilde{d}^-}) \\ & \qquad -\ell(d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-)) + I_-(\tilde{b}^-\tilde{\sigma}_-) \bigr),\\ & \qquad \qquad \tilde{\sigma}_- \in \tilde{\Sigma}_-(\tilde{X}), \tilde{d}^- \in {\mathcal D}(\tilde{\sigma}_-, \tilde{\alpha}_-), \end{split} \\ \begin{split} \tilde{\Delta}_+(\tilde{d}^+\tilde{\sigma}_+) & = \bigcup_{\tilde{b}^+ \in \Gamma^+_{L+Q}(\tilde{\sigma}_+)} \bigl( I_+(\tilde{\sigma}_+\tilde{b}^+)-\ell(d^+(\tilde{d}^+\tilde{\sigma}_+\tilde{b}^+)) + \ell(\tilde{d}^+) \\ & \qquad + \Delta_+(d^+(\tilde{d}^+\tilde{\sigma}_+\tilde{b}^+)\sigma_+(\tilde{\sigma}_+\tilde{b}^+)) \bigr),\\ & \qquad \qquad \tilde{d}^+\tilde{\sigma}_+ \in \Omega^+_{counter}(\tilde{X}). \end{split} \end{gather} We prove that \begin{equation*} \begin{split} & \{ \tilde{c} \in {\mathcal C}(\tilde{X}) \mid \ell(\tilde{c})=\tilde{I} \} \\ \subset \quad & {\mathcal C}^{(\tilde{X})}_{-} \cup {\mathcal C}^{(\tilde{X})}_{reset}(\tilde{D}_-,\tilde{M}_-,\tilde{M}_+, \tilde{D}_+) \cup {\mathcal C}^{(\tilde{X})}_{counter}(\tilde{\Delta}_-,\tilde{J}_-,\tilde{J}_+, \tilde{\Delta}_+).
\end{split} \end{equation*} Given a word $\tilde{c} \in {\mathcal C}(\tilde{X})$ of length $\tilde{I}$, let $\tilde{\sigma}_-$ be the first symbol of $\tilde{c}$ and let $\tilde{\sigma}_+ \in t(\tilde{c})$. Also let $$ \tilde{b}^-\in \Gamma^-_{Q + L}(\tilde{\sigma}_-),\qquad \tilde{b}^+\in \Gamma^+_{L + Q}(\tilde{\sigma}_+), $$ and let a word $c \in {\mathcal L}(X)$ be given by $$ \tilde{\varPhi}(\tilde{b}^- \tilde{c}\tilde{\sigma}_+ \tilde{b}^+) =b^- ( \tilde{b}^- \tilde{\sigma}_-) c \sigma_+(\tilde{\sigma}_+ \tilde{b}^+) b^+(\tilde{\sigma}_+ \tilde{b}^+). $$ By (3.16) \begin{equation*} \begin{split} c \in & {\mathcal C}^{(X)}_{-}(\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)) \cup {\mathcal C}^{(X)}_{reset}(D_-,M_-,M_+, D_+;\sigma_+(\tilde{\sigma}_+ \tilde{b}^+))\\ & \cup {\mathcal C}^{(X)}_{counter}(\Delta_-,J_-,J_+, \Delta_+;\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)). \end{split} \end{equation*} In the case that \begin{equation} c \in {\mathcal C}^{(X)}_{-}(\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)), \end{equation} one has $ \sigma_+(\tilde{\sigma}_+ \tilde{b}^+) \in \Sigma_+^-(X), $ and there are $$ d^- \in {\mathcal D}(\sigma_-(\tilde{b}^-\tilde{\sigma}_-),\alpha_-),\qquad d^+_- \in {\mathcal D}(\alpha_-,\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)), $$ and $k_-\in {\Bbb N}$ such that \begin{equation*} d^+_-\sigma_+(\tilde{\sigma}_+ \tilde{b}^+) \in \Omega^-(X)\backslash\Omega_{reset}^-(X), \qquad c=\sigma_-(\tilde{b}^-\tilde{\sigma}_-) d^- \alpha_-^{k_-} d_-^+. \end{equation*} By (3.16) $k_- > 2 L$, and it is seen from the action of $\varPhi$ that one has, setting $$ \tilde{d}^- = \Psi^{-1}_{\tilde{b}^-\tilde{\sigma}_-}(d^-), \qquad \tilde{d}_-^+ = \Psi^{-1}_{\tilde{\sigma}_+\tilde{b}^+}(d_-^+), $$ and \begin{equation} \tilde{k}_- = k_- - I_-(\tilde{b}^-\tilde{\sigma}_-) + \ell(d^-) - \ell(\tilde{d}^-) - H_-, \end{equation} that \begin{equation*} \tilde{c} = \tilde{\sigma}_- \tilde{d}^- \tilde{\alpha}_-^{\tilde{k}_-} \tilde{d}^+_-. 
\end{equation*} Here $$ \tilde{d}_-^+ \tilde{\sigma}_+ \in \Omega^-(\tilde{X})\backslash\Omega^-_{reset}(\tilde{X}), $$ for otherwise one would have by Lemma 3.5 a contradiction to (3.23). This means that $$ \tilde{c} \in {\mathcal C}_-^{(\tilde{X})}(\tilde{\sigma}_+). $$ In the case that $$ c \in {\mathcal C}^{(X)}_{reset}(D_-,M_-,M_+, D_+;\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)), $$ there are $$ d^- \in {\mathcal D}(\sigma_-(\tilde{b}^-\tilde{\sigma}_-),\alpha_-),\qquad d^+ \in {\mathcal D}(\alpha_+,\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)), $$ and $k_-, k_+ \in {\Bbb N}$ such that $$ d^+ \sigma_+(\tilde{\sigma}_+ \tilde{b}^+) \in \Omega^+_{reset}(X), \qquad c = \sigma_-(\tilde{b}^-\tilde{\sigma}_-) d^- \alpha_-^{k_-}c_X \alpha_+^{k_+} d^+, $$ \begin{equation} D_-(\sigma_-(\tilde{b}^-\tilde{\sigma}_-)d^-) + k_- + M_- \ge M_+ + k_+ + D_+(d^+\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)). \end{equation} Set again \begin{align} \tilde{d}^- & = \Psi^{-1}_{\tilde{b}^-\tilde{\sigma}_-}(d^-),\\ \intertext{and also set} \tilde{d}^+ & = \Psi^{-1}_{\tilde{\sigma}_+\tilde{b}^+}(d^+), \end{align} $$ \tilde{d}_-^+ = \Psi^{-1}_{\tilde{\sigma}_+\tilde{b}^+}(c_X \alpha_+^{k_+}d^+).$$ If here \begin{equation} \tilde{d}_-^+ \tilde{\sigma}_+ \in \Omega^-(\tilde{X})\backslash\Omega^-_{reset}(\tilde{X}), \end{equation} then by Lemma 3.6 $k_+ \le 2 L$, and then by (3.16) $k_- > 2L$, and it is seen from the action of $\varPhi$ that, with $\tilde{k}_-$ given by the expression (3.24), \begin{equation*} \tilde{c} = \tilde{\sigma}_- \tilde{d}^- \tilde{\alpha}_-^{\tilde{k}_-} \tilde{d}^+_-. \end{equation*} By (3.28) this means that $$ \tilde{c} \in {\mathcal C}_-^{(\tilde{X})}(\tilde{\sigma}_+).
$$ If here $$ \tilde{d}_-^+ \tilde{\sigma}_+ \in \Omega^-_{reset}(\tilde{X}), $$ one has by (3.16) and (3.25) that $k_- > 2L$, and it is seen from the action of $\varPhi$ that, with $\tilde{k}_-$ given by the expression (3.24), and with \begin{equation} \tilde{k}_+ = k_+ - H_+ - \ell(\tilde{d}^+) + \ell(d^+) - I_+(\tilde{\sigma}_+\tilde{b}^+), \end{equation} one has \begin{equation*} \tilde{c} = \tilde{\sigma}_- \tilde{d}^- \tilde{\alpha}_-^{\tilde{k}_-} c_{\tilde{X}} \tilde{\alpha}_+^{\tilde{k}_+} \tilde{d}^+. \end{equation*} By (3.25), (3.19) and (3.20) \begin{equation*} \tilde{D}_-(\tilde{\sigma}_- \tilde{d}^-) + \tilde{k}_- + \tilde{M}_- \ge \tilde{M}_+ + \tilde{k}_+ + \tilde{D}_+(\tilde{d}^+\tilde{\sigma}_+), \end{equation*} and this means that $$ \tilde{c} \in {\mathcal C}^{(\tilde{X})}_{reset}(\tilde{D}_-,\tilde{M}_-,\tilde{M}_+, \tilde{D}_+;\tilde{\sigma}_+). $$ In the case that $$ c \in {\mathcal C}^{(X)}_{counter}(\Delta_-, J_-, J_+, \Delta_+;\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)), $$ there are $$ d^- \in {\mathcal D}(\sigma_-(\tilde{b}^-\tilde{\sigma}_-),\alpha_-),\qquad d^+ \in {\mathcal D}(\alpha_+,\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)), $$ and $$ D_- \in \Delta_-(\sigma_-(\tilde{b}^-\tilde{\sigma}_-)d^-),\qquad D_+ \in \Delta_+(d^+\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)), $$ and $k_-, k_+ \in {\Bbb N}$ such that \begin{equation} d^+\sigma_+(\tilde{\sigma}_+ \tilde{b}^+) \in \OPC, \end{equation} \begin{equation*} c= \sigma_-(\tilde{b}^-\tilde{\sigma}_-) d^- \alpha_-^{k_-} c_X {\alpha_+}^{k_+}d^+, \end{equation*} \begin{equation} D_- + k_- + J_- = J_+ + k_+ + D_+.
\end{equation} By (3.16) and (3.31) $k_-, k_+ > 2 L$, and with $\tilde{d}^-,\tilde{d}^+,\tilde{k}_-,\tilde{k}_+$ given by the expressions (3.26), (3.27), (3.24), (3.29) and with \begin{align} \tilde{D}_-& = D_- + \ell(\tilde{d}^-) -\ell(d^-) + I_-(\tilde{b}^-\tilde{\sigma}_-),\\ \tilde{D}_+& = I_+(\tilde{\sigma}_+ \tilde{b}^+) + \ell(\tilde{d}^+) -\ell(d^+) + D_+, \end{align} it is seen from the action of $\varPhi$ that \begin{equation*} \tilde{c} = \tilde{\sigma}_- \tilde{d}^- \tilde{\alpha}_-^{\tilde{k}_-} c_{\tilde{X}} \tilde{\alpha}_+^{\tilde{k}_+} \tilde{d}^+. \end{equation*} By (3.32) and (3.33) \begin{equation*} \tilde{D}_- + \tilde{k}_- + \tilde{J}_- = \tilde{J}_+ + \tilde{k}_+ + \tilde{D}_+, \end{equation*} and by Lemma 3.7 and by (3.30) this means that $$ \tilde{c} \in {\mathcal C}^{(\tilde{X})}_{counter}(\tilde{\Delta}_-, \tilde{J}_-, \tilde{J}_+, \tilde{\Delta}_+;\tilde{\sigma}_+). $$ We prove that \begin{equation} \{ \tilde{c} \in {\mathcal C}_-^{(\tilde{X})} \mid \ell(\tilde{c}) = \tilde{I} \} \subset {\mathcal C}(\tilde{X}). \end{equation} For $\tilde{\sigma}_+^-\in \tilde{\Sigma}_+^-(\tilde{X}),$ and for a word $ \tilde{c} \in {\mathcal C}_-^{(\tilde{X})}(\tilde{\sigma}_+^-) $ of length $\tilde{I}$ with the first symbol $\tilde{\sigma}_-$, there are $$ \tilde{d}^- \in {\mathcal D}(\tilde{\sigma}_-,\tilde{\alpha}_-),\qquad \tilde{d}^+_- \in {\mathcal D}(\tilde{\alpha}_-,\tilde{\sigma}_+^-), $$ and $\tilde{k}_- \in {\Bbb N}$ such that \begin{equation} \tilde{d}_-^+ \tilde{\sigma}_+^- \in \Omega^-(\tilde{X})\backslash\Omega^-_{reset}(\tilde{X}), \end{equation} \begin{equation*} \tilde{c} = \tilde{\sigma}_- \tilde{d}^- \tilde{\alpha}_-^{\tilde{k}_-} \tilde{d}^+_-.
\end{equation*} Let $$ \tilde{b}^-\in \Gamma^-_{Q + L}(\tilde{\sigma}_-),\qquad \tilde{b}^+\in \Gamma^+_{Q + L}(\tilde{\sigma}_+^-), $$ and let a word $c \in {\mathcal L}(X)$ in the symbols of $\Sigma$ be given by \begin{equation} \tilde{\varPhi}(\tilde{b}^- \tilde{c}\tilde{\sigma}_+^- \tilde{b}^+) = b^- ( \tilde{b}^- \tilde{\sigma}_-)\, c\, \sigma_+^-(\tilde{\sigma}_+^- \tilde{b}^+) b^+(\tilde{\sigma}_+^- \tilde{b}^+). \end{equation} From (3.36) it is seen that there is a $k_-\in {\Bbb N}$ such that \begin{equation*} c=\sigma_-(\tilde{b}^-\tilde{\sigma}_-) d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-) \alpha_-^{k_-} d_-^+(\tilde{d}^+_-\tilde{\sigma}_+^- \tilde{b}^+). \end{equation*} If here $$ d_-^+(\tilde{d}^+_-\tilde{\sigma}_+^- \tilde{b}^+) \sigma_+^-(\tilde{\sigma}^-_+\tilde{b}^+) \in \Omega^-(X)\backslash\Omega_{reset}^-(X), $$ then by (3.16) $k_- > 2L$, so that $ c \in {\mathcal C}_-^{(X)}(\sigma_+^-(\tilde{\sigma}^-_+\tilde{b}^+)). $ If here $$ d_-^+(\tilde{d}^+_-\tilde{\sigma}_+^- \tilde{b}^+) \sigma_+^-(\tilde{\sigma}^-_+\tilde{b}^+) \in \Omega_{reset}^-(X), $$ then there are $ d^+ \in {\mathcal D}(\alpha_+,\sigma_+^-(\tilde{\sigma}_+^- \tilde{b}^+)) $ and $k_+ \in {\Bbb N}$ such that $$ d^+\sigma_+^-(\tilde{\sigma}_+^- \tilde{b}^+) \in \OPR, \qquad d_-^+(\tilde{d}^+_-\tilde{\sigma}_+^- \tilde{b}^+) = c_X\alpha_+^{k_+}d^+. $$ By Lemma 3.6 and by (3.35) $k_+ \le 2L$, and then by (3.16) $k_- >2L$, and also \begin{equation*} D_-(\sigma_-(\tilde{b}^-\tilde{\sigma}_-)d^-) + k_- + M_- \ge M_+ + k_+ + D_+(d^+\sigma_+^-(\tilde{\sigma}_+^- \tilde{b}^+)), \end{equation*} and therefore $$ c \in {\mathcal C}^{(X)}_{reset}(D_-,M_-,M_+, D_+;\sigma_+^-(\tilde{\sigma}_+^- \tilde{b}^+)). $$ By (3.16) then $$ c \in {\mathcal C}(X), $$ and it is seen from the action of $\varPhi$ that the word $\tilde{c}$ is a subword of the word $$ \varPhi(b^-(\tilde{b}^-\tilde{\sigma}_-)\, c\, \sigma_+^-(\tilde{\sigma}_+^- \tilde{b}^+) b^+(\tilde{\sigma}_+^- \tilde{b}^+)) \in {\mathcal L}(\tilde{X}), $$ and (3.34) is confirmed.
We prove that \begin{equation} \{ \tilde{c} \in {\mathcal C}^{(\tilde{X})}_{reset}(\tilde{D}_-,\tilde{M}_-,\tilde{M}_+, \tilde{D}_+) \mid \ell(\tilde{c} ) = \tilde{I} \} \subset {\mathcal C}(\tilde{X}). \end{equation} For $\tilde{\sigma}_+ \in \tilde{\Sigma}_+(\tilde{X})$ and for a word $$ \tilde{c} \in {\mathcal C}^{(\tilde{X})}_{reset}(\tilde{D}_-,\tilde{M}_-,\tilde{M}_+, \tilde{D}_+;\tilde{\sigma}_+) $$ of length $\tilde{I}$ with the first symbol $\tilde{\sigma}_-$ there are $$ \tilde{d}^- \in {\mathcal D}(\tilde{\sigma}_-, \tilde{\alpha}_-),\qquad \tilde{d}^+ \in {\mathcal D}(\tilde{\alpha}_+, \tilde{\sigma}_+), $$ and $\tilde{k}_-, \tilde{k}_+ \in {\Bbb N}$ such that \begin{align} \tilde{d}^+ \tilde{\sigma}_+ & \in \Omega^-_{reset}(\tilde{X}),\\ \tilde{D}_-(\tilde{\sigma}_-\tilde{d}^-) + \tilde{k}_- + \tilde{M}_- & \ge \tilde{M}_+ + \tilde{k}_+ + \tilde{D}_+(\tilde{d}^+\tilde{\sigma}_+), \end{align} \begin{equation*} \tilde{c} = \tilde{\sigma}_- \tilde{d}^- \tilde{\alpha}_-^{\tilde{k}_-} c_{\tilde{X}} \tilde{\alpha}_+^{\tilde{k}_+} \tilde{d}^+. 
\end{equation*} By (3.12), (3.17), (3.19) and (3.20) one can select $$ \tilde{b}^-\in \Gamma^-_{Q + L}(\tilde{\sigma}_-),\qquad \tilde{b}^+\in \Gamma^+_{Q + L}(\tilde{\sigma}_+) $$ such that \begin{align*} \tilde{D}_-(\tilde{\sigma}_-\tilde{d}^-) & = D_-(\sigma_-(\tilde{b}^-\tilde{\sigma}_-)d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-)) - \ell(d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-)) + \ell(\tilde{d}^-) + I_-(\tilde{b}^-\tilde{\sigma}_-),\\ \tilde{D}_+(\tilde{d}^+\tilde{\sigma}_+) & = I_+(\tilde{\sigma}_+ \tilde{b}^+) + \ell(\tilde{d}^+) -\ell(d^+(\tilde{d}^+\tilde{\sigma}_+\tilde{b}^+)) + D_+(d^+(\tilde{d}^+\tilde{\sigma}_+\tilde{b}^+)\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)), \end{align*} and such that one has with \begin{align} k_- & = I_-(\tilde{b}^-\tilde{\sigma}_-) -\ell(d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-)) + \ell(\tilde{d}^-) + \tilde{k}_- + H_-, \\ k_+ & = H_+ + \tilde{k}_+ +\ell(\tilde{d}^+) -\ell(d^+(\tilde{d}^+\tilde{\sigma}_+\tilde{b}^+)) + I_+(\tilde{\sigma}_+\tilde{b}^+), \end{align} that \begin{equation} D_-(\sigma_-(\tilde{b}^-\tilde{\sigma}_-)d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-)) + k_- + M_- \ge M_+ + k_+ + D_+(d^+(\tilde{d}^+\tilde{\sigma}_+\tilde{b}^+)\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)). \end{equation} By (3.38) and Lemma 3.7 and by (3.42) it follows for the word $c$ in the symbols of $\Sigma$ that is given by \begin{equation*} \tilde{\varPhi}(\tilde{b}^- \tilde{c}\tilde{\sigma}_+ \tilde{b}^+) = b^- ( \tilde{b}^- \tilde{\sigma}_-) c \sigma_+(\tilde{\sigma}_+ \tilde{b}^+)b^+(\tilde{\sigma}_+ \tilde{b}^+), \end{equation*} that \begin{equation*} c = \sigma_-(\tilde{b}^-\tilde{\sigma}_-) d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-) \alpha_-^{k_-} c_X \alpha_+^{k_+}d^+(\tilde{d}^+\tilde{\sigma}_+ \tilde{b}^+) \in {\mathcal C}^{(X)}_{reset}(\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)). \end{equation*} By (3.16) then $c \in {\mathcal C}(X)$. 
By (3.16) and (3.42) $k_-, k_+ > 2L$ and from the action of $\varPhi$ it is seen that the word $\tilde{c}$ is a subword of the word \begin{equation*} {\varPhi}( b^- ( \tilde{b}^- \tilde{\sigma}_-) c \sigma_+(\tilde{\sigma}_+ \tilde{b}^+)b^+(\tilde{\sigma}_+ \tilde{b}^+) ) \in {\mathcal L}(X), \end{equation*} and (3.37) is confirmed. We prove that \begin{equation} \{ \tilde{c} \in {\mathcal C}^{(\tilde{X})}_{counter}(\tilde{\Delta}_-,\tilde{J}_-,\tilde{J}_+, \tilde{\Delta}_+) \mid \ell(\tilde{c} ) = \tilde{I} \} \subset {\mathcal C}(\tilde{X}). \end{equation} For $\tilde{\sigma}_+ \in \tilde{\Sigma}_+(\tilde{X})$ and a word $$ \tilde{c} \in {\mathcal C}^{(\tilde{X})}_{counter}(\tilde{\Delta}_-,\tilde{J}_-,\tilde{J}_+, \tilde{\Delta}_+;\tilde{\sigma}_+) $$ of length $\tilde{I}$ with a first symbol $\tilde{\sigma}_-$ there are $$ \tilde{d}^- \in {\mathcal D}(\tilde{\sigma}_-, \tilde{\alpha}_-),\qquad \tilde{d}^+ \in {\mathcal D}(\tilde{\alpha}_+, \tilde{\sigma}_+), $$ and $$ \tilde{D}^- \in \tilde{\Delta}_-(\tilde{\sigma}_- \tilde{d}^-),\qquad \tilde{D}^+ \in \tilde{\Delta}_+(\tilde{d}^+ \tilde{\sigma}_+), $$ and $\tilde{k}_-, \tilde{k}_+ \in {\Bbb N}$ such that \begin{align} \tilde{d}^+ \tilde{\sigma}_+ & \in \Omega^-_{counter}(\tilde{X}),\\ \tilde{D}_- + \tilde{k}_- + \tilde{J}_- & = \tilde{J}_+ + \tilde{k}_+ + \tilde{D}_+. 
\end{align} By (3.12), (3.18), (3.21) and (3.22) one can select $$ \tilde{b}^-\in \Gamma^-_{Q + L}(\tilde{\sigma}_-),\qquad \tilde{b}^+\in \Gamma^+_{Q + L}(\tilde{\sigma}_+), $$ such that there are \begin{align*} D_- \in & \Delta_-(\sigma_-(\tilde{b}^-\tilde{\sigma}_-)d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-)),\\ D_+ \in & \Delta_+(d^+(\tilde{d}^+\tilde{\sigma}_+\tilde{b}^+)\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)), \end{align*} such that \begin{align*} \tilde{D}_- & = D_- - \ell(d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-)) + \ell(\tilde{d}^-) + I_-(\tilde{b}^-\tilde{\sigma}_-),\\ \tilde{D}_+ & = I_+(\tilde{\sigma}_+ \tilde{b}^+) + \ell(\tilde{d}^+) -\ell(d^+(\tilde{d}^+\tilde{\sigma}_+\tilde{b}^+)) + D_+. \end{align*} With $k_-, k_+ \in {\Bbb N}$ given by the expressions (3.40) and (3.41) then \begin{equation} D_- + k_- + J_- = J_+ + k_+ + D_+. \end{equation} By (3.44) and Lemma 3.7 and by (3.45) it follows for the word $c$ in the symbols of $\Sigma$ that is given by \begin{equation*} \tilde{\varPhi}(\tilde{b}^- \tilde{c}\tilde{\sigma}_+ \tilde{b}^+) = b^- ( \tilde{b}^- \tilde{\sigma}_-) c \sigma_+(\tilde{\sigma}_+ \tilde{b}^+)b^+(\tilde{\sigma}_+ \tilde{b}^+), \end{equation*} that \begin{equation*} c = \sigma_-(\tilde{b}^-\tilde{\sigma}_-) d^-(\tilde{b}^-\tilde{\sigma}_-\tilde{d}^-) \alpha_-^{k_-} c_X \alpha_+^{k_+}d^+(\tilde{d}^+\tilde{\sigma}_+ \tilde{b}^+) \in {\mathcal C}^{(X)}_{counter}(\sigma_+(\tilde{\sigma}_+ \tilde{b}^+)). \end{equation*} By (3.16) then $c \in {\mathcal C}(X)$. By (3.16) and (3.46) $k_-, k_+ > 2L$ and from the action of $\varPhi$ it is seen that the word $\tilde{c}$ is a subword of the word \begin{equation*} {\varPhi}( b^- ( \tilde{b}^- \tilde{\sigma}_-) c \sigma_+(\tilde{\sigma}_+ \tilde{b}^+)b^+(\tilde{\sigma}_+ \tilde{b}^+) ) \in {\mathcal L}(\tilde{X}), \end{equation*} and (3.43) is confirmed. 
We have shown that $ \tilde{I}, \tilde{D}_-,\tilde{M}_-,\tilde{M}_+, \tilde{D}_+, \tilde{\Delta}_-, \tilde{J}_-, \tilde{J}_+, \tilde{\Delta}_+ $ are parameters for $\tilde{X}$. \end{proof} \medskip {\bf 3 c. Shifts of standard one-counter type} One has a theorem that can be viewed as analogous to Theorem 1.1. \begin{theorem} Let $X \subset \Sigma^{\Bbb Z}$ be a subshift that is topologically conjugate to a standard one-counter shift. Then there exists an $n_\circ \in {\Bbb N}$ such that $X^{{\langle [1,n]\rangle}}$ is a standard one-counter shift, \begin{equation*} X^{{\langle [1,n]\rangle}} = scM({\mathcal C}^{(X^{{\langle [1,n]\rangle}})}), \qquad n \ge n_\circ. \end{equation*} \end{theorem} \begin{proof} Apply Lemma 2.2 and Lemma 3.10. \end{proof} One can view the class of standard one-counter shifts as extending the class of topological Markov shifts, and one is then led to introduce a class of subshifts of standard one-counter type as the class of subshifts that have a higher block system that is a standard one-counter shift. Theorem 3.11 is then equivalent to the statement that a subshift that is topologically conjugate to a subshift of standard one-counter type is itself of standard one-counter type. \section{$\lambda$-graph systems and $C^*$-algebras} Consider a $\lambda$-graph system ${\frak L} =(V,E,\lambda,\iota)$ over the alphabet $\Sigma$ with vertex set $ V = \cup_{l \in { {\Bbb Z}_+ }} V_{l}, $ edge set $ E = \cup_{l \in { {\Bbb Z}_+ }} E_{l,l+1}, $ labeling map $\lambda: E \rightarrow \Sigma$ and shift-like map $ \iota $ that is given by surjective maps $ \iota_{l,l+1}:V_{l+1} \rightarrow V_l,\ l \in { {\Bbb Z}_+ }. $ A subset ${{\mathcal V}}$ of $V$ is called hereditary if all $v \in V$ such that $\iota(v) \in {{\mathcal V}}$ are in ${{\mathcal V}}$, and if $v \in {{\mathcal V}}$ then all initial vertices of all edges that have $v$ as a final vertex are also in ${{\mathcal V}}$. 
A hereditary subset ${{\mathcal V}}$ is said to be proper if ${{\mathcal V}} \cap V_l \ne V_l$ for all $l \in {\Bbb N}$. Let us denote by $\{v_1^l,\dots, v_{m(l)}^l\}$ the vertex set $V_l$ at level $l$. For $ i=1,2,\dots,m(l),\ j=1,2,\dots,m(l+1), \ \alpha \in \Sigma, $ we put \begin{eqnarray*} A_{l,l+1}(i,\alpha,j) &=& {\begin{cases} 1 & \text{ if } \ s(e) = {v}_i^l, \lambda(e) = \alpha, t(e) = {v}_j^{l+1} \text{ for some } e \in E_{l,l+1}, \\ 0 & \text{ otherwise,} \end{cases}} \\ I_{l,l+1}(i,j) &=& {\begin{cases} 1 & \text{ if } \ \iota_{l,l+1}({v}_j^{l+1}) = {v}_i^l, \\ 0 & \text{ otherwise.} \end{cases}} \end{eqnarray*} The $C^*$-algebra ${\mathcal O}_{\frak L}$ associated with ${\frak L}$ is the universal $C^*$-algebra generated by partial isometries $S_\alpha, \alpha \in \Sigma$ and projections $E_i^l, i=1,2,\dots,m(l),\ l \in { {\Bbb Z}_+ }$ subject to the following operator relations called $({\frak L})$: \begin{eqnarray*} & &\sum_{\beta \in \Sigma} S_{\beta}S_{\beta}^* = 1, \\ \sum_{i=1}^{m(l)} E_i^l &=& 1, \qquad E_i^l = \sum_{j=1}^{m(l+1)}I_{l,l+1}(i,j)E_j^{l+1}, \\ & & S_\alpha S_\alpha^* E_i^l =E_i^l S_\alpha S_\alpha^*, \\ S_{\alpha}^*E_i^l S_{\alpha} &=& \sum_{j=1}^{m(l+1)} A_{l,l+1}(i,\alpha,j)E_j^{l+1}, \end{eqnarray*} for $ i=1,2,\dots,m(l),\ l\in { {\Bbb Z}_+ }, \alpha \in \Sigma $ \cite{Ma2002a}. For a subshift $X\subset \Sigma^{\Bbb Z}$ we recall the construction of its future $\lambda$-graph system ${}^X\!{\frak L}$. The label set of ${}^X\!{\frak L}$ is $\Sigma$ and its vertex set is $$ V(X) = \cup_{l \in { {\Bbb Z}_+ }}V_l(X) $$ where $V_0(X)$ contains the singleton set that contains the empty word, and where $$ V_l(X) = \{ \Gamma_l^+(x^-) \mid x^- \in X_{(-\infty,0]} \}, \qquad l \in {\Bbb N}. 
$$ All edges of ${ {}^X\!{\frak L} }$ leave a vertex in $\cup_{l \in {\Bbb N}} V_l(X)$, and a vertex $ v \in \cup_{l \in {\Bbb N}} V_l(X)$ has an outgoing edge that carries the label $\sigma \in \Sigma$ if and only if $v$ contains a word that begins with $\sigma$, and the target vertex of this outgoing edge is equal to $\{ a \in \Gamma^+_{[1,l)} \mid \sigma a \in v \}, l \in {\Bbb N}$. The mapping $$ \iota : \cup_{l \in {\Bbb N}} V_l(X) \longrightarrow \cup_{l \in { {\Bbb Z}_+ }} V_l(X) $$ deletes last symbols. \begin{theorem} Let $X \subset \Sigma^{\Bbb Z}$ be a standard one-counter shift with a characteristic pair $(({\alpha}_-)_{i \in {\Bbb Z}}, ({\alpha}_+)_{i \in {\Bbb Z}}) $ of fixed points. Then \begin{enumerate}\renewcommand{\labelenumi}{(\roman{enumi})} \item $V(X)$ has a proper hereditary subset if and only if $X$ has no reset. \item ${ {\frak L}^X }$ has a proper hereditary subset. \end{enumerate} \end{theorem} \begin{proof} (i) Let $\OPR \ne \emptyset$. Let $I, D_-, M_-, M_+, D_+$ be parameters for $X$, where $I$ is chosen such that $scM(\{ c \in {\mathcal L}(X) \mid \ell(c) \le I \})$ is an aperiodic and topologically transitive subshift of finite type with alphabet $\Sigma$. Let $Q \in {\Bbb N}$ be such that for $\sigma, \sigma' \in { \Sigma_{synchro}(X) }$ there exists for $ q > Q$ an admissible concatenation of words in $\{ c \in {\mathcal L}(X) \mid \ell(c) \le I \}$ of length $q$ that begins with $\sigma$ and that can be followed by $\sigma'$. With $D > I$ such that also $$ D > \ell(c_X) + M_- + M_+ + \ell(d^-) + D_-(\sigma_-d^-), \qquad \sigma_- \in \Sigma_-, d^- \in {\mathcal D}(\sigma_-,\alpha_-), $$ one has for $x^- \in X_{(-\infty,0]}$, that $\Gamma_D^+(x^-)$ contains a synchronizing symbol. Let $x^- \in X_{(-\infty,0]}, l \in {\Bbb N}$. 
One can choose a word $ a\in {\mathcal L}(X)$ of length less than $l + D$ such that $$ \Gamma_l^+(x^-) = \Gamma_l^+(a) $$ and for $y^- \in X_{(-\infty,0]}$ one has that $\Gamma_{l +2D +Q}^+(y^-)$ contains a word with suffix $a$. It follows that $V(X)$ has no proper hereditary subset. In case that $\OPR = \emptyset$ one has $\{ \alpha_+^l \} \in V_l(X), l \in {\Bbb N}$ and it follows that the set $\cup_{l \in {\Bbb N}} (V_l(X) \backslash \{ \alpha_+^l \})$ is a proper hereditary subset of $V(X)$. (ii) Here $\cup_{l \in {\Bbb N}}(V_l(X) \backslash \{ \alpha_-^l \}) $ is a proper hereditary subset of $V(X)$. \end{proof} \begin{cor} Let $X$ be a standard one-counter shift. Then \begin{enumerate}\renewcommand{\labelenumi}{(\roman{enumi})} \item ${\mathcal O}_{{ {}^X\!{\frak L} }}$ is simple if and only if $X$ has reset. \item ${\mathcal O}_{{ {\frak L}^X }}$ is not simple. \end{enumerate} \end{cor} \begin{proof} There exists a bijective correspondence between hereditary subsets of the vertex set $V$ and ideals in the $C^*$-algebra ${\mathcal O}_{\frak L}$ (\cite{Ma2002a}, \cite{Ma2006a}). \end{proof} For the notion of flow equivalence see \cite{BF, Fr, PS, Th}. For a subshift $Y \subset \Sigma^{\Bbb Z}$ and for $ \sigma \in \Sigma,\ \sigma' \not\in \Sigma, $ we say that the subshift $ Y' \subset (\Sigma \cup \{ \sigma' \})^{\Bbb Z} $ is obtained from the subshift $Y$ by replacing in $Y$ the symbol $\sigma$ by $\sigma \sigma'$ if for every admissible word $a'$ of $Y'$ there is an admissible word $a$ of $Y$ such that $a'$ can be obtained by replacing in $a$ the symbol $\sigma$ by the word $\sigma \sigma'$ and then, if necessary, still removing the first symbol or the last symbol or both. We say then also that $Y$ is obtained from $Y'$ by replacing in $Y'$ the word $\sigma \sigma'$ by the symbol $\sigma$. 
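At the level of individual admissible words, the two replacement operations just described are simple string rewritings. The following toy sketch (illustrative only; the function names are ours, and it ignores the removal of first or last symbols at word boundaries) shows the expansion $\sigma \mapsto \sigma\sigma'$ and its inverse:

```python
def expand(word, s, sp):
    """Replace every occurrence of the symbol s by the two-symbol word s + sp."""
    return "".join(s + sp if x == s else x for x in word)

def collapse(word, s, sp):
    """Inverse rewriting: replace every occurrence of the word s + sp by s."""
    return word.replace(s + sp, s)

w = "abba"
assert expand(w, "a", "x") == "axbbax"
assert collapse(expand(w, "a", "x"), "a", "x") == w
```

The two rewritings are mutually inverse on individual words; the subshift-level definition additionally has to account for the truncation of words at their boundaries.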
Subshifts $X \subset \Sigma^{\Bbb Z}$ and $\widetilde{X} \subset \widetilde{\Sigma}^{\Bbb Z}$ are flow equivalent if there exists a chain of subshifts \begin{equation*} Y[q] \subset \Sigma[q]^{\Bbb Z}, \quad 1 \le q \le Q, \ Q \in {\Bbb N}, \qquad Y[1] = X, \quad Y[Q] = \widetilde{X}, \end{equation*} such that $Y[q]$ is topologically conjugate to $Y[q+1]$ or $Y[q+1]$ is obtained from $Y[q]$ by replacing in $Y[q]$ a symbol $\sigma$ by the word $\sigma \sigma'$ or $Y[q]$ is obtained from $Y[q+1]$ by replacing in $Y[q+1]$ a symbol $\sigma$ by the word $\sigma \sigma'$. We remark at this point that the definition of a standard one-counter shift can be given a more general formulation in which the characteristic pair of fixed points is replaced by a pair of periodic points. In this way one arrives at a class of subshifts that is closed under flow equivalence. \begin{cor} A standard one-counter shift with reset is not flow equivalent to its inverse. \end{cor} \begin{proof} The ideal structure of the $C^*$-algebra ${\mathcal O}_{{ {}^X\!{\frak L} }}$ is an invariant of flow equivalence \cite{Ma2001b}. Apply Theorem 4.1. \end{proof} \section{K-groups} We will compute the K-groups and the Bowen-Franks groups of the one-counter shift \begin{equation*} { sc(({\mathcal C}^{(N)}_{reset})^{rev}) } = sc(\{ a_n \alpha_+^m \alpha_-^k \mid 1 \le n\le N, \ m,k \in {\Bbb N}, m \le k \}), \end{equation*} that is, of the future $\lambda$-graph system of ${ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }$ or, equivalently, of the past $\lambda$-graph system of ${ sc({\mathcal C}^{(N)}_{reset}) }$. The setup that we choose is for the future $\lambda$-graph system of ${ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }$. Let $({ {\mathcal M} },I) =({ {\mathcal M} }_{l,l+1}, I_{l,l+1})_{l\in { {\Bbb Z}_+ }}$ be the symbolic matrix system of ${ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }$ (the future $\lambda$-graph system of ${ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }$). 
Let $(M,I) = (M_{l,l+1}, I_{l,l+1})_{l\in { {\Bbb Z}_+ }}$ be its nonnegative matrix system. The entries of the nonnegative matrix $M_{l,l+1}$ count the number of symbols of the corresponding entries of ${ {\mathcal M} }_{l,l+1}$. We denote by $m(l)$ the row size of ${ {\mathcal M} }_{l,l+1}$, so that both matrices $M_{l,l+1}$ and $ I_{l,l+1}$ are $m(l) \times m(l+1)$ matrices. They satisfy the following relations $$ I_{l,l+1}M_{l+1,l+2} = M_{l,l+1} I_{l+1,l+2}, \qquad l \in { {\Bbb Z}_+ }. $$ We denote by $ \bar{I}^t_{l,l+1}, l \in { {\Bbb Z}_+ } $ the homomorphism from $ {\Bbb Z}^{m(l)} / (M_{l-1,l}^t - I_{l-1,l}^t){{\Bbb Z}^{m(l-1)}} $ to $ {\Bbb Z}^{m(l+1)} / (M_{l,l+1}^t - I_{l,l+1}^t){{\Bbb Z}^{m(l)}} $ induced by ${I}^t_{l,l+1}.$ Then as in \cite{Ma1999c} \begin{align} K_0( { sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) & = \underset{l}{\varinjlim} \{ {\Bbb Z}^{m(l+1)} / (M_{l,l+1}^t - I_{l,l+1}^t){{\Bbb Z}^{m(l)}}, \bar{I}^t_{l,l+1} \}, \\ K_1({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) & =\underset{l}{\varinjlim}\{ {{\operatorname{Ker}}} (M_{l,l+1}^t - I_{l,l+1}^t) \text{ in } {\Bbb Z}^{m(l)}, I^t_{l,l+1} \}. \end{align} Let ${\Bbb Z}_{I}$ be the group of the projective limit $\underset{l}{\varprojlim} \{ {\Bbb Z}^{m(l)}, {I}_{l,l+1} \}. $ The sequence $M_{l,l+1} - I_{l,l+1}, l \in { {\Bbb Z}_+ }$ acts on it as an endomorphism, denoted by $M - I.$ The Bowen-Franks groups $BF^i({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) } ), i=0,1,$ are defined by \begin{align*} BF^0({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) & = {\Bbb Z}_{I} / (M-I){\Bbb Z}_{I}, \\ BF^1({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) & = {{\operatorname{Ker}}}(M-I) \quad \text{ in }\quad {\Bbb Z}_{I}. \end{align*} We denote the symbols $\alpha_+, \alpha_-$ in the subshift ${ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }$ now by $b,c$ respectively. 
For $l \in {\Bbb N}$, consider the following subsets $\{ F_i^l \}_{i=1, \dots, 2l+2}$ of the right one-sided shift ${ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }_{[1,\infty)}$. \begin{align*} F_1^l = & \{ {(x_n)}_{n\in {\Bbb N}} \in { sc(({\mathcal C}^{(N)}_{reset})^{rev}) }_{[1,\infty)} \mid x_1 = b, x_2 = \cdots = x_{l+2} = c,\\ &\qquad \qquad \qquad \qquad \qquad \qquad x_{l+3} = a_i \text{ for some } 1\le i \le N \},\\ F_2^l = & \{ {(x_n)}_{n\in {\Bbb N}} \in { sc(({\mathcal C}^{(N)}_{reset})^{rev}) }_{[1,\infty)} \mid x_1 = b, x_2 = \cdots = x_{l+1} = c,\\ &\qquad \qquad \qquad \qquad \qquad \qquad x_{l+2} = a_i \text{ for some } 1\le i \le N \},\\ & \vdots \\ F_{l+1}^l = & \{ {(x_n)}_{n\in {\Bbb N}} \in { sc(({\mathcal C}^{(N)}_{reset})^{rev}) }_{[1,\infty)} \mid x_1 = b, x_2 = c, x_{3} = a_i \text{ for some } 1\le i \le N \},\\ F_{l+2}^l = & \{ {(x_n)}_{n\in {\Bbb N}} \in { sc(({\mathcal C}^{(N)}_{reset})^{rev}) }_{[1,\infty)} \mid x_1 = a_i \text{ for some } 1\le i \le N \},\\ F_{l+3}^l = & \{ {(x_n)}_{n\in {\Bbb N}} \in { sc(({\mathcal C}^{(N)}_{reset})^{rev}) }_{[1,\infty)} \mid x_1 = c, x_2 = a_i \text{ for some } 1\le i \le N \},\\ F_{l+4}^l = & \{ {(x_n)}_{n\in {\Bbb N}} \in { sc(({\mathcal C}^{(N)}_{reset})^{rev}) }_{[1,\infty)} \mid x_1 = x_2 = c, x_3 = a_i \text{ for some } 1\le i \le N \},\\ & \vdots \\ F_{2l+2}^l = & \{ {(x_n)}_{n\in {\Bbb N}} \in { sc(({\mathcal C}^{(N)}_{reset})^{rev}) }_{[1,\infty)} \mid x_1 = \cdots = x_l = c, x_{l+1}= a_i \text{ for some } 1\le i \le N \}. \end{align*} The sets $\{ F_i^l \}_{i=1,\dots,2l+2}$ are the $l$-past equivalence classes of $ { sc(({\mathcal C}^{(N)}_{reset})^{rev}) }$. Put $m(l) = 2l+2.$ Let $v_i^l, i=1,\dots,m(l)$, be the vertices of the vertex set $V_l$ of the canonical $\lambda$-graph system ${\frak L}^{{ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }}$ for the subshift ${ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }$. The vertex $v_i^l$ is considered to be the class $[F_i^l]$ of $F_i^l$. 
For a symbol $\gamma$, if $\gamma F_j^{l+1} $ is contained in $F_i^l$, then a labeled edge labeled $\gamma$ from the vertex $v_i^l$ to the vertex $v_j^{l+1}$ is defined in the $\lambda$-graph system. Hence there are labeled edges labeled $a_n,n=1,\dots,N$ from $v_{l+2}^l$ to $v_j^{l+1}$ for $j=1,2,\dots, l+2$. There are labeled edges labeled $b$ from $v_{i}^l$ to $v_{2l+5-i}^{l+1}$ and to $v_i^{l+1}$ for $i=1,2,\dots, l+1$. There are labeled edges labeled $c$ from $v_i^l$ to $v_i^{l+1}$ for $i = l+3,l+4,\dots, 2l+2$, and from $v_{2l+2}^l $ to $v_{2l+3}^{l+1}$ and to $v_{2l+4}^{l+1}.$ If $F_j^{l+1} $ is contained in $F_i^l$, the $\iota$-map is defined by $\iota(v_j^{l+1}) = v_i^l$. Hence we have \begin{equation*} \iota(v_j^{l+1}) = \begin{cases} v_1^l & \text{ if } j=1,\\ v_{j-1}^l & \text{ if } j=2,3,\dots, 2l+3,\\ v_{2l+2}^l & \text{ if } j=2l + 4. \end{cases} \end{equation*} We will consider the symbolic matrix system ${ {\mathcal M} }_{l,l+1}, I_{l,l+1}$ on the ordered bases $ F_1^l, \cdots, F_{m(l)}^l. $ For $i=1,\dots, m(l)$ and $j= 1,\dots,m(l+1)$, we have \begin{align*} { {\mathcal M} }_{l,l+1}(i,j) & = {\begin{cases} a_1 + \cdots + a_N & \text{ if } i=l+2, \, j= 1,2,\cdots, l+2,\\ b & \text{ if } 1 \le i= j \le l+1, \\ b & \text{ if } i + j = 2l +5, \, 1 \le i \le l+1, \\ c & \text{ if } l+3 \le i= j \le 2l+2, \\ c & \text{ if } i = 2l + 2, j= 2l +3, 2l+4, \\ 0 & \text{ otherwise, } \end{cases}}\\ I_{l,l+1}(i,j)& = {\begin{cases} 1 & \text{ if } i=j = 1, \\ 1 & \text{ if } 2 \le j= i+1 \le 2l+3, \\ 1 & \text{ if } i=2l+2, j= 2l+4, \\ 0 & \text{ otherwise. 
} \end{cases}} \end{align*} Hence we have $$ M_{l,l+1}^t(i,j) = \begin{cases} N & \text{ if } j=l+2, \, i= 1,2,\cdots, l+2,\\ 1 & \text{ if } 1 \le i= j \le l+1, \\ 1 & \text{ if } i + j = 2l +5, \, 1 \le j \le l+1, \\ 1 & \text{ if } l+3 \le i= j \le 2l+2, \\ 1 & \text{ if } j = 2l + 2, i= 2l +3, 2l+4, \\ 0 & \text{ otherwise, } \end{cases} $$ so that $$ M_{l,l+1}^t(i,j) - I_{l,l+1}^t(i,j) = \begin{cases} N & \text{ if } j=l+2, \, i= 1,2,\cdots, l+2,\\ 1 & \text{ if } 2 \le i= j \le 2l+2, \, i\ne l+2,\\ 1 & \text{ if } i + j = 2l +5, \, 1 \le j \le l+1, \\ -1 & \text{ if } 2 \le i = j+1\le 2l + 2, \\ 0 & \text{ otherwise, } \end{cases} $$ $$ \setcounter{MaxMatrixCols}{16} M_{l,l+1}^t -I_{l,l+1}^t = \begin{bmatrix} 0& \hdotsfor{5} & 0 & N & 0 & \hdotsfor{5} \\ -1& 1& 0 & \hdotsfor{3} & 0 & N & 0 & \hdotsfor{5} \\ 0&-1& 1 & 0 & \hdotsfor{2} & 0 & N & 0 & \hdotsfor{5} \\ \hdotsfor{1} & 0&-1 & 1 & 0 &\hdotsfor{1} & 0 & N & 0 & \hdotsfor{5} \\ \hdotsfor{2} & \cdot& \cdot &\cdot &\cdot &\cdot&\cdot& \cdot& \hdotsfor{5} \\ \hdotsfor{3} & \cdot &\cdot &\cdot &\cdot&\cdot& \cdot& \hdotsfor{5} \\ \hdotsfor{4} & 0 & -1 & 1 & N & 0 & \hdotsfor{5} \\ \hdotsfor{5} & 0 & -1 & N & 0 & \hdotsfor{5} \\ \hdotsfor{6} & 0 &-1 & 1 & 0 & \hdotsfor{4} \\ \hdotsfor{5} & 0 & 1 & 0 &-1 & 1 & 0 & \hdotsfor{3} \\ \hdotsfor{4} & 0 & 1 & 0 & 0 & 0 &-1 & 1 & 0 &\hdotsfor{2} \\ \hdotsfor{3} & \cdot & \cdot&\cdot & \hdotsfor{3} &\cdot &\cdot&\cdot &\cdot &\cdots \\ \hdotsfor{2} & \cdot& \cdot & \cdot&\hdotsfor{5} &\cdot&\cdot &\cdot &\cdot \\ \hdotsfor{1} & 0& 1 &0 & \hdotsfor{7} &0 & -1 &1 \\ 0& 1& 0 & \hdotsfor{10} &0 \\ 1& 0& \hdotsfor{11} &0 \end{bmatrix}. $$ We see that \begin{lemma} $ {{\operatorname{Ker}}}(M_{l,l+1}^t - I_{l,l+1}^t) = 0 \quad \text{ for } \quad 2 \le l \in {\Bbb N}. $ \end{lemma} Thus we have by (5.2), \begin{lemma} $ K_1({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) })\cong 0. $ \end{lemma} We will next compute $K_0({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) })$. 
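Lemma 5.1 can also be checked numerically for small values of $l$ and $N$: building $M^t_{l,l+1} - I^t_{l,l+1}$ from the case formula above, one verifies that it has full column rank over ${\Bbb Q}$, so that its kernel in ${\Bbb Z}^{2l+2}$ is trivial. This is a sanity check only, not part of the proof; the function names below are ours:

```python
from fractions import Fraction

def mt_minus_it(l, N):
    """(M^t - I^t)_{l,l+1} built from the case formula; (2l+4) x (2l+2), cases 1-indexed."""
    A = [[0] * (2*l + 2) for _ in range(2*l + 4)]
    for i in range(1, 2*l + 5):
        for j in range(1, 2*l + 3):
            if j == l + 2 and 1 <= i <= l + 2:
                A[i-1][j-1] = N
            elif 2 <= i == j <= 2*l + 2 and i != l + 2:
                A[i-1][j-1] = 1
            elif i + j == 2*l + 5 and 1 <= j <= l + 1:
                A[i-1][j-1] = 1
            elif 2 <= i == j + 1 <= 2*l + 2:
                A[i-1][j-1] = -1
    return A

def rank(A):
    """Rank over Q by exact Gaussian elimination with fractions."""
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# full column rank over Q forces Ker(M^t - I^t) = 0 in Z^{2l+2} (Lemma 5.1)
for l in range(2, 7):
    for N in (1, 2, 3):
        assert rank(mt_minus_it(l, N)) == 2*l + 2
```

Full column rank over ${\Bbb Q}$ is in fact stronger than needed: triviality of the integer kernel already follows from it.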
Set for $i=1,\dots, 2l+4, \, j=1,\dots, 2l +2$ $$ B_{l,l+1}(i,j) = \begin{cases} N & \text{ if } j=l+2, \, i= 1,\\ 1 & \text{ if } 2 \le i= j \le 2l+2, \, i\ne l+2,\\ 1 & \text{ if } (i, j) = (2l +4, 1), (2l+3, 2), \\ -1 & \text{ if } 2 \le i = j+1\le l + 2, \\ 0 & \text{ otherwise. } \end{cases} $$ That is $$ \setcounter{MaxMatrixCols}{16} B_{l,l+1} = \begin{bmatrix} 0& \hdotsfor{5} & 0 & N & 0 & \hdotsfor{5} \\ -1& 1& 0 & \hdotsfor{3} & 0 & 0 & 0 & \hdotsfor{5} \\ 0&-1& 1 & 0 & \hdotsfor{2} & 0 & 0 & 0 & \hdotsfor{5} \\ \hdotsfor{1} & 0&-1 & 1 & 0 &\hdotsfor{1} & 0 & 0 & 0 & \hdotsfor{5} \\ \hdotsfor{2} & \cdot& \cdot &\cdot &\cdot &\cdot&\cdot& \cdot& \hdotsfor{5} \\ \hdotsfor{3} & \cdot &\cdot &\cdot &\cdot&\cdot& \cdot& \hdotsfor{5} \\ \hdotsfor{4} & 0 & -1 & 1 & 0 & 0 & \hdotsfor{5} \\ \hdotsfor{5} & 0 & -1 & 0 & 0 & \hdotsfor{5} \\ \hdotsfor{6} & 0 & 0 & 1 & 0 & \hdotsfor{4} \\ \hdotsfor{5} & 0 & 0 & 0 & 0 & 1 & 0 & \hdotsfor{3} \\ \hdotsfor{4} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 &\hdotsfor{2} \\ \hdotsfor{3} & \cdot & \cdot&\cdot & \hdotsfor{3} &\cdot &\cdot&\cdot &\cdot &\cdots \\ \hdotsfor{2} & \cdot& \cdot & \cdot&\hdotsfor{5} &\cdot&\cdot &\cdot &\cdot \\ \hdotsfor{1} & 0& 0 &0 & \hdotsfor{7} &0 & 0 &1 \\ 0& 1& 0 & \hdotsfor{10} &0 \\ 1& 0& \hdotsfor{11} &0 \end{bmatrix}. $$ Let $P_l$ be the $(2 l +2) \times (2 l+2)$ matrix defined by setting for $i,j= 1,\dots, 2l+2$, $$ P_l(i,j) = \begin{cases} 1 & \text{ if } i= j,\\ -1 & \text{ if } j=1, \, i= 2, \dots, l+1, \\ 0 & \text{ otherwise. } \end{cases} $$ We know that \begin{equation} P_{l+1} (M_{l,l+1}^t - I_{l,l+1}^t) {\Bbb Z}^{2l+2} = B_{l,l+1}{\Bbb Z}^{2l+2}. \end{equation} Denote by $\bar{P}_{l+1}$ the induced isomorphism from ${\Bbb Z}^{2l+4} /(M_{l,l+1}^t - I_{l,l+1}^t) {\Bbb Z}^{2l+2}$ onto ${\Bbb Z}^{2l+4} /B_{l,l+1}{\Bbb Z}^{2l+2}$. 
Let $J_{l,l+1}$ be the $(2 l +4) \times (2 l+2)$ matrix defined by setting for $i=1,\dots, 2l+4, \, j= 1,\dots, 2l+2$, $$ J_{l,l+1}(i,j) = \begin{cases} 1 & \text{ if } i= j=1,\\ 1 & \text{ if } i=j-1, \, i= 2, \dots, 2l+3, \\ 1 & \text{ if } i=2l+4, \, j= 2l+2, \\ 0 & \text{ otherwise. } \end{cases} $$ Denote by $\bar{J}_{l,l+1}$ the induced homomorphism from ${\Bbb Z}^{2l+2} /B_{l-1,l}{\Bbb Z}^{2l}$ into ${\Bbb Z}^{2l+4} /B_{l,l+1}{\Bbb Z}^{2l+2}$. \begin{lemma} The diagram $$ \begin{CD} {\Bbb Z}^{2l+2}/(M_{l-1,l}^t - I_{l-1,l}^t) {\Bbb Z}^{2l} @>\bar{I}^t_{l,l+1}>> {\Bbb Z}^{2l+4}/(M_{l,l+1}^t - I_{l,l+1}^t) {\Bbb Z}^{2l+2} \\ @V \bar{P}_l VV @V \bar{P}_{l+1} VV \\ {\Bbb Z}^{2l+2}/ B_{l-1,l}{\Bbb Z}^{2l} @>\bar{J}_{l,l+1}>> {\Bbb Z}^{2l+4}/ B_{l,l+1}{\Bbb Z}^{2l+2} \end{CD} $$ is commutative. \end{lemma} For an integer $n$, we denote by $q(n) \in {\Bbb Z}$ the quotient of $n$ by $N$ and by $r(n) \in \{0,1,\dots, N-1\}$ its residue, so that $n = q(n) N + r(n)$. The following lemma is straightforward. \begin{lemma} Fix $l=2,3,\dots $. For $z = \begin{bmatrix} z_1 \\ \vdots \\ z_{2l+4} \end{bmatrix} \in {\Bbb Z}^{2l+4}, $ put inductively \begin{align*} x_1 & = z_{2l+4}, \\ x_2 & = z_{2l+3}, \\ x_{k} & = z_{k} \qquad \text{ for } k=l+3,l+4,\dots, 2l+2,\\ x_{l+2} & = q(z_{1}),\\ x_{l+1} & = -z_{l+2},\\ x_{l} & = -z_{l+1} -z_{l+2},\\ x_{l-k} & = - z_{l-k+1} - z_{l-k+2} - \cdots - z_{l+2}, \qquad \text{ for } k=1,2,\dots, l-3. \end{align*} Set \begin{align*} r_{l,l+1}(z) & = r(z_1) \in \{ 0,1,\dots, N-1 \},\\ \varphi_{l,l+1}(z) & = z_2 - z_{2l+3} + z_{2l+4},\\ \psi_{l,l+1}(z) & = z_3 + z_4+ z_5 + \cdots + z_{l+2} +z_{2l+3}. \end{align*} Then we have \begin{equation*} \begin{bmatrix} z_1 \\ \vdots \\ z_{2l+4} \end{bmatrix} = B_{l,l+1} \begin{bmatrix} x_1 \\ \vdots \\ x_{2l+2} \end{bmatrix} + \begin{bmatrix} r_{l,l+1}(z) \\ \varphi_{l,l+1}(z) \\ \psi_{l,l+1}(z)\\ 0 \\ \vdots \\ 0 \end{bmatrix}. \end{equation*} \end{lemma} The following lemma is also direct. 
\begin{lemma} For $z= [z_i]_{i=1}^{2l+4} \in {\Bbb Z}^{2l+4}$, one has $$ r_{l,l+1}(z) = 0 \text{ in } \{0,1,\dots,N-1 \} \quad \text{ and } \quad \varphi_{l,l+1}(z) = \psi_{l,l+1}(z)=0 \text{ in } {\Bbb Z} $$ if and only if there exists $y= [y_i]_{i=1}^{2l+2}\in {\Bbb Z}^{2l+2}$ such that $ z =B_{l,l+1}(y)$. \end{lemma} \begin{lemma} The map $ \xi_{l+1} : [z_i]_{i=1}^{2l+4} \in {\Bbb Z}^{2l+4} \longrightarrow (r_{l,l+1}(z), \varphi_{l,l+1}(z), \psi_{l,l+1}(z)) \in \{0,1,\dots,N-1 \}\oplus{\Bbb Z}\oplus{\Bbb Z} $ induces an isomorphism from $ {\Bbb Z}^{2l+4}/B_{l,l+1}{\Bbb Z}^{2l+2} $ onto $ {\Bbb Z}/N{\Bbb Z} \oplus {\Bbb Z}\oplus{\Bbb Z}. $ \end{lemma} \begin{proof} It suffices to show the surjectivity of $\xi_{l+1}$. For $(g,m,k) \in \{0,1,\dots,N-1 \}\oplus{\Bbb Z}\oplus{\Bbb Z}$, put $z =[g,m,k,0,\dots,0]^t \in {\Bbb Z}^{2l+4}$. One then sees that $$ r_{l,l+1}(z) = g, \qquad \varphi_{l,l+1}(z) =m, \qquad \psi_{l,l+1}(z) =k. $$ \end{proof} We denote by $\bar{\xi}_{l+1}$ the above isomorphism from $ {\Bbb Z}^{2l+4}/B_{l,l+1}{\Bbb Z}^{2l+2} $ onto $ {\Bbb Z}/N{\Bbb Z} \oplus {\Bbb Z}\oplus{\Bbb Z} $ induced by ${\xi}_{l+1}$. \begin{lemma} The diagram $$ \begin{CD} {\Bbb Z}^{2l+2}/B_{l-1,l}{\Bbb Z}^{2l} @>\bar{J}_{l,l+1}>> {\Bbb Z}^{2l+4}/B_{l,l+1}{\Bbb Z}^{2l+2} \\ @V \bar{\xi}_l VV @V \bar{\xi}_{l+1} VV \\ {\Bbb Z}/N{\Bbb Z} \oplus {\Bbb Z}\oplus{\Bbb Z} @>L >> {\Bbb Z}/N{\Bbb Z} \oplus {\Bbb Z}\oplus{\Bbb Z} \end{CD} $$ is commutative, where $L = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 1 \end{bmatrix}. $ \end{lemma} \begin{proof} For $z = [z_i ]_{i=1}^{2l+2} \in {\Bbb Z}^{2l+2}$, it is direct to see that \begin{align*} r_{l,l+1}(J_{l,l+1}(z)) & = r_{l-1,l}(z), \qquad \varphi_{l,l+1}(J_{l,l+1}(z)) = 0, \\ \psi_{l,l+1}(J_{l,l+1}(z)) & = \varphi_{l-1,l}(z) + \psi_{l-1,l}(z). \end{align*} \end{proof} Therefore we conclude \begin{lemma} $K_0({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) } ) \cong {\Bbb Z}/N{\Bbb Z} \oplus {\Bbb Z}$. 
\end{lemma} \begin{proof} By (5.1), Lemma 5.3 and Lemma 5.7, it follows that \begin{align*} K_0({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) = & \varinjlim \{ {\Bbb Z}^{2l + 4} / B_{l,l+1}{\Bbb Z}^{2l+2}, \overline{I^t}_{l,l+1} \} \\ = & \varinjlim \{ {\Bbb Z}/N{\Bbb Z} \oplus {\Bbb Z}\oplus{\Bbb Z}, L \} \\ \cong & {\Bbb Z}/N{\Bbb Z} \oplus {\Bbb Z}. \end{align*} \end{proof} As the torsion free part of $ K_0({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) $ is not isomorphic to $ K_1({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) $, these types of K-groups cannot appear in those of sofic systems. We next compute the Bowen-Franks groups $BF^0({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) })$ and $BF^1({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }).$ As in \cite[Theorem 9.6]{Ma1999c}, one sees the following short exact sequences of universal coefficient type: \begin{align*} 0& \rightarrow {{\operatorname{Ext}}}_{\Bbb Z}^1(K_{i}({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }),{\Bbb Z}) \\ & \rightarrow BF^{i}({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) \\ & \rightarrow {{\operatorname{Hom}}}_{\Bbb Z}(K_{i+1}({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }),{\Bbb Z}) \rightarrow 0. \end{align*} The sequences split unnaturally. \begin{lemma} $ BF^0({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) \cong {\Bbb Z}/ N {\Bbb Z}, \quad BF^1({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) \cong {\Bbb Z}^2. $ \end{lemma} \begin{proof} Since for a finitely generated abelian group $G$, ${{\operatorname{Hom}}}_{\Bbb Z}(G,\Bbb Z)$ is the torsion free part of $G$ and ${{\operatorname{Ext}}}_{\Bbb Z}^1(G,\Bbb Z)$ is the torsion part of $G$, one gets the desired assertions by Lemma 5.8. \end{proof} As the torsion free part of $ BF^0({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) $ is not isomorphic to $ BF^1({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) $, these types of Bowen-Franks groups cannot appear in those of sofic systems. 
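The computations behind Lemmas 5.4 through 5.8 can be sanity-checked mechanically. The sketch below (our own illustrative code, not part of the proofs) builds $B_{l,l+1}$ from its case formula, verifies the division-with-remainder identity $z = B_{l,l+1}x + (r, \varphi, \psi, 0, \dots, 0)^t$ of Lemma 5.4 on random integer vectors, and confirms that the matrix $L$ of Lemma 5.7 is idempotent; for an idempotent endomorphism the inductive limit $\varinjlim ({\Bbb Z}/N{\Bbb Z}\oplus{\Bbb Z}\oplus{\Bbb Z}, L)$ is the image of $L$, namely the triples $(g,0,m)$, which recovers ${\Bbb Z}/N{\Bbb Z}\oplus{\Bbb Z}$:

```python
import random

def B_matrix(l, N):
    """B_{l,l+1} built from its case formula; (2l+4) x (2l+2), cases 1-indexed."""
    B = [[0] * (2*l + 2) for _ in range(2*l + 4)]
    for i in range(1, 2*l + 5):
        for j in range(1, 2*l + 3):
            if i == 1 and j == l + 2:
                B[i-1][j-1] = N
            elif 2 <= i == j <= 2*l + 2 and i != l + 2:
                B[i-1][j-1] = 1
            elif (i, j) in ((2*l + 4, 1), (2*l + 3, 2)):
                B[i-1][j-1] = 1
            elif 2 <= i == j + 1 <= l + 2:
                B[i-1][j-1] = -1
    return B

def decompose(z, l, N):
    """x and (r, phi, psi) as in Lemma 5.4; z is a 0-indexed list of length 2l+4."""
    x = [0] * (2*l + 2)
    x[0] = z[2*l + 3]                    # x_1 = z_{2l+4}
    x[1] = z[2*l + 2]                    # x_2 = z_{2l+3}
    for k in range(l + 3, 2*l + 3):      # x_k = z_k for k = l+3,...,2l+2
        x[k - 1] = z[k - 1]
    x[l + 1] = z[0] // N                 # x_{l+2} = q(z_1), floor quotient
    for k in range(3, l + 2):            # x_k = -(z_{k+1} + ... + z_{l+2})
        x[k - 1] = -sum(z[k:l + 2])
    r = z[0] % N                         # r_{l,l+1}(z)
    phi = z[1] - z[2*l + 2] + z[2*l + 3]
    psi = sum(z[2:l + 2]) + z[2*l + 2]
    return x, (r, phi, psi)

# Lemma 5.4: z = B x + (r, phi, psi, 0, ..., 0)^t with 0 <= r < N
random.seed(1)
for l in (3, 5):
    for N in (2, 4):
        B = B_matrix(l, N)
        for _ in range(20):
            z = [random.randint(-9, 9) for _ in range(2*l + 4)]
            x, (r, phi, psi) = decompose(z, l, N)
            Bx = [sum(row[j] * x[j] for j in range(2*l + 2)) for row in B]
            rem = [r, phi, psi] + [0] * (2*l + 1)
            assert z == [u + v for u, v in zip(Bx, rem)] and 0 <= r < N

# L from Lemma 5.7 is idempotent, so varinjlim(G, L) is the image of L,
# i.e. the triples (g, 0, m): one Z/NZ summand and one free summand Z.
L = [[1, 0, 0], [0, 0, 0], [0, 1, 1]]
LL = [[sum(L[i][k] * L[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
assert LL == L
```

The idempotence $L^2 = L$ is the reason the inductive limit in the proof of Lemma 5.8 stabilizes after a single application of $L$.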
We restate Lemma 5.2, Lemma 5.8 and Lemma 5.9 as \begin{theorem} \begin{align*} K_0({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) \cong & {\Bbb Z}/N {\Bbb Z} \oplus {\Bbb Z}, \qquad K_1({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) \cong 0,\\ BF^0({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) \cong & {\Bbb Z}/N{\Bbb Z}, \quad \qquad BF^1({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }) \cong {\Bbb Z}^2. \end{align*} \end{theorem} We will next compute the K-groups for ${ sc({\mathcal C}^{(N)}_{reset}) }$. The computation is completely parallel to the preceding one, as follows. The $l$-past equivalence classes of ${ sc({\mathcal C}^{(N)}_{reset}) }$ can be taken analogously to those of ${ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }$. Let $({ {\mathcal M} },I) = ({ {\mathcal M} }_{l,l+1},I_{l,l+1})_{l\in { {\Bbb Z}_+ }}$ be the symbolic matrix system for ${ sc({\mathcal C}^{(N)}_{reset}) }$. We see that $$ { {\mathcal M} }_{l,l+1}(i,j) = \begin{cases} a_1 + \cdots + a_N & \text{ if } i= j=l+2,\\ b & \text{ if } 1 \le i= j \le l+1, \\ b & \text{ if } i + j = 2l +5, \, 1 \le i \le l+1, \\ c & \text{ if } l+3 \le i= j \le 2l+2, \\ c & \text{ if } i = 2l + 2, j= 2l +3, 2l+4, \\ 0 & \text{ otherwise. } \end{cases} $$ Only the $(l+2)$-nd row of ${ {\mathcal M} }_{l,l+1}$ differs from that of the symbolic matrix system for ${ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }$. The matrix $I_{l,l+1}$ is the same as the one for ${ sc(({\mathcal C}^{(N)}_{reset})^{rev}) }$. Let $(M_{l,l+1},I_{l,l+1})_{l\in { {\Bbb Z}_+ }}$ be its nonnegative matrix system. Hence we have $$ M_{l,l+1}^t(i,j) - I_{l,l+1}^t(i,j) = \begin{cases} N & \text{ if } i=j=l+2,\\ 1 & \text{ if } 2 \le i= j \le 2l+2, \, i\ne l+2,\\ 1 & \text{ if } i + j = 2l +5, \, 1 \le j \le l+1, \\ -1 & \text{ if } 2 \le i = j+1\le 2l + 2, \\ 0 & \text{ otherwise. 
} \end{cases} $$ By considering the kernels and cokernels of the following matrices $B_{l,l+1}, l\in {\Bbb N}$ defined by $$ B_{l,l+1}(i,j) = \begin{cases} N & \text{ if } i= j=l+2, \\ 1 & \text{ if } 2 \le i= j \le 2l+2, \, i\ne l+2,\\ 1 & \text{ if } (i, j) = (2l +4, 1), (2l+3, 2), \\ -1 & \text{ if } i=2, \, j=1, \\ 0 & \text{ otherwise, } \end{cases} $$ that is $$ \setcounter{MaxMatrixCols}{16} B_{l,l+1} = \begin{bmatrix} 0& \hdotsfor{5} & 0 & 0 & 0 & \hdotsfor{5} \\ -1& 1& 0 & \hdotsfor{3} & 0 & 0 & 0 & \hdotsfor{5} \\ 0& 0& 1 & 0 & \hdotsfor{2} & 0 & 0 & 0 & \hdotsfor{5} \\ \hdotsfor{1} & 0& 0 & 1 & 0 &\hdotsfor{1} & 0 & 0 & 0 & \hdotsfor{5} \\ \hdotsfor{2} & \cdot& \cdot &\cdot &\cdot &\cdot&\cdot& \cdot& \hdotsfor{5} \\ \hdotsfor{3} & \cdot &\cdot &\cdot &\cdot&\cdot& \cdot& \hdotsfor{5} \\ \hdotsfor{4} & 0 & 0 & 1 & 0 & 0 & \hdotsfor{5} \\ \hdotsfor{5} & 0 & 0 & N & 0 & \hdotsfor{5} \\ \hdotsfor{6} & 0 & 0 & 1 & 0 & \hdotsfor{4} \\ \hdotsfor{5} & 0 & 0 & 0 & 0 & 1 & 0 & \hdotsfor{3} \\ \hdotsfor{4} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 &\hdotsfor{2} \\ \hdotsfor{3} & \cdot & \cdot&\cdot & \hdotsfor{3} &\cdot &\cdot&\cdot &\cdot &\cdots \\ \hdotsfor{2} & \cdot& \cdot & \cdot&\hdotsfor{5} &\cdot&\cdot &\cdot &\cdot \\ \hdotsfor{1} & 0& 0 &0 & \hdotsfor{7} &0 & 0 &1 \\ 0& 1& 0 & \hdotsfor{10} &0 \\ 1& 0& \hdotsfor{11} &0 \end{bmatrix}. $$ We can similarly show that $$ K_1({ sc({\mathcal C}^{(N)}_{reset}) } )\cong 0 $$ and \begin{align*} K_0( { sc({\mathcal C}^{(N)}_{reset}) } ) = & \varinjlim \{ {\Bbb Z}^{2l + 4} / B_{l,l+1}{\Bbb Z}^{2l+2}, \overline{I^t}_{l,l+1} \} \\ = & \varinjlim \{ {\Bbb Z}/N{\Bbb Z} \oplus {\Bbb Z}\oplus{\Bbb Z}, L \} \\ \cong & {\Bbb Z}/N{\Bbb Z} \oplus {\Bbb Z}. 
\end{align*} Therefore we have \begin{theorem} \begin{align*} K_0({ sc({\mathcal C}^{(N)}_{reset}) } )& \cong K_0({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) } ) \cong {\Bbb Z}/N{\Bbb Z} \oplus {\Bbb Z},\\ K_1({ sc({\mathcal C}^{(N)}_{reset}) } )& \cong K_1({ sc(({\mathcal C}^{(N)}_{reset})^{rev}) } ) \cong 0. \end{align*} \end{theorem} \begin{cor} For $N,N'\in {\Bbb N},\ N \ne N'$, ${ sc({\mathcal C}^{(N)}_{reset}) }$ and $sc({\mathcal C}^{(N')}_{reset})$ are not flow equivalent to each other. \end{cor} \begin{proof} K-groups are invariants of flow equivalence (\cite{Ma2001a}). \end{proof} \bibliographystyle{amsplain}
\section{Introduction} Ordinary classical fluids display only one kind of sound wave, corresponding to longitudinal compressional oscillations of the fluid~\cite{LLfluids}. Classical solids, on the other hand, display transverse waves as well, which originate from their finite restoring force against shear deformations~\cite{LLelasticity}. Quantum Fermi fluids can dramatically differ from this paradigm by displaying long-lived and propagating collective shear sound waves at arbitrarily small frequency and wave vector while lacking any form of static crystalline order~\cite{Pines, Conti, Shear, Chubukov, Alekseev2019b}. To date, there is no report of the observation of these shear sound waves of electrons in metals, and a pioneering attempt to detect them in $^3$He~\cite{Roach} remained inconclusive~\cite{Flowers}. However, the appearance of these modes requires only a moderate interaction strength, in the sense that they are expected to become sharp when the quasiparticle mass exceeds approximately twice and three times the transport mass in two and three dimensions, respectively~\cite{Shear}. It is therefore possible that these elusive collective modes are actually present in a variety of electron liquids but have remained undetected so far because their transverse nature makes them unresponsive to charge-sensitive probes.
In this paper, we demonstrate that shear modes leave clear fingerprints in the conductivity of clean metallic channels. Our idealized setup is depicted in Fig.~\ref{Fig.mainfig}(a), where a uniform ac electric field generates an alternating current along the $y$ direction. In a clean channel, the current can only be damped at the boundary. This is illustrated by the current profiles shown in Figs.~\ref{Fig.mainfig}(b) and \ref{Fig.mainfig}(c), which are suppressed at the boundaries due to friction. The current magnitude varies in a direction \textit{transverse} to the electron flow, signaling the excitation of shear modes.
The central result of our work is summarized in Fig.~\ref{Fig.mainfig}(d), which shows the conductance of the strip as a function of frequency. When scattering due to impurities or electron-electron collisions is weak, the conductance exhibits sharp dips at frequencies $\omega=n \omega_0$, where $\omega_0$ is the shear sound frequency at momentum $2\pi/W$ determined by the width $W$ of the channel. In fact, when friction occurs only at the boundary (blue curve), the dissipative (real) part of the conductivity vanishes on resonance and the liquid responds in a dissipationless fashion. As we will show, this is the characteristic transverse response of a sliding crystal that is subjected to friction only at the boundaries. Therefore these resonances reveal a type of crystallinity that appears in Fermi liquids when they are probed dynamically. Such remarkable collective behavior could be observed in ultraclean samples such as those recently employed to observe hydrodynamic electron flow~\cite{electronhydro1,electronhydro2,electronhydro3,Alekseev2019}, but in the low-temperature quantum regime where the classical hydrodynamic description breaks down. A related behavior, in the form of oscillations of the absorption power as a function of magnetic field, was predicted in Ref.~\onlinecite{Alekseev2019} (see Fig.~2 of this reference). We note, however, that in the regime of long wavelengths in a magnetic field there is no well-defined separation into transverse and longitudinal modes, leading to a regime of collective modes crucially distinct from the one studied here. The conductivity dips shown in Fig.~\ref{Fig.mainfig}(d) are unique signatures of the shear sound and would be absent in weakly interacting metals where this mode does not exist (black curve). Likewise, the dips are washed out once scattering in the bulk becomes comparable to the boundary friction (dashed curve).
This is a consequence of the reduced shear force when the force difference between the interior and the boundary is small, as we will describe in detail.
Our paper is organized as follows. Section~\ref{Sec.II} generalizes the discussion of Ref.~\onlinecite{Shear} to describe the behavior of shear modes in the presence of impurity and electron-electron collisions in an ideal infinite two-dimensional (2D) system without boundaries. Section~\ref{Sec.III} is devoted to a conceptual discussion reviewing some of the key similarities and differences between the quantum Landau Fermi liquid (LFL), crystalline solids, ordinary classical fluids, and viscoelastic classical fluids, also for ideal infinite-size 2D systems. In Sec.~\ref{Sec.IV} we develop a theory describing the hydrodynamics of the LFL in a strip geometry and derive an exact analytic solution, which predicts the appearance of shear resonances in experiments. In Sec.~\ref{Sec.V} we show that these resonances are analogous to those arising from an ideal crystal sliding in a channel by studying a toy model. We summarize our results and discuss potential material candidates for observing these shear sound modes in Sec.~\ref{Sec.VI}.
\begin{figure*}[t] \includegraphics[scale=1.0]{Fig1x.eps} \caption{ (a) Experimental setup to detect the shear sound. The blue region illustrates the out-of-phase (imaginary) current profile in the channel. (b) Out-of-phase (imaginary) and (c) in-phase (real) current profiles for driving frequencies on and off resonance with the shear sound frequency $\omega _{\rm shear}$. (d) Real part of the transverse conductivity in units of $D \Gamma _{\rm eff} / \omega ^2$ when the shear sound is present (solid blue line) and absent (solid black line) in the limit of boundary-dominated scattering [boundary scattering parameter $b = 0.1 (2 \pi v_{\rm F})$], where $D = ne^2/m$ is the Drude weight and $\Gamma _{\rm eff}$ is an effective scattering rate~\cite{supplementary}.
For finite bulk scattering ($\Gamma _1 = 0.1 v_{\rm F}q_0$), the resonant zeros at the shear sound harmonics (solid blue line) evolve into smooth dips (dashed blue line). }\label{Fig.mainfig} \end{figure*}
\section{Diffusive and propagating shear modes}\label{Sec.II} At low temperatures metals enter the quantum Landau Fermi liquid (LFL) regime. A Fermi liquid can be thought of as having an infinite number of slow degrees of freedom that describe the relaxation of the shape of the Fermi surface. Unlike superfluids or ordinary classical liquids, the low-energy excitations of LFLs cannot be captured completely by a description in terms of a finite number of dynamical fields such as density and current. We will focus on 2D systems, but many of our conclusions carry over to the three-dimensional (3D) case.
We begin by stating a central finding of our study: even in the presence of collisions, 2D Fermi liquids display a sharp propagating transverse sound mode with speed $v_{\rm s}=v_{\rm F} (1+F_1)/(2\sqrt{F_1})$, for Landau parameter $F_1>1$ and for wave vectors $q \gtrsim q_*$, with $q_* = \max \left\lbrace \Gamma_1/v_{\rm s}, \Gamma_2/(v_{\rm F} \sqrt{F_1}) \right\rbrace$, where $v_{\rm F}$ is the Fermi velocity and $\Gamma_1,\Gamma_2$ are the momentum-relaxing and momentum-preserving collision rates, respectively. We will now derive these results within the Landau theory of Fermi liquids.
In LFL theory the shape of the Fermi surface becomes a dynamical object, and small deviations of the radius $p_{\rm F}(r,\theta)$ from the equilibrium shape obey the linearized Landau kinetic equation (LKE)~\cite{Pines}: \begin{align} \partial _t p_{\rm F} (\vec{r}, \theta) &+ \vec{v}_p \cdot \vec{\partial}_{\vec{r}}\Bigl[ p_{\rm F} (\vec{r}, \theta)+ \int \frac{d \theta '}{2\pi} f(\theta - \theta ') p_{\rm F} (\vec{r}, \theta ') \Bigr]\nonumber \\ &= - e \vec{E} \cdot \vec{v}_p + I [p_{\rm F}].
\label{Eq.LKE} \end{align} Here, $\vec{v}_p = v_{\rm F}\hat{p}$ is the velocity normal to the Fermi surface at angle $\theta$, $f(\theta-\theta')$ is the Landau function including short-range and Coulomb interactions, $\vec{E}$ is the applied electric field, and $I$ denotes the collision terms. There are two kinds of collision terms: those which relax momentum, such as electron-impurity collisions, and those that preserve momentum, originating from electron-electron collisions, which can be modeled as~\cite{LevGregNat,LevGregPNAS,Alekseev2018}: \begin{eqnarray} I [p_{\rm F}] &=& - \Gamma _1 (p_{\rm F} - P_0[p_{\rm F}]) \nonumber \\ &&- \Gamma _2 (p_{\rm F} - P_0[p_{\rm F}] - P_1[p_{\rm F}] - P_{-1} [p_{\rm F}]). \end{eqnarray} Here, $P_m[p_{\rm F}]$ projects the Fermi radius onto the $m$th harmonic $e^{i m \theta}$.
There are two types of solutions to the LKE: incoherent and collective modes. The incoherent modes are sharply \textit{localized} angular deformations of the Fermi surface~\cite{Pines, Shear} that form the particle-hole continuum, with a dispersion of the form: \begin{equation} \omega _{\rm p-h}= v_{\rm F} q \cos \theta+i(\Gamma_1+\Gamma_2). \end{equation} Collective modes, by contrast, are angularly \textit{delocalized} deformations of the Fermi surface~\cite{Pines, Shear}. When the system has a microscopic mirror symmetry and the wave vectors of the modes lie along the mirror invariant line, the modes can be separated into odd (transverse) and even (longitudinal) under the mirror operation~\cite{Pines, Shear}. The well-known plasma mode of metals is a longitudinal mode, whereas the shear sound is a transverse mode. To illustrate the features of the shear sound, we consider a simplified model in which all the $n > 1$ angular moments of the Landau interaction function vanish, $F_{n > 1} = \int (d \theta/2\pi) f(\theta ) \cos (n \theta) = 0$.
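The conservation properties of this model collision integral can be checked directly: the $\Gamma_1$ term annihilates only the density ($m=0$) harmonic, while the $\Gamma_2$ term additionally preserves the current ($m=\pm 1$) harmonics and relaxes all higher harmonics at the rate $\Gamma_2$. A minimal numerical sketch (our illustration; the angular grid size and the test profile are arbitrary choices):

```python
import numpy as np

M = 64                                  # angular grid points (illustrative)
theta = 2 * np.pi * np.arange(M) / M

def harmonic(p, m):
    """Projection P_m[p](theta): the m-th Fourier harmonic of p."""
    pm = np.mean(p * np.exp(-1j * m * theta))
    return pm * np.exp(1j * m * theta)

def collision(p, G1, G2):
    """The model collision integral I[p_F] from the text."""
    P0 = harmonic(p, 0)
    P1 = harmonic(p, 1) + harmonic(p, -1)
    return -G1 * (p - P0) - G2 * (p - P0 - P1)

# a generic Fermi-surface deformation with harmonics m = 0, 1, 2
rng = np.random.default_rng(0)
c = rng.normal(size=3)
p = c[0] + c[1] * np.cos(theta) + c[2] * np.cos(2 * theta)

I = collision(p, G1=0.0, G2=1.0)        # momentum-conserving collisions only
# density (m=0) and current (m=+-1) harmonics are untouched ...
print(np.allclose(harmonic(I, 0), 0), np.allclose(harmonic(I, 1), 0))   # True True
# ... while the m=2 harmonic relaxes at rate Gamma_2
print(np.allclose(harmonic(I, 2), -1.0 * harmonic(p, 2)))               # True
```

With $\Gamma_1 \neq 0$ the same check shows that only the $m=0$ harmonic survives, which is the statement that impurity scattering relaxes the current while preserving the particle number.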
The $F_1$ parameter controls the ratio of the quasiparticle mass ($m^*$) to the Drude mass ($m$) of a Fermi liquid, $m^*=(1+F_1) m$. The Drude mass equals the non-interacting mass ($m_0$) in Galilean invariant systems~\cite{Randeria, Baeriswyl, Varma, Okabe}. Our key results are expected to remain valid in the presence of other Landau parameters whenever the shear sound mode remains the only sharp collective mode in the transverse sector~\cite{Shear}.
For this model, an LFL with $F_1>1$ features a propagating shear sound mode with dispersion: \begin{equation}\label{Eq.shearcomplexdispphys} \omega _{\rm s} = i \left( \Gamma _1 + v_{\rm s} q_2\right) + v_{\rm s}\sqrt{q^2 - q_2^2}, \quad q_2 = \frac{\Gamma _2}{v_{\rm F} \sqrt{F_1}}. \end{equation} This mode exists for $q>q_2$, whereas for $q<q_2$ one encounters diffusive collective modes, as depicted in Fig.~\ref{Fig.schematic}(a) and detailed in the Supplemental Material~\cite{supplementary}. Therefore, the shear sound is expected to become a sharp collective mode in moderately interacting Fermi liquids ($F_1>1$) for $q > q_*$, with \begin{equation} q_* \approx \max \left\lbrace \frac{\Gamma_1}{v_{\rm s}}, q_2 \right\rbrace. \end{equation} In the $q_2 \ll q \ll p_{\rm F}$ limit, the shear sound velocity asymptotes to its undamped value $v_{\rm s}$~\cite{Shear}.
On the other hand, for a weakly interacting LFL with $|F_1| < 1$, only a single, purely decaying collective mode exists, as depicted in Fig.~\ref{Fig.schematic}(b), with dispersion: \begin{eqnarray}\label{Eq.weakdiff} \omega_{\rm diff} &=& i \left( \Gamma _1 +v_{\rm s} q_2- v_{\rm s} \sqrt{q_2^2 - q^2}\right) , \\ &\simeq & i \left(\Gamma _1 + \frac{v_{\rm F}}{2Q} q^2 + \mathcal{O}(q^4) \right), \quad Q = \frac{1}{v_{\rm F}}\frac{2\Gamma _2}{1+F_1}.
\end{eqnarray} This decaying mode exists for $0 \leq q \leq Q$, where its relaxation rate increases with $q$ from $\Gamma _1$ at $q \rightarrow 0$ to $\Gamma_1+\Gamma_2$ at $q = Q$, as shown in Fig.~\ref{Fig.schematic}(b). We have found that $\Gamma_1+\Gamma_2$ is, within our model, the momentum-independent value of the decay rate of all the modes that make up the particle-hole continuum. Therefore, in the presence of collisions the particle-hole continuum is displaced as a whole to lie in a plane of constant imaginary part, and is depicted by the green region in Figs.~\ref{Fig.schematic}(a) and \ref{Fig.schematic}(b). Notice that this transverse mode becomes strictly diffusive only in the limit of vanishing momentum-relaxing collisions $\Gamma_1 \rightarrow 0$, and exists only for a non-vanishing rate of momentum preserving collisions $\Gamma_2 > 0$. Therefore, at such small wave vectors the weakly interacting Fermi liquid ($|F_1|<1$) behaves like a classical fluid, as we will describe in more detail in the next section, where the slow diffusive relaxation of transverse currents is a consequence of the local conservation of momentum~\cite{LLfluids}. When $F_1 < -1$, one finds instead exponentially growing modes associated with a Pomeranchuk instability~\cite{Chubukov, Chubukovmirage, supplementary}. \begin{figure}[h] \includegraphics[scale=1.0]{Fig2xV3.eps} \caption{ (a) The dispersive shear sound (blue solid curve) exists only for moderately interacting Fermi liquids ($F_1 > 1$) and relaxes at a lower rate than that of the incoherent particle-hole excitations (green wedge), $\Gamma _s = \Gamma _1 + v_{\rm s} q_2 < \Gamma _1 + \Gamma _2$. (b) The dispersive shear sound is absent when interactions are too weak ($F_1 < 1$). 
Red and blue dashed curves indicate the dispersion of decaying collective shear modes.}\label{Fig.schematic} \end{figure}
\section{Transverse modes in Fermi liquids, classical viscoelastic fluids and crystalline solids}\label{Sec.III} In this section, we discuss the relations between the transverse current responses of the quantum Fermi liquid, classical fluids, and crystalline solids. We review some remarkable similarities but also sharp differences between these systems and the quantum LFL at small $(q, \omega)$. This serves as a reminder that analogies between quantum LFLs and classical states of matter must be employed cautiously even in the limit of small $(q, \omega)$, and that these systems ultimately belong to different universality classes. For conceptual clarity we restrict the discussion in this section to translationally invariant fluids by taking the momentum-relaxing rate to be $\Gamma _1 = 0$ from the outset.
We begin by making precise what we mean by ``quantum'' in ``quantum LFL''. When we refer to a ``quantum LFL'' we are emphasizing that this is a state of matter which is, strictly speaking, only well defined at $T=0$, although its consequences permeate to finite temperatures, analogous to the terminology employed in quantum critical phenomena. Therefore, the long-wavelength response of the ``quantum LFL'' is defined by first taking the limit $T\rightarrow 0$ and afterwards taking the limits of small $(q, \omega)$. This order is crucial, as the two limits do not commute. In fact, in the opposite case, when $(q, \omega) \rightarrow 0$ while keeping temperature fixed, the response of the LFL is identical to that of an ordinary classical fluid, as we shall see below. In the language of critical phenomena, temperature can be viewed as a relevant perturbation that transforms the universal properties of the liquid at sufficiently long wavelengths.
In our formalism, temperature enters through the momentum-preserving quasiparticle collision rate, which scales with temperature as $\Gamma_2\sim\left(k_B T\right)^2/E_{\rm F}$ up to logarithmic corrections~\cite{Hodges1971,Chaplik1971,Bloom1975,eecolrate,Fukuyama1983,Zheng1996,Jungwirth1996, Menashe1996,Reizer1997,Narozhny2002,novikov2006viscosity}. In Sec.~\ref{Sec.II}, we have seen that at $T=0$ the shear sound is indeed a sharp linearly dispersing mode at small $(q, \omega)$, reminiscent of solids, which also feature a propagating shear sound at arbitrarily small $(q, \omega)$, but unlike classical fluids (including viscoelastic fluids), which display shear diffusion at small $(q, \omega)$. At finite temperature and sufficiently small $(q, \omega)$, LFLs also exhibit a shear diffusion mode. To make these similarities and distinctions more concrete, we review in the remainder of this section the limiting behavior of the transverse conductivity for these various states of matter.
We begin by considering the case of an ordinary classical liquid with the same symmetries as the quantum Fermi liquid we are interested in: homogeneity, isotropy, time reversal, etc. Such liquids can be described at long wavelengths by the Navier-Stokes equation~\cite{LLfluids}, which upon linearization yields the transverse conductivity (which measures the current density in response to an external transverse force) \begin{equation}\label{Eq.transcondCLfull} \sigma _\perp ^{\rm CL} (q, \omega) = \frac{n e^2}{m} \frac{1}{i \omega + \frac{\eta}{mn}q^2}, \end{equation} where $\eta$ is the shear viscosity of the liquid. As we see, there is a diffusive pole for transverse currents, with diffusion constant $D = \eta / m n$. Now, let us consider an ordinary crystalline solid which at long distances also has the same symmetries of interest.
We take the solid to be described by an effective elasticity theory, from which the conductivity can be easily derived by adding an external force to the elastic equations of motion~\cite{LLelasticity}. In particular, the transverse conductivity, \begin{equation}\label{Eq.transcondCS} \sigma _\perp ^{\rm CS} (q, \omega) = \frac{n e^2}{m} \frac{1}{i \left(\omega - c_t^2 \frac{q^2}{\omega}\right)}, \end{equation} features a real and linearly dispersing pole at $\omega = c_t q$, signaling the presence of a propagating transverse sound mode in the solid. The transverse sound velocity $c_t$ can be related to the shear modulus $\mu$ of the solid as $c_t^2 = \mu / m n$.
Let us now consider the transverse response of the quantum LFL. The full expression for the transverse conductivity of the bulk Fermi liquid will be presented in Eq.~\eqref{Eq.condbk}; here we present its zero-temperature and clean limit ($\Gamma _{1,2} = 0$) at small $\omega$ and $q$ but with an arbitrary ratio $s = \omega / v_{\rm F} q$: \begin{equation}\label{Eq.transcond0} \sigma _\perp ^{\rm LFL} (q, \omega, T = 0) = \frac{n e^2}{m} \frac{2}{i v_{\rm F}q} \frac{s - \sqrt{s^2 - 1}}{1 - F_1 (s - \sqrt{s^2-1})^2}, \end{equation} where the frequencies lie in the lower half plane, $s\to s-i0$. The response has nonanalyticities at the onset of the particle-hole continuum of incoherent excitations at $\omega = v_{\rm F} q$. This threshold is ultimately a consequence of the existence of an underlying sharp Fermi surface. It is easy to verify that when $F_1 >1$ the denominator of the transverse conductivity of the quantum Fermi liquid has a zero at the ideal $T=0$ dispersion of the shear sound mode~\cite{Shear}, corresponding to the $\Gamma _{1,2} \rightarrow 0$ limit of Eq.~\eqref{Eq.shearcomplexdispphys}: \begin{equation}\label{Eq.shear0} \omega _{\rm s} (\Gamma _{1,2} = 0) = v_{\rm s} q, \quad v_s = \frac{1 + F_1}{2\sqrt{F_1}} v_{\rm F}.
\end{equation} Notice that the condition $F_1 >1$ is precisely the condition for the speed of the shear sound $v_{\rm s}$ to exceed $v_{\rm F}$, a self-consistent requirement if it is to be a well-defined propagating mode outside of the particle-hole continuum.
\begin{table}[t] \begin{tabular}[b]{ccccccc} \hline\\[-1.em] \hline\\[-1.em] & &Liquid &\quad \quad \quad &Solid &\quad \quad \quad &LFL ($T=0$)\\ [.5em] \hline\\[-1.em] $\displaystyle \lim_{\omega \rightarrow 0} \sigma _\perp (q, \omega) $ & & $\displaystyle \frac{n^2 e^2}{\eta q^2}$ & &$0$ & & $\displaystyle \frac{e^2}{h} (2S + 1) \frac{p_{\rm F}}{q}$\\ [1.em] \hline\\[-1.em] \hline\\[-1.em] \end{tabular} \caption{Quasistatic limit of the transverse conductivity in classical liquids, crystalline solids, and zero temperature LFLs. The factor $(2S + 1)$ is the spin degeneracy factor of the Fermi fluid (2 for usual spin-$\frac{1}{2}$ fermions).}\label{table} \end{table}
To compare the transverse responses of these different systems we first consider the ``optical'' regime $\omega \gg v_{\rm F} q$, where a quantum Fermi liquid resembles a solid, as emphasized in the seminal work of Conti and Vignale~\cite{Conti}. Indeed, the transverse response of the quantum Fermi liquid in this regime, \begin{equation}\label{Eq.transcondLFL} \sigma _\perp ^{\rm LFL} (\omega \gg v_{\rm F} q, T = 0) \approx \frac{n e^2}{m} \frac{1}{i\left( \omega - \frac{1 + F_1}{4} v_{\rm F}^2 \frac{q^2}{\omega}\right)}, \end{equation} is identical to that of the crystalline solid in Eq.~\eqref{Eq.transcondCS}. When $F_1\gg 1$ the above form has a pole inside the optical regime at $\omega = \sqrt{1 + F_1}v_{\rm F} q/2$, which corresponds to the ideal shear sound dispersion of Eq.~\eqref{Eq.shear0} in that limit, and which was first obtained in Ref.~\citenum{Conti}.
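Both statements can be verified directly from Eq.~\eqref{Eq.transcond0}: its denominator vanishes exactly at $s = v_{\rm s}/v_{\rm F} = (1+F_1)/(2\sqrt{F_1})$, and for $\omega \gg v_{\rm F}q$ the full expression approaches the solid-like form of Eq.~\eqref{Eq.transcondLFL}. A short numerical sketch (our illustration, in units $ne^2/m = v_{\rm F}q = 1$; the small imaginary shift implements the $s \to s - i0$ prescription):

```python
import numpy as np

def sigma_perp(s, F1):
    """Eq. (transcond0) in units of n e^2/m with v_F q = 1; s -> s - i0."""
    sc = s - 1e-9j
    u = sc - np.sqrt(sc**2 - 1)
    return (2 / 1j) * u / (1 - F1 * u**2)

F1 = 3.0
s_pole = (1 + F1) / (2 * np.sqrt(F1))      # v_s / v_F of Eq. (shear0)
u = s_pole - np.sqrt(s_pole**2 - 1)
print(abs(1 - F1 * u**2) < 1e-12)          # denominator vanishes at the shear pole

# optical regime s >> 1: agreement with the solid-like form, Eq. (transcondLFL)
s = 50.0
solid = 1 / (1j * (s - (1 + F1) / (4 * s)))
print(abs(sigma_perp(s, F1) - solid) / abs(solid) < 1e-3)
```

At the pole one has $s - \sqrt{s^2-1} = 1/\sqrt{F_1}$, so $1 - F_1(s-\sqrt{s^2-1})^2 = 0$ identically, which the first check confirms to machine precision.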
Notice that the expansion in Eq.~\eqref{Eq.transcondLFL}, when extrapolated without caution, appears to indicate that the Fermi liquid always has a shear sound mode. However, as we have seen, the shear sound only appears as a separate mode for $F_1>1$. For intermediate values of $F_1$ the analogy between quantum LFLs and crystalline solids fails because particle-hole excitations cannot be ignored. In particular, the transition at $F_1 = 1$, where the shear sound merges with the particle-hole continuum, cannot be captured by a classical fluid or elasticity theory.
The difference between the classical and quantum regimes is most striking in the quasistatic limit $\omega\ll v_F q$, where the quantum response is dominated by the particle-hole continuum. The transverse response in this regime for the three different cases is listed in Table~\ref{table}. While $\sigma _\perp$ has a $1/q^2$ dependence in a liquid, a solid cannot flow when subjected to static perturbations and exhibits a vanishing transverse conductivity at $\omega=0$. In contrast, the quantum Fermi liquid has a remarkable universal form $\propto 1/q$ in the quasistatic limit. The limit is finite, in contrast to the solid, because the Fermi liquid still flows, but it is distinct from that of a classical fluid. This limiting response of the quantum LFL is universal in the sense that it is not renormalized by interactions and only depends on the geometry of the Fermi surface~\cite{Pines}. Notice also the appearance of Planck's constant in the denominator, a reminder of the quantum nature of the response in this limit. We will elaborate on the physics and measurement of this limit in a forthcoming publication and demonstrate that another quantum fluid, the spinon Fermi surface, which also features a sharp Fermi surface despite not being an LFL, has the same behavior in this limit.
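The universality of the quasistatic entry in Table~\ref{table} can also be checked numerically from Eq.~\eqref{Eq.transcond0}: as $s \to 0$ one has $s - \sqrt{s^2-1} \to i$, so $\sigma_\perp \to (ne^2/m)\,2/[(1+F_1) v_{\rm F} q] = 2ne^2/(p_{\rm F} q)$, where the interaction dependence cancels because $p_{\rm F} = (1+F_1)\, m v_{\rm F}$. A sketch (our illustration; parameter values are arbitrary):

```python
import numpy as np

def sigma_perp(s, F1):
    """Eq. (transcond0) in units of n e^2/m with v_F q = 1; s -> s - i0."""
    sc = s - 1e-9j
    u = sc - np.sqrt(sc**2 - 1)
    return (2 / 1j) * u / (1 - F1 * u**2)

# s -> 0: sigma -> 2/(1+F1) in these units, i.e. 2 n e^2/(p_F q) since
# p_F = (1+F1) m v_F; with n = (2S+1) p_F^2/(4 pi) (hbar = 1) this is the
# (e^2/h)(2S+1) p_F/q entry of Table I, independent of F_1.
for F1 in (0.5, 3.0, 10.0):
    print(np.isclose(sigma_perp(1e-6, F1) * (1 + F1) / 2, 1.0, atol=1e-4))
```

The printed value is True for every $F_1$, illustrating that the quasistatic response depends only on the Fermi surface geometry.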
While the quantum Fermi liquid at strictly $T=0$ is clearly distinct from solids and classical fluids, finite temperatures smear out the sharpness of the Fermi surface on a scale $k_BT/v_F$, destroying the ``quantumness'' of the fluid at sufficiently small $q$. In the following, we elucidate how the classical behavior is recovered in LFL theory once the limit of small $q$ is taken at finite temperature.
A useful point of comparison for LFLs at finite temperatures are classical viscoelastic fluids, which can also display long-lived shear modes~\cite{LLelasticity, Dyre2006, Trachenko2015, Lucas2019, Trachenko2020}. Specifically, we focus on the Frenkel model often employed in the description of classical viscoelastic fluids~\cite{Trachenko2015, Lucas2019, Trachenko2020}. Following Refs.~\citenum{Trachenko2015} and~\citenum{Trachenko2020}, we add to the Navier-Stokes-Frenkel equation an external force per unit area $\vec{f} = n e \vec{E}$ to obtain the equation of motion \begin{equation} \eta \vec{\partial}_{\vec{r}}^2 \vec{v} = (1 + \tau d_t ) (n m d_t \vec{v} + \vec{\partial}_{\vec{r}} p - \vec{f}), \end{equation} where $d_t = \partial _t + \vec{v}\cdot \vec{\partial}_{\vec{r}}$. Upon linearizing this equation one finds that the transverse current $j_{\perp} =n e v_{\perp}$ has an associated transverse conductivity \begin{equation}\label{Eq.transcondFr} \sigma _\perp ^{\rm Fr} (q, \omega) = \frac{n e^2}{m} \frac{1}{i \omega + \frac{\eta}{mn (1 + i \tau \omega )}q^2}. \end{equation} This equation interpolates between the classical fluid of Eq.~\eqref{Eq.transcondCLfull} at $\omega \tau\ll 1$ and the solid of Eq.~(\ref{Eq.transcondCS}) at $\omega \tau\gg 1$. It contains a modified pole structure that gives rise to a propagating shear sound wave with a momentum gap~\cite{Trachenko2015, Lucas2019, Trachenko2020}, \begin{equation}\label{Eq.shearFr} \omega _{\rm Fr} (q) = \frac{i}{2\tau} + \sqrt{c_{\tau}^2 q^2 - \frac{1}{4\tau^2}}, \quad c_{\tau}^2 = \frac{\eta}{nm\tau}.
\end{equation} Here, when $x<0$, we use the convention that $\sqrt{x}=-i\sqrt{|x|}$. This form is remarkably similar to what we have found for the shear sound in Fermi liquids at finite temperature in Sec.~\ref{Sec.II}. In fact, in the limit of small momenta, the transverse conductivity of the LFL at finite temperature, obtained by taking $\Gamma_1=0$ in the more general Eq.~\eqref{Eq.condbk} discussed in the next section, reads \begin{equation} \sigma _\perp ^{\rm LFL} (v_{\rm F} q \ll {\rm max}\{\Gamma_2, \omega \},\omega,T) \approx \frac{n e^2}{m} \frac{1}{i \omega + \frac{F_1 +1}{4(\Gamma _2 + i \omega)} v_{\rm F}^2 q^2}. \label{LFL_visco} \end{equation} On comparison with Eq.~\eqref{Eq.transcondFr}, one concludes that the time scale in Frenkel's theory, $\tau$, is simply the inverse quasiparticle collision rate, $\tau = \Gamma _2 ^{-1}$. We emphasize again that the analogy between viscoelastic fluids and LFLs at nonzero $T$ only holds for $F_1\gg 1$, when the pole lies in the regime of validity of Eq.~(\ref{LFL_visco}), far away from the particle-hole continuum. The discrepancy with the classical model is particularly evident at $F_1=1$, where the spectrum in complex frequency space undergoes a sharp transition to one without a propagating collective mode, as illustrated in Fig.~\ref{Fig.schematic}.
At low frequencies, $\omega\ll \Gamma_2$, the LFL exhibits a shear diffusion pole with diffusion constant $D = (1 + F_1) v_{\rm F}^2/(4 \Gamma _2) = \eta / m n$ regardless of the value of $F_1$ (cf. Sec.~\ref{Sec.II}), which recovers the well-known low-temperature divergence of the classical viscosity of the Fermi fluid~\cite{LifshitzPitaevskii,Pomeranchuk1950,Abrikosov1959,Brooker1968,Jensen1968}. Such a divergence of the classical viscosity at low temperatures, which is present even in weakly interacting Fermi liquids, is a symptom of the emergence of the non-classical behavior of the fluid that we have previously discussed.
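The identification can be made quantitative: equating the denominators of Eqs.~\eqref{Eq.transcondFr} and \eqref{LFL_visco} gives $\tau = \Gamma_2^{-1}$ together with $\eta/mn = (1+F_1)v_{\rm F}^2/(4\Gamma_2)$, after which the two expressions agree identically at small $q$. A numerical sketch of this matching (our illustration; parameter values are arbitrary):

```python
import numpy as np

def sigma_frenkel(q, w, eta_over_mn, tau):
    """Frenkel-model transverse conductivity, Eq. (transcondFr), in units of n e^2/m."""
    return 1 / (1j * w + eta_over_mn * q**2 / (1 + 1j * tau * w))

def sigma_lfl_small_q(q, w, F1, G2, vF=1.0):
    """Small-q finite-T LFL transverse conductivity, Eq. (LFL_visco)."""
    return 1 / (1j * w + (F1 + 1) * vF**2 * q**2 / (4 * (G2 + 1j * w)))

# the identification tau = 1/Gamma_2 and eta/(m n) = (1+F1) v_F^2/(4 Gamma_2)
# makes the two expressions coincide identically
F1, G2 = 4.0, 0.3
tau, eta_over_mn = 1 / G2, (1 + F1) / (4 * G2)
q, w = np.meshgrid(np.linspace(0.01, 0.2, 5), np.linspace(0.01, 2.0, 5))
print(np.allclose(sigma_frenkel(q, w, eta_over_mn, tau),
                  sigma_lfl_small_q(q, w, F1, G2)))   # True
```

In the $\omega \ll \Gamma_2$ limit both forms reduce to the diffusive classical-fluid response with $D = \eta/mn$, consistent with the discussion above.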
The fact that the transverse conductivity is dominated entirely by the diffusion pole at finite temperatures and small $(q,\omega)$ can be understood from Fig.~\ref{Fig.schematic}, which shows that the modes making up the particle-hole continuum are completely displaced in the complex-$\omega$ plane so as to always have a finite imaginary part in their dispersion, even as $q \rightarrow 0$, whereas the shear diffusion pole asymptotes continuously to $(q,\omega)=(0,0)$ and thus dominates the response in this limit. This is ultimately a consequence of the conservation of momentum (when $\Gamma _1 = 0$), which prohibits currents from decaying locally and turns them into slow hydrodynamic modes~\cite{ChaikinLubensky}.
\section{Shear resonances in ultraclean channels}\label{Sec.IV} In this section we develop a theory describing the dynamics of the LFL in a strip geometry, which will allow us to make concrete experimental predictions. To include boundary effects, we adopt the minimal but realistic model proposed in Ref.~\citenum{LevGregPNAS}, which combines specular boundary conditions with boundary friction, modeled as an enhancement of the momentum-relaxing collisions at the boundary of the form $I [p_{\rm F}] \rightarrow I[p_{\rm F}] + I_{\rm bd}[ p_{\rm F} ]$, \begin{equation} I_{\rm bd}[ p_{\rm F}] = b \delta \left(|x| - \frac{W}{2}\right)\left(P_1 [p_{\rm F}] + P_{-1} [ p_{\rm F}] \right), \end{equation} where $x \in (-W/2,W/2)$, $y \in (-\infty,\infty)$. As demonstrated in Ref.~\citenum{LevGregPNAS}, this model captures the hydrodynamic, diffusive, and ballistic regimes of metals and their crossovers. For related studies see Refs.~\citenum{LevGregNat, LevGregPNAS, Alekseev2018, Lucas1,Lucas2,Lucas3}.
We have found an exact analytic solution of the LKE [Eq.~\eqref{Eq.LKE}] for this model with finite Landau parameters $\left\lbrace F_0 ,F_1\right\rbrace$ and all of the above ingredients, which we present in the following (see the Supplemental Material~\cite{supplementary} for details). Because translation symmetry along $x$ is broken by the presence of the boundaries, the conductivity that determines the current along the channel, $j_y(x,t)$, in response to a driving electric field along the channel, $E_y(x,t)$, is a function of two wave vectors: \begin{eqnarray} j_y (q, \omega) &=& \sum_{q'}\sigma _y (q,q',\omega) E _y (q', \omega), \\ \sigma _y (q,q',\omega) &=& \delta _{q, q'}\sigma ^{\rm bk} _y (q,\omega)+ \sigma ^{\rm bd} _y (q,q',\omega). \end{eqnarray} The conductivity can be expressed as the sum of a bulk (bk) contribution: \begin{align} &\sigma ^{\rm bk}_y (q, \omega) = \frac{n e^2}{m}\frac{2i z}{ F_1 z ^2 -(v_{\rm F}q)^2 - 2i z \Gamma _2 }, \label{Eq.condbk}\\ &z = \omega-i (\Gamma_1 +\Gamma_2) - \sqrt{\left[\omega-i (\Gamma_1 +\Gamma_2)\right] ^2 -(v_{\rm F}q)^2}, \end{align} and a boundary (bd) contribution: \begin{eqnarray} \frac{\sigma ^{\rm bd}_y (q, q',\omega)}{\sigma ^{\rm bk} _y (q,\omega) \sigma ^{\rm bk} _y (q', \omega)} &=& -\frac{ \cos \left(\frac{\pi q}{q_0}\right)\cos \left(\frac{\pi q'}{q_0}\right) }{\bar{\sigma}^{\rm bd}_y + \bar{\sigma} ^{\rm bk}_{y} (\omega) }, \end{eqnarray} where $q$ is the momentum along $x$, $q_0=2\pi/W$, $m$ is the transport mass, $\bar{\sigma} ^{\rm bk}_{y} (\omega) = \sum _{n\in \mathbb{Z}} \sigma ^{\rm bk}_y (n q_0, \omega)$ is the transverse conductivity measuring the bulk response to a periodic array of delta-function perturbations, and $\bar{\sigma} ^{\rm bd} _y = ne^2 W/m b$ parametrizes the boundary scattering.
The total conductivity for a uniform driving field is obtained by taking the $q,q' \rightarrow 0$ limit of the above expressions, \begin{eqnarray} &&\sigma _y (\omega) = \sigma_{\rm D} (\omega) \left( 1 - \frac{ \sigma _{\rm D} (\omega) }{\bar{\sigma} ^{\rm bd} _y + \bar{\sigma} ^{\rm bk}_{y} (\omega)} \right), \label{Eq.fullcond} \end{eqnarray} where $\sigma _{\rm D} (\omega ) = ne^2 /m(i \omega + \Gamma _1)$ is the frequency--dependent Drude conductivity. The expression in Eq.~\eqref{Eq.fullcond} can be understood as the self-consistent response of the LFL to both an externally applied electric force and the boundary friction. In a single equation, our solution encompasses the effects of disorder, interactions, as well as boundary scattering, controlled respectively by the parameters $\Gamma_{1,2}$, $F_1$, and $b/W$, and therefore captures the hydrodynamic, diffusive, ballistic, and LFL regimes on equal footing. Notice that $F_0$ is absent in our expressions because of the absence of density fluctuations for driving electric fields parallel to the channel. The conductivity in Eq.~\eqref{Eq.fullcond} is shown for a metal with ($F_1 = 3.0$) and without ($F_1 = 0.5$) shear sound in Fig.~\ref{Fig.mainfig}(d). In the former case, there are sharp dips at the shear sound energy, $\omega={\rm Re}\,\omega_s$, evaluated at integer multiples of $q_0$. In Figs.~\ref{Fig.mainfig}(b) and \ref{Fig.mainfig}(c), we see that the resonant current becomes purely imaginary, i.e., it is out of phase with the applied field. Therefore, in the limit of boundary-dominated scattering, metals with shear sound display a \textit{dissipationless} response at the resonant frequencies of this mode. As we will see, this is analogous to the response of a sliding crystal which is subject to friction only at the boundaries. These conductivity minima acquire finite values in the presence of weak bulk scattering. 
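The structure of Eq.~\eqref{Eq.fullcond} can be explored numerically. The following is a minimal sketch of our expressions (not part of the analytic derivation) in dimensionless units $n e^2/m = v_{\rm F} = W = 1$ with illustrative values of $\Gamma_{1,2}$, $F_1$, and $\bar{\sigma}^{\rm bd}_y$; we take the principal branch of the square root in $z$, which reproduces the Drude limit as $q \rightarrow 0$, and treat the $n=0$ term of $\bar{\sigma}^{\rm bk}_y$ by that limit explicitly:

```python
import numpy as np

def sigma_bk(q, w, vF=1.0, F1=3.0, G1=0.01, G2=0.05):
    """Bulk transverse conductivity sigma^bk_y(q, w), Eq. (condbk),
    in units of n e^2 / m. The q -> 0 limit equals the Drude
    conductivity and is handled separately to avoid 0/0."""
    if q == 0.0:
        return 1.0 / (1j * w + G1)
    wt = w - 1j * (G1 + G2)
    z = wt - np.sqrt(wt**2 - (vF * q) ** 2)   # principal branch assumed
    return 2j * z / (F1 * z**2 - (vF * q) ** 2 - 2j * z * G2)

def sigma_channel(w, W=1.0, sbd=1.0, nmax=200, **kw):
    """Uniform-drive channel conductivity sigma_y(w), Eq. (fullcond):
    the Drude term corrected by the boundary sum over harmonics n*q0,
    q0 = 2 pi / W; sbd parametrizes the boundary scattering."""
    q0 = 2.0 * np.pi / W
    sbk_sum = sum(sigma_bk(n * q0, w, **kw) for n in range(-nmax, nmax + 1))
    sD = 1.0 / (1j * w + kw.get("G1", 0.01))
    return sD * (1.0 - sD / (sbd + sbk_sum))
```

Scanning `w` for `F1 = 3.0` versus `F1 = 0.5` should reproduce the qualitative contrast of Fig.~\ref{Fig.mainfig}(d): sharp dips appear only when the shear sound exists.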
The electron-electron collision rate is expected to scale as $\Gamma_2=(E_{\rm F}/2 \pi) \left(k_{\rm B} T/E_{\rm F}\right)^2$ up to logarithmic corrections~\cite{Hodges1971,Chaplik1971,Bloom1975,eecolrate,Fukuyama1983,Zheng1996,Jungwirth1996, Menashe1996,Reizer1997,Narozhny2002,novikov2006viscosity} and, therefore, can be easily suppressed by cooling the metal well below the Fermi temperature. The electron-impurity collision rate is parametrized at low temperatures by the bulk elastic mean-free path, $\lambda = v_{\rm F}/\Gamma _1$. We estimate that the shear sound dips would be visible in metals with $\lambda \gtrsim 5W$ at low temperatures. Furthermore, samples with enhanced boundary scattering relative to bulk scattering should lead to more pronounced conductivity dips. \section{Comparison with an Ideal Crystal Sliding in a Channel}\label{Sec.V} In this section we illustrate the behavior of a crystal driven by an external uniform force through a clean channel in the presence of enhanced friction at the boundaries. We demonstrate that the aforementioned dissipationless resonant driving of the Fermi liquid at the harmonics of the shear sound is indeed a hallmark behavior of sliding crystals in such channels. In particular, we will see that in the case of a clean channel with friction arising only from the boundary, a crystal driven at the exact resonant frequency corresponding to the harmonics of its transverse sound self-consistently pins itself with zero velocity at the boundary so as to minimize energy dissipation.
\begin{figure}[t] \includegraphics[scale=1.0]{Fig3x.eps} \caption{\label{Fig.diptopeak} (a) Real part of the channel conductivity of the 2D sliding crystal, in units of $\gamma _{\rm eff} / \omega ^2$, for $\tilde{b} = 0.1$ (blue) and $\tilde{b}=10$ (orange), where $\tilde{b}$ is an energy scale parametrizing boundary friction and $\gamma _{\rm eff}$ is an effective scattering rate analogous to $b/W$ and $\Gamma _{\rm eff}$ respectively in the LFL case. All energies are measured in units of the transverse phonon frequency $\omega _{\rm ph}$ (see Supplemental Material~\cite{supplementary} for full model). (b) Schematic of the 2D sliding crystal toy model comprising a tetragonal crystal confined in a channel with only boundary friction (red). (c) Out-of-phase (imaginary) and (d) in-phase (real) current profiles in the crystal in the clean limit $\Gamma _{1,2} \rightarrow 0$. Solid curves correspond to the frequency of the first conductivity dip in (a) ($\tilde{b} = 0.1$) while dashed curves correspond to the frequency at the first conductivity peak in (a) ($\tilde{b} = 10$).} \end{figure} To illustrate this, we consider a toy model of a two-dimensional tetragonal crystal~\cite{Kittel} confined in a channel with boundary friction aligned with one of its crystal axes [see Fig.~\ref{Fig.diptopeak}(b)]. The crystal slides in response to an alternating external force along the channel, experiencing friction at the edges analogous to the boundary scattering in the LFL. Because the translational invariance of the crystal along the infinite direction of the channel ($y$-axis) is preserved during the oscillatory driving, without loss of generality, it is sufficient to consider the motion of a single chain describing a row of $N$ atoms across the channel. 
The displacement of each atom from its equilibrium position along $y$ is described by the following equation of motion: \begin{equation} \ddot{y}_j = F_j - \kappa (2y_j - y_{j+1} - y_{j-1}) - (\gamma + \gamma _b \delta_{j,-\frac{N}{2}}) \dot{y}_j, \end{equation} where $j = -N/2, \ldots, N/2$ labels the $x$-coordinate of the atom, $\kappa$ the shear restoring force constant, $F_j$ the external driving force, $\gamma$ the homogeneous bulk friction, and $\gamma _b$ the boundary friction. The masses of the atoms are set to unity. For simplicity, we have considered the case of periodic boundary conditions along $x$ to highlight the qualitative aspects of the system, which are identical to the case with open boundary conditions. Details of the solution of the equations of motion are presented in Section III of the Supplemental Material~\cite{supplementary}, and here we will summarize the resulting behavior. Figure~\ref{Fig.diptopeak}(a) shows the conductivity, i.e., the average velocity of atoms divided by the external force, of such a sliding crystal. In the absence of bulk friction, the real part of the conductivity exhibits zeros at frequencies corresponding to the harmonics of the transverse phonon of the crystal at wavelength $W$. Resonantly driving the system at these frequencies creates a current profile that is out of phase with the drive: the crystal pins at the boundary and self-consistently avoids energy dissipation in an analogous fashion to the Fermi liquid with shear sound [see Figs.~\ref{Fig.diptopeak}(c) and \ref{Fig.diptopeak}(d)]. When probed optically, the sliding crystal therefore does not exhibit the resonant absorption typical of a crystal with pinned boundaries. The latter scenario can be described as a limiting case of the sliding crystal at infinite boundary friction.
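The steady-state solution of the equations of motion above can be sketched in the frequency domain. The following is our own minimal illustration (an $e^{-i\omega t}$ convention, unit masses, and illustrative parameter values): the chain reduces to the linear system $(\kappa L - \omega^2 - i\omega\,\Gamma)\,Y = F\,\mathbf{1}$, with $L$ the periodic graph Laplacian and $\Gamma$ the diagonal friction matrix:

```python
import numpy as np

def chain_conductivity(omega, N=64, kappa=1.0, gamma=0.0, gamma_b=0.5, F=1.0):
    """Steady-state response of the driven chain (time dependence
    e^{-i omega t}): returns mean velocity / force. Periodic boundary
    conditions along x; boundary friction acts on site j = 0."""
    # periodic graph Laplacian: (L y)_j = 2 y_j - y_{j+1} - y_{j-1}
    I = np.eye(N)
    L = 2 * I - np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)
    gam = np.full(N, gamma)
    gam[0] += gamma_b                       # enhanced friction at the boundary
    M = kappa * L - omega**2 * I - 1j * omega * np.diag(gam)
    Y = np.linalg.solve(M, F * np.ones(N))  # displacement amplitudes
    V = -1j * omega * Y                     # velocity amplitudes
    return V.mean() / F
```

With `gamma_b = 0` this reduces to the Drude-like result $1/(\gamma - i\omega)$; scanning `omega` with `gamma = 0` should reproduce the dips of Fig.~\ref{Fig.diptopeak}(a) near the discrete transverse-phonon frequencies $2\sqrt{\kappa}\,|\sin(\pi n/N)|$.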
Indeed, when the boundary dissipation increases, the dips broaden, ultimately giving rise to resonant peaks at half-integer multiples of the fundamental frequency once the dissipative boundary force exceeds the shear restoring force of the crystal~\cite{supplementary}. Such peaks do not have a counterpart in the case of the LFL, where off-resonant pinning at the boundary is prevented by scattering to the incoherent particle-hole continuum. Consequently, the conductivity dips signaling the shear sound in the LFL remain narrow even in the limit of arbitrarily strong boundary scattering~\cite{supplementary}. \section{Summary and discussion}\label{Sec.VI} As we have shown, moderately interacting metals display a sharp shear sound collective mode which exists even in the presence of weak impurity and electron-electron collisions. This mode leaves clear fingerprints in clean metallic channels at low temperatures in the form of sharp resonant dips in the conductivity at frequencies controlled by the shear sound dispersion in Eq.~\eqref{Eq.shearcomplexdispphys}, and that resemble the transverse sound resonance of a sliding crystal, despite the metal lacking any form of long-range crystalline order. There already exist various ultra-clean materials that feature a strongly interacting metallic state before a metal-insulator transition, and which are therefore ideal platforms to discover the shear sound. These include MgZnO/ZnO, Si MOSFETs, AlAs, and p-GaAs~\cite{RevModPhys.82.1743, Dolgopolov2019, Kravchenko2003, Falson2018}. They have been shown to have large mass enhancements and therefore Landau parameters with $F_1>1$~\cite{Solovyev2017, Shashkin2003, Shashkin2002, Shashkin2007, Vakili2004, Falson2018}.
For example, in MgZnO/ZnO two-dimensional electron gases (2DEGs) we estimate that channels of about 1 $\mu$m, at temperatures below 2 K, and with densities such that the quasiparticle mass is enhanced above twice the bare mass, would display visible shear sound resonances in their conductance at frequencies of about $\omega \sim 0.1$ THz. \section{Acknowledgements} We are thankful to P. S. Alekseev, J. Falson, S. Simon, and A. Chubukov for valuable discussions. We are grateful to an anonymous referee for emphasizing the similarities between shear modes of Fermi liquids at finite temperatures discussed in this work and those derived from the Frenkel model and bringing to our attention the relevant references. P.-Y.C. was supported by the Young Scholar Fellowship Program by the Ministry of Science and Technology (MOST) in Taiwan, under the Einstein Program, MOST Grant No. 108-2636-M-007-004.
\section{Introduction} The most recent T2K\cite{Abe:2011ks} and MINOS \cite{MINOS} results have shown, at the 2.5 and 1.7 $\sigma$ level respectively, evidence for $\theta_{13}\neq 0$ in the lepton mixing matrix. The first global analysis\cite{Fogli:2011qn} has confirmed their results, giving at the $3\sigma$ level the range $0.001\leq\sin \theta_{13}^2 \leq 0.044$ ($0.005\leq\sin \theta_{13}^2 \leq 0.050$) for the NH (IH) case. Comparable results have been obtained by the most recent global analysis\cite{Schwetz:2011zk}, which has slightly lowered the upper bound, $0.001\leq\sin \theta_{13}^2 \leq 0.035$ ($0.015\leq\sin \theta_{13}^2 \leq 0.039$) for the NH (IH) case. While waiting for more statistics and for forthcoming tests of these results, the neutrino phenomenology community has been impressively quick and prolific in proposing new textures and models that could account for the correct $\theta_{13}$ size. The majority of these analyses have been devoted to a reconsideration of the possible corrections to TriBiMaximal (TBM) mixing. The TBM pattern predicts at leading order (LO) a vanishing $\theta_{13}$. Even in the first TBM-predicting models\cite{Altarelli:2010gt} a non-vanishing $\theta_{13}$ was indeed predicted at next-to-leading order (NLO), typically quite small, of order $\theta_C^2$, with $\theta_C\sim 0.23$ the Cabibbo angle. In the last months different scenarios, based mainly on discrete symmetries, have been proposed to modify the TBM texture and predict a $\theta_{13}\neq 0$ of up to 10 degrees\cite{allTBM, Marzocca:2011dh}. Other possibilities have been considered in \cite{other}. Even before the recent T2K and MINOS data, a promising idea to obtain $\theta_{13}\neq 0$ was given by BiMaximal (BM) mixing. In the context of BM mixing the basic idea is that at LO the solar and atmospheric angles are maximal and the reactor angle is zero\cite{BMmixing,AFM_BimaxS4,Meloni:2011fx}.
Then at next-to-leading order (NLO) only the solar and reactor angles receive corrections, of order the Cabibbo angle $ \theta_C$, while the atmospheric angle remains unchanged. Finally, at next-to-next-to-leading order (NNLO) even the atmospheric angle may receive corrections, but these are of order $\theta_C^2$ and thus fall within the experimentally allowed range. In this picture the NLO corrections to the lepton mixing matrix arise from diagonalizing the charged lepton sector. The reason is rooted in the original motivation to study BM mixing, that is, quark-lepton complementarity\cite{Complementarity}. We recall that it has recently been shown how a relatively large $\theta_{13}$ may arise from the charged lepton sector in the context of SU(5), assuming exact TBM mixing in the neutrino sector\cite{Marzocca:2011dh}. While we wait for more statistics and new analyses to confirm and delineate the $\theta_{13}$ range, one of the challenges is to find a texture that has a non-vanishing $\theta_{13}$ at LO--possibly even too large--which is then reduced by adequate corrections\cite{Toorop:2011jn}. In light of the most recent results there is an intrinsic tension in the BM assumption. Consider the BM choice \begin{equation} \theta_{12}=\theta_{23}= -\frac{\pi}{4}\,, \quad \theta_{13}=0\,, \end{equation} and assume that the charged lepton mass matrix is diagonalized on the left by a rotation in the 12 sector of order the Cabibbo angle, parametrized as \begin{equation} \label{Uch} U_{e}= \left( \begin{array}{ccc} \cos \theta &\sin \theta e^{i \delta}&0\\ -\sin \theta e^{i \delta}& \cos \theta&0\\0&0&1 \end{array}\right)\,, \end{equation} where $\delta$ is a possible Dirac CP phase.
Then one finds that the lepton mixing angles are given by \begin{eqnarray} \label{BMex} \theta_{23} &\sim&- \frac{\pi}{4} + \frac{1}{4} \theta^2+\mathcal{O}(\theta^3)\,,\nonumber\\ \theta_{12}&\sim&-\frac{\pi}{4}- \frac{1}{\sqrt {2}}\theta \cos \delta +\mathcal{O}(\theta^3)\,,\nonumber\\ \theta_{13}&\sim & \frac{1}{\sqrt{2}}\theta\,. \end{eqnarray} The solar angle requires a \emph{large} $\theta$, of order the Cabibbo angle $\theta_C$, while the most recent fits indicate that $\theta_{13}$ is large but not too large. The result is shown in fig.~\ref{parBM}: according to the simple parametric expansion of \eq{BMex}, the BM prediction for $\theta_{13}$ is large and could be ruled out by an improvement in precision that lowers the $3\sigma$ upper bound. In more realistic scenarios the allowed values of $\theta_{13}$ are more spread out, but we may still conclude that if the upper $3\sigma$ limit on $\theta_{13}$ were lowered, the BM pattern would be strongly disfavored. \begin{figure}[h] \begin{center} \includegraphics[width=4in]{./Par12BM.jpg} \caption{\it The reactor versus the solar angle in the case of the BM mixing matrix corrected by a rotation in the 12 plane of the charged lepton mass matrix. Vertical and horizontal lines bound the $3\sigma$ ranges for $\sin\theta_{12}^2$ and $\sin\theta_{13}^2$ respectively.} \label{parBM} \end{center} \end{figure} In this paper we present a picture complementary to that offered by the BM pattern, which we define as the Tri-Permuting (TP) mixing matrix. The name reflects the fact that the three eigenvectors are identical up to permutations and sign changes. The TP mixing matrix is defined by two maximal angles and a large $\theta_{13}$ according to \begin{equation} \sin \theta_{12}=\sin \theta_{23}= -\frac{1}{\sqrt{2}}\,, \quad \sin \theta_{13}=\frac{1}{3}\,. \end{equation} \begin{equation}\label{ULO} U_{TP}\sim \frac{1}{3}\left( \begin{array}{ccc} 2&-2&1\\ 2&1&-2\\ 1&2&2 \end{array} \right)\,.
\end{equation} Under the previous assumption that the charged lepton mass matrix is diagonalized through the rotation given in \eq{Uch}, we get \begin{eqnarray} \label{TPang} \theta_{23} &\sim&-\frac{ \pi}{4} + \frac{1}{4}\theta \cos\delta\,+ \frac{1}{16}\theta^2( 4+\cos 2 \delta)+\mathcal{O}(\theta^3)\,,\nonumber\\ \theta_{12}&\sim&-\frac{\pi}{4}- \frac{3}{4} \theta \cos\delta-\frac{ 3}{16}\theta^2 \cos 2 \delta+\mathcal{O}(\theta^3)\,,\nonumber\\ \theta_{13}&\sim&\frac{1}{3} +\frac{2}{3} \theta \cos \delta -\frac{1}{6}\theta^2( 1+4 \sin^2 \delta)+\mathcal{O}(\theta^3)\,. \end{eqnarray} Notice that while $\theta_{12}$ and $\theta_{13}$ receive a correction of order $\sim \theta \cos\delta$, $\theta_{23}$ is corrected by $\theta \cos \delta /4\sim \theta_C^2$ if $\theta \sim \theta_C$, the Cabibbo angle. Moreover, to constrain $\theta_{12}$ within the correct range we need $\cos \delta<0$, which gives a correction to $\theta_{13}$ in the right direction. This is explicitly shown in fig.~\ref{parTP}. At the same time we get a prediction for the Dirac CP phase, which in this scenario is given by \begin{equation} \label{expdelta} \delta_l\sim 2( \theta \sin\delta -\theta^2 \sin 2 \delta)\,. \end{equation} \begin{figure}[h] \begin{center} \includegraphics[width=4in]{./Par12TP.jpg} \caption{ \it The reactor versus the solar angle in the case of the TP mixing matrix corrected by a 12 rotation in the charged lepton sector. Vertical and horizontal lines bound the $3\sigma$ ranges for $\sin\theta_{12}^2$ and $\sin\theta_{13}^2$ respectively.} \label{parTP} \end{center} \end{figure} In the next section we introduce the framework in which the TP mixing matrix arises. In sec.~\ref{model} we build a renormalizable model that provides such a texture. The neutrino phenomenological implications are discussed in sec.~\ref{anal}, and sec.~\ref{conc} is devoted to our conclusions.
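The expansion in \eq{TPang} can be cross-checked numerically. The following is our own short sketch (NumPy assumed); note that for $\delta \in \{0,\pi\}$ the rotation \eq{Uch} is orthogonal, and $\delta=\pi$ corresponds to the phenomenologically preferred sign $\cos\delta<0$:

```python
import numpy as np

def tp_angles(theta, delta):
    """Mixing angles of U_lep = U_e^dagger U_TP, with U_e the 12-sector
    charged-lepton rotation of Eq. (Uch) and U_TP the TP matrix."""
    c = np.cos(theta)
    s = np.sin(theta) * np.exp(1j * delta)
    U_e = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])
    U_TP = np.array([[2, -2, 1], [2, 1, -2], [1, 2, 2]]) / 3.0
    U = U_e.conj().T @ U_TP
    s13 = abs(U[0, 2])                                  # sin(theta_13)
    t12 = np.arctan2(abs(U[0, 1]), abs(U[0, 0]))        # |theta_12|
    t23 = np.arctan2(abs(U[1, 2]), abs(U[2, 2]))        # |theta_23|
    return t12, t23, s13
```

For $\theta = 0.2 \sim \theta_C$ and $\delta = \pi$ the exact angles agree with the second-order expansion of \eq{TPang} to better than $10^{-2}$.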
\section{Residual symmetries} \label{res} It is well known that, under the assumption that neutrinos are Majorana particles, any residual symmetry behind the neutrino mass matrix is at most a $Z_2 \times Z_2$ flavor symmetry \cite{Feruglio:2011qq,Toorop:2011jn}. While it is clear how the $Z_2 \times Z_2$ acts on the neutrino mass eigenstates, since the three mass eigenstates must have \emph{flavor parities} (+,+),(+,-),(-,-), it is an open question how it acts on the neutrino interaction eigenstates. In the most general case, given the three left-handed neutrinos $\nu_L\sim(\nu_{L_1},\nu_{L_2},\nu_{L_3})$, each $Z_2$ flavor symmetry $S_{1,2}$, with $S_{1,2}^2=I$, acts on $\nu_L$ as \begin{equation} \nu_L \to S_i \nu_L \end{equation} and it holds that $[S_1,S_2]=0$. Thus the neutrino mass matrix may be written in terms of the three common eigenvectors $v_i$ of $S_1$ and $S_2$, satisfying \begin{eqnarray} S_1 v_1= v_1 &\quad &S_2 v_1 =v_1\,,\nonumber\\ S_1 v_2=v_2 &\quad& S_2 v_2=-v_2 \,, \nonumber\\ S_1 v_3=-v_3 &\quad & S_2 v_3 =-v_3\,. \end{eqnarray} As a consequence, the effective light neutrino Majorana mass matrix may be written as \begin{equation} m_\nu= m_1 v_1^T v_1+ m_2 v_2^T v_2 +m_3 v_3^T v_3\,. \end{equation} This approach has been used in many scenarios and is typically referred to as sequential dominance\cite{SD}. Determining $S_1$ and $S_2$ uniquely fixes the lepton mixing matrix if the charged lepton mass matrix is diagonal. The $S_1$ and $S_2$ corresponding to the TP mixing matrix are given by \begin{eqnarray} S_1=\left( \begin{array}{ccc} \frac{7}{9} & \frac{4}{9} & -\frac{4}{9} \\ \frac{4}{9} & \frac{1}{9} & \frac{8}{9} \\ -\frac{4}{9} & \frac{8}{9} & \frac{1}{9} \end{array} \right)\,,&\quad & S_2 =\left( \begin{array}{ccc} -\frac{1}{9} & \frac{8}{9} & \frac{4}{9} \\ \frac{8}{9} & -\frac{1}{9} & \frac{4}{9} \\ \frac{4}{9} & \frac{4}{9} & -\frac{7}{9} \end{array} \right)\,.
\end{eqnarray} A diagonal charged lepton mass matrix is invariant under an infinite choice of Abelian symmetries, since the charge assignments of the left-handed fields may be compensated by those of the corresponding right-handed ones. A natural choice is given by $Z_e\times Z_\mu\times Z_\tau$. Clearly this symmetry has to be broken by soft terms or by NLO contributions if the correction to the TP mixing matrix is to arise from the charged lepton sector. \section{The model} \label{model} In this section we build a renormalizable model that provides the TP mixing matrix. We assume that no other heavy matter fields exist apart from those reported in tab.~\ref{tab:fields}, thus the Yukawa Lagrangian we write in \eq{Yuk} is complete and no NLO terms have to be taken into account. The model is based on the flavor symmetry $G_f\sim SU(3)_F\times U(1)_F$, and the charge assignments of the matter and scalar fields are reported in tab.~\ref{tab:fields}. The left-handed doublets transform as a triplet of $SU(3)_F$. The Standard Model (SM) right-handed charged leptons are $SU(3)_F$ singlets charged under $U(1)_F$. Among the matter fields we have two right-handed neutrinos, singlets under $SU(3)_F\times U(1)_F$; a vector-like pair of heavy SM singlets $\Sigma$, $\overline{\Sigma}$, transforming as 3 and $\overline{3}$ respectively under $SU(3)_F$; and another vector-like pair of heavy SM $SU(2)$ singlets charged under $U(1)_Y$, $F$, $F^c$, a 3 and $\overline{3}$ of $SU(3)_F$ respectively. We introduce five $\overline{3}$ scalar fields, three of which are charged under $U(1)_F$. In addition we impose an extra $Z_2$ symmetry under which all the fields are even, with the exception of one right-handed neutrino, $\nu^c_2$, and one scalar triplet, $\phi_2$, which are odd.
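As a quick numerical sanity check (our own sketch, using NumPy), one can verify that the generators $S_{1,2}$ given in sec.~\ref{res} square to the identity, commute, and act on the columns of $U_{TP}$ with the flavor parities $(+,+)$, $(+,-)$, $(-,-)$:

```python
import numpy as np

S1 = np.array([[7, 4, -4], [4, 1, 8], [-4, 8, 1]]) / 9.0
S2 = np.array([[-1, 8, 4], [8, -1, 4], [4, 4, -7]]) / 9.0
U_TP = np.array([[2, -2, 1], [2, 1, -2], [1, 2, 2]]) / 3.0

# Z_2 x Z_2 structure: each generator squares to the identity, and
# the two generators commute.
assert np.allclose(S1 @ S1, np.eye(3)) and np.allclose(S2 @ S2, np.eye(3))
assert np.allclose(S1 @ S2, S2 @ S1)

# The columns v_i of U_TP are simultaneous eigenvectors with parities
# (+,+), (+,-), (-,-), which fixes the mixing matrix up to phases.
assert np.allclose(S1 @ U_TP, U_TP @ np.diag([1.0, 1.0, -1.0]))
assert np.allclose(S2 @ U_TP, U_TP @ np.diag([1.0, -1.0, -1.0]))
```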
\begin{table}[!h] \begin{center} \begin{tabular}{|c||c|c|c|c|c|c||c|c|c|c||c|c|c|c|c|c|} \hline &&&&&&&&&&&&&&&& \\ Matter & $L$ & $\nu^c_1$&$\nu^c_2$& $e^c$ & $\mu^c$& $\tau^c$& $\Sigma$ &$\bar{\Sigma}$&$ F$ &$F^c$ &$\phi_1$&$\phi_2$&$\phi_e$&$\phi_\mu$&$\phi_\tau$&$H$ \\ &&&&&&&&&&&&&&&& \\[-0.3cm] \hline &&&&&&&&&&&&&&&& \\ $SU(2)_L$ & 2& 1&1&1&1&1&1&1&1&1&1&1&1&1&1&2\\ &&&&&&&&&&&&&&&& \\[-0.3cm] $U(1)_Y$ & -1/2& 0&0&1&1&1&0&0&-1&1&0&0&0&0&0&1/2\\ &&&&&&&&&&&&&&&& \\[-0.3cm] $SU(3)_F$ & $3$ & $1$ & $1$& $1$ &1&1&3&$\overline{3}$&3&$\overline{3}$ &$\overline{3}$ &$\overline{3}$ &$\overline{3}$ &$\overline{3}$ &$\overline{3}$&1 \\ &&&&&&&&&&&&&&&& \\[-0.3cm] $U(1)_{F}$ & 0 & 0 & 0 &$\alpha$&$\beta$ &$\gamma$&0&0&0&0&0 & 0 &$-\alpha$&$-\beta$ &$-\gamma$& 0 \\ &&&&&&&&&&&&&&&& \\ \hline \end{tabular} \end{center} \caption{\it Transformation properties of the matter fields. The choice of the charged lepton $U(1)_F$ charges is arbitrary provided the condition $\alpha\neq \beta\neq \gamma$ is satisfied.} \label{tab:fields} \end{table} \subsection{Mass matrices} Given the field content of tab.~\ref{tab:fields}, the Lagrangian reads \begin{eqnarray} \label{Yuk} \mathcal{L}&=& k L H \overline{\Sigma}+ (y_1 \Sigma \phi_1+y'_1 \overline{\Sigma} \phi^*_1) {\nu}^c_1+(y_2 \Sigma \phi_2+y'_2 \overline{\Sigma} \phi^*_2) {\nu^c}_2+ M_\Sigma \Sigma \overline{\Sigma}+ \frac{M_1}{2} \nu^c_1 \nu^c_1+\frac{M_2}{2} \nu^c_2 \nu^c_2\nonumber\\ &+&y_F L \tilde{H} F^c+ y_e F \phi_e e^c+ y_\mu F \phi_\mu \mu^c+ y_\tau F \phi_\tau \tau^c+ M_F F F^c\,, \end{eqnarray} where $\tilde{H}=i \sigma_2 H^*$ with $H$ the usual SM Higgs doublet and $\sigma_2$ the Pauli matrix. We have omitted $SU(2)$ indices to simplify the notation. As already stated, this is the full Yukawa Lagrangian, thus no NLO corrections have to be included.
In sec.~\ref{res} we saw that the TP mixing matrix is obtained when the neutrino and charged lepton mass matrices are invariant under $Z_2 \times Z_2$ and $Z_e\times Z_\mu \times Z_\tau$ respectively. This means that the flavor group $G_f$ has to be broken along different patterns in the neutrino and charged lepton sectors, as usually happens in the context of discrete flavor symmetries\cite{Altarelli:2010gt}. In our scenario this is realized when the five scalars $\phi_i$ develop vacuum expectation values (VEVs) as \begin{eqnarray} \label{vev} \vev{\phi_1}\sim(2,2,1)\,, & \quad& \vev{\phi_2}\sim(2,-1,-2)\,, \nonumber\\ \vev{\phi_e}\sim(1,0,0)\,, & \quad& \vev{\phi_\mu}\sim(0,1,0)\,, \quad \vev{\phi_\tau}\sim(0,0,1) \,.\nonumber\\ \end{eqnarray} In sec.~\ref{pot} we sketch how this alignment may be realized. When the flavor and electroweak symmetries are broken, the neutrino mass matrix has a block structure that in the basis $(\nu_L, \overline{\Sigma},\Sigma,\nu^c_1,\nu^c_2)$ is given by \begin{eqnarray} \label{M1} M_\nu&=& \left( \begin{array}{ccc} 0& m_D & 0\\ m_D^{T} & M_0 &\lambda \\ 0&\lambda^T& M_{\nu^c} \end{array}\right)\,, \end{eqnarray} with \begin{eqnarray} m_D=( k v_h \cdot \mathbb{1} ,0)\,, &\quad& \lambda=\left( \begin{array} {cc} \lambda'_1&\lambda'_2\\ \lambda_1&\lambda_2\\ \end{array}\right)\,,\nonumber\\ M_{\nu^c}=Diag(M_1,M_2)\,, &\quad&M_0=\left( \begin{array} {cc} 0&M_\Sigma\cdot \mathbb{1} \\ M_\Sigma\cdot \mathbb{1} &0\\ \end{array}\right)\,,\nonumber\\ \end{eqnarray} with $\mathbb{1}$ the $3\times 3$ identity matrix.
The $\lambda$'s are defined as \begin{eqnarray} \lambda^{(\prime)}_1= y^{(\prime)}_1 v_1\left( \begin{array}{c} 2\\2 \\1\end{array} \right)\,,&\quad & \lambda^{(\prime)}_2= y^{(\prime)}_2 v_2\left( \begin{array}{c} 2\\-1 \\-2\end{array} \right)\,. \end{eqnarray} Under the assumption $M_\Sigma> M_{1,2}>\lambda_{1,2},\lambda_{1,2}'$, it is convenient to define the spinors $\Sigma_1$ and ${\Sigma}_2$, \begin{equation} \Sigma_1 =\frac{1}{\sqrt{2}}(\overline{\Sigma}+\Sigma) \quad \Sigma_2 =\frac{1}{\sqrt{2}}(-\overline{\Sigma}+\Sigma) \,. \end{equation} In this way \eq{M1} becomes \begin{eqnarray} \label{M2} M_\nu&=& \left( \begin{array}{ccc} 0& \tilde{m}_D & 0\\ \tilde{m}_D^T &\tilde{M}_0& \tilde{\lambda}\\ 0& \tilde{\lambda}^T& M_{\nu^c} \end{array}\right)\,. \end{eqnarray} $M_\nu$ may be sequentially diagonalized by using the block diagonalization method introduced in \cite{Schechter:1980gr}. First, the method is applied to the block involving the heavy fields $( {\Sigma}_1,\Sigma_2,\nu^c_1,\nu^c_2)$. The unitary matrix that diagonalizes this block is \begin{equation} U_H\sim \left(\begin{array}{cc}1-\frac{B B^T}{2}& B\\-B^T &1-\frac{B^T B}{2} \end{array} \right) \end{equation} with \begin{equation} B\sim -\tilde{M}_0^{-1} \tilde{\lambda}\,. \end{equation} The mass matrix of the lighter singlet neutrinos becomes \begin{equation} \tilde{M}_{\nu^c}=M_{\nu^c}-\tilde{\lambda}^T \tilde{M}_0^{-1}\tilde{\lambda}\,.
\end{equation} The effective light neutrino mass matrix is given by the usual type I seesaw formula, \begin{eqnarray} m_\nu& \sim& -\tilde{m}_D B\frac{1}{ \tilde{M}_{\nu^c}} B^T \tilde{m}_D^T \sim-\tilde{m}_D\frac{1}{ \tilde{M}_0} \tilde{\lambda}\frac{1}{ {M}_{\nu^c}} \tilde{\lambda} ^T \frac{1}{\tilde {M}_0^{T}} \tilde{m}_D^T\,, \end{eqnarray} and takes the form \begin{equation} \label{massnu} m_\nu=\left(\begin{array}{ccc} 4 x + 4 y& 4 x -2 y& 2 x -4 y\\ 4 x -2 y& 4 x +y& 2 x+ 2 y\\ 2 x -4 y& 2 x+2 y& x +4 y \end{array} \right) \end{equation} with \begin{equation} x= 2 k^2 v_h^2 \frac{ y_1^2 v_1^2}{M_1 M_\Sigma^2}\,,\quad y= 2 k^2 v_h^2 \frac{ y_2^2 v_2^2}{M_2 M_\Sigma^2}\,. \end{equation} This $m_\nu$ is diagonalized by $U_{TP}$ with eigenvalues $( 9 x, 9y,0)$. Thus our realization allows only the IH spectrum. Concerning the charged lepton sector, in addition to the SM fields we have the heavy fields $F,F^c$. When the vacuum alignment coincides exactly with that in \eq{vev}, the full charged lepton left-right mass matrix has a trivial block structure, \begin{equation} \label{chl} M_{ch}=\left( \begin{array}{cc} 0&y_F v_H \cdot \mathbb{1} \\Y& M_F \cdot \mathbb{1} \end{array}\right)\,, \end{equation} with $Y=\mbox{Diag}(y_e v_e,y_\mu v_\mu,y_\tau v_\tau)$. By integrating out the heavy fields, the SM charged lepton mass matrix is diagonal and \begin{equation} (m_e,m_\mu ,m_\tau)=\frac{y_F v_H}{M_F} (y_e v_e,y_\mu v_\mu,y_\tau v_\tau)\,. \end{equation} In sec.~\ref{pot} it is discussed how the alignments given in \eq{vev} receive corrections due to the presence of soft terms, needed to give mass to the Goldstone bosons (GBs) arising from the minimization of the potential. Specifically, for $\phi_{e,\mu,\tau}$, \eq{vev} has to be replaced by \begin{equation} \vev{\phi_{e,\mu,\tau}} + \epsilon_{e,\mu,\tau} (2,2,1)\,.
\end{equation} In this way the block $ Y $ is replaced by \begin{equation} \tilde{Y}=\left( \begin{array}{ccc} y_e( v_e + 2 \epsilon_e) & 2 y_\mu \epsilon_\mu&2 y_\tau \epsilon_\tau \\ 2 y_e \epsilon_e & y_\mu( v_\mu + 2 \epsilon_\mu) & 2 y_\tau \epsilon_\tau\\ y_e \epsilon_e & y_\mu \epsilon_\mu& y_\tau( v_\tau + 2 \epsilon_\tau) \end{array}\right)\simeq \left( \begin{array}{ccc} y_e v_e & 2 y_\mu \epsilon_\mu&2 y_\tau \epsilon_\tau \\ 2 y_e \epsilon_e & y_\mu v_\mu & 2 y_\tau \epsilon_\tau\\ y_e \epsilon_e & y_\mu \epsilon_\mu& y_\tau v_\tau \end{array}\right)\,, \end{equation} since $\epsilon_x \ll v_x$. Neglecting terms proportional to $y_e v_e \ll y_\mu v_\mu,y_\tau v_\tau$, the squared charged lepton mass matrix has the following structure: \begin{equation} \label{mchl} m_{ch}m^\dag_{ch}\simeq |m_\tau|^2 \left( \begin{array}{ccc}0& \epsilon'_\mu \frac{|y'_\mu|^2}{|y'_\tau|^2}& \epsilon'_\tau\\ {\epsilon'}^{*}_\mu \frac{|y'_\mu|^2}{|y'_\tau|^2}&\frac{|y'_\mu|^2}{|y'_\tau|^2}& \epsilon'_\tau + {\epsilon'}^{*}_\mu \frac{|y'_\mu|^2}{|y'_\tau|^2} \\{ \epsilon'}_\tau^* & {\epsilon'}^{*}_\tau + \epsilon'_\mu \frac{|y'_\mu|^2}{|y'_\tau|^2}&1 \end{array}\right)\,, \end{equation} where we have defined $y_x' = y_x v_x$ and $\epsilon_x'= y_x v_x \epsilon_x/|m_\tau|^2$. By construction $\epsilon_x'\ll 1$, and $| \epsilon'_\tau| <|\epsilon'_\mu|$ in order to fit the correct ratio $|m_\mu|/|m_\tau|$; thus \eq{mchl} is diagonalized by \begin{equation} U_e=\left( \begin{array}{ccc}1& \epsilon'_\mu& \epsilon_\tau'\\ {\epsilon'}^*_\mu&1& \epsilon_\tau'\\ {\epsilon'}^*_\tau& {\epsilon'}^*_\tau &1 \end{array}\right)\,.
\end{equation} The final lepton mixing matrix has the desired structure and in first approximation is given by \begin{equation} \label{Ulep} U_{lep}=U_e^\dag U_{TP} \simeq \left( \begin{array}{ccc} \frac{2}{3}-\frac{2 \epsilon'_\mu }{3} & -\frac{\epsilon'_\mu }{3}-\frac{2}{3} & \frac{2 \epsilon'_\mu }{3}+\frac{1}{3} \\ \frac{2 {\epsilon'}^*_\mu }{3}+\frac{2}{3} & \frac{1}{3}-\frac{2 {\epsilon'}^*_\mu }{3} & \frac{{\epsilon'}^*_\mu }{3}-\frac{2}{3} \\ \frac{1}{3} & \frac{2}{3} & \frac{2}{3} \end{array} \right)\,, \end{equation} where we have used $| \epsilon'_\tau| <|\epsilon'_\mu|$. \subsection{Vacuum alignment} \label{pot} Models based on flavor symmetries spontaneously broken along different directions in the charged lepton and neutrino sectors always have to face the problem of realizing the correct vacuum alignments. This affects TBM as well as BM models, based on both discrete and continuous symmetries. The former tend to break along one direction, and therefore different techniques have been developed to break them along two directions. For the latter the situation is even worse, since a continuous symmetry does not select a preferred direction to be broken to, and the minimum conditions have an infinite degeneracy. For this reason, in models based on continuous flavor groups such as $SU(3)$ or $SO(3)$, the correct vacuum alignment is typically obtained by introducing terms that softly break the continuous symmetry. These soft terms preserve an appropriate discrete subgroup of the continuous symmetry and, through the minimization of the potential, select the correct directions \cite{deMedeirosVarzielas:2005ax}. Here we use the same approach. In our model we have five $\overline{3}$'s of the flavor group $G_f\sim SU(3)_F\times U(1)_F \times Z_2$.
For five $\overline{3}$'s of a generic $SU(3)$, the most general potential is written as \begin{equation} V[\phi_i]= \mu^2_{ij} \phi_i^\dag \phi_j + \lambda_{ijkl} (\phi_i^\dag \phi_j) (\phi_k^\dag \phi_l)\,, \end{equation} where the sum over all the fields is understood. In our case the Abelian symmetries forbid many terms, and in particular the scalar $SU(3)_F$-invariant potential has an enlarged accidental symmetry $SU(3)\times U(3)^3$: the first $SU(3)$ involves $\phi_1$ and $\phi_2$, and there is one $U(3)$ for each $\phi_i$, $i=e,\mu,\tau$. The generic vacuum configuration constrains only the absolute values of the scalar VEVs, \begin{equation} \vev{\phi_1}\,, \quad \vev{\phi_2} \,, \quad \vev{\phi_{e,\mu,\tau}}\,, \end{equation} and the accidental $SU(3)\times U(3)^3$ is completely broken, giving rise to $8+9 \times 3=35$ GBs. This situation is quite common in flavor models: the inclusion of soft terms is needed both to trigger the correct alignments and to give mass to the unwanted GBs. In our context we need three sets of soft terms: \begin{itemize} \item[-] $V^1_{soft}$: it triggers the correct vacuum alignment for $\phi_{e,\mu,\tau}$ and also breaks the accidental $U(3)^3$ symmetry to $SU(3)$, giving mass to 19 GBs. At this level we are left with an accidental global symmetry $SU(3)\times SU(3)$. \item[-] $V^2_{soft}$: it triggers the correct vacuum alignment for $\phi_{1,2}$, breaking $SU(3)\times SU(3)$ to $SU(3)$ and giving mass to another 8 GBs. \item[-] $V^3_{soft}$: it breaks the residual $SU(3)$ and gives mass to the last 8 GBs. \end{itemize} A suitable example for $V^1_{soft}$ is given by \begin{equation} \label{V1s} V^1_{soft}=[ m^2_{e\mu} (\phi_e^\dag \phi_\mu)+m^2_{\mu\tau} (\phi_\mu^\dag \phi_\tau)+ A \phi_e \phi_\mu\phi_\tau] +H.c. \end{equation} The basic assumption is that the soft terms only slightly modify the minimization conditions, which in first approximation may be considered unchanged.
Thus the first-derivative system of the potential involving $\phi_{e,\mu,\tau}$ fixes $\vev{\phi_{e,\mu,\tau}}$, and we always have the freedom to choose one direction, for example \begin{equation} \vev{\phi_{e}}=v_e (1,0,0)\,. \end{equation} If $m^2_{e\mu},m^2_{\mu\tau} >0$ and $A<0$, the quadratic and quartic terms of \eq{V1s} select orthogonal directions for $\phi_{\mu}$ and $\phi_{\tau}$: \begin{itemize} \vskip -.3cm \item[-] the quadratic terms select the directions \begin{equation} \vev{\phi_{\mu}}= v_\mu (0,\cos\alpha,\sin\alpha),\vev{\phi_{\tau}}=v_\tau (0,\sin\alpha,-\cos\alpha)\,,\end{equation} \vskip -.3cm \item[-] the trilinear term is proportional to $ \cos 2\alpha$, and to maximize it we need $\alpha=\pi/2$, which gives the correct alignment for $\phi_\mu$ and $\phi_\tau$. \vskip -.3cm \end{itemize} Building $V^2_{soft}$ is a bit more \emph{ad hoc}: we need to impose that $V^2_{soft} $ is invariant under one of the following transformations \begin{eqnarray} i)&\quad&\phi_{1_1}\to 2 \phi_{1_3} \quad \phi_{1_3}\to 1/2 \phi_{1_1}\,,\nonumber\\ ii)&\quad& \phi_{1_1}\to \phi_{1_2}\quad \phi_{1_2}\to \phi_{1_1}\,,\nonumber\\ iii)&\quad& \phi_1\to \phi_2 \end{eqnarray} Clearly these transformations explicitly break $SU(3)$ and the $Z_2$ under which $\phi_2$ is odd. A possible $V^2_{soft}$ is given by \begin{equation} m_1^2 |\phi_{1_1}-2 \phi_{1_3}|^2 +m_\pm^2 |\phi_{1_1}\pm \phi_{1_2}|^2+ m_{12}^2 ( \phi_1^\dag \phi_2+ H.c.)\,, \end{equation} By choosing the signs correctly, $m^2_1,m^2_-,m^2_{12}>0$ and $m^2_+<0$, so as not to have tachyons, we get $\vev{\phi_1}$ and $\vev{\phi_2}$ orthogonal and along the right directions. Finally, $V^3_{soft}$ has to make the last 8 GBs massive: in order not to destroy the alignments provided by $V^1_{soft}$ and $V^2_{soft}$, it has to be subdominant with respect to them. It may have a form like \begin{equation} m_{\tau 1}^2 (\phi_\tau^\dag \phi_1)\,.
\end{equation} In general $V^3_{soft}$ leaves the freedom to preserve only one vev direction and would slightly disalign the others. If only $\phi_1$--or $\phi_2$--is involved in $V^3_{soft}$ together with $\phi_{e,\mu,\tau}$, it is possible to disalign only the triplets entering the charged lepton Yukawa lagrangian. For example, they would disalign according to \begin{equation} \vev{\phi_{e,\mu,\tau}} + \epsilon_{e,\mu,\tau} (2,2,1)\,, \end{equation} giving rise to the corrections needed to generate a non-trivial charged lepton mixing. \section{Neutrino phenomenological analysis} \label{anal} Eq.(\ref{massnu}) and \eq{Ulep} give us all the information needed to outline the neutrino phenomenology of the model. Concerning the spectrum, only the IH case is allowed, with vanishing $m_3$. In first approximation the predictions for the lepton angles coincide with those given in \eq{TPang}, but a more accurate scan of the parameter space is performed by taking into account the complete charged lepton mixing matrix obtained by fitting the charged lepton masses. The result is shown in fig.\ref{TPnum}. From the upper to the lower panel we plot the reactor angle versus the solar angle, the atmospheric angle and the CP Dirac phase $\delta_l$. The numerical scan confirms the parametric plot shown in the introduction. We have reported the $3\sigma$ range for the 3 angles according to the most recent analysis. There is a nice correlation between the solar and reactor angles: if in the near future the precision of the $3\sigma$ range of one of the two angles improves, we would automatically get an upper and/or lower prediction for the other. On the other hand, the model could be ruled out by an improvement in the precision on the atmospheric angle, since it predicts $\sin\theta_{23}^2$ far from its central value (0.42). Concerning the CP Dirac phase, our points cluster in the range $0\pm\pi/4$.
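As a cross-check of these results, the approximate mixing matrix of \eq{Ulep} can be probed numerically. The sketch below is illustrative only and not part of the original analysis; the input $m_1\simeq 0.049$ eV is a representative atmospheric-scale value. It verifies exact unitarity at $\epsilon'_\mu=0$ and evaluates the effective $0\nu\beta\beta$ mass for the IH spectrum with $m_3=0$, reproducing the $m_{ee}\sim 45$ meV value discussed below.

```python
from fractions import Fraction as F

# TP lepton mixing matrix of eq. (Ulep) at epsilon'_mu = 0 (exact rationals)
U = [[F(2, 3), F(-2, 3), F(1, 3)],
     [F(2, 3), F(1, 3), F(-2, 3)],
     [F(1, 3), F(2, 3), F(2, 3)]]

# Exact unitarity: U U^T = identity
for i in range(3):
    for j in range(3):
        assert sum(U[i][k] * U[j][k] for k in range(3)) == (1 if i == j else 0)

# IH spectrum with m3 = 0 and m1 ~ m2 ~ sqrt(Dm2_atm) ~ 0.049 eV (illustrative):
# m_ee = |sum_i U_ei^2 m_i| = (4/9 + 4/9) m1 = (8/9) m1
m1 = 0.049  # eV
mee = abs(sum(float(U[0][i]) ** 2 * mi for i, mi in enumerate((m1, m1, 0.0))))
print(f"m_ee ~ {1e3 * mee:.1f} meV")  # ~ 44 meV, close to the quoted ~45 meV
```

Since the matrix is real and $m_3=0$, the prediction is essentially a single number, which is why the $m_{ee}$ band in the figures is so narrow.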
From \eq{expdelta} we see that $\delta_l\sim0$ is the expected value from the analytical parametrization. Indeed \eq{TPang} tells us that $\cos \delta \sim-1$ is needed to fit the solar angle. To study neutrinoless double beta decay we consider the effective $0\nu\beta\beta$ parameter $m_{ee}$ defined as \begin{equation} m_{ee} = \left| [U_{lep}\, \text{diag}(m_1, \, m_2, \, 0) \, U_{lep}^T ]_{11} \right|. \end{equation} In fig.\ref{TPmee} we plot $m_{ee}$ versus $\sin\theta_{13}^2$, since in our model the lightest neutrino mass is always vanishing. As a consequence, the model predicts an almost exact value $m_{ee}\sim 45$ meV. The future experiments are expected to reach good sensitivities: $90$ meV \cite{gerda} (GERDA), $20$ meV \cite{majorana} (Majorana), $50$ meV \cite{supernemo} (SuperNEMO), $15$ meV \cite{cuore} (CUORE) and $24$ meV \cite{exo} (EXO). As a result, the model may be tested in the near future. \begin{figure}[h!] \begin{center} \includegraphics[width=4in]{./TP12.jpg}\\ \includegraphics[width=4in]{./TP23.jpg} \includegraphics[width=4in]{./TPdelta.jpg} \caption{\it The predictions for the lepton mixing parameters. From the upper to the lower panel we plot the reactor angle versus the solar angle, the atmospheric angle and the CP Dirac phase $\delta_l$. Vertical and horizontal lines bound the $3\sigma$ ranges for the corresponding angles.} \label{TPnum} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width=4in]{./TPmm.jpg} \caption{\it The predictions for $m_{ee}$. Our IH spectrum is characterized by a vanishing $m_3$. As a consequence we predict an almost exact value $m_{ee}\sim 45$ meV, which is within the sensitivity range of the forthcoming neutrino experiments. Vertical lines bound the $3\sigma$ range for $\sin \theta_{13}^2$. } \label{TPmee} \end{center} \end{figure} \section{Conclusions} \label{conc} In this paper we have proposed a new mixing matrix for the neutrino mass matrix, which we have named the Tri-Permuting (TP) mixing matrix.
This pattern requires large corrections to both the solar and the reactor angles, which are thus correlated in a new way, orthogonal to other patterns proposed in the literature such as the BM mixing. In order not to affect the neutrino masses and the atmospheric angle, these corrections have to arise from the charged lepton mixing matrix. We have built a fully renormalizable model in which this scenario is realized. In the proposed model both neutrinos and charged leptons get their masses through a generalized seesaw. The model is characterized by a neutrino IH spectrum with vanishing $m_3$. As a consequence it is highly testable in the near future, because it predicts an almost exact value for the effective $0\nu\beta\beta$ parameter, $m_{ee}\sim 45$ meV, and could be ruled out by an improvement in the precision on the atmospheric angle. At the same time it gives a nice correlation between the solar and reactor angles that could be tested by future analyses. We have also briefly discussed the scalar potential, sketching the strategy to obtain the correct vacuum alignments and the corrections to the charged lepton mass matrix needed to correct the TP mixing matrix. Apart from the neutrino sector, the model phenomenology is very rich due to the presence of many new scalars and heavy fermions. A complete analysis of its phenomenology is postponed to a future work \cite{me}. \section*{Acknowledgments} I am grateful to S. Morisi for useful discussions and suggestions in the initial stage of this project.
\section{Introduction} The cosmological constant problem is a notoriously difficult problem in particle physics and cosmology, because there is no working symmetry to protect the cosmological constant from being large. This led to the early no-go theorem for the cosmological constant problem \cite{cc-review}. The four-form flux provides an undetermined constant \cite{duff,witten,cc1,cc2}, enabling the cosmological constant to vary towards a small value. The probability computed with the Euclidean action \cite{early} may prefer a small cosmological constant among the distribution of values with different flux parameters. Although the gauge field corresponding to the four-form flux is not dynamical in 4D, the four-form flux can be changed in the process of creating membranes \cite{membrane}. In this case, the tunneling probability between two configurations with cosmological constants differing by one unit can be defined \cite{tunneling}. The four-form fluxes have been used to address the hierarchy problem \cite{hierarchy,Higgscan}, inflation \cite{inflation}, quintessence \cite{quint}, the strong CP problem \cite{strongCP}, etc. As the gauge field for the four-form flux is dynamical in 5D, it was used to source a warped metric with flat 4D space independent of the brane and bulk cosmological constants, known as the self-tuning solutions \cite{self-tuning}. There was also an interesting novel idea for the cosmological relaxation of the Higgs mass with an axion-like scalar field \cite{relaxation}. Recently, an interesting proposal has been made for relaxing the cosmological constant and the Higgs mass parameter to their observed values by the same four-form fluxes \cite{Giudice,Kaloper}. The key ingredient of the proposal is that there is a dimensionless coupling between the four-form flux and the Higgs field, and the flux parameter takes a weak-scale value to relax the Higgs mass parameter to the correct value.
Although an anthropic argument is needed for the cosmological constant \cite{anthropic}, the tunneling probability between two configurations with different cosmological constants can determine when the flux parameter stops changing. The important issue is then how a non-empty Universe is guaranteed by reheating dynamics at the end of relaxation. In this work, we consider the most general couplings for the four-form fluxes in 4D. These include another dimensionless non-minimal four-form coupling to gravity, in addition to the four-form coupling to the Higgs field. The non-minimal four-form coupling to gravity gives rise to an $R^2$ term with negative coefficient, which corresponds to a dynamical scalar field with tachyonic mass. We cure the tachyonic instability with an extra positive $R^2$ term from the beginning and discuss the role of the new dynamical scalar field for inflation and reheating dynamics. The paper is organized as follows. We begin with an overview of the model containing the four-form flux in the SM minimally coupled to gravity. Then, we review the relaxation mechanism with the four-form flux for solving the hierarchy problem. Next, we give a detailed discussion of the Einstein-frame action in a dual scalar-tensor gravity, explain how inflation and reheating take place, and determine the reheating temperature. \section{The Model} We consider a three-index anti-symmetric tensor field $A_{\nu\rho\sigma}$ and its four-form field strength $F_{\mu\nu\rho\sigma}=4\, \partial_{[\mu} A_{\nu\rho\sigma]}$.
Then, the most general Lagrangian with four-form field couplings in the SM is composed of various terms as follows, \begin{eqnarray} {\cal L} = {\cal L}_0 +{\cal L}_{\rm int}+ {\cal L}_S +{\cal L}_L+ {\cal L}_{\rm memb} \label{full} \end{eqnarray} with \begin{eqnarray} {\cal L}_0 &=& \sqrt{-g} \Big[\frac{1}{2}R +\frac{1}{2} \zeta^2 R^2 -\Lambda -\frac{1}{48} F_{\mu\nu\rho\sigma} F^{\mu\nu\rho\sigma} - |D_\mu H|^2-V(H)\Big], \label{L0} \\ {\cal L}_{\rm int} &=& \frac{1}{24} \,\epsilon^{\mu\nu\rho\sigma} F_{\mu\nu\rho\sigma} \,(-c_1 R +c_2 |H|^2), \label{Lagint} \\ {\cal L}_S &=&\frac{1}{6}\partial_\mu \bigg[\Big( \sqrt{-g}\, F^{\mu\nu\rho\sigma} + \epsilon^{\mu\nu\rho\sigma} (c_1 R -c_2 |H|^2) \Big)A_{\nu\rho\sigma} \bigg], \\ {\cal L}_L &=& \frac{q}{24}\, \epsilon^{\mu\nu\rho\sigma} \Big( F_{\mu\nu\rho\sigma}- 4\, \partial_{[\mu} A_{\nu\rho\sigma]} \Big), \label{LL} \\ {\cal L}_{\rm memb}&=& \frac{e}{6} \int d^3\xi\, \delta^4(x-x(\xi))\, A_{\nu\rho\sigma} \frac{\partial x^\nu}{\partial \xi^a} \frac{\partial x^\rho}{\partial \xi^b} \frac{\partial x^\sigma}{\partial \xi^c} \,\epsilon^{abc}-T\int d^3\xi\, \delta^4(x-x(\xi)) \sqrt{-g^{(3)}}. \end{eqnarray} Here, the Higgs potential in the SM is given by \begin{eqnarray} V(H) = -M^2 |H|^2 +\lambda |H|^4. \end{eqnarray} In the interaction Lagrangian $ {\cal L}_{\rm int} $ in eq.~(\ref{Lagint}), $c_1, c_2$ are dimensionless parameters, both of which are taken to be positive in the later discussion. The four-form coupling to the Higgs $c_2$ was introduced before in the literature \cite{hierarchy,Giudice,Kaloper}, but the non-minimal four-form coupling to gravity $c_1$ is introduced here for the first time.
We note that ${\cal L}_S$ is the surface term necessary for the well-defined variation of the action with the anti-symmetric tensor field \cite{cc2}, $q$ in ${\cal L}_L$ (in eq.~(\ref{LL})) is the Lagrange multiplier, and $ {\cal L}_{\rm memb}$ is the membrane action coupled to $A_{\nu\rho\sigma}$ with membrane charge $e$; the membrane tension can also be introduced via $T$, with $g^{(3)}$ being the determinant of the induced metric on the membrane. Here, $\xi^a$ are the membrane coordinates, $x(\xi)$ are the embedding coordinates in spacetime and $\epsilon^{abc}$ is the volume form for the membrane. We also note that the $R^2$ term in eq.~(\ref{L0}) is introduced to ensure the stability of the non-minimal four-form coupling to gravity\footnote{We note that the most general Lagrangian in quadratic gravity contains $R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$ and a Gauss-Bonnet term. The latter term does not affect our discussion because it is a topological invariant, while the former term would induce a spin-2 ghost particle \cite{stelle}. Moreover, the spin-2 ghost does not change the dynamics of the Higgs-four-form coupling, but it could render the gravitational theory inconsistent at the quantum level. However, we assume that the spin-2 ghost is decoupled in the effective theory of gravity and focus on the impact of the $R^2$ term in the later discussion. }, as will be discussed later. Then, following the strategy in Refs.~\cite{inflation,quint}, we derive the equation of motion for $F_{\mu\nu\rho\sigma}$ as follows, \begin{eqnarray} F^{\mu\nu\rho\sigma}=\frac{1}{\sqrt{-g}}\, \epsilon^{\mu\nu\rho\sigma} \Big( -c_1 R + c_2 |H|^2+q\Big), \end{eqnarray} and integrate out $F_{\mu\nu\rho\sigma}$.
As a result, we obtain the full Lagrangian (\ref{full}) as \begin{eqnarray} {\cal L} &=&\sqrt{-g} \Big[\frac{1}{2}R+\frac{1}{2} \zeta^2 R^2 -\Lambda- |D_\mu H|^2 +M^2 |H|^2 -\lambda |H|^4-\frac{1}{2} (-c_1 R+ c_2 |H|^2+q)^2 \Big] \nonumber \\ &&+ \frac{1}{6}\epsilon^{\mu\nu\rho\sigma} \partial_\mu q A_{\nu\rho\sigma} +\frac{e}{6} \int d^3\xi \, \delta^4(x-x(\xi))\, A_{\nu\rho\sigma} \frac{\partial x^\nu}{\partial \xi^a} \frac{\partial x^\rho}{\partial \xi^b} \frac{\partial x^\sigma}{\partial \xi^c} \epsilon^{abc}. \label{Lagfull} \end{eqnarray} Consequently, the equation of motion for $A_{\nu\rho\sigma}$ makes the four-form flux $q$ dynamical, according to \begin{eqnarray} \epsilon^{\mu\nu\rho\sigma} \partial_\mu q= -e\int d^3\xi \, \delta^4(x-x(\xi))\, \frac{\partial x^\nu}{\partial \xi^a} \frac{\partial x^\rho}{\partial \xi^b} \frac{\partial x^\sigma}{\partial \xi^c} \epsilon^{abc}. \end{eqnarray} The flux parameter $q$ is quantized in units of $e$ as $q=e\,n$ with $n$ an integer. Whenever we nucleate a membrane, we can decrease the flux parameter by one unit, such that both the Higgs mass and the cosmological constant can be relaxed to their observed values in the end. \section{Dynamical relaxation with four-form fluxes} From the result in eq.~(\ref{Lagfull}), apart from the second line, we collect the relevant terms in the following form, \begin{eqnarray} {\cal L} = \sqrt{-g} \bigg[ \frac{1}{2} f(H,q) R +\frac{1}{2}(\zeta^2-c^2_1) R^2- |D_\mu H|^2 +M^2_{\rm eff} |H|^2 -\Big(\lambda+\frac{1}{2}c^2_2\Big) |H|^4 -\Lambda_{\rm eff} \bigg] \label{Lagfull2} \end{eqnarray} where \begin{eqnarray} f(H,q) &=& 1 + c_1 (c_2 |H|^2+q), \\ M^2_{\rm eff}(q) &=& M^2 - c_2\, q, \\ \Lambda_{\rm eff} (q) &=& \Lambda + \frac{1}{2}\, q^2. \end{eqnarray} Then, we find that the Higgs mass parameter and the cosmological constant, as well as the Planck mass, all vary with the same quantity, the flux parameter $q$.
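The scanning just described can be illustrated with a toy numerical sketch (toy units with $M_P=1$ and invented parameter values $M^2=c_2=1$, $e=10^{-4}$, purely for illustration, not the paper's values): each membrane nucleation lowers $q=e\,n$ by one unit, and $M^2_{\rm eff}=M^2-c_2\,q$ first changes sign one unit below $q_c=M^2/c_2$.

```python
# Toy scan of the flux parameter (illustrative units, M_P = 1):
# each nucleation lowers q = e*n by one unit of membrane charge e.
M2, c2, e = 1.0, 1.0, 1e-4       # invented toy values, not the paper's
qc = M2 / c2                      # critical flux: M_eff^2 changes sign here
n = int(round(qc / e)) + 3        # start a few units of charge above q_c
q, M2_eff = e * n, M2 - c2 * e * n
while M2_eff <= 0.0:              # unbroken phase: keep nucleating membranes
    n -= 1
    q, M2_eff = e * n, M2 - c2 * e * n
# the scan first reaches the broken phase one unit below q_c
print(n, q, M2_eff > 0.0)
```

The same loop scans $\Lambda_{\rm eff}(q)=\Lambda+q^2/2$, which is why the two relaxations are tied to a single parameter.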
Whenever membrane nucleation occurs, we can reduce the flux parameter and scan the effective parameters. It is interesting to notice that there is an $R^2$ term with negative coefficient, proportional to the non-minimal four-form coupling in the original Lagrangian (\ref{Lagint}). Thus, we had to include an $R^2$ term from the beginning to compensate for the negative term and ensure stability. The correction to the Higgs quartic coupling is independent of the flux parameter, so it can be absorbed into the tree-level value. The membrane is located at the boundary between two consecutive dS space configurations that are defined by flux parameters differing by one unit. Then, it is argued that the tunneling probability between those configurations is given by \cite{tunneling} \begin{eqnarray} {\cal P}(n+1\rightarrow n) \approx {\rm exp} \left( -\frac{24\pi^2M^4_P}{\Lambda_{n+1}}\right) \label{probable} \end{eqnarray} when $\Lambda_{n+1}\ll T^2/M^2_P$, where $T$ is the membrane tension. Therefore, the probability of changing the flux parameter by one unit is large in the early stage of the nucleation, but it becomes extremely suppressed at the last stage, leaving the Universe in a metastable state with a small cosmological constant \cite{tunneling,membrane,Giudice,Kaloper}. In addition to the relaxation of the cosmological constant with four-form fluxes, the Higgs mass parameter is also scanned at the same time. For $q>q_c$ with $q_c\equiv M^2/c_2$, the Higgs mass parameter $M^2_{\rm eff}<0$, so electroweak symmetry is unbroken, whereas for $q<q_c$ we are in the broken phase. For $c_2={\cal O}(1)$ and a membrane charge $e$ of electroweak scale, we can explain the observed Higgs mass parameter once the flux change stops at $q=q_c-e$, by the previous argument for the tunneling probability \cite{Giudice,Kaloper}.
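The two regimes of eq.~(\ref{probable}) can be made explicit numerically (toy units with $M_P=1$; the sampled values of $\Lambda_{n+1}$ are illustrative): the suppression is mild for a large vacuum energy but astronomically strong as $\Lambda_{n+1}\to 0^+$, which is what freezes the flux at a small cosmological constant.

```python
import math

# P(n+1 -> n) ~ exp(-24 pi^2 M_P^4 / Lambda_{n+1}), in units with M_P = 1.
def nucleation_prob(lam):
    """Semiclassical nucleation probability for vacuum energy lam (toy units)."""
    return math.exp(-24.0 * math.pi**2 / lam)

# Early stage (large Lambda): only mildly suppressed.
# Late stage (small Lambda): the exponent blows up and nucleation shuts off,
# leaving the Universe stuck in a metastable state.
for lam in (1e3, 1e0, 1e-2):
    print(lam, nucleation_prob(lam))
```

The last value underflows to zero in double precision, a crude but faithful picture of how strongly the final transitions are suppressed.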
For $\Lambda<0$, we can cancel a large cosmological constant by the contribution from the same flux parameter until $\Lambda_{\rm eff}$ takes the observed value at $q=q_c-e$, but we need to rely on an anthropic argument for this, with $e$ being of order the weak scale \cite{anthropic}. We now examine the tunneling rate for membrane nucleation in more detail, in particular in the last stage of the four-form scanning. The tunneling rate from the last dS phase to the true vacuum depends on the bounce action $B$ for the instanton solution with radius ${\bar r}_0$ \cite{coleman,tunneling}, given in the following, \begin{eqnarray} \gamma\equiv {\bar r}^{-4}_0\, e^{-B} \label{rate} \end{eqnarray} where the bounce action is given by \begin{eqnarray} B=\frac{27\pi^2}{2} \, \frac{T^4}{(\Delta \Lambda)^3}\,\left( 1+\frac{1}{4} r^2_0 H^2\right)^{-2}, \label{bounce} \end{eqnarray} with $r_0=\frac{3T}{\Delta \Lambda}$ being the instanton radius in the absence of gravity, and the instanton radius ${\bar r}_0$ and the dS radius $H^{-1}$ are given, respectively, by \begin{eqnarray} {\bar r}_0&=&\frac{r_0}{1+\frac{1}{4} r^2_0 H^2}, \label{iradius}\\ H^{-1} &=& \frac{\sqrt{3} M_P}{\sqrt{\Delta\Lambda}}. \end{eqnarray} Here, $\Delta\Lambda$ is the change of the cosmological constant due to the last tunneling, which is given by $\Delta\Lambda\simeq e q_c$ for $e\ll q_c$ after the last membrane nucleation. The gravitational corrections appear due to the curvature of the dS phase outside the membrane, suppressing both the bounce action and the instanton radius. We note that when $r_0< 2H^{-1}$, which corresponds to $\frac{T^2}{M^2_P}<\frac{4}{3} \Delta\Lambda$, we can ignore the curvature of the dS spacetime and it is enough to consider the membrane tension and the four-form action for calculating the bounce action.
In this case, the tunneling rate becomes $\gamma\simeq r^{-4}_0\, e^{-B}$ with $B\simeq\frac{27\pi^2}{2} \, \frac{T^4}{(\Delta \Lambda)^3}$, from eq.~(\ref{rate}) with eqs.~(\ref{bounce}) and (\ref{iradius}). On the other hand, for $r_0\gtrsim 2 H^{-1}$, the bounce action is dominated by the curvature of the dS space. This is the case shown in eq.~(\ref{probable}), for which the tunneling rate becomes $\gamma\simeq r^{-4}_0 \Big(\frac{r_0 H}{2} \Big)^8\, e^{-B}$ with $B\simeq \frac{24\pi^2 M^4_P}{\Delta\Lambda}$, similarly from eq.~(\ref{rate}) with eqs.~(\ref{bounce}) and (\ref{iradius}). The last dS phase at $q=q_c$ becomes unstable within the Hubble volume when $\gamma> H^4$. In the case with $r_0< 2H^{-1}$ and $T=M^3_*$, we can obtain the condition on the brane tension for $\gamma> H^4$, as follows, \begin{eqnarray} M_*< \frac{1}{1.85^{1/12}}\, (\Delta \Lambda)^{1/4}=\frac{1}{1.85^{1/12}}\, (eq_c)^{1/4}. \label{instable} \end{eqnarray} Therefore, for $q_c\sim M^2_P$ and $e\sim (100\,{\rm GeV})^2$, the above instability bound becomes $M_*< 10^{10}\,{\rm GeV}$, which is consistent with neglecting gravity for $\frac{T^2}{M^2_P}<\frac{4}{3} \Delta\Lambda$, that is, $M_*<4\times 10^{12}\,{\rm GeV}$. On the other hand, if the above instability bound (\ref{instable}) is not satisfied, we have $\gamma<H^4$, even though the gravitational corrections become important and help suppress the bounce action, independently of the brane tension for $M_*>4\times 10^{12}\,{\rm GeV}$. Then, the last dS phase takes a long time to decay, so there would be a prolonged stage of the last dS phase. Depending on the brane tension, we divide the discussion of reheating into two cases in the next section, namely, reheating the Universe during or after the last membrane nucleation. \section{Four-form non-minimal couplings and effective theory} In this section, we discuss the implications of the four-form couplings for the reheating of the Universe.
This is an important ingredient for the non-empty Universe at the end of relaxation. We first consider a dual description of the $R^2$ term in eq.~({\ref{Lagfull2}}) in terms of a real scalar field $\chi$ by \begin{eqnarray} \frac{1}{2}(\zeta^2-c^2_1) R^2 \longrightarrow \sqrt{\zeta^2-c^2_1}\, \chi R - \frac{1}{2} \chi^2. \end{eqnarray} Then, the Lagrangian ({\ref{Lagfull2}}) becomes \begin{eqnarray} {\cal L} = \sqrt{-g} \bigg[ \frac{1}{2}\,\Omega(H,\chi,q) R - |D_\mu H|^2 +M^2_{\rm eff} |H|^2 -\Big(\lambda+\frac{1}{2}c^2_2\Big) |H|^4 -\Lambda_{\rm eff} -\frac{1}{2} \chi^2 \bigg] \label{Lagfull3} \end{eqnarray} with \begin{eqnarray} \Omega(H,\chi,q)=1 + c_1 \Big(c_2 |H|^2+q\Big)+\sqrt{\zeta^2-c^2_1}\, \chi. \end{eqnarray} Furthermore, making the field redefinition by \begin{eqnarray} \sigma= c_2 |H|^2+q+\frac{\sqrt{\zeta^2-c^2_1}}{c_1}\, \chi, \end{eqnarray} we get $\Omega=1 + c_1\sigma$ and rewrite eq.~({\ref{Lagfull3}}) as \begin{eqnarray} {\cal L} = \sqrt{-g} \bigg[ \frac{1}{2}\,(1+c_1\sigma ) R - |D_\mu H|^2-V(H,\sigma,q) \bigg] \label{Lagfinal} \end{eqnarray} with \begin{eqnarray} V(H,\sigma,q) = -M^2_{\rm eff} |H|^2 +\Big(\lambda+\frac{1}{2}c^2_2\Big) |H|^4 +\Lambda_{\rm eff} +\frac{1}{2}\,\frac{c^2_1}{\zeta^2-c^2_1} \Big(\sigma-c_2 |H|^2-q \Big)^2. \end{eqnarray} We remark that for $\zeta^2>c^2_1$, the potential for a new scalar field $\sigma$ is bounded from below, so the stability of the potential is ensured even in the presence of the non-minimal four-form coupling to gravity. For $\zeta^2<c^2_1$, the potential is unbounded from below, so we would need a higher dimensional term for the sigma field to stabilize the potential. 
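The duality used here can be checked symbolically: extremizing $\sqrt{\zeta^2-c_1^2}\,\chi R-\frac{1}{2}\chi^2$ with respect to the auxiliary field $\chi$ gives $\chi=\sqrt{\zeta^2-c_1^2}\,R$, and substituting back reproduces $\frac{1}{2}(\zeta^2-c_1^2)R^2$. A minimal sympy sketch (writing $\zeta'\equiv\sqrt{\zeta^2-c_1^2}$):

```python
import sympy as sp

R, chi, zp = sp.symbols("R chi zeta_prime", real=True)

# Dual Lagrangian piece: zeta' * chi * R - chi^2 / 2
L_dual = zp * chi * R - sp.Rational(1, 2) * chi**2

# Algebraic equation of motion for the auxiliary scalar chi
chi_sol = sp.solve(sp.Eq(sp.diff(L_dual, chi), 0), chi)[0]   # chi = zeta' R

# Substituting chi back reproduces (1/2) zeta'^2 R^2
assert sp.expand(L_dual.subs(chi, chi_sol) - sp.Rational(1, 2) * zp**2 * R**2) == 0
print(chi_sol)
```

Since $\chi$ enters only algebraically, the two forms of the action are classically equivalent, which is what justifies trading the $R^2$ term for the scalar used below.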
Due to the field-dependent Einstein term in eq.~(\ref{Lagfinal}), we make a Weyl scaling of the metric by $g_{\mu\nu}=g^E_{\mu\nu}/\Omega$ and get the Einstein frame Lagrangian as follows, \begin{eqnarray} {\cal L}_E =\sqrt{-g_E} \bigg[ \frac{1}{2} R(g_E) -\frac{3}{4}\,c^2_1\,\Omega^{-2}\,(\partial_\mu\sigma)^2 - \frac{1}{\Omega}\, |D_\mu H|^2- \frac{V(H,\sigma,q)}{\Omega^2} \bigg]. \end{eqnarray} For $|c_1\sigma|\lesssim 1$, we can make the sigma field kinetic term canonically normalized by ${\bar\sigma}=\sqrt{\frac{3}{2}}\, c_1 \sigma$ and get the Einstein-frame Lagrangian as \begin{eqnarray} {\cal L}_E &\approx& \sqrt{-g_E} \bigg[ \frac{1}{2} R(g_E) -\frac{1}{2} (\partial_\mu{\bar\sigma})^2 - |D_\mu H|^2-V(H,{\bar\sigma},q) \bigg] \end{eqnarray} where \begin{eqnarray} V(H,{\bar\sigma},q)= -M^2_{\rm eff} |H|^2 +\Big(\lambda+\frac{1}{2}c^2_2\Big) |H|^4 +\Lambda_{\rm eff} +\frac{1}{2} m^2_{\bar\sigma} \Big( {\bar\sigma}- \sqrt{\frac{3}{2}} \,c_1(c_2 |H|^2+q) \Big)^2 \end{eqnarray} with \begin{eqnarray} m_{\bar\sigma}= \sqrt{\frac{2}{3}}\, \frac{M_P}{\sqrt{\zeta^2-c^2_1}}. \label{inflatonmass} \end{eqnarray} Thus, in the minimum of the sigma field potential, we get the Higgs potential as in the case with the four-form coupling to the Higgs field only \cite{Giudice,Kaloper}. We note that the coupling between the sigma and Higgs fields is of the form, $ \frac{c_1c_2m^2_{\bar\sigma} }{M_P} \,{\bar\sigma} |H|^2$, which determines the reheating temperature after inflation. 
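The scales set by $m_{\bar\sigma}$ and the Planck-suppressed $\bar\sigma$--Higgs coupling can be estimated numerically. The sketch below is an order-of-magnitude check only, using the reduced Planck mass $M_P=2.4\times 10^{18}$ GeV together with the sample values $\zeta=10^{15}$, $c_1=10^3$, $c_2=1$, $g_*=100$ quoted later in the text, and the decay rate $\Gamma_{\bar\sigma}=3c_1^2c_2^2\,m_{\bar\sigma}^3/(64\pi M_P^2)$ derived in the reheating section; it reproduces a TeV-scale $m_{\bar\sigma}$ and an MeV-scale reheating temperature, consistent up to ${\cal O}(1)$ factors with the $T_{\rm RH}\sim 10$ MeV quoted below.

```python
import math

MP = 2.4e18                                  # reduced Planck mass [GeV]
zeta, c1, c2, gs = 1e15, 1e3, 1.0, 100.0     # sample values quoted in the text

# Dual-scalar mass, eq. (inflatonmass): m = sqrt(2/3) MP / sqrt(zeta^2 - c1^2)
m_sigma = math.sqrt(2.0 / 3.0) * MP / math.sqrt(zeta**2 - c1**2)

# Decay rate into Higgs pairs and the resulting reheating temperature
Gamma = 3.0 * c1**2 * c2**2 * m_sigma**3 / (64.0 * math.pi * MP**2)
T_RH = (90.0 / (math.pi**2 * gs)) ** 0.25 * math.sqrt(MP * Gamma)

print(f"m_sigma ~ {m_sigma:.2e} GeV, T_RH ~ {T_RH:.2e} GeV")
```

The strong sensitivity $T_{\rm RH}\propto c_1\,m_{\bar\sigma}^{3/2}$ explains why a sizable $c_1$ is needed to stay above the BBN temperature.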
For general field values of $\sigma$, the canonical sigma field ${\bar\sigma}$ in Einstein frame is redefined by \begin{eqnarray} \sigma = \frac{1}{c_1} \Big(e^{\sqrt{\frac{2}{3}} {\bar\sigma}}-1 \Big), \end{eqnarray} and the Einstein frame Lagrangian becomes \begin{eqnarray} {\cal L}_E =\sqrt{-g_E} \bigg[ \frac{1}{2} R(g_E) -\frac{1}{2}(\partial_\mu{\bar\sigma})^2 - e^{-\sqrt{\frac{2}{3}}{\bar\sigma}}\, |D_\mu H|^2- V_E(H,{\bar\sigma}) \bigg] \end{eqnarray} with \begin{eqnarray} V_E(H,{\bar\sigma})&=& \Lambda_{\rm eff}\, e^{-2\sqrt{\frac{2}{3}}{\bar\sigma}}+\frac{3}{4} m^2_{\bar\sigma} \bigg(1-(1+c_1 q)e^{-\sqrt{\frac{2}{3}}{\bar\sigma}}-c_1 c_2\, e^{-\sqrt{\frac{2}{3}}{\bar\sigma}} |H|^2 \bigg)^2 \nonumber \\ &&+ e^{-2\sqrt{\frac{2}{3}}{\bar\sigma}} \Big( -M^2_{\rm eff}|H|^2+ \lambda_{H,{\rm eff}}|H|^4\Big). \end{eqnarray} Here, assuming that the SM Higgs is stabilized at $\langle H\rangle=v/\sqrt{2}$ in each dS phase, we can rewrite the above sigma field potential as \begin{eqnarray} V_E({\bar\sigma}) = V_0(q) + \bigg[\frac{3}{4}m^2_{\bar\sigma}\Big(1+c_1\Big(q+\frac{1}{2}c_2 v^2\Big)\Big)^2+\Lambda_{\rm eff} \bigg] \Big(e^{-\sqrt{\frac{2}{3}}{\bar\sigma}}- e^{-\sqrt{\frac{2}{3}}{\bar\sigma}_{\rm m}(q)}\Big)^2 \label{finpot} \end{eqnarray} where \begin{eqnarray} e^{-\sqrt{\frac{2}{3}}{\bar\sigma}_{\rm m}(q)}&=& \frac{3m^2_{\bar\sigma}(1+c_1 (q+\frac{1}{2}c_2 v^2))}{3m^2_{\bar\sigma}(1+c_1 (q+\frac{1}{2}c_2 v^2))^2+4\Lambda_{\rm eff}}, \label{minq} \\ V_0(q)&=&\frac{3m^2_{\bar\sigma} \Lambda_{\rm eff}}{3m^2_{\bar\sigma}(1+c_1 (q+\frac{1}{2}c_2 v^2))^2 +4\Lambda_{\rm eff}}. \label{minV} \end{eqnarray} Here, we note that the effect of the effective cosmological constant $\Lambda_{\rm eff}$ in Jordan frame is crucial in determining the minimum of the sigma field potential. This is important for a large shift in the minimum of the potential after the membrane nucleation. 
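The minimum quoted in eqs.~(\ref{minq}) and (\ref{minV}) follows from an elementary minimization in the variable $x\equiv e^{-\sqrt{2/3}\,\bar\sigma}$ of $\Lambda_{\rm eff}\,x^2+\frac{3}{4}m_{\bar\sigma}^2(1-a\,x)^2$, with $a\equiv 1+c_1(q+\frac{1}{2}c_2 v^2)$; a symbolic cross-check:

```python
import sympy as sp

x, Lam, m, a = sp.symbols("x Lambda m a", positive=True)

# Sigma potential in x = exp(-sqrt(2/3) sigma), with a = 1 + c1 (q + c2 v^2 / 2)
V = Lam * x**2 + sp.Rational(3, 4) * m**2 * (1 - a * x) ** 2

# Stationary point: linear equation in x, single solution
xm = sp.solve(sp.diff(V, x), x)[0]

# Minimum location matches eq. (minq): x_m = 3 m^2 a / (3 m^2 a^2 + 4 Lambda)
assert sp.cancel(xm - 3 * m**2 * a / (3 * m**2 * a**2 + 4 * Lam)) == 0
# Vacuum energy at the minimum matches eq. (minV)
assert sp.cancel(V.subs(x, xm) - 3 * m**2 * Lam / (3 * m**2 * a**2 + 4 * Lam)) == 0
print(sp.cancel(V.subs(x, xm)))
```

This makes explicit why $\Lambda_{\rm eff}$ controls the location of the minimum: for $\Lambda_{\rm eff}\to 0$ the minimum sits at $x_m=1/a$ with vanishing vacuum energy.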
\section{Reheating} Now we discuss the role of the sigma field potential for reheating during or just after the last membrane nucleation. We keep $\langle H\rangle=0$ during the scanning with the flux parameter and regard the sigma field as the inflaton. \subsection{Reheating during the last membrane nucleation} We first consider the possibility of reheating during the last membrane nucleation. To this end, imposing $m_{\bar\sigma}\sim H$, we can allow the sigma field to start rolling from the initial misalignment after the next-to-last membrane nucleation, that is, the transition from $q=q_c+e$ to $q=q_c$. Then, the sigma field can decay into the SM particles through the Higgs coupling and reheat the Universe. Moreover, as discussed in the previous section, we assume that the last dS phase decays within a Hubble spacetime volume, that is, $\gamma>H^4$, in order not to dilute much the radiation produced from the sigma field decay. Just before the next-to-last nucleation, we need $q=q_c+e$ and $v=0$, for which eqs.~(\ref{minq}) and (\ref{minV}) become \begin{eqnarray} e^{-\sqrt{\frac{2}{3}}{\bar\sigma}_{\rm m}(q_c+e)} &\approx &\frac{1}{1+c_1q_c} \Big(1+\frac{8eq_c}{3m^2_{\bar\sigma}(1+c_1 q_c)^2} \Big)^{-1}, \label{min0}\\ V_0 (q_c+e) &\approx & \frac{6m^2_{\bar\sigma} e q_c}{3m^2_{\bar\sigma}(1+c_1 q_c)^2 +8e q_c} \end{eqnarray} where we used $\Lambda_{\rm eff}(q_c-e)=\Lambda+\frac{1}{2}(q_c-e)^2\simeq 0$ in the end, and \begin{equation} \Lambda_{\rm eff}(q_c+e)= \Lambda + \frac{1}{2} (q_c+e)^2 \simeq 2e q_c.
\end{eqnarray} On the other hand, after the next-to-last nucleation, we have $q=q_c$ and $v=0$, for which \begin{eqnarray} e^{-\sqrt{\frac{2}{3}}{\bar\sigma}_{\rm m}(q_c)} &\approx &\frac{1}{1+c_1 q_c} \Big(1+\frac{4eq_c}{3m^2_{\bar\sigma}(1+c_1 q_c)^2} \Big)^{-1}, \label{min1}\\ V_0 (q_c) &\approx & \frac{3m^2_{\bar\sigma} e q_c}{3m^2_{\bar\sigma}(1+c_1 q_c)^2 +4e q_c} \end{eqnarray} where use is made of \begin{eqnarray} \Lambda_{\rm eff}(q_c)= \Lambda + \frac{1}{2} q^2_c = e\Big(q_c-\frac{1}{2}e\Big)\approx e q_c. \end{eqnarray} Thus, we find that both the minimum of the sigma field potential and the cosmological constant changes after the next-to-last nucleation. Taking the initial condition just before the next-to-last nucleation to be the minimum of the potential for $q=q_c+e$, i.e. ${\bar\sigma}_i={\bar\sigma}_{\rm m}(q_c+e)$, we can obtain the sigma field potential after the next-to-last nucleation as \begin{eqnarray} V_E({\bar\sigma}) &\approx&V_0(q_c+e) \nonumber \\&&+ \frac{1}{4}[3m^2_{\bar\sigma}(1+c_1 q_c)^2+8 eq_c] e^{-2\sqrt{\frac{2}{3}}{\bar\sigma}_{\rm m}(q_c+e)} \Big(e^{-\sqrt{\frac{2}{3}}({\bar\sigma}-{\bar\sigma}_{\rm m}(q_c+e))}- e^{-\sqrt{\frac{2}{3}}({\bar\sigma}_{\rm m}(q_c)-{\bar\sigma}_{\rm m}(q_c+e))}\Big)^2 \nonumber \\ &=&V_0(q_c+e) \nonumber \\ &&+ \frac{3}{4}m^2_{\bar\sigma}\bigg(1+\frac{8 eq_c}{3m^2_{\bar\sigma} (1+c_1 q_c)^2} \bigg)^{-1} \Big(e^{-\sqrt{\frac{2}{3}}({\bar\sigma}-{\bar\sigma}_i)}-1-\frac{4eq_c}{3m^2_{\bar\sigma} (1+c_1 q_c)^2+4 e q_c}\Big)^2 \end{eqnarray} with \begin{eqnarray} V_0(q_c+e)\simeq \frac{6m^2_{\bar\sigma}eq_c}{3m^2_{\bar\sigma}(1+c_1 q_c)^2+8e q_c}. 
\end{eqnarray} As a result, the sigma field starts to oscillate at ${\bar\sigma}={\bar\sigma}_i$ with the initial potential energy, given by \begin{eqnarray} V_i&\equiv& V_0(q_c+e) + \frac{36 m^4_{\bar\sigma}(eq_c)^2}{[3m^2_{\bar\sigma}(1+c_1 q_c)^2+8e q_c][3m^2_{\bar\sigma}(1+c_1 q_c)^2+4e q_c]^2} \nonumber \\ &=&\frac{6m^2_{\bar\sigma}eq_c}{3m^2_{\bar\sigma}(1+c_1 q_c)^2+8e q_c} \bigg(1+\frac{6m^2_{\bar\sigma}eq_c}{[3m^2_{\bar\sigma}(1+c_1 q_c)^2+4e q_c]^2} \bigg). \label{maxpot0} \end{eqnarray} Then, the sigma field starts to oscillate around the minimum of the above potential, provided that the Hubble parameter at $q=q_c$ satisfies $H(q_c)= m_{\bar\sigma,{\rm eff}}$, i.e. \begin{eqnarray} H(q_c)= \frac{V_i}{\sqrt{3}}=m_{\bar\sigma,{\rm eff}} \label{inflatonmass} \end{eqnarray} where \begin{eqnarray} m^2_{\bar\sigma,{\rm eff}} \equiv \frac{3m^4_{\bar\sigma}(1+c_1 q_c)^2}{3m^2_{\bar\sigma}(1+c_1 q_c)^2+8e q_c}. \end{eqnarray} This implies that we need $m^2_{\bar\sigma}\simeq 2 eq_c$. For $q>q_c$, the Hubble parameter $H(q)$ becomes larger than the sigma field mass so the sigma field is stuck at a certain value or it undergoes a slow rolling. Therefore, only when the sigma field mass is appropriately chosen for a given flux parameter, the sigma field can start to oscillate and reheat the Universe at $q=q_c$. After the final membrane nucleation at $q=q_c-e$, the Higgs mass parameter becomes negative and takes a right value for the observed Higgs mass and the cosmological constant also takes the observed value by the anthropic argument. However, the sigma field couples to the SM Higgs through the non-minimal coupling to the four-form flux, which is suppressed by the Planck scale. 
Consequently, from the decay rate of the inflaton into two Higgs fields, \begin{eqnarray} \Gamma_{\bar\sigma}= \frac{3c^2_1 c^2_2}{64\pi} \frac{m^3_{\bar\sigma}}{M^2_P}, \label{decay} \end{eqnarray} and using eq.~(\ref{inflatonmass}), we obtain the reheating temperature as \begin{eqnarray} T_{\rm RH} &=&\left(\frac{90}{\pi^2 g_*} \right)^{1/4} (M_P \Gamma_{\bar\sigma})^{1/2} \nonumber \\ &=&0.2\left(\frac{100} {g_*} \right)^{1/4} c_1 c_2\, \bigg(\frac{e}{c_2 M^{2/3}} \bigg)^{3/4} \bigg(\frac{M}{M_P} \bigg)^2. \end{eqnarray} For instance, in order to solve the hierarchy between the Planck scale and the weak scale by the relaxation of the four-form flux, we choose $M\sim M_P$ and $\sqrt{e}\sim 1\,{\rm TeV}$ for $c_2={\cal O}(1)$. Then, the inflaton mass is $m_{\bar\sigma}\sim {\rm TeV}$ and the reheating temperature is $T_{\rm RH}\sim (c_1/10^3)\,10\,{\rm MeV}$. In other words, we need $\zeta\sim 10^{15}$ for $m_{\bar\sigma}\sim {\rm TeV}$, and $c_1\gtrsim 10^3$ for $T_{\rm RH}\gtrsim 10\, {\rm MeV}$. We comment on several issues in the case with reheating during the last membrane nucleation. First, after the next-to-last membrane nucleation, it is known that the Universe enters an open inflating phase with a negative spatial curvature \cite{coleman,hawking}. If the tunneling with the last membrane nucleation were efficient, there would be no time for the negative spatial curvature to be diluted away by inflation, so it would remain sizable after the last membrane nucleation. Secondly, we would need a large coupling of quadratic curvature gravity for a light dual scalar field. In this case, the validity of the perturbative expansion of the tree-level Lagrangian is certainly in question. Moreover, we need to control even higher curvature terms such as $R^n$ for $n>2$ with sufficiently small coefficients. In this sense, our discussion needs to be improved in order to justify the classical Lagrangian for the light sigma field at the $R+R^2$ gravity level.
At least, we can argue that when we take the pure $R+R^2$ gravity as the effective theory, the UV cutoff scale for gravity does not decrease, because the theory is identical to a scalar-tensor gravity with a stable massive scalar field. In the next subsection, we also discuss a successful case of reheating after the last membrane nucleation, but without the problems of the open Universe or large couplings of quadratic curvature gravity or four-form flux. \subsection{Reheating after the last membrane nucleation} In the case when the last membrane nucleation takes longer than a Hubble time, that is, $\gamma<H^4$, radiation produced during the last dS phase would be diluted away by the exponential expansion of the Universe. Thus, in this subsection, we discuss the case when reheating occurs after the last membrane nucleation. Just after the last nucleation, we have $q=q_c-e$, $v\neq 0$ and $V_0\approx 0$, for which the minimum of the potential becomes \begin{eqnarray} e^{-\sqrt{\frac{2}{3}}{\bar\sigma}_{\rm m}(q_c-e)}\approx \frac{1}{1+c_1 (q_c-e+\frac{1}{2}c_2 v^2)}\approx\frac{1}{1+c_1 q_c}. \label{min2} \end{eqnarray} Then, we can compare the minimum values in eqs.~(\ref{min1}) and (\ref{min2}) before and after the last membrane nucleation, which are crucial for obtaining a nonzero initial vacuum energy for the sigma field after the last membrane nucleation. The discussion on the flux-induced shift of the minimum and reheating has been generalized to the case with the four-form couplings to singlet scalar fields in a recent paper \cite{general}. Suppose that the sigma field settles into the minimum of the potential before the last nucleation. Then, after the last nucleation, the minimum of the potential is shifted from eq.~(\ref{min1}) to eq.~(\ref{min2}). Taking the initial condition just before the last nucleation to be the minimum of the potential for $q=q_c$, i.e.
${\bar\sigma}'_i={\bar\sigma}_{\rm m}(q_c)$, we can obtain the sigma field potential after the last nucleation as \begin{eqnarray} V_E({\bar \sigma}) &\approx&\frac{3}{4}m^2_{\bar\sigma}(1+c_1 q_c)^2 e^{-2\sqrt{\frac{2}{3}}{\bar\sigma}_{\rm m}(q_c)} \Big(e^{-\sqrt{\frac{2}{3}}({\bar\sigma}-{\bar\sigma}_{\rm m}(q_c))}- e^{-\sqrt{\frac{2}{3}}({\bar\sigma}_{\rm m}(q_c-e)-{\bar\sigma}_{\rm m}(q_c))}\Big)^2 \nonumber \\ &=& \frac{3}{4}m^2_{\bar\sigma} \Big(1+\frac{4eq_c}{3m^2_{\bar\sigma}(1+c_1 q_c)^2} \Big)^{-2} \Big(e^{-\sqrt{\frac{2}{3}}({\bar\sigma}-{\bar\sigma}'_i)}-1-\frac{4eq_c}{3m^2_{\bar\sigma}(1+c_1 q_c)^2} \Big)^2. \end{eqnarray} As a result, the sigma field starts to oscillate at ${\bar\sigma}={\bar\sigma}_i$ with the initial potential energy, given by \begin{eqnarray} V'_i\equiv V_E({\bar\sigma}_i) =\frac{12(e q_c)^2 m^2_{\bar\sigma}}{(3 m^2_{\bar\sigma} (1+ c_1q_c)^2+4 eq_c )^2}. \label{maxpot} \end{eqnarray} Here, we find that, for $m^2_{\bar\sigma}\ll eq_c$, $V'_i\approx \frac{3}{4} m^2_{\bar\sigma}$, whereas for $ m^2_{\bar\sigma}\gg eq_c$ and $c_1 q_c\lesssim 1$, $V'_i\approx \frac{4}{3} (e q_c)^2/[m^2_{\bar\sigma}(1+c_1 q_c)^2]$. On the other hand, for $m^2_{\bar\sigma}=\frac{2}{3}\sqrt{2}\, eq_c/(1+c_1 q_c)^2 $, the initial potential energy is maximized to $V'_i\approx 0.25(e q_c)/(1+c_1 q_c)^2$. Thus, the maximum initial potential can be obtained for an inflaton mass of order $1\,{\rm TeV}$ for $e\sim(1\,{\rm TeV})^2$ and $q_c\sim M^2_P$, but a heavier inflaton mass is favored for a sufficiently high reheating temperature, as will be shown below.
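The limiting behaviour of eq.~(\ref{maxpot}) quoted above can be checked numerically. The sketch below is a hedged illustration: it works in units where $eq_c=1$ and takes $c_1 q_c\to 0$ (so $(1+c_1 q_c)^2=1$); the function name is ours, not from the text.

```python
# Initial oscillation energy V'_i of eq. (maxpot) as a function of m^2_sigma,
# in illustrative units where e*q_c = 1 and with c_1 q_c -> 0 so that
# (1 + c_1 q_c)^2 = 1.  (These unit choices are assumptions for the sketch.)
def v_initial(m2, eqc=1.0, B=1.0):
    # V'_i = 12 (e q_c)^2 m^2 / (3 m^2 (1 + c_1 q_c)^2 + 4 e q_c)^2
    return 12.0 * eqc**2 * m2 / (3.0 * m2 * B + 4.0 * eqc)**2

# Small masses recover V'_i ~ (3/4) m^2, large masses fall off as 1/m^2,
# and a scan over m^2 peaks near V'_i = 0.25 e q_c, as quoted in the text.
vmax = max(v_initial(x * 1e-3) for x in range(1, 10001))
```

The scan confirms that the peak value of the initial potential energy is close to a quarter of $eq_c$ in these units.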
When reheating is instantaneous, the temperature of the Universe after inflation would be given by the maximum temperature, $T_{\rm max}=\left( \frac{90 V'_i}{\pi^2 g_*} \right)^{1/4}$ with eq.~(\ref{maxpot}), thus becoming \begin{eqnarray} T_{\rm max}&\simeq& 2.5\times 10^{10}\,{\rm GeV} \left(\frac{100}{ g_*} \right)^{1/4} \left(\frac{eq_c}{(1\,{\rm TeV}\cdot M_P)^2}\right)^{1/4} \nonumber \\ &&\quad\times \bigg(\frac{m^2_{\bar\sigma} M^2_P}{eq_c}\bigg)^{1/4}\left(1+\frac{3}{4}\left(\frac{m^2_{\bar\sigma}M^2_P}{eq_c}\right)(1+c_1 q_c/M^2_P)^2 \right)^{-1/2} \end{eqnarray} where we have reintroduced the Planck scale for dimensionality. In particular, for $m^2_{\bar\sigma}\gg eq_c$ and $c_1 q_c/M^2_P\lesssim 1$, the maximum reheating temperature becomes \begin{eqnarray} T_{\rm max}&\simeq& 1.5\times 10^{9}\,{\rm GeV} \left(\frac{100}{ g_*} \right)^{1/4} \left(\frac{eq_c}{(1\,{\rm TeV}\cdot M_P)^2}\right)^{1/2} \left(\frac{380\,{\rm TeV}}{m_{\bar\sigma}}\right)^{1/2}. \end{eqnarray} Then, from the decay rate of the sigma field given in eq.~(\ref{decay}), the resulting reheating temperature becomes much lower than the maximum reheating temperature, as follows, \begin{eqnarray} T_{\rm RH} =\left(\frac{90}{\pi^2 g_*} \right)^{1/4} ( \Gamma_{\bar\sigma} M_P)^{1/2}= 10\,{\rm MeV}\left(\frac{100}{ g_*} \right)^{1/4} \Big(\frac{c_1}{1} \Big) \Big(\frac{c_2}{1} \Big) \Big(\frac{m_{\bar\sigma}}{380\,{\rm TeV}}\Big)^{3/2}. \end{eqnarray} In this case, the reheating temperature is much smaller than the maximum temperature, due to the double suppression by the Planck scale and the inflaton mass. However, for $m_{\bar\sigma}>380\,{\rm TeV}$ (or $\zeta<5.2\times 10^{12}$ from eq.~(\ref{inflatonmass})) and $c_1,c_2={\cal O}(1)$, we can obtain a sufficiently high reheating temperature for successful Big Bang Nucleosynthesis.
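The benchmark $T_{\rm RH}\sim 10\,{\rm MeV}$ for $m_{\bar\sigma}=380\,{\rm TeV}$ can be reproduced numerically from eq.~(\ref{decay}) and the $T_{\rm RH}$ expression above. The sketch below assumes the reduced Planck mass $M_P\simeq 2.435\times 10^{18}\,{\rm GeV}$; function names are ours.

```python
import math

M_P = 2.435e18  # reduced Planck mass in GeV (assumed value)

def decay_rate(m_sigma, c1=1.0, c2=1.0):
    """Inflaton decay rate into two Higgs fields, eq. (decay):
    Gamma = 3 c1^2 c2^2 m_sigma^3 / (64 pi M_P^2), all in GeV."""
    return 3.0 * c1**2 * c2**2 * m_sigma**3 / (64.0 * math.pi * M_P**2)

def reheating_temperature(m_sigma, c1=1.0, c2=1.0, g_star=100.0):
    """T_RH = (90 / (pi^2 g_*))^(1/4) * sqrt(Gamma M_P), in GeV."""
    return (90.0 / (math.pi**2 * g_star))**0.25 \
        * math.sqrt(decay_rate(m_sigma, c1, c2) * M_P)

# m_sigma = 380 TeV with c_1 = c_2 = 1 gives T_RH of order 10 MeV,
# and T_RH scales as m_sigma^(3/2), as in the text.
T = reheating_temperature(3.8e5)
```

Doubling the mass four times over confirms the $m_{\bar\sigma}^{3/2}$ scaling of the closed-form result.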
We note that for $m_{\bar\sigma}\geq 1.6\times 10^8\,{\rm GeV}$, the reheating temperature becomes identical to the maximum reheating temperature, that is, $T_{\rm RH}=T_{\rm max}$. In the discussion of this subsection, we showed that there is no need for large couplings to the four-form flux for successful reheating. However, we still need to work out the details of a sufficient number of e-foldings and the inflationary observables in low-scale inflation. \section{Conclusions} We provided the most general Lagrangian for the four-form couplings to the SM and showed that the four-form flux parameter scans not only the Higgs mass and the cosmological constant but also the Planck mass. We found that the non-minimal four-form coupling to gravity gives rise to a tachyonic instability for a new scalar field, but it can be consistently cured in the effective Lagrangian. We discussed the conditions on new four-form couplings for a successful reheating of the Universe at the end of relaxation. \section*{Acknowledgments} The author thanks Kfir Blum, Cliff Burgess, Gian Giudice, Jinn-Ouk Gong and Kimyeong Lee for helpful discussions. The author also appreciates fruitful discussions with participants during the CERN-CKC Theory Workshop on Axion in the lab and in the cosmos, and attendees of the BSM forum in the CERN theory group in October 2019. The important comments from the anonymous referee of the journal also helped much in improving the discussion of the paper. The work of HML is supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2019R1A2C2003738 and NRF-2018R1A4A1025334).
\section{Introduction} \label{sec:intro} It is well known that many of the stars in the solar neighbourhood exist in multiple systems. As the number of planetary surveys increases, planets are regularly being found not only in single star systems, but binaries and triples as well. For example, recently a hot Jupiter has been claimed in the triple system HD 188753 (Konacki 2005; but see also Eggenberger et al. 2007), and \citet{DB07} lists 33 binary systems and 2 other hierarchical triples known to harbour exoplanets. As the majority of work on planetary dynamics has been for single star systems, the dynamics of bodies in these multiple stellar systems is of great interest. At first sight, one might not expect long-term stable planetary systems to exist in binary star systems, let alone triples, but numerical and observational work is starting to show otherwise. In recent years, much study has been devoted to the stability of planets and planetesimal discs in binary star systems. There are several investigations of individual systems (e.g. work on $\gamma$ Cephei by \citealt{Dv03}, \citealt{Ha06} and \citealt{VE06}) as well as substantial amounts of work on more general limits for stability of smaller bodies in these systems. Notably, \citet{HW99} approach this problem by using numerical simulation data to empirically fit critical semimajor axes for test particle stability as functions of binary mass ratio and eccentricity. These general studies are of great use in the investigation of observed systems and their stability properties, giving an effective and fast method of placing limits on stability in the large parameter space created by observational uncertainties. The aim of this work is to investigate test particle stability in hierarchical triple star systems, and to see if any similar boundaries can be defined for them.
To do this, a new method for numerically integrating planets in such systems is presented, following the ideas for binary systems introduced by \citet{CQDL02}. Although there have been empirical studies of the stability of hierarchical star systems themselves, there do not appear to have been studies of small bodies in such systems. There is a great deal of literature concerning periodic orbits in the general three and four body problems (see e.g. \citealt{Sz67}), but these are almost entirely devoted to considerations of planetary satellites in single star systems, for example satellite transfer in the Sun-Earth-Moon system. Also, while periodic orbits prove that stable solutions exist in these problems, they are of little practical use in determining general stability limits. The problem of planetary orbits in triple systems is more complex than for those in binary systems, as there are many different orbital configurations possible relative to the three stars. These are considered in Section~\ref{sec:orbits}, and a method for classifying them is described in order to simplify the discussion in this paper. The derivation of a method to numerically integrate such planets is then given in Section~\ref{sec:maths}. Section~\ref{sec:stats} gives a brief overview of the statistical properties of known triple systems, as a basis for deciding the parameters of the systems used in the numerical simulations. The results of numerical investigations into stability properties are then presented in Section~\ref{sec:sp} for one of the possible orbital types. \section{The Types of Stellar and Planetary Orbits} \label{sec:orbits} The triple systems studied here are all hierarchical in nature. That is, they can all be considered to be a close binary orbited by a distant companion -- in effect, an inner binary pair and an outer binary pair. Here, the inner binary stars are labelled as A and B, with A being the more massive, and the distant companion star as C.
There are however three possible ways to pair the three stars into the hierarchy described above. \citet{EK95} define the hierarchy by requiring that the instantaneous Keplerian orbits of the inner and outer pair are bound, that the outer binary has a longer period than the inner binary and that the ratio of the two periods is the largest of the three possible pairings of the stars. This definition is adopted here. Although other, non-hierarchical, types of motion are possible for triple stars (see for example \citealt{Sz77}) and observed (see for example the trapezium systems listed in \citealt{To97}), they are not considered here. It is an open question whether non-hierarchical systems can sustain long-term stable planets. Next, a system for classifying planetary orbits is needed. The orbital motion of small particles under the influence of two or three masses has been studied to some extent through periodic orbits of the three and four body problems, as mentioned in the Introduction. As a result of this many families and classification schemes exist. \citet{Sz67} gives a good review, mentioning for example the (a) to (r) types designated for the Copenhagen problem, dependent on the nature and location of the particle's motion in both inertial and corotating frames. Many other features have been used to describe orbital families, for example symmetry \citep{Br04} or a parameter used to generate the orbital family. However, as mentioned by \citet{Sz67}, there is no overall method, and the descriptions would not be applicable to non-periodic orbits. Orbital types in binary systems have generally been classified into three main groups: those around a single star (circumstellar), those around both stars (circumbinary) and those in the middle ground as it were, coorbiting with one of the stars, i.e.\ librating about the triangular Lagrangian points, similar to Trojan asteroids in the Solar System.
A convenient labelling of these was given by \citet{Dv84}, who designated planets as either S-orbits (satellite), P-orbits (planet), or L-orbits (librator) for circumstellar, circumbinary and coorbital respectively. These ideas can be extended to hierarchical triple systems fairly simply. It is clear that there will be three types of circumstellar motion, one for each star, and as such orbits like this can be labelled S(A), S(B) and S(C) for their primary. The circumtriple orbit about the centre of mass of all three can be identified as a P orbit and labelled as such in analogy to Dvorak's scheme. Finally, planets which orbit the centre of mass of the inner binary are circumbinary but also share characteristics with the satellite orbit, and can be labelled with the combination S(AB)-P. These orbital types are shown in Figure~\ref{fig:types} along with the binary cases for comparison, and listed in Table~\ref{tab:types}. A superscript 2 has been given to those in binary systems and a 3 for those in triple systems for clarity. Although these are the only types of motion studied here, for completeness the coorbital types are also included. This type of motion can occur for both the inner and outer binary, and labels (AB) and (ABC) can be used to designate this. Instead of the single L-orbit type they can be broken up into T and H orbits to indicate tadpole (about one of the triangular Lagrangian points) or horseshoe (about both triangular points and the $L_3$ collinear point) motion. These are again illustrated in Figure~\ref{fig:types} and included in Table~\ref{tab:types}. These orbits are defined to be bound relative to their focus and hierarchical in the same sense as the definition given for the stellar system. A particle that orbits outside the extent of the inner binary but has a bound orbit with respect to star A and the binary centre of mass would be an S(AB)-P orbit and not an S(A) orbit.
This method of labelling clearly and concisely designates the exact nature of a planetary orbit, and extends a system already in use for binary stars. It can also be easily applied to systems with other levels of hierarchy, for example if star C was replaced with another close binary pair C and D additional classes S(D) and S(CD)-P would be possible, and the outer P type could be relabelled P(ABCD). \begin{figure*} \centering \includegraphics[width=\textwidth]{fig1.eps} \caption{ Basic planetary orbit types in multiple star systems as described in Section~\ref{sec:orbits}. The top row shows a binary system, in a frame corotating with the stars. The bottom row shows a triple system, in a frame corotating with either the inner or outer binary as appropriate. Stellar orbits are shown with dashed lines and stars marked with an asterisk. Planetary orbits are shown with a solid line. Note that these plots show real data from numerical simulations generated using a standard Bulirsch-Stoer integrator \citep{NR} for the tadpole and horseshoe orbits (based on initial conditions from \citealt{MD00}) and the integration scheme presented in this paper for the others.} \label{fig:types} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{fig2.eps} \end{center} \caption{ The coordinate system used for the symplectic integrator. Planets are all taken as relative to their primary, whilst each planetary-stellar subsystem barycentre is referred to the centre of mass of the preceding objects.} \label{fig:coord} \end{figure} \begin{table} \centering \caption{ Labels for the basic planetary orbital types in multiple star systems as defined in this work and compared to those of \citet{Dv84}. 
} \begin{tabular}{l|lll} Orbit & Triple & Binary & Dvorak \\ description & system & system & (1984) \\ \hline Circumstellar & S(A) & S(A) & S \\ & S(B) & S(B) & -- \\ & S(C) & -- & -- \\ Circumbinary & S(AB)-P & P(AB) & P \\ Circumtriple & P or P(ABC) & -- & -- \\ Coorbital with binary & T(AB) & T(AB) & L \\ & H(AB) & H(AB) & -- \\ Coorbital with triple & T(ABC) & -- & -- \\ & H(ABC) & -- & -- \\ \end{tabular} \label{tab:types} \end{table} \section{A Symplectic Integrator for Hierarchical Triples} \label{sec:maths} Given the number of simulations that are needed to accurately describe the stability of general triple systems, an accurate and fast method of numerically integrating the equations of motion is required. Methods such as Bulirsch-Stoer are reasonably accurate, but very slow. Instead, a symplectic method is favoured, as it is fast and shows excellent energy conservation. A symplectic integration method for planetary systems was introduced by \citet{WH91}. Here, the Hamiltonian of the system is split into a dominant Keplerian part and smaller interaction terms, all analytically integrable on their own. A leapfrog scheme is then applied to evolve the system forwards by one timestep. If $H_K$ is the Keplerian Hamiltonian and $H_I$ the smaller interaction terms, then to move forward by one step of $dt$ the system is evolved by $H_I$ for $dt/2$, $H_K$ for $dt$ and finally by $H_I$ for $dt/2$ again. This is a 2nd order method. There have been two symplectic integrators derived for planets in multiple star systems, by \citet{CQDL02} and \citet{Be03} respectively. \citet{Be03} uses modified Jacobi coordinates for hierarchical systems of any multiplicity. These `hierarchical Jacobi coordinates' account for the separate substructures in the hierarchy. 
Following the example given by \citet{Be03}, if a system consists of four stars in two separate binaries orbiting the system's barycentre, then each pair is assigned Jacobi coordinates within their subsystems, and then Jacobi coordinates are applied to the two binaries. \citet{CQDL02} uses a different modification. Here, the stars are in these hierarchical Jacobi coordinates, but planets are referenced purely to their primary. This permits close encounters to be implemented within each planetary system, but requires the small interaction Hamiltonian to be split into two separate parts. The system is still evolved using leapfrog, and the method is 2nd order so long as the ordering of the two interaction terms is symmetric. Both symplectic integrators can be applied to the problem here, but that of \citet{CQDL02} is favoured due to the ease of implementing close encounters. Note that, in the limit of test particles only or a single planet, they are identical coordinate systems. This method requires some work to be extended to a triple system. First, the coordinate system needs to be defined, as shown in Figure~\ref{fig:coord}. S(A) planets are taken relative to star A, S(B) to star B and S(C) to star C. The barycentre of star B and its planetary system is then referred to the barycentre of star A and its planets (similar to \citet{Be03}'s hierarchical coordinates). S(AB)-P planets are taken as relative to the centre of mass of the binary and all planets therein. The barycentre of the S(C) planets and their star is taken as relative to the binary and its circumstellar and circumbinary planetary systems. Finally, P type planets are referred to the centre of mass of all the other subsystems.
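The second-order splitting described above (evolve under $H_I$ for $dt/2$, then $H_K$ for $dt$, then $H_I$ for $dt/2$ again) can be sketched on a toy problem. The example below is an illustration only, not the actual integrator: a harmonic oscillator stands in for the real system, with the potential playing the role of the interaction kicks and free motion playing the role of the Keplerian drift, which in the real scheme is a Kepler solve via the f and g functions.

```python
def split_step(state, dt, kick, drift):
    """One 2nd-order splitting step: interaction Hamiltonian for dt/2,
    Keplerian part for dt, interaction Hamiltonian for dt/2 again."""
    kick(state, 0.5 * dt)
    drift(state, dt)
    kick(state, 0.5 * dt)

# Toy stand-in: harmonic oscillator, V = x^2/2 as "H_I", T = p^2/2 as "H_K".
def kick(s, h):
    s['p'] -= s['x'] * h   # dp/dt = -dV/dx

def drift(s, h):
    s['x'] += s['p'] * h   # dx/dt = p

s = {'x': 1.0, 'p': 0.0}
E0 = 0.5 * (s['p']**2 + s['x']**2)
for _ in range(10000):
    split_step(s, 0.01, kick, drift)
E1 = 0.5 * (s['p']**2 + s['x']**2)
# The energy error stays bounded, the hallmark of a symplectic scheme.
```

Over $10^4$ steps the energy error remains at the $O(dt^2)$ level rather than drifting secularly, which is the property exploited for the long stability integrations.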
Following the notation of \citet{CQDL02}, the transformed coordinates $\bmath{X}$ are related to the inertial coordinates $\bmath{x}$ by \begin{eqnarray} m_{\rm t}\bmath{X}_A &=& m_A\bmath{x}_A + m_B\bmath{x}_B + m_C\bmath{x}_C + \sum_jm_{SA_j}\bmath{x}_{SA_j} + \sum_jm_{SB_j}\bmath{x}_{SB_j} + \sum_jm_{SC_j}\bmath{x}_{SC_j} + \sum_jm_{SP_j}\bmath{x}_{SP_j} + \sum_jm_{P_j}\bmath{x}_{P_j} \nonumber\\ \bmath{X}_B &=& \frac{m_B\bmath{x}_B + \sum_jm_{SB_j}\bmath{x}_{SB_j}}{m_{(B+p)}} - \frac{m_A\bmath{x}_A + \sum_jm_{SA_j}\bmath{x}_{SA_j}}{m_{(A+p)}} \nonumber\\ \bmath{X}_C &=& \frac{m_C\bmath{x}_C + \sum_jm_{SC_j}\bmath{x}_{SC_j}}{m_{(C+p)}} - \frac{m_A\bmath{x}_A + \sum_jm_{SA_j}\bmath{x}_{SA_j} + m_B\bmath{x}_B + \sum_jm_{SB_j}\bmath{x}_{SB_j} + \sum_jm_{SP_j}\bmath{x}_{SP_j}} {m_{\rm b}+\sum_jm_{SP_j}} \nonumber\\ \bmath{X}_{SA_i} &=& \bmath{x}_{SA_i} - \bmath{x}_A \nonumber\\ \bmath{X}_{SB_i} &=& \bmath{x}_{SB_i} - \bmath{x}_B \nonumber\\ \bmath{X}_{SC_i} &=& \bmath{x}_{SC_i} - \bmath{x}_C \nonumber\\ \bmath{X}_{SP_i} &=& \bmath{x}_{SP_i} - \frac{m_A\bmath{x}_A + \sum_jm_{SA_j}\bmath{x}_{SA_j} +m_B\bmath{x}_B + \sum_jm_{SB_j}\bmath{x}_{SB_j}}{m_{\rm b}} \nonumber\\ \bmath{X}_{P_i} &=& \bmath{x}_{P_i} - \frac{m_{\rm t}\bmath{X}_A - \sum_jm_{P_j}\bmath{x}_{P_j}}{m_{\rm t} - \sum_jm_{P_j}} \end{eqnarray} where subscripts $A$, $B$, $C$ label the stars and $SA_i$, $SB_i$, $SC_i$, $SP_i$ and $P_i$ label the planets. $m_{\rm t}$ is the total mass of the system and $m_{\rm b}$ is the mass of the inner binary stars and their planets. To derive the full system of equations is somewhat involved, but individually (i.e. for one orbital type only) they are readily obtainable. 
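As a concrete illustration of the S(AB)-P piece of this transformation, the sketch below implements the position map in the test-particle limit, where the planet masses are neglected in the barycentre; the function name is ours and the full transformation above also includes the planet-mass terms.

```python
def sab_p_positions(m_A, x_A, m_B, x_B, planets):
    """Refer S(AB)-P planet positions to the inner-binary barycentre,
    in the test-particle limit (planet masses neglected):
    X_SP_i = x_SP_i - (m_A x_A + m_B x_B) / (m_A + m_B)."""
    m_b = m_A + m_B
    bary = [(m_A * a + m_B * b) / m_b for a, b in zip(x_A, x_B)]
    return [[x - c for x, c in zip(p, bary)] for p in planets]
```

For an equal-mass binary the barycentre sits midway between the stars, and the transformed planet coordinate is simply the offset from that midpoint.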
For example, using the method of \citet{CQDL02}, the conjugate momenta for a hierarchical triple with $N$ S(AB)-P planets only are \begin{eqnarray} \bmath{P}_A &=& \bmath{p}_A + \bmath{p}_B + \bmath{p}_C + \sum{}\bmath{p}_j \nonumber\\ \bmath{P}_B &=& \bmath{p}_B - \frac{m_B}{m_{\rm b}}(\bmath{p}_A+\bmath{p}_B) \nonumber\\ \bmath{P}_i \,\, &=& \bmath{p}_i - \frac{m_i}{(m_{\rm t}-m_C)}(\bmath{p}_A + \bmath{p}_B + \sum{}\bmath{p}_j) \nonumber\\ \bmath{P}_C &=& \bmath{p}_C - \frac{m_C}{m_{\rm t}}(\bmath{p}_A + \bmath{p}_B + \bmath{p}_C + \sum{}\bmath{p}_j) \end{eqnarray} where here $m_{\rm b}$ is the mass of the inner binary only. The transformed Hamiltonian is split as follows \begin{eqnarray} H \quad \,\,&=& H_{\rm Kep} + H_{\rm Int} + H_{\rm Jump} \nonumber\\ H_{\rm Kep} &=& \frac{\bmath{P}_{B}^{2}}{2\mu_{b}} + \frac{\bmath{P}_{C}^{2}}{2\mu_{t}} - \frac{G\mu_{b}m_{\rm b}}{R_B} - \frac{G\mu_{t}m_{\rm t}}{R_C} + \sum_{i=1}^N \left( \frac{\bmath{P}_{i}^2}{2m_i}-\frac{Gm_{\rm b} m_i}{R_i} \right)\nonumber\\ H_{\rm Int} &=& -\sum_{i=1}^N \sum_{j>i} \frac{Gm_im_j}{R_{ij}} + Gm_{\rm b} m_{C}\Bigg( \frac{1}{R_C} - \frac{m_A}{|m_{\rm b}\bmath{X}_C+m_B\bmath{X}_B+m_{\rm b}\bmath{s}|} - \frac{m_B}{|m_{\rm b}\bmath{X}_C-m_A\bmath{X}_B+m_{\rm b}\bmath{s}|}\Bigg)\nonumber\\ & &{} + Gm_{\rm b}\sum_{i=1}^N m_i \Bigg( \frac{1}{R_i} - \frac{m_A}{|m_{\rm b}\bmath{X}_i+m_B\bmath{X}_B|} - \frac{m_B}{|m_{\rm b}\bmath{X}_i-m_A\bmath{X}_B|} \Bigg)\nonumber\\ & &{} + Gm_C \sum_{i=1}^N m_i \left(\frac{1}{R_C} - \frac{1}{|\bmath{X}_C - \bmath{X}_i +\bmath{s}|}\right)\nonumber\\ H_{\rm Jump} &=& \frac{1}{2m_{\rm b}}\left|\sum{}\bmath{P}_j\right|^2 \end{eqnarray} where $\mu_{b} = m_Am_B/m_{\rm b}$ and $\mu_{t} = m_C(m_{\rm t}-m_C) / m_{\rm t}$ are the reduced masses of the inner and outer binaries respectively and $\bmath{s} = \sum{}m_j\bmath{X}_j/(m_{\rm t}-m_C)$.
The Hamiltonian $H_{\rm Kep}$ represents the Keplerian motion of the stellar orbits and the Keplerian motion of the planets about a fixed point. The method recommended by \citet{WH91} is to use the f and g functions of \citet{Da88} to evolve the system under this term. The Hamiltonian $H_{\rm Int}$ contains interaction terms dependent only on position. Hamilton's equations can be used to derive the accelerations on each object due to this term, and these can be analytically integrated to evolve the system over the interval $dt$. These accelerations are \begin{eqnarray} \frac{d\bmath{V}_{B}}{dt} &=& -Gm_{\rm b} m_A\sum_{i=1}^N m_i \Bigg(\cube{m_{\rm b}\bmath{X}_i+m_B\bmath{X}_B} - \cube{m_{\rm b}\bmath{X}_i-m_A\bmath{X}_B}\Bigg)\nonumber\\ & &{} - Gm_{\rm b} m_Am_C \Bigg( \cube{m_{\rm b}\bmath{X}_C +m_B\bmath{X}_B +m_{\rm b}\bmath{s}} - \cube{m_{\rm b}\bmath{X}_C -m_A\bmath{X}_B +m_{\rm b}\bmath{s}}\Bigg)\nonumber\\ \frac{d\bmath{V}_{C}}{dt} &=& \frac{G(m_{\rm t}-m_C)\bmath{X}_C}{R_C^3} - G\sum{}m_j\cube{\bmath{X}_C - \bmath{X}_j +\bmath{s}} \nonumber\\ & &{} - Gm_{\rm b}^2\Bigg( m_A\cube{m_{\rm b}\bmath{X}_C+m_B\bmath{X}_B +m_{\rm b}\bmath{s}} + m_B\cube{m_{\rm b}\bmath{X}_C -m_A\bmath{X}_B +m_{\rm b}\bmath{s}}\Bigg) \nonumber\\ \frac{d\bmath{V}_{i}}{dt} &=& Gm_C \cube{\bmath{X}_C - \bmath{X}_i +\bmath{s}} -\frac{Gm_C}{(m_{\rm t}-m_C)}\sum_{j=1}^N m_j\cube{\bmath{X}_C - \bmath{X}_j +\bmath{s}} - \sum_{{^{j=1} _{j\neq i}}}^N \frac{Gm_j}{R_{ij}^3} (\bmath{X}_i - \bmath{X}_j) \nonumber\\ & &{} -\frac{Gm_{\rm b}^2m_C}{m_{\rm t} - m_C} \Bigg(m_A \cube{m_{\rm b}\bmath{X}_C +m_B\bmath{X}_B +m_{\rm b}\bmath{s}} + m_B \cube{m_{\rm b}\bmath{X}_C -m_A\bmath{X}_B +m_{\rm b}\bmath{s}}\Bigg)\nonumber\\ & &{} + Gm_{\rm b} \Bigg( \frac{\bmath{X}_i}{R_i^3} - m_Am_{\rm b}\cube{m_{\rm b}\bmath{X}_i+m_B\bmath{X}_B} - m_Bm_{\rm b}\cube{m_{\rm b}\bmath{X}_i-m_A\bmath{X}_B} \Bigg) \end{eqnarray} Note that in these equations $\bmath{V}$ is a pseudo-velocity, equal to the transformed momenta
divided by the mass of the objects they describe. That is, $\bmath{V}_B$ is $\bmath{P}_B / m_{\rm b}$ and so on. The final Hamiltonian is $H_{\rm Jump}$, which gives \begin{eqnarray} \frac{d\bmath{X}_{i}}{dt} = \sum_{j=1}^N \frac{\bmath{P}_j}{m_{\rm b}} \end{eqnarray} A similar method can be used to obtain the corresponding equations for the other four orbital types, and these are given in the Appendix. Further details of the derivation for a more general system with more than one orbital type are given in \citet{Ve08}. Close encounters can be included in the same way as described by \citet{CQDL02}, as well as variable timesteps, but are not implemented here. The method outlined above was implemented separately for each planetary type in a stand-alone program {\sc Moirai}\footnote{The Moirai are in Greek mythology the three Fates. Given the tradition of naming planetary objects after gods and heroes from classical mythology, and that the stars in these systems will be the largest influence on the orbital evolution of such objects, it seemed appropriate to name them (and the code) after the goddesses that were supposed to control the fate of men and gods alike.}, the testing of which is also described in the Appendix and in \citet{Ve08}. \section{Triple Star System Statistics} \label{sec:stats} It is helpful to determine the statistical properties of known hierarchical triple systems, so simulations can be run that are comparable to real systems. The Multiple Star Catalogue \citep{To97} contains 541 reasonably complete entries for such systems, and histograms of this data are plotted in Figure~\ref{fig:stats}. This catalogue is compiled from a variety of sources, so contains a number of biases from different observing methods, as discussed by the author, but should be sufficient here.
From the histograms, it can be seen that the stellar masses tend to be near solar, with the inner binary mass ratio being generally less than 2 and the outer between 1 and 3, although sometimes larger than the total mass of the binary. The semimajor axis of the inner binary tends to be less than 2 au and the outer, although often many hundreds of astronomical units, is generally concentrated around a few tens. There seems to be a big step in the distribution at about 15 au. The ratio of the semimajor axes ranges widely but seems to be slightly peaked towards a factor of 10 or less. The inner eccentricity is peaked around zero, and a plot of the inner eccentricity as a function of semimajor axis shows that this does not only occur for close inner binaries. The outer eccentricity is fairly uniform from 0 to about 0.7. A plot of the outer eccentricity as a function of semimajor axis also shows no apparent trend. \citet{To97} considers the catalogue to be complete to a distance of 10 pc. However, this includes only 14 objects with all stellar orbital elements known, so to determine whether the data are representative the catalogue was ranked by distance and a comparison made of the first 250 entries to the entire sample. No major difference was apparent between the two, so the sample can be considered to give a reasonable overview of triple systems. \begin{figure} \centering \includegraphics[width=\textwidth]{fig3.eps} \caption{ The statistics of the orbits of stars in triple star systems, plotted using data from \citet{To97}. The top left panel shows a histogram of masses of each component (A black, B blue and C red-dashed), while the bottom left shows the mass ratios of the inner and outer binaries ($m_A/m_B$ black, $m_{bin}/m_C$ blue). The top middle panel shows the semimajor axes of the inner (black) and outer (blue) orbits, while the bottom middle panel shows the ratio of these.
The top right panel shows the eccentricities of the inner (black) and outer (blue) orbits, and the bottom right panel shows the ratio of the two.} \label{fig:stats} \end{figure} \section{Numerical Integrations: S(AB)-P orbits} \label{sec:sp} \begin{figure} \centering \includegraphics[width=\textwidth]{fig4.eps} \caption{ Details of the sets of simulations with inner binary parameters $a_{B}=1$ au, $e_{B}=0$, $m_A = m_B = 1 M_{\odot}$ and outer binary parameters $e_C = 0.6$, $m_C=4 M_{\odot}$ and $a_C =$ 20 to 100 au. The left and centre panels show the conservation of total energy and angular momentum for these eight simulations (note that the change in angular momentum is a factor of $10^{-8}$ smaller than that in energy, as in symplectic integration schemes angular momentum is conserved to machine precision). The right hand panel shows the variation of the semimajor axis of star C throughout the simulation. The different colours indicate the different initial values of the semimajor axis of star C, as apparent from the right panel. } \label{fig:el} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{fig5.eps} \caption{ The stability radii for S(AB)-P type test particles as a function of outer binary semimajor axis and eccentricity for eight different mass ratios. The colours indicate the eccentricity of star C and the panels are labelled with the mass ratio. Solid lines show the first and last fully stable radii, and crosses show any unstable locations within this annulus. For comparison the locations of the inner and outer critical semimajor axes predicted from \citet{HW99} are shown as dashed lines. } \label{fig:sp_critout_i} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{fig6.eps} \caption{ The stability radii for S(AB)-P type test particles as a function of outer binary semimajor axis and eccentricity, this time scaled to the semimajor axis of star C, for the smallest and largest mass ratios studied.
The symbols are as for Fig~\ref{fig:sp_critout_i}, and it can be seen that the location of the outer stability boundary scales with $a_C$. } \label{fig:sp_critout_o} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{fig7.eps} \caption{ Some examples of test particle evolution for simulations with $\mu_C=0.09$. Different colours indicate different values of $a_C$ (see Figure~\ref{fig:el}). The left panels show average test particle survival time at each initial semimajor axis for $e_C=0.0$ (top) and $e_C=0.2$ (bottom), indicating that for the zero eccentricity case the stable region is well defined, and that the dynamics scale with $a_C$. The island around the 5:1 MMR in the $e_C=0.2$ case is clearly visible, and the location of the resonance is overlaid as a dashed line. The middle panel shows the fraction of test particles surviving at each initial semimajor axis, and the right panel shows test particle decay rates. A fast clearing out of the unstable regions is seen.} \label{fig:tpev01} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{fig8.eps} \caption{ Test particle critical outer semimajor axes for the $a_C = 100$ au cases: data and fits. In the left panel is the simulation data as a function of mass ratio, with symbols as before. The middle panel shows the 4 parameter fit to the data: now crosses correspond to the values to be fitted, the solid line is the fit and the dotted lines the results of \citet{HW99} for comparison. Note that the critical semimajor axis is taken as the innermost radius within the stable island, not the line shown in the left panel. The right panel shows the fit as a function of eccentricity.
} \label{fig:sp_crit_100} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{fig9.eps} \caption{ Test particle stability as a function of initial semimajor axis, as for Figure~\ref{fig:sp_critout_i}, in more detail for the $\mu_C = 0.09$, $\mu_C = 0.60$ and $\mu_C = 0.67$ simulations for various eccentricities of star C. Here additional simulations have been run for values of $a_C$ between 20 and 50 au in steps of 1 au. Symbols are as before: the solid lines are the inner and outer stable boundaries, crosses are unstable locations within these, and dotted lines the fit of \citet{HW99}.} \label{fig:sp_crit_detail} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{fig10.eps} \caption{ The inner critical radius as a function of the inner binary's eccentricity and mass ratio. As for Figure \ref{fig:sp_critout_i} the solid lines show the first and last stable radius, crosses any unstable points between these two and the dashed lines the fit of \citet{HW99}. } \label{fig:sp_crit_in} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{fig11.eps} \caption{ The fit to the inner critical radius as a function of the inner binary's eccentricity and mass ratio. The crosses show the locations of the inner first stable radii, the dashed lines again show the fit of \citet{HW99}, and the solid lines show the 4 parameter fit discussed in the text. Colours indicate mass ratio and eccentricity as for Figure~\ref{fig:sp_crit_in}.} \label{fig:sp_i_fit} \end{figure} In this section, the numerical simulations of S(AB)-P type orbits are described and the results presented. The stability of the region between the inner and outer binary can be determined using grids of test particles. This is an efficient method to map out stable radii for different stellar orbital parameters.
Since the stellar orbits and masses constitute a $7\times7\times7$-dimensional space, some simplifying assumptions are needed to make the investigation feasible. First, all orbits are taken as coplanar. Second, the initial orientation of the three stars is fixed, so that they start aligned, all at pericentre. The effect of these assumptions will be discussed later. This leaves the masses, eccentricities and semimajor axes of the stars as free parameters. As the dynamics will scale with certain combinations of these, the system should be characterised by ratios of some of them. Given the discussion in Section~\ref{sec:stats}, the stellar systems investigated are as follows. The ratio of the masses of the inner binary stars $m_A/m_B$ is varied from 1 to 2, and the ratio of the outer binary $m_C/m_{\rm b}$ from 0.1 to 2. Note that these are not the mass ratios defined in \citet{HW99}, which are $\mu_B = m_B/m_{\rm b}$ and $\mu_C = m_C/(m_{\rm b} + m_C)$. The mass of the inner binary is fixed at 2 $M_\odot$ and the eccentricities of both orbits range from 0.0 to 0.6. The semimajor axis of the inner binary is varied between 1 and 5 au and the semimajor axis of the outer binary between 20 and 100 au. The effect of changing these six parameters ($\mu_B$, $\mu_C$, $e_B$, $e_C$, $a_B$ and $a_C$) is the purpose of this investigation. For each of these stellar configurations, a grid of test particles is added. Particles in this grid are spaced evenly in radius and longitude, and all start on initially circular orbits. A semimajor axis is defined to be stable if all particles starting there remain stable themselves for the length of the simulation. Individual particles are considered unstable if their orbits are not bound to the barycentre of the inner binary or if they pass certain radial limits, as per the definition of the orbital type in Section~\ref{sec:orbits}. When either of these conditions is met, the particle is removed from the simulation.
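The grid bookkeeping just described is simple to express in code. A minimal sketch (Python, with illustrative spacing values; not the code actually used in this work) of building such a grid and applying the "all particles at a semimajor axis must survive" stability criterion:

```python
import numpy as np
from collections import defaultdict

def make_test_particle_grid(r_in, r_out, dr=0.1, n_long=8):
    """Massless test particles on initially circular, coplanar orbits,
    spaced evenly in radius and longitude (spacing values illustrative)."""
    radii = np.arange(r_in, r_out + 1e-9, dr)
    longitudes = np.linspace(0.0, 2.0 * np.pi, n_long, endpoint=False)
    # One (semimajor axis, longitude) pair per particle.
    return [(a, lam) for a in radii for lam in longitudes]

def stable_semimajor_axes(grid, survived):
    """A semimajor axis counts as stable only if *every* particle
    started there survives to the end of the simulation."""
    flags = defaultdict(list)
    for (a, _), ok in zip(grid, survived):
        flags[a].append(ok)
    return sorted(a for a, f in flags.items() if all(f))
```

The survival flags themselves would of course come from the N-body integration, with a particle flagged as lost as soon as it becomes unbound or crosses the radial limits.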
For S(AB)-P orbits the radial limits are the radius of the inner and outer binaries. In practice, most particles are lost as their orbits become unbound. In their study of planetary stability in binary star systems, \citet{HW99} find inner and outer critical semimajor axes for test particles in P and S orbits respectively. These are empirically fitted to a function of the binary's eccentricity and mass ratio ($\mu=m_2/(m_1+m_2)$). For S(AB)-P orbits in a hierarchical triple system, it is reasonable to expect there to be an inner and outer critical semimajor axis, primarily controlled by the inner and outer binaries respectively. If the stars are separated enough to be well approximated as two decoupled orbits, then these critical semimajor axes should be similar in form to those found by \citet{HW99}. There are two obvious ways that the system differs from this simple picture. First, as the stellar orbits will vary with time, the initial orbital configuration may not define the maximum extent of instability regions controlled by parameters such as eccentricity. However, for the systems studied here it will be shown that this is not an important effect as the orbits do not evolve significantly. Secondly, the instability zones in binary systems were shown by \citet{MW06} to be due to overlap between resonances. It may occur then that there are additional unstable regions in hierarchical triples where the resonances due to each sub-binary overlap. In light of this discussion, simulations can be set up to investigate the outer and inner stability boundaries separately. First, the outer boundary can be studied by fixing the inner binary as two 1$M_\odot$ stars in a circular 1 au orbit and varying star C as discussed above. The test particle grids in these cases are modelled on those chosen by \citet{HW99}, and extend from the radius of the inner binary to half that of the outer binary. 
They are spaced evenly in steps of 0.1 au and there are eight particles at each semimajor axis. Simulation lengths are 1 Myr, which corresponds to at least several thousand outer binary periods. The timesteps used are of the order of a few days, giving relative energy conservation at the $10^{-7}$ to $10^{-8}$ level. For an example of this, see the left and centre panels of Figure~\ref{fig:el}, which show the energy and angular momentum variation for cases with star C set to 4 $M_\odot$ and with an eccentricity of 0.6. Also shown in this figure is the variation of the semimajor axis of the outer binary, to demonstrate that even in the most extreme case the stellar orbits are not evolving. This is not too unexpected as the ratio between the stellar orbits is still fairly high, even in this case. (For comparison, \citet{EK95} claim that the stable radius for this hierarchical triple is 16 au, and simulations show that it disintegrates at an initial separation of around 12 au). The results of these simulations are given in Figure~\ref{fig:sp_critout_i}, which shows test particle stability for each mass ratio. The inner (outer) stable radius can be defined as the first (last) semimajor axis encountered at which all particles are stable. This is plotted for each eccentricity of star C in the figure. In addition, there are unstable radii within these bounds, and these are plotted as crosses. For comparison, the fits of \citet{HW99} are also shown as dashed lines. Note that these are the optimum values and that there are uncertainties given for their fit. It can be seen that the stable regions here have fairly well defined edges, and in fact match up in most simulations to those of \citet{HW99}. Around the inner edge, some isolated unstable radii are expected, corresponding to the first $n:1$ mean motion resonances (MMRs) within the region.
The inner edge of the stable region appears mostly constant as the orbit of star C is altered, as expected. The only exceptions are the occasional unstable location near this boundary and a general trend at small separations of the two stellar binaries for the stability zone to be less than that expected from the trends of the wider triple cases. It is possible to demonstrate that, apart from these exceptions, the outer edge scales with the outer binary's semimajor axis by plotting the data scaled to this quantity, as shown in Figure~\ref{fig:sp_critout_o} for two of the mass ratio cases. The only unclear case in these simulations is that of $\mu_C=0.09$ and $e_C=0.2$. Here, there is a large unstable region that occurs just before the last stable radius. This effect is due to a small region of stability that appears as the last stable radius, in which test particles remain stable for just about the length of the simulations. Interestingly, this island appears to be centred on the 5:1 MMR with star C. Figure~\ref{fig:tpev01} shows a comparison of the evolution of test particles in this case and that with the same mass ratio but $e_C=0.0$. These show, as a function of initial semimajor axis, the average test particle survival time and the fraction of surviving particles. Also shown are the decay rates in each simulation. From these, it can be seen that the dynamics in each $a_C$ case scale with this quantity and time. It is also clear that in the $a_C=20$ au simulation the region around the 5:1 MMR is unstable with a lifetime just less than 1 Myr, but that as $a_C$ and the dynamical time increase the region appears stable, as the test particles are not lost before the end of the simulation. In fact, the test particles within this region all have very high eccentricities at the end of the simulation as compared to the rest of the stable zone.
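The $n$:1 commensurability locations referred to here follow from Kepler's third law when the inner binary is treated as a point mass $m_{\rm b}$. A sketch of this estimate (an estimate only; it ignores the finite width of the resonance):

```python
def n1_resonance_location(a_C, m_b, m_C, n):
    """Semimajor axis (same units as a_C) of the n:1 commensurability
    with star C, for a particle orbiting the inner binary treated as a
    point mass m_b.  From P^2 proportional to a^3/M:
        a = a_C * (m_b / (m_b + m_C))**(1/3) * n**(-2/3)
    """
    return a_C * (m_b / (m_b + m_C)) ** (1.0 / 3.0) * n ** (-2.0 / 3.0)
```

For the $\mu_C=0.09$, $a_C=20$ au case ($m_{\rm b}=2\,M_\odot$, so $m_C\approx0.2\,M_\odot$) this places the 5:1 MMR near 6.6 au, i.e. close to the outer edge of the stable region, consistent with the island described above appearing just before the last stable radius.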
In the $e_C=0.0$ graphs it can be seen that the stable region is very clearly defined, with very few particles surviving for any length of time outside its boundaries. Apart from this one feature for the $\mu_C=0.09$ case, the simulation lengths appear to be sufficiently long to describe stability. \citet{HW99} use an integration length of $10^4$ binary periods, comparable to those used here for all but the very low mass ratio and large $a_C$ cases. They also mention that the stability boundaries do not change much after a few hundred binary periods. This would indicate that even for the long binary period cases, the results are unaffected by this choice, and this is supported by the scaling seen with semimajor axis in the location of the outer boundary. As shown in Figure~\ref{fig:tpev01} and discussed above, the edges to the stable regions are distinct in most cases, and the test particle decay rates indicate that a stable situation has been reached long before the end of the simulations. Further support that the simulation times are sufficient is given by running the $\mu_C=0.09$ and $e_C=0.6$ cases for 2 Myr. The results from these are almost identical to the original 1 Myr simulations, with only a few additional unstable points appearing. Since the outer boundary is well modelled as a function of mass ratio and eccentricity and scales with $a_C$ for all but the smallest separations, the $a_C = 100$ au data can be plotted and fitted. This semimajor axis is chosen as it has the most detailed determination of the critical radii due to the larger number of test particles used. To provide a better fit, additional simulations were run for six more mass ratios, and the data to be fitted are shown in the left hand panel of Figure~\ref{fig:sp_crit_100}. Rather than the last stable radius, the stable locations just within any isolated unstable radii around the boundary are taken.
Following \citet{HW99}, we fit this boundary with a function of the form \begin{equation} \frac{a_{out}}{a_C} = a_1 + a_2 \mu_C + a_3 e_C + a_4 \mu_C e_C + a_5 e_C^2 + a_6 \mu_C e_C^2 \label{eq:hw1} \end{equation} Performing this fit gives the parameters shown in the left hand column of Table~\ref{tab:hwfit1}. Also given for comparison are the values suggested by \citet{HW99}. This fit has a $\chi^2$ of about 2600 and is in reasonable agreement with their values. Note however that a smaller range of mass ratios has been fitted here, as it is not realistic to increase the mass of star C to the point where $\mu_C = 0.9$. A four parameter fit to the first four terms in Equation~\ref{eq:hw1} gives a simpler fit, also shown in the table. The $\chi^2$ value of this fit is higher at 3700 but still reasonably models the data, and this fit is shown in the middle and right panels of Figure~\ref{fig:sp_crit_100}. Note that the $\chi^2$ value is calculated using the standard formula \begin{equation} \chi^2 = \sum \frac{(y_i - y_{{\rm obs},i})^2}{\sigma_i^2} \end{equation} where $y_i$ is the fitted value of the function, $y_{{\rm obs},i}$ the observed value and $\sigma_i$ the error in this value. Since the grid size here is 0.1 au, the stability boundary cannot be located to any greater accuracy and hence this was assigned as the uncertainty in each measurement. The large $\chi^2$ values obtained for the fits reflect the complex nature of a boundary that is not easily fitted with such a simple function. However, as can be seen from the figures, it is still a useful approximation for quickly determining the stability properties of a system. As mentioned above, the inner stability boundary seems to be unaffected by changes in the orbit of the outer star except at small separations of the two binary orbits. In this case, the stable region is smaller than expected and there are also far more isolated unstable radii, especially as the mass of star C increases.
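Because Equation~\ref{eq:hw1} is linear in the coefficients $a_1 \ldots a_6$, the fit and the $\chi^2$ above reduce to weighted linear least squares. A sketch of this procedure, assuming NumPy (not the fitting code actually used; array names illustrative):

```python
import numpy as np

def fit_outer_edge(mu, e, a_ratio, sigma):
    """Weighted linear least squares for the six-parameter form of
    Eq. (hw1): a_out/a_C = a1 + a2*mu + a3*e + a4*mu*e + a5*e^2 + a6*mu*e^2.
    mu, e, a_ratio and sigma are equal-length 1-D arrays."""
    mu = np.asarray(mu, float)
    e = np.asarray(e, float)
    y = np.asarray(a_ratio, float)
    sig = np.asarray(sigma, float)
    # Design matrix: one column per term of the fitting function.
    X = np.column_stack([np.ones_like(mu), mu, e, mu * e, e**2, mu * e**2])
    w = 1.0 / sig                                   # weight rows by 1/sigma
    coeffs, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    chi2 = np.sum(((X @ coeffs - y) / sig) ** 2)    # standard chi^2
    return coeffs, chi2
```

Restricting the design matrix to its first four columns gives the four-parameter variant in the same way.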
Figure~\ref{fig:sp_crit_detail} shows this effect in more detail for the cases of $\mu_C = 0.09$, 0.60 and 0.67. Here several more semimajor axes for the outer star have been studied for various eccentricities. Although the two different boundaries appear to simply superpose, there does seem to be a slight effect around the point where they overlap, becoming more pronounced as the mass ratio and eccentricity increase. There also seems to be a definite unstable island in the centre of the region, most noticeable in the $e_C = 0.4$ cases. This moves outwards as the position of the outer star is increased, and must be a resonant feature. Note that if the mass of star C is set to zero for the highest mass ratio and eccentricity case then the stable region has a clearly defined inner radius at 2.3 au and there are no unstable particles beyond this boundary, indicating that the structure seen in these graphs is a direct result of the combined effects of all three stars. Whether this is due to resonance overlap is unknown. \begin{table} \centering \caption{ Fitted parameters for Equation~\ref{eq:hw1}, the outer stability edge, compared to those of \citet{HW99}.
The first column shows the results of a 6 parameter fit to the data and the middle column the results of a 4 parameter fit.}
\begin{tabular}{lccccccccc}
Parameter & \multicolumn{3}{c}{This work (6 par.)} & \multicolumn{3}{c}{This work (4 par.)} & \multicolumn{3}{c}{\citet{HW99}}\\
\hline
$a_1$ & 0.477 &$\pm$& 0.001 & 0.466 &$\pm$& 0.001 & 0.464 &$\pm$& 0.006 \\
$a_2$ & -0.412 &$\pm$& 0.002 & -0.392 &$\pm$& 0.001 & -0.380 &$\pm$& 0.010 \\
$a_3$ & -0.708 &$\pm$& 0.006 & -0.542 &$\pm$& 0.002 & -0.631 &$\pm$& 0.034 \\
$a_4$ & 0.794 &$\pm$& 0.012 & 0.494 &$\pm$& 0.004 & 0.586 &$\pm$& 0.061 \\
$a_5$ & 0.276 &$\pm$& 0.009 & & -- & & 0.150 &$\pm$& 0.041 \\
$a_6$ & -0.500 &$\pm$& 0.020 & & -- & & -0.198 &$\pm$& 0.074 \\
\end{tabular}
\label{tab:hwfit1}
\end{table}
\begin{table}
\centering
\caption{ Fitted parameters for Equation~\ref{eq:hw2}, the inner stability edge, compared to those of \citet{HW99}. The first column shows the results of a 7 parameter fit, and the second column the results of a 4 parameter fit.}
\begin{tabular}{lccccccccc}
Parameter & \multicolumn{3}{c}{This work (7 par.)} & \multicolumn{3}{c}{This work (4 par.)} & \multicolumn{3}{c}{\citet{HW99}}\\
\hline
$a_1$ & 3.45 &$\pm$& 1.10 & 2.92 &$\pm$& 0.12 & 1.60 &$\pm$& 0.04 \\
$a_2$ & 9.94 &$\pm$& 1.81 & 4.21 &$\pm$& 0.24 & 5.10 &$\pm$& 0.05 \\
$a_3$ & -6.95 &$\pm$& 1.47 & -2.67 &$\pm$& 0.38 & -2.22 &$\pm$& 0.11 \\
$a_4$ & -5.43 &$\pm$& 5.27 & -1.55 &$\pm$& 0.29 & 4.12 &$\pm$& 0.09 \\
$a_5$ & -14.09 &$\pm$& 4.42 & & -- & & -4.27 &$\pm$& 0.17 \\
$a_6$ & 6.22 &$\pm$& 6.26 & & -- & & -5.09 &$\pm$& 0.11 \\
$a_7$ & 25.46 &$\pm$& 8.51 & & -- & & 4.61 &$\pm$& 0.36 \\
\end{tabular}
\label{tab:hwfit2}
\end{table}
A more detailed investigation of the inner boundary can be carried out in a similar manner to the outer edge, by fixing star C at 1 $M_\odot$ in a circular 50 au orbit and varying the inner binary's mass ratio and eccentricity, as discussed earlier. The semimajor axis of the inner pair was kept at 1 au and their total mass at 2 $M_\odot$.
The eccentricity of this binary was varied from 0.0 to 0.6 in steps of 0.2 again, and the mass ratio $m_A/m_B$ varied from 1.0 to 2.0 in steps of 0.1. Figure \ref{fig:sp_crit_in} shows the results from these simulations, with the locations of the first and last stable radii plotted as functions of the inner binary's eccentricity $e_B$ and mass ratio $\mu_B$. Also plotted again is the fit given by \citet{HW99} to the critical radius for P type planets in binary systems. As expected, the outer stability boundary seems unaffected by changes to the inner binary pair. There are some unstable points between the two stability boundaries, most notably around 3.2 au for the simulations with $e_B=0.0$. \citet{HW99} find unstable islands appearing at the first $n:1$ beyond the critical semimajor axis in this configuration. However, the location of these unstable radii here is well beyond the first $n:1$ MMR in the stable region. In addition, running the $\mu_B=0.34$ and $e_B=0.0$ case without star C reveals that these locations are now stable. This would indicate that this is again an effect due to the combination of all three stars. Because of this, the inner edge's location is simply taken to be the first stable radius. The location of this boundary appears to be a very weak function of mass ratio and an approximately linear function of eccentricity, and also agrees well with the predictions of \citet{HW99}. For this boundary, they fit a function of the form \begin{equation} \frac{a_{in}}{a_B} = a_1 + a_2 e_B + a_3 e_B^2 + a_4 \mu_B + a_5 e_B \mu_B + a_6 \mu_B^2 + a_7 e_B^2 \mu_B^2 \label{eq:hw2} \end{equation} where the constants are given in the third column of Table \ref{tab:hwfit2}. The terms up to fourth order are included to fit a variation in the position of the critical radius at smaller mass ratios than those considered here.
Fitting this function to the results here does not produce well determined coefficients despite a low $\chi^2$ value of about 29, as shown in the first column of the table. In fact, a better solution is obtained by only including the first four terms, as shown in the middle column of the table. The $\chi^2$ value for this fit is slightly higher at about 42. This second fit is plotted in Figure~\ref{fig:sp_i_fit}, and seems to describe the data well, although it slightly overestimates the size of the stable region. Despite the smaller $\chi^2$ values here, the fitted parameters are less well determined than those for the outer edge. This reflects the size of the test particle grid spacing relative to the size of the inner binary's orbit: here the boundary is determined to within a tenth of the relative separation of the stars, while for the outer edge it was determined to within a thousandth of the size of the binary's orbit. \begin{figure} \centering \includegraphics[width=\textwidth]{fig12.eps} \caption{ The stability radii for S(AB)-P type test particles as a function of outer binary semimajor axis and mass ratio for different eccentricities of both stellar orbits. The colours indicate the eccentricity of star B and the panels are labelled with the eccentricity of star C. Solid lines show the first and last fully stable radii, and crosses show any unstable locations within this annulus. } \label{fig:eccentricity} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{fig13.eps} \caption{ The stability radii for S(AB)-P type test particles as a function of outer binary semimajor axis and mass ratio for different eccentricities and inclinations of star C. The colours indicate inclination and the panels are labelled with the eccentricity. Solid lines show the first and last fully stable radii, and crosses show any unstable locations within this annulus.
} \label{fig:inclination} \end{figure} The parameter space investigated so far is somewhat limited. The stars' orbital longitudes have been ignored, assumed to be a minor influence on stability, the sets of simulations have always kept one star on a circular orbit, and all objects have been taken as coplanar. Brief investigations of these three extensions to the parameter space can be made. Firstly, the assumption that the initial longitudes of the stellar orbits have little effect on the stability boundaries was tested by running the inner edge simulations with mass ratios $\mu_B = $0.33 to 0.40 with a different initial longitude of the inner binary pair. The results from these simulations were almost identical to those of the initial set, providing some evidence that this assumption is valid. Next, the effect of both stars having non-circular orbits was investigated. The semimajor axes of the inner and outer binary were kept at 1 and 50 au, and the eccentricities of both varied in steps of 0.2 from 0.0 to 0.6. The mass ratio of the outer binary only was varied as before. The results of these simulations are shown in Figure~\ref{fig:eccentricity}. Each panel shows the location of the stability radii and unstable points for all values of $e_B$ and one given value of $e_C$. There is almost no change in the position of the outer stability boundary, but as both stellar eccentricities and the mass ratio increase, the inner boundary starts to move outwards. The stellar eccentricities are still not varying to any significant extent and the additional instability must be due to the combined perturbations of all three stars. Lastly, the effect of inclination was studied. The semimajor axes were kept at 1 and 50 au, $e_B$ was set to 0, and the outer mass ratio and eccentricity varied as before. Sets of simulations were then run for inclinations of the outer star of $10^\circ$, $20^\circ$ and $30^\circ$. The test particles were kept coplanar with the inner binary.
This is a rather limited investigation, as any dependence on the longitude of ascending node has been ignored, and only a small range of inclinations included. However, higher inclinations will be subject to the Kozai instability, causing large variations in the stellar orbits, which is expected to rapidly destabilise test particles. In these simulations the stellar mutual inclination remains fairly constant and their orbits are stable. Figure~\ref{fig:inclination} shows the stability boundaries for these simulations, each panel comparing the different inclination results for a different value of $e_C$. There is little change in the inner boundary but the outer edge moves somewhat. Interestingly, for the $e_C=0$ case higher inclinations are more stable. If the test particles are started instead coplanar with the outer binary, the stability is similar, although not identical. These results are consistent with the conclusions of \citet{PFD03}, who show that inclination does not significantly affect the stability of P type planets in binary systems. \section{Conclusion} \label{sec:conclusion} The main achievement of this paper is the formulation of a symplectic integrator algorithm suitable for hierarchical triple systems. This extends the algorithm for binary systems presented by \citet{CQDL02}. The positions of the stars are followed in hierarchical Jacobi coordinates, whilst the planets are referenced purely to their primary. Each of the five distinct cases, namely circumtriple orbits, circumbinary orbits and circumstellar orbits around each of the stars in the hierarchical triple, requires a different splitting of the Hamiltonian and hence a different formulation of the symplectic integration algorithm. Here, we have given the mathematical details for each of the five cases, and presented a working code that implements the algorithm. As an application, a survey of the stability zones for circumbinary planets in hierarchical triples is presented.
Here, the planet orbits an inner binary, with a more distant companion star completing the stellar triple. Using a set of numerical simulations, we found the extent of the stable zone which can support long-lived planetary orbits and provided fits to the inner and outer edges. The effect of low inclination on this boundary is minimal. A reasonable first approximation to the behaviour of a hierarchical triple is to regard it as a superposition of the dynamics of the inner binary and a pseudo-binary consisting of the outer star and a point mass approximation to the inner binary. If it is considered as two decoupled binary systems, then the earlier work of \citet{HW99} on binaries is applicable to triples, except in the cases of high eccentricities and close or massive stars. The implication here is that the addition of a stable third star does not distort the original binary stability boundaries. As mentioned, \citet{MW06} have shown that overlapping sub-resonances are the cause of the boundary in the binary case. It is reasonable to expect that in triples the same process is responsible, and the similarities between the binary and triple results support this theory. It is also expected that there is a regime in which the resonances from each sub-binary start to overlap as well, further destabilising the test particles. Evidence of this is the deviation from the binary results when the stars are close, massive and very eccentric, when resonances would be both stronger and wider. Since the parameter space investigated was chosen to reflect the observed systems, the fact that most lie in the decoupled regime would seem to be a characteristic of known triple stars. The relatively constant nature of the stellar orbits in the simulations is however a consequence of the test particle orbits being destabilised long before the stars are close enough to interact.
By extension of all these arguments, it is expected that the binary criteria can be used to fairly accurately predict the stability zones in any hierarchical stellar system, no matter the number of stars. The results presented here can be used to estimate the number of known hierarchical triple systems that could harbour S(AB)-P planets. \citet{To97} lists 54 systems with semimajor axis, eccentricity and masses for both the inner and outer components. The mutual inclinations of most are not well known, but there are nine systems listed for which this angle can be determined. For five of these it is less than $15^\circ$, two are around $40^\circ$ and two are retrograde. Although a small sample, this suggests that there are systems that fall within the low inclination regime investigated here. Using the criteria of \citet{HW99} and those found here for the positions of the inner and outer critical semimajor axes, the size of the coplanar stable region for each of these triples can be calculated. This can be considered an upper limit, since it is likely that very non-coplanar systems and those with significant eccentricities for both binary components will further destabilise planets. Of the 54 systems, 13 are completely unstable to circumbinary planets according to the four parameter fits (compared to 11 using \citet{HW99}'s criteria). Figure \ref{fig:zone} shows a plot of the width of the stable region for the remaining systems. Interestingly, the majority seem to have very small stable zones, with 16 narrower than 1 au. Whether this is a feature of triple systems or an observational bias is not apparent. It does indicate though that circumbinary planets are unlikely to exist in at least 50\% of observable systems.
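As an illustration, the stable-zone widths plotted in Figure~\ref{fig:zone} can be estimated directly from the two four-parameter fits. A sketch using the central coefficient values from Tables~\ref{tab:hwfit1} and \ref{tab:hwfit2} (uncertainties neglected; coplanar, low-eccentricity regime assumed):

```python
def stable_zone(a_B, e_B, mu_B, a_C, e_C, mu_C):
    """Inner and outer critical semimajor axes for S(AB)-P orbits from
    the four-parameter fits (central values of Tables 1 and 2).
    Here mu_B = m_B/m_b and mu_C = m_C/(m_b + m_C).  A non-positive
    width means the triple supports no circumbinary orbits."""
    # Inner edge, Eq. (hw2) truncated to its first four terms.
    a_in = a_B * (2.92 + 4.21 * e_B - 2.67 * e_B**2 - 1.55 * mu_B)
    # Outer edge, Eq. (hw1) truncated to its first four terms.
    a_out = a_C * (0.466 - 0.392 * mu_C - 0.542 * e_C + 0.494 * mu_C * e_C)
    return a_in, a_out, a_out - a_in
```

For example, an equal-mass 1 au circular inner binary ($\mu_B=0.5$) with a solar-mass tertiary at 50 au ($\mu_C=1/3$, $e_C=0$) gives a stable annulus from roughly 2.1 to 16.8 au.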
\begin{figure} \centering \includegraphics[width=\textwidth]{fig14.eps} \caption{ Widths of the circumbinary stability zones for known triple systems in au and as a percentage of the area between the two sub-binaries, calculated using the four parameter fits derived in Section~\ref{sec:sp}. Note that using the criteria of \citet{HW99} gives almost identical results.} \label{fig:zone} \end{figure}
\section{Introduction} \label{INTRO} The idea that globular clusters (GCs) harbour important clues in relation to the early stages of galaxy formation is a widely accepted concept. One of the most compelling arguments in favour of the existence of a connection between GCs and major star formation episodes in the life of a galaxy is the constant cluster formation efficiency, defined in terms of total baryonic mass \citep{b46}, in different galaxies. However, breaking the code that leads to a detailed quantitative link between GCs and the underlying ``diffuse'' stellar population is still an open question. Such a connection has been discussed on theoretical (e.g. \citealt*{b3} or, more recently, \citealt{b56b}) and observational grounds (e.g. \citealt{b20}). In the particular case of Milky Way GCs, \citet*{b57b} found that the chemical similarities between clusters and field stars with $[Fe/H]\le -1$ suggest a shared chemical history in a well mixed early Galaxy. Clarifying this issue may certainly yield some arguments in favour of (or against) some predominant ideas that have been widely referenced in the literature (e.g. \citealt{b16}; \citealt{b72b}) and later explored within the frame of different scenarios (e.g. \citealt{b1}, or \citealt{b18}). A good perspective of the complex situation in this context is given in the thorough reviews by \citet{b5} and \citet{b41}. An initial confrontation between GCs and halo stellar populations shows more differences than similarities: a) In general, GCs exhibit shallower spatial distributions than those characterising galaxy light (e.g. \citealt{b59}; \citealt{b14}); b) There is a colour offset in the sense that mean integrated globular colours appear bluer than those of the galaxy halos at the same galactocentric radius (\citealt{b78}; \citealt*{b22}; \citealt{b37b}); c) GCs frequently show bimodal colour (and hence, metallicity) distributions (see, for example, \citealt{b53}).
This feature does not seem to be shared by the stellar populations in nearby resolved galaxies (see \citealt{b15}; \citealt{b33}; \citealt{b62} or \citealt{b50}). As discussed later in this work, those differences arise mainly from the fact that GC analyses usually provide {\bf number}-weighted statistics while galaxy halo observations yield {\bf luminosity}-weighted measurements. A preliminary quantitative approach to the globulars-stellar halo connection was presented in \citet*{b25} (hereafter FFG05). This last paper shows that, given the areal density distributions of the ``blue'' and ``red'' globular cluster subpopulations in NGC 1399, the galaxy surface brightness profile, galactocentric colour gradient and cumulative GC specific frequency can be matched by linearly weighting the areal density profiles. The ``weight'' corresponding to each component of the brightness profile is the inverse of the {\bf intrinsic} GC frequency characteristic of each cluster population. The main argument behind that approach is that the shape of the colour (and metallicity) distribution of each globular cluster subpopulation does not change with galactocentric radius. Large angular scale studies (\citealt{b14}; \citealt{b2}) in fact show that the colour peaks in the GC colour statistics of NGC 1399 keep the same position (or show very little variation) over large galactocentric ranges. A similar result is obtained for NGC 4486 by \citet{b40}, who find that those peaks do not show a detectable variation in colour over 75 kpc in galactocentric radius. It must be stressed that those subpopulations are ``phenomenologically'' defined in terms of their integrated colours but each might eventually have a given spread in age and/or metallicity. The presence of a ``valley'' in the globular colour statistics is usually adopted as a discriminating boundary between both subpopulations. The need to revise such a procedure was already suggested by figure 5 in FFG05.
This last diagram showed that the NGC 1399 GCs bluer than the blue peak display a distinct areal density profile, exhibiting a flat core that disappears when all ``blue'' clusters (i.e. all GCs bluer than the colour valley) are included in the sample. That result prompted further analysis, as discussed below. This paper generalises the FFG05 approach through Monte Carlo based models. In this frame, ``seed'' globulars are generated following a given abundance Z distribution and then associated with a ``diffuse'' stellar mass that shares their age, chemical composition and statistical spatial distribution. The luminosity associated with this mass is derived from a mass-to-luminosity ratio adequate for a given age and metallicity. These models aim at reproducing the features mentioned above and seek a function that links GCs and diffuse stellar populations while keeping a minimum number of free parameters. Both NGC 1399 and NGC 4486 are adequate targets for such modelling due to their prominent globular cluster systems (GCS). Although these systems show some structural similarities in terms of their spatial distribution, they also differ markedly both in the shape of their GC colour statistics and in their specific frequencies \citep{b24}. This work also presents new Washington photometry, obtained and handled in a homogeneous way, that allows for a re-discussion of the GCS properties in the inner region of both galaxies and, in particular, of the behaviour of the areal density of GCs with colour. In turn, recent wide field photometric studies of the GCS associated with NGC 1399 \citep{b2} and NGC 4486 (\citealt{b80}; \citealt{b81}) are well suited for extending the analysis to larger galactocentric radii.
\section{Observations and data handling} \label{ODH} \begin{figure*} \resizebox{0.4\hsize}{!}{\includegraphics{figure1a_n.eps}} \resizebox{0.4\hsize}{!}{\includegraphics{figure1b_n.eps}} \caption{Distribution of GC candidates brighter than T$_1=23.2$. Left panel: NGC 1399 field. North is to the right and East is up. Right panel: NGC 4486 field. North is up and East to the left. In both panels, circles have radii of 120 and 420 arcsec, respectively. The GC sample completeness in these areas is close to 95\%. } \label{Sample_sky} \end{figure*} Photometric observations were carried out with the Mayall and Blanco 4-m telescopes at KPNO and CTIO, respectively, using CCD detectors of 2048 pixels on a side with a pixel scale of 0.43 arcsec. The C filter from the Washington System \citep{b30} and the R$_{KC}$ filter of the Kron-Cousins system were used at both telescopes. As noted by \citet{b27}, this last filter is comparable to the T$_1$ filter in the Washington system although much more efficient in terms of transmission. In what follows we keep the T$_1$ denomination for our red magnitudes. Two R$_{KC}$ images (exposure: 600 sec each) and three C images (exposure: 1500 sec each) were secured for both galaxy fields. Seeing quality for these frames varied between 1.0 and 1.6 arcsec. These images were processed with the CCDRED routines within IRAF \footnote{IRAF is distributed by the National Optical Astronomical Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation}, including bias subtraction and flat fielding. The galaxy background removal was performed with routines included in the VISTA image processing system. PSF photometry on all the frames was then carried out using the ALLFRAME version of the DAOPHOT package (\citealt{b74}; \citealt{b75}). The instrumental photometry was transformed to the standard system using standard stars from \citet{b27}.
Image classification, in terms of resolved and non-resolved objects, was performed as in \citet{b23}. Briefly, that procedure combined the use of the roundness and sharpness parameters defined in DAOPHOT with the mirrored envelope, in the diagram of T$_1$ PSF ALLFRAME magnitude vs. the difference between this magnitude and that obtained through aperture photometry, for every object detected on the images. Non-resolved objects brighter than T$_1=23.2$ and with (C-T$_1$) colours between 0.9 and 2.3 were considered as cluster candidates; their distribution on the sky is depicted in Figure \ref{Sample_sky}. Circles with r=120 and 420 arcsec delineate the area used for the analysis of the surface density distributions in the inner regions of the galaxies. Figure \ref{P_error} shows the errors on the (C-T$_1$) colours as a function of T$_1$ magnitude, as delivered by DAOPHOT. A median error of $\pm$ 0.07 mag in the (C-T$_1$) colours is reached at T$_1=23.2$, which is adopted in what follows as the limiting magnitude of the analysis in order to ensure good quality colours. The photometric data for both galaxies, Tables \ref{Phot_data1399} and \ref{Phot_data4486}, are available in the electronic journal version. Coordinates are referred to the galaxy centres and defined as in Figure \ref{Sample_sky}. \begin{table} \centering \caption{Photometric data NGC 1399.} \label{Phot_data1399} \begin{tabular}{@{}cccccc@{}} \hline ID & X (arcsec) & Y (arcsec) & T$_1$ & (C-T$_1$) & roundness \\ \hline 537. & 196.8 & -427.6 & 24.15 & 0.63 & 0.77\\ 553. & 136.5 & -427.5 & 24.33 & 0.65 & 0.85\\ \end{tabular} \end{table} \begin{table} \centering \caption{Photometric data NGC 4486.} \label{Phot_data4486} \begin{tabular}{@{}cccccc@{}} \hline ID & X (arcsec) & Y (arcsec) & T$_1$ & (C-T$_1$) & roundness \\ \hline 23314. & -13.1 & -126.5 & 21.67 & 1.43 & 0.93\\ 23406.
& -32.1 & -124.9 & 23.39 & 1.26 & 0.94\\ \end{tabular} \end{table} ADDSTAR experiments were carried out to estimate the completeness for non-resolved objects (as GCs are expected to appear unresolved at the distances of NGC 1399 and NGC 4486 from the Sun). Ten images, each with 1000 artificial stars added on both the C and T$_1$ master images, yielded completeness levels of 94 and 96\% at T$_1=23.2$ for NGC 1399 and NGC 4486, respectively. A comparison field of 77.7 arcmin$^{2}$ was taken from \citet{b23}, who performed C and T$_1$ photometry following the same procedure. This field contains 146 non-resolved objects within the colour-magnitude boundaries adopted for the globular cluster candidates. \begin{figure} \resizebox{1.\hsize}{!}{\includegraphics{figure2f.eps}} \caption{ (C-T$_1$) colour errors as a function of T$_1$ magnitude for both galaxy fields. The vertical line at T$_1=23.2$ is the limiting magnitude adopted in the analysis. The median colour error for the sample is 0.04 mag.} \label{P_error} \end{figure} \section{Colour-magnitude diagrams and Colour Distributions} \label{CMDCD} \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure3a_n.eps}} \resizebox{1.0\hsize}{!}{\includegraphics{figure3b_n.eps}} \caption{T$_1$ vs. (C-T$_1$) colour-magnitude diagram for unresolved objects in the field of NGC 1399 (upper panel) and NGC 4486 (lower panel). The domain of the GC candidates discussed in the text is indicated by the rectangular area. The tilted line, for NGC 4486, is defined by the modal values of the colour statistics inside 0.5 mag intervals in T$_1$ for the blue GCs.} \label{CMD_1399_4486} \end{figure} The T$_1$ vs. (C-T$_1$) colour-magnitude diagrams for non-resolved objects are displayed in Figure \ref{CMD_1399_4486}. The limiting magnitude T$_1=23.2$ is indicated as a horizontal line while vertical lines at (C-T$_1$)=0.90 and (C-T$_1$)=2.30 define the domain of the globular cluster candidates.
A distinctive feature in the lower panel of Figure \ref{CMD_1399_4486}, in contrast with the upper panel of the same figure, is a noticeable tilt of the colours of the blue clusters associated with NGC 4486, which is not detectable in the case of NGC 1399. The tilted line in Figure \ref{CMD_1399_4486} (lower panel) was obtained by fitting the modal values on a smoothed image of the colour-magnitude diagram (adopting a round Gaussian kernel of 0.05 mag), yielding: \begin{equation} T_1=41.50 -16.67 (C-T_1) \label{Tilt} \end{equation} This relation implies a (C-T$_1$) colour increase of 0.06 mag per magnitude, comparable to the 0.045 mag per magnitude detectable in (g-z) colours \citep{b77}. The possible reason for the presence of a ``blue'' tilt in the colour-magnitude diagram is discussed in Section \ref{CN4486}. The (C-T$_1$) colour histograms for GCs within a circular galactocentric region defined between 120 and 360 arcsec are depicted in Figure \ref{Histos}. These histograms have been corrected for background contamination by subtracting the comparison field histogram (scaled by area and also shown in these figures) and contain about 1000 and 1800 candidate GCs for NGC 1399 and NGC 4486, respectively. In order to minimise the presence of bright objects that might be identified as compact dwarfs (see \citealt{b53b}), we adopted an upper cut-off at T$_1$=21.0. We stress that \citet*{b52} noted that NGC 1399 cluster candidates brighter than this magnitude exhibit a unimodal colour distribution, a feature later confirmed in \citet{b14}. As a reference, we point out that Omega Cen-like objects would appear at T$_1$ $\approx$ 21.4 and 21.0 for NGC 1399 and NGC 4486, respectively. Figure \ref{Histos} also indicates the position of the so-called ``colour valleys'' at (C-T$_1$)=1.55 and 1.52 for NGC 1399 and NGC 4486, respectively. These values were determined using Gaussian smoothed colour histograms with a colour kernel of 0.05 mag.
The same procedure leads to (C-T$_1$)=1.26 and 1.21 for the blue peaks and (C-T$_1$)= 1.75 and 1.72 for the red peaks in both galaxies. \begin{figure*} \resizebox{0.4\hsize}{!}{\includegraphics{figure4a.eps}} \resizebox{0.4\hsize}{!}{\includegraphics{figure4b.eps}} \caption{ (C-T$_1$) background corrected colour histogram for NGC 1399 (left panel) and NGC 4486 (right panel) globular candidates within a circular annulus defined between 120 and 360 arcsec in galactocentric radius. In both cases the areal scaled subtracted background is shown by the dotted line histograms. The combined counting statistical error bars are also shown. The histograms contain $\approx 1000$ and $\approx 1800$ cluster candidates for NGC 1399 and NGC 4486, respectively. Vertical lines indicate the position of the colour valleys.} \label{Histos} \end{figure*} \section{Description of the model} \label{DMOD} In this section we describe each of the steps involved in the model and the main hypotheses behind it, namely: \begin{enumerate} \item[a)] The decomposition of the colour histograms in terms of the cluster subpopulations, leading to their [Z/H], [Fe/H] and (C-T$_1$) colour distributions. \item[b)] The determination of the projected areal density distribution for each of the cluster subpopulations. \item[c)] Establishing the link between each cluster and its associated diffuse stellar population. \item[d)] Deriving the parameters that determine the shape of the predicted galaxy surface brightness profile. \end{enumerate} \noindent a) The decomposition of the colour histograms. The first step is the decomposition of the observed colour histograms shown in Figure \ref{Histos}, avoiding an {\it a priori} functional dependence (e.g., the usual Gaussian assumption).
It must be emphasised that matching the two-peaked colour histograms observed both in NGC 1399 and NGC 4486 through the adopted colour-metallicity relation (see below) necessarily requires {\bf two distinct} globular cluster populations. ``Seed'' clusters were then randomly generated in the abundance Z domain according to a given statistical dependence. Trial and error shows that an exponential behaviour, {\bf $f(Z) \approx \exp [-(Z-Zi)/Zs]$}, where {\bf $Zs$} is the {\bf abundance scale length} and {\bf $Zi$} is the minimum abundance, provides acceptable fits to the observed histograms (i.e., within the Poissonian uncertainties associated with each statistical bin). More complex functions cannot be rejected but would imply a larger number of free parameters, not justified in terms of those uncertainties. As for the minimum abundance, we adopted $Zi=0.003 Z_{\odot}$, which corresponds to $[Fe/H]=-2.65$ in the metallicity scale presented below. As discussed later, a dependence of $Zi$ on T$_1$ leads to a blue tilt that reproduces the colour-magnitude diagram of the NGC 4486 GCS. The decomposition procedure aims at matching the position of the colour peaks and colour valley while keeping a minimum value of the quality index of the fit, $\chi^2$, defined as in \citet{b10b}. The cluster abundance $Z$ was linked to metallicity on the Zinn and West (1984) scale. The adoption of the $[Fe/H]_{ZW}$ scale implies some caveats (see, for example, \citealt{b82} or \citealt{b76}) about the nature of this index. In this work we use the relation found by \citet*{b46b} for the stellar population models given by \citet*{b82b}: \begin{equation} [Fe/H]_{ZW} = [Z/H]-0.131 \end{equation} An integrated colour was then obtained for each cluster through an empirical (C-T$_1$)-[Fe/H] colour-metallicity calibration. Several approaches have been made in the past to determine the colour-metallicity relation.
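The seed-generation step just described can be sketched in a few lines (a minimal illustration; the function names, sample size and random seed are ours, while $Zs$, $Zi$ and the $[Fe/H]_{ZW}$ offset take the values quoted in the text):

```python
import numpy as np

def sample_seed_clusters(n, zs, zi=0.003, seed=0):
    """Draw n 'seed' cluster abundances Z (in solar units) from the shifted
    exponential f(Z) ~ exp[-(Z - Zi)/Zs] adopted in the text."""
    rng = np.random.default_rng(seed)
    return zi + rng.exponential(scale=zs, size=n)

def z_to_feh_zw(z):
    """[Z/H] = log10(Z/Zsun); [Fe/H]_ZW = [Z/H] - 0.131 (relation in the text)."""
    return np.log10(z) - 0.131

# A 'blue' subpopulation with the scale length Zs = 0.035 Zsun quoted below:
feh_blue = z_to_feh_zw(sample_seed_clusters(10000, zs=0.035))
```

With $Zi=0.003 Z_{\odot}$, the minimum metallicity returned is $[Fe/H]_{ZW} \approx -2.65$, as stated in the text.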
For example, the original linear relation found by \citet{b26} for MW clusters was later improved by \citet{b33}. As this is an important step in the modelling process, we attempted a new calibration, described in Section \ref{CMC}, which yields a quadratic relationship between metallicity and integrated cluster colours. Before comparing the model cluster colours with the observed histograms, we added interstellar reddenings ($E(B-V)=0.015$ and $0.022$ for NGC 1399 and NGC 4486, respectively) from the \citet{b70} maps, adopting $E(C-T_1)= 1.97 E(B-V)$, and Gaussian errors matching their behaviour as a function of cluster brightness displayed in Figure \ref{P_error}. \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure5.eps}} \caption{ Model fit (continuous line) to the kernel histogram representative of globular clusters associated with the brightest ellipticals (black dots) in the Virgo ACS by \citet{b53}. The adopted Gaussian colour kernel is 0.05 mag. The two components correspond to $Zs$ of $0.035 Z_\odot$ and $1.05 Z_\odot$ for the blue (dotted line) and red (dashed line) cluster subpopulations, respectively, with peaks at (g-z)=0.92 and 1.43 (vertical lines).} \label{Model_f1} \end{figure} \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure6a.eps}} \resizebox{1.0\hsize}{!}{\includegraphics{figure6b.eps}} \caption{Upper panel: Colour histogram for 91 MW globular clusters. The bars indicate the statistical count uncertainties. The continuous line is a model fit described in the text. Lower panel: Model components for MW globulars. Blue clusters are fit with an abundance scale $Zs(blue)=0.035 Z_\odot$ and red globulars with $Zs(red)=0.50 Z_\odot$. Colour peaks at $(C-T_1)_o=1.15$ and 1.51 are indicated by vertical bars.} \label{Model_f2} \end{figure} Examples of the decomposition procedure are shown in Figures \ref{Model_f1} and \ref{Model_f2}.
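The reddening and photometric-error step can be sketched as follows. The linear error run with magnitude used here is our own rough stand-in for the behaviour shown in Figure \ref{P_error}; it is tuned only to reach the quoted median of $\pm 0.07$ mag at T$_1=23.2$:

```python
import numpy as np

def to_observed_colour(c_t1_0, e_bv, t1, rng=None):
    """Add interstellar reddening, E(C-T1) = 1.97 E(B-V), and Gaussian
    photometric errors to intrinsic model colours. The error-vs-magnitude
    law below is illustrative (not the measured DAOPHOT error run)."""
    if rng is None:
        rng = np.random.default_rng(0)
    # sigma grows linearly from 0.02 mag at T1=21.0 to 0.07 mag at T1=23.2
    sigma = 0.02 + 0.05 * np.clip((np.asarray(t1) - 21.0) / 2.2, 0.0, 1.0)
    return np.asarray(c_t1_0) + 1.97 * e_bv + rng.normal(0.0, sigma)
```

For NGC 4486, for instance, the deterministic shift is $1.97 \times 0.022 \approx 0.043$ mag.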
The first diagram displays the results obtained from fitting the kernel colour distribution of GCs belonging to the sample of brightest galaxies in the Virgo ACS survey (\citealt{b53}; figure 5). These galaxies are comparable in brightness to both NGC 1399 and NGC 4486. The parameters of the best fit are $Zs(blue)=0.035 Z_\odot$ for the blue clusters (23\% of the total population) and $Zs(red)=1.05 Z_\odot$ for the red clusters. In this case, we transformed the (C-T$_1$) colours to (g-z) by adopting: \begin{equation} (g-z)= (C-T_1)-0.29 \label{CCR} \end{equation} This colour transformation is consistent with the colours of the peaks in the ACS bright galaxies sample compared to our estimate of the (C-T$_1$) peaks in Figure \ref{Histos}, and also with the colour relation derived from the \citet{b44} models. Figure \ref{Model_f2} shows the best fit obtained for 91 MW globulars with (C-T$_1$) colours (from \citealt{b29}) or (B-I) colours (transformed according to $(C-T_1)_o= 1.03 (B-I)_o-0.43$) in \citet*{b61}. In this case, 70\% are assigned to the blue cluster subpopulation, with $Zs(blue)=0.035 \pm 0.01 Z_\odot$, and the remaining 30\% to the red one, with $Zs(red)=0.50 \pm 0.05 Z_\odot$. These parameters imply [Fe/H] peaks at -1.7 and -0.5, respectively. The small sample of MW clusters has large statistical uncertainties, but the fit is consistent with the observed shape of the [Fe/H] distribution (see \citealt{b4} and references therein).\\ \noindent b) Projected spatial distributions. The model assumes that each of the GC subpopulations has its own distinctive spatial distribution. As noted before, however, the adoption of a given colour window to define each cluster subpopulation leads to ambiguous results in the case of NGC 1399. This particular aspect is discussed in more detail in Section \ref{GCPADD} on the basis of the photometry presented in this paper.
In particular, we stress that the variation of the slope observed for the blue GCs (depending on the colour window adopted as their domain) may arise from a partial superposition of the two cluster subpopulations.\\ \noindent c) The GCs-diffuse stellar population link. \citet{b86} introduced the {\bf T} parameter, defined as the total number of GCs per unit galaxy stellar mass. In this work, we generalise that parameter by assuming that the number of globular clusters per unit associated diffuse stellar mass, t, is a function of total abundance [Z/H]. After exploring different possible functions, we adopted $t=\gamma \exp(-\delta[Z/H])$ (i.e. t increases when abundance decreases), and then: \begin{equation} dN/d[Z/H]=t([Z/H]) M([Z/H]) \label{f1} \end{equation} \noindent where dN is the number of GCs associated with a stellar mass M and an abundance [Z/H] that belongs to a given subpopulation. This assumption leads to a ``diffuse'' stellar mass per cluster with a given [Z/H]: \begin{equation} M^*=1/t \label{f2} \end{equation} \noindent and then to an integrated luminosity: \begin{equation} L=M^*/(M/L) \label{f3} \end{equation} \noindent where (M/L) is the mass-to-luminosity ratio characteristic of stars with the same age and metallicity as the ``seed'' globular cluster. In this work we adopted the (M/L) ratio for the B (Johnson) band given by \citet{b83} and an age of 12 Gyr. This ratio can be approximated as \begin{equation} (M/L)_B= 3.71 + ([Z/H] + 2.0)^{2.5} \label{f4} \end{equation} This approximation differs from Worthey's (M/L) ratios by, at most, $\approx 7 \%$. Note that we adopt [Z/H] instead of [Fe/H] as Worthey's models assume solar-scaled metallicities. A comparison with other models gives an idea of the uncertainty in this ratio. For example, the models in \citet{b44}, for the same age and a Salpeter initial mass function, show an overall agreement better than $10 \%$ with Worthey's, except at the lowest abundance, where they deliver a ratio $\approx 24 \%$ larger.
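The cluster-to-diffuse-light link of eqs. \ref{f2}--\ref{f4} can be sketched as follows. The clipping at $[Z/H]=-2$ is our own guard, since the stated power is real-valued only above that abundance; how that regime is handled is not specified in the text:

```python
import numpy as np

def mass_to_light_B(zh):
    """(M/L)_B ~ 3.71 + ([Z/H] + 2.0)^2.5 for a 12 Gyr population (eq. f4).
    Clipped at [Z/H] = -2 (our assumption) to keep the power real-valued."""
    zh = np.clip(np.asarray(zh, float), -2.0, None)
    return 3.71 + (zh + 2.0) ** 2.5

def diffuse_light_per_cluster(zh, gamma, delta):
    """t = gamma exp(-delta [Z/H]); M* = 1/t; L = M*/(M/L)_B (eqs. f2-f3)."""
    t = gamma * np.exp(-delta * np.asarray(zh, float))
    return (1.0 / t) / mass_to_light_B(zh)
```

At solar abundance ($[Z/H]=0$) the approximation gives $(M/L)_B \approx 9.37$.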
The effect of age variations on this ratio is discussed below. In particular, we chose the B and R bands since large scale surface photometry is available for both NGC 1399 and NGC 4486 (see Sections \ref{CN1399} and \ref{CN4486}) and no comparable data have been published in the C and T$_1$ bands for either galaxy. Although the $(M/L)_B$ ratio depends on age, we stress that most works (e.g. \citealt{b37}) have not detected significant age differences between the cluster subpopulations in NGC 4486. In turn, \citet{b19} (see also \citealt{b55}; \citealt{b56} or \citealt{b35}) find arguments to support the presence of a fraction of ``intermediate age'' clusters in NGC 1399 and in other galaxies. However, age differences as large as $\pm$ 2 Gyr will not have a strong impact on the integrated colours.\\ \noindent d) The shape and colour of the galaxy surface brightness profile. Each stellar mass element associated with a given ``seed'' GC (and determined by the adopted $\gamma$ and $\delta$ parameters) was split into a number of ``luminous'' particles (i.e. 100 per cluster). These particles were statistically located on the plane of the sky by adopting the same spatial distribution that characterises each of the cluster subpopulations, in order to construct a two-dimensional blue image (2 arcsec per pixel) of each galaxy. A red image was also obtained by transforming the (C-T$_1$) colour of each luminous particle to (B-R) by means of: \begin{equation} (B-R)_{KC}=0.704(C-T_1)+0.269 \label{f5} \end{equation} \noindent empirically obtained from MW GCs with Johnson \citep{b61} and Washington \citep{b29} photometry. The synthetic B and R$_{KC}$ images were then analysed with the task ELLIPSE within IRAF in order to derive surface brightness profiles and colour gradients along the semi-major axis of the galaxies and, in the case of NGC 4486, the variation of ellipticity along the same axis.
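The colour transformation of eq. \ref{f5} and the binning of luminous particles onto a 2 arcsec per pixel synthetic image can be sketched as follows (the binning helper and the field size are ours; the subsequent IRAF ELLIPSE analysis is not reproduced here):

```python
import numpy as np

def c_t1_to_b_r(c_t1):
    """(B-R)_KC = 0.704 (C-T1) + 0.269, the empirical MW-cluster relation of eq. f5."""
    return 0.704 * np.asarray(c_t1) + 0.269

def bin_particles(x, y, flux, half_size=600.0, pixel=2.0):
    """Bin 'luminous' particles (positions in arcsec, galaxy-centred) onto a
    synthetic image with 2 arcsec per pixel, as in the construction above."""
    nbin = int(round(2 * half_size / pixel))
    img, _, _ = np.histogram2d(x, y, bins=nbin,
                               range=[[-half_size, half_size],
                                      [-half_size, half_size]],
                               weights=flux)
    return img
```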
This treatment generalises the profile expression given in FFG05: \begin{eqnarray} \mu_B= (V-M_V)_o+A(B)+2.5\log\big[ S_B(red) \big] \nonumber \\ -2.5\log \big[ N(red)+\frac{N(blue)}{C_B} \big] \end{eqnarray} \noindent where N(blue) and N(red) are the areal densities of the blue and red clusters at a given galactocentric distance and $C_B= S(blue)/S(red)$. Introducing the definition of the {\bf t} parameter given before leads to: \begin{eqnarray} Sn^{-1}=\frac{1}{\gamma} \int_{[Z/H]_l}^{[Z/H]_u}{ \frac{dn}{d[Z/H]} \frac{1}{(M/L)} \exp{\{\delta [Z/H]\}}\, d[Z/H]} \label{Sn} \end{eqnarray} \noindent where the integral is performed over the abundance domain covered by each cluster family and $dn/d[Z/H]=N^{-1} dN/d[Z/H]$, where N is the projected areal density of each GC subpopulation at a given galactocentric radius and $dn/d[Z/H]$ comes from the histogram decomposition. Note that the Sn values do not change with galactocentric radius and are solely determined by the Z distribution parameters of each cluster and associated stellar subpopulation (which we also call ``blue'' and ``red'' in what follows). Both $\gamma$ and $\delta$ were iteratively changed in order to derive a surface brightness profile that minimises the rms of the residuals when compared with the observed profiles at galactocentric distances larger than 120 arcsec. This last value was adopted since both GCS exhibit flat spatial density cores that contrast with the peaked shape of the galaxy surface brightness. These cores can be understood as the result of gravitational disruption effects that change the original population of GCs in the inner regions of galaxies (see, for example, \citealt{b9}, and references therein) and, presumably, become less important for cluster orbits with larger perigalactic values.
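Once the histogram decomposition provides $dn/d[Z/H]$ on a grid, eq. \ref{Sn} can be evaluated numerically. A minimal trapezoidal sketch (function name and grid handling are ours):

```python
import numpy as np

def inverse_sn(zh, dn_dzh, gamma, delta, m_over_l):
    """Trapezoidal evaluation of eq. (Sn):
    1/Sn = (1/gamma) * integral of (dn/d[Z/H]) (1/(M/L)) exp(delta [Z/H]) d[Z/H]
    over the abundance domain of one cluster subpopulation.
    zh: abundance grid; dn_dzh: normalised distribution; m_over_l: (M/L) on the grid."""
    zh = np.asarray(zh, float)
    f = np.asarray(dn_dzh, float) / np.asarray(m_over_l, float) * np.exp(delta * zh)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zh))) / gamma
```

As a sanity check, a uniform $dn/d[Z/H]$ over a unit abundance interval with $(M/L)=1$ and $\delta=0$ returns $Sn^{-1}=1/\gamma$, as expected from the formula.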
\section{Colour-metallicity Calibration} \label{CMC} We present a new colour-metallicity relation based on 198 clusters that combines revised data for MW GCs with metallicity data obtained for GCs in three other galaxies: NGC 3379, NGC 3923, and NGC 4649 (\citealt{b55}; \citealt{b56}, and Norris et al. 2007, in prep.). [Fe/H] values for GCs in these galaxies were derived from Lick indices using the \citet{b82b} stellar population models. These works were selected as they are homogeneous both in data handling and in the derivation of the Lick indices. In the case of MW clusters, we first looked for a transformation of colours in the Johnson system to (C-T$_1$). A large photometric sample, which includes (B-I) colours, is available in \citet{b61}. In turn, (C-T$_1$) colours were obtained from \citet{b29}. Intrinsic colours for these globulars were then obtained by using colour excesses determined by \citet{b60}, when available, or the \citet{b61} values. As a result we obtain: \begin{equation} (C-T_1)_o = 1.03(\pm 0.02) (B-I) - 0.43 (\pm 0.03) \label{W_J} \end{equation} In turn, the extragalactic GCs were observed in the (g-i) Sloan colour and transformed to (C-T$_1$) in two different ways. First, using the (g-i) to (B-I) relation derived from the model integrated colours given by \citet{b44} and then transforming to (C-T$_1$) through our own relation, leading to: \begin{equation} (C-T_1)_o = 1.43(\pm 0.03) (g-i) + 0.01(\pm 0.02) \label{W_S1} \end{equation} Alternatively, \citet{b65} have calibrated the Sloan indices in terms of Johnson colour indices that can be transformed to (C-T$_1$)$_o$ (see, for example, \citealt{b20}), yielding: \begin{equation} (C-T_1)_o = 1.39(\pm 0.03) (g-i) + 0.01(\pm 0.03) \label{W_S2} \end{equation} As these transformations agree within errors, we adopted their average in order to obtain the extragalactic GC colours on the (C-T$_1$) scale.
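The transformations above (eqs. \ref{W_J}--\ref{W_S2}), together with the quadratic calibration presented below (eq. \ref{W_Fe}), can be collected in a short sketch (function names are ours; coefficients are those quoted in the text, without their uncertainties):

```python
import numpy as np

def b_i_to_c_t1(b_i):
    """(C-T1)_0 = 1.03 (B-I) - 0.43, from MW clusters (eq. W_J)."""
    return 1.03 * np.asarray(b_i) - 0.43

def g_i_to_c_t1(g_i):
    """Mean of the two (g-i) -> (C-T1) transformations (eqs. W_S1, W_S2)."""
    return 0.5 * ((1.43 * np.asarray(g_i) + 0.01) + (1.39 * np.asarray(g_i) + 0.01))

def feh_to_c_t1(feh_zw):
    """Adopted quadratic calibration: (C-T1)_0 = 0.94 + 0.068([Fe/H]_ZW + 3.5)^2."""
    return 0.94 + 0.068 * (np.asarray(feh_zw) + 3.5) ** 2
```

Averaging the two (g-i) relations is equivalent to adopting a slope of 1.41, midway between the quoted values.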
The intrinsic colours for the extragalactic clusters were derived by subtracting the interstellar reddening excess indicated by the \citet{b70} maps and assuming $E(C-T_1)=1.97 E(B-V)$. The adopted colour-metallicity relation is displayed in Figure \ref{W_Fe_fig}, where a quadratic fit gives a good representation of the data: \begin{equation} (C-T_1)_o = 0.94 + 0.068 ([Fe/H]_{ZW} +3.5)^2 \label{W_Fe} \end{equation} The non-linear nature of this relation has been noted by other authors (e.g. \citealt{b33}; \citealt*{b41b}), and a linear fit to the data displayed in this last figure leaves significant colour residuals at both the high and low metallicity regimes. An analysis of the colour residuals as a function of age (for globulars with ages available in \citealt{b13}) or horizontal branch morphology, through the HB index given by \citet{b43}, reveals no trends with these quantities, as displayed in Figure \ref{W_Fe_res}. These results are in agreement with a similar analysis presented by \citet{b73}, who discuss those effects for a number of different colour indices. \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure7.eps}} \caption{ (C-T$_1$) colour versus metallicity ([Fe/H] on the Zinn and West scale) relation derived from Milky Way clusters (filled dots) and globulars in NGC 3379, NGC 3923 and NGC 4649 (open dots). The star represents the nucleus of NGC 1399; triangles are models from \citet{b44} (see text). The continuous line is the quadratic fit adopted as the mean calibration.} \label{W_Fe_fig} \end{figure} \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure8af.eps}} \resizebox{1.0\hsize}{!}{\includegraphics{figure8bf.eps}} \caption{Upper panel: $(C-T_1)_o$ colour residuals from the mean colour-metallicity calibration for MW clusters included in Figure \ref{W_Fe_fig} as a function of normalised ages available in \citet{b13}.
Lower panel: $(C-T_1)_o$ colour residuals for MW clusters included in Figure \ref{W_Fe_fig} versus horizontal branch morphologies (HB index) available in \citet{b43}. No significant trends are detectable.} \label{W_Fe_res} \end{figure} We note that the shape of the empirical calibration is similar, to within $\pm$ 0.015 mag in (C-T$_1$), to the colours of the 12 Gyr model, with a Salpeter initial mass function and a blue horizontal branch, given in \citet{b44}. This agreement is reached after subtracting a zero point difference of 0.065 mag from their (B-I) model colours and then transforming to (C-T$_1$) through the relation given above. Note that these models (shown as triangles in Figure \ref{W_Fe_fig}) are given as a function of total abundance [Z/H]. Figure \ref{W_Fe_fig} also includes, as a reference, the NGC 1399 nucleus according to the photometry by \citet{b52b} and the metallicity ($[Fe/H]=0.4$) obtained by \citet{b54}. \section{The globular clusters projected areal density distributions} \label{GCPADD} As shown in FFG05 (figure 5), the slope of the areal density of the bluest GCs (i.e., bluer than the blue peak at (C-T$_1$)=1.26) in NGC 1399 is significantly shallower than that corresponding to the ``whole'' blue population (i.e., all clusters bluer than the colour valley at (C-T$_1$)=1.55) in the {\bf inner} region of the galaxy. At larger galactocentric radii, these slopes become identical within the uncertainties. The significance of that result is analysed in this section on the basis of the new data set presented in this work. First, we focus on the areal density distributions of clusters in the inner regions of both galaxies. The size of this region was defined aiming at: a) including a large number of cluster candidates; b) keeping the overall completeness level of the sample at $\approx$ 95\%; c) minimising the fraction of contaminating non-resolved field interlopers (19\% and 11\% for NGC 1399 and NGC 4486, respectively).
These requirements are met within a circular annulus with inner and outer radii of 120 and 420 arcsec. Within 120 arcsec the searching routines are affected by the galaxy halo brightness while, further out in galactocentric radius, the background level increases and the effective areal coverage of our images decreases. We also set a magnitude range (T$_1$=21.0 to 23.2) for two reasons. First, in order to avoid the eventual presence of very bright objects whose nature might be connected with Omega Cen-like objects or compact dwarf galaxies (e.g. see \citealt{b53b}). Second, the GC colour distribution becomes ``unimodal'' at these bright magnitudes in NGC 1399 \citep{b52}, making a separation between blue and red GCs difficult. Given the relatively small angular scale of this analysis, we adopt $r^{1/4}$ laws in order to obtain least squares fits to the logarithmic surface densities within concentric circular annuli (one arcmin wide): \begin{equation} \log (den) = a\, r^{1/4}+ b \end{equation} The resulting slopes and their associated uncertainties are listed in Table \ref{fits_values} and depicted in Figure \ref{Pend_1399_4486}. The upper two fits, in each panel, belong to the red and blue globulars defined in terms of the colour valleys at (C-T$_1$)=1.55 and 1.52 for NGC 1399 and NGC 4486, respectively. These fits, which correspond to the regions with the highest GC areal densities, are later overlapped (Figures 11, 12, 17 and 18) with profiles that extend to larger galactocentric radii.
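The $r^{1/4}$ law fit above amounts to a linear least-squares problem in $x=r^{1/4}$, which can be sketched as (function name is ours):

```python
import numpy as np

def fit_r14_law(r_arcsec, density):
    """Least-squares fit of log10(density) = a * r^(1/4) + b, as applied to
    the logarithmic areal densities within concentric annuli."""
    x = np.asarray(r_arcsec, float) ** 0.25
    a, b = np.polyfit(x, np.log10(np.asarray(density, float)), 1)
    return a, b
```

Applied to densities following, e.g., the NGC 1399 red-cluster fit of Table \ref{fits_values} ($a=-0.77$, $b=3.64$), the routine recovers those coefficients.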
\begin{table} \caption{$r^{1/4}$ law fits to logarithmic areal densities for globular clusters (T$_1$=21.0 to 23.2)} \label{fits_values} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{1}{c}{colour range} & \multicolumn{1}{c}{a (slope)} & \multicolumn{1} {c} {b (zero point)} & \multicolumn{1} {l} {rms} \\ \hline \multicolumn{1}{c}{NGC 1399} & \multicolumn{1}{c}{ } & \multicolumn{1}{c}{ } & \multicolumn{1} {c} { } \\ 1.55 - 2.30 & -0.77 $\pm$ 0.03 & 3.64 $\pm$ 0.11 & 0.02 \\ 0.90 - 1.55 & -0.43 $\pm$ 0.10 & 2.28 $\pm$ 0.41 & 0.08 \\ 0.90 - 1.26 & -0.25 $\pm$ 0.11 & 1.20 $\pm$ 0.43 & 0.08 \\ \multicolumn{4} {c} { } \\ \multicolumn{1}{c}{NGC 4486} & \multicolumn{1}{c}{ } & \multicolumn{1}{c}{ } & \multicolumn{1} {c} { } \\ 1.52 - 2.30 & -0.91 $\pm$ 0.08 & 4.26 $\pm$ 0.34 & 0.06 \\ 0.90 - 1.52 & -0.46 $\pm$ 0.04 & 2.87 $\pm$ 0.20 & 0.04 \\ 0.90 - 1.21 & -0.22 $\pm$ 0.03 & 1.64 $\pm$ 0.12 & 0.02 \\ \hline \end{tabular} \end{table} \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure9a.eps}} \resizebox{1.0\hsize}{!}{\includegraphics{figure9b.eps}} \caption{Upper panel: Projected areal density for clusters in NGC 1399. The upper line belongs to clusters redder than (C-T$_1$)=1.55 (``red'' globulars). The intermediate line corresponds to clusters with colours between 0.9 and 1.55 (``blue'' globulars according to the most usually adopted definition). The lowest line belongs to clusters bluer than the blue peak at (C-T$_1$)=1.26 (or ``genuine'' blue clusters according to the text). Lower panel: Projected areal density for clusters in NGC 4486. The upper line belongs to clusters redder than (C-T$_1$)=1.52 (``red'' globulars). The intermediate line corresponds to clusters with colours between 0.9 and 1.52 (``blue'' globulars according to the most usually adopted definition).
The lowest line belongs to clusters bluer than the blue peak at (C-T$_1$)=1.21 (or ``genuine'' blue clusters according to the text).} \label{Pend_1399_4486} \end{figure} In turn, the lower fits belong to GCs bluer than the respective blue peaks (at C-T$_1$=1.26 and 1.21). Both galaxies show very similar behaviours, in the sense that the bluest globulars exhibit significantly shallower density slopes, as found in FFG05 for the case of NGC 1399, though based on the Washington photometry from \citet{b14}. A further analysis, adopting different colour windows, shows no meaningful differences in the slopes of the clusters redder than the colour valley, and we consider that they ``genuinely'' belong to a single population. The intermediate slope value observed for the whole blue GC sample (compared to the bluest GCs) tentatively suggests that an overlap between the blue and red globular subpopulations may occur in the colour range defined between the blue peak and the colour valley. This overlap would increase the density slope of the clusters {\bf so far} called blue, as a result of the presence of the blue tail of the red subpopulation within their formal domain (i.e., objects bluer than the colour valley). That effect should decrease with increasing galactocentric radius, as the presence of the red clusters becomes less prominent due to their steeper density profile. This tentative picture is discussed in the following sections. \section{The case of NGC 1399} \label{CN1399} 1) Colour histogram decomposition. The background corrected GC colour histogram is compared in Figure \ref{Model_1399} with a synthetic one derived through the modelling described in Section \ref{DMOD}. This histogram includes $\approx 1000$ GC candidates with T$_1$=21.0 to 23.2 and (C-T$_1$)=0.90 to 2.30. The decomposition process assigns 620 clusters to the red subpopulation, with an abundance scale factor $Zs(red)= 1.45 \pm 0.1 Z_\odot$.
The remaining 380 globulars are identified as belonging to the blue population with an abundance scale $Zs(blue)=0.045 \pm 0.01 Z_\odot$. Figure \ref{Model_1399} (lower panel), for comparison purposes, also displays the Gaussian components that give the best fit to the observed histograms (blue clusters: $\overline{(C-T_1)}=1.26$, $\sigma_b=0.12$; red clusters: $\overline{(C-T_1)}=1.77$, $\sigma_r=0.20$). These fits decrease the number of red clusters and increase the number of blue ones, suggesting a smaller degree of colour overlapping between both GCs subpopulations in comparison with the results from the model. As discussed below, the eventual inclusion of a blue tilt comparable with that adopted for the NGC 4486 blue GCs does not have a detectable effect on the shape of the colour histogram. Figure \ref{Model_1399} suggests that a single abundance scale parameter for the red GCs population falls somewhat short at the extreme red end of the colour histogram. About 5\% of that population appears definitely redder than the model prediction. A tentative explanation might suggest some degree of field contamination in that colour range or a possible effect connected with a variation of the $[\alpha/Fe]$ ratio with age \citep{b38}. Figure \ref{Model_1399} also shows that the model colour distribution of the red GCs exhibits a ``blue'' tail (i.e., clusters bluer than the colour valley at (C-T$_1$)=1.55), representing about 31\% of the total number of red clusters. These objects ``contaminate'' the formal domain of the genuine blue globulars and will have an impact on the density slopes derived for this last population if only the colour valley is adopted as a discriminating criterion between both GCs subpopulations.
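The degree of mixing implied by the fitted components can be illustrated directly. A minimal sketch, assuming the two Gaussian components quoted above; note that the exponential-Z model of the text produces a more extended blue tail for the red population (about 31\% of the red clusters), so the Gaussian estimate below is a lower bound:

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """Cumulative distribution of a Gaussian with mean mu and dispersion sigma."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Fitted Gaussian components for the NGC 1399 colour histogram (this section)
n_blue, mu_blue, sig_blue = 380, 1.26, 0.12
n_red,  mu_red,  sig_red  = 620, 1.77, 0.20
valley = 1.55  # colour valley adopted as the blue/red boundary

# Red clusters scattered blueward of the valley ("blue tail" of the red population)
f_tail = norm_cdf(valley, mu_red, sig_red)
contaminants = n_red * f_tail

# Fraction of the formally "blue" sample (objects bluer than the valley)
# that actually belongs to the red population
blue_sample = n_blue * norm_cdf(valley, mu_blue, sig_blue) + contaminants
print(f"red tail fraction: {f_tail:.2f}, contamination: {contaminants/blue_sample:.2f}")
```

Under these Gaussian components, roughly 14\% of the red clusters fall blueward of the valley and make up close to a fifth of the formally ``blue'' sample; the exponential-Z decomposition of the text raises the former figure to about 31\%.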
\begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure10a.eps}} \resizebox{1.0\hsize}{!}{\includegraphics{figure10b.eps}} \caption{The globular population fit to the background corrected colour histogram for clusters in NGC 1399 is depicted in the upper panel. The vertical line at (C-T$_1$)=1.55 is the colour ``valley'' usually adopted as the boundary between both subpopulations. The lower panel shows each of the globular populations derived from exponential distributions in the abundance domain Z. Vertical lines indicate the colour peaks of each cluster population. Dotted lines in the lower panel show the components of the best Gaussian fit.} \label{Model_1399} \end{figure} Conversely, the model blue GCs barely reach the colour valley, suggesting that this population will not affect the estimate of the areal density slope of the red clusters if only clusters redder than the colour valley are included in the sampling. \\ \noindent 2) The areal density distribution of the blue and red globulars. Due to the features discussed in the previous item, we only take clusters bluer than (C-T$_1$)=1.26 (the blue peak) as {\bf tracers} of the surface density of the {\bf genuine} blue GCs. Figure \ref{Model_1399} shows that there would still be a small degree of contamination by the bluest clusters of the red population (about 5\% of the total sample within that colour range). The density run with galactocentric radius of these clusters is depicted in Figure \ref{Dens_1399_B}. In this case, we only include objects with T$_1$=21.0 to 23.2 taken from the photometric work by \citet{b2}, which reaches a galactocentric radius of 40 arcmin.
The short straight line represents the density fit discussed in Section \ref{GCPADD} while the continuous line comes from projecting on the sky a volumetric density profile: \begin{equation} \rho(a)=C\,(1+a/r_s)^{-3} \label{Vol_1399} \end{equation} \noindent where $a$ is measured along the galaxy major axis, $r_s=375$ arcsec is the scale length, and a spatial cut-off is imposed at a galactocentric radius of 450 kpc. \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure11.eps}} \caption{Large scale areal density distribution for globular clusters bluer than the blue peak at (C-T$_1$)=1.26. This colour domain is practically uncontaminated by the blue tail of the red globular population and is considered to trace the real distribution of the ``genuine'' blue globulars. The filled black dots come from data in \citet{b2}. The continuous line is the projected areal density described in the text. The short straight line is the fit for the bluest globulars in our photometry as depicted in Figure \ref{Pend_1399_4486}.} \label{Dens_1399_B} \end{figure} The large scale density distribution for the red GCs was then derived using only clusters redder than (C-T$_1$)=1.55 as tracers of that population and is shown in Figure \ref{Dens_1399_R}. The short straight line is the fit discussed in Section \ref{GCPADD} while the continuous line is a Hubble profile with a core radius r$_c$=60 arcsec. \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure12.eps}} \caption{The same as Figure 8 but for clusters redder than (C-T$_1$)=1.55. The continuous line is a Hubble profile with a core radius r$_c$=60 arcsec.
The short straight line is the fit for the red globulars in our photometry as depicted in Figure \ref{Pend_1399_4486}.} \label{Dens_1399_R} \end{figure} In order to estimate the density distribution of the {\bf total} number of clusters for each population, the density of the tracer GCs should be increased by factors that take into account the total colour range covered by these populations (adding the blue clusters redder than the blue peak and the red clusters bluer than the colour valley, respectively), as indicated by the colour histogram modelling, and also the sampled fraction of globulars within their respective integrated luminosity functions. \citet{b28} have derived the luminosity functions of both the blue and red GCs populations in NGC 1399 on the basis of HST WFPC2 observations. Assuming fully Gaussian luminosity functions, and transforming (B-I) colours to (C-T$_1$), their results lead to turn-overs at T$_1$=23.40 and T$_1$=23.45 with dispersions of 1.24 and 1.16 mags for the blue and red populations, respectively. The combined colour and luminosity completeness factors are then 4.28 for the blue globulars and 3.29 for the red GCs.\\ \noindent 3) The surface brightness profile. Surface brightness photometry for NGC 1399 in the B band up to a galactocentric radius of 775 arcsec was presented in FFG05 and compared with other profiles available in the literature (e.g. \citealt{b71}; \citealt*{b8}). The predicted blue profile was obtained through the procedure described in Section \ref{DMOD}, adopting a distance modulus (V-Mv)$_o$=31.4, corresponding to 19 Mpc (see FFG05 and references therein), and an interstellar colour excess E(B-R)=0.011 (transformed from \citealt{b70}). Azimuthal counts do not show a detectable flattening of the NGC 1399 GCS and therefore we adopted the average flattening of the galaxy, q=0.86, as representative for both cluster populations.
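The luminosity part of the completeness factors quoted above is just the fraction of a Gaussian luminosity function sampled between the magnitude limits. A minimal sketch with the NGC 1399 parameters (the quoted combined factors of 4.28 and 3.29 additionally fold in the colour-sampled fraction of each subpopulation, which is not reproduced here; `lf_fraction` is our illustrative helper):

```python
from math import erf, sqrt

def lf_fraction(m_bright, m_faint, turnover, sigma):
    """Fraction of a Gaussian luminosity function between two magnitude limits."""
    cdf = lambda m: 0.5 * (1.0 + erf((m - turnover) / (sigma * sqrt(2.0))))
    return cdf(m_faint) - cdf(m_bright)

# NGC 1399 turn-overs and dispersions (T1 band, from the text);
# the photometric sample covers T1 = 21.0 to 23.2
f_blue = lf_fraction(21.0, 23.2, 23.40, 1.24)
f_red  = lf_fraction(21.0, 23.2, 23.45, 1.16)

# Luminosity-only correction factors (total / sampled); dividing the quoted
# combined factors 4.28 and 3.29 by these yields the colour-sampled fractions
print(f"blue: x{1.0/f_blue:.2f}, red: x{1.0/f_red:.2f}")
```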
The model surface brightness profile delivered by ELLIPSE is compared with the FFG05 observations in Figure \ref{Prof_1399} and corresponds to $\gamma=0.82 (\pm 0.05) \times 10^{-8}$ and $\delta$=1.1 $\pm$ 0.1. The overall rms of the fit is $\pm$ 0.035 mags. \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure13.eps}} \caption{Observed B surface brightness profile for NGC 1399 (FFG05; filled dots) confronted with the model fit (open circles). This last profile was obtained from a bi-dimensional model image processed with the ELLIPSE task within IRAF. Squares and triangles represent the luminosity associated with the ``blue'' and ``red'' stellar populations. Note that the model fails inside 100 arcsec in galactocentric radius, where the globular distribution shows a flat core.} \label{Prof_1399} \end{figure} \section{The case of NGC 4486} \label{CN4486} 1) Colour histogram decomposition. \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure14a.eps}} \resizebox{1.0\hsize}{!}{\includegraphics{figure14b.eps}} \caption{The globular population fit to the background corrected colour histogram for clusters in NGC 4486 (see Figure \ref{Histos}) is depicted in the upper panel. The vertical line at (C-T$_1$)=1.52 is the colour ``valley'' usually adopted as the boundary between both populations. The lower panel shows each of the globular populations derived from exponential distributions in the abundance domain Z. Vertical lines indicate the colour peaks of each cluster population. Dotted lines in the lower panel show the components of the best Gaussian fit.} \label{Model_4486} \end{figure} The background corrected GCs colour histogram is compared with the model fit in Figure \ref{Model_4486}. In this case, 800 clusters were assigned to the red population with an abundance scale $Zs(red)=0.90 \pm 0.1 Z_\odot$ and 1000 clusters to the blue population with $Zs(blue)=0.012 \pm 0.005~Z_\odot$, i.e., practically unenriched (see below).
Figure \ref{Model_4486} also shows the Gaussian components (blue clusters: $\overline{(C-T_1)}=1.21$, $\sigma=0.12$; red clusters: $\overline{(C-T_1)}=1.72$, $\sigma=0.20$). Here we also adopt an initial abundance $Zi=0.003 Z_\odot$ for the red clusters. However, as shown in Section \ref{CMDCD}, the blue GCs display an evident tilt that we associate with a change in abundance that correlates with the cluster brightness and, hence, mass (see also figure 3 in \citealt{b5}). In this case, we find that a change in initial abundance as a function of brightness: \begin{figure*} \resizebox{0.4\hsize}{!}{\includegraphics{figure15a_n.eps}} \resizebox{0.4\hsize}{!}{\includegraphics{figure15b_n.eps}} \caption{Model colour diagrams with $Zs(blue)=0.012 Z_\odot$ (left panel) and $Zs(blue)=0.05 Z_\odot$ (right panel). Both models include a blue tilt similar to that discussed in the text. However, the larger Zs(blue) adopted in the right panel makes the detection of the tilt more difficult.} \label{Tilt_4486_1399} \end{figure*} \begin{equation} \Delta Z = 0.01 (23.2-T_1)~~~~(Z_\odot ~~units) \label{Tilt_4486} \end{equation} \noindent reproduces the appearance of the blue GCs colour-magnitude diagram. The mean Z for blue GCs with T$_1$ from 21.0 to 21.25 mags. is $0.0371 Z_{\odot}$ while that for clusters with T$_1$ from 22.95 to 23.2 is $0.017 Z_{\odot}$. These values are consistent with a mass/metallicity scaling relation (where M is the cluster mass): \begin{equation} Z \approx M^{0.44} \label{Z_M_4486} \end{equation} \noindent somewhat smaller than $Z \approx M^{0.55}$ but comparable to $Z \approx M^{0.48}$, suggested respectively by \citet{b34} and \citet{b77}. Figure 3 also suggests that the blue GCs tilt, as noted by the last authors, is in fact detectable over the whole magnitude range brighter than T$_1$=23.2. As mentioned in Section \ref{CMDCD}, a tilt is not detected in the case of the NGC 1399 GCs.
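The exponent in Eq. (\ref{Z_M_4486}) can be recovered from the bin-averaged abundances quoted above, under the assumption that cluster mass tracks luminosity (constant mass-to-light ratio along the blue sequence) and taking the bin midpoints; a quick check:

```python
from math import log10

# Mean blue-GC abundances (Z_sun units) and magnitude-bin midpoints from the text
Z_bright, T1_bright = 0.0371, (21.0 + 21.25) / 2.0
Z_faint,  T1_faint  = 0.0170, (22.95 + 23.2) / 2.0

# Mass ratio taken equal to the luminosity ratio implied by the magnitudes
log_mass_ratio = 0.4 * (T1_faint - T1_bright)
exponent = log10(Z_bright / Z_faint) / log_mass_ratio
print(f"Z ~ M^{exponent:.2f}")  # -> Z ~ M^0.43, consistent with the ~0.44 quoted
```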
In NGC 1399, blue GCs exhibit a considerably larger Zs(blue) than in NGC 4486 and we suggest that this larger abundance spread makes the detection of an eventual tilt more difficult. As an example, Figure \ref{Tilt_4486_1399} displays the colour magnitude diagram for the adopted model in the case of NGC 4486, showing the blue tilt (left panel). An increase of Zs(blue) from 0.012 to 0.05 (comparable to that of the blue GCs in NGC 1399) substantially changes the appearance of that diagram and makes the blue tilt (included in the model) less evident (right panel).\\ \noindent 2) The areal density distribution for blue and red clusters. \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure16.eps}} \caption{Azimuthal counts (within 22.5 degree bins) for the globular clusters brighter than T$_1$=23.2 and (C-T$_1$) from 0.9 to 2.3 (large filled dots). The statistical uncertainty of the counts is shown with bars. Open dots come from a composite model that includes red clusters with a flattening q=0.8 and blue clusters with q=0.5, according to the text.} \label{Flat_4486} \end{figure} The fact that the NGC 4486 GCS exhibits a noticeable flattening has been pointed out by \citet{b45}. This feature is clearly seen in Figure \ref{Flat_4486}, which shows azimuthal counts within a galactocentric circular annulus with inner and outer radii of 120 and 360 arcsec, performed using our photometry (T$_1$=21 to 23.2 and (C-T$_1$)=0.90 to 2.30). This figure also displays the results from a model that includes red GCs with a flattened spatial distribution with q=0.80, and blue clusters with q=0.50 (where q=b/a is the ratio of the minor to major semi-axes). The details of this model, which provides a consistent fit to the observations, are discussed below. The adopted flattenings come from the assumption that, if GCs trace a given stellar population, they should share the same flattening.
The inner regions of NGC 4486 (where the red stellar component dominates the surface brightness) exhibit q=0.85 (at a=120 arcsec) and reach about q=0.50 at the outermost detectable boundaries (see \citealt{b48}), where the blue stellar population should become more evident. The density run on a large angular scale was determined by using Suprime camera observations by \citet{b81}. These authors determine areal densities in circular annuli on a rectangular strip that extends to the east of the galaxy centre. We stress that, as those authors use the colour valley at (V-I)=1.10 in their photometry as a discriminant between both cluster populations, their {\bf so called} blue clusters will eventually include the blue tail of the red population inferred from the colour histogram decomposition. We note that Tamura et al. also find that a projected NFW profile gives a good representation of the areal density distribution of the so defined blue GCs, as FFG05 found in NGC 1399 using the same definition for the blue clusters. In order to test the compatibility of their observations with our approach, we generated a model that assumes that both cluster populations follow elliptical distributions with the flattenings mentioned before and a major axis coincident with that of the galaxy halo (P.A. $\approx$ 155 degrees). The density distribution of the genuine blue clusters in the inner regions of the galaxy was fit using a surface density profile similar to that adopted for NGC 1399 but with a scale length $r_s=350$ arcsec. This fit gives an adequate representation of the density depicted in Figure \ref{Pend_1399_4486} (lower panel). In turn, model red clusters were generated adopting a lowered Hubble density profile (or analytical King profile) with $r_c=60$ arcsec (from \citealt{b39}). In this case, the tidal radius was changed iteratively until the best fit to the Tamura et al. densities was obtained, yielding r$_t$=3600 arcsec.
Model GC colours were generated as described above while the T$_1$ magnitudes were derived by adopting fully Gaussian integrated luminosity functions with turn-overs at T$_1$=22.9 and 23.2 and dispersions of 1.38 and 1.55 mags. for the blue and red globulars, respectively. These parameters were taken from \citet{b80}, who give values in the V band, and transformed to the T$_1$ band (through $(V-R) \approx (V-T_1)=0.21(C-T_1)+0.19$). The completeness factors, which allow an estimate of the total number of GCs in each subpopulation from the fractional sampling in colour and magnitude, were 3.20 for the blue GCs and 3.31 for the red GCs. The combined cluster population was then sampled in circular annuli, taking those GCs bluer than the colour valley at (C-T$_1$)=1.52 (or (V-I) $\approx$ 1.10, following their definition of the blue population), in order to compare with the Tamura et al. density profile. The result for {\bf this} blue population (shifted in log (dens.)) is shown in Figure \ref{Dens_4486_B}, which also includes a straight line that represents the NFW profile ($r_s=226$ arcsec; 20 kpc at their adopted galaxy distance) fit by \citet{b80} to this cluster population. The lower line in this diagram corresponds to the genuine blue GCs and comes from sampling, also in circular annuli, the projection on the sky of an oblate ellipsoid (with q=0.5). This ellipsoid follows the blue GCs spatial density dependence mentioned above with a cut-off at 450 kpc from the galaxy centre. \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure17.eps}} \caption{Projected areal distribution for NGC 4486 GCs bluer than the colour valley. Filled dots come from counts within circular annuli given by \citet{b81}. Open dots are from the model described in the text. The straight line is the NFW fit given by those authors. Note that, following this colour definition, the so called ``blue'' globulars do not exhibit the flat inner core.
The adopted distribution of the ``genuine'' blue globulars is also shown (lower curve). The dashed line has the slope shown in Figure \ref{Pend_1399_4486} for GCs bluer than (C-T$_1$)=1.21.} \label{Dens_4486_B} \end{figure} Figure \ref{Dens_4486_B} in fact shows that the model discussed in this section is able to match the Tamura et al. density profile, which does not show a flat core. The difference between this last profile and the adopted one for the genuine blue GCs can thus be explained as the result of including the blue tail of the red cluster population, characterised by a steeper spatial distribution, within the sample of GCs bluer than (V-I)=1.10. Figure \ref{Dens_4486_R}, in turn, shows the comparison of the model with the red GCs as defined by Tamura et al. which, being free of the colour overlap effect, are directly comparable with our model red clusters.\\ \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure18.eps}} \caption{Projected areal distribution for NGC 4486 GCs redder than the colour valley (``red'' globulars). Filled dots come from counts within circular annuli given by \citet{b81}. Open dots come from the model described in the text. The model counts have been shifted vertically in order to take into account the limiting magnitude in that work. The dashed line has the slope shown in Figure 9 for GCs redder than C-T$_1$=1.72.} \label{Dens_4486_R} \end{figure} \noindent 3) The surface brightness profile. Two blue surface brightness profiles with a relatively large angular coverage are available for NGC 4486 in the literature: \citet{b10} and \citet{b8}. These profiles, along the major axis of the galaxy, show good agreement up to a $\approx$ 600 arcsec, where the Caon et al. profile becomes systematically fainter. A comparison with the diffuse light map in the Virgo cluster by \citet{b48}, in turn, indicates a V surface brightness of $\approx$ 26.5 mags.
per arcsec$^{2}$ at a $\approx$ 1800 arcsec, which implies B=27.1 to 27.5 mags. per arcsec$^{2}$, consistent with the Carter \& Dixon profile which we adopt in what follows. Figure \ref{Prof_4486} shows the best fit profile obtained through ELLIPSE from the blue synthetic image. The profile corresponds to a distance modulus (V-Mv)$_o$=31.0 and an interstellar reddening E(B-V)=0.022 from \citet{b70}. \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure19.eps}} \caption{Observed B surface brightness profile for NGC 4486 (\citealt{b10}; filled dots) confronted with the model fit (open circles). This last profile was obtained from a bi-dimensional model image processed with the ELLIPSE task within IRAF. Squares and triangles represent the luminosity associated with the ``blue'' and ``red'' stellar populations. Note that the model fails inside 100 arcsec in galactocentric radius, where the globular distribution shows a flat core.} \label{Prof_4486} \end{figure} The profile fit requires $\gamma=1.18 (\pm 0.05) \times 10^{-8}$ and $\delta=1.2 \pm 0.1$ and yields an rms of $\pm$ 0.07 mags. Again, and as already noted for NGC 1399, the flat core of the GCS does not allow a proper representation of the inner region of the galaxy. The output from ELLIPSE shows that, as a result of composing two diffuse populations with different flattenings, the galaxy model flattening varies with galactocentric radius. This trend is compared with the ellipticity values ($\epsilon$=1-q) obtained by \citet{b10} in Figure \ref{Ellip_4486}. The overall agreement is acceptable, although it could be improved if the possibility of a variable q (for one or both cluster subpopulations) is allowed. However, the statistical uncertainties connected with the azimuthal counts prevent a meaningful estimate of this eventual dependence.
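The radius-dependent flattening produced by composing two populations with different fixed flattenings can be sketched numerically. The functional forms and normalisations below are illustrative assumptions for demonstration, not the fitted ones; only the adopted flattenings q=0.80 and q=0.50 come from the text:

```python
import numpy as np

# Assumed surface-density profiles for the two GC populations (arcsec units);
# normalisations are illustrative, chosen so the red population dominates the centre
def sigma_red(m):   # Hubble-type profile, core radius 60 arcsec
    return 10.0 / (1.0 + (m / 60.0) ** 2)

def sigma_blue(m):  # shallower profile, scale length 350 arcsec
    return 0.5 / (1.0 + m / 350.0) ** 2

Q_RED, Q_BLUE = 0.80, 0.50  # adopted flattenings (b/a)

def sigma_tot(x, y):
    """Composite surface density; each component has its own elliptical radius."""
    m_r = np.hypot(x, y / Q_RED)
    m_b = np.hypot(x, y / Q_BLUE)
    return sigma_red(m_r) + sigma_blue(m_b)

def effective_ellipticity(a):
    """1 - b/a of the composite isodensity contour passing through (a, 0)."""
    target = sigma_tot(a, 0.0)
    lo, hi = 0.0, a
    for _ in range(60):  # bisect for the minor-axis crossing of the contour
        mid = 0.5 * (lo + hi)
        if sigma_tot(0.0, mid) > target:
            lo = mid
        else:
            hi = mid
    return 1.0 - 0.5 * (lo + hi) / a

eps = [effective_ellipticity(a) for a in (100.0, 300.0, 1000.0)]
# the ellipticity grows outward as the flatter blue population takes over
```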
\begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure20.eps}} \caption{Ellipticity ($\epsilon=1-q$) variation of the NGC 4486 stellar halo as a function of semi-major axis $a$ from \citet{b10} (open circles), compared with the expected variation from the model fit (filled circles).} \label{Ellip_4486} \end{figure} \section{Dependence of Results on Uncertainties of the Fitting Parameters} \label{DISC} The overall results from this modelling in terms of specific frequencies, characteristic t* parameter (integrated over metallicity for each of the cluster populations), diffuse stellar mass and $(M/L)_B$ ratios are listed in Table \ref{Model_values}. \begin{table} \centering \caption{Model results for NGC 1399 and NGC 4486.} \label{Model_values} \begin{tabular}{@{}l@{}rrc@{}} & NGC 1399 & NGC 4486 & \\ \hline Adopted (V-Mv)$_o$ & 31.4 & 31.0 & \\ Zs(blue) & 0.045 & 0.012 & a\\ Zs(red) & 1.45 & 0.90 & \\ $\gamma$ & 0.82$\times 10^{-8}$ & 1.18$\times 10^{-8}$ & \\ $\delta$ & 1.10 & 1.20 & \\ number of blue globs. & 3900 & 7000 & b\\ number of red globs. & 4500 & 4800 & b\\ $Sn^{*}$(blue globs) & 12.1 & 27.9 & c\\ $Sn^{*}$(red globs) & 5.3 & 8.4 & c\\ t* (blue globs) & 3.44$\times 10^{-8}$ & 8.41$\times 10^{-8}$ & d\\ t* (red globs) & 0.85$\times 10^{-8}$ & 1.43$\times 10^{-8}$ & d\\ t* (total) & 1.3$\times 10^{-8}$ & 2.82$\times 10^{-8}$ & d \\ $(M/L)_B$ (blue pop) & 4.2 & 3.9 & \\ $(M/L)_B$ (red pop) & 9.6 & 8.7 & \\ Total stellar mass ($M_{\odot}$) & 7.2$\times 10^{11}$ & 4.8$\times 10^{11}$ & e \\ Frac. mass (blue pop.) & 0.18 & 0.20 & \\ Frac. mass (red pop.)
& 0.82 & 0.80 & \\ \hline \multicolumn{4}{l}{\it \footnotesize a) Plus a blue ``tilt'': $\Delta Z=0.01(23.2-T_1)$}\\ \multicolumn{4}{l}{\it \footnotesize b) Inside a=1500 arcsec, assuming Gaussian LFs}\\ \multicolumn{4}{l}{\it \footnotesize c) Intrinsic values defined in terms of their associated stellar}\\ \multicolumn{4}{l}{\it \footnotesize luminosities in the V band.}\\ \multicolumn{4}{l}{\it \footnotesize d) Integrated values defined in terms of their associated stellar masses.}\\ \multicolumn{4}{l}{\it \footnotesize e) Inside a projected galactocentric radius of 100 kpc}\\ \end{tabular} \end{table} The total stellar masses given in this table include a correction that takes into account the region within 120 arcsec in galactocentric radius, where the model does not provide an adequate fit. In what follows we describe the uncertainties of these results in terms of the fitting parameters, $\gamma$ and $\delta$, as well as those connected with the colour-metallicity relation, (M/L) ratios, age, and adopted abundance scale.\\ \noindent -$\gamma$ parameter: Figure \ref{gamma_var} depicts the dependence of both $\gamma$ and the total projected stellar mass (within a constant galactocentric radius of 100 kpc) on the distance modulus. The adopted distance moduli for both galaxies are also shown along with a (formal) associated uncertainty of $\pm 0.25$ mags. Even such small uncertainties do not rule out that both galaxies might be at similar distances from the Sun, in which case the $\gamma$ parameter and total stellar masses of NGC 1399 and NGC 4486 would also be very similar.\\ \noindent -$\delta$ parameter: $\delta$ is independent of the adopted distance and a variation of the order of the fit uncertainty ($\pm 0.1$) mainly impacts the total mass of the diffuse stellar population associated with the blue GCs, which changes by $\pm 15 \%$.
Larger $\delta$ values imply a decrease in the mass of these stars and also redder integrated colours of the composite stellar population.\\ \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure21.eps}} \caption{Variation of the $\gamma$ parameter (lower curves) and total projected stellar mass within 100 kpc (upper curves) obtained from model fits as a function of the adopted distance modulus (solid line: NGC 1399; dashed: NGC 4486). Horizontal lines indicate a change of $\pm 0.25$ mags. around the adopted distance moduli.} \label{gamma_var} \end{figure} \noindent -M/L ratio and age: The (M/L) ratio depends on the age and metallicity of the seed GCs. We tentatively adopt an age of 12 Gyr, comparable to that of the Milky Way system (see, for example, \citealt{b13}). Synthetic population models by \citet{b83} show that a variation of $\pm 2$ Gyr around the adopted model age increases or decreases those ratios by 15\% without changing the shape of the functional dependence with metallicity. Accordingly, masses also change in the same proportion. Age variations of that order, however, do not have a noticeable impact on either the shape of the brightness profile or its integrated colour. The characteristic $(M/L)_B$ ratios for both diffuse populations (as well as for the composite stellar population) were obtained by integrating over the whole range of abundances determined by their respective abundance scale lengths and adopting an upper cut-off of $4 Z_\odot$. These results are not critically dependent upon this formal upper limit.
In particular, we note that the $(M/L)_B$ values of the red stellar populations are comparable to the value obtained by \citet{b66} ($(M/L)_B=10$) in the case of the central regions of NGC 1399, where the red population dominates the integrated luminosity.\\ \noindent -Abundance scale: The relation between [Fe/H]zw and [Z/H], adopted as a constant over the whole abundance range, might not be totally appropriate since the \citet{b46b} work does not include GCs in the very low abundance regime. We also attempted models adopting the Mendel et al. [Z/H]-[Fe/H]zw offset value (0.131) for the red GCs and a tentatively larger value (0.3) for the blue clusters. This modification leads to larger $(M/L)_B$ ratios for the blue population and to an increase of about 20\% in the total mass (although no significant improvement of the fits of the colour histograms is obtained). \section{Implications of the profile fit} \label{IPF} The $\gamma$ and $\delta$ parameters that provide the best fit to the shape of the B band surface brightness profile of each galaxy will also lead to a given: \begin{enumerate} \item [a)] Galactocentric colour gradient of the galaxy halo. \item [b)] Colour offset between GCs and galaxy halo. \item [c)] Behaviour of the cumulative GCs specific frequency with galactocentric radius. \item [d)] Metallicity distribution of the diffuse stellar population. \end{enumerate} A comparison of these predicted features with the observed ones, as follows, may then provide independent clues about the success of the model.\\ \noindent a) Galactocentric colour gradient of the galaxy halo. The main implication of the model is that, at galactocentric distances larger than 120 arcsec, the main driver of the galaxy colour gradient is the (luminosity weighted) composition of the associated blue and red diffuse stellar populations.
FFG05 presented the expected (B-R) colour gradient for the NGC 1399 halo on the basis that the colours of the diffuse stellar populations could be identified with the colours of the peaks of the associated GCs. That assumption is no longer necessary in this work, as the colour of each mass element connected with a given globular, as well as its mass to luminosity ratio, is determined by the metallicity of the ``seed'' cluster.\\ \noindent b) Globulars-galaxy halo colour offset. The GCs {\bf mean} integrated colours (including all clusters) in massive elliptical galaxies usually exhibit a galactocentric colour gradient comparable to that of the galaxy halo but shifted blueward (\citealt{b78}; \citealt{b22}). In the context of the model discussed in this work, the cluster gradients arise as a consequence of the (number weighted) averaging of the two GCs subpopulations, characterised by different spatial scale lengths. The same reasoning applies to the associated diffuse stellar populations but, in this case, weighted through the (M/L) ratios determined by metallicity, then leading to the observed colour offset. Colour gradients derived from the profile fits are shown in Figure \ref{colour}. In these diagrams, the predicted halo colours are compared with the mean globular integrated model colours and also with those obtained from the photometry presented in Section 2. These diagrams show that, in fact, the galaxy halos exhibit colour gradients comparable to, but redder than, those of the GCs. \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure22af.eps}} \resizebox{1.0\hsize}{!}{\includegraphics{figure22bf.eps}} \caption{(B-R) colour gradient as a function of galactocentric distance for the halo (open squares) compared with the mean globular cluster colours from the model fit (open circles). The globular mean observed colours, derived from the photometry presented in this paper, are also shown (filled circles).
The short straight line indicates (B-R) colours derived from \citet{b47}. Upper panel: NGC 1399; lower panel: NGC 4486. The dashed lines indicate, for each galaxy, the peak colours of the blue and red GCs.} \label{colour} \end{figure} Figure \ref{colour} also shows that the (B-R) colour gradients determined by \citet{b47} in the inner regions of the galaxies are in very good agreement with the predicted colours of the halo.\\ \noindent c) Cumulative globular cluster specific frequency. Figure \ref{Sn_cum} shows the galactocentric variation of the cumulative specific frequency derived from the best fit models. The GCs populations inside a galactocentric radius of 120 arcsec were taken from \citet{b17} and \citet{b39}. Even though each cluster subpopulation has its own intrinsic frequency, a variation of the {\bf composite} Sn is expected as the number ratio of blue to red GCs changes with galactocentric radius. The parametric Sn values \citep{b45}, defined at a galactocentric radius of 40 kpc, derived from this figure are $Sn \approx 3.5$ and $Sn \approx 8.5$ for NGC 1399 and NGC 4486, respectively. These values are considerably lower than previous estimates given in the literature, as already noted in \citet{b24}.\\ \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure23.eps}} \caption{Cumulative globular cluster specific frequencies for NGC 1399 (lower curve) and NGC 4486 (upper curve) derived from the model fits. The vertical lines indicate a galactocentric radius of 40 kpc (dotted line: NGC 1399; dashed line: NGC 4486) and correspond to $Sn \approx 3.5$ and $Sn \approx 8.5$, respectively.} \label{Sn_cum} \end{figure} \noindent d) The metallicity distribution of the diffuse stellar population. The shape of the [Fe/H] distribution expected for the diffuse stellar population was schematically derived in FFG05 on the basis of estimating a characteristic $(M/L)_B$ ratio and intrinsic specific frequency for each cluster population.
In contrast, in this work each stellar mass element has a given metallicity and, hence, (M/L) ratio. The statistical distribution of these masses as a function of [Fe/H] is given in Figure \ref{Hist_Stellar} for NGC 1399 and NGC 4486. For both galaxies we show the inferred metallicity distribution for galactocentric ranges of 10 to 15 and 15 to 25 kpc, convolved with a Gaussian kernel (dispersion: 0.20 dex) aimed at introducing some degree of smoothing comparable to observational errors. \begin{figure} \resizebox{1.0\hsize}{!}{\includegraphics{figure24a.eps}} \resizebox{1.0\hsize}{!}{\includegraphics{figure24b.eps}} \caption{Stellar mass histogram as a function of metallicity for NGC 1399 (upper panel) and NGC 4486 (lower panel). These histograms belong to elliptical galactocentric radii between 10 and 15 kpc (solid line) and 15 to 25 kpc (dotted line) at an adopted distance of 19 Mpc (NGC 1399) and 15.8 Mpc (NGC 4486). The histograms assume an $[\alpha/Fe]$ ratio of 0.3.} \label{Hist_Stellar} \end{figure} These mass statistics can be transformed into star-number statistics under the assumption that the stellar luminosity functions do not depend strongly on metallicity. A comparison with the cases of NGC 5128 (\citealt{b33} or \citealt{b62}) and also M31 \citep{b15} shows good qualitative agreement, i.e., the presence of a broad high metallicity component and an extended tail towards low metallicity that becomes more evident as galactocentric radius increases. More recently, \citet{b50} presents stellar number statistics for a number of edge-on spirals, which also show low metallicity skewed distributions, a feature that seems independent of the galaxy morphology. \section{Caveats about the model} \label{CAM} The model described in this work has several caveats, namely:\\ -colour bimodality is attributed to two different GCs subpopulations.
An alternative view, based on the presence of an inflection region in the colour-metallicity relation as the main driver of the shape of the colour histograms, has been suggested by \citet{b84}. So far, however, neither our calibration nor recent spectroscopic results in NGC 4472 \citep{b76} seem to support that situation. Further results in this last direction can also be found in \citet{b40} for the case of NGC 4486.\\ - Although an exponential dependence of the number of GCs on abundance was adopted, more complex functional dependences, hidden by the noise of the GC statistics, cannot be ruled out.\\ - A dependence of abundance on brightness, which leads to a blue tilt in the case of the blue GCs, is included in the models. However, we cannot infer whether the tilt is just a local effect or if it is actually shared by the diffuse stellar population associated with the blue clusters. Nevertheless, removing the tilt from the models has little impact on the output integrated colours, which would then become slightly bluer ($\approx 0.015$ mags. in (B-R)).\\ - The two dominant cluster subpopulations are assumed to be coeval. Further refinement of this aspect could be incorporated once meaningful ages become available. On one side, some works (e.g. \citealt{b37}) do not find detectable age differences between the blue and red populations in NGC 4486. On the other, some (e.g. \citealt{b19}; \citealt{b55}; \citealt{b56}, or \citealt{b35}) do find a certain fraction of intermediate-age clusters in NGC 1399 and in other galaxies.\\ - The $[\alpha/Fe]$ ratio is adopted as constant and equal for the cluster subpopulations in both galaxies in order to derive the [Fe/H] stellar distributions. Even though the adopted value is appropriate for the MW (see \citealt{b6}, or \citealt{b82}) and also representative for ellipticals \citep{b58}, a possible variation with metallicity, as noted by these last authors and, earlier, by \citet*{b69}, cannot be ruled out.
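The two processing steps behind Figure~\ref{Hist_Stellar} (the fixed $[\alpha/Fe]=0.3$ used to convert [Z/H] into [Fe/H], and the 0.20 dex Gaussian smoothing) can be sketched in a few lines. The linear conversion coefficient of 0.94 and the bin grid below are assumptions for illustration, not values quoted in this work:

```python
import numpy as np

def feh_from_zh(zh, alpha_fe=0.3, a=0.94):
    """[Fe/H] from total metallicity [Z/H] at fixed [alpha/Fe].
    The coefficient a ~ 0.94 is an assumed, commonly used value."""
    return np.asarray(zh) - a * alpha_fe

def smooth_mdf(bins, hist, sigma=0.20):
    """Convolve a binned metallicity distribution with a Gaussian
    kernel (dispersion sigma, in dex) to mimic observational errors."""
    d = bins[1] - bins[0]                   # bin width (dex)
    half = int(np.ceil(4 * sigma / d))      # kernel half-width in bins
    x = np.arange(-half, half + 1) * d
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                  # mass-preserving normalisation
    return np.convolve(hist, kernel, mode="same")
```

The kernel normalisation guarantees that the total stellar mass in the histogram is preserved by the smoothing, as long as the distribution sits well inside the metallicity grid.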
\section{Discussion and Conclusions} \label{Conclu} The results presented in previous sections show that the surface brightness profiles of both NGC 1399 and NGC 4486 can be traced using a common link between GCs and the stellar halo populations in these galaxies. This link implies that the number of GCs per unit diffuse stellar mass, defined as {\bf $t=\gamma \exp(-\delta[Z/H])$}, increases as chemical abundance decreases. We note that \citet{b33} had already found an increase of Sn with decreasing GC metallicity in the case of NGC 5128. This suggests that, on a large scale, the dominant globular cluster subpopulations formed during major star-forming episodes, following a similar pattern. However, it is not yet clear whether abundance, through the role that it plays in the $t$ parameter, is the physical reason that governs the fraction of clustered to diffuse stellar mass, or whether some other ``hidden'' variable, itself correlated with abundance, is responsible. The quality of the profile fits is comparable to any other parametric approximation in a range that covers galactocentric radii from 10 to 100 kpc. In the inner regions, GCs fail to map the brightness distribution, probably as a consequence of cluster destruction processes due to gravitational effects. However, it seems that survivor clusters, with large perigalacticon orbits, are still able to trace their associated stellar populations. It is worth mentioning that, based on far-UV observations of NGC 1399, \citet{b42} find arguments that support the coexistence of two widely different stellar populations, in terms of chemical abundance, in the nucleus of the galaxy. Even though the approach only aims at reproducing the brightness profiles, other connected features such as galactocentric colour gradients, the GCs-halo colour offset, the cumulative cluster specific frequency and the inferred stellar metallicity distributions compare very well with observations.
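The link $t=\gamma \exp(-\delta[Z/H])$ can be made concrete with a short sketch. The values of $\gamma$ and $\delta$ below are purely illustrative placeholders, not the fitted values of this work; the sketch only demonstrates the qualitative behaviour, namely that metal-poor stellar mass carries more clusters per unit mass:

```python
import numpy as np

def clusters_per_stellar_mass(zh, gamma, delta):
    """Number of GCs per unit diffuse stellar mass:
    t = gamma * exp(-delta * [Z/H])."""
    return gamma * np.exp(-delta * np.asarray(zh))

# Illustrative parameters only (NOT the fitted values of this paper):
gamma, delta = 1.0e-9, 1.0

t_blue = clusters_per_stellar_mass(-1.2, gamma, delta)  # metal-poor regime
t_red = clusters_per_stellar_mass(-0.2, gamma, delta)   # metal-rich regime
```

For any positive $\delta$, the ratio $t_{\rm blue}/t_{\rm red}$ exceeds unity, which is the sense in which blue clusters trace their diffuse stellar component more "efficiently" than red ones.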
It also seems remarkable that the common quantitative GCs-stellar halo link works in both galaxies although their cluster populations exhibit detectable differences in cluster numbers and chemical abundance. Both GC subpopulations have larger abundance scale lengths (as defined in Section 4) in NGC 1399 than in NGC 4486, leading to total mean abundance ratios of 1.4 and 2.5 for the red and blue GCs, respectively. As shown in Figures 10 and 14, the approach presented in this paper delivers GC colour distributions that are not strongly different from a two-Gaussian fit requiring five free parameters, a common procedure in the literature (e.g. Ostrov, Forte \& Geisler 1998). However, we note that our models indicate a lower ratio of the number of blue to red GCs and only require three free parameters. Blue GCs in NGC 4486 have a small abundance scale length, about four times smaller than that of the blue GCs in NGC 1399, and we speculate that this may be connected with a shorter formation time scale. In turn, that lower abundance spread may be the reason behind the presence of a blue tilt, connected with cluster mass, in the colour-magnitude diagram. This feature seems absent, or is probably masked by the larger abundance spread of the blue GCs, in NGC 1399, a situation that may also hold in other galaxies (e.g. NGC 4472, Strader et al. 2006). This last result argues in favour of the idea that blue GCs ``know'' about the galaxy they are associated with (\citealt*{b77b}). In any case, and in both galaxies, the blue GCs barely reach an abundance close to [Z/H]=-0.5. The reason for this upper metallicity cut-off may be connected with some kind of synchronising event, such as the re-ionisation of the Universe (\citealt{b11}; \citealt{b67}; \citealt{b63}). However, the different abundance scale lengths of the blue GCs also suggest that a local phenomenon, distinct for each galaxy, may have also played a role in modulating the star formation rate (e.g.
the onset of galaxy nuclear activity). The overall picture seems consistent with some scenarios already discussed in the literature (\citealt{b18}; \citealt{b34}) that invoke two different cluster formation mechanisms and, probably, environmental conditions. In contrast with the relative abundance homogeneity and large spatial (half density) core radii ($\approx$ 25 kpc) of the blue GCs, the red ones exhibit a large abundance heterogeneity and much smaller core radii ($\approx$ 5 kpc), also shared by the red diffuse stellar population. This degree of heterogeneity may be connected, for example, to mergers of a different nature (e.g. \citealt{b72}). The total globular cluster formation efficiencies, in terms of stellar mass, indicated by the models, and adopting an average cluster mass of $2.5 \times 10^{5}\,M_\odot$, are $2.4 \times 10^{-3}$ for NGC 1399 and $5.6 \times 10^{-3}$ for NGC 4486. They are comparable to the efficiency derived by \citet{b46} although, in that case, the definition of efficiency included total baryonic mass. Blue clusters show a higher formation efficiency (in terms of the stellar mass they are associated with) when compared with the red ones, probably as a consequence of a lower star formation efficiency during the early phases of galaxy formation in a low-metallicity regime. Although there is no strong evidence of an age difference between the blue and red globular subpopulations (e.g. \citealt{b37}) within the uncertainties of the measurements, a possible temporal sequence that assumes the formation of the blue population first cannot be discarded, given the relatively short time scales involved in the early phases of galaxy formation (see \citealt{b3b}). In this frame, the chemical enrichment provided by a presumed progenitor blue population might be important to boost later stellar formation efficiency through an abundance enrichment that may reach $[Z/H] \approx -0.60$ within 100 kpc of the galaxy nucleus.
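The formation efficiencies quoted above follow from the total number of clusters, the adopted mean cluster mass, and the associated diffuse stellar mass. A minimal sketch of the arithmetic; the cluster number and stellar mass below are hypothetical, chosen only to illustrate the calculation, not the paper's tabulated values:

```python
M_GC_MEAN = 2.5e5  # adopted mean cluster mass, in solar masses

def gc_formation_efficiency(n_gc, m_stellar):
    """Fraction of stellar mass locked in globular clusters:
    N_GC * <m_GC> / M_stellar."""
    return n_gc * M_GC_MEAN / m_stellar

# Hypothetical inputs (NOT the paper's values): 6000 clusters around a
# 6.25e11 Msun stellar halo give an efficiency of 2.4e-3.
eff = gc_formation_efficiency(n_gc=6000, m_stellar=6.25e11)
```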
Some scenarios suggest that blue GC formation is associated with dark matter (\citealt{b3}; \citealt{b49}; \citealt{b57}), and it is tempting to look for such a connection. For example, the results listed in Table \ref{Model_values} show that while both galaxies have a similar total number of red GCs, NGC 4486 outnumbers NGC 1399 by a factor of about 1.8 in terms of blue GCs. Dark mass estimates within a galactocentric radius of 100 kpc are 3.4$\times 10^{12} M_\odot$ for NGC 1399 (extrapolating data from \citealt{b64}) and 7.4$\times 10^{12} M_\odot$ for NGC 4486 \citep{b10b}, leading to a ratio $\approx 2.0$, comparable to that in the number of blue GCs. The similarity of the stellar galaxy masses, and the difference in their total masses, had already been noticed by \citet{b36} on the basis of their X-ray analysis. FFG05 found that, adopting {\bf their} definition of blue clusters, the density profiles of the NGC 1399 GCs could be fit with an NFW profile \citep{b51} with a scale length of 375 arcsec, coincident with that derived for the inferred dark matter halo by \citet{b64}. \citet{b80} also perform an NFW profile fit to the blue clusters in NGC 4486. However, both approaches deserve revision since, on the basis of the results presented in this work, the ``genuine'' blue GCs exhibit a rather extended inner core in their surface density profiles. As shown here, these cores have been disguised by the inclusion of the blue tail of the red subpopulation within the ``blue'' GC sample. This overlapping should be even more severe when using colour indices less sensitive to metallicity than (C-T$_1$) and should be taken into account when performing, for example, kinematic analyses of the cluster subpopulations. \\ \section*{Acknowledgments} This work was supported by grants from La Plata National University, Agencia Nacional de Promocion Cientifica y Tecnologica, and CONICET, Argentina.
DG gratefully acknowledges support from the Chilean {\it Centro de Astrof\'isica} FONDAP No. 15010003.
\section{Introduction} There have been several studies of the structure of the QCD vacuum in high magnetic fields \cite{Shushpanov:1997sf,Kabat:2002er,Miransky:2002rp,Cohen:2007bt}. The typical strength of a magnetic field which would change the structure of the QCD vacuum is very high and can be estimated as \begin{equation}\label{B10^20} B \sim \frac{m_\rho^2} e \sim 10^{20}~\textrm{G}, \end{equation} where $m_\rho=770~\textrm{MeV}$ is the typical energy scale of QCD. For example, the typical magnetic field that substantially changes the chiral condensate is $(4\pi f_\pi)^2/e$ \cite{Shushpanov:1997sf}, which is of the same order as in Eq.~(\ref{B10^20}). In Ref.~\cite{Kabat:2002er} it was argued that for $B\gtrsim 10~\textrm{GeV}^2\approx 5\cdot10^{21}~\textrm{G}$ a condensate of spin-polarized $u\bar u$ pairs appears. The behavior of nuclear matter in strong magnetic fields has been studied more extensively. The motivation for such studies is the high magnetic field observed in magnetars~\cite{Duncan:1992hi}. On general grounds one expects (see, e.g., Ref.~\cite{Broderick:2000pe}) that the magnetic field significantly affects the structure of the matter once the synchrotron (Landau level) energy $\sqrt{eB}$ is comparable to the typical energy associated with charge excitations in the system, such as, e.g., proton Fermi energies in nuclear matter. The response of color-superconducting quark matter to a strong magnetic field has also been studied \cite{Alford:1999pb,Ferrer:2005vd,Ferrer:2006vw,Ferrer:2007iw,Ferrer:2006ie,Fukushima:2007fc,Noronha:2007wg}. Similarly, in all mechanisms studied so far, the ground state is affected above some value of the magnetic field determined by the superconducting gap $\Delta$ and/or the chemical potential $\mu$. For example, fields of order $\mu\Delta/e$ or higher are needed to destroy color superconductivity~\cite{Alford:1999pb}.
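The natural-units-to-gauss conversion behind estimates such as Eq.~(\ref{B10^20}) can be checked numerically by anchoring on the electron critical (Schwinger) field, at which $eB = m_e^2$. A sketch; the numerical constants below are standard values, assumed rather than taken from the text:

```python
import math

B_CRIT_E = 4.414e13  # gauss; the Schwinger field, where eB = m_e^2
M_E = 0.511e-3       # electron mass in GeV
GEV2_TO_GAUSS = B_CRIT_E / M_E**2  # ~1.69e20 G per GeV^2 of eB

def b_gauss(eB_gev2):
    """Magnetic field in gauss for a given eB in GeV^2."""
    return eB_gev2 * GEV2_TO_GAUSS

m_rho = 0.770   # GeV
f_pi = 0.0924   # GeV; assumed value of the pion decay constant

B_rho = b_gauss(m_rho**2)                       # ~1e20 G, as in the estimate
B_chiral = b_gauss((4 * math.pi * f_pi)**2)     # same order of magnitude
```

With $eB \sim m_\rho^2$ this reproduces $B \sim 10^{20}$~G, and the chiral-condensate scale $(4\pi f_\pi)^2/e$ indeed comes out of the same order.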
In this paper we show that, due to the anomalous coupling of neutral pseudoscalar Goldstone bosons to electromagnetism, the structure of the ground state is modified at much lower values of the magnetic field. In fact, these values are parametrically lower than~(\ref{B10^20}) in the limit where the Goldstone bosons become massless (e.g., the chiral limit). For low-density nuclear matter we find two relevant scales of magnetic field (see Sec.~\ref{sec:pi0-domain-wall}): \begin{equation}\label{B0} B_0 = \frac{3m_\pi^2}e, \qquad B_1 = 16\pi \frac{f_\pi^2 m_\pi}{em_N}\,. \end{equation} In particular, above $B_1$ nuclear matter is replaced by a different state. The most striking feature of Eq.~(\ref{B0}) is that both $B_0$ and $B_1$ \emph{vanish} in the chiral limit: when $m_\pi=0$, the structure of nuclear matter is altered at an arbitrarily small magnetic field! This is in sharp contrast to the previous estimates of the critical magnetic field, Eq.~(\ref{B10^20}). The state of QCD associated with scales (\ref{B0}) is a $\pi^0$ domain wall---a configuration in which the local expectation value of the $\pi^0$ field varies along the direction of the magnetic field $\bm B$ over a scale of the pion Compton wavelength. We show that for $|\bm B|>B_0$ the domain wall becomes locally stable (metastable). The central observation of this paper is that such a domain wall carries nonzero surface baryon charge density proportional to $|\bm B|$. As we show, this is a consequence of the quantum axial anomaly---the triangle anomaly involving the baryon, electromagnetic and neutral axial currents.\footnote{The physics of triangle anomaly at finite density has also received some interest recently, see, e.g., \cite{Son:2004tq,Metlitski:2005pr,Newman:2005as,Harvey:2007rd}.} When $|\bm B|>B_1$ a parallel stack of such domain walls is energetically more favorable at $\mu\approx m_N$ than low-density nuclear matter, as it carries less energy per baryon.
That means nuclear matter turns into a stack of $\pi^0$ domain walls at such large magnetic fields. For larger magnetic fields this ``wall state'' should persist down to chemical potentials $\mu \gtrsim m_N\,B_1/|\bm B|$. We note right away that although both $B_0$ and $B_1$ vanish in the chiral limit $m_\pi\to0$ (with $B_0\ll B_1$), for the physical pion mass, these magnetic fields are of order $10^{19}~\textrm{G}$, smaller than the QCD scale~(\ref{B10^20}), but still much larger than the fields typical of magnetars. The crucial role in our analysis is played by the Wess-Zumino-Witten (WZW) term describing the anomalous interaction of the neutral pion field with the external electromagnetic field, and a related pion contribution to the baryon current. For example, the WZW term describes the anomalous $\pi^0\to2\gamma$ decay. We review the prerequisite basics of the WZW action in Sec.~\ref{sec:WZW}. We then derive the scales~(\ref{B0}) in Sec.~\ref{sec:pi0-domain-wall}. In Sec.~\ref{sec:colorsup} we show that the same mechanism that leads to the formation of $\pi^0$ domain walls in vacuum also operates in color-superconducting phases of QCD at high baryon densities. Such phases could exist in the cores of dense neutron or quark stars. The Nambu-Goldstone bosons associated with broken symmetries in these phases are much lighter~\cite{inverse-ordering,SSZ} than $\pi^0$ in vacuum. As a result, in these phases, the domain walls appear spontaneously at lower magnetic fields of order $10^{17}-10^{18}~\textrm{G}$, which decrease with increasing $\mu$ due to the decrease of the Nambu-Goldstone boson masses. Finally, in Sec.~\ref{sec:ferro} we consider another consequence of the anomaly: the spontaneous generation of magnetization, i.e., ferromagnetism, in dense QCD matter. Ferromagnetism of nuclear and quark matter, under various mechanisms, has been discussed in the literature \cite{Tatsumi:2000dv,Isayev:2003fz,Inui:2007zc,Ferrer:2007uw}. 
It has been suggested that ferromagnetism may help explain certain features of magnetars~\cite{Bhattacharya:2007ud}. We point out that for such magnetization to appear, it is sufficient for a {\em pseudoscalar} Goldstone boson field to develop a nonzero average spatial gradient. Such a situation may indeed appear in the so-called ``Goldstone boson current'' phases of quark matter with mismatched quark Fermi surfaces. In the case when all gapless fermions are electrically neutral, we show that the magnitude of the magnetization is determined by the triangle anomalies. We estimate this magnitude in one particular scenario of Goldstone boson current in the color-flavor-locked phase with neutral kaon condensation (CFLK$^0$ phase) to be of order $10^{16}~\textrm{G}$. Since only a finite (and presumably small) region inside the neutron star is occupied by this current phase, we estimate the typical magnetic field generated by such a mechanism to be of order $10^{14}-10^{15}~\textrm{G}$. If such a mechanism indeed operates within the cores of some magnetars, it might account for their unusually large magnetic fields. \section{The WZW action in electromagnetic field} \label{sec:WZW} \subsection{SU(3) case} \label{sec:su3} We start from SU(3) chiral perturbation theory, which describes the octet of pseudoscalar Nambu-Goldstone bosons in terms of a $3\times3$ unitary matrix $\Sigma$ \begin{equation}\label{Sigma} \Sigma = \exp\left( \frac{i\lambda^a \varphi^a}{f_\pi}\right), \end{equation} where $\lambda^a$ are the 8 Gell-Mann matrices and \begin{equation} \frac1{\sqrt2} \lambda^a\varphi^a =\left( \begin{array}{ccc} \frac{\pi^0}{\sqrt2}+\frac\eta{\sqrt6}& \pi^+ & K^+ \\ \pi^- & -\frac{\pi^0}{\sqrt2}+\frac\eta{\sqrt6} & K^0 \\ K^- & \bar K^0 & -\frac{2\eta}{\sqrt6} \end{array} \right).
\end{equation} Without the WZW term, the Lagrangian of the theory in an external electromagnetic field $A_\mu$ is \begin{equation}\label{eq:sigma-lagrangian} {\cal L} = \frac{f_\pi^2}4 \tr D_\mu\Sigma^\+ D_\mu\Sigma + \tr(M\Sigma+\textrm{h.c.}), \end{equation} where \begin{equation} D_\mu\Sigma = \d_\mu\Sigma + ieA_\mu [Q,\, \Sigma], \end{equation} with $Q=\mathrm{diag}(2/3,-1/3,-1/3)$. The Lagrangian is invariant under global SU(3)$_L\times$SU(3)$_R$ symmetry, and under the local U(1)$_Q$ subgroup of this symmetry. Gauging the whole SU(3)$_L\times$SU(3)$_R$ in QCD is not possible due to the axial anomalies~\cite{anomaly}. The anomalies are captured by the Wess-Zumino-Witten (WZW) term in the action~\cite{Wess:1971yu,Witten:1983tw}. We introduce the standard notations, \begin{equation} L_\mu = \Sigma\d_\mu\Sigma^\+, \qquad R_\mu = \d_\mu\Sigma^\+\Sigma . \end{equation} In the background of the external electromagnetic field $A_\mu$ as well as an auxiliary gauge potential $A_\mu^B$ coupled to baryon current, the WZW term is given by~\cite{Wess:1971yu,Witten:1983tw,Kaymakcalan:1983qq,DGH} \begin{multline}\label{SWZW} S_{\rm WZW}[\Sigma, A_\mu, A^B_\mu] = S_{\rm WZW}[0] - \int\!d^4x\, A^B_\mu j_B^\mu + \frac{\epsilon^{\mu\nu\alpha\beta}}{16\pi^2} \int\!d^4x\, \Bigl[ e A^\mu \tr(QL_\nu L_\alpha L_\beta + QR_\nu R_\alpha R_\beta)\\ -ie^2 F_{\mu\nu}A_\alpha \tr(Q^2L_\beta +Q^2 R_\beta + \tfrac12 Q\Sigma Q\d_\beta\Sigma^\+ -\tfrac12 Q\Sigma^\+ Q\d_\beta\Sigma ) \Bigr]. \end{multline} Here $S_{\rm WZW}[0]$ is the WZW term without the gauge field (which can be written in the form of a five-dimensional integral). The additional terms in (\ref{SWZW}) make the action invariant with respect to local U(1)$_B$ and U(1)$_Q$ (baryon and electric charge) transformations. The U(1)$_B$ transformation is not a part of the SU(3)$_L\times$SU(3)$_R$ group and the fields $\Sigma$ do not transform under it. 
However, the external U(1)$_B$ gauge potential $A^B_\mu$ does couple to $\Sigma$ via the Goldstone-Wilczek baryon current $j_B^\mu$~\cite{Goldstone:1981kk,Witten:1983tw}. In the external electromagnetic field, the conserved and gauge invariant baryon current $j_B^\mu$ can be found using the ``trial and error'' gauging, following Witten \cite{Witten:1983tw} \begin{equation} j_B^\mu = -\frac1{24\pi^2}\epsilon^{\mu\nu\alpha\beta}\left\{ \tr(L_\nu L_\alpha L_\beta) - 3ie \d_\nu[A_\alpha \tr (QL_\beta +QR_\beta)] \rule{0pt}{1em}\right\}, \label{jB-cons} \end{equation} or the ``covariant derivative'' gauging, following Goldstone and Wilczek~\cite{Goldstone:1981kk} \begin{equation} j_B^\mu = - \frac1{24\pi^2}\epsilon^{\mu\nu\alpha\beta} \left\{ \tr[(\Sigma D_\nu\Sigma^\+) (\Sigma D_\alpha\Sigma^\+) (\Sigma D_\beta\Sigma^+)] -\frac{3ie}{2} F_{\nu\alpha} \tr[Q(\Sigma D_\beta\Sigma^+ + D_\beta\Sigma^\+\Sigma)] \right\}. \label{jB-gi} \end{equation} In the form (\ref{jB-cons}) both terms are obviously conserved, but not separately gauge invariant. In the form (\ref{jB-gi}) both terms are obviously gauge invariant, but not separately conserved. It can be checked that the two forms are equivalent. \subsection{SU(2) case} \label{sec:su2} If one specializes to the SU(2) case [i.e., only $\varphi^1$, $\varphi^2$, $\varphi^3$ are nonzero in Eq.~(\ref{Sigma})], then the previous formulas simplify. We can write \begin{equation} \Sigma = \frac1{f_\pi} (\sigma + i\tau^a\pi^a), \qquad \sigma^2+\pi^a\pi^a=f_\pi^2, \end{equation} and $Q=t^3+1/6$ ($t^3=\tau^3/2$) to verify, e.g., that $\tr(Q\Sigma Q\d_\beta\Sigma^\+ -Q\Sigma^\+ Q \d_\beta\Sigma)=(1/3)\tr [t^3 (L_\beta+R_\beta)]$. The WZW action is zero in the absence of the external fields: $S_\textrm{WZW}[0]=0$. 
In the presence of external fields, it becomes \begin{equation} S_{\rm WZW} = \int\!d^4x\, \left\{ -A^B_\mu j_B^\mu + \frac{\epsilon^{\mu\nu\alpha\beta}}{16\pi^2} \left(\frac13 eA_\mu \tr(L_\nu L_\alpha L_\beta) - \frac{ie^2}{2} F_{\mu\nu}A_\alpha \tr [t^3 (L_\beta+R_\beta)] \right) \right\}, \end{equation} and \begin{align}\label{jB-2fl} j^\mu_B = -\frac1{24\pi^2}\epsilon^{\mu\nu\alpha\beta} \left\{ \tr(L_\nu L_\alpha L_\beta) - {3ie} \d_\nu \left[ A_\alpha \tr(t^3 L_\beta + t^3 R_\beta)\right] \rule{0pt}{1em}\right\}, \end{align} or \begin{equation} \label{eq:jB-gw2} j^\mu_B = - \frac1{24\pi^2}\epsilon^{\mu\nu\alpha\beta} \left\{ \tr[(\Sigma D_\nu\Sigma^\+) (\Sigma D_\alpha\Sigma^\+) (\Sigma D_\beta\Sigma^+)] -\frac{3ie}{2} F_{\nu\alpha} \tr[t^3(\Sigma D_\beta\Sigma^+ + D_\beta\Sigma^\+\Sigma)] \right\}. \end{equation} The WZW action can therefore be written as \begin{equation}\label{eq:Q=B/2} S_{\rm WZW} = -\int\!d^4x\, \left( A^B_\mu + \frac e2 A_\mu\right) j^\mu_B. \end{equation} The second term is the contribution of the baryon charge to the electric charge of a baryon as in the Gell-Mann-Nishijima formula $Q=I_3+N_B/2$. Consider one particular case, when $\Sigma$ is restricted to the form \begin{equation} \Sigma = \exp\left(\frac i{f_\pi} \tau_3 \varphi_3 \right), \end{equation} and the external field is chosen to be a constant magnetic field $B_i=\epsilon_{ijk}F_{jk}/2$ and baryon chemical potential $A_\nu^B=(\mu,\bm{0})$. In this case the WZW action assumes an even simpler form [only the last term in Eq.~(\ref{jB-2fl}) survives]: \begin{equation}\label{eq:Sphi3} S_{\rm WZW} = \frac e{4\pi^2f_\pi}\int\! d^4x\, \mu \bm{B} \cdot \bm{\nabla}\varphi_3. 
\end{equation} This form of the magnetic effective action has been written down and discussed in Ref.~\cite{Son:2004tq}, where it was interpreted as a nonzero magnetization of a $\pi^0$ domain wall at finite $\mu$ given by \begin{equation} \label{eq:magnetization} \bm M = \frac e{4\pi^2f_\pi} \mu \bm{\nabla}\varphi_3. \end{equation} In this paper we point out that the same term is responsible for the nonzero baryon density of a domain wall in an external magnetic field: \begin{equation} \label{eq:baryondensity} n_B= \frac e{4\pi^2f_\pi} \bm{B} \cdot \bm{\nabla}\varphi_3. \end{equation} \section{$\pi^0$ domain wall in a magnetic field} \label{sec:pi0-domain-wall} \subsection{Local stability} \label{sec:local-stability} To treat the $\pi^0$ domain wall and the fluctuations around it, it is most convenient to use the following parametrization \begin{align}\label{param} \sigma = f_\pi \cos\chi \cos\theta,\qquad & \pi^1 = f_\pi\sin\chi\cos\phi,\\ \pi^0 = f_\pi \cos\chi \sin\theta,\qquad & \pi^2 = f_\pi\sin\chi \sin\phi. \end{align} The Lagrangian (without the magnetic field) is given by \begin{equation}\label{eq:Lpi0} {\cal L} = \frac{f_\pi^2}2[ (\d_\mu\chi)^2 + \cos^2\chi (\d_\mu\theta)^2 + \sin^2\chi(\d_\mu\phi)^2 ] - f_\pi^2 m_\pi^2(1-\cos\chi\cos\theta). \end{equation} The $\pi^0$ domain wall corresponds to the following static solution to the field equations, \begin{equation}\label{kink} \chi =0, \qquad \theta = 4\arctan e^{m_\pi z}. \end{equation} Topologically, since Eq.~(\ref{kink}) corresponds to a contractible loop in the SU(2) group manifold (S$^3$), the wall can be ``unwound.'' Moreover, in the absence of a magnetic field the $\pi^0$ domain wall is not even {\em locally} stable. This can be seen by analyzing small fluctuations around the solution (\ref{kink}). 
For small $\pi_1$ and $\pi_2$ the Lagrangian is given by \begin{equation} {\cal L} = \frac12[(\d_\mu\pi_1)^2 +(\d_\mu\pi_2)^2] - \frac{m_\pi^2}2\left( 1 - \frac6{\cosh^2 m_\pi z}\right)(\pi_1^2+\pi_2^2). \end{equation} The equations of motion are \begin{equation} -(\d_x^2 + \d_y^2)\pi^a - \d_z^2 \pi^a + m_\pi^2 \left( 1 - \frac6{\cosh^2 m_\pi z}\right) \pi^a = E^2\pi^a. \end{equation} The corresponding Schr\"odinger equation has two bound states. The lowest state is tachyonic, \begin{equation} E^2 = k_x^2 + k_y^2 - 3 m_\pi^2\,, \end{equation} so the wall is locally unstable. (The second bound state corresponds to a zero mode of the wall.) In the magnetic field, the Laplacian in the $(x,y)$ plane becomes the Hamiltonian of a particle in a magnetic field, whose spectrum (the Landau levels) is well known, leading to \begin{equation}\label{eq:E2=eB-3m2} E^2 = (2n+1) eB - 3 m_\pi^2,\quad n=0,1,\ldots \end{equation} Therefore, when the magnetic field exceeds the value \begin{equation} B_0 = \frac{3 m_\pi^2}e \approx 1.0\times 10^{19}~\textrm{G}, \end{equation} the $\pi^0$ domain wall becomes locally stable. \subsection{Global stability at finite $\mu$} \label{sec:stab-fin-mu} Substituting the configuration (\ref{kink}) into the Lagrangian (\ref{eq:Lpi0}), one finds the following energy density per unit area, \begin{equation} \label{eq:Energy/S} \frac{\cal E}{S} = 8f_\pi^2 m_\pi. \end{equation} At finite baryon chemical potential $\mu$ and in the presence of a magnetic field $F_{xy}=B$ (i.e., $B_z=-B$), the configuration (\ref{kink}) carries a baryon number according to Eq.~(\ref{eq:baryondensity}) with $\varphi_3=f_\pi\theta$. The baryon number per unit surface area is thus given by \begin{equation}\label{rhoB} \frac{N_B}{S} = \frac{eB}{2\pi}\,. \end{equation} Being a total derivative, the WZW term (\ref{eq:Sphi3}) does not affect the field equations. 
The energy per baryon number of the $\pi^0$ domain wall is \begin{equation}\label{eq:E/N-wall} \frac{\cal E}{N_B} = 16\pi\frac{f_\pi^2 m_\pi}{eB}\,. \end{equation} When the baryon chemical potential exceeds the value of that ratio, i.e., for $\mu>16\pi{f_\pi^2 m_\pi}/{(eB)}$, the wall becomes energetically more favorable than the vacuum, and the ground state must be a stack of parallel domain walls, (at least) as long as $\mu\lesssim m_N$---the energy per baryon number of nuclear matter. In order to be more favorable than nuclear matter at $\mu\approx m_N$, the ratio (\ref{eq:E/N-wall}) must be less than $m_N$. This happens if the magnetic field exceeds \begin{equation} B_1 = \frac{16\pi f_\pi^2 m_\pi}{e m_N} \approx 1.1 \times 10^{19}~\textrm{G}. \end{equation} In the chiral limit $m_\pi\to0$, $B_1\gg B_0$, but for the real-world pion mass $B_1$ is only slightly higher than $B_0$. According to Eq.~(\ref{eq:Q=B/2}), the $\pi^0$ domain wall carries a finite surface electric charge density equal to half of the baryon charge density given by Eq.~(\ref{rhoB}). Within QCD, this charge can be neutralized by the $\pi^-$ bosons localized on the wall: according to Eq.~(\ref{eq:E2=eB-3m2}) the energy cost of adding a~$\pi^-$ vanishes at $B=B_0$. The number of charged pions necessary to neutralize the wall fills exactly half of the first Landau level. This suggests that the electrically neutral ground state may show quantum Hall behavior. For $B>B_0$, each pion costs an energy of $(e(B-B_0))^{1/2}$. However, for $B>B_0$, within the full Standard Model (with electromagnetism), other mechanisms of neutralizing the electric charge of the wall may compete with adding charged pions (e.g., adding electrons). Since the energy of adding one electron to the system is only $m_e$ (its lowest Landau level energy), our estimate for $B_1$ is largely unaffected.
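The numerical values of $B_0$ and $B_1$ follow directly from the formulas above; a sketch using the conversion $eB = 1~\mathrm{GeV}^2 \leftrightarrow 1.69\times10^{20}$~G (the masses and $f_\pi$ below are standard values assumed here, and rounding to one significant digit reproduces the quoted fields):

```python
import math

GEV2_TO_GAUSS = 1.69e20  # B in gauss when eB = 1 GeV^2

m_pi = 0.135   # neutral pion mass (GeV)
f_pi = 0.0924  # pion decay constant (GeV), assumed value
m_N = 0.939    # nucleon mass (GeV)

B0 = 3.0 * m_pi**2 * GEV2_TO_GAUSS                          # ~1e19 G
B1 = 16.0 * math.pi * f_pi**2 * m_pi / m_N * GEV2_TO_GAUSS  # ~1.1e19 G
```

As stated in the text, for the physical pion mass $B_1$ comes out only slightly above $B_0$, while in the chiral limit both vanish with $B_0 \ll B_1$.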
\subsection{Structure and baryon charge of a finite domain wall} \label{sec:struct-finite-wall} So far we have considered an infinite domain wall. Let us now consider a large, but finite-size, domain wall. For the infinite wall, the baryon charge, given by Eq.~(\ref{rhoB}), comes from the {\em second term} in the baryon current (\ref{jB-2fl}), which gives Eq.~(\ref{eq:Sphi3}). This term is a full derivative, so for a {\em finite} wall it must vanish. Where does the baryon number come from in this case? We now demonstrate explicitly that the finite domain wall carries a baryon number that comes from the first term in Eq.~(\ref{jB-2fl}). We consider a flat domain wall with a circular boundary. We use cylindrical coordinates $(\rho,\varphi,z)$ with the origin at the center of the wall. The boundary of the wall is chosen to be $z=0$, $\rho=R$. We assume the radius $R$ is much larger than the thickness of the wall, $R\gg m_\pi^{-1}$. We use the parametrization~(\ref{param}). We expect that when $\rho<R$ and $R-\rho>m_\pi^{-1}$, we are sufficiently far away from the boundary so that the domain wall is given by Eq.~(\ref{kink}). In particular, when $z$ varies from $-\infty$ to $+\infty$, $\theta$ jumps by $2\pi$: \begin{equation} \theta(z=+\infty) - \theta(z=-\infty) = 2\pi, \qquad \rho<R. \end{equation} When $\rho>R$, one does not cross any domain wall as one moves along the $z$ direction, \begin{equation} \theta(z=+\infty) - \theta(z=-\infty) = 0, \qquad \rho>R. \end{equation} We find that $\theta$ is a multiple-valued function: it changes by $2\pi$ when we move along a small loop around the boundary $\rho=R$, $z=0$. To avoid a singularity in the fields themselves, $\cos\chi$ has to vanish on the boundary. We can choose \begin{equation} \chi(\rho=R,z=0) = \frac\pi 2\,. \end{equation} We expect that $\chi$ is nonzero only near the boundary. So the $\pi^1$ and $\pi^2$ fields differ substantially from 0 only near $\rho=R$. 
As these fields describe the charged pions, the boundary of the domain wall is a superconducting string~\cite{Witten:1984eb}. At the boundary $\rho=R$, the charged pion condensate is largest, $(\pi^1)^2+(\pi^2)^2=f_\pi^2$. Moreover, the phase $\phi$ of the charged pion condensate has a nontrivial winding number around the circle $\rho=R$. Indeed, in order to minimize the kinetic energy, this winding number is equal to the magnetic flux that goes through the contour, in units of the elementary flux: \begin{equation} \phi(\varphi=2\pi) - \phi(\varphi=0) = \frac1{2\pi}eB(\pi R^2) = \frac12 eB R^2 . \end{equation} Because of continuity, the phase $\phi$ has the same winding number on any contour that surrounds the $z$ axis, $\rho=0$. To avoid singularity on this axis, we must have $\sin\chi=0$ at $\rho=0$. We choose $\chi(\rho=0)=0$. Thus we find that a finite $\pi^0$ domain wall has a peculiar feature: the phase $\phi$ makes $\frac12 eBR^2$ full circles on any contour that surrounds the axis $z=0$, and the phase $\theta$ makes a full circle on any contour that has linking number one with the boundary $\rho=R$ of the wall. The phase $\chi$ changes from $0$ on the $z$ axis to $\pi/2$ on the boundary of the wall. It is easy to see that the configuration has the topology of a Skyrmion with the baryon charge $N_B=\frac12 eBR^2$. It can already be seen from Eq.~(\ref{jB-gi}), but it is instructive to check that Eq.~(\ref{jB-cons}) gives the same result. Indeed, the full derivative term in Eq.~(\ref{jB-cons}) does not contribute to the total baryon charge and we have \begin{equation} N_B = -\frac1{24\pi^2}\int\!d^3x\, \epsilon^{ijk} \tr(L_i L_j L_k). \end{equation} Changing coordinates to $\chi$, $\theta$, and $\phi$, one finds that the baryon charge is equal to $\frac12 eB R^2$. The baryon charge per unit surface area is the same as in Eq.~(\ref{rhoB}).
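The consistency between the winding number, the total Skyrmion charge $N_B=\frac12 eBR^2$, and the infinite-wall surface density (\ref{rhoB}) is simple arithmetic; a sketch with arbitrary illustrative values of $e$, $B$ and $R$ (natural units, values not from the text):

```python
import math

e, B, R = 0.303, 2.0, 5.0  # arbitrary illustrative values, natural units

# Winding of phi around the boundary = flux through the wall in
# elementary flux units; this equals the Skyrmion baryon charge.
N_B = 0.5 * e * B * R**2

# Dividing by the wall area recovers the infinite-wall surface
# baryon density eB/(2*pi) of Eq. (rhoB).
surface_density = N_B / (math.pi * R**2)
```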
\section{Color superconducting phases} \label{sec:colorsup} So far, we have considered the effect of the magnetic field on low-density matter. In this Section, we consider the effect of the magnetic field on the structure of high-density quark matter. Such high-density matter may exist in one of the color-superconducting phases (see, e.g., Refs.~\cite{Rajagopal:2000wf,Schafer:2003vz,Alford:2006wn,Shovkovy:2007zz,Alford:2007xm} for reviews). We shall see that, due to the existence of light pseudoscalar Nambu-Goldstone bosons, stacks of domain walls of such bosons can be generated, and because the corresponding bosons are light, the critical magnetic field can be much lower than in vacuum. \subsection{2SC phase in a magnetic field} \label{sec:2sc-phase-magnetic} Theoretically, the simplest color superconducting phase is the two-flavor superconducting (2SC) phase~\cite{Alford:1997zt,Rapp:1997zu}. On the phase diagram, this phase occupies a window of chemical potential next to low-density nuclear matter: right after chiral symmetry is restored, but before the density of strange quarks becomes significant. In this regime, the attraction between quarks in the color-antitriplet channel leads to an instability of the Fermi surface due to the familiar Cooper mechanism. The resulting Cooper pair condensate has the quantum numbers of a color antitriplet and an isospin singlet, and carries zero angular momentum. Perturbatively, there are two such condensates, formed of left-handed and of right-handed quark pairs: $X\sim q_Lq_L$ and $Y\sim q_Rq_R$. The gauge-invariant (color singlet) order parameter is the singlet made out of the $X$ and $Y$ color vectors: $\Sigma=XY^\dagger$. Like $X$ and $Y$, $\Sigma$ is also an isosinglet: the isospin SU(2)$_L\times$ SU(2)$_R$ chiral symmetry is not broken in the 2SC phase.
However, since the phases of $X$ and $Y$ change in opposite directions under the axial isospin-singlet U(1)$_A$ symmetry, the phase of the order parameter $\Sigma=XY^\dagger$ changes under U(1)$_A$. This means that the U(1)$_A$ symmetry is broken by the condensate. In reality, this U(1)$_A$ symmetry is not a true symmetry of QCD---it is violated by the quantum fluctuations of the gluon fields via an anomaly. However, the vacuum configurations of the gluon fields responsible for this violation, i.e., the instantons, are suppressed at large baryon density due to color Debye screening, and the U(1)$_A$ transformation can be treated as an approximate symmetry at large $\mu$. In the 2SC phase, where the U(1)$_A$ is spontaneously broken, the measure of the explicit violation of this symmetry by the anomaly/instantons is the mass $m_\eta$ of the Goldstone boson (which we call $\eta$). This mass decreases very fast with $\mu$ (see below and Ref.~\cite{SSZ}). The smallness of $m_\eta$ is what is responsible for the low value of the critical magnetic field. The effective Lagrangian density for the $\eta$ boson in the 2SC phase is~\cite{SSZ} \begin{equation} {\cal L} = f^2 [(\d_0\varphi)^2 - u^2 (\d_i\varphi)^2 - m_\eta^2(1-\cos\varphi)]\, , \label{LeffV} \end{equation} where $\varphi$ is the local value of the U(1)$_A$ phase whose fluctuations generate the Goldstone boson $\eta$. For asymptotically large $\mu\gg\Lambda_\textrm{QCD}$, the low-energy constants in the effective Lagrangian (\ref{LeffV}) are calculable~\cite{Beane:2000ms,inverse-ordering}: \begin{equation}\label{eq:f,u} f^2 = \frac{\mu_q^2}{8\pi^2} \, , \qquad u^2 = \frac13 \,
\end{equation} and \begin{equation} m_\eta = \sqrt{\frac a2} \, \frac{\mu_q}{f} \Delta = 2\pi \sqrt a \Delta \, , \label{meta} \end{equation} where $\Delta$ is the superconducting gap and $a$ has been estimated in Ref.~\cite{SSZ}: \begin{equation} a = 5 \times 10^4 \biggl(\ln\frac{\mu_q}{\LambdaQCD}\biggr)^7 \biggl(\frac{\LambdaQCD}{\mu_q}\biggr)^{29/3} \, . \label{amu} \end{equation} In Eqs.~(\ref{eq:f,u})--(\ref{amu}), $\mu_q$ denotes the quark chemical potential: $\mu_q\equiv\mu/3$. The domain wall configuration $\varphi=4\arctan[\exp(m_\eta z/u)]$ is a static solution of the equations of motion with energy per unit surface area given by \begin{equation} \frac {\cal E}S = 16 u f^2 m_\eta \, . \label{sigma} \end{equation} Unlike the $\pi^0$ domain wall of Section~\ref{sec:pi0-domain-wall}, it is locally stable because of the topology of U(1)$_A$: the wall can be unwound only by changing the magnitude of $\Sigma$, which requires energies beyond the scale of the effective Lagrangian (\ref{LeffV}). The interaction of $\varphi$ with the magnetic field due to the axial anomaly is described by~\cite{Son:2004tq} \begin{equation} \label{eq:L-B-phi} {\cal L} = \frac{e\mu}{36\pi^2} \,\bm {\nabla}\varphi \bm {\cdot B}. \end{equation} Being a total derivative, this term does not change the field equations for $\varphi$, but it does contribute to the total free energy of a domain wall. In particular, for a domain wall perpendicular to the homogeneous field $\bm B$, the magnetic free energy per unit area is $e\mu B/(18\pi)$, which can be interpreted as the surface density of a magnetic dipole moment directed perpendicular to the wall, \begin{equation} \label{eq:mag-moment} \frac{|\textswab m|}{S}=\frac {e\mu}{18\pi} \, . \end{equation} For sufficiently large $B$, the free energy gain due to the interaction of the wall with the magnetic field outweighs the surface energy cost (\ref{sigma}) of creating the wall.
Thus the critical field is \begin{equation} \label{eq:B-crit} B_c=\frac{\cal E}{|\textswab{m}|}=288 \pi u \frac{f^2m_\eta}{e\mu} = \frac{4}{\sqrt3\pi}\frac{\mu m_\eta}{ e} \approx 1.2\cdot 10^{18}\textrm{ G} \times \left(\frac\mu{1 \textrm{ GeV}}\right) \left(\frac {m_\eta}{10 \textrm{ MeV}}\right). \end{equation} For $B>B_c$, the domain walls are energetically favorable and (provided boundary conditions allow) they will stack up until their mean separation is of the order of their width $1/m_\eta$. For comparison, the critical magnetic field needed to destroy superconductivity is at least of order $\mu\Delta/e$~\cite{Alford:1999pb}. Due to the fast decrease of $m_\eta$ with $\mu$, at large $\mu$ the value of $B_c$ is much lower than this critical field. \subsection{CFL} \label{sec:cfl} At large $\mu$ one eventually enters the regime where the mass of the strange quark can be neglected, the density of strange quarks is as large as that of the up and down quarks, and pairing involving all three flavors becomes energetically favorable. This pairing state is called the color-flavor-locked (CFL) phase~\cite{CFL}. In the CFL phase, the Cooper pairs are both flavor and color antitriplets, i.e., $X\sim q_Lq_L$ and $Y\sim q_Rq_R$ each carry a color and a flavor index and transform as color-flavor matrices, $X\to LXC^T$ and $Y\to RYC^T$, under the flavor and color SU(3)$_L\times$SU(3)$_R\times$SU(3)$_C$ transformations. The gauge-invariant order parameter $\Sigma=XY^\dagger$ transforms in the same way as the ordinary chiral condensate in vacuum, $\Sigma\to L\Sigma R^\dagger$. Therefore, in the CFL phase, the chiral SU(3)$_L\times$SU(3)$_R$ is broken down to the vector-like SU(3)$_{L+R}$, as it is in the vacuum. Similarly to the 2SC phase, the U(1)$_A$ symmetry is also spontaneously broken in the CFL phase. The SU(3)$_L\times$SU(3)$_R\times$U(1)$_A$ symmetry is explicitly violated by instantons and quark masses, so all Nambu-Goldstone bosons are massive.
For simplicity, we consider the regime reached at asymptotically high $\mu$, where one can neglect the contribution of instantons to all masses. The lightest Nambu-Goldstone boson in this case is an isosinglet which has the quantum numbers of $\bar s s$, i.e., a mixture of $\eta$ and $\eta'$~\cite{inverse-ordering}. Its squared mass is given by~\cite{inverse-ordering} \begin{equation} m_{\bar ss}^2 = \frac{3\Delta^2 m_u m_d}{\pi^2f^2}, \end{equation} where $f^2\sim\mu^2$ is given below in Eqs.~(\ref{ff1f8}) and (\ref{f1f8}). The effective Lagrangian for this field, $\varphi_{\bar ss}$, is similar to the Lagrangian (\ref{LeffV}), \begin{equation} {\cal L} = f^2 [(\d_0\varphi_{\bar ss})^2 - u^2 (\d_i\varphi_{\bar ss})^2 -m_{\bar ss}^2 (1-\cos\varphi_{\bar ss})]. \end{equation} Since the boson is a mixture of the $\eta$ and the $\eta'$, its decay constant is a linear combination of the singlet and octet decay constants. One can easily derive \begin{equation}\label{ff1f8} f^2 = \frac1{12} (f_{\eta'}^2 + 2 f_\pi^2), \end{equation} where $f_{\eta'}^2$ and $f_\pi^2$ have been computed in Ref.~\cite{inverse-ordering}, \begin{equation}\label{f1f8} f_{\eta'}^2 = \frac 34 \frac{\mu_q^2}{2\pi^2}, \qquad f_\pi^2 = \frac{21-8\ln 2}{18} \frac{\mu_q^2}{2\pi^2}\,. \end{equation} The anomalous coupling of the $\varphi_{\bar ss}$ field to the magnetic field and the baryon chemical potential is given by~\cite{Son:2004tq} \begin{equation} \label{eq:L-B-phi-prime} {\cal L}' = \frac{e\mu}{12\pi^2} \,\bm {\nabla}\varphi_{\bar ss} \bm {\cdot B}. \end{equation} Therefore the critical magnetic field in the CFL phase can be estimated as \begin{equation} \label{eq:H_c-cfl} B_c' = 96\pi u \frac{f^2 m_{\bar ss}}{e\mu} = \frac{111-32\ln2}{81\sqrt 3\pi} \frac{\mu m_{\bar ss}}e = \frac{8\sqrt{111-32\ln2}}{3\sqrt6\pi} \Delta\sqrt{m_u m_d}\,.
\end{equation} Numerically, it can be written as \begin{equation} B_c' = 1.0 \cdot 10^{17}\textrm{ G} \times \left(\frac\mu{1.5 \textrm{ GeV}}\right) \left(\frac {m_{\bar ss}}{2 \textrm{ MeV}}\right) = 8.3\cdot 10^{16}\textrm{ G} \times \left( \frac\Delta {30 \textrm{ MeV}}\right) \left(\frac{\sqrt{m_u m_d}}{5 \textrm { MeV}}\right). \end{equation} This value is close to the theoretical upper limit for the magnetic fields possible in neutron stars~\cite{Duncan:1992hi}. \section{Ferromagnetic quark matter} \label{sec:ferro} The presence of the anomaly term $\mu \bm\nabla\varphi\bm{\cdot B}$ in the Lagrangian implies that if a gradient of a pseudoscalar boson is spontaneously generated in the ground state, then the state carries a spontaneous magnetization proportional to $\mu\bm\nabla\varphi$---i.e., it is ferromagnetic.\footnote{The ferromagnetism of an axial domain wall in vacuum has been discussed in Refs.~\cite{Iwazaki:1996xf,Cea:1998ep} using a microscopic approach, in connection with primordial magnetic field generation (see also Ref.~\cite{Forbes:2000gr}). It is worth pointing out that, unlike the vacuum case, where the magnetization is forbidden by $C$ parity~\cite{Voloshin:2001iq}, in the case we consider the $C$ parity is explicitly broken by the background baryon charge density.} Such a phase has been discussed in the literature under the name ``Goldstone boson current'' or ``supercurrent'' phase. This phase becomes favorable in the range of chemical potentials between the CFL and 2SC phases. If we start from the CFL phase and decrease the chemical potential $\mu$, the splitting of the Fermi surfaces, $m_s^2/(2p_F)$, caused by the strange quark mass $m_s$ leads to an instability~\cite{Casalbuoni:2004tb}. A similar instability occurs in the 2SC phase~\cite{Huang:2004bg}.
In the language of the effective theory (chiral perturbation theory with baryon excitations~\cite{Kryjevski:2004jw}), the instability arises when a fermion excitation mode (a baryon) is about to turn gapless~\cite{Alford:1999xc,Alford:2003fq} due to the effective chemical potential, $m_s^2/(2p_F)$, introduced by the strange quark mass. Because of the existence of a bilinear coupling $\bm\nabla\varphi\bm\cdot\bm j$ of the ``supercurrent'' $\bm\nabla\varphi$ of a Goldstone boson to the normal current $\bm j=\psi^\dag\bm v\psi$ of the fermion $\psi$, when the fermion is nearly gapless one can lower the energy by simultaneously generating the Goldstone boson current $\bm\nabla\varphi$ and the ordinary current $\bm j$ in opposite directions~\cite{Son:2005qx,Kryjevski:2005qq,Schafer:2005ym}. For definiteness, we shall discuss the Goldstone boson current state in the kaon-condensed CFL phase (CFLK$^0$)~\cite{Gerhold:2006np}. Most of the discussion is also relevant for the current phase in the CFL phase without kaon condensation~\cite{Gerhold:2006dt} and in the 2SC phase~\cite{Huang:2005pv}. As discussed in Ref.~\cite{Gerhold:2006dt}, to leading order in the strong-coupling constant $\alpha_s$, there is a degeneracy between the ``vector current'' state and the ``axial current'' state. In the vector current state, $X$ and $Y$ rotate in the same direction as one moves along the $z$ direction, and in the axial current state they rotate in opposite directions. We shall assume that the axial current state is favored. In this state, the gauge-invariant order parameter $\Sigma$ varies in space. We should stress that the term ``current state'' is somewhat misleading, as the total current in the ground state is zero. For example, in the axial current state the axial current from the condensate is compensated by the axial current of the gapless fermions. However, in contrast to the conserved currents, there is no reason for the {\em magnetization} to vanish.
According to Ref.~\cite{Gerhold:2006np}, the Goldstone boson current CFLK$^0$ phase appears when the effective chemical potential $\mu_s$ induced by the strange quark mass is in a narrow range \begin{equation} \label{eq:mus-range} 1.605\Delta < \mu_s\equiv \frac{m_s^2}{2p_F} <1.615\Delta. \end{equation} Here $p_F=\mu/3$ is the quark Fermi momentum. The chiral field $\Sigma$ in the CFLK$^0$ phase is \begin{equation} \Sigma = \exp(-i cz Q) \exp\left( \frac{i\pi}2 \lambda_6\right)\exp(-i cz Q) = \exp(-i 2 cz Q)\exp\left(\frac{i\pi}2 \lambda_6\right), \end{equation} where $c$ is a constant determined by energy minimization. There is also a U(1)$_A$ linear background, but it does not contribute to the anomaly that we need (since $\tr Q=0$). It turns out~\cite{Gerhold:2006np} that the minimum of the energy is achieved when $c\approx\Delta$, so one is stretching the applicability of the effective theory. We are interested in rough estimates, so we shall use the effective theory extrapolation. In the ground state, \begin{equation} \Sigma\,\d_z \Sigma^\dagger = \d_z \Sigma^\dagger\, \Sigma = 2icQ, \end{equation} so the WZW term contribution to the Lagrangian is \begin{equation} \frac e{2\pi^2} \mu B \tr(c Q^2) = \frac e{3\pi^2} \mu B c. \end{equation} Setting $c=\Delta$, we find the magnetic moment density (magnetization) \begin{equation} \label{eq:M} M = \frac e{3\pi^2} \mu\Delta = 2.4 \cdot 10^{16}~\textrm{G}\times \left(\frac\mu{1.5 \textrm{ GeV}}\right) \left(\frac\Delta{30\textrm{ MeV}} \right). \end{equation} An important point not to be overlooked in such a calculation of the magnetization is the possible contribution of the near-gapless fermions present in the system. In the particular case of CFLK$^0$ considered here, these fermions are electrically neutral and do not contribute. What is a typical value of the magnetic field generated by this mechanism inside a neutron or quark star?
The local baryon chemical potential is a function of the distance to the center of the star and increases towards the center. Let us assume that it reaches the narrow range in which the Goldstone boson current CFLK$^0$ phase appears~\cite{Gerhold:2006np}, \begin{equation} \label{eq:mu-range} \frac{m_s^2}{2\Delta} (1.615)^{-1} <\frac\mu3< \frac{m_s^2}{2\Delta} (1.605)^{-1}, \end{equation} before reaching its maximum at the star's center. This range maps onto a relatively thin shell inside the star; we denote its mean radius by $R$ and its thickness by $d$ (we estimate below $d\sim 100~\textrm{m}$ for a typical star of radius $R_*\sim 10$ km). Assuming that the magnetization in the shell is uniform, one finds that the magnetic field it creates outside is the same as that of a dipole with moment equal to the total magnetic moment of the shell, $M\cdot 4\pi R^2 d$. Near the surface of the shell this field is of order \begin{equation} \label{eq:H-vs-M} B \sim M \,\frac{d}{R} \end{equation} (within the shell the field is much larger, $B\sim M$, and it is zero inside the non-ferromagnetic region surrounded by the shell -- the shell screens the field out of this region). From Eq.~(\ref{eq:mu-range}), the width of the range in $\mu$ is of the order of 10 MeV. Taking the typical range of variation of $\mu$ in the star to be of order 500 MeV, we estimate $d/R\sim 10/500=0.02$. Using the estimate (\ref{eq:M}) for the magnetization $M$, we find from (\ref{eq:H-vs-M}) that typical fields generated by such a mechanism are of order $B\sim 10^{14}-10^{15}$ G, which is the right order of magnitude to account for the observed magnetic fields of magnetars. \section{Conclusion} In this paper we have discussed the effects of the magnetic field on the ground state of QCD at different values of the baryon density. The key mechanism behind the effects we describe is the axial anomaly.
In the effective low-energy description of QCD -- the chiral Lagrangian for the Goldstone bosons -- this effect is represented by a term which appears when we gauge the topological (Goldstone-Wilczek) baryon current. On the microscopic level, it is given by the triangle diagram with the baryon, electromagnetic, and axial charge currents at the vertices. We have demonstrated that in a sufficiently strong magnetic field the most stable state with finite baryon number is not nuclear matter, but a $\pi^0$ domain wall. Similarly, at higher baryon densities, the most stable state in a sufficiently strong magnetic field is that of an isoscalar axial ($\eta$ or $\eta'$) domain wall. We have also shown that the states of quark matter with a Goldstone boson current are ferromagnetic, with a magnetization related to triangle anomalies. We estimate the magnetic field generated by such a mechanism in a typical neutron/quark star to be of order $10^{14}-10^{15}~\textrm{G}$, which is a relevant magnitude for neutron star phenomenology. Further work is needed to understand whether such ferromagnetic quark matter exists. In particular, one should understand whether the ``vector current'' or the ``axial current'' state is favored. In addition, one should determine whether the current states are favored over other candidate ground states (for example, the Fulde-Ferrell-Larkin-Ovchinnikov states with multiple plane waves)~\cite{Alford:2007xm}. \acknowledgments We thank T.~D.~Cohen, D.~B.~Kaplan, S.~Reddy and M.~Voloshin for discussions. D.T.S. is supported, in part, by DOE grant No.\ DE-FG02-00ER41132. M.A.S. is supported, in part, by DOE grant No.\ DE-FG02-01ER41195.
\section{Introduction} The puzzle of the recently observed cosmic acceleration, attributed to dark energy making up $70\%$ or more of the energy budget \cite{a1}, is far from being resolved uniquely. In the meantime, cosmologists are confronted with yet another, more intriguing, challenge: to explain the crossing of the so-called phantom divide line $(w_{\Lambda}=-1)$ at a sufficiently late time of cosmological evolution. Some recent analyses \cite{a1},\cite{b1} of the presently available observational data favour the value $w_{de}<-1$ at present, $w_{de}$ being the dark energy equation of state. There is also considerable evidence \cite{c} for a dynamical dark energy equation of state, which has crossed the so-called phantom divide line $w_{\Lambda}=-1$ recently, at a value of the red-shift parameter $z \approx 0.2$. Although the problem thereby becomes more serious and complicated, the puzzle of crossing the phantom divide line also provides some sort of selection rule. The $\Lambda CDM$ model, which is known to suffer from the fine-tuning problem (see \cite{d} for a comprehensive review), can now be ruled out by the requirement of a dynamical state parameter. Further, if the analysis of Vikman \cite{e} is correct, then it is not possible to cross the phantom divide line in a single minimally coupled scalar field theory without violating stability, both at the classical \cite{f} and at the quantum mechanical level \cite{g} (though it has recently been inferred \cite{on} that quantum effects which induce the $w<-1$ phase are stable in the $\phi^4$ model). Thus single minimally coupled scalar field models like quintessence $(w>-1)$ and phantom $(w<-1)$ are to be set aside. Consequently, we are left with somewhat more complicated models. One of these is a hybrid model composed of two scalar fields, viz., quintessence and phantom, usually dubbed the quintom model \cite{h}.
Other models, like the non-minimal scalar-tensor theory of gravity \cite{i}, hessence \cite{j}, and models including higher-order curvature invariant terms \cite{k}, also exist in the literature. \\ The Gauss-Bonnet term is yet another candidate which may be pursued for the purpose. The possibility of crossing the phantom divide line through Gauss-Bonnet interaction has been explored in some recent works \cite{l},\cite{m}. These models are, however, complicated in the sense that either a brane-world scenario \cite{l} or a scalar field and matter coupling \cite{m} is invoked. In this article the possibility of a smooth crossing of the phantom divide line $w_{\Lambda}=-1$ is demonstrated simply by introducing a Gauss-Bonnet-scalar coupling term in the 4-dimensional Einstein-Hilbert action.\\ The Gauss-Bonnet term arises naturally as the leading order of the $\alpha'$ expansion of heterotic superstring theory, where $\alpha'$ is the inverse string tension \cite{n}. The Gauss-Bonnet term is a topological invariant in four dimensions and thus does not contribute to the field equations there. However, the low energy limit of string theory gives rise to the dilatonic scalar field, which is found to be coupled to various curvature invariant terms \cite{o}. The leading quadratic correction gives rise to the Gauss-Bonnet term with a dilatonic coupling \cite{p}. It is therefore reasonable to consider the Gauss-Bonnet interaction in four dimensions with a dilatonic-scalar coupling. Several works with Gauss-Bonnet-dilatonic coupling are already present in the literature \cite{q}.
In particular, important issues like the late-time dominance of dark energy after a scaling matter era (thus alleviating the coincidence problem), the crossing of the phantom divide line, and compatibility with the observed spectrum of the cosmic background radiation have also been addressed recently \cite{km}.\\ In a recent work with Gauss-Bonnet interaction \cite{a}, a solution of the form $a=a_{0}e^{A\sqrt t}$ ($a$ being the scale factor and $A>0$) has been found to satisfy the field equations for different forms (sum of exponentials, sum of inverse exponentials, sum of powers, and even quadratic) of the potential. A solution of the more general form $a=a_{0}e^{A t^f}$, with $A>0$, $0<f<1$, for Einstein's gravity with a minimally coupled scalar field was found in the nineties \cite{r} and was dubbed intermediate inflation. We \cite{a}, on the other hand, observed that such a solution depicts a transition from decelerated to accelerated expansion at a sufficiently late epoch of cosmic evolution, which asymptotically goes over to de-Sitter expansion. Thus, it appeared that such a solution may lead to viable cosmological models of present interest. In this context, a comprehensive analysis has been carried out \cite{b} with such a solution for a generalized k-essence model. It has been observed that the model admits a scaling solution with a natural exit from it at a later epoch of cosmic evolution, leading to late-time acceleration with asymptotic de-Sitter expansion. The corresponding scalar field has also been found to behave as a tracker field \cite{s}, thus avoiding the cosmic coincidence problem. \\ In the present work, we show that Gauss-Bonnet-dilatonic scalar coupling with Einstein's gravity in four dimensions admits a solution of the general form $a=a_{0}e^{A t^f}$, $A>0$, $0<f<1$, which is capable of crossing the phantom divide line twice in the recent epoch, once from above and once from below.
Since the crossing is transient, we may conclude that it does not show any pathological behaviour like a Big-Rip \cite{f}, at least at the classical level. \section{The Model with Gauss-Bonnet Interaction} We start with the following action containing the Gauss-Bonnet interaction \begin{equation} S=\int d^4x\sqrt{-g}\left[\frac{R}{2\kappa^2}+ \frac{\Lambda(\phi)}{8}G(R)-\frac{1}{2}\phi_{,\mu}\phi^{,\mu}-V(\phi)+L_{m}\right], \end{equation} where \[G(R)=R^2-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\] is the Gauss-Bonnet term, which appears in the action with a coupling parameter $\Lambda(\phi)$, and $L_{m}$ is the matter Lagrangian. For the spatially flat Robertson-Walker space-time \[ds^2=-dt^2+a^2(t)[dr^2+r^2 d\theta^2+r^2 \sin^2\theta d\phi^2],\] the field equations in terms of the Hubble parameter $H=\frac{\dot a}{a}$ are \begin{equation} 2\dot H+3H^2=-[\frac{1}{2}\dot\phi^2-V(\phi)+2\Lambda'\dot\phi(H\dot H+H^3)+(\Lambda'\ddot\phi+\Lambda''\dot\phi^2)H^2+p_{m}]=- (p_{de}+p_{m}), \end{equation} \begin{equation} 3H^2=[\frac{1}{2}\dot \phi^2+V(\phi)-3\Lambda'\dot{\phi}H^3+\rho_{m} ]= (\rho_{de}+ \rho_{m}), \end{equation} in the units $\kappa^2 (= 8\pi G) = \hbar = c =1$. In our analysis the Gauss-Bonnet-scalar interaction plays the role of dark energy, for which the suffix ($de$) has been introduced. Thus, $p_{de}$ and $\rho_{de}$ are the effective pressure and energy density generated by the Gauss-Bonnet-scalar interaction, while $p_{m}$ and $\rho_{m}$ are the pressure and energy density of the background matter distribution, respectively. The background matter satisfies \begin{equation} \rho_{m}=\rho_{i}a^{-3(1+w_{m})}, \end{equation} where $\rho_{i}$ is a constant and $w_{m}$ is the state parameter of the background matter. In addition we have the $\phi$-variation equation \[ \ddot\phi+3H\dot\phi+V'=3\Lambda'H^2(\dot H+H^2), \] which is not an independent equation and will not be required in our analysis.
In the above, an over-dot and a dash ($\prime$) stand for differentiation with respect to the proper time $t$ and the field $\phi$, respectively. Now, in view of equations (2) through (4), we are required to solve for $a, \phi, V(\phi), \Lambda(\phi), p_{m}$ and $\rho_{m}$, which requires three additional assumptions. Firstly, we assume that the Universe is filled with cold dark matter with the equation of state $p_{m} = 0$, while the second assumption is the one made previously in \cite{a}, viz., \begin{equation} \Lambda'\dot\phi= \lambda, \end{equation} where $\lambda$ is a constant. This, as indicated in \cite{a}, is physically reasonable, since it implies that the Gauss-Bonnet coupling parameter $\Lambda(\phi(t)) = \lambda t$ grows in time, so as to contribute at the later epoch of cosmological evolution. In view of the above assumption the field equations (2) through (4) take the form \begin{equation} 2\dot H+3H^2=-[\frac{1}{2}\dot \phi^2-V(\phi)+2\lambda H\dot H+2\lambda H^3]=- p_{de}, \end{equation} \begin{equation} 3H^2=[\frac{1}{2}\dot \phi^2+V(\phi)-\lambda H^3+\rho_{m}]=(\rho_{de}+\rho_{m}), \end{equation} and \begin{equation}\rho_{m}=\rho_{i} a^{-3}.\end{equation} Now, for our third assumption, we start from the ansatz \begin{equation} H=\frac{f}{nt^{1-f}},\end{equation} with $0 < f < 1$ and $n = A^{-1} >0$, which leads to the form of the scale factor mentioned in the introduction.
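As a quick consistency check (not part of the original analysis), one can verify numerically that the ansatz (9) is just $\dot a/a$ for the scale factor $a=a_{0}e^{t^f/n}$; the values $f=0.5$, $n=2$, $a_0=1$ used below are merely test inputs.

```python
import math

F, N, A0 = 0.5, 2.0, 1.0   # test values for the parameters f, n, a_0

def a(t):
    """Scale factor a(t) = a_0 exp(t^f / n)."""
    return A0 * math.exp(t ** F / N)

def H_ansatz(t):
    """Hubble parameter from the ansatz (9): H = f / (n t^{1-f})."""
    return F / (N * t ** (1.0 - F))

# check H = adot/a by a central finite difference at several times
for t in (1.0, 5.0, 13.0):
    h = 1e-6
    H_num = (a(t + h) - a(t - h)) / (2 * h * a(t))
    assert abs(H_num - H_ansatz(t)) < 1e-8
print("the ansatz H = f/(n t^{1-f}) reproduces adot/a for a = a_0 exp(t^f/n)")
```

The same check goes through for any $0<f<1$; only the exponent in $H$ changes.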
Thus the complete set of solutions is given by \[a=a_{0}\exp(\frac{t^f}{n}) ;~~\rho_{m}=\frac{\rho_{i}}{a_{0}^3}\exp(-\frac{3}{n}t^f);~~p_{de}=\frac{2f(1-f)}{nt^{(2-f)}} -\frac{3f^2}{n^2t^{2(1-f)}};~~\rho_{de}=\frac{3f^2}{n^2 t^{2(1-f)}}-\frac{\rho_{i}}{a_{0}^3 \exp(\frac{3}{n}t^f)};\] \[w_{de}= a_{0}^3\left(\frac{2nf(1-f)t^{-f}-3f^2}{3a_{0}^3f^2- \rho_{i}n^2t^{2(1-f)}\exp(-\frac{3}{n}t^f)}\right);\] \begin{equation}\rho_{de}+3p_{de}= \frac{6f(1-f)}{ n t^{(2-f)}}-\frac{6f^2}{n^2t^{2(1-f)}} -\frac{\rho_{i}}{a_{0}^3\exp(\frac{3}{n}t^f)};~~ \rho_{de}+p_{de}=\frac{2f(1-f)}{ n t^{2-f}}-\frac{\rho_{i}}{a_{0}^3\exp(\frac{3}{n}t^f)};~~ \end{equation} \[\dot\phi^2=\frac{\lambda f^3}{n^3 t^{3(1-f)}}+ \frac{2\lambda f^2(1-f)}{n^2 t^{3-2f}}+\frac{2f(1-f)}{n t^{2-f}}- \frac{\rho_{i}}{a_{0}^3 \exp(\frac{3}{n}t^f)},\] \[V=\frac{3\lambda f^3}{2n^3 t^{3(1-f)}}-\frac{\lambda f^2(1-f)}{n^2 t^{3-2f}}+\frac{3f^2}{n^2 t^{2(1-f)}}-\frac{f(1-f)}{n t^{2-f}}- \frac{\rho_{i}}{2a_{0}^3 \exp(\frac{3}{n}t^f)}.\] The above set of solutions (10) indicates that such a model of the Universe admits an early deceleration, but in the course of evolution it starts accelerating, since the strong energy condition is violated, $\rho_{de}+3p_{de}<0$. Further, the dark energy equation of state also admits the possibility of crossing the $w_{\Lambda} = -1$ line, since a transient violation of the weak energy condition, $\rho_{de}+p_{de} < 0$, is seemingly possible. Finally, the equation of state asymptotically touches the $w_{\Lambda} = -1$ line from above and behaves as a cosmological constant. To show such behavior graphically, let us express the state parameter $w_{de}$ in terms of the red-shift parameter $z$, defined as \[1+ z=\frac{a(t_{0})}{a(t)}=\exp[\frac{1}{n}(t_{0}^f-t^f)],\] where $a(t_{0})$ is the present value of the scale factor, while $a(t)$ is its value at some earlier time $t$, when the light was emitted from a cosmological source.
Thus, \begin{equation} t^f=t_{0}^f - n \ln(1+z).\end{equation} In view of equation (11), $w_{de}$ can be expressed as \begin{equation} w_{de}= \frac{2nf(1-f)-3f^2[t_{0}^{f}-n\ln{(1+z)}]}{3f^2[t_{0}^{f}-n\ln{(1+z)}] - \rho_{i} n^2[t_{0}^{f}-n\ln{(1+z)}]^{\frac{2-f}{f}} \exp{(-\frac{3}{n}[t_{0}^{f}-n\ln{(1+z)}])}},\end{equation} where $a_{0}$ has been set equal to one without any loss of generality. Let us now choose $f=0.5$. The motivation for choosing such a value of $f$ is twofold. First, it is impossible to find an explicit form of the potential $V = V(\phi)$ otherwise. Further, since $n$ has the dimension of $t^f$, the parameter $n^2$ then carries the convenient dimension of time. If we now adopt the present value of the Hubble parameter $H_{0}^{-1}$ and the age of the Universe $t_{0}$ as \[H_{0}^{-1}= \frac{9.78}{h}~Gyr, ~~ t_{0}=13~Gyr,\] then, for $h=0.66$, $n$ can be found from the ansatz (9) as $n=2.0552$. Further, taking the present value of the matter density parameter $\Omega_{mo}= 0.26$, we find in view of solution (10), \[\Omega_{mo}= \frac{\rho_{mo}}{\rho_{co}}= \rho_{i}\left(\frac{H_{0}^{-2}\exp(-\frac{3}{n}t_{0}^f)}{3}\right)=0.26,\] where $\rho_{mo}$ and $\rho_{co}$ are the present values of the matter density and the critical density, respectively. Thus we find \[n^2 \rho_{i}= 2.897.\] Noting that in this model the red-shift parameter does not go beyond the value $z=4.78$, we plot the dark energy equation of state parameter $w_{de}$ versus the red-shift parameter $z$ in figure (1). It is apparent that the phantom divide line $w_{\Lambda}=-1$ has been crossed twice, once from above at $z \approx 1.92$ and then from below at $z \approx 0.39$. Such a double crossing of the phantom divide line is devoid of any sort of pathological behaviour.
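The double crossing can be checked directly from Eq.~(12). The short scan below uses the parameter values quoted in the text ($f=0.5$, $n=2.0552$, $t_{0}=13$ Gyr, $n^2\rho_{i}=2.897$, $a_0=1$); the step size of the redshift grid is an arbitrary choice.

```python
import math

F, N, T0 = 0.5, 2.0552, 13.0
N2RHOI = 2.897                    # n^2 * rho_i, fixed by Omega_mo = 0.26

def w_de(z):
    """Dark energy equation of state, Eq. (12) with a_0 = 1."""
    T = T0 ** F - N * math.log(1.0 + z)      # T = t^f, Eq. (11)
    num = 2 * N * F * (1 - F) - 3 * F ** 2 * T
    den = 3 * F ** 2 * T - N2RHOI * T ** ((2 - F) / F) * math.exp(-3 * T / N)
    return num / den

# locate sign changes of w_de + 1 on a redshift grid
zs = [i * 0.001 for i in range(1, 4500)]
crossings = [0.5 * (z1 + z2) for z1, z2 in zip(zs, zs[1:])
             if (w_de(z1) + 1) * (w_de(z2) + 1) < 0]
print(crossings)   # two crossings, close to the values quoted in the text
```

The scan finds exactly two crossings, near the quoted $z\approx 0.39$ and $z\approx 1.92$ (small shifts arise from the rounded input constants).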
\begin{figure} [ptb] \begin{center} \includegraphics[ height=2.034in, width=3.3797in] {w1.eps} \caption{The state parameter $w_{de}(z)$ is plotted against the red-shift parameter $z$ (with $f=0.5, h=0.66, t_{0}=13~Gyr, \Omega_{mo}=0.26$). A smooth double crossing of the cosmological constant barrier is observed at a sufficiently late epoch, at $z\approx 1.92$ from above and $z\approx 0.39$ from below.} \end{center} \end{figure} To check how far our present model fits the standard $\Lambda CDM$ model, we also make the luminosity distance-redshift and distance modulus-redshift plots. For the $\Lambda CDM$ model, the relation between the luminosity distance and the redshift is \[H_{0}dL=(1+z)\int_{0}^{z} \frac{dz}{\sqrt{0.74+0.26(1+z)^3}},\] while the corresponding relation in the present model is \[H_{0}dL=\frac{1+z}{\sqrt t_{0}}\int_{0}^{z}[\sqrt t_{0}-n \ln{(1+z)}]dz,\] with $t_{0} = 13$ Gyr and $n = 2.055$. The plot (figure 2) shows a perfect fit between the two models up to $z = 3.5$, with a little discrepancy thereafter. Since the luminosity distance has already been expressed as a function of redshift, the relation between the distance modulus and the redshift may be found in view of the equation \[m-M=5\log _{10}(\frac{dL}{Mpc})+25,\] where $m$ and $M$ are the apparent and absolute bolometric magnitudes, respectively. However, since we use $H_{0}dL$ instead, our relation is slightly modified to \[m-M=5\log _{10}(DL)+31,\] where $DL = H_{0} dL$. The plot (figure 3) demonstrates that the two models are practically indistinguishable. \begin{figure} [ptb] \begin{center} \includegraphics[ height=2.034in, width=3.3797in] {l1.eps} \caption{The fit is almost perfect up to $z = 3.5$.
A little discrepancy is observed thereafter, as the $\Lambda$CDM model (blue) slightly takes over the present model (red).} \end{center} \end{figure} \begin{figure} [ptb] \begin{center} \includegraphics[ height=2.034in, width=3.3797in] {d1.eps} \caption{The fit is absolutely perfect and the two models (blue for $\Lambda$CDM and red for the present model) are practically indistinguishable.} \end{center} \end{figure} We can now make an even more convenient choice of the parameters of the theory: $f=0.5$ and $t_{0} = 13$ Gyr as before, but with $n=2$, for which the ansatz (9) gives $H_{0}^{-1}=14.42$ Gyr, corresponding to $h\approx 0.68$. The scale factor now takes the simple form $a = \exp{\frac{\sqrt{t}}{2}}$. With these values one finds \[n^2 \rho_{i}= 3.09,\] for $\Omega_{mo} = 0.24$. The plot (figure 4) is almost the same as before, with a transient double crossing. The other plots, viz.\ luminosity distance versus redshift (figure 5) and distance modulus versus redshift (figure 6), show an even better fit with the standard $\Lambda$CDM model than before. \begin{figure} [ptb] \begin{center} \includegraphics[ height=2.034in, width=3.3797in] {w2.eps} \caption{The state parameter $w_{de}(z)$ is plotted against the redshift parameter $z$ (with $f = 0.5, h \approx 0.68, t_{0} = 13~Gyr, \Omega_{mo} = 0.24$). A smooth double crossing of the cosmological constant barrier is observed at a sufficiently late epoch, at $z\approx 2.085$ from above and $z\approx 0.46$ from below.} \end{center} \end{figure} \begin{figure} [ptb] \begin{center} \includegraphics[ height=2.034in, width=3.3797in] {l2.eps} \caption{The fit is perfect up to $z = 3.6$. 
The $\Lambda$CDM model (blue) slightly overtakes the present model (red) thereafter.} \end{center} \end{figure} \begin{figure} [ptb] \begin{center} \includegraphics[ height=2.034in, width=3.3797in] {d2.eps} \caption{The fit is absolutely perfect and there is practically no way to distinguish the $\Lambda$CDM model (blue) from the present one (red).} \end{center} \end{figure} Thus, we observe that with the age $t_{0} = 13$ Gyr and $0.66 \le h \le 0.68$, such a transient crossing of the phantom divide line is permissible for a present value of the dark energy density parameter $\Omega_{de}|_{present} \ge 0.74$. In order to accommodate a higher value of the age of the Universe, say $t_{0} = 13.73$ Gyr as suggested by Spergel et al \cite{t}, one has either to go almost to the lowest limiting value $h \approx 0.61$ \cite{u} or to accept a much higher value of the present dark energy density parameter, $\Omega_{de}|_{present} > 0.78$; otherwise, the state parameter versus redshift plot shows certain discontinuities. We recall that, in order to simplify the field equations considerably, we made one important assumption, viz.\ equation (5). Relaxing this assumption might remove such discontinuities as well. We propose to study this in a future communication.\\ So far we have remained silent about the form of the potential, simply because, despite the most convenient choice of the parameter, $f = 0.5$, it is still impossible to find an analytical solution for $\phi$ in view of the solution (10). As a result, the form of the potential as a function of $\phi$ remains obscure. However, we can plot the potential as a function of time by choosing our second case, $n = 2$, for further simplification. It is important to note that although the results obtained so far are independent of the value and sign of $\lambda$, the form of the potential depends largely on it. 
In the following we make three such plots (with the help of the ``Manipulate'' function of Mathematica 6) to show how the form of the potential changes with different values of $\lambda$, ranging from negative to large positive. \begin{figure} [ptb] \begin{center} \includegraphics[ height=2.034in, width=3.3797in] {V1.eps} \caption{The form of the potential as a function of time for $\lambda \le -9$. Note that it remains negative throughout the evolution.} \end{center} \end{figure} \begin{figure} [ptb] \begin{center} \includegraphics[ height=2.034in, width=3.3797in] {V2.eps} \caption{The form of the potential for $\lambda \approx 65$. Note that it is zero at the present epoch but tends to grow in the future.} \end{center} \end{figure} \begin{figure} [ptb] \begin{center} \includegraphics[ height=2.034in, width=3.3797in] {V3.eps} \caption{The form of the potential for $\lambda = 200$. Note that the form is appreciably different.} \end{center} \end{figure} \section{Concluding remarks} Altogether, we have obtained a late-time transient crossing of the phantom divide line, first from above and more recently from below, starting from the inclusion of a Gauss-Bonnet-dilatonic scalar coupling term in the standard Einstein-Hilbert action in four dimensions. Since the crossing is transient, such a double crossing is free from any sort of pathological behaviour both at the classical \cite{f} and the quantum mechanical \cite{g} levels. The striking feature of the model lies in its indistinguishability from the standard $\Lambda$CDM model in terms of the luminosity distance--redshift and, even more so, the distance modulus--redshift curves. To discriminate between the two models we therefore require independent observations of the dark energy equation of state $w_{de}$. Only if $w_{de}$ is truly found to be dynamical and has really undergone a recent crossing of the phantom divide line can we definitely distinguish the present model from the standard $\Lambda$CDM model. 
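The near-degeneracy with $\Lambda$CDM noted above can be illustrated by a quick numerical sketch of the two luminosity distance--redshift relations quoted earlier. One caveat: the $\Lambda$CDM integrand is taken here in the standard flat-universe form $1/\sqrt{0.74+0.26(1+z')^{3}}$.

```python
import math

t0, n = 13.0, 2.055               # Gyr; first parameter set of the text

def dl_lcdm(z, steps=2000):
    """H0*dL for flat LCDM with Omega_m = 0.26 (midpoint-rule quadrature)."""
    dz = z / steps
    s = sum(dz / math.sqrt(0.74 + 0.26 * (1.0 + (i + 0.5) * dz) ** 3)
            for i in range(steps))
    return (1.0 + z) * s

def dl_model(z):
    """H0*dL for the present model; the integral is elementary:
    int_0^z [sqrt(t0) - n ln(1+z')] dz' = sqrt(t0)*z - n[(1+z)ln(1+z) - z]."""
    integral = math.sqrt(t0) * z - n * ((1.0 + z) * math.log(1.0 + z) - z)
    return (1.0 + z) / math.sqrt(t0) * integral

for z in (0.5, 1.0, 2.0, 3.5):
    print(z, round(dl_lcdm(z), 3), round(dl_model(z), 3))
```

Over $0 < z \le 3.5$ the two curves agree to within a few per cent, with $\Lambda$CDM lying slightly above the present model towards the high-redshift end, consistent with figures 2 and 5.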
It is highly interesting that a smooth transient crossing of the phantom divide line is allowed for both negative and positive $(\lambda \lessgtr 0)$ types of Gauss-Bonnet-scalar interaction. Figures (7), (8) and (9) also reveal that this remains true even for different forms of the potential. Such a transient crossing, independent of the value of $\lambda$, also signals that it might be possible to carry out the same treatment even for a single scalar field model. \textbf{Acknowledgement}: Acknowledgement is due to the Dipartimento di Scienze Fisiche, Universit\`{a} degli studi di Napoli Federico II, for their hospitality, to TRIL (ICTP) for financial assistance, and to Prof.\ Claudio Rubano for some illuminating discussions.
\section{Introduction \label{intro.sec}} W3 (Westerhout 3) is perhaps the most active region of current star formation in the nearby Galaxy. Extending 30~pc along the edge of a $M \simeq 5 \times 10^4$~M$_\odot$ giant molecular cloud (GMC), the star-forming complex has dozens of embedded young massive stars producing a variety of pre-stellar condensations, hot molecular cores, hypercompact to small H{\sc II} regions, maser clusters, and molecular outflows \citep[e.g.,][]{Lada78, Reid80, Dreher81, Tieftrunk97, Chen06}. Its infrared sources have an integrated luminosity of several~$\times~10^5$~L$_\odot$. Situated just east of W3 are the older IC~1795 and IC~1805 clusters, the latter lying within the enormous W4 superbubble/chimney structure blown by generations of massive stars. The W4--IC~1795--W3 complex is widely considered to be an exemplar of sequential triggered star formation \citep{Lada78, Oey05}. Recent SCUBA observations of the W3 GMC find a higher percentage of the gas mass gathered into dense molecular clumps at the eastern edge compared to the undisturbed parts of the W3 GMC, supporting this triggering scenario \citep{Moore07}. A detailed description of the W3 and W4 complexes and a thorough review of the literature are given by \citet{Megeath07}. The richest site of massive star formation in W3 is the W3~Main cluster of embedded OB stars, dominated by the very young and luminous IRS~4 and IRS~5 sources. IRS~5 lies at the center of a 0.1~pc concentration of massive stars resembling a nascent counterpart of the Orion Trapezium \citep{Megeath05}. W3(OH) to the southeast and W3~North to the north have massive stars but appear less active than W3~Main. The distance to the complex is accurately measured from maser kinematics to be 2.0~kpc \citep{Xu06, Hachisuka06}. Despite the intense study of W3 at radio, millimeter, and infrared wavelengths, little is known about its low mass stellar population. 
For example, a $JHK$ near-infrared (NIR) survey of $5\arcmin \times 5\arcmin$ in W3~Main reveals $\sim 40$ sources with $K$-band excesses indicative of Class~I--II pre-main sequence (PMS) stars with disks \citep{Ojha04}. Hundreds of other stars are detected, but infrared photometry cannot discriminate disk-free Class~III PMS stars from the strongly contaminating population of unrelated Galactic field stars (mostly red giants). A new mid-infrared (MIR) photometric survey of W3~Main, IC~1795, and W3(OH) with the {\it Spitzer Space Telescope} helps to identify cluster members \citep{Ruch07}, but it suffers confusion from three effects: foreground and background Galactic field stars \citep{Benjamin05}, bright diffuse emission produced by heated dust around the H{\sc II} regions \citep{Povich07}, and extragalactic objects with MIR excesses \citep{Harvey07}. X-ray surveys of young stellar clusters (YSCs) with the {\it Chandra X-ray Observatory} are surprisingly efficient at detecting low mass PMS populations, even at distances around 2~kpc and at obscurations $10<A_V<150$ mag typical for W3 stars. PMS X-ray emission arises primarily from violent magnetic reconnection events, similar to solar flares but far more powerful, and is largely independent of circumstellar disks or accretion \citep[see reviews by][]{Feigelson99, Feigelson07}. Luminous and spectrally hard X-ray flares are present throughout the PMS phases of Class~I--II--III at levels $10^2-10^3$ above that seen in old disk populations \citep{Preibisch05a}, so relatively few Galactic disk interlopers appear in X-ray samples. These field star X-ray sources and extragalactic contaminants are easily removed \citep{Getman06}. Due to a poorly-understood statistical association between X-ray luminosity and PMS stellar mass \citep{Preibisch05b,Guedel07}, a flux-limited X-ray observation of a young stellar cluster will be roughly complete down to a corresponding mass limit. 
Taking together these properties of X-ray studies, we find that X-ray surveys at sufficiently high spatial resolution and sensitivity provide uniquely rich, largely disk-unbiased, mass-limited, and nearly uncontaminated samples of PMS stars in both embedded and unobscured YSCs. These samples complement MIR surveys obtained with the {\it Spitzer Space Telescope}, which generally extend down to lower masses (including the brown dwarf regime) but cannot readily discriminate disk-free PMS stars from field stars. {\it Spitzer} thus detects more disky Class~0-I-II systems while {\it Chandra} effectively samples Class~III systems in addition to many Class I and II stars. The X-ray samples are useful for various astrophysical purposes such as probing the stellar Initial Mass Function, protoplanetary disk evolution, and magnetic activity. In an early {\it Chandra} study, two $\sim$20~ks exposures of a $\sim 300$~arcmin$^2$ field in W3~Main revealed 236 X-ray sources \citep{Hofner02}. Several are associated with massive stars ionizing H{\sc II} regions but most do not have counterparts in $JHK$ images. We report here an extension of those efforts with a {\it Chandra} mosaic of 7 exposures totaling $\sim 230$~ks over $\sim 800$~arcmin$^2$, spanning much of the W3 star forming complex (Figure~\ref{fig:ACIS_mosaic}). A preliminary discussion of this mosaic, a {\it Chandra} Large Project, is given by \citet{Townsley06}. Over 1300 X-ray sources are seen; a full listing and study of their properties will be presented in a separate paper. For each source, {\it Chandra} observations provide a sub-arcsecond position, line-of-sight absorption, and rough mass estimate in addition to magnetic activity characteristics. We discuss here insights into the global structure and origins of the W3 stellar populations derived from the new {\it Chandra} data. 
The brief presentation of the observations in \S\ref{sec:obs} will be expanded in a forthcoming paper with complete source lists similar to our group's recent studies of the Cep~OB3b \citep{Getman06}, Pismis~24 \citep{Wang07a}, M~17 \citep{Broos07}, RCW~49 \citep{Tsujimoto07}, and Rosette star forming region \citep{Wang07b, Wang07c} YSCs. The three well-studied star forming regions in W3 are described and contrasted in \S\ref{sec:Xray}, and explanations for their origin are considered in \S\ref{sec:diversity}. Section~\ref{sec:W3Main.origin} considers in more detail the implications of the W3~Main results for astrophysical models of star cluster formation. \section{Observations and data reduction \label{sec:obs}} The X-ray observations were made with the Advanced CCD Imaging Spectrometer (ACIS) camera on board the {\it Chandra X-ray Observatory} \citep{Weisskopf02}. Three contiguous regions of the W3 star forming complex were observed with the $17\arcmin \times 17\arcmin$ ACIS imaging array (ACIS-I) for roughly 80~ks each, divided into seven exposures: three on W3~Main, one on W3(OH), and three on W3~North. Except for the two $\sim$20-ks W3~Main exposures discussed by \citet{Hofner02}, all were obtained between January and November 2005. Data analysis followed procedures described in our group's previous ACIS studies of young stellar clusters \citep[e.g.,][]{Townsley03, Getman05, Wang07a, Broos07}. X-ray events were corrected for CCD charge transfer inefficiency \citep{Townsley02} and the data were cleaned in a variety of ways \citep{Townsley03}. The event data were corrected to the {\it Hipparcos} reference frame by alignment of bright on-axis X-ray sources with 2MASS stars then registered to a common astrometric reference frame based on {\it Chandra} boresights. A preliminary source list was identified from the merged observations using a wavelet-based algorithm \citep{Freeman02}. 
Individual positions are generally accurate to $\pm 0.4$\arcsec\/ and double sources can be resolved at separations $\ga 0.7$\arcsec. Images were created in soft (0.5--2~keV) and hard (2--7~keV) bands and were corrected for exposure variations, then adaptively smoothed using the CIAO tool {\it csmooth} \citep{Ebeling06} to make the mosaic shown in Figure~\ref{fig:ACIS_mosaic}. \section{The X-ray stellar populations in W3 \label{sec:Xray}} \subsection{W3 Main} W3~Main is a massive YSC, famous for containing every known type of radio H{\sc II} region from hypercompact to diffuse, 0.01 to 1~pc in diameter, with ages $10^3$--$10^6$~yrs \citep{Tieftrunk97}. These H{\sc II} regions are embedded in a complex, highly clumped molecular environment, with the younger (smaller) regions associated with the densest clumps \citep[][and references therein]{Megeath07}. Most radio and NIR studies have concentrated on the dense central regions of the cluster, hence the complex was described as only $\sim$4$\arcmin$ in size \citep{Tieftrunk97}. Figure~\ref{fig:W3Main_small} shows this central region of the W3~Main cluster in the mosaicked {\it Chandra} image. X-ray point sources in our preliminary source list are marked with blue circles; additional faint X-ray sources are likely to emerge in our complete analysis, which will involve image reconstructions of crowded regions \citep{Townsley06b}. The radio H{\sc II} regions detailed in \citet{Tieftrunk97} are shown schematically as magenta ovals. X-ray sources that match the bright infrared sources from Table~1 of \citet{Ojha04} are marked with green circles and labeled. Six of these sources were found in the earlier {\it Chandra} study of this region \citep{Hofner02}. Two sources in the IRS5 region match NIR sources in Table~1 of \citet{Megeath05} and are labeled using their nomenclature. 
The wide-field {\it Chandra} mosaic (Figure~\ref{fig:ACIS_mosaic}) shows that the W3~Main YSC extends well beyond this central region; it is rich and roughly spherical, resembling the clusters that dominate many massive star forming regions. Its full extent is quite large; over 900 X-ray sources are distributed over the 17\arcmin\/ (10~pc) ACIS-I field centered around ($\alpha,\delta$)=(02:25:41,+62:05.9) close to W3~IRS5. The cluster is so large that for some stars, it is difficult to distinguish membership in W3~Main from membership in the less absorbed IC~1795 cluster to the southeast. Figure~\ref{fig:W3Main_smooth} shows how the dense concentration of lower mass stars in the inner $\sim 1\arcmin$\/ centered around IRS~5, known from earlier NIR observations \citep{Megeath96,Tieftrunk98,Ojha04}, extends smoothly with the stellar surface density decreasing a factor of $\sim 300$ out to a radius of 5\arcmin\/ (3~pc). When the X-ray luminosity function is corrected for contaminating sources and limited sensitivity, and then scaled to the well-characterized Orion Nebula Cluster \citep[e.g.][]{Wang07a}, the inferred cluster population will be several thousand stars. The stellar distribution in the central region is nearly but not entirely symmetrical; an excess of stars is seen $\sim 1$\arcmin\/ NW of IRS~5 around the W3~D and W3~H UCHIIs compared to a symmetrical region SE of IRS~5 where no massive stars are present. The finding we emphasize here is not the previously known concentration of high mass stars in the cluster core but the richness, extent, and symmetrical appearance of the W3~Main stellar cluster on scales of several parsecs. The vast majority of these sources are low-mass PMS stars with only minor contamination by Galactic field stars or extragalactic sources \citep[e.g.][]{Wang07a}. 
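As an aside on the scales just quoted, the angular-to-physical conversion is a minimal small-angle calculation, assuming only the 2.0~kpc maser distance adopted in this paper:

```python
import math

D_PC = 2000.0                     # adopted distance to W3: 2.0 kpc

def arcmin_to_pc(theta_arcmin, d_pc=D_PC):
    """Projected size subtended by an angle theta at distance d (small-angle)."""
    return d_pc * math.radians(theta_arcmin / 60.0)

print(arcmin_to_pc(17.0))         # ACIS-I field width: ~9.9 pc ("10 pc")
print(arcmin_to_pc(5.0))          # cluster radius:     ~2.9 pc ("3 pc")
```

This reproduces the 17\arcmin\/ $\approx$ 10~pc field width and 5\arcmin\/ $\approx$ 3~pc cluster radius used in the text.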
While the stellar concentration in the central portion can be seen in $K$-band images \citep{Megeath96,Tieftrunk98,Ojha04}, the full extent and simple structure of W3~Main cannot be discerned in the NIR due to the combination of patchy obscuration, nebular emission, and Galactic field star contamination. \subsection{W3(OH)} W3(OH) is a rapidly developing UCHII region, seen in the radio as an expanding shell of dense ionized gas around a heavily obscured ($A_V \sim 50$) late-O star \citep{Dreher81, Turner84}. The expansion age is $\sim 2 \times 10^3$~yr although an astrochemical model of the molecular species suggests an age around $10^4-10^5$~yr \citep{Kawamura98, Kim06}. Six arcseconds east of W3(OH) lies the molecular hot core W3(H$_2$O) with three radio continuum peaks, maser emission, and an unusual radio synchrotron jet \citep[][and references therein]{Wyrowski99}. This small complex is sometimes called the Turner-Welch Object. In a NIR image with $K<17.5$, a stellar cluster with $\sim 200$ stars is seen with an elongated $\sim 1\arcmin$\/ distribution around the UCHII region; this is one of the richest groupings of NIR stars within the huge W3/W4/W5 star forming complex \citep{Tieftrunk98, Carpenter00}. NIR data also reveal two smaller clusters northeast of W3(OH) \citep{Tieftrunk98}. The {\it Chandra} image shows that the YSC surrounding W3(OH) is much smaller, sparser, and less symmetrical than W3~Main (Figure~\ref{fig:W3OH}). About 50 absorbed X-ray stars lie in a region $0.5 \times 1$\arcmin\/ ($0.3 \times 0.6$~pc) oriented NE--SW around W3(OH). This cluster is accompanied by two sparse clumps with about 5 and 20 stars, respectively, lying $\sim 0.5-1.5$~pc to the NE of W3(OH), cospatial with small clusters seen in earlier NIR studies \citep{Tieftrunk98}. The young massive star ionizing W3(OH) is clearly detected in our {\it Chandra} observation, at (02:27:03.84, +61:52:24.9). 
It is a surprisingly hard X-ray source; this hard emission allows it to be seen through a large absorbing column ($A_V \sim 75$ mag) inferred from the soft X-ray absorption. The nearby high-mass system W3(H$_2$O), likely powered by a protobinary of early-B stars \citep{Chen06}, is undetected in our X-ray data. \subsection{W3 North} The bright H{\sc II} region G133.8+1.4 = W3~North is less well-studied than the regions considered above. The nebula is excited by an optically visible O6 star, \#102 in the study of IC~1795 by \citet{Ogura76} and \#7044 in the study by \citet{Oey05}. It lies in a molecular cloud environment with density $\sim 600$~cm$^{-3}$ and mass $\sim 230$~M$_\odot$, more than an order of magnitude below values for W3~Main and W3(OH) \citep{Thronson84, Thronson85}. The ionized nebula is as bright as W3~A near IRS~5 in W3~Main; it has a diameter of 2\arcmin\/ with estimated age $\sim 10^5$~yr \citep{vanderWerf90}. We clearly detect the O6 star in our {\it Chandra} observation, at (02:26:49.62, +62:15:35.0). However the {\it Chandra} source distribution around this O star differs dramatically from that seen in either W3~Main or W3(OH): no cluster of PMS stars is found in its vicinity (Figure~\ref{fig:W3North}). The nearest X-ray source is $>35$\arcsec\/ distant and the local source density is consistent with the general level of distributed young stars and contaminants seen $5\arcmin-10$\arcmin\/ away. The absence of a cluster in W3~North was suggested by \citet{Carpenter00} based on NIR imagery, and we strongly confirm this result with our X-ray observations. \section{Interpreting population differences \label{sec:diversity}} We find that three famous massive star-forming sites in the W3 cloud show remarkable variety in their low mass stellar distributions: a rich spherical cluster, an elongated collection of sparse star clumps, and a completely isolated O star. 
Evidence for these differences was provided by earlier infrared studies, but the {\it Chandra} dataset gives a more definitive view of this morphological diversity from its more complete and unbiased sample of the low mass population. We now discuss the origin of these structures. The W3(OH) cluster is an order of magnitude smaller and roughly 20 times less rich than the W3~Main cluster. The patchy distribution of stars, elongated along an axis perpendicular to a vector pointing towards the older IC~1795 cluster, supports a triggered origin due to IC~1795 ionization and wind shock fronts as discussed by \citet{Oey05}. The two small clusters, seen both in NIR \citep{Tieftrunk98} and X-ray images, lie along this same line and could have been triggered by the same shocks. The morphology of the PMS stellar distribution around W3(OH) resembles those seen in small cometary globules \citep{Sugitani95, Getman07} and in larger molecular clouds \citep{Zavagno06, Deharveng06, Broos07} at the edges of H{\sc II} regions. The elongation in stars around W3(OH) appears perpendicular to the axis pointing towards IC~1795, similar to the elongation of the M17~SW stellar distribution which lies along the photodissociation region and perpendicular to the axis pointing towards the M17 central cluster NGC~6618 \citep{Broos07}. The W3(OH) structure has the fragmented and elongated appearance expected from the ``collect and collapse'' scenario of triggered star formation at the edge of an H{\sc II} region \citep{Elmegreen77, Whitworth94, Dale07}. For W3~North, we have a clear demonstration that its ionizing O star is isolated, unaccompanied by a cluster of lower mass stars. The simplest explanation is a runaway O star ejected from a rich cluster in the W3/W4 region. The W3~North radio continuum structure does have a cometary tail on the SSE side of the ionizing star, suggesting a northwesterly motion through the molecular medium \citep{vanderWerf90}. 
This is inconsistent with an origin in W3~Main which would require a northeasterly motion, but may indicate an origin in the older IC~1795 or IC~1805 clusters. Accurate proper motions are needed to test this model\footnote{The NOMAD catalog \citep{Zacharias05} reports very large proper motions ($\sim$200~mas~yr$^{-1}$) for this star based on photographic sky survey plates, but examination of the Digitized Sky Survey using NASA's {\em SkyView} service shows a bright source in the same location in both DSS1 and DSS2, implying that the NOMAD proper motions are erroneous. The region has bright irregular nebular emission, so mistakes can easily be made.}. An alternative explanation, which seems feasible although improbable, is an origin within the local W3~North molecular cloud. Statistical simulations of sparse clusters with random draws from a standard Initial Mass Function show a wide dispersion of maximum stellar masses \citep{Bonnell99} and a few cases of field O stars support occasional formation of massive stars in isolation \citep{deWit05}. In particular, the late-O star ionizing KR~140, lying a degree south of W3~Main in the W3 molecular cloud, may have formed in isolation \citep{Ballantyne00}. Our findings do not support a simple unified origin of W3~North, W3~Main, and W3(OH) as proposed by \citet{Oey05}. In their interpretation, the three regions of high-mass star formation are components of a shell of molecular cloud material triggered into gravitational collapse by the ionization and wind shocks produced by the older IC~1795 star cluster lying east of the W3 molecular cloud. In contrast, we find that only the W3(OH) stellar population has the morphology expected from direct triggering by IC~1795 shock fronts. 
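The footnote's conclusion that the NOMAD proper motions must be erroneous can be checked at a glance with the standard transverse-velocity relation $v_t = 4.74\,\mu[{\rm arcsec\,yr^{-1}}]\,d[{\rm pc}]$~km\,s$^{-1}$ (a sketch; the 2.0~kpc distance is the one adopted in this paper, and the 5~mas\,yr$^{-1}$ comparison value is an illustrative runaway-star figure, not from the text):

```python
def transverse_velocity_kms(mu_mas_per_yr, d_pc):
    """v_t = 4.74 * mu[arcsec/yr] * d[pc] in km/s (standard astrometric relation)."""
    return 4.74 * (mu_mas_per_yr / 1000.0) * d_pc

# NOMAD's ~200 mas/yr at 2 kpc would imply a wildly unphysical space velocity,
# far beyond the tens of km/s typical of runaway O stars:
print(transverse_velocity_kms(200.0, 2000.0))   # ~1900 km/s
print(transverse_velocity_kms(5.0, 2000.0))     # ~47 km/s, a plausible runaway
```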
The other two W3 stellar populations have very different morphologies: the single O star ionizing W3~North either formed in isolation or was dynamically ejected from one of the richer nearby clusters, and the W3~Main cluster has a spherical, centrally condensed appearance that does not reflect the recent passage of a shock. \section{The origin of the W3~Main cluster \label{sec:W3Main.origin}} \subsection{Two critical properties of W3~Main} Two inferences can be made from the morphology shown in Figure~\ref{fig:W3Main_smooth}. These provide strong constraints on the formation process of the rich W3~Main cluster. First, as outlined above and shown in Figure~\ref{fig:W3Main_smooth}, the large-scale sphericity of the cluster implies that the role of triggering by shocks from the older IC~1795 cluster (or by the W4 superbubble further to the east) discussed by \citet{Oey05} is negligible, or at most indirect, in the sense that the star formation did not follow the passage of a localized shock. There is no elongation of the stellar distribution along an East-West axis associated with a shock. W3~Main was either formed independently of an external trigger, or has dynamically evolved so that evidence of its triggered origin has been erased. The centrally concentrated, spherical morphology resembles the distribution of X-ray stars in the Orion Nebula Cluster ionizing the Orion Nebula \citep{Feigelson05}, the NGC~6618 cluster ionizing the M~17 H{\sc II} region \citep{Broos07}, the NGC~2244 cluster ionizing the Rosette Nebula \citep{Wang07b}, and many other YSCs. These stand in contrast to the unconcentrated and elongated stellar distributions attributable to shock triggering in small cometary globules \citep{Sugitani95, Getman07, Ogura07} and in larger molecular clouds \citep{Zavagno06, Deharveng06, Broos07} at the edges of H{\sc II} regions. 
The second inference concerning the origin of W3~Main to be made from the {\it Chandra} image is that at least some of the OB stars---those ionizing the well-studied hypercompact and ultracompact H{\sc II} regions at the core of W3~Main---formed after the bulk of the more widely distributed cluster PMS stars. These H{\sc II} regions have dynamical ages of $10^3-10^5$~yr \citep{Tieftrunk97}\footnote{Under special circumstances, an H{\sc II} region can appear as a UCHII at later times \citep{Franco07}. This requires either that the O star is nearly stationary ($<<1$~km~s$^{-1}$ motion) at the center of a dense molecular core, or that it has entered a second core at a later time. It seems doubtful that this would fortuitously occur for several O stars in the central region of W3, and in any case cannot explain the hypercompact regions in W3~M.}. If the PMS stars had similar ages, they would all be Class~0--I protostars. However, only a few percent of ACIS sources are Class~II, and $<1$\% appear to be Class~0--I in the \citet{Ruch07} dataset of the brighter {\it Spitzer} sources. It is not possible that there exists a vast population of Class~0--I sources undetected by {\it Spitzer} in the outer regions of W3~Main. In the central region around IRS~5, where {\it Spitzer} sensitivity is limited by the bright diffuse emission, \citet{Megeath96} found that no more than 30\% of the NIR sources were Class~I. This implies that most of the PMS stars in W3~Main are Class~III, as in most other young stellar clusters observed with {\it Chandra}, and the age of the low-mass population is $> 0.5$~Myr. Even in clusters rich in Class 0 protostars, such as NGC~1333, many X-ray sources are Class II and III systems \citep{Getman02}. This discrepancy in W3~Main may constitute the best case that a PMS population is much older ($> 0.5$~Myr) than at least part of its associated OB population ($< 0.1$~Myr). 
If the PMS stars are characteristically $10^6$ years old and some of the central OB stars are $< 10^5$ years old, those OB stars must have formed after the lower mass stars. This form of age spread has long been noted in older stellar clusters from studies of HR diagrams \citep{Herbst82, Adams83, Doom85, Shull95, DeGioia01}. A young age has also been indirectly suggested for the Trapezium OB stars in Orion based on the inferred short lifetimes of proplyds in the presence of ultraviolet photoevaporation \citep{ODell98}. The W3~Main OB stars are directly confirmed to be extremely young and still forming based on their very small H{\sc II} regions; this is crucial for establishing that the central OB stars formed after the larger PMS population. \subsection{Implications for the formation of W3~Main} Together, these two results strongly preclude the application of an old and simple model of cluster and high mass star formation (see reviews by Bonnell et al.\ 2007 and Larson 2007). Pre-stellar molecular cloud condensations were traditionally thought to be centrally concentrated with higher densities $\rho$ at the center; e.g., an isothermal equilibrium Bonnor-Ebert sphere. The free-fall time is then shorter at the core, $t_{ff} \propto \rho^{-1/2}$, implying rapid gravitational collapse and fragmentation. Gas quickly falls into the central region where, if Bondi-Hoyle accretion is unimpeded, the more massive protostars tend to grow fastest according to $\dot{M} \propto M^2$. Disk accretion of high mass protostars can be very rapid with $\dot{M} \sim 10^{-4}$~M$_\odot$~yr$^{-1}$ implying full growth in $\sim 10^5$~yr \citep[see review by][]{Cesaroni07}. In these simple scenarios for cluster formation, OB stars concentrated in the cores might be older, but certainly not younger, than the surrounding lower mass PMS stars. 
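The timescale relations invoked in this traditional picture can be made concrete with a small sketch. The density values and mean molecular weight below are illustrative assumptions for molecular gas, not numbers from this paper; the accretion figures are those quoted above:

```python
import math

G = 6.674e-8                      # cm^3 g^-1 s^-2
M_H = 1.67e-24                    # g, mass of a hydrogen atom
YR = 3.156e7                      # s

def t_freefall_yr(n_gas, mu=2.34):
    """t_ff = sqrt(3 pi / (32 G rho)), i.e. t_ff ~ rho^(-1/2), for gas of
    total particle number density n_gas (cm^-3) and mean molecular weight mu."""
    rho = mu * M_H * n_gas        # mass density, g cm^-3
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho)) / YR

# A factor of 100 in density shortens t_ff by exactly a factor of 10:
print(t_freefall_yr(1e5))         # ~1e5 yr, dense core
print(t_freefall_yr(1e3))         # ~1e6 yr, diffuse gas

# Growth time of a 10 Msun star at Mdot ~ 1e-4 Msun/yr, as quoted above:
print(10.0 / 1e-4)                # 1e5 yr
```

The $\rho^{-1/2}$ scaling is why a centrally concentrated condensation collapses first at its core, and the last line recovers the $\sim 10^5$~yr full-growth time cited from \citet{Cesaroni07}.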
However, several more complicated models for cluster and high-mass star formation are consistent with our W3~Main results: \begin{enumerate} \item Star formation in large molecular clouds may occur inefficiently over a prolonged period, perhaps because their dynamics are dominated by supersonic turbulence within which only a small fraction of the molecular material resides in dense cores at a given moment \citep{Tan06, Krumholz07}. The bulk of the stars may form in a quick burst of star formation at the end of the cloud's life, as the star formation rate becomes efficient only when turbulence has subsided and the cloud contracts \citep{Palla00}. Astrophysical issues relating slow and fast star formation in clusters are discussed by \citet{Elmegreen07}. W3~Main exhibits a particular mass-dependency in its extended star formation history in that the majority of lower mass stars appear in the widely distributed older population while only a minority accompany the OB stars at the core. \item The formation of massive OB stars specifically might be delayed with respect to lower mass stars. This delay might occur during the gaseous phase, where the formation of a high density core may be inhibited by the combined effect of many protostellar outflows \citep{Li04}. Or the delay might occur during the stellar dynamical phase, waiting for stars to settle into the core gravitational potential where mergers form massive stars \citep{Bonnell98}. \item Star formation may occur primarily in spatially distributed molecular cores which, only after forming many lower mass stars over an extended time, settle towards the cluster center where densities are sufficiently high to form high-mass stars. A version of this model is described by \citet{McMillan07} as an explanation for mass segregation in massive clusters. \item W3~Main may contain two generations of OB stars, the latter arising from triggering by the growing H{\sc II} regions of the former \citep{Tieftrunk97}. 
The basis for this model is the presence of both diffuse H{\sc II} regions (W3 A, D, H, J, and K in Figure \ref{fig:W3Main_small}) and ultracompact and hypercompact H{\sc II} regions (W3 B, E, F, G, and M). This would be a case of internal triggering by W3 OB shocks rather than external triggering by IC~1795 shocks. \end{enumerate} At present, we cannot differentiate between these models for W3~Main. A useful observation would be high spatial resolution MIR imaging to study disk properties of the lower mass {\it Chandra} stars in the close vicinity of the OB stars (Figure \ref{fig:W3Main_small}). This would reveal whether these concentrated PMS stars are younger than the more widely distributed PMS stars. \section{Conclusion} This paper introduces a new high-resolution X-ray mosaic of the W3 star forming complex, a Large Project of the {\it Chandra X-ray Observatory}. A rich population of $\sim 1300$ young stars is imaged and the three well-known regions of high-mass star formation are shown to have very different populations of low mass stars: W3~Main is a large, rich, nearly spherical cluster; W3(OH) lies in an elongated group of sparse stellar clumps; and W3~North is an isolated O star without low-mass companions. Suggestions of these differences were inferred from earlier infrared studies, but they are more apparent here because the X-ray selection has the advantage of low contamination by the Galactic field population or diffuse interstellar emission, high penetration into molecular environments, and little bias towards stars with massive protoplanetary disks. We emerge from this study with an improved view of star formation in the region. The W3(OH) structures are consistent with a collect-and-collapse triggering process caused by shocks from the older IC~1795 cluster, as previously suggested. The W3~Main cluster, however, does not show the elongated and patchy structure of a recently triggered star cluster and appears to have formed in an earlier episode.
Its PMS population strongly resembles those seen in other {\em Chandra} studies of massive star-forming regions such as those ionizing the Orion, M~17 and Rosette Nebulae. A major difference is that the individual H{\sc II} regions in these other clusters have already merged into a large blister and dispersed their natal clouds. In contrast, the W3~Main OB stars are very recently formed with small individual H{\sc II} regions still embedded in a dense, clumpy molecular medium. Star formation in W3 has proceeded in a prolonged fashion, and apparently with a time-dependent Initial Mass Function. The OB stars exciting the hypercompact and ultracompact HII regions at the center of W3~Main formed more recently than the hundreds of X-ray emitting PMS stars distributed over several parsecs. W3~Main thus becomes a critical testbed for theories of rich cluster formation. \acknowledgements We thank Bruce Elmegreen (IBM) and our colleagues at Penn State -- Patrick Broos, Gordon Garmire, Kostantin Getman, Masahiro Tsujimoto, and Junfeng Wang -- for thoughtful discussions. Patrick Broos and Junfeng Wang additionally provided technical assistance. This work was supported by the {\it Chandra} General Observer grant G05-6143X (PI Townsley) and by the ACIS instrument team contract SV4-74018 (PI Garmire), both issued by the {\it Chandra X-ray Observatory} Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. {\it Facilities:} \facility{CXO (ACIS)} \dataset [ADS/Sa.CXO\#obs/446] {Chandra ObsID 446} \dataset [ADS/Sa.CXO\#obs/611] {Chandra ObsID 611} \dataset [ADS/Sa.CXO\#obs/5889] {Chandra ObsID 5889} \dataset [ADS/Sa.CXO\#obs/5890] {Chandra ObsID 5890} \dataset [ADS/Sa.CXO\#obs/5891] {Chandra ObsID 5891} \dataset [ADS/Sa.CXO\#obs/6335] {Chandra ObsID 6335} \dataset [ADS/Sa.CXO\#obs/6348] {Chandra ObsID 6348}
\section{Introduction} A graph zeta function is a formal power series associated with a finite graph. It enumerates the closed paths of a given length, exposes the primes, or depicts the cycles in a finite graph. The prototype of the graph zeta functions was introduced by Y. Ihara \cite{ihara66} in 1966 from a number-theoretic point of view. J.-P. Serre \cite{serre80} subsequently pointed out that Ihara's zeta can be formulated in terms of finite graphs, and it is now called the Ihara zeta function of a finite graph \cite{hashimoto90, kotanisunada00, sunada88}. In a paper by H. Bass \cite{bass92}, as was implicitly suggested by Ihara \cite{ihara66}, the Ihara zeta was given a determinant expression in terms of the adjacency matrix and the degree matrix of the corresponding graph. This determinant expression is now called the Ihara expression \cite{IIMSS21, ishikawamoritasato22+} (see also \cite{morita20}), and the theorem is called the Bass-Ihara theorem. The Ihara expression is one of the main interests in the study of graph zeta functions, and much research has been devoted to this subject \cite{bartholdi99, choekwakparksato07, foatazeilberger99, IIMSS21, ishikawamoritasato22+, mizunosato04, northshield99, sato07, starkterras96}. It should also be mentioned that the Ihara expression recently gained a new point of view from quantum walk theory \cite{konnosato12}, and thus its significance is increasing in related areas. The subject of the present paper is the Ihara expression for the generalized weighted zeta function. The generalized weighted zeta function was introduced in \cite{morita20} as a single scheme which unifies the graph zetas that appeared in previous studies, for instance, the Ihara zeta \cite{ihara66}, the Bartholdi zeta \cite{bartholdi99}, the Mizuno-Sato zeta \cite{mizunosato04} and the Sato zeta \cite{sato07}.
In general, a graph zeta may have four expressions, called the exponential expression, the Euler expression, the Hashimoto expression and the Ihara expression. It is verified in \cite{morita20} that the first three expressions are equivalent for graph zeta functions. The last two expressions are both determinant expressions; the matrices used in the Ihara expression, the adjacency matrix and the degree matrix, are in general smaller than the edge matrix used in the Hashimoto expression. For known examples of graph zetas, the Ihara expression is obtained by transforming the Hashimoto expression. In this paper, we will show the existence of the Ihara expression for the generalized weighted zeta function by reformulating its Hashimoto expression. In particular, we verify the main theorem for the case where the underlying graph is a finite digraph, which generalizes the developments in previous research. Graph zetas are usually defined via the symmetric digraph of a given finite graph, so it is natural to define them for finite digraphs rather than finite graphs. In addition, a finite digraph in this paper allows multi-arcs and multi-loops, and one will see in the procedure that how one defines the inverse of each arc of a digraph is an unavoidable issue. For this, we can consider two extreme ways. One is the case where all the arcs with the inverse direction to an arc $a$ are defined to be inverses of $a$, and the other is the case where a single arc with the inverse direction, if it exists, is defined to be the inverse of $a$. In \cite{ishikawamoritasato22+}, we treat the former case. In the present article, we treat the latter case. These two cases are natural generalizations of the case where the underlying graph is a finite graph. Throughout this paper, we use the following notation. The ring of integers is denoted by ${\mathbb Z}$. The fields of rational numbers and complex numbers are denoted by ${\mathbb Q}$ and ${\mathbb C}$ respectively.
For a set $X$, the cardinality of $X$ is denoted by $|X|$. The Kronecker delta is denoted by $\delta_{xy}$, which returns $1$ if $x=y$, $0$ otherwise. The symbol $I$ stands for the identity matrix. \section{Preliminaries}\label{section : Preliminaries} \subsection{Graphs and Digraphs}\label{subsection : Graphs and Digraphs} A {\it digraph} is a pair $\Delta=(V, {\cal A})$ of a set $V$ and a multi-set ${\cal A}$ consisting of ordered pairs $(u, v)$ of elements $u, v$ in $V$. If the cardinalities of $V$ and ${\cal A}$ are finite, then $\Delta$ is called a {\it finite} digraph. An element of $V$ (resp. ${\cal A}$) is called a {\it vertex} (resp. an {\it arc}) of $\Delta$. An arc $a=(u, v)$ is depicted by an arrow from $u$ to $v$. An arc $l$ of the form $(u, u)$ is called a {\it loop}, and the vertex $u$ is called the {\it nest} of $l$. The set of loops is denoted by ${\cal L}$. The vertex $u$ is called the {\it tail} of $a$, and $v$ the {\it head} of $a$, which are denoted by ${\frak t}(a)$ and ${\frak h}(a)$ respectively. Let $u, v\in V$. We denote by ${\cal A}_{uv}$ the set $$ \{a\in{\cal A}\mid {\frak t}(a)=u, {\frak h}(a)=v\} $$ of arcs with the tail $u$ and the head $v$. Hence ${\cal L}=\sqcup_{u\in V}{\cal A}_{uu}$. Note that $|{\cal A}_{uv}|>1$ may occur in general. Thus an arc $a\in{\cal A}_{uv}$ is sometimes called a {\it multi-arc}, and the cardinality $|{\cal A}_{uv}|$ is called the {\it multiplicity} of $a$. Similarly, a loop $l\in {\cal A}_{uu}$ may be called a {\it multi-loop}. The cardinality $|{\cal A}_{uu}|$ is called the {\it multiplicity} of $l$. Set $ {\cal A}_{u*}=\sqcup_{v\in V}{\cal A}_{uv} $ and $ {\cal A}_{*v}=\sqcup_{u\in V}{\cal A}_{uv}. $ A digraph $\Delta=(V, {\cal A})$ is called {\it simple} if ${\cal A}_{uu}=\emptyset$ for any $u\in V$ and $|{\cal A}_{uv}|=1$ if ${\cal A}_{uv}\neq\emptyset$. Let ${\cal A}(u, v)={\cal A}_{uv}\cup{\cal A}_{vu}$ denote the set of arcs lying between vertices $u$ and $v$.
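These conventions are easy to make concrete. The following sketch (ours, not part of the paper) represents a finite digraph as a multi-set of tail-head pairs and recovers the multiplicities $|{\cal A}_{uv}|$ and the loop set ${\cal L}$; the arc data happen to match the digraph of the worked example later in the paper.

```python
from collections import Counter

# A finite digraph as a vertex set and a multi-set of arcs (tail, head);
# a repeated pair is a multi-arc, and (u, u) is a loop nested at u.
V = [1, 2, 3]
A = [(1, 2), (2, 1), (2, 1), (2, 3), (3, 2), (3, 1), (1, 1), (1, 1)]

mult = Counter(A)                          # mult[(u, v)] = |A_uv|
loops = [a for a in A if a[0] == a[1]]     # L = disjoint union of the A_uu

print(mult[(2, 1)])   # 2: (2, 1) is a multi-arc of multiplicity 2
print(len(loops))     # 2: two loops nested at vertex 1
```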
A digraph $\Delta$ is called {\it connected} if ${\cal A}(u, v)\neq\emptyset$ for any distinct $u$, $v$. A digraph in this paper is always assumed to be connected unless otherwise stated. Let $\Delta=(V, {\cal A})$ be a finite digraph, and $u$, $v$ two distinct vertices. We may assume that $|{\cal A}_{uv}|\leq |{\cal A}_{vu}|$. If ${\cal A}_{uv}$ and ${\cal A}_{vu}$ are both not empty, then one can fix an injection $$\iota_{uv} : {\cal A}_{uv}\rightarrow {\cal A}_{vu}.$$ If $u=v$, then we agree that $\iota_{uu}$ is the identity map. In this case, we say that an arc $a\in {\cal A}_{uv}$ {\it has inverse}, and the arc $\iota_{uv}(a)\in {\cal A}_{vu}$ is the {\it inverse arc}, or simply the {\it inverse} of $a$, denoted by $a^{-1}$; and vice versa, $a$ is the inverse of $a^{-1}$. We also note that, by this definition, a loop $l\in{\cal A}_{uu}$ satisfies $l^{-1}=l$, that is, each loop is {\it self-inverse}. An arc $a'\in{\cal A}_{vu}$ not lying in the image of $\iota_{uv}$ has no inverse. In the case where ${\cal A}_{uv}=\emptyset$, any arc $a'\in {\cal A}_{vu}$ is defined to have no inverse. Alternatively, one can also define any arc belonging to ${\cal A}_{vu}$ to be an inverse of an arc of ${\cal A}_{uv}$. This alternative definition also works, and the development with this definition will be found in \cite{ishikawamoritasato22+}. A {\it graph} is a pair $\Gamma=(V, E)$ of a set $V$ and a multi-set $E$ consisting of $2$-subsets $\{u, v\}$ of $V$. If $V$ and $E$ are finite (multi-)sets, then the graph $\Gamma$ is called {\it finite}. An element $\{u, v\}\in E$ is called an {\it edge}. In particular, an edge of the form $l=\{u, u\}$ is called a {\it loop}. The vertex $u$ is called the {\it nest} of $l$. The set of loops is denoted by $L$. Obviously, $\{u, v\}\in E\setminus L$ implies $u\neq v$. We also suppose that an edge or a loop may have multiplicity. Hence these are sometimes called a {\it multi-edge} and a {\it multi-loop}.
In other words, if we denote by $E(u, v)$ the set of edges lying between vertices $u, v\in V$, then $|E(u, v)|>1$ may occur for some $u, v\in V$. Note that $E(u, u)$ denotes the set of loops with nest $u$. The cardinality $|E(u, v)|$ is called the {\it multiplicity} of an edge $\{u, v\}$. A graph is called {\it simple} if it has no loops and the multiplicity of any edge is at most one. The matrix $$A_{\Gamma}=(|E(u, v)|)_{u, v\in V}$$ is called the {\it adjacency matrix} of $\Gamma$. For a vertex $u\in V$, the number of edges $\{u, v\}$ ($v\in V$) is called the {\it degree} of $u$, denoted by $d_u$. Thus we have $ d_u = \sum_{v\in V}|E(u, v)| $ for $u\in V$. The diagonal matrix $$D_{\Gamma}=(\delta_{uv}d_u)_{u, v\in V}$$ is called the {\it degree matrix} of $\Gamma$. Let $\Gamma=(V, E)$ be a finite graph. We recall the definition of the symmetric digraph of $\Gamma$. We assign to each edge $\{u, v\}\in E\setminus L$ two arcs $(u, v)$ and $(v, u)$ in mutually reverse directions. For a loop $\{u, u\}\in L$, we assign a single directed loop $(u, u)$. Then we have the set of arcs $${\cal A}=\{(u, v), (v, u)\mid \{u, v\}\in E\setminus L\}\sqcup\{(u, u)\mid \{u, u\}\in L\}.$$ The finite digraph constructed in this manner is called the {\it symmetric digraph} of the finite graph $\Gamma$. An arc $a'=(v, u)$ is called the {\it inverse} of $a=(u, v)$, and we denote it by $a'=a^{-1}$. Any loop $l=(u, u)$ is defined to be self-inverse, i.e., $l^{-1}=l$. Therefore, the notion of inverse arcs is straightforwardly defined for the symmetric digraph of a finite graph, and one can see that the preceding definitions of inverse arcs, including the alternative one, are natural generalizations of the case of the symmetric digraph. It can readily be confirmed that the symmetric digraph of a simple graph is simple.
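The construction of the symmetric digraph can be sketched as follows (our illustration; edges are given as ordered pairs standing for the $2$-subsets $\{u, v\}$):

```python
# Symmetric digraph of a finite graph: each non-loop edge {u, v} yields the
# two mutually inverse arcs (u, v) and (v, u); each loop {u, u} yields a
# single self-inverse arc (u, u).
def symmetric_digraph(edges):
    arcs = []
    for u, v in edges:
        if u == v:
            arcs.append((u, u))
        else:
            arcs.append((u, v))
            arcs.append((v, u))
    return arcs

E = [(1, 2), (2, 3), (1, 1)]      # a finite graph with one loop
arcs = symmetric_digraph(E)
print(sorted(arcs))  # [(1, 1), (1, 2), (2, 1), (2, 3), (3, 2)]
```

Note that the output digraph is symmetric by construction: the reverse of every arc is again an arc.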
\subsection{Graph zeta functions}\label{subsection : Graph zeta functions} Let $\Delta=(V, {\cal A})$ be a finite digraph, and ${\cal A}^{\mathbb Z}$ the set of two-sided infinite sequences on ${\cal A}$. Let $\varphi$ be the left shift operator on ${\cal A}^{\mathbb Z}$, and $\Xi$ a $\varphi$-stable subset of ${\cal A}^{\mathbb Z}$, i.e., a subset of ${\cal A}^{\mathbb Z}$ satisfying $\varphi(\Xi)\subset\Xi$. An example of $\Xi$ is the set $$ \Pi_{\Delta} = \{ (a_i)_{i\in{\mathbb Z}}\in{\cal A}^{\mathbb Z} \mid {\frak h}(a_i)={\frak t}(a_{i+1}),\ \forall i\in{\mathbb Z} \} $$ of two-sided infinite paths of $\Delta$. If we denote the restriction $\varphi|_{\Xi}$ by $\lambda$, then we have a {\it quasi-finite} dynamical system $(\Xi, \lambda)$, that is, the set $ X_m = \{ x\in\Xi \mid \lambda^m(x)=x \} $ of $m$-periodic points in $(\Xi, \lambda)$ is a finite set for each $m\geq 1$, since we have $|X_m|\leq |{\cal A}|^m$. For $x\in X_m$, the integer $m$ is called a {\it period} of $x$. Thus the union $X=\cup_{m\geq 1}X_m$ consists of all periodic points in $(\Xi, \lambda)$. Note that the union is not disjoint, since any multiple of a period of $x\in X$ is again a period of it. Let $x=(a_i)\in X$, and let a period $m$ of $x$ be fixed. A consecutive $m$-section $(a_k, a_{k+1}, \dots, a_{k+m-1})$ is called a {\it fundamental section} of $x\in X_m$. In the case where $\Xi=\Pi_{\Delta}$, an element $x\in X_m$ is called a {\it closed path} of $\Delta$ of {\it length} $m$. Thus a closed path of $\Delta$ is recovered from any one of its fundamental sections. If we consider the $\varphi$-stable subset $ \Pi_{\Delta}^{\flat} = \{ (a_i)\in\Pi_{\Delta} \mid a_i^{-1}\neq a_{i+1},\ \forall i \} $, then an element of $X_m$ is called a {\it reduced} closed path of length $m$. Let $\varpi(x)$ denote the minimum period of $x$. Hence it is obvious that $x\in X_{\varpi(x)}$ for any $x\in X$.
If we consider $x\in X$ as an element of $X_{\varpi(x)}$, then $x$ is called a {\it prime element}. If $x\in X$ is prime, then we denote it by $\pi(x)$. Obviously we have $\pi(x)\in X_{\varpi(x)}$. In the case where $\Xi=\Pi_{\Delta}$ (resp. $\Pi_{\Delta}^{\flat}$), a prime element is called a {\it prime} (resp. {\it prime reduced}) closed path of $\Delta$. Let $\Delta=(V, {\cal A})$ be a finite digraph, $R$ a commutative ${\mathbb Q}$-algebra, and $\theta : {\cal A}\times{\cal A}\rightarrow R$ a map. A finite sequence $w=\alpha_0\alpha_1\cdots \alpha_{m-1}$ on ${\cal A}$ is called a {\it word} with {\it alphabet} ${\cal A}$. The set of words with alphabet ${\cal A}$ is denoted by ${\cal A}^*$. A word $w=\alpha_0\alpha_1\cdots \alpha_{m-1}$ is called {\it prime} if there exists no word $u\in {\cal A}^*$ satisfying $w=u^k$ for some integer $k\geq 2$. Given a word $w=\alpha_0\alpha_1\cdots \alpha_{m-1}\in {\cal A}^*$, we consider the two-sided infinite sequence $x=(a_i)\in{\cal A}^{\mathbb Z}$ defined by $a_i=\alpha_j$ if $i$ is congruent to $j$ modulo $m$. We denote this sequence by $w^{\natural}$. The following product $$ \theta(\alpha_0, \alpha_1)\theta(\alpha_1, \alpha_2) \cdots \theta(\alpha_{m-2}, \alpha_{m-1})\theta(\alpha_{m-1}, \alpha_0) $$ is denoted by ${\rm circ}_{\theta}(w)$, called the {\it circular product} of $\theta$ {\it along with} $w$. Let $\Xi$ be a $\varphi$-stable subset of ${\cal A}^{\mathbb Z}$. If $$ {\rm circ}_{\theta}(w)\neq 0\Rightarrow w^{\natural}\in\Xi $$ for any prime word $w\in{\cal A}^*$, then we say that $\Xi$ satisfies the {\it path condition} on $\theta$, or sometimes we simply say that $(\Xi, \theta)$ satisfies the path condition. Let $\Delta=(V, {\cal A})$ be a finite digraph, $R$ a commutative ${\mathbb Q}$-algebra, and $\theta : {\cal A}\times{\cal A}\rightarrow R$ a map. Let $x=(a_i)\in X_m$ and $w=a_ka_{k+1}\cdots a_{k+m-1}$ be any fundamental section regarded as a word with alphabet ${\cal A}$.
Obviously the circular product ${\rm circ}_{\theta}(w)$ does not depend on the choice of fundamental section, and we denote it by ${\rm circ}_{\theta}(x)$ for $x\in X_m$. Let $N_m(\theta)$ denote the sum $ \sum_{x\in X_m}{\rm circ}_{\theta}(x), $ and consider the following formal power series with a variable $t$: $$ Z_{\Xi}(t; \theta) = \exp\left[ \sum_{m\geq 1} \frac{N_m(\theta)}{m}t^m \right]. $$ \begin{df}[Graph zeta function]\label{df : Graph zeta function} {\em Let $\Delta=(V, {\cal A})$ be a finite digraph, $R$ a commutative ${\mathbb Q}$-algebra, and $\theta : {\cal A}\times{\cal A}\rightarrow R$ a map. Let $\Xi=\Pi_{\Delta}$ and suppose that $(\Xi, \theta)$ satisfies the path condition. Then the formal power series $Z_{\Xi}(t; \theta)$ is called the {\it graph zeta function} for $\Delta$ with a {\it weight map} $\theta$, which is denoted by $Z_{\Delta}(t; \theta)$. } \end{df} \begin{ex}[Ihara zeta function]\label{ex : Ihara zeta function} {\em Given a map $\theta^{\rm I} : {\cal A}\times{\cal A}\rightarrow R$ by $$ \theta^{\rm I}(a, a') =\delta_{{\frak h}(a){\frak t}(a')}-\delta_{a^{-1}a'}, $$ one can see that $(\Pi_{\Delta}, \theta^{\rm I})$ satisfies the path condition as follows. Suppose that a prime word $w=\alpha_0\cdots\alpha_{m-1}\in{\cal A}^*$ satisfies the condition ${\rm circ}_{\theta^{\rm I}}(w)\neq 0$. This implies $\theta^{\rm I}(\alpha_{i-1}, \alpha_i)\neq 0$, hence ${\frak h}(\alpha_{i-1})={\frak t}(\alpha_i)$ for all $i=1,2,\dots, m$, where $\alpha_m=\alpha_0$. Therefore we have $w^{\natural}\in\Pi_{\Delta}$, and the graph zeta $Z_{\Delta}(t; \theta^{\rm I})$ is called the {\it Ihara zeta function}\cite{bass92, hashimoto90, ihara66, kotanisunada00, serre80}. } \end{ex} \begin{rem} {\em If a digraph $\Delta$ consists of several connected components $\Delta=\sqcup_{i=1}^n\Delta_i$, then one can see easily that $ Z_{\Delta}(t; \theta) = \prod_{i=1}^n Z_{\Delta_i}(t; \theta). 
$ } \end{rem} \subsection{Three expressions} Graph zeta functions have two other expressions. Let $\Delta=(V, {\cal A})$ be a finite digraph, $R$ a commutative ${\mathbb Q}$-algebra, and $\theta : {\cal A}\times{\cal A}\rightarrow R$ a map. Two elements $x, y\in\Pi_{\Delta}$ are called {\it equivalent} iff there exists an integer $k$ satisfying $y=\lambda^k(x)$. We denote by $x\sim y$ this equivalence relation on $\Pi_{\Delta}$. Note that the relation $\sim$ also affords an equivalence relation on $X$. An equivalence class with representative $x\in \Xi$ is denoted by $[x]$. An element of the quotient set ${\frak X}=X/\sim$ is called a {\it cycle} of $\Delta$. Since the relation $\sim$ affords an equivalence relation on each $X_m$, we have ${\frak X}=\cup_{m\geq 1}{\frak X}_m$, where ${\frak X}_m=X_m/\sim.$ If $[x]\in{\frak X}_m$, then the positive integer $m$ is called the {\it period} of the cycle $[x]$. A cycle $[x]$ with reduced (resp. prime) $x$ is called a {\it reduced} (resp. {\it prime}) cycle of $\Delta$. If $[x]$ is prime, then we denote it by $\pi([x])$, which belongs to ${\frak X}_{\varpi(x)}$. In other words, we have $\varpi([x])=\varpi(x)$. Let $M_{\Delta}(\theta)=(\theta(a, a'))_{a, a'\in{\cal A}}$, which is a square matrix of degree $|{\cal A}|$. We consider the following two formal power series: $$ E_{\Delta}(t; \theta)= \prod_{[x]\in{\frak X}} \frac{1}{1-{\rm circ}_{\theta}(\pi([x]))t^{\varpi([x])}}, \quad H_{\Delta}(t; \theta)= \frac{1}{\det(I-tM_{\Delta}(\theta))}. $$ \begin{prop} For a finite digraph $\Delta$, it follows that $ Z_{\Delta}(t; \theta)=E_{\Delta}(t; \theta)=H_{\Delta}(t; \theta). $ \end{prop} For a graph zeta, these three expressions are equivalent to each other. The first identity follows only from the definitions of both sides. The second identity actually needs the path condition. See \cite{morita20} for precise information.
These three expressions are called the {\it exponential expression}, the {\it Euler expression} and the {\it Hashimoto expression}, respectively. The existence of the Hashimoto expression is significant for our development. We will construct the Ihara expression by reformulating the Hashimoto expression (cf.\ \cite{watanabefukumizu10}). \section{Main result} Let $\Delta=(V, {\cal A})$ be a finite digraph and $R$ a commutative ${\mathbb Q}$-algebra. Given two functions $\tau$, $\upsilon : {\cal A}\rightarrow R$, we consider the weight map $\theta^{\rm G}:{\cal A}\times{\cal A}\rightarrow R$ defined by $$ \theta^{\rm G}(a, a')=\tau(a')\delta_{{\frak h}(a){\frak t}(a')}-\upsilon(a')\delta_{a^{-1}a'}, $$ for $a, a'\in{\cal A}$. The graph zeta function $Z_{\Delta}(t; \theta^{\rm G})$ is called the {\it generalized weighted zeta function} \cite{morita20}. \begin{rem} {\em Let $\Delta$ be the symmetric digraph of a finite graph. If $\tau=\upsilon=1$, then $Z_{\Delta}(t; \theta^{\rm G})$ is nothing but the Ihara zeta function \cite{bass92, hashimoto90, ihara66, kotanisunada00, serre80}. If $\tau=\upsilon$, it is the graph zeta treated in \cite{mizunosato04}, which is called the {\it Mizuno-Sato zeta function}. If $\upsilon=1$, it is the one defined in \cite{sato07}, called the {\it Sato zeta function}. Let $q$ be an indeterminate, and replace $R$ by the polynomial ring $R[q]$. If we let $\tau=1$ and $\upsilon=1-q$, then the resulting graph zeta is called the {\it Bartholdi zeta function} defined in \cite{bartholdi99}. The {\it edge zeta function} and the {\it path zeta function} \cite{starkterras96} also arise from this framework \cite{morita20}. } \end{rem} One can easily verify that the map $\theta^{\rm G}$ satisfies the adjacency condition, and hence $(\Pi_{\Delta}, \theta^{\rm G})$ satisfies the path condition. Therefore the generalized weighted zeta $Z_{\Delta}(t; \theta^{\rm G})$ has the three expressions. See \cite{morita20} for precise information.
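As a quick sanity check of the equivalence just stated, one can compare the exponential and Hashimoto expressions as power series on a small example (this check is ours, not from \cite{morita20}; the weight values below are arbitrary). Since $\theta^{\rm G}(a, a')=0$ unless ${\frak h}(a)={\frak t}(a')$, every cyclic sequence contributing to ${\rm tr}(M^m)$ is a closed path, so $N_m(\theta^{\rm G})={\rm tr}(M^m)$ for $M=M_{\Delta}(\theta^{\rm G})$.

```python
import sympy as sp

# Compare the exponential and Hashimoto expressions of the generalized
# weighted zeta on the symmetric digraph of the path graph 1 - 2 - 3.
# tau, upsilon are arbitrary sample weights, for illustration only.
arcs = [(1, 2), (2, 1), (2, 3), (3, 2)]
inv = {0: 1, 1: 0, 2: 3, 3: 2}          # index of the inverse arc
tau = {0: 2, 1: 3, 2: 5, 3: 7}
ups = {0: 1, 1: 2, 2: 3, 3: 4}

n = len(arcs)
# Edge matrix M(a, a') = tau(a')[h(a) = t(a')] - upsilon(a')[a^{-1} = a'].
M = sp.Matrix(n, n, lambda i, j:
              tau[j] * int(arcs[i][1] == arcs[j][0]) - ups[j] * int(inv[i] == j))

t = sp.symbols('t')
order = 6
# Hashimoto expression 1/det(I - tM), expanded as a series in t.
H = sp.series(1 / (sp.eye(n) - t * M).det(), t, 0, order).removeO()
# Exponential expression exp(sum_m N_m/m t^m) with N_m = tr(M^m).
Z = sp.series(sp.exp(sum((M**m).trace() * t**m / m
                         for m in range(1, order))), t, 0, order).removeO()
print(sp.expand(H - Z))  # 0: the two expressions agree up to t^5
```

The agreement of the truncated series follows formally from $\log\det(I-tM)^{-1}=\sum_{m\geq 1}{\rm tr}(M^m)t^m/m$, so this check succeeds for any choice of weights.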
\begin{prop} We have the identities $ Z_{\Delta}(t; \theta^{\rm G}) = E_{\Delta}(t; \theta^{\rm G}) = H_{\Delta}(t; \theta^{\rm G}). $ \end{prop} Thus we are in a position to construct the Ihara expression for the generalized weighted zeta $Z_{\Delta}(t; \theta^{\rm G})$ with the definition of inverse arcs adopted in this paper. Let $\Delta=(V, {\cal A})$ be a finite digraph which allows multi-arcs and multi-loops. Let $ \Phi_{\Delta}= \{ (u, v)\in V\times V \mid {\cal A}(u, v)\neq\emptyset \}. $ Thus it follows that ${\cal A}=\bigcup_{(u, v)\in\Phi_{\Delta}}{\cal A}(u, v)$. For distinct vertices $u, v\in V$, we write $ u\preceq v $ if $ |{\cal A}_{uv}|\leq|{\cal A}_{vu}|; $ in the case $|{\cal A}_{uv}|=|{\cal A}_{vu}|$, we fix one of the two directions arbitrarily, so that exactly one of $u\preceq v$ and $v\preceq u$ holds for each pair of distinct vertices. If $u\neq v$ and $ |{\cal A}_{uv}|<|{\cal A}_{vu}|, $ we write $u\prec v$. Thus, any $(u, v)\in V\times V$ satisfies $u\preceq v$, $v\preceq u$ or $u=v$. If $u=v$ or $u\preceq v$, then we fix an injection $ \iota_{uv} : {\cal A}_{uv}\rightarrow{\cal A}_{vu} $ as in section \ref{subsection : Graphs and Digraphs}. In particular, the map $\iota_{uu}$ is assumed to be the identity map on ${\cal A}_{uu}$. Let \begin{eqnarray*} && \Phi_{\Delta}^{(1)} = \{ (u, v)\in\Phi_{\Delta} \mid u\preceq v, {\cal A}_{uv}\neq\emptyset \},\\ && \Phi_{\Delta}^{(2)} = \{ (u, u)\in\Phi_{\Delta} \mid {\cal A}_{uu}\neq\emptyset \},\\ && \Phi_{\Delta}^{(3)} = \{ (u, v)\in\Phi_{\Delta} \mid u\prec v, {\cal A}_{uv}=\emptyset \}, \end{eqnarray*} and let \begin{eqnarray*} {\cal A}^{(1)} = \bigcup_{(u, v)\in\Phi_{\Delta}^{(1)}} {\cal A}_{uv}, & {\displaystyle {\cal A}^{(-1)} = \bigcup_{(u, v)\in\Phi_{\Delta}^{(1)}} {\cal A}_{uv}^{-1}, } & \overline{{\cal A}^{(1)}} = \bigcup_{(u, v)\in\Phi_{\Delta}^{(1)}} {\cal A}_{vu}\setminus{\cal A}_{uv}^{-1}, \\ {\cal A}^{(2)} = \bigcup_{(u, u)\in\Phi_{\Delta}^{(2)}} {\cal A}_{uu}, & {\displaystyle {\cal A}^{(3)} = \bigcup_{(u, v)\in\Phi_{\Delta}^{(3)}} {\cal A}_{vu}, } & \end{eqnarray*} where $ {\cal A}_{uv}^{-1} = \{ \iota_{uv}(a)\mid a\in{\cal A}_{uv} \}.
$ Thus, it follows that $ {\cal A} = {\cal A}^{(1)}\sqcup{\cal A}^{(-1)}\sqcup\overline{{\cal A}^{(1)}}\sqcup{\cal A}^{(2)}\sqcup{\cal A}^{(3)}. $ The set of arcs with inverse is given by ${\cal A}^{(1)}\sqcup{\cal A}^{(-1)}\sqcup{\cal A}^{(2)}$, and $\overline{{\cal A}^{(1)}}\sqcup{\cal A}^{(3)}$ gives the set of arcs without inverse. We set ${\cal A}^{\times}=\overline{{\cal A}^{(1)}}\sqcup{\cal A}^{(3)}$. For an arc $a\in{\cal A}$, let $$ {\cal E}(a) = \left\{ \begin{array}{ll} \{a, a^{-1}\},& \mbox{if $a\in{\cal A}^{(1)}$},\\ \{a\}, & \mbox{otherwise,} \end{array} \right. $$ and let $$ c_a(t)=c_a(t; \theta^{\rm G}) = \left\{ \begin{array}{ll} 1-\upsilon(a)\upsilon(a^{-1})t^2, & \mbox{if $a\in{\cal A}^{(1)}\sqcup{\cal A}^{(-1)}$},\\ 1+\upsilon(a)t, & \mbox{if $a\in{\cal A}^{(2)}$},\\ 1, & \mbox{if $a\in{\cal A}^{\times}$}. \end{array} \right. $$ For $u, v\in V$, define $$ a_{uv} = \sum_{a\in{\cal A}_{uv}} \frac{\tau(a)}{c_a(t)} \in R[[t]], \quad b_{uv} = \delta_{uv} \sum_{a\in({\cal A}^{(1)}\sqcup{\cal A}^{(-1)})\cap{\cal A}_{u*}} \frac{\tau(a)\upsilon(a^{-1})}{c_a(t)} \in R[[t]]. $$ \begin{df} {\em Let $\Delta=(V, {\cal A})$ be a finite digraph. The following $|V|\times |V|$ matrices $$ A_{\Delta}(\theta^{\rm G})=(a_{uv})_{u, v\in V}, \quad B_{\Delta}(\theta^{\rm G})=(b_{uv})_{u, v\in V} $$ are called the {\it weighted adjacency matrix} and the {\it weighted backtrack matrix} for $\Delta$ respectively. } \end{df} \begin{ex} {\em Let $V=\{1, 2, 3\}$ and ${\cal A}_{12}=\{a_1\}$, ${\cal A}_{21}=\{a_2, a_3\}$, ${\cal A}_{23}=\{a_4\}$, ${\cal A}_{32}=\{a_5\}$, ${\cal A}_{13}=\emptyset$, ${\cal A}_{31}=\{a_6\}$, ${\cal A}_{11}=\{a_7, a_8\}$, say $\iota_{12}(a_1)=a_2$, $\iota_{23}(a_4)=a_5$, $\iota_{11}(a_7)=a_7$, $\iota_{11}(a_8)=a_8$, i.e., $a_1^{-1}=a_2$, $a_4^{-1}=a_5$, $a_7^{-1}=a_7$, $a_8^{-1}=a_8$, and $a_3, a_6$ have no inverse.
In this case, we have: $ \Phi_{\Delta}^{(1)} = \{(1, 2), (2, 3)\} $, $ \Phi_{\Delta}^{(2)} = \{(1, 1)\} $, $ \Phi_{\Delta}^{(3)} = \{(1, 3)\} $; $ {\cal A}^{(1)} = \{a_1, a_4\} $, $ {\cal A}^{(-1)} = \{a_2, a_5\} $, $ \overline{{\cal A}^{(1)}} = \{a_3\} $, $ {\cal A}^{(2)} = \{a_7, a_8\} $, $ {\cal A}^{(3)} = \{a_6\} $; $c_{a_1}(t)=c_{a_2}(t)=1-\upsilon(a_1)\upsilon(a_2)t^2$, $c_{a_4}(t)=c_{a_5}(t)=1-\upsilon(a_4)\upsilon(a_5)t^2$, $c_{a_7}(t)=1+\upsilon(a_7)t$, $c_{a_8}(t)=1+\upsilon(a_8)t$, $c_{a_3}(t)=c_{a_6}(t)=1$; and $$ \begin{array}{l} A_{\Delta}(\theta^{\rm G}) = \left[ \begin{array}{ccc} \frac{\tau(a_7)}{1+\upsilon(a_7)t}+\frac{\tau(a_8)}{1+\upsilon(a_8)t} & \frac{\tau(a_1)}{1-\upsilon(a_1)\upsilon(a_2)t^2} & 0\\ \frac{\tau(a_2)}{1-\upsilon(a_2)\upsilon(a_1)t^2}+\tau(a_3) & 0 & \frac{\tau(a_4)}{1-\upsilon(a_4)\upsilon(a_5)t^2}\\ \tau(a_6) & \frac{\tau(a_5)}{1-\upsilon(a_5)\upsilon(a_4)t^2} & 0 \end{array} \right], \\\\ B_{\Delta}(\theta^{\rm G}) = \left[ \begin{array}{ccc} \frac{\tau(a_1)\upsilon(a_2)}{1-\upsilon(a_1)\upsilon(a_2)t^2} &0 & 0\\ 0 & \frac{\tau(a_2)\upsilon(a_1)}{1-\upsilon(a_2)\upsilon(a_1)t^2} +\frac{\tau(a_4)\upsilon(a_5)}{1-\upsilon(a_4)\upsilon(a_5)t^2} &0\\ 0 & 0 & \frac{\tau(a_5)\upsilon(a_4)}{1-\upsilon(a_5)\upsilon(a_4)t^2} \end{array} \right]. \end{array} $$ } \end{ex} \begin{figure}[h] \centering \includegraphics[scale=0.37]{graph.png} \caption{The digraph $\Delta$ of Example 8} \end{figure} \begin{rem} {\em Let $\Gamma=(V, E)$ be a finite simple graph and $\Delta=\Delta(\Gamma)$ the symmetric digraph. Note that by definition $\Gamma$ has no loops. Then one can see that $A$ and $B$ are natural generalizations of the adjacency matrix $A_{\Gamma}$ and the degree matrix $D_{\Gamma}$ of $\Gamma$ respectively (cf.\ \cite{IIMSS21}). In this case, we have $|{\cal A}_{uv}|= |{\cal A}_{vu}|=1$ for non-empty ${\cal A}(u, v)$.
In addition, if we consider the case where $\theta^{\rm G}=\theta^{\rm I}$, i.e., $\tau=\upsilon=1$, then it follows that $$c_a(t)=1-t^2$$ for all $a\in {\cal A}$, since $|{\cal A}_{uv}|=1$ if ${\cal A}_{uv}\neq\emptyset$ and the injections $\iota_{uv} : {\cal A}_{uv}\rightarrow {\cal A}_{vu}$ are bijective. A simple observation shows that, for $u, v\in V$, $$ a_{uv} = \frac{|{\cal A}_{uv}|}{1-t^2}, \quad b_{uu} = \frac{|({\cal A}^{(1)}\sqcup{\cal A}^{(-1)})\cap{\cal A}_{u*}|}{1-t^2}. $$ One can easily see that $|{\cal A}_{uv}|=1$ iff $\{u, v\}\in E$ (otherwise zero), and $|({\cal A}^{(1)}\sqcup{\cal A}^{(-1)})\cap{\cal A}_{u*}|$ gives the number of edges of $\Gamma$ incident to $u$, that is, the degree $d_u$. This shows that $$ A_{\Delta}(\theta^{\rm G}) = \frac{1}{1-t^2}A_{\Gamma}, \quad B_{\Delta}(\theta^{\rm G}) = \frac{1}{1-t^2}D_{\Gamma}. $$ Thus we can regard the weighted adjacency matrix and the weighted backtrack matrix as natural generalizations of the adjacency matrix and the degree matrix, respectively. } \end{rem} Let $\Delta=(V, {\cal A})$ be a finite digraph. Recall that $M=M_{\Delta}(\theta^{\rm G})=(\theta^{\rm G}(a, a'))_{a, a'\in{\cal A}}$. Let \begin{eqnarray*} &&H=H_{\Delta}(\theta^{\rm G})=(\tau(a')\delta_{{\frak h}(a){\frak t}(a')})_{a, a'\in{\cal A}},\\ &&J=J_{\Delta}(\theta^{\rm G})=(\upsilon(a')\delta_{a^{-1}a'})_{a, a'\in{\cal A}},\\ &&K=K_{\Delta}(\theta^{\rm G})=(\delta_{{\frak h}(a)v})_{a\in{\cal A}, v\in V},\\ &&L=L_{\Delta}(\theta^{\rm G})=(\tau(a')\delta_{u{\frak t}(a')})_{u\in V, a'\in{\cal A}}. \end{eqnarray*} For each arc $a\in{\cal A}$, we consider the following restrictions $$ \begin{array}{l} J(a)=(\upsilon(\alpha')\delta_{\alpha^{-1}\alpha'})_{\alpha, \alpha'\in{\cal E}(a)},\\ K(a)=(\delta_{{\frak h}(\alpha)v})_{\alpha\in{\cal E}(a), v\in V},\\ L(a)=(\tau(\alpha')\delta_{u{\frak t}(\alpha')})_{u\in V, \alpha'\in{\cal E}(a)} \end{array} $$ of the matrices $J$, $K$, and $L$. Note that $J(a)$ is a $2\times 2$ matrix if $a\in{\cal A}^{(1)}$, and $1\times1$ otherwise.
Hence we can arrange the arcs so that the matrix $J$ is the direct sum $$ J= \left(\bigoplus_{a\in{\cal A}^{(1)}}J(a)\right) \oplus \left(\bigoplus_{a\in{\cal A}\setminus ({\cal A}^{(1)}\cup{\cal A}^{(-1)})}J(a)\right) $$ of $2\times 2$ blocks and $1\times 1$ blocks. We fix such a total order on ${\cal A}$. If we denote by $I(a)$ ($a\in{\cal A}$) the identity matrix of degree $|{\cal E}(a)|$, then the matrix $I+tJ$ is the direct sum $\oplus_{a\in{\cal A}\setminus{\cal A}^{(-1)}}(I(a)+tJ(a))$, where the direct summands are all invertible over $R[[t]]$. \begin{lem} The matrix $I+tJ$ is invertible. \end{lem} For $\Delta$ and $\theta^{\rm G}$, we denote by $I_{\Delta}(t; \theta^{\rm G})$ the following formal power series with indeterminate $t$: $$ \frac {1} {\det(I+tJ)\det(I-tA_{\Delta}(\theta^{\rm G})+t^2B_{\Delta}(\theta^{\rm G}))}. $$ \begin{thm}[Main theorem] Let $\Delta$, $R$ and $\theta^{\rm G}$ be as above. We have $$Z_{\Delta}(t; \theta^{\rm G})=I_{\Delta}(t; \theta^{\rm G}).$$ \end{thm} {\it Proof.} Since $(\Pi_{\Delta}, \theta^{\rm G})$ satisfies the path condition (cf.\ \cite{morita20}), we have the identity $$ Z_{\Delta}(t; \theta^{\rm G}) = \frac{1}{\det(I-tM)}, $$ where $M=M_{\Delta}(\theta^{\rm G})$. Let $H, J, K$ and $L$ be as above. By definition, it follows that $M=H-J$. It also follows that $H=KL$, thus $M=KL-J$. Hence we have \begin{eqnarray*} \det(I-tM) &=& \det(I-t(KL-J))\\ &=& \det((I+tJ)-tKL)\\ &=& \det(I+tJ)\det(I-t(I+tJ)^{-1}KL)\\ &=& \det(I+tJ)\det(I-tL(I+tJ)^{-1}K), \end{eqnarray*} where the final identity follows from the well-known identity $\det(I-AB)=\det(I-BA)$ in linear algebra. Since each direct summand of $$ I+tJ = \bigoplus_{a\in {\cal A}\setminus{\cal A}^{(-1)}} (I(a)+tJ(a)) $$ is invertible, we have $ (I+tJ)^{-1} = \bigoplus_{a\in {\cal A}\setminus{\cal A}^{(-1)}} (I(a)+tJ(a))^{-1}, $ and it follows that $$ L(I+tJ)^{-1}K = \sum_{a\in{\cal A}\setminus{\cal A}^{(-1)}} L(a)(I(a)+tJ(a))^{-1}K(a).
$$ Note that $\det(I(a)+tJ(a))=c_a(t)$ for $a\in{\cal A}\setminus{\cal A}^{(-1)}$, and we have $$ (I(a)+tJ(a))^{-1} = \left\{ \begin{array}{ll} c_a(t)^{-1}(I(a)-tJ(a)), & \mbox{if $a\in{\cal A}^{(1)}$}\\ c_a(t)^{-1}I(a), & \mbox{if $a\in{\cal A}^{(2)}\sqcup{\cal A}^{\times}$}. \end{array} \right. $$ Hence it follows that $$ \sum_{a\in{\cal A}\setminus{\cal A}^{(-1)}} L(a)(I(a)+tJ(a))^{-1}K(a) = \sum_{a\in{\cal A}\setminus{\cal A}^{(-1)}} c_a(t)^{-1}L(a)K(a) -t \sum_{a\in{\cal A}^{(1)}} c_a(t)^{-1}L(a)J(a)K(a). $$ The $(u, v)$-entry $r_{uv}$ of the matrix $ \sum_{a\in{\cal A}\setminus{\cal A}^{(-1)}} c_a(t)^{-1}L(a)K(a) $ is given by \begin{equation}\label{(u, v)-entry} r_{uv}= \sum_{a\in{\cal A}\setminus{\cal A}^{(-1)}} c_a(t)^{-1} \sum_{\alpha\in{\cal E}(a)} \tau(\alpha)\delta_{u{\frak t}(\alpha)}\delta_{{\frak h}(\alpha)v}. \end{equation} Note that $ \delta_{u{\frak t}(\alpha)}\delta_{{\frak h}(\alpha)v}\neq 0 $ is equivalent to $\alpha\in{\cal A}_{uv}$. It follows that $$ r_{uv} = \sum_{a\in{\cal A}\setminus{\cal A}^{(-1)}} c_a(t)^{-1} \sum_{\alpha\in{\cal E}(a)\cap{\cal A}_{uv}} \tau(\alpha). $$ We verify $r_{uv}=a_{uv}$ for all $(u, v)\in V\times V$. Suppose that $u\prec v$. If $a\in {\cal A}^{(1)}\cap{\cal A}_{uv}$, then ${\cal E}(a)\cap{\cal A}_{uv}=\{a\}$. Otherwise, we have $ {\cal E}(a)\cap{\cal A}_{uv}=\emptyset. $ This implies that $ r_{uv} = \sum_{a\in{\cal A}^{(1)}\cap{\cal A}_{uv}} c_a(t)^{-1}\tau(a). $ In the case where $v\prec u$, ${\cal E}(a)\cap{\cal A}_{uv}\neq\emptyset$ implies that $a\in{\cal A}^{\times}$, and we have ${\cal E}(a)\cap{\cal A}_{uv}=\{a\}$ for $a\in{\cal A}^{\times}$. Suppose that $u=v$. In this case, ${\cal E}(a)\cap{\cal A}_{uu}\neq\emptyset$ implies $a\in{\cal A}^{(2)}$, and we have ${\cal E}(a)\cap{\cal A}_{uu}=\{a\}$. Therefore, putting all these together, it follows that $r_{uv}=a_{uv}$ for all $(u, v)\in V\times V$.
The $(u, v)$-entry $s_{uv}$ of the matrix $ \sum_{a\in{\cal A}^{(1)}} c_a(t)^{-1}L(a)J(a)K(a) $ is given by \begin{equation}\label{s_{uv}} s_{uv}= \sum_{a\in{\cal A}^{(1)}} c_a(t)^{-1} \sum_{\alpha, \beta\in{\cal E}(a)} \tau(\alpha)\upsilon(\beta) \delta_{u{\frak t}(\alpha)} \delta_{\alpha^{-1}\beta} \delta_{{\frak h}(\beta)v}. \end{equation} We verify $s_{uv}=b_{uv}$ for any $(u, v)\in V\times V$. Let $a\in{\cal A}^{(1)}$. We have ${\cal E}(a)=\{a, a^{-1}\}$ with $a\neq a^{-1}$. Thus it follows that $$ s_{uv} = \sum_{a\in{\cal A}^{(1)}} c_a(t)^{-1} \tau(a)\upsilon(a^{-1}) \delta_{u{\frak t}(a)} \delta_{{\frak h}(a^{-1})v}. $$ Since ${\frak h}(a^{-1})={\frak t}(a)$, this vanishes unless $u=v$, in which case it equals $ \sum_{a\in{\cal A}^{(1)}\cap{\cal A}_{u*}} c_a(t)^{-1} \tau(a)\upsilon(a^{-1}). $ Now we have shown that $s_{uv}=b_{uv}$. \hfill$\Box$ \vspace*{5mm} \noindent {\bf Acknowledgements} The authors would like to express their deep gratitude to Professor Iwao Sato, Oyama National College of Technology, who suggested the problem considered in this article, for illuminating discussions and valuable comments. The first named author is partially supported by Grant-in-Aid for JSPS Fellows, Grant Number JP20J20590. The second named author is partially supported by JSPS KAKENHI, Grant Number JP22K03262.
\section{Introduction} \label{sec:introduction} There is no doubt that quantum field theory and mathematics are deeply connected. There are many examples where field theory intuition helped formulate mathematical conjectures or even theorems (Seiberg-Witten theory in topology~\cite{Witten1994}, Wilson loops in Chern-Simons theory for knot theory~\cite{Witten1989}). Similarly, progress in mathematics has stimulated progress in field theory (as a prime example we have the ADHM construction~\cite{Atiyah1978} of instantons, but also work in index theory~\cite{Atiyah1963} which helped in the understanding of field theory anomalies). And these are just a few of many examples. In this review we will focus on one of the many connecting bridges between quantum field theory and number theory: polylogarithms. In quantum field theory polylogarithms and the closely related multiple zeta values are ubiquitous. They arise in the perturbative computations of various quantities. There are many quantities one may attempt to compute and, moreover, there are many different quantum field theories. Many results are already available but frequently the complexity of the final answers (not to mention the complexity of the computation) is forbidding. We are then naturally led to ask which field theories and what quantities are most likely to be understood in simple terms. These questions, while very natural, do not have obvious answers, but in recent years an answer has begun to emerge. As we will explain, the answer is somewhat surprising. The textbook example of the simplest interacting field theory is the \(\phi^4\) theory. This is a theory of a single scalar field with a four-point interaction. The Feynman diagrams in this theory have internal vertices of degree four. Many results are known in this theory; see, for example, refs.~\cite{BROADHURST1995, Schnetz2010}. However, it has recently emerged that there is a better candidate for study, which we will discuss below.
Relativistic field theories are symmetric under the Poincar\'e group. The Poincar\'e group has the Lorentz group \(\grp{O}(1,3)\) as a subgroup and particles are in correspondence with irreducible representations of these symmetry groups. The scalar particles transform in the trivial representation of \(\grp{O}(1,3)\), so they realize the relativistic symmetry in the simplest possible way. As mentioned above, the \(\phi^4\) theory is a theory of scalar (or spin zero) fields. Other representations of the Lorentz symmetry may appear: fermions, which transform in a representation of the covering group \(\grp{Spin}(1,3)\); gauge fields, which are vectors of \(\grp{O}(1,3)\); the graviton, which transforms in a rank-two tensor representation; etc. In the case of the gauge fields and of the graviton the formulation of the quantum theory is complicated by the fact that states are defined modulo gauge transformations. This also complicates the computations since one has to make a choice of gauge (or a choice of representative in the equivalence class). Despite these technical complications, in many cases the final results, when expressed in terms of appropriate variables, turn out to be strikingly simple (the computation of Parke and Taylor in ref.~\cite{PhysRevLett.56.2459} being a prime example). Then, we are led to suspect that there should be more efficient ways to find these answers. We have briefly discussed the theories but we still haven't specified the types of quantities we are going to compute. We turn to this question next. The quantities which will be most relevant in the following discussion are scattering amplitudes. Let us give a rough definition of scattering amplitudes.
A field theory of the kind we will consider is defined by a functional \(S[\phi]\) called the action, depending on functions \(\phi(\vec{x},t)\) called fields (here \(t\) is time, \(\vec{x}\) is a three-dimensional vector and \(\phi\) is a generic name for a field; in general the theory can contain several fields with different \(\grp{O}(1,3)\) transformations). From this functional we can obtain by variational methods partial differential equations (called equations of motion) for the fields of the theory. Now, given some boundary conditions \(\phi_{\pm}\) at \(t = \pm \infty\) for the fields, from the solution \(\phi_0\) to the equations of motion satisfying these boundary conditions one can build a complex number \(\exp(i S[\phi_0])\), which is called the tree-level amplitude of transition between \(\phi_{-}\) and \(\phi_{+}\) (if there is no solution for the prescribed boundary conditions, then the amplitude is defined to be zero). The name `tree' is due to the fact that this quantity can be computed as a sum of tree-shaped Feynman diagrams. The computation using the definition can be tedious in general, especially for gauge theories where one has to make an arbitrary choice of gauge (in the final result the dependence on this arbitrary choice must cancel; when this happens we call the answer `gauge invariant'). The tree-level amplitudes have two important properties: analyticity (in a certain domain) and factorization.\footnote{Analyticity survives after adding quantum corrections, but factorization becomes more subtle in case there are infrared divergences (see ref.~\cite{Bern1995}). Since scattering amplitudes in gauge theories are infrared divergent, exploiting factorization at loop level seems to be much harder.} Factorization here means that the amplitude has certain poles whose residues are products of simpler amplitudes.
The requirement of factorization is a very powerful constraint; using it, the BCFW (\cite{Britto2005}) recursion relations allow the computation of all tree-level amplitudes of the \(\mathcal{N} = 4\) theory we will describe in the next section. In the quantum theory graphs with loops appear as well. Graphs with loops correspond to non-trivial integrals, which yield mathematically interesting results. It is an empirical observation that the transcendentality of an \(\ell\)-loop result is bounded from above by \(2 \ell\); for a one-loop quantity the most complicated part can be expressed in terms of dilogarithms. For theories relevant experimentally, like Quantum Chromodynamics (QCD), a one-loop answer will contain not only dilogarithms, but also logarithms and even rational terms. The transcendentality of the answer is not uniform. However, for the special case of \(\mathcal{N} = 4\), the answers are of uniform transcendentality. In some cases, see ref.~\cite{Kotikov2004}, the \(\mathcal{N} = 4\) answer can be obtained from the uniform transcendentality of the more complicated QCD result. \section{The maximally supersymmetric theory} We mentioned previously that the theories with spin are in some sense simpler than theories of scalar (spinless) particles. Even so, there are many possible theories of particles with spin. Supersymmetry is a remarkable symmetry which can transform between particles of different spins. The maximal supersymmetry of a non-gravitational theory in three space and one time dimensions is called \(\mathcal{N} = 4\) supersymmetry. The reason for the name is that \(\mathcal{N} = 1\) supersymmetry is the minimal supersymmetry and the maximal supersymmetry has four times as many supersymmetries as the minimal one. In ref.~\cite{Coleman1967}, Coleman and Mandula proved a theorem about the possible symmetries of a relativistic theory. 
Under certain assumptions they showed that the symmetry group has the structure of a product between the Lorentz and some other `internal' symmetry group. Later, Haag, \L opusza\'nski and Sohnius~\cite{Haag1975} showed that a non-trivial symmetry structure is possible, but it has to be a supergroup symmetry, not a Lie group symmetry. A supergroup is obtained by exponentiating Lie superalgebra elements, where a Lie superalgebra is a \(\mathbb{Z}_2\)-graded algebra with a bracket satisfying graded commutativity and a graded version of the Jacobi identity. The supergroup has a usual Lie group as a subgroup and, somewhat surprisingly, this is also enlarged with respect to a typical relativistic theory. In a relativistic theory the symmetry group is the Poincar\'e group, which now gets enhanced to an \(\grp{SO}(2,4)\) group, also known as the conformal group. The new symmetries are the dilatation \(D\) and four conformal transformations \(K_{0}, \dots, K_{3}\). The theory with maximal supersymmetry was constructed shortly after in ref.~\cite{Brink1977} by Brink, Schwarz and Scherk. This theory is uniquely defined by its symmetry. It is a theory of a connection \(A\) on an \(\grp{SU}(N)\) principal bundle over Minkowski space \(\mathbb{M}\), together with a fermionic field \(\Psi\) and scalar fields \(\Phi\). The action functional is given by the Yang-Mills term together with other terms dictated by supersymmetry, which we do not write explicitly since they will not be important in the following: \begin{equation} S[A, \Psi, \Phi] = \frac 1 {2 g^2} \int_{\mathbb{M}} \tr(F \wedge * F + \dots). \end{equation} Here the trace is taken in the fundamental representation of \(\grp{SU}(N)\) and \(g^2\) is a real number, called the coupling constant. \(F = d A + A \wedge A\) is the curvature of the connection \(A\) and \(* F\) is its Hodge dual. The scattering amplitudes can be expanded as a power series in \(g\). Terms in the perturbative expansion are computed by summing Feynman graphs.
The contribution of a Feynman graph can be factored in two different types of terms: the kinematic part, depending on the positions (or on the momenta after Fourier transform) and the `color' part which depends on the Lie algebra \(\mathfrak{su}(N)\) of the gauge group \(\grp{SU}(N)\). The observables can then be decomposed on a basis of \(\mathfrak{su}(N)\) invariants whose coefficients depend on \(N\) and \(g\). If we select invariants which can be written as a single trace and, for these terms, we select the dominant behavior when \(N \to \infty\), then the topology of the contributing graphs simplifies. We find that only planar graphs contribute. The way to select the planar graph contributions is to reorganize the perturbation theory as an expansion in \(\lambda = g^2 N\) around \(\lambda = 0\), with \(N \to \infty\) and \(g^2 \to 0\). This is the well-known 't Hooft limit~\cite{Hooft1974}. From his study of the large \(N\) limit, 't Hooft conjectured that the result in the 't Hooft limit is the genus zero term in an expansion of a theory which sums over surfaces. A theory which sums over surfaces is a string theory (in a theory of particles, one sums\footnote{The sum over particle histories is not well-defined mathematically. Nevertheless, we can use it formally to compute the perturbative expansion. A similar statement holds for a string theory, where we sum over string histories also called worldsheets.} over particle paths, as instructed by the Feynman path integral). The conjecture also stated that subleading terms in \(N\) correspond to sums over surfaces of higher genera. This conjecture of 't Hooft is very general, and was initially proposed for QCD, where the gauge group \(\grp{SU}(3)\) was to be replaced by \(\grp{SU}(N)\). It was hoped that understanding \(N \to \infty\) case could shed some light on the \(N=3\) case. 
If instead of QCD we consider the \(\mathcal{N}=4\) supersymmetric theory, the conjecture was sharpened by the AdS/CFT correspondence of Maldacena (see ref.~\cite{Maldacena1999}). The AdS/CFT correspondence identifies the precise measure on the space of surfaces. In fact, we should use super-strings, but if we set the fermions to zero we obtain a theory of a string moving in an \(\text{AdS}_5 \times \mathbb{S}^5\) geometry. Here CFT means Conformal Field Theory, which in this case is a theory with a symmetry group containing \(\grp{SO}(2,4)\). The \(\text{AdS}_5\) space is the five-dimensional hyperbolic space with a non-definite metric, which can be obtained by analytically continuing some coordinates to imaginary values (a procedure called Wick rotation in the Quantum Field Theory literature). This is similar to the relation between Euclidean space \(\mathbb{R}^4\) and Minkowski space \(\mathbb{M}\). The isometry group of \(\text{AdS}_5\) is again \(\grp{SO}(2,4)\). In fact, the full \(\grp{PSU}(2,2\vert 4)\) symmetry groups match on both sides of the correspondence. The AdS/CFT duality describes a physical system in two different ways. When the 't Hooft coupling \(\lambda\) is small, the field theory perturbative expansion in powers of \(\lambda\) is reliable. When the 't Hooft coupling is large, string theory on the \(\text{AdS}_5 \times \mathbb{S}^5\) background should be used instead. In this case, the expansion variable is \(\lambda^{-1/2}\). Therefore, the duality is of strong-weak type; the strong coupling regime (\(\lambda \to \infty\)) in the CFT is mapped to a weakly coupled description in the dual string theory. The computation of the scattering amplitudes can also be done in the dual string theory, as described in ref.~\cite{Alday2007a}.
In the dual string theory scattering amplitudes are given by the exponential of the area of a minimal surface in \(\text{AdS}_5\) which ends on the boundary of \(\text{AdS}_5\) on a polygon whose sides are the momenta of the scattered particles (the polygon closes by momentum conservation). \section{Kinematics} \label{sec:kinematics} In this section we describe the kinematics of a scattering process in terms of configurations of points in \(\mathbb{CP}^3\). This was initiated in ref.~\cite{Hodges:2009hk} for tree-level amplitudes, later extended to superspace in ref.~\cite{Mason2009a} and further studied in ref.~\cite{ArkaniHamed:2009dn}. The usefulness of these variables for loop amplitudes was emphasized in ref.~\cite{Arkani-Hamed2011} and also in ref.~\cite{Goncharov2010} for an explicit two-loop result. Consider an \(n\)-particle scattering process. The particle labeled by \(i\) is described by its on-shell momentum \(p_{i}\) (with \(p_{i}^{2} = 0\), where the norm is computed using the Minkowski metric), its helicity \(s_i\) and a gauge algebra generator \(t_{i} \in \mathfrak{su}(N)\). The helicity labels the representation under the compact subgroup \(\grp{U}(1)\) of the Lorentz group \(\grp{O}(1,3)\) which preserves the momentum \(p_i\). In fact, if our theory contains fermions we need to pass to the covering group \(\grp{Spin}(1,3)\) of the part of the Lorentz group connected to the identity. In the end, the representations turn out to be labeled by \(s \in \tfrac{1}{2}\mathbb{Z}\). As we discussed above, in the 't Hooft limit \(N \to \infty\), \(g^{2} N = \lambda\) fixed, only single-trace terms survive in the scattering amplitudes. If we look at one of these single-trace terms, we see that the scattered particles are cyclically ordered. We can therefore introduce a \emph{dual} space with coordinates \(x\) such that the momenta \(p_{i}\) are expressed as \(p_{i} = x_{i-1} - x_{i}\). The \(x_i\) coordinates are only defined up to a translation \(x_i \sim x_i + a\).
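The telescoping structure of the dual coordinates is easy to check explicitly. The following sketch (with arbitrary numerical values standing in for physical kinematics, and ignoring the on-shell conditions \(p_i^2 = 0\)) illustrates that momenta defined by \(p_i = x_{i-1} - x_i\) with a cyclic ordering automatically satisfy momentum conservation, and that the \(x_i\) are defined only up to a common translation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dual coordinates x_1, ..., x_n (four-vectors); the ordering is cyclic,
# so index i-1 is taken modulo n (x[-1] is x[n-1] in Python).
n = 6
x = rng.standard_normal((n, 4))

# Momenta defined by differences of consecutive dual coordinates.
p = np.array([x[i - 1] - x[i] for i in range(n)])

# Momentum conservation holds automatically: the sum telescopes to zero.
assert np.allclose(p.sum(axis=0), 0.0)

# Translating every x_i by the same vector a leaves the momenta unchanged.
a = rng.standard_normal(4)
p_shifted = np.array([(x[i - 1] + a) - (x[i] + a) for i in range(n)])
assert np.allclose(p, p_shifted)
```

This is why the momenta \(p_i\) carry \(n-1\) independent constraints' worth less information than the unconstrained \(x_i\) modulo translation.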
We denote by \(\tilde{\mathbb{M}}\) the space parametrized by the dual coordinates \(x\). The \(\mathcal{N}=4\) super-Yang-Mills theory is superconformally invariant. Besides this superconformal symmetry, the \(\mathcal{N}=4\) super-Yang-Mills theory also has a surprising \emph{dual} superconformal symmetry, whose bosonic subgroup acts on the dual coordinates \(x\). In the following we will mostly be interested in the conformal subgroup of this dual superconformal group. The dual superconformal symmetry is a hidden symmetry, which only arises in the 't Hooft limit. In particular, it is not visible at the level of the Lagrangian of the theory. Historically, this symmetry arose as follows. First, the authors of ref.~\cite{Drummond:2006rz} noticed that integrals appearing in the perturbative computations of refs.~\cite{Anastasiou2003a, Bern2005} have a curious inversion property in the dual space. Together with the obvious Lorentz symmetry, this generates the conformal group. This symmetry was then confirmed, and in fact used to guide the computations, at higher loop orders and for larger numbers of external particles in refs.~\cite{Bern2007, Bern2007a, Bern2008b}. In a parallel development~\cite{Alday2007a}, Alday and Maldacena showed how to compute scattering amplitudes in the dual string theory. This turned out to be closely related to the computation of a Wilson loop (in a language more familiar to mathematicians, a Wilson loop is the trace of the holonomy of the connection \(A\) around a curve). The strong coupling computation leads us to believe that there is a connection between scattering amplitudes and a Wilson loop around a polygonal contour with vertices \(x_i\). This was confirmed also at weak coupling in several papers~\cite{Drummond2008c, Brandhuber2008a, Drummond2008b, Bern2008a, Drummond:2008aq}. Under the duality the scattering amplitudes map to Wilson loops and the dual conformal symmetry of scattering amplitudes maps to the conformal symmetry of the Wilson loops.
Ref.~\cite{Drummond:2008vq} showed that in fact the scattering amplitudes enjoy a dual \emph{super}-conformal symmetry. This corresponds on the dual side to the superconformal symmetry of a Wilson super-loop, which is the trace of the holonomy of a superconnection in superspace along a polygonal contour. The corresponding super-loops were first defined in refs.~\cite{Mason2010, Caron-Huot2011}. The dual space \(\tilde{\mathbb{M}}\) is noncompact and it does not have an action of the conformal group since some points are sent to infinity under conformal transformations. This problem can be solved by compactifying \(\tilde{\mathbb{M}}\) in a way compatible with the action of the conformal group. Moreover, \(\tilde{\mathbb{M}}\) comes with a Minkowski signature. It is more convenient to use complex coordinates instead and to impose reality conditions when needed. Doing this, we can treat both the case of Lorentz signature and that of split signature. The complexified and compactified dual space can be represented as the Grassmannian \(\mathbb{G}(2,4)\) of two-planes in \(\mathbb{C}^{4}\) containing the origin. Therefore, to each point in the dual space \(\tilde{\mathbb{M}}\) we can associate a two-plane in \(\mathbb{C}^{4}\). Two points in dual space are light-like separated if their corresponding planes intersect in a line (it is easy to check that this imposes one constraint). If we projectivize this construction, to a line through the origin in \(\mathbb{C}^{4}\) corresponds a point in \(\mathbb{CP}^{3}\) and to a two-plane through the origin in \(\mathbb{C}^4\) corresponds a projective line in \(\mathbb{CP}^3\). We can do this for all pairs of points \((x_{i-1}, x_{i})\) and associate to each of them a point \(Z_{i} \in \mathbb{CP}^{3}\).
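The statement that light-like separation corresponds to two-planes meeting in a line can be made concrete numerically. The sketch below (over the reals rather than \(\mathbb{C}\), with randomly chosen planes) checks that two generic two-planes through the origin in a four-dimensional space meet only at the origin, while planes sharing a common direction make the stacked \(4\times 4\) matrix of spanning vectors singular, which is a single determinant condition:

```python
import numpy as np

rng = np.random.default_rng(1)

# Represent a two-plane through the origin by a 2x4 matrix whose rows span it.
P1 = rng.standard_normal((2, 4))
P2 = rng.standard_normal((2, 4))

# Generic planes meet only at the origin: the four spanning vectors are independent.
assert abs(np.linalg.det(np.vstack([P1, P2]))) > 1e-8

# Force the planes to share a line: replace one spanning vector of P2 by one of P1.
P2_shared = P2.copy()
P2_shared[0] = P1[0]

# Now the stacked matrix is singular -- a single determinant condition,
# matching the statement that light-like separation imposes one constraint.
assert abs(np.linalg.det(np.vstack([P1, P2_shared]))) < 1e-8
```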
So instead of describing the kinematics by giving the momenta \(p_{i}\) subject to the on-shell conditions \(p_{i}^{2} = 0\) and momentum conservation \(\sum_{i=1}^{n} p_{i} = 0\), we can describe it by giving \(n\) points \(Z_{i} \in \mathbb{CP}^{3}\). The variables \(Z_{i}\) are known as momentum twistors\footnote{A similar construction can be done for Minkowski space \(\mathbb{M}\) instead, in which case we obtain Penrose's twistor space (see ref.~\cite{Penrose:1967wn}).} and were introduced in ref.~\cite{Hodges:2009hk}. Unlike the variables \(p_{i}\) or \(x_{i}\), the momentum twistors are unconstrained. The complexified dual conformal group acts as \(\grp{SL}(4, \mathbb{C})\) on the momentum twistors, \([Z] \to [M Z]\), where \(M\) is an \(\grp{SL}(4, \mathbb{C})\) matrix and we have denoted by \([Z]\) the homogeneous coordinates of the point \(Z\). The group \(\grp{SL}(4, \mathbb{C})\) is the double cover of the complexified orthogonal group \(\grp{SO}(6, \mathbb{C})\). There is a small subtlety here. We defined the Lorentz group to be \(\grp{O}(1,3)\) and its complexification is \(\grp{O}(4, \mathbb{C})\). However, the parity transformation in \(\grp{O}(4, \mathbb{C})\) does not embed in \(\grp{SO}(6, \mathbb{C})\), nor in its double cover \(\grp{SL}(4, \mathbb{C})\). Then, the question is how this discrete parity transformation acts on the momentum twistor space. The answer is as follows. There is another space which, for lack of a better name, we call the conjugate momentum twistor space, whose points we label by \(W_i\). There is a pairing between points in these two spaces, defined up to rescaling, which we denote by \(W \cdot Z\). Then we impose the rescaling-invariant constraints \(W_{i} \cdot Z_{i} = 0\) and \(W_{i-1} \cdot Z_{i} = W_{i+1} \cdot Z_{i} = 0\) (here \(i \pm 1\) are considered modulo \(n\), the number of particles in the scattering process). Given the \(Z_i\), the \(W_i\) are determined up to a rescaling.
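One way to see that the \(W_i\) are determined up to rescaling is to solve the constraints numerically: \(W_i\) must annihilate \(Z_{i-1}\), \(Z_i\) and \(Z_{i+1}\), so it spans the kernel of a generically rank-three \(3\times 4\) matrix. A sketch (real random twistors for illustration; the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(2)

# Homogeneous coordinates of n momentum twistors Z_i as columns of a 4 x n matrix
# (random values for illustration; real kinematics would impose reality conditions).
n = 6
Z = rng.standard_normal((4, n))

def conjugate_twistor(Z, i):
    """W_i spans the null space of the 3x4 matrix with rows Z_{i-1}, Z_i, Z_{i+1}."""
    n = Z.shape[1]
    rows = np.array([Z[:, (i - 1) % n], Z[:, i], Z[:, (i + 1) % n]])
    # The last right-singular vector spans the (generically 1-dimensional) kernel.
    return np.linalg.svd(rows)[2][-1]

# Check the defining constraints W_i . Z_{i-1} = W_i . Z_i = W_i . Z_{i+1} = 0.
for i in range(n):
    W = conjugate_twistor(Z, i)
    for j in (i - 1, i, i + 1):
        assert abs(W @ Z[:, j % n]) < 1e-8
```

Since the kernel is one-dimensional for generic \(Z\), any other solution is a rescaling of the one returned here, matching the statement in the text.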
Then, parity acts as the discrete transformation \(Z_{i} \leftrightarrow W_{i}\). The translation of the kinematics to momentum twistor language makes it easy to build conformal invariants. In order to make \(\grp{SL}(4, \mathbb{C})\) invariants, we can form four-brackets \(\langle i j k l\rangle = \text{Vol}(v_{i}, v_{j}, v_{k}, v_{l})\), where \(v_{i}\) is a vector in \(\mathbb{C}^{4}\) corresponding to \(Z_{i}\) and \(\text{Vol}\) is a volume form which is preserved by the action of \(\grp{SL}(4, \mathbb{C})\). So we have established that we can describe the kinematics of a scattering process by giving a configuration of \(n\) ordered points \(Z_i\) in \(\mathbb{CP}^3\). The homogeneous coordinates of these points fit in a \(4 \times n\) matrix. The conformal invariants are built from the \(4 \times 4\) minors of this \(4 \times n\) matrix. The description above is very similar to the description of coordinates on a Grassmannian. For \(k \leq n\), the Grassmannian \(\mathbb{G}(k,n)\) of \(k\)-planes in an \(n\)-dimensional space can be described as the space of \(k \times n\) matrices of full rank modulo the left action by \(\grp{GL}(k)\). Given such a \(k \times n\) matrix, we can form \(\binom{n}{k}\) minors of type \(k \times k\). They can be labeled by \(k\) integers \(i_{1}, \dotsc, i_{k} \in \lbrace 1, \dotsc, n\rbrace\), corresponding to the columns of the initial \(k \times n\) matrix. We will denote the determinants of these minors by \(\langle i_{1}, \dotsc, i_{k}\rangle\). These determinants are also known as Pl\"ucker coordinates, and satisfy Pl\"ucker relations \begin{equation} \label{eq:plucker-rel} \langle i, k, I\rangle \langle j, l, I\rangle = \langle i, j, I\rangle \langle k, l, I\rangle + \langle j, k, I\rangle \langle i, l, I\rangle, \end{equation} where \(I\) is a multi-index with \(k-2\) entries. 
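The three-term relation in eq.~(\ref{eq:plucker-rel}) can be verified numerically for all index choices at once. The following sketch checks it on a randomly chosen point of \(\mathbb{G}(3,6)\), with the multi-index \(I\) consisting of a single entry:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

# A random full-rank 3 x 6 integer matrix representing a point of G(3,6).
M = rng.integers(-5, 6, size=(3, 6)).astype(float)

def br(*cols):
    """Pluecker coordinate <i j k>: the 3x3 minor on the given columns."""
    return np.linalg.det(M[:, list(cols)])

# Check the three-term Pluecker relation
#   <i k I><j l I> = <i j I><k l I> + <j k I><i l I>
# for every i < j < k < l and every single-entry multi-index I = (m).
for i, j, k, l in combinations(range(6), 4):
    for m in range(6):
        if m in (i, j, k, l):
            continue
        lhs = br(i, k, m) * br(j, l, m)
        rhs = br(i, j, m) * br(k, l, m) + br(j, k, m) * br(i, l, m)
        assert np.isclose(lhs, rhs)
```

The relation is a polynomial identity among minors, so it holds for any matrix, not just this random choice.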
The Pl\"ucker relations define an embedding, called Pl\"ucker embedding, of the Grassmannian into a projective space of dimension \(\binom{n}{k}\). In the next section we will show that the Pl\"ucker relations in eq.~(\ref{eq:plucker-rel}) are the same as the exchange relations in a cluster algebra (see eq.~(\ref{eq:mutation}, for example). This will also provide a way to build more complicated coordinates starting from simple minors. Such combinations naturally appear in expressions for scattering amplitudes in \(\mathcal{N} = 4\). Grassmannians have the important property of duality which identifies \(\mathbb{G}(k, n)\) with \(\mathbb{G}(n-k, n)\). This is useful since it allows to simplify the geometric picture (as has been done in refs.~\cite{Goncharov2010, Golden:2013xva}). Consider first the case \(n=6\). The kinematics is described by a configuration of six ordered points in \(\mathbb{CP}^3\) or by the Grassmannian \(\mathbb{G}(4,6)\). By Grassmannian duality this is the same as \(\mathbb{G}(2,6)\) which then can be translated to a configuration of six ordered points in \(\mathbb{CP}^1\), a much simpler-looking (though equivalent) geometric configuration. A similar simplification can be performed for the case of \(n=7\), where a configuration of seven points in \(\mathbb{CP}^3\) can be mapped to a configuration of seven points in \(\mathbb{CP}^2\). In general, this means that the configurations of \(n\) ordered points in \(\mathbb{CP}^{k-1}\) are the same as configurations of \(n\) ordered points in \(\mathbb{CP}^{n-k-1}\). Therefore we can restrict to \(2 \leq k \leq \lfloor\tfrac {n-1} 2\rfloor\) without loss of generality. \section{Introduction to cluster algebras} \label{sec:intr-clust-algebr} In this section we present some useful facts about cluster algebras. In the next section we will make the connection with Grassmannians and Pl\"ucker coordinates. 
Cluster algebras have been introduced in a series of papers~\cite{1021.16017, 1054.17024, 1135.16013, 1127.16023} by Fomin and Zelevinsky. Since the formal definition is a bit complicated, we will content ourselves with an informal description. Cluster algebras are characterized as follows: they are commutative algebras constructed from distinguished generators (called \emph{cluster variables}) which are grouped into non-disjoint sets of constant cardinality (called \emph{clusters}). The clusters are constructed recursively by an operation called \emph{mutation} from an initial cluster. The number of variables in a cluster is called the rank of the cluster algebra. Let us consider an example. The \(A_{2}\) cluster algebra is defined by the following data: \begin{itemize} \item cluster variables: \(x_{m}, \quad m \in \mathbb{Z}\) \item clusters: \(\lbrace x_{m}, x_{m+1}\rbrace\) \item initial cluster: \(\lbrace x_{1}, x_{2}\rbrace\) \item rank: \(2\) \item exchange relations: \(x_{m-1} x_{m+1} = 1 + x_{m}\) \item mutation: \(\lbrace x_{m-1}, x_{m}\rbrace \to \lbrace x_{m}, x_{m+1}\rbrace\). \end{itemize} Using the exchange relations we find that \begin{equation} x_{3} = \frac {1+x_{2}}{x_{1}},\quad x_{4} = \frac {1+x_{1}+x_{2}}{x_{1} x_{2}},\quad x_{5} = \frac {1+x_{1}}{x_{2}},\quad x_{6} = x_{1}, \quad x_{7} = x_{2}, \quad \dots . \end{equation} Therefore, the sequence \(x_{m}\) is periodic with period five and the number of cluster variables is finite. When expressing the cluster variables \(x_{m}\) in terms of the variables \((x_{1}, x_{2})\), we encounter two unexpected features (which hold in general for arbitrary cluster algebras). First, the denominators of the cluster variables are always monomials. In general, we expect the cluster variables to be rational functions of the initial cluster variables, but in fact the denominator is always a monomial. This is known under the name of the ``Laurent phenomenon'' (see ref.~\cite{1021.16017}).
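The five-periodicity is easy to observe by iterating the exchange relation in exact rational arithmetic. A sketch (the initial values are an arbitrary generic choice):

```python
from fractions import Fraction

# Generic rational initial values for the initial cluster {x_1, x_2}.
x1, x2 = Fraction(3, 2), Fraction(5, 7)

# Iterate the exchange relation x_{m-1} x_{m+1} = 1 + x_m.
seq = [x1, x2]
for _ in range(8):
    seq.append((1 + seq[-1]) / seq[-2])

# The closed forms quoted above, e.g. x_4 = (1 + x_1 + x_2)/(x_1 x_2).
assert seq[2] == (1 + x2) / x1
assert seq[3] == (1 + x1 + x2) / (x1 * x2)
assert seq[4] == (1 + x1) / x2

# The sequence is periodic with period five: x_6 = x_1, x_7 = x_2, ...
assert seq[5:10] == seq[:5]
```

Numerical iteration of course only confirms periodicity for the chosen initial values; the symbolic statement, including the Laurent phenomenon, is the content of the references above.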
The second observation is that the numerator is a polynomial with positive coefficients. As we alluded to before, this construction has a connection with the Pl\"ucker relations. If we set \(x_1 = \tfrac {\langle 23\rangle \langle 14\rangle}{\langle 12\rangle \langle 34\rangle}\) and \(x_2 = \tfrac {\langle 13\rangle \langle 45\rangle}{\langle 34\rangle \langle 15\rangle}\), where \(\langle i j\rangle\) are coordinates of the Grassmannian \(\mathbb{G}(2,5)\), we can compute the rest of the cluster variables by using the Pl\"ucker identities \(\langle i k\rangle \langle j l\rangle = \langle i j\rangle \langle k l\rangle + \langle i l\rangle \langle j k\rangle\), to obtain \begin{gather*} x_1 = \frac {\langle 23\rangle \langle 14\rangle}{\langle 12\rangle \langle 34\rangle},\quad x_2 = \frac {\langle 13\rangle \langle 45\rangle}{\langle 34\rangle\langle 15\rangle},\quad x_3 = \frac {\langle 12\rangle\langle 35\rangle}{\langle 15\rangle\langle 23\rangle},\quad x_4 = \frac {\langle 25\rangle\langle 34\rangle}{\langle 23\rangle\langle 45\rangle},\quad x_5 = \frac {\langle 15\rangle\langle 24\rangle}{\langle 12\rangle\langle 45\rangle}. \end{gather*} In the following we will use a description of cluster algebras starting from a quiver. We now describe how to obtain a cluster algebra from a quiver. A quiver is an oriented graph which we will require to be connected, finite, without loops (arrows whose origin and target coincide) and without two-cycles (pairs of arrows going in opposite directions between two vertices). Starting from a quiver and a chosen vertex \(k\), we define a new quiver obtained by mutating at vertex \(k\). The new quiver is obtained by applying the following operations on the initial quiver: \begin{itemize} \item for each path \(i \to k \to j\) add an arrow \(i \to j\), \item reverse all the arrows on the edges incident with \(k\), \item remove all the two-cycles that may have formed.
\end{itemize} The mutation at \(k\) is an involution; when applied twice in succession we obtain the initial quiver. Once we fix an ordering of the vertices, quivers of the restricted type defined above are in one-to-one correspondence with skew-symmetric integer matrices. The skew-symmetric matrix \(b\) is such that \(b_{i j}\) is the difference between the number of arrows \(i \to j\) and the number of arrows \(j \to i\). Since only one of the two terms above is nonvanishing, \(b_{i j} = -b_{j i}\). Under a mutation at vertex \(k\) the matrix \(b\) transforms to \(b'\) given by \begin{equation} \label{eq:b-mutation} b'_{i j} = \begin{cases} -b_{i j}, &\quad \text{if \(k \in \lbrace i, j\rbrace\)},\\ b_{i j}, &\quad \text{if \(b_{i k} b_{k j} \leq 0\)},\\ b_{i j} + b_{i k} b_{k j}, &\quad \text{if \(b_{i k}, b_{k j} > 0\)},\\ b_{i j} - b_{i k} b_{k j}, &\quad \text{if \(b_{i k}, b_{k j} < 0\)} \end{cases}. \end{equation} If we start with a quiver with \(n\) vertices and associate to each vertex \(i\) a variable \(x_{i}\), we can use the skew-symmetric matrix \(b\) to define a mutation relation at the vertex \(k\) by \begin{equation} \label{eq:mutation} x_{k} x_{k}' = \prod_{i \vert b_{i k} > 0} x_{i}^{b_{i k}} + \prod_{i \vert b_{i k} < 0} x_{i}^{-b_{i k}}, \end{equation} with the understanding that an empty product is set to one. The mutation at \(k\) changes \(x_{k}\) to \(x_{k}'\) defined by eq.~(\ref{eq:mutation}) and leaves the other cluster variables unchanged. The \(A_{2}\) cluster algebra can be expressed by a quiver \(x_{1} \to x_{2}\). Then, a mutation at \(x_{1}\) replaces it by \(x_{1}' = \frac {1+x_{2}}{x_{1}} \equiv x_{3}\) and reverses the arrow. A mutation at \(x_{2}\) replaces it by \(x_{2}' = \frac {1+x_{1}}{x_{2}} \equiv x_{5}\). In the diagram~\eqref{eq:pentagon} below we represent the quivers and the mutations for the \(A_2\) cluster algebra (the arrows between quivers are labeled by the mutated variable).
\begin{equation} \label{eq:pentagon} \begin{xy} 0;<1pt,0pt>:<0pt,-1pt>:: (100,30) *+{\framebox{$x_3 \leftarrow x_2$}} ="0", (85,70) *+{\framebox{$x_3 \to x_4$}} ="1", (10,70)*+{\framebox{$x_5 \leftarrow x_4$}} ="2", (0,30) *+{\framebox{$x_5 \to x_1$}} ="3", (50,0) *+{\framebox{$x_1 \to x_2$}} ="4", "0", {\ar^{x_2} "1"}, "4", {\ar^{x_1} "0"}, "1", {\ar^{x_3} "2"}, "2", {\ar^{x_4} "3"}, "3", {\ar^{x_5} "4"}, \end{xy} \end{equation} \section{The cluster algebra for \texorpdfstring{\(\mathbb{G}(k,n)\)}{G(k,n)}} \label{sec:clust-algebra} The Grassmannian \(\mathbb{G}(k,n)\) has a cluster algebra structure which was described in ref.~\cite{MR2078567} (this construction is also reviewed in ref.~\cite{1215.16012}). For \(k < n\) we consider the description of the Grassmannian \(\mathbb{G}(k,n)\) as the equivalence classes of \(k \times n\) matrices of full rank, where two matrices are equivalent if they differ by the left action of a \(\grp{GL}(k)\) matrix. If the leftmost \(k \times k\) minor is non-singular, i.e.\ \(\langle 1, \dotsc, k\rangle \neq 0\) then, by left multiplication with an appropriate \(\grp{GL}(k)\) matrix, we can transform it to the identity matrix. After this operation the representative \(k \times n\) matrix has the form \((\mathbf{1}_{k}, Y)\), where \(\mathbf{1}_{k}\) is the \(k \times k\) identity matrix and \(Y\) is a \(k \times l\) matrix with \(l = n-k\). The entries \(y_{i j}\), \(1 \leq i \leq k\), \(1 \leq j \leq l\) of the matrix \(Y\) are coordinates on the cell of the Grassmannian where \(\langle 1, \dotsc, k\rangle \neq 0\). Now we define a matrix \(F_{i j}\) for \(1 \leq i \leq k\), \(1 \leq j \leq l\), which is the biggest square matrix which fits inside \(Y\) and whose lower-left corner is at position \((i,j)\) inside \(Y\). Then we define \(l(i,j) = \min(i-1, n-j-k)\) and \begin{equation} f_{i j} = (-1)^{(k-i)(l(i,j)-1)} \det F_{i j}. 
\end{equation} According to ref.~\cite{MR2078567}, the initial quiver for the \(\mathbb{G}(k,n)\) cluster algebra is given by\footnote{Here we present a version of the quiver which is flipped and has its arrows reversed with respect to the quivers of refs.~\cite{MR2078567, 1215.16012}.} \begin{equation} \label{eq:initial-quiver-gkn} \begin{xy} 0;<1pt,0pt>:<0pt,-1pt>:: (0,0) *+{f_{1 l}} ="0", (50,0) *+{\cdots} ="1", (100,0) *+{f_{13}} ="2", (150,0) *+{f_{12}} ="3", (200,0) *+{\framebox[5ex]{$f_{11}$}} ="4", (0,50) *+{f_{2 l}} ="5", (50,50) *+{\cdots} ="6", (100,50) *+{f_{23}} ="7", (150,50) *+{f_{22}} ="8", (200,50) *+{\framebox[5ex]{$f_{21}$}} ="9", (0,100) *+{\vdots} ="10", (50,100) *+{\vdots} ="11", (100,100) *+{\vdots} ="12", (150,100) *+{\vdots} ="13", (200,100) *+{\vdots} ="14", (0,150) *+{\framebox[5ex]{$f_{kl}$}} ="15", (50,150) *+{\cdots} ="16", (100,150) *+{\framebox[5ex]{$f_{k3}$}} ="17", (150,150) *+{\framebox[5ex]{$f_{k2}$}} ="18", (200,150) *+{\framebox[5ex]{$f_{k1}$}} ="19", "0", {\ar"1"}, "0", {\ar"5"}, "6", {\ar"0"}, "1", {\ar"2"}, "2", {\ar"3"}, "2", {\ar"7"}, "8", {\ar"2"}, "3", {\ar"4"}, "3", {\ar"8"}, "9", {\ar"3"}, "5", {\ar"6"}, "5", {\ar"10"}, "6", {\ar"7"}, "7", {\ar"8"}, "7", {\ar"12"}, "13", {\ar"7"}, "8", {\ar"9"}, "8", {\ar"13"}, "14", {\ar"8"}, "10", {\ar"15"}, "12", {\ar"17"}, "19", {\ar"13"}, "18", {\ar"12"}, "13", {\ar"18"}, \end{xy} \end{equation} The quiver above has two types of vertices, boxed and unboxed. The boxed vertices are special and are called \emph{frozen vertices}. We do not allow mutations at the frozen vertices. The variables associated to the frozen vertices are called \emph{coefficients} instead of \emph{cluster variables}. We define the \emph{principal part} of such a quiver to be the quiver obtained by erasing the frozen vertices and the edges incident with them.
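The construction of the \(f_{ij}\) is easy to put in code. The sketch below (a Python illustration; the helper names and the sample entries of \(Y\) are mine) builds the matrices \(F_{ij}\) for \(\mathbb{G}(2,5)\) and compares the resulting \(f_{ij}\) with the Pl\"ucker coordinates of the representative \((\mathbf{1}_{k}, Y)\):

```python
def det(m):
    # Laplace expansion along the first row; fine for tiny matrices
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] * det([row[:c] + row[c+1:] for row in m[1:]])
               for c in range(len(m)))

k, n = 2, 5
l = n - k
Y = [[3, 1, 4],
     [1, 5, 9]]                      # arbitrary sample entries

# columns of the representative (1_k, Y)
cols = [[1 if r == c else 0 for r in range(k)] for c in range(k)] \
     + [[Y[r][j] for r in range(k)] for j in range(l)]

def plucker(cols, idx):
    # <i1 ... ik>: k x k minor built from the chosen columns (0-indexed)
    return det([[cols[c][r] for c in idx] for r in range(len(idx))])

def f(i, j):
    # biggest square submatrix of Y with lower-left corner at (i, j), with sign
    s = min(i, l - j + 1)
    F = [[Y[i - s + a][j - 1 + b] for b in range(s)] for a in range(s)]
    lij = min(i - 1, n - j - k)
    sign = -1 if ((k - i) * (lij - 1)) % 2 else 1
    return sign * det(F)

# f11=<23>, f12=<24>, f13=<25>, f21=<34>, f22=<45>, f23=<15>,
# with the column labels of the brackets written 0-indexed below
expected = {(1, 1): (1, 2), (1, 2): (1, 3), (1, 3): (1, 4),
            (2, 1): (2, 3), (2, 2): (3, 4), (2, 3): (0, 4)}
agreement = all(f(i, j) == plucker(cols, v) for (i, j), v in expected.items())
```

The comparison reproduces the identification of the \(f_{ij}\) with Pl\"ucker coordinates quoted below for \(n=5\), \(k=2\).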
For the case \(n=5\) and \(k=2\), we can compute \(f_{11} = \langle 23\rangle\), \(f_{12} = \langle 24\rangle\), \(f_{13} = \langle 25\rangle\), \(f_{21} = \langle 34\rangle\), \(f_{22} = \langle 45\rangle\), \(f_{23} = \langle 15\rangle\). Then, the initial quiver diagram looks like below \begin{equation} \label{eq:g25} \begin{xy} 0;<1pt,0pt>:<0pt,-1pt>:: (25,25) *+{\langle 25\rangle} ="0", (75,25) *+{\langle 24\rangle} ="1", (125,25) *+{\framebox[5ex]{$\langle 23\rangle$}} ="2", (125,75) *+{\framebox[5ex]{$\langle 34\rangle$}} ="3", (75,75) *+{\framebox[5ex]{$\langle 45\rangle$}} ="4", (25,75) *+{\framebox[5ex]{$\langle 15\rangle$}} ="5", (0,0) *+{\framebox[5ex]{$\langle 12\rangle$}} ="6", "0", {\ar"1"}, "4", {\ar"0"}, "0", {\ar"5"}, "6", {\ar"0"}, "1", {\ar"2"}, "3", {\ar"1"}, "1", {\ar"4"}, \end{xy} \end{equation} where we have also included explicitly a frozen variable \(\langle 12\rangle\) which is equal to unity in the special parametrization we chose (on the part of the Grassmannian where \(\langle 12\rangle \neq 0\)). After doing a mutation on the node \(\langle 24\rangle\), which replaces it by \(\langle 35\rangle\) by virtue of the Pl\"ucker identity \(\langle 24\rangle \langle 35\rangle = \langle 23\rangle \langle 45\rangle + \langle 25\rangle \langle 34\rangle\), we obtain a similar quiver diagram where the frozen vertex \(\langle 23\rangle\) is linked to both unfrozen nodes instead of \(\langle 45\rangle\). Just like in the four-point case the arrows incident with the mutated node get reversed and the link between \(\langle 25\rangle\) and \(\langle 45\rangle\) gets deleted and replaced with a link \(\langle 25\rangle \to \langle 23\rangle\). It is easy to see that by mutating one gets five similar quivers and nothing more. The principal part of the quiver for configurations of five points in \(\mathbb{CP}^{1}\) is the same as the Dynkin diagram of the \(A_{2}\) Lie algebra. Indeed, this \emph{is} the \(A_2\) cluster algebra we discussed in sec.~\ref{sec:intr-clust-algebr}. The appearance of the \(A_{2}\) Dynkin diagram provides the motivation for the name.
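The identification with the \(A_2\) cluster algebra can also be checked numerically: evaluating the cross-ratios \(x_1, \dotsc, x_5\) of sec.~\ref{sec:intr-clust-algebr} on the Pl\"ucker coordinates of an explicit \(2 \times 5\) matrix, the exchange relations \(x_{k-1} x_{k+1} = 1 + x_k\) hold exactly. A minimal sketch (the sample matrix is arbitrary, subject to all minors being nonzero):

```python
from fractions import Fraction

# a sample 2x5 matrix with all 2x2 minors nonzero
M = [[1, 1, 1, 1, 1],
     [1, 2, 3, 5, 8]]

def br(i, j):
    # Plucker coordinate <ij> (1-indexed columns)
    return Fraction(M[0][i-1] * M[1][j-1] - M[0][j-1] * M[1][i-1])

# the five cluster variables of the A2 algebra, as cross-ratios
x = [None,
     br(2, 3) * br(1, 4) / (br(1, 2) * br(3, 4)),
     br(1, 3) * br(4, 5) / (br(3, 4) * br(1, 5)),
     br(1, 2) * br(3, 5) / (br(1, 5) * br(2, 3)),
     br(2, 5) * br(3, 4) / (br(2, 3) * br(4, 5)),
     br(1, 5) * br(2, 4) / (br(1, 2) * br(4, 5))]

# A2 exchange relations x_{k-1} x_{k+1} = 1 + x_k, indices cyclic mod 5
checks = [x[(k - 2) % 5 + 1] * x[k % 5 + 1] == 1 + x[k] for k in range(1, 6)]
```

All five relations hold as exact rational identities, as guaranteed by the Pl\"ucker relations.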
We can define scaling-invariant cross-ratios associated to any unfrozen node by taking the ratio of the product of the coordinates at the vertices which can be reached by going against the incoming arrows to the product of the coordinates at the vertices which can be reached by following the outgoing arrows. For example, the cross-ratio corresponding to \(\langle 24\rangle\) in the quiver~\eqref{eq:g25} is given by \(\tfrac {\langle 25\rangle \langle 34\rangle}{\langle 23\rangle \langle 45\rangle}\). A mutation reverses the arrows and therefore transforms these ratios to their inverse. These cross-ratios are the cluster variables of the \(A_2\) algebra, and the exchange relations following from the quiver description can be shown to be the same as the exchange relations of the \(A_2\) algebra. More complicated cases appear for six points in \(\mathbb{CP}^{2}\), where we obtain a \(D_{4}\) Dynkin diagram. We can start with an initial quiver at the left below and mutate at vertex \(\langle 236\rangle\) to obtain the principal part of the quiver shown at right, which is the same as the Dynkin diagram of \(D_{4}\).
\begin{equation} \label{eq:d4} \begin{xy} 0;<1pt,0pt>:<0pt,-1pt>:: (30,30) *+{\langle 236\rangle} ="0", (30,80) *+{\langle 136\rangle} ="1", (30,130) *+{\framebox[7ex]{$\langle 126\rangle$}} ="2", (80,130) *+{\framebox[7ex]{$\langle 156\rangle$}} ="3", (80,80) *+{\langle 356\rangle} ="4", (80,30) *+{\langle 235\rangle} ="5", (130,30) *+{\framebox[7ex]{$\langle 234\rangle$}} ="6", (130,80) *+{\framebox[7ex]{$\langle 345\rangle$}} ="7", (130,130) *+{\framebox[7ex]{$\langle 456\rangle$}} ="8", (0,0) *+{\framebox[7ex]{$\langle 123\rangle$}} ="9", "0", {\ar"1"}, "4", {\ar"0"}, "0", {\ar"5"}, "9", {\ar"0"}, "1", {\ar"2"}, "3", {\ar"1"}, "1", {\ar"4"}, "4", {\ar"3"}, "5", {\ar"4"}, "4", {\ar"7"}, "8", {\ar"4"}, "5", {\ar"6"}, "7", {\ar"5"}, \end{xy} \qquad \qquad \begin{xy} 0;<1pt,0pt>:<0pt,-1pt>:: (30,30) *+{\bullet} ="0", (30,80) *+{\bullet} ="1", (80,80) *+{\bullet} ="4", (80,30) *+{\bullet} ="5", "1", {\ar"0"}, "0", {\ar"4"}, "5", {\ar"0"}, \end{xy} \end{equation} We should note that for the quiver in~\eqref{eq:d4}, the cross-ratio corresponding to the node \(\langle 356\rangle\) is given by \(\tfrac {\langle 136\rangle \langle 235\rangle \langle 456\rangle}{\langle 156\rangle \langle 236\rangle \langle 345\rangle}\). This is more complicated than the cross-ratios which were obtained previously and it has some interesting properties. It appeared already in~\cite{Goncharov1995} (before cluster algebras were discovered), in connection with functional equations for the trilogarithm. For a geometrical interpretation of this quantity see sec.~\ref{sec:projective-geometry} and figs.~\ref{fig:triple1}, \ref{fig:triple2}, \ref{fig:triple3}. In ref.~\cite{1054.17024}, Fomin and Zelevinsky showed that a cluster algebra is of finite type (i.e.\ it has a finite number of cluster variables) if the principal part of its quiver can be transformed to a Dynkin diagram by a sequence of mutations.
Furthermore, if the principal part of the quiver contains a subgraph which is an affine Dynkin diagram, then the cluster algebra is of infinite type. Using this characterization, one can show that the cluster algebras arising from \(\mathbb{G}(2,n)\), \(\mathbb{G}(3,6)\), \(\mathbb{G}(3,7)\) and \(\mathbb{G}(3,8)\) are of finite type. In ref.~\cite{1088.22009}, Scott has shown that all the other \(\mathbb{G}(k,n)\) with \(2 \leq k \leq \tfrac n 2\) are of infinite type. This has striking implications for scattering amplitudes in \(\mathcal{N}=4\) super-Yang-Mills theory which, as we have reviewed, are based on the Grassmannians \(\mathbb{G}(4,n)\) for \(n \geq 6\). If \(n=6\) we obtain \(\mathbb{G}(4,6) = \mathbb{G}(2,6)\), which is of finite type. If \(n=7\) we obtain \(\mathbb{G}(4,7) = \mathbb{G}(3,7)\), which is again of finite type. However, starting at eight points the cluster algebras are not of finite type anymore. Notice that the seeds we have been using break the cyclic symmetry of the configuration of points. In order to see that the cyclic symmetry is preserved we need to show that two quivers whose labels are permuted by one unit are linked by a sequence of mutations. This can be shown in full generality (see ref.~\cite{Golden:2013xva} for details). So far, the most studied cases have been \(\mathbb{G}(4,n)\) for \(n=6, 7\). The case \(n=8\) is more complicated, in part because the cluster algebra is of infinite type. In the remainder of this section we will list a few of the cluster coordinates appearing for \(\mathbb{G}(4,8)\) and discuss their properties. By using mutations, one encounters \begin{equation} \langle 1 2 (3 4 5) \cap (6 7 8)\rangle \equiv \langle 1 3 4 5\rangle \langle 2 6 7 8\rangle - \langle 2 3 4 5\rangle \langle 1 6 7 8\rangle.
\end{equation} Here, the \(\cap\) notation emphasizes the following geometrical fact: the composite bracket \(\langle 1 2 (3 4 5) \cap (6 7 8)\rangle\) vanishes whenever the projective line \((3 4 5) \cap (6 7 8)\), obtained by intersecting the two projective planes \((3 4 5)\) and \((6 7 8)\), lies in the same projective plane as the points \(1\) and \(2\). This notation was introduced in ref.~\cite{Arkani-Hamed2011}. Already for \(n=7\) we encounter \(\langle 1 2 (3 4 5) \cap (5 6 7)\rangle\), when expressed in \(\mathbb{CP}^{3}\) language. In previous work (see ref.~\cite{Goncharov1995}) a different notation was used for this quantity. First, a transformation to \(\mathbb{CP}^2\) language is performed. Points in \(\mathbb{CP}^2\) can be represented as vectors in \(\mathbb{C}^3\), modulo rescalings. For two three-vectors \(v_1\), \(v_2\) we have a notion of vector product \(v_1 \times v_2\), which is the vector orthogonal to the plane spanned by \(v_1\) and \(v_2\). Then, the composite brackets containing \(\cap\) can be translated to \begin{equation} \langle v_1 \times w_1, v_2 \times w_2, v_3 \times w_3\rangle = \langle v_1 v_2 w_2\rangle \langle w_1 v_3 w_3\rangle - \langle w_1 v_2 w_2\rangle \langle v_1 v_3 w_3\rangle. \end{equation} Above, the right-hand side does not have the same manifest symmetry as the left-hand side, so more equivalent expressions can be found by applying permutations to the vector labels. Notice that the left-hand side vanishes when \(v_1 \times w_1\) and \(v_2 \times w_2\) differ by a rescaling. This is equivalent to the statement that the planes spanned by \((v_1, w_1)\) and \((v_2, w_2)\) are identical. Hence, \(\langle v_1 v_2 w_2\rangle = 0\) and \(\langle w_1 v_2 w_2\rangle = 0\), so the right-hand side vanishes as well. Since the \(\mathbb{G}(4,8)\) cluster algebra is infinite, we are bound to find more and more complicated expressions.
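The translated identity is easy to verify numerically; the following sketch (the helper names are mine) checks it exactly on random integer vectors:

```python
import random

def det3(a, b, c):
    # triple product <a b c>: determinant of the 3x3 matrix with rows a, b, c
    return (a[0] * (b[1]*c[2] - b[2]*c[1])
          - a[1] * (b[0]*c[2] - b[2]*c[0])
          + a[2] * (b[0]*c[1] - b[1]*c[0]))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

random.seed(1)
vec = lambda: tuple(random.randint(-9, 9) for _ in range(3))
ok = True
for _ in range(100):
    v1, w1, v2, w2, v3, w3 = (vec() for _ in range(6))
    lhs = det3(cross(v1, w1), cross(v2, w2), cross(v3, w3))
    rhs = det3(v1, v2, w2) * det3(w1, v3, w3) - det3(w1, v2, w2) * det3(v1, v3, w3)
    ok = ok and lhs == rhs
```

Since everything is integer arithmetic, the check is exact rather than approximate.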
One remarkable feature of the mutations is that the denominator can always be canceled by the numerator, after using Pl\"ucker identities. Therefore, these coordinates always seem to be \emph{polynomials} in the Pl\"ucker coordinates. This is an analog of the Laurent phenomenon, but this time we obtain polynomials.\footnote{This holds in many explicit examples, but I have not found a proof in the literature.} As an example in \(\mathbb{G}(4,8)\), we have the following identity \begin{equation} \frac {\langle 1 2 3 7\rangle \langle 1 2 4 5\rangle \langle 1 6 7 8\rangle + \langle 1 2 7 8\rangle \langle 4 5 (6 7 1) \cap (1 2 3)\rangle}{\langle 1 2 6 7\rangle} = \langle 4 5 (7 8 1) \cap (1 2 3)\rangle. \end{equation} Here the left-hand side is the expression obtained following a mutation, while the right-hand side is the expression where the denominator has been canceled. Even more complicated coordinates can be generated. As an example, we also find \begin{equation} \langle (1 2 3) \cap (3 4 5), (5 6 7) \cap (7 8 1)\rangle. \end{equation} This vanishes when the lines \((1 2 3) \cap (3 4 5)\) and \((5 6 7) \cap (7 8 1)\) intersect. Equivalently, we can say that the lines \((3 4 5) \cap (5 6 7)\) and \((7 8 1) \cap (1 2 3)\) intersect. \section{Poisson brackets} \label{sec:poisson-brackets} One can define a Poisson bracket on the cluster coordinates. It is enough to define the Poisson bracket between the coordinates in a given cluster. If \(X_i\), \(X_j\) belong to the same cluster, i.e.\ they are vertices in the same quiver, then their Poisson bracket is defined as \begin{equation} \label{eq:poisson-x-coords} \lbrace X_{i}, X_{j}\rbrace = b_{i j} X_{i} X_{j}, \end{equation} where \(b_{i j} = -b_{j i}\) is the \(b\) matrix of the cluster. The Poisson bracket is compatible with mutations. 
That is, \begin{equation} \lbrace X_{i}', X_{j}'\rbrace = b_{i j}' X_{i}' X_{j}', \end{equation} where \(X_{i}'\) and \(b_{i j}'\) are obtained by a mutation from \(X_{i}\) and \(b_{i j}\), respectively. The Poisson structure is easiest to understand for \(\mathbb{G}(2,n)\) cluster algebras (see ref.~\cite{MR2567745} for a discussion). To a configuration of \(n\) points in \(\mathbb{CP}^{1}\) with a cyclic ordering we associate a convex polygon. Each of the vertices of this polygon corresponds to one of the \(n\) points. Then consider a complete triangulation of the polygon. Each of the \(n-3\) diagonals in this triangulation determines a quadrilateral and therefore four points in \(\mathbb{CP}^{1}\). Suppose a diagonal \(E\) determines a quadrilateral with vertices \(i,j,k,l\), where the ordering is the same as the ordering of the initial polygon. Using these four points we can form a cross-ratio \(r(i,j,k,l) = \frac {z_{i j} z_{k l}}{z_{j k} z_{i l}}\). We have \(r(i,j,k,l) = r(k,l,i,j)\), which implies that the cross-ratio is uniquely determined by the diagonal \(E\) and we don't have to choose an orientation. If we flip the diagonal \(E\) then the initial cross-ratio goes to its inverse, but the cross-ratios corresponding to neighboring quadrilaterals change in a more complicated way. In fact, they transform in the same way as the cluster coordinates, if the matrix \(b_{i j}\) is defined as follows. Two diagonals \(E\) and \(F\) in a given triangulation are called adjacent if they are the sides of one of the triangles of the triangulation. If the diagonals are adjacent we set \(b_{E F} = 1\) if the diagonal \(E\) comes before \(F\) when listing the diagonals at the common vertex in clockwise order. Otherwise we set \(b_{E F} = -1\). If two diagonals \(E\) and \(F\) are not adjacent we set \(b_{E F} = 0\). In general, it is hard to compute the Poisson bracket between two coordinates in different clusters.
One approach is to express the second coordinate in terms of the coordinates of a cluster containing the first one. Then, we can use the definition. This is difficult in general. Another approach is to use the Sklyanin bracket (see ref.~\cite{MR2078567}). To explain this, we restrict again to the part of the Grassmannian \(\mathbb{G}(k, n)\) where \(\langle 1, \dots, k\rangle \neq 0\) and we use a representative under the left \(\grp{GL}(k)\) action which is \((\mathbf{1}_k, Y)\), where \(Y\) is a \(k \times l\) matrix with \(l = n-k\). We denote the entries of the matrix \(Y\) by \(y_{i j}\), \(i = 1, \dots, k\), \(j = 1, \dots, l\). On these coordinates we introduce a bracket, called the Sklyanin bracket, given by \begin{equation} \label{eq:sklyanin} \{y_{i j}, y_{\alpha \beta}\}_S = (\sgn(\alpha - i) - \sgn(\beta - j)) y_{i \beta} y_{\alpha j}. \end{equation} In general, the Sklyanin bracket is defined using an \(R\)-matrix, which is a solution of a modified classical Yang-Baxter equation (see ref.~\cite{MR2078567} for details). Now, we can extend the Sklyanin bracket to arbitrary functions of the variables \(y\) in the usual way \begin{equation} \{f, g\}_S = \sum_{i,j,\alpha,\beta} \frac {\partial f}{\partial y_{i j}} \{y_{i j}, y_{\alpha \beta}\}_S \frac {\partial g}{\partial y_{\alpha \beta}}. \end{equation} This bracket satisfies the Jacobi identity, as can be shown by direct computation, using the identity \(\sgn(x) \sgn(y) + \sgn(y) \sgn(z) + \sgn(z) \sgn(x) = -1\) for \(x+y+z=0\) and \(x y z \neq 0\). The cluster coordinates can be expressed in terms of the variables \(y\) and their bracket can be computed using the formula above. As an example, consider the case of the \(A_2\) algebra again. There we have the cluster coordinates \begin{equation} X_1 = \frac {(12)(45)}{(15)(24)} = -\frac {y_{12} y_{23} - y_{13} y_{22}}{y_{12} y_{23}}, \qquad X_2 = \frac {(25)(34)}{(23)(45)} = \frac {y_{13} (y_{11} y_{22} - y_{12} y_{21})}{y_{11} (y_{12} y_{23} - y_{13} y_{22})}.
\end{equation} The computation of the bracket \(\{X_1, X_2\}_S\) is a bit tedious, but straightforward. We find \begin{equation} \{X_1, X_2\}_S = 2 X_1 X_2. \end{equation} Up to a factor of \(2\), we obtain the answer expected from the definition in terms of the \(b\) matrix of the quiver. Now, we can compute Poisson brackets between any cluster coordinates, even if they don't belong to the same cluster. Most of the Poisson brackets between coordinates which don't belong to the same cluster will be very complicated, but sometimes one obtains zero. This information, combined with other physical requirements, can uniquely determine some parts of the amplitudes, as done for example in ref.~\cite{Golden2015}. \section{Elements of projective geometry} \label{sec:projective-geometry} It is very useful to understand the cross-ratios geometrically. For example, the \(A_2\) cluster algebra described above involves the geometry of five points on \(\mathbb{CP}^1\). The simplest type of cross-ratio is the cross-ratio of four points \((a,b,c,d)\) in \(\mathbb{CP}^{1}\). If the points have coordinates \((z_{a}, z_{b}, z_{c}, z_{d})\), then their cross-ratio is \begin{equation} r(a,b,c,d) = \frac {z_{a b} z_{c d}}{z_{b c} z_{d a}}, \end{equation} with \(z_{a b} = z_a - z_b\). In the following we will try to reduce more complicated situations to configurations of four points on a projective line. By duality, a point in \(\mathbb{CP}^{2}\) is in correspondence with a line in \(\mathbb{CP}^{2}\). A configuration of four points on a projective line in \(\mathbb{CP}^2\) dualizes to a configuration of four lines intersecting in a point. Therefore, we can talk about the cross-ratio of four lines in \(\mathbb{CP}^{2}\) (see fig.~\ref{fig:linesCR}).
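Returning for a moment to the Sklyanin bracket of the previous section, the factor of \(2\) in \(\{X_1, X_2\}_S = 2 X_1 X_2\) can be checked numerically. In the sketch below (the sample point and the helper names are mine) the partial derivatives are approximated by central differences:

```python
# numerical check of {X1, X2}_S = 2 X1 X2 for G(2,5)
def sgn(t):
    return (t > 0) - (t < 0)

def X1(y):
    return -(y[0][1]*y[1][2] - y[0][2]*y[1][1]) / (y[0][1]*y[1][2])

def X2(y):
    return y[0][2] * (y[0][0]*y[1][1] - y[0][1]*y[1][0]) / \
           (y[0][0] * (y[0][1]*y[1][2] - y[0][2]*y[1][1]))

def grad(f, y, h=1e-6):
    # central-difference gradient with respect to all entries y_ij
    g = {}
    for i in range(2):
        for j in range(3):
            yp = [row[:] for row in y]; yp[i][j] += h
            ym = [row[:] for row in y]; ym[i][j] -= h
            g[(i, j)] = (f(yp) - f(ym)) / (2 * h)
    return g

def sklyanin(f, g, y):
    df, dg = grad(f, y), grad(g, y)
    total = 0.0
    for (i, j) in df:
        for (a, b) in dg:
            # {y_ij, y_ab} = (sgn(a-i) - sgn(b-j)) y_ib y_aj
            coeff = (sgn(a - i) - sgn(b - j)) * y[i][b] * y[a][j]
            total += df[(i, j)] * coeff * dg[(a, b)]
    return total

y = [[1.0, 2.0, 3.0], [4.0, 5.0, 7.0]]   # arbitrary nondegenerate point
ratio = sklyanin(X1, X2, y) / (X1(y) * X2(y))   # should be close to 2
```

The sign pattern of eq.~\eqref{eq:sklyanin} is unchanged when the indices are shifted to start from zero, so the 0-indexed code matches the 1-indexed formula in the text.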
\begin{figure} \centering \includegraphics{linesCR} \caption{The cross-ratio of four lines in \(\mathbb{CP}^2\).} \label{fig:linesCR} \end{figure} The cross-ratio of four lines \((\alpha, \beta, \gamma, \delta)\) containing a point \(O\) can be related to the cross-ratio of four points by taking an arbitrary line \(\rho\) (not containing the point \(O\)) and computing the intersection points \(a = \rho \cap \alpha\), \(b = \rho \cap \beta\), \(c = \rho \cap \gamma\), \(d = \rho \cap \delta\). Then, the cross-ratio of the points \((a,b,c,d)\) on \(\rho\) is independent of \(\rho\) and is equal to the cross-ratio of the lines \((\alpha, \beta, \gamma, \delta)\) \begin{equation} r(\alpha, \beta, \gamma, \delta) = r(a,b,c,d). \end{equation} If the lines are defined by pairs of points \(\alpha = (O A)\), \(\beta = (O B)\), \(\gamma = (O C)\), \(\delta = (O D)\), as in fig.~\ref{fig:projCR}, then the cross-ratio of the four lines is \begin{equation} r(\alpha, \beta, \gamma, \delta) = r(a, b, c, d) = (O\vert A,B,C,D) \equiv \frac {\langle O A B\rangle \langle O C D\rangle}{\langle O B C\rangle \langle O D A\rangle}, \end{equation} where \(\langle X Y Z\rangle\) is proportional to the oriented area of the triangle \(\Delta(X,Y,Z)\). \begin{figure} \centering \includegraphics{projCR} \caption{The cross-ratio of four lines determined by their common intersection point \(O\) and another point on each one of them.} \label{fig:projCR} \end{figure} If the four points \(A\), \(B\), \(C\), \(D\) do not belong to a line, we can't generically define their cross-ratio. However, given a conic \(\mathcal{C}\) such that \(A\), \(B\), \(C\), \(D\) belong\footnote{Any conic is determined by five points. Given four points there is an infinity of conics which contain them.} to \(\mathcal{C}\), then we can define their cross-ratio as follows: pick a point \(X\) on the conic \(\mathcal{C}\).
Then, by Chasles' theorem the cross-ratio of the lines \((X A)\), \((X B)\), \((X C)\) and \((X D)\) is independent of the point \(X\) and is defined to be the cross-ratio of the points \(A\), \(B\), \(C\), \(D\) (with respect to the conic \(\mathcal{C}\)). See fig.~\ref{fig:ellipse}. \begin{figure} \centering \includegraphics{ellipse} \caption{The cross-ratio of points \(A\), \(B\), \(C\), \(D\) with respect to the conic \(\mathcal{C}\).} \label{fig:ellipse} \end{figure} Let us now discuss the triple ratio of six points in \(\mathbb{CP}^{2}\) which was introduced by Goncharov. We take the six points to be \(A\), \(B\), \(C\), \(X\), \(Y\), \(Z\). Numerically, this triple ratio is given by \begin{equation} \label{eq:triple-ratio} r_{3}(A,B,C;X,Y,Z) = \frac {\langle ABX\rangle \langle BCY\rangle \langle CAZ\rangle}{\langle ABY\rangle \langle BCZ\rangle \langle CAX\rangle}. \end{equation} It turns out that this ratio has several geometrical interpretations. Consider first the situation in fig.~\ref{fig:triple1}. There, we have four lines, shown dashed and blue: \(\alpha = (CB)\), \(\beta = (Cb)\), \(\gamma = (Cc)\), \(\delta = (Cd)\), where \(b = (AX) \cap (BY)\), \(c = A\) and \(d = (CZ) \cap (AX)\). Their cross-ratio, obtained by intersecting them with the line \((AX)\) in the points \(a = (CB) \cap (AX)\), \(b\), \(c = A\) and \(d\), is given by \begin{equation} r(\alpha, \beta, \gamma, \delta) = r(a, b, c, d) = (C\vert B, (AX) \cap (BY), A, Z). \end{equation} \begin{figure} \centering \includegraphics{triple1} \caption{Triple ratio, expressed as a cross-ratio of points on the line \((AX)\).} \label{fig:triple1} \end{figure} But, instead of considering the intersections of the lines \((\alpha, \beta, \gamma, \delta)\) with the line \((AX)\) as above, we can consider the intersection with the line \((BY)\). The intersection points are \begin{align} a' &= \alpha \cap (BY) = B,\\ b' &= \beta \cap (BY) = b = (AX) \cap (BY),\\ c' &= \gamma \cap (BY) = (CA) \cap (BY),\\ d' &= \delta \cap (BY) = (CZ) \cap (BY).
\end{align} The corresponding figure is fig.~\ref{fig:triple2}. If we denote by \(\alpha' = (AB)\), \(\beta' = (AX)\), \(\gamma' = (AC)\), \(\delta' = (Ad')\), we have \begin{multline} r(a,b,c,d) = r(\alpha, \beta, \gamma, \delta) = r(a',b',c',d') =\\= r(\alpha', \beta', \gamma', \delta') = (A\vert B, X, C, (BY) \cap (CZ)). \end{multline} \begin{figure} \centering \includegraphics{triple2} \caption{Triple ratio, expressed as a cross-ratio of points on the line \((BY)\).} \label{fig:triple2} \end{figure} Now we can repeat the previous procedure. We compute the cross-ratio \(r(\alpha', \beta', \gamma', \delta')\) by considering the intersection with \((CZ)\). The intersection points are \begin{align} a'' &= \alpha' \cap (C Z) = (A B) \cap (C Z),\\ b'' &= \beta' \cap (C Z) = (A X) \cap (C Z),\\ c'' &= \gamma' \cap (C Z) = C,\\ d'' &= \delta' \cap (C Z) = (B Y) \cap (C Z). \end{align} See fig.~\ref{fig:triple3} for a geometrical representation. If we define the lines \(\alpha'' = (B A)\), \(\beta'' = (B b'')\), \(\gamma'' = (B C)\), \(\delta'' = (B d'')\), we have \begin{multline} (B\vert A, (C Z) \cap (A X), C, Y) = r(\alpha'', \beta'', \gamma'', \delta'') = r(a'', b'', c'', d'') = r(\alpha', \beta', \gamma', \delta'). \end{multline} \begin{figure} \centering \includegraphics{triple3} \caption{Triple ratio, expressed as a cross-ratio of points on the line \((CZ)\).} \label{fig:triple3} \end{figure} We have therefore shown that \begin{equation} \label{eq:proj-equality} (A\vert B, X, C, (B Y) \cap (C Z)) = (B\vert A, (C Z) \cap (A X), C, Y) = (C\vert B, (A X) \cap (B Y), A, Z). \end{equation} Notice that this is also implied by the symmetry \(r_{3}(A,B,C;X,Y,Z) = r_{3}(B,C,A;Y,Z,X)\). Let us now show that the invariant \((A\vert B, X, C, (B Y) \cap (C Z))\) has the same zeros and poles as \(r_{3}(A,B,C; X,Y,Z)\). 
From the definition, we know that \((A\vert B, X, C, (B Y) \cap (C Z))\) vanishes when \(\langle A B X\rangle = 0\) or \(\langle A C (B Y) \cap (C Z)\rangle = 0\). The second three-bracket vanishes if \(\langle B C Y\rangle = 0\) or \(\langle C A Z\rangle = 0\). In the first case \(B, C, Y\) are collinear and therefore \((B Y) \cap (C Z) = C\), so we have \(\langle A C (B Y) \cap (C Z)\rangle = \langle A C C\rangle = 0\). In the second case, when \(\langle C A Z\rangle = 0\), we have that \(A \in (C Z)\), \(C \in (C Z)\) and \(P \equiv (B Y) \cap (C Z) \in (C Z)\). Since all the entries of the three-bracket are collinear, we find that \(\langle A C (B Y) \cap (C Z)\rangle = 0\). We have shown that \((A\vert B, X, C, (B Y) \cap (C Z))\) vanishes if \(\langle A B X\rangle = 0\) or \(\langle B C Y\rangle = 0\) or \(\langle C A Z\rangle = 0\), which is the same as the numerator of \(r_{3}(A,B,C;X,Y,Z)\). In order to find the poles we reason in the same way. \section{Polylogarithm identities} \label{sec:polylog-ident} In this section we provide some more mathematical details on transcendental functions and explain how to partially integrate them. We denote by \(\mathcal{L}_n\) the Abelian group (under addition) of transcendental functions of transcendentality weight \(n\). An important character in this story is the Bloch group \(B_n\), also called the classical polylogarithm group: it is the subgroup of \(\mathcal{L}_n\) generated by the classical polylogarithm functions \(\Li_n\) and their products. Consider first the simplest kind of transcendental function, the logarithm. If we are working modulo \(2 \pi i\), then we have that \(\ln z + \ln w = \ln (z w)\), for any \(z, w \in \mathbb{C}^*\). In order to express this simple functional relation formally, define \(\mathbb{Z}[\mathbb{C}^{*}]\) to be the free Abelian group generated by \(\lbrace z\rbrace\), with integer coefficients and \(z\) a non-zero complex number.
Concretely, elements of this group are quantities like \(\lbrace z\rbrace + \lbrace w\rbrace\) and the group operation is defined in the obvious way. Then, we can quotient this group by the relations satisfied by the logarithm to obtain the logarithm group \(B_{1}\), \begin{equation} B_{1} = \mathbb{Z}[\mathbb{C}^{*}]/(\lbrace z\rbrace + \lbrace w\rbrace - \lbrace z w\rbrace). \end{equation} This group is isomorphic to the multiplicative group of complex numbers, \(\mathbb{C}^{\times}\). The next simplest transcendental functions are the dilogarithms, \(\Li_{2}\). The dilogarithms satisfy a simple five-term functional relation. One way to express this functional relation is to consider five points on \(\mathbb{CP}^{1}\) with coordinates \(z_{1}, \dotsc, z_{5}\). From any four such points we can form a cross-ratio \(r(z_{1}, \dotsc, \hat{z}_{i}, \dotsc, z_{5})\), where the hatted argument is missing. We use the definition \(r(i,j,k,l) = \tfrac {z_{i j} z_{kl}}{z_{j k} z_{l i}}\) with \(z_{ij} = z_i - z_j\). Then the five-term identity can be written as \begin{equation} \sum_{i=1}^{5} (-1)^{i} \Li_{2}(-r(z_{1}, \dotsc, \hat{z}_{i}, \dotsc, z_{5})) = \text{logs}, \end{equation} where we have denoted by \(\text{logs}\) the terms which can be written in terms of logarithms. There is a theorem (see ref.~\cite{MR1760901}) stating that all the relations between dilogarithms are consequences of the five-term relations. We can now define the Bloch group \(B_{2}\) by analogy with the logarithm case. We first define \(\mathbb{Z}[\mathbb{C}]\) to be the free Abelian group generated by \(\lbrace z\rbrace_{2}\), where \(z\) is a complex number. Then, we quotient by the five-term relations; the quotient is denoted by \(B_{2}\) \begin{equation} B_{2} = \mathbb{Z}[\mathbb{C}]/(\text{five-term relations}).
\end{equation} In this case we have a group morphism \(\delta\), \(B_{2} \xrightarrow{\delta} \Lambda^{2} \mathbb{C}^{*}\), which is defined by \(\delta(\lbrace z\rbrace_{2}) = (1-z) \wedge z\). To check that this is a group morphism we need to show that \(\delta(\text{five-term relation}) = 0\), or \begin{equation} \sum_{i=1}^{5} (-1)^{i} (1 + r(z_{1}, \dotsc, \hat{z}_{i}, \dotsc, z_{5})) \wedge r(z_{1}, \dotsc, \hat{z}_{i}, \dotsc, z_{5}) = 0, \end{equation} which can be done by a short computation. Let us now discuss \(\Li_{3}\) functions. There is a theorem stating that all transcendentality-three functions can be written as a linear combination of \(\Li_{3}\) and products of lower transcendentality functions (see ref.~\cite{Goncharov1995}). Just like in the previous cases, we first need to find the functional relations satisfied by \(\Li_{3}\) functions. The identity satisfied by \(\Li_{3}\) is very similar to the one satisfied by \(\Li_{2}\) and can be described in terms of configurations of seven points on \(\mathbb{CP}^{2}\). It is convenient to describe each of these points in terms of their homogeneous coordinates \(v_{i} \in \mathbb{C}^{3}\), with \(i=1, \dots, 7\). For three such vectors \(v_{i}\), \(v_{j}\), \(v_{k}\) we can define a three-bracket \(\langle \cdot, \cdot, \cdot\rangle : \mathbb{C}^{3} \times \mathbb{C}^{3} \times \mathbb{C}^{3} \to \mathbb{C}\) by the volume of the parallelepiped generated by them, \(\langle i,j,k\rangle = \text{Vol}(v_{i}, v_{j}, v_{k})\). Given six points in \(\mathbb{CP}^{2}\), we can form a cross-ratio \begin{equation} r_{3}(1,2,3,4,5,6) = \frac {\langle 124\rangle \langle 235\rangle \langle 316\rangle}{\langle 125\rangle \langle 236\rangle \langle 314\rangle}. \end{equation} Such cross-ratios have been introduced and extensively used in ref.~\cite{Goncharov1995} and we also discuss their geometric interpretation in sec.~\ref{sec:projective-geometry}.
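As a numerical spot-check of this geometric interpretation, the sketch below (the helper names are mine) verifies, on random integer configurations, that \(r_3\) coincides with the cross-ratios \((A\vert B, X, C, (BY) \cap (CZ))\) and \((C\vert B, (AX) \cap (BY), A, Z)\) of sec.~\ref{sec:projective-geometry}, representing the intersection of two lines by a double vector product:

```python
from fractions import Fraction
import random

def det3(a, b, c):
    # three-bracket <a b c> as a 3x3 determinant
    return (a[0] * (b[1]*c[2] - b[2]*c[1])
          - a[1] * (b[0]*c[2] - b[2]*c[0])
          + a[2] * (b[0]*c[1] - b[1]*c[0]))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def cr(O, A, B, C, D):
    # (O | A, B, C, D) = <OAB><OCD> / (<OBC><ODA>)
    return Fraction(det3(O, A, B) * det3(O, C, D),
                    det3(O, B, C) * det3(O, D, A))

def r3(A, B, C, X, Y, Z):
    return Fraction(det3(A, B, X) * det3(B, C, Y) * det3(C, A, Z),
                    det3(A, B, Y) * det3(B, C, Z) * det3(C, A, X))

random.seed(7)
pt = lambda: tuple(random.randint(-9, 9) for _ in range(3))
verified = 0
for _ in range(50):
    A, B, C, X, Y, Z = (pt() for _ in range(6))
    try:
        lhs = r3(A, B, C, X, Y, Z)
        m1 = cr(A, B, X, C, cross(cross(B, Y), cross(C, Z)))
        m2 = cr(C, B, cross(cross(A, X), cross(B, Y)), A, Z)
    except ZeroDivisionError:
        continue            # skip degenerate configurations
    assert lhs == m1 == m2
    verified += 1
```

The line through two points is represented by the vector product of their homogeneous coordinates, so the intersection of two lines is a double vector product; with integer inputs the equalities are checked exactly.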
The \(\Li_{3}\) functional relations can be expressed in terms of this cross-ratio as \begin{equation} \sum_{i=1}^{7} (-1)^{i} \Alt_{6} \Li_{3}(- r_{3}(1, \dotsc, \hat{i}, \dotsc, 7)) \approx 0, \end{equation} where \(\Alt_{6}\) means antisymmetrization in the six points on which \(r_{3}\) depends and \(\approx\) means that we have omitted the terms which are products of lower transcendentality functions. Now we define \begin{equation} B_{3} = \mathbb{Z}[\mathbb{C}]/(\text{seven-term relations}). \end{equation} There is a morphism \(\delta : B_{3} \to B_{2} \otimes \mathbb{C}^{*}\), \(\delta(\lbrace x\rbrace_{3}) = \lbrace x\rbrace_{2} \otimes x\). In order to show that this morphism is well-defined, we need to show that \(\delta\) annihilates the seven-term relations. It may seem that we can continue in the same way to higher transcendentality. However, this is not the case. At transcendentality four there are new functions which cannot be expressed in terms of \(\Li_{4}\) and products of lower transcendentality functions. We can define \(B_{n}\) for \(n \geq 4\) in the same way as before, but there is a bigger group \(\mathcal{L}_{n}\), the Abelian group of all weight \(n\) polylogarithms, some of which are not classical polylogarithms. We defined \(B_n\) to be the Abelian group generated by classical polylogarithms and \(\mathcal{L}_n\) to be the Abelian group of all polylogarithms of weight \(n\). Now we want to characterize them. The most mathematically concise way to describe their (conjectural!) connection is by an exact sequence, which for \(n=4\) reads \begin{equation} 0 \to B_4 \to \mathcal{L}_4 \to \Lambda^{2} B_2 \to 0. \end{equation} An exact sequence is a sequence of maps between spaces such that the image of a map falls in the kernel of the next one. In the example above, the first arrow says that \(B_4\) maps to \(\mathcal{L}_4\) injectively, which is obvious since \(B_4\) is contained in \(\mathcal{L}_4\).
The last arrow says that the map \(\mathcal{L}_4 \to \Lambda^{2} B_2\) is surjective. This is less obvious, but it means that for any element of \(\Lambda^{2} B_2\) one can find a weight four polylog with that \(\Lambda^{2} B_2\) projection. Finally, the rest of the sequence means that \(\ker(\mathcal{L}_4 \to \Lambda^{2} B_2) = B_4\). This means that if a weight four polylog has zero \(\Lambda^{2} B_2\) projection, which is to say it belongs to \(\ker(\mathcal{L}_4 \to \Lambda^{2} B_2)\), then it is a classical polylog, and vice versa. Notice that in fig.~\ref{fig:triple1}, we have five points \((a, b, X, c, d)\) on the line \((A X)\). From five points \((z_{1}, \dotsc, z_{5})\) in \(\mathbb{CP}^{1}\) we can produce a dilogarithm identity \begin{equation} \label{eq:dilog-identity} \sum_{i=1}^{5} (-1)^{i} \lbrace -r(z_{1}, \dotsc, \widehat{z_{i}}, \dotsc, z_{5})\rbrace_{2} = 0. \end{equation} This motivates us to find the expressions in terms of three-brackets for the other cross-ratios that can be constructed from these five points on \((A X)\) (see fig.~\ref{fig:triple1}): \begin{align} r(b,X,A,d) &= \frac {\langle B X Y\rangle \langle A C Z\rangle}{\langle A \times X, B \times Y, C \times Z\rangle},\\ r(a,X,A,d) &= (C\vert B,X,A,Z),\\ r(a,b,A,d) &= r_{3}(A,B,C; X,Y,Z),\\ r(a,b,X,d) &= r_{3}(X,B,C; A,Y,Z),\\ r(a,b,X,A) &= (B\vert C,Y,X,A). 
\end{align} This provides a geometric proof for the following dilogarithm identity \begin{multline} -\left\lbrace \frac {\langle B X Y\rangle \langle A C Z\rangle}{\langle A \times X, B \times Y, C \times Z\rangle}\right\rbrace_{2} +\left\lbrace \frac {\langle CBX\rangle \langle CAZ\rangle}{\langle CXA\rangle \langle CZB\rangle}\right\rbrace_{2} -\left\lbrace \frac {\langle ABX\rangle \langle BCY \rangle \langle CAZ\rangle}{\langle ABY\rangle \langle BCZ \rangle \langle CAX\rangle}\right\rbrace_{2}\\ +\left\lbrace \frac {\langle XBA\rangle \langle BCY \rangle \langle CXZ\rangle}{\langle XBY\rangle \langle BCZ \rangle \langle CXA\rangle}\right\rbrace_{2} -\left\lbrace \frac {\langle BCY\rangle \langle BXA\rangle}{\langle BYX\rangle \langle BAC\rangle}\right\rbrace_{2} = 0. \end{multline} Here is a \(40\)-term trilogarithm identity which was discovered when analyzing results of two-loop computations in \(\mathcal{N} = 4\) theory \begin{multline} \label{eq:40-term-li3} \left\lbrace -\frac {\langle 125\rangle\langle 134\rangle}{\langle 123\rangle\langle 145\rangle}\right\rbrace_3 + \left\lbrace -\frac {\langle 126\rangle\langle 145\rangle}{\langle 124\rangle\langle 156\rangle}\right\rbrace_3 + \left\lbrace -\frac {\langle 126\rangle\langle 145\rangle\langle 234\rangle}{\langle 123\rangle\langle 146\rangle \langle 245\rangle}\right\rbrace_3 +\\ \frac 1 3 \left\lbrace -\frac {\langle 136\rangle\langle 145\rangle\langle 235\rangle}{\langle 123\rangle\langle 156\rangle\langle 345\rangle}\right\rbrace_3 + (\text{cyclic permutations}) -\\ (\text{anti-cyclic permutations}) = 0. \end{multline} In order to check that the \(B_{2} \wedge \mathbb{C}^{*}\) projection of the \(40\)-term trilogarithm identity is zero we need some dilogarithm identities.
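Every dilogarithm identity used in this check is an instance of the five-term relation. As a numerical sanity check, the sketch below (standard library only; our own illustration) verifies the relation in its classical Rogers form, \(L(x)+L(y)=L(xy)+L\bigl(x(1-y)/(1-xy)\bigr)+L\bigl(y(1-x)/(1-xy)\bigr)\) for \(x,y\in(0,1)\), where \(L(x)=\Li_2(x)+\tfrac{1}{2}\ln x\ln(1-x)\) is the Rogers dilogarithm; the alternating cross-ratio form~(\ref{eq:dilog-identity}) is a rewriting of this same relation up to the cross-ratio parametrization.

```python
# Numerical check (stdlib only) of the five-term relation in its classical
# Rogers form: L(x) + L(y) = L(xy) + L(x(1-y)/(1-xy)) + L(y(1-x)/(1-xy)),
# where L(x) = Li_2(x) + (1/2) ln(x) ln(1-x) for x in (0, 1).
import math

def li2(x):
    """Dilogarithm Li_2(x) for 0 < x < 1, by series plus the reflection formula."""
    if x > 0.5:  # Li_2(x) = pi^2/6 - ln(x) ln(1-x) - Li_2(1-x)
        return math.pi ** 2 / 6 - math.log(x) * math.log(1 - x) - li2(1 - x)
    total, term = 0.0, x
    for n in range(1, 200):  # series sum_{n>=1} x^n / n^2, fast for x <= 1/2
        total += term / n ** 2
        term *= x
    return total

def rogers(x):
    return li2(x) + 0.5 * math.log(x) * math.log(1 - x)

x, y = 0.3, 0.7
lhs = rogers(x) + rogers(y)
rhs = (rogers(x * y)
       + rogers(x * (1 - y) / (1 - x * y))
       + rogers(y * (1 - x) / (1 - x * y)))
assert abs(lhs - rhs) < 1e-10
```

Checking the \(B_{2} \wedge \mathbb{C}^{*}\) projection of the \(40\)-term identity then amounts to recognizing each of the identities listed next as such an instance, with cross-ratio arguments built from the brackets.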
For example, one of the dilogarithm identities which is useful is \begin{multline} -\left\{-\frac{\langle 123\rangle \langle 456\rangle }{\langle 1\times 2,3\times 4,5\times 6\rangle }\right\}_2- \left\{-\frac{\langle 125\rangle \langle 134\rangle }{\langle 123\rangle \langle 145\rangle}\right\}_2- \left\{-\frac{\langle 123\rangle \langle 156\rangle \langle 345\rangle }{\langle 125\rangle \langle 134\rangle \langle 356\rangle}\right\}_2+\\ \left\{-\frac{\langle 124\rangle \langle 156\rangle \langle 345\rangle }{\langle 125\rangle \langle 134\rangle \langle 456\rangle}\right\}_2- \left\{-\frac{\langle 156\rangle \langle 345\rangle }{\langle 135\rangle \langle 456\rangle }\right\}_2=0. \end{multline} It can be interpreted geometrically as five points \((3, 4, (15)\cap(34), (12)\cap(34), (34)\cap(56))\) on the line \((34)\). The second useful dilogarithm identity is \begin{multline} \left\{-\frac{\langle 156\rangle \langle 234\rangle}{\langle 1\times 2,3\times 4,5\times 6\rangle }\right\}_2 -\left\{-\frac{\langle 136\rangle \langle 234\rangle}{\langle 123\rangle \langle 346\rangle} \right\}_2 -\left\{-\frac{\langle 156\rangle \langle 236\rangle }{\langle 126\rangle \langle 356\rangle }\right\}_2\\ +\left\{-\frac{\langle 123\rangle \langle 156\rangle \langle 346\rangle}{\langle 126\rangle \langle 134\rangle \langle 356\rangle }\right\}_2 -\left\{-\frac{\langle 123\rangle \langle 256\rangle \langle 346\rangle }{\langle 126\rangle \langle 234\rangle \langle 356\rangle }\right\}_2 = 0. \end{multline} It can be interpreted geometrically as five points \((1, 2, (12)\cap(34), (12)\cap(36), (12)\cap(56))\) on the line \((12)\).
The third useful dilogarithm identity is \begin{multline} -\left\{-\frac{\langle 156\rangle \langle 234\rangle }{\langle 1\times 2,3\times 4,5\times 6\rangle }\right\}_2+ \left\{-\frac{\langle 145\rangle \langle 234\rangle }{\langle 124\rangle \langle 345\rangle}\right\}_2+ \left\{-\frac{\langle 156\rangle \langle 245\rangle }{\langle 125\rangle \langle 456\rangle}\right\}_2-\\ \left\{-\frac{\langle 124\rangle \langle 156\rangle \langle 345\rangle }{\langle 125\rangle \langle 134\rangle \langle 456\rangle}\right\}_2+ \left\{-\frac{\langle 124\rangle \langle 256\rangle \langle 345\rangle }{\langle 125\rangle \langle 234\rangle \langle 456\rangle }\right\}_2 = 0. \end{multline} It can be interpreted geometrically as five points \((1, 2, (12)\cap(34), (12)\cap(45), (12)\cap(56))\) on the line \((12)\). The fourth useful dilogarithm identity is \begin{multline} \left\{-\frac{\langle 123\rangle \langle 456\rangle }{\langle 1\times 2,3\times 4,5\times 6\rangle }\right\}_2+ \left\{-\frac{\langle 125\rangle \langle 234\rangle }{\langle 123\rangle \langle 245\rangle}\right\}_2+ \left\{-\frac{\langle 123\rangle \langle 256\rangle \langle 345\rangle }{\langle 125\rangle \langle 234\rangle \langle 356\rangle }\right\}_2-\\ \left\{-\frac{\langle 124\rangle \langle 256\rangle \langle 345\rangle }{\langle 125\rangle \langle 234\rangle \langle 456\rangle}\right\}_2+ \left\{-\frac{\langle 256\rangle \langle 345\rangle }{\langle 235\rangle \langle 456\rangle }\right\}_2=0. \end{multline} It can be interpreted geometrically as five points \((3, 4, (12)\cap(34), (25)\cap(34), (34)\cap(56))\) on the line \((34)\). The identities above are the identities needed to show the vanishing of terms of type \(\ast \otimes \langle 123\rangle\) in the projection to \(B_{2} \otimes \mathbb{C}^{*}\) of the \(40\)-term trilogarithm identity.
For the terms of type \(\ast \otimes \langle 124\rangle\) the same identities are sufficient, but there is another, simpler identity too, written below \begin{multline} -\left\{-\frac{\langle 126\rangle \langle 145\rangle}{\langle 124\rangle \langle 156\rangle}\right\}_2+ \left\{-\frac{\langle 126\rangle \langle 245\rangle}{\langle 124\rangle \langle 256\rangle}\right\}_2- \left\{-\frac{\langle 146\rangle \langle 245\rangle}{\langle 124\rangle \langle 456\rangle}\right\}_2+\\ \left\{-\frac{\langle 156\rangle \langle 245\rangle}{\langle 125\rangle \langle 456\rangle}\right\}_2- \left\{-\frac{\langle 156\rangle \langle 246\rangle}{\langle 126\rangle \langle 456\rangle }\right\}_2=0. \end{multline} This identity is special because it does not depend on point \(3\) at all. It can be more geometrically written as \begin{equation} \left\lbrace (1\vert 2 6 5 4)\right\rbrace_{2}+ \left\lbrace (2\vert 1 4 5 6)\right\rbrace_{2}+ \left\lbrace (4\vert 1 6 5 2)\right\rbrace_{2}+ \left\lbrace (5\vert 1 2 4 6)\right\rbrace_{2}+ \left\lbrace (6\vert 1 5 4 2)\right\rbrace_{2} = 0. \end{equation} Curiously, this simple-looking identity has a slightly more obscure geometrical interpretation. Through the five points \(1\), \(2\), \(4\), \(5\), \(6\) passes a unique conic \(\mathcal{C}\). The cross-ratio \((1\vert 2 6 5 4)\) is the cross-ratio of the points \((2,6,5,4)\) with respect to the conic \(\mathcal{C}\). But we can pick another point \(X \in \mathcal{C}\) and we have, by Chasles' theorem, that \((X\vert 2 6 5 4) = (1\vert 2 6 5 4)\). 
Then the previous identity becomes \begin{equation} \left\lbrace (X\vert 2 4 5 6)\right\rbrace_{2}- \left\lbrace (X\vert 1 4 5 6)\right\rbrace_{2}+ \left\lbrace (X\vert 1 2 5 6)\right\rbrace_{2}- \left\lbrace (X\vert 1 2 4 6)\right\rbrace_{2}+ \left\lbrace (X\vert 1 2 4 5)\right\rbrace_{2} = 0, \end{equation} which is the usual form of the dilogarithm identity, where the cross-ratios are cross-ratios of the lines \((X1)\), \((X2)\), \((X4)\), \((X5)\), \((X6)\). \section{Open questions} \label{sec:questions} The scattering amplitudes in \(\mathcal{N} = 4\) theory split into sub-sectors which are not related by supersymmetry transformations. Scattering amplitudes in the simplest sectors are called MHV (maximally helicity violating) amplitudes, for historical reasons. More complicated sectors are called NMHV (next to MHV), etc. The six-point MHV amplitude has transcendentality four but, surprisingly, can be expressed in terms of classical polylogarithms only, as found in ref.~\cite{Goncharov2010}. The next simplest amplitudes are the six-point NMHV, or the seven-point MHV, which cannot be written in terms of classical polylogarithms, since their \(B_2 \wedge B_2\) projection does not vanish. Consider the \(\Lambda^{2} B_{2}\) projection of the seven-point MHV amplitude computed in ref.~\cite{Caron-Huot2011a}.
In \(\mathbb{CP}^{2}\) language it is given by \begin{multline} -\Big\lbrace -\frac {\langle 2 \times 3, 4 \times 6, 7 \times 1 \rangle}{\langle 1 6 7\rangle \langle 2 3 4\rangle}\Big\rbrace_{2} \wedge \Big\lbrace -\frac {\langle 7 \times 1, 2 \times 3, 4 \times 5\rangle}{\langle 1 2 7\rangle \langle 3 4 5\rangle}\Big\rbrace_{2}\\ -\Big\lbrace -\frac {\langle 2 \times 3, 4 \times 6, 7 \times 1 \rangle}{\langle 1 6 7\rangle \langle 2 3 4\rangle}\Big\rbrace_{2} \wedge \Big\lbrace -\frac {\langle 2 3 4\rangle \langle 4 5 6\rangle}{\langle 2 4 6\rangle \langle 3 4 5\rangle}\Big\rbrace_{2}\\ -\Big\lbrace -\frac {\langle 2 \times 3, 4 \times 6, 7 \times 1 \rangle}{\langle 1 6 7\rangle \langle 2 3 4\rangle}\Big\rbrace_{2} \wedge \Big\lbrace -\frac {\langle 1 4 6\rangle \langle 5 6 7\rangle}{\langle 1 6 7\rangle \langle 4 5 6\rangle}\Big\rbrace_{2}\\ -\Big\lbrace -\frac {\langle 2 \times 3, 4 \times 6, 7 \times 1 \rangle}{\langle 1 6 7\rangle \langle 2 3 4\rangle}\Big\rbrace_{2} \wedge \Big\lbrace -\frac {\langle 5 \times 6, 7 \times 1, 2 \times 3\rangle}{\langle 1 2 3\rangle \langle 5 6 7\rangle}\Big\rbrace_{2}\\ +\Big\lbrace -\frac {\langle 1 3 7\rangle \langle 4 6 7\rangle}{\langle 1 6 7\rangle \langle 3 4 7\rangle}\Big\rbrace_{2} \wedge \Big\lbrace -\frac {\langle 1 2 3\rangle \langle 3 4 7\rangle}{\langle 1 3 7\rangle \langle 2 3 4\rangle}\Big\rbrace_{2} -\Big\lbrace -\frac {\langle 1 3 7\rangle \langle 4 6 7\rangle}{\langle 1 6 7\rangle \langle 3 4 7\rangle}\Big\rbrace_{2} \wedge \Big\lbrace -\frac {\langle 3 4 7\rangle \langle 4 5 6\rangle}{\langle 3 4 5\rangle \langle 4 6 7\rangle}\Big\rbrace_{2}\\ + \text{cyclic permutations of \(1, 2, \dotsc, 7\)}. \end{multline} Goncharov suggested looking at the Poisson bracket \(\lbrace x, y\rbrace\) for any \(\lbrace -x\rbrace_{2} \wedge \lbrace -y\rbrace_{2} \in \Lambda^{2} B_{2}\).
This is well-defined since \(\lbrace -x\rbrace_{2} \wedge \lbrace -y\rbrace_{2} = -\lbrace -y\rbrace_{2} \wedge \lbrace -x\rbrace_{2}\) and a similar sign change appears from the Poisson bracket. It is not understood why, but we find that these Poisson brackets are zero. We can show that for every term \(\lbrace -x\rbrace_{2} \wedge \lbrace -y\rbrace_{2} \in \Lambda^{2} B_{2}\) listed above there is at least one cluster containing \(x\) and \(y\). In order to prove this, for every pair \((x,y)\) we need to exhibit a quiver graph which contains them and which is such that there are no arrows between \(x\) and \(y\). Alternatively, one can compute the Sklyanin bracket as in sec.~\ref{sec:poisson-brackets}. As mentioned in the introduction, scattering amplitudes have the property of factorization (see ref.~\cite{Anastasiou2009}). Formulating this precisely and studying its implications for the cluster algebra structure would be very interesting. A complete discussion would take us too far, but we want to mention only one important aspect: factorization only works if the transcendental functions satisfy some identities. In mathematics one prefers to work with some \emph{real} analytic functions, like \begin{gather} L_2(z) = \Im\left(\Li_2(z) + \ln |z| \ln(1-z)\right),\\ L_3(z) = \Re\left(\Li_3(z) - \ln |z| \Li_2(z) - \frac 1 3 \ln^2 |z| \ln (1-z)\right), \end{gather} which have simple functional relations (modulo some additive constants, one can simply replace \(\{z\}_2 \to L_2(z)\) and \(\{z\}_3 \to L_3(z)\)) to obtain an identity for functions. However, for physics we need to have \emph{complex} analytic functions instead. Therefore, it is not yet clear what the best building blocks for the scattering amplitudes are. The reader might be puzzled by the following fact: we have a big symmetry group \(\grp{PSU}(2,2\vert 4)\) but in terms of Grassmannians only the conformal group \(\grp{SU}(2,2)\) or the complexified \(\grp{SL}(4)\) is visible.
How to make the rest of the symmetry visible? This is not known at present. Maybe recent developments like the definition of cluster superalgebras in ref.~\cite{Ovsienko2015} hold the key to further progress. Are there other polylogarithm identities of cluster type? As we have reviewed, the dilogarithm identity contains arguments which form an \(A_2\) (or \(\mathbb{G}(2,5)\)) cluster algebra, while the trilogarithm identity contains arguments which form a \(D_4\) (or \(\mathbb{G}(3,6)\)) cluster algebra. A computer search for a \(\Li_4\) identity with arguments in a finite cluster algebra did not find anything. It is possible that there are such identities for infinite cluster algebras. Before ending this brief review, let us point out some references which discuss complementary details. Cluster algebras appeared in ref.~\cite{ArkaniHamed:2012nw} in connection with scattering amplitudes, but in a different way than we reviewed here. Ref.~\cite{Golden2014} also reviews the connection between scattering amplitudes and cluster algebras, with an emphasis on the combinatorics of Stasheff polytopes. Ref.~\cite{Huang2014} reviews the case of a three-dimensional analog of the \(\mathcal{N}=4\) theory which we described here. Many results were obtained by applying the bootstrap method (see refs.~\cite{Dixon:2011pw, Dixon2012a, Caron-Huot2012a, Dixon2013, Dixon2014, Golden2014a, Dixon2014a, Golden2015, Drummond2015}). \section{Acknowledgments} First, I would like to thank the organizers of the Opening Workshop of the Research Trimester on Multiple Zeta Values, Multiple Polylogarithms, and Quantum Field Theory: Jos\'e I. Burgos Gil, Kurusch Ebrahimi-Fard, D. Ellwood, Ulf K\"uhn, Dominique Manchon and P. Tempesta. I would also like to thank the participants and particularly Fr\'ed\'eric Chapoton and Herbert Gangl for discussions during the opening workshop Numbers and Physics (NAP2014).
Finally, I am grateful to my coauthors in refs.~\cite{Goncharov2010, Golden:2013xva} for collaboration. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} Conventional Bayesian inference and other likelihood-based paradigms rest upon the existence of a statistical data-generating model that is both experimentally plausible and computationally tractable. Because this is challenging when the data is inherently complex, common practice to implement feasible inference algorithms is to use deliberately misspecified data-generating models \cite{White-82,Walker-13} such as in {\em na\"ive Bayes} \cite{Ng-01} or minimum description length \cite{Grunwald-07}, or to resort to highly supervised discriminative modeling approaches \cite{Ng-01}, not to mention ad-hoc methodology. Composite likelihood (see \cite{Varin-11} and the references therein) is a middle-way approach that extends the familiar notion of likelihood without requiring a full data-generating model. The key idea is to model an arbitrary set of low-dimensional features separately and then combine them, instead of modeling the data distribution as a whole. This may also be viewed as a {\em divide \& conquer} method of approximating the actual likelihood. While maximum composite likelihood does not inherit the general property of maximum likelihood of yielding asymptotically minimum-variance estimators, it may offer an excellent trade-off between computational and statistical efficiency. In this note, composite likelihood is interpreted as a probabilistic opinion pool \cite{Genest-86,Garg-04} of ``agents'' using different pieces of information, or clues, extracted from the data. Each agent acts as a local Bayesian statistician expressing an opinion in the form of a posterior distribution on the unknown parameters of interest, or hypotheses, given a specific clue. Composite likelihood can therefore be associated with a probability distribution on hypotheses, hence extending Bayesian analysis to problems where the proper likelihood function is intractable.
I further justify a generalization of composite likelihood called {\em super composite likelihood} whereby clues can be weighted differently depending on hypotheses to better deal with multiclass inference problems. This more general concept also encompasses another likelihood approximation strategy known as {\em PDF~projection} \cite{Baggenstoss-03,Minka-04,Baggenstoss-15}. This paper is intended to serve as supporting material for a companion paper (in preparation), where the super composite likelihood framework is applied in computer vision to develop a probabilistic theory of image registration \cite{Viola-97}. \section{Composite likelihood as opinion pooling} \label{sec:pool} Let $Y$ be an observable multivariate random variable with sampling distribution $p(y|\theta)$ conditional on some unobserved parameter of interest, $\theta\in\Theta$, where $\Theta$ is a known set. Given an experimental outcome $y$, the likelihood is the sampling distribution evaluated at $y$, seen as a function of $\theta$: $$ L(\theta) = p(y|\theta) . $$ For a high-dimensional $Y$, this expression may be intractable if a plausible generative model is lacking, or involves nuisance parameters that are cumbersome to integrate out. A natural workaround known as {\em data reduction} is then to extract some lower-dimensional feature $z=f(y)$, where $f$ is a many-to-one mapping, and consider the potentially more convenient likelihood function: $$ \ell(\theta) = p(z|\theta) . $$ Substituting $L(\theta)$ with $\ell(\theta)$ boils down to restricting the sample space, thereby ``delegating'' statistical inference to an ``agent'' provided with partial information. While it is valid for such an agent observing~$z$ only to consider $\ell(\theta)$ as the likelihood function of the problem, the drawback is that $\ell(\theta)$ might be poorly informative about~$\theta$ due to the information loss incurred by data reduction.
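As a toy illustration of this information loss (our own example, not taken from the paper): for $n$ i.i.d. observations from a Gaussian location model, reducing the data to its sample mean preserves all the information about~$\theta$, whereas keeping only the first observation flattens the likelihood by a factor of~$n$, as measured by its curvature at the maximum.

```python
# Toy example (ours, not the paper's): y = (y_1,...,y_n) i.i.d. N(theta, 1).
# The feature zbar = mean(y) is sufficient (zbar ~ N(theta, 1/n)); the feature
# y_1 ~ N(theta, 1) discards information, so its log-likelihood is flatter.
import random

random.seed(1)
n, theta_true = 100, 2.0
y = [random.gauss(theta_true, 1.0) for _ in range(n)]
zbar = sum(y) / n       # sufficient feature
z1 = y[0]               # lossy feature

def loglik_mean(theta):     # log p(zbar | theta), up to an additive constant
    return -0.5 * n * (zbar - theta) ** 2

def loglik_first(theta):    # log p(y_1 | theta), up to an additive constant
    return -0.5 * (z1 - theta) ** 2

def curvature(ll, that, h=1e-3):
    # Observed information: minus the second difference at the maximum.
    return -(ll(that + h) - 2 * ll(that) + ll(that - h)) / h ** 2

assert abs(curvature(loglik_mean, zbar) - n) < 1e-3 * n    # information n
assert abs(curvature(loglik_first, z1) - 1.0) < 1e-3       # information 1
```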
To make the trick statistically more efficient, we may extract several features, $z_i=f_i(y)$ for $i=1,2,\ldots,n$, and try to aggregate the likelihood functions $\ell_i(\theta) = p(z_i|\theta)$ that they elicit. This leads to a classical problem of combining probabilistic opinions from possibly redundant agents \cite{Tarantola-82,Genest-86,Garg-04,Allard-12}. Genest {\em et al} \cite{Genest-86b} showed, in particular, that the only pooling operator that does not explicitly depend on~$\theta$ and preserves {\em external Bayesianity} is the generalized logarithmic opinion pool: \begin{equation} \label{eq:log_pool} p_\star(\theta) = \frac{1}{Z} \pi(\theta) \prod_{i=1}^n \ell_i (\theta)^{w_i}, \qquad {\rm with}\quad Z = \int \pi(\theta) \prod_{i=1}^n \ell_i (\theta)^{w_i} d\theta, \end{equation} where $\pi(\theta)$ is some reference distribution or prior, and $w_i$ are arbitrary positive\footnote{Negative weights can be chosen only if the parameter set $\Theta$ is finite \cite{Genest-86b}.} weights that sum up to one, $$ \sum_{i=1}^n w_i = 1 . $$ External Bayesianity essentially means that it should not matter whether a prior on~$\theta$ is incorporated before or after pooling opinions, provided that all agents agree on the same prior. Importantly, the log-linear pool does not assume mutual feature independence, as would the same factorized form as (\ref{eq:log_pool}) with all unitary weights, $w_1=\ldots=w_n= 1$. Instead, redundancy between agents is assumed by default, and is effectively encoded by the unit sum constraint on weights\footnote{Nevertheless, features which are {\em known} to be mutually independent can be merged into a single feature. This results in increasing their weights in the log-linear pool.}. 
Strikingly, (\ref{eq:log_pool}) reduces to an analogue of Bayes rule: $p_\star(\theta)\propto \pi(\theta) L_c(\theta,\mathbf{w})$, where $\mathbf{w}=(w_1,w_2,\ldots,w_n)^\top$ denotes the vector of weights, and the quantity: \begin{equation} \label{eq:comp_lik} L_c(\theta,\mathbf{w}) \equiv \prod_{i=1}^n \ell_i (\theta)^{w_i} \end{equation} plays exactly the same role as a traditional likelihood function. $L_c(\theta,\mathbf{w})$ shares a convenient factorized form with the likelihood derived under mutual feature independence, sometimes called {\em na\"ive Bayes} likelihood in the literature \cite{Ng-01}. The key difference is that the single-feature likelihoods are scaled by positive weights~$w_i$ smaller than one, hence producing a flatter posterior distribution. In comparison with the proper likelihood (not assuming feature independence), the clear computational advantage is that we only need to evaluate the marginal feature distributions, rather than the joint distribution of all features. We recognize in~(\ref{eq:comp_lik}) a general expression known as a {\em marginal composite likelihood} \cite{Varin-11}, although it is derived here under the restriction that the weights sum up to one (as already motivated in \cite{Wang-14} by a different argument using the maximum entropy principle). Owing to the opinion pooling interpretation, this simple constraint justifies plugging composite likelihood into Bayes rule. Previous attempts at Bayesian composite likelihood include tuning a constant weight so as to best adjust the pseudo posterior variance matrix to the asymptotic variance matrix of the maximum composite likelihood estimator \cite{Pauli-11}, or performing a close-in-spirit curvature adjustment \cite{Ribatet-12}. Such approaches reconcile the frequentist and Bayesian notions of uncertainty to some extent, but are not externally Bayesian since they do not warrant unit sum weights.
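On a finite hypothesis set, the pool~(\ref{eq:log_pool}) is a few lines of code, and external Bayesianity can be checked directly: pooling the likelihoods and then applying the prior gives the same posterior as pooling the agents' individual posteriors under a flat prior. The numbers below are arbitrary toy values of our own.

```python
# Log-linear opinion pool on a finite hypothesis set (toy numbers of our own),
# with a direct check of external Bayesianity.

def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def log_linear_pool(prior, likelihoods, weights):
    """p_*(theta) proportional to prior(theta) * prod_i ell_i(theta)^{w_i}."""
    assert abs(sum(weights) - 1.0) < 1e-12  # unit sum constraint
    pooled = []
    for k, pk in enumerate(prior):
        comp = 1.0
        for ell, w in zip(likelihoods, weights):
            comp *= ell[k] ** w
        pooled.append(pk * comp)
    return normalize(pooled)

prior = [0.5, 0.3, 0.2]                     # three hypotheses
ells = [[0.9, 0.4, 0.1], [0.2, 0.7, 0.6]]   # two single-feature likelihoods
w = [0.25, 0.75]

# Pool first, then apply the prior.
post = log_linear_pool(prior, ells, w)

# Let every agent absorb the prior first, then pool under a flat prior.
agent_posts = [normalize([pk * ell[k] for k, pk in enumerate(prior)]) for ell in ells]
post2 = log_linear_pool([1.0, 1.0, 1.0], agent_posts, w)

assert all(abs(a - b) < 1e-12 for a, b in zip(post, post2))
```

The equality holds precisely because $\sum_i w_i = 1$, so the prior factors out of the geometric mean; with unconstrained weights the two orders of operation would disagree.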
When we refer to composite likelihood in the sequel, we assume unit sum weights. \section{Composite likelihood as message approximation} \label{sec:message} Composite likelihood may also be understood as a means to approximate the ``true'' likelihood function\footnote{By {\em true likelihood}, we mean the likelihood corresponding to a specified yet intractable generative model. In practice, such a model is obviously not required.} or, in the language of graphical models, the {\em message} that the data sends to the latent variable~$\theta$. Several ``clues'' are sent to~$\theta$ via different data features, and then integrated assuming {\em non-coalescent} emitting sources, meaning that statistical dependences between clues are treated as unknown, although not ignored. The whole idea is depicted by the factor graph\footnote{See \cite{Bishop-06} for a didactic introduction to factor graphs.} in Figure~\ref{fig:fgraph2}, where the latent variable is connected to multiple factors involving single clues (rather than a single factor involving multiple clues), thereby enabling efficient computations. \begin{figure}[!ht] \begin{center} \subfigure[Generative model]{\includegraphics[width=.35\textwidth]{fgraph1.pdf}\label{fig:fgraph1}} \subfigure[Composite likelihood model]{\includegraphics[width=.6\textwidth]{fgraph2.pdf}\label{fig:fgraph2}} \caption{Factor graph representations of a generative model (a) and its approximation using composite likelihood (b). In (a), the factor connecting the data~$y$ and the latent variable~$\theta$ is the true likelihood, $L(y,\theta)=p(y|\theta)$. In (b), factors connecting $y$ and the clues $z_1,z_2,\ldots$ represent feature extractions, $\alpha_i(y,z_i)=\delta[z_i-f_i(y)]$, while factors connecting the clues and~$\theta$ are scaled single-feature likelihoods, $\beta_i(z_i,\theta)=p(z_i|\theta)^{w_i}$.
In both graphs, the factor shown in black is the prior $\pi(\theta)$.} \label{fig:fgraph} \end{center} \end{figure} This approximation scheme happens to be optimal in an information-theoretic sense. As noted in \cite{Garg-04}, the log-linear pool minimizes the average Kullback-Leibler (KL) divergence to the probabilistic opinions: \begin{equation} \label{eq:avg_kl} p_\star = \arg\min_p \sum_{i=1}^n w_i D(p\|p_i), \end{equation} where $p_i(\theta) \propto \pi(\theta)\ell_i(\theta)$ is the posterior distribution of agent~$i$. Note that, even though the weights are arbitrary in~(\ref{eq:avg_kl}), the optimal solution normalizes them to unit sum. Composite likelihood thus arises as the best possible consensus among agents according to a natural criterion, in addition to satisfying the external Bayesianity axiom. If one interprets the average divergence from agent opinions~(\ref{eq:avg_kl}) as a proxy for the divergence from ``the truth'', $p(\theta|y)$, composite likelihood can be seen as a variational approximation to the true likelihood. This corresponds to the intuitive notion that a consensus among sufficiently many experts should yield a reasonable guess. An essential difference with usual approximate inference methods \cite{Bishop-06,Minka-05} is that the likelihood function does not need to be computable owing to the use of a proxy. However, as we shall see in Section~\ref{sec:weights}, knowledge of the feature sampling distributions can be further exploited to optimize the likelihood approximation with respect to the composite likelihood weights. \section{Super composite likelihood} \label{sec:super} Due to the distribution of weights between clues, a drawback of composite likelihood is that it is prone to {\em information overload} in the sense that it tends to ``flatten out'' when too many clues are included as relevant clues then get downweighted. 
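Returning briefly to the optimality property~(\ref{eq:avg_kl}): it is easy to verify numerically on a finite hypothesis set that the normalized weighted geometric mean of the agents' posteriors attains the minimum of the weighted average KL divergence, and that random perturbations never improve on it (toy numbers of our own).

```python
# Numerical check (toy numbers of our own) that the log-linear pool minimizes
# the weighted average KL divergence sum_i w_i D(p || p_i) on a finite set.
import math, random

def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def kl(p, q):
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

agents = [normalize([0.6, 0.3, 0.1]), normalize([0.2, 0.5, 0.3])]
w = [0.4, 0.6]  # unit sum

def objective(p):
    return sum(wi * kl(p, pi) for wi, pi in zip(w, agents))

# Log-linear pool: weighted geometric mean of the agent posteriors, normalized.
pool = normalize([math.exp(sum(wi * math.log(pi[k]) for wi, pi in zip(w, agents)))
                  for k in range(3)])

random.seed(0)
for _ in range(200):
    q = normalize([pool[k] * math.exp(random.gauss(0, 0.3)) for k in range(3)])
    assert objective(q) >= objective(pool) - 1e-12
```

The underlying reason is that $\sum_i w_i D(p\|p_i) = D(p\|p_\star) + \text{const}$, so the objective is strictly convex in~$p$ with the pool as its unique minimizer.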
If one decides not to merge clues for computational reasons (this would require handling their joint distribution), one could hope to mitigate information overload by assigning strong weights only to those clues that are believed to be ``most informative''. However, when chosen for computational simplicity, clues may not only convey limited information at the individual level: their informativeness may also be very much hypothesis-dependent. Consider, for instance, diagnosing a disease from a routine medical checkup. Body temperature may point to a bacterial infection by comparison with normality, but would not help detect a non-infectious cardiovascular disease -- and conversely for, say, blood pressure. This motivates a more general setting where clues can be weighted differently depending on hypotheses. To avoid unnecessary technicalities, we will assume from now on a finite set of hypotheses, $\Theta=\{\theta_0,\theta_1,\ldots,\theta_m\}$, where one particular hypothesis, $\theta_0$, is given the special status of reference, or ``null'' hypothesis. We start by introducing auxiliary binary variables~$t_j$, for $j=1,\ldots,m$, defined by truncation of~$\theta$: \begin{equation} \label{eq:binary} t_j = \left\{ \begin{array}{lll} 1 & {\rm if} & \theta = \theta_j \\ 0 & {\rm if} & \theta = \theta_0 \end{array} \right. . \end{equation} One can think of each~$t_j$ as an indicator light that flashes green or red whenever~$\theta$ is in one of the particular two states $\theta_j$ or $\theta_0$, and does not respond otherwise. The collection of all $t_j$'s may be thought of as a population code \cite{Knill-04} for~$\theta$. The key idea is as follows: instead of approximating the message sent from the data to~$\theta$ in one piece as in Section~\ref{sec:message}, we may approximate each of the simpler messages sent to the $t_j$'s.
To that end, we construct a factor graph using the population code, depicted in Figure~\ref{fig:fgraph3}, which is equivalent to the graph in Figure~\ref{fig:fgraph1} for the purpose of computing the posterior distribution~$p(\theta|y)$. This graph involves multiple factors representing the ``truncated'' likelihood functions, $$ L_j(t_j)=p(y|t_j)= \left\{ \begin{array}{lll} p(y|\theta_j) & {\rm if} & t_j=1 \\ p(y|\theta_0) & {\rm if} & t_j=0 \end{array} \right. , \qquad j=1,\ldots,m, $$ as opposed to a single factor for the full likelihood $L(\theta)$. In addition, there are factors $\gamma_j$ to synthesize the different messages sent to the binary variables $t_1,\ldots,t_m$ into one sent to~$\theta$: \begin{equation} \label{eq:factor_gamma} \gamma_j(t_j,\theta) = \left\{ \begin{array}{lll} \delta_{1t_{j}} & {\rm if} & \theta=\theta_j \\ \delta_{0t_{j}} & {\rm if} & \theta\not=\theta_j \end{array} \right. , \end{equation} where $\delta$ denotes the Kronecker delta. \begin{figure}[!ht] \begin{center} \includegraphics[width=.6\textwidth]{fgraph3.pdf} \end{center} \caption{Alternative graph using population coding for the computation of $p(\theta|y)$. 
See text for details.} \label{fig:fgraph3} \end{figure} The unnormalized joint distribution~$p_2(y,\theta)$ represented by the graph in Figure~\ref{fig:fgraph3} reads: \begin{eqnarray*} p_2(y,\theta) & = & \pi(\theta) \sum_{t_1=0}^{1}\ldots \sum_{t_m=0}^{1} \prod_{j=1}^m L_j(t_j) \gamma_j(t_j,\theta) \\ & = & \pi(\theta) \prod_{j=1}^m \left\{ \mathbf{1}_{\{\theta_j\}}(\theta) p(y|\theta_j) + [1-\mathbf{1}_{\{\theta_j\}}(\theta)]p(y|\theta_0) \right\}\\ & = & \pi(\theta) p(y|\theta) p(y|\theta_0)^{m-1} , \end{eqnarray*} clearly yielding the same posterior $p_2(\theta|y)=p(\theta|y)$ as the graph in Figure~\ref{fig:fgraph1}, although generally not the same generative model: $p_2(y|\theta)\propto p(y|\theta)p(y|\theta_0)^{m-1}$ hence $p_2(y|\theta)\not=p(y|\theta)$ unless there are two hypotheses only ($m=1$), or the null distribution~$p(y|\theta_0)$ is uniform. While the inconsistency between generative models is irrelevant to inference, the alternative graph enables a more flexible likelihood approximation scheme, where each factor $L_j(t_j)$ can be substituted with a specific composite likelihood $L_{cj}(t_j, \mathbf{w}_j)$ using its own pre-determined weights $\mathbf{w}_j=(w_{1j},w_{2j},\ldots,w_{nj})^\top$ depending on~$j$: $$ L_{cj}(t_j, \mathbf{w}_j) = \prod_{i=1}^n p(z_i|t_j)^{w_{ij}} = \prod_{i=1}^n \beta_{ij}(z_i, t_j), $$ with: \begin{equation} \label{eq:factor_beta} \beta_{ij}(z_i, t_j) = \left\{ \begin{array}{lll} p(z_i|\theta_j)^{w_{ij}} = \ell_i (\theta_j)^{w_{ij}} & {\rm if} & t_j=1 \\ p(z_i|\theta_0)^{w_{ij}} = \ell_i (\theta_0)^{w_{ij}} & {\rm if} & t_j=0 \end{array} \right. , \end{equation} and: $$ \forall j \in \{1,\ldots,m\}, \qquad \sum_{i=1}^n w_{ij} = 1 . $$ This corresponds to replacing each factor $L_j$ in Figure~\ref{fig:fgraph3} with a subgraph of the same structure as the subgraph highlighted in red in Figure~\ref{fig:fgraph2}, resulting in the further modified factor graph shown in Figure~\ref{fig:fgraph4}.
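The marginalization over the population code can be checked by brute force on toy numbers (ours, not the paper's), confirming that the graph of Figure~\ref{fig:fgraph3} indeed yields $p_2(y,\theta)=\pi(\theta)\,p(y|\theta)\,p(y|\theta_0)^{m-1}$ and hence the same posterior as the original graph.

```python
# Brute-force check (toy numbers of our own) of the marginalization over the
# binary population code t_1,...,t_m in the graph of Figure 3.
from itertools import product

m = 3                                   # hypotheses theta_1..theta_m, plus theta_0
lik = [0.05, 0.8, 0.1, 0.3]             # p(y | theta_0..theta_m) at the observed y

def gamma(j, t, theta):                 # factor gamma_j(t_j, theta)
    return 1.0 if (t == 1) == (theta == j) else 0.0

def L(j, t):                            # truncated likelihood L_j(t_j)
    return lik[j] if t == 1 else lik[0]

for theta in range(m + 1):              # theta = 0 stands for theta_0
    total = 0.0
    for ts in product((0, 1), repeat=m):
        f = 1.0
        for j, t in enumerate(ts, start=1):
            f *= L(j, t) * gamma(j, t, theta)
        total += f
    # Expected: p(y|theta) * p(y|theta_0)^(m-1); for theta_0 this is p(y|theta_0)^m.
    expected = lik[theta] * lik[0] ** (m - 1) if theta > 0 else lik[0] ** m
    assert abs(total - expected) < 1e-15
```

The $\gamma_j$ factors kill every configuration of the $t_j$'s except the one consistent with~$\theta$, which is why the sum collapses to a single product.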
Intuitively, hypothesis-dependent weights make it possible to emphasize the clues that are relevant to each particular hypothesis comparison $\theta_j$ {\em vs.}~$\theta_0$, leading to potentially better approximations to the true odds $p(\theta_j|y)/p(\theta_0|y)$ than using constant weights. \begin{figure}[!ht] \begin{center} \includegraphics[width=.7\textwidth]{fgraph4.pdf} \end{center} \caption{Super composite likelihood factor graph. See respectively (\ref{eq:factor_beta}) and (\ref{eq:factor_gamma}) for the expression of the factors $\beta_{ij}$ and $\gamma_j$.} \label{fig:fgraph4} \end{figure} We shall point out that the subgraph connecting ${\bf z}=(z_1,z_2,\ldots,z_n)$ and ${\bf t}=(t_1,t_2,\ldots,t_m)$ in Figure~\ref{fig:fgraph4} (shown in red) is equivalent to an undirected bipartite graph, a property shared with restricted Boltzmann machines (RBM) \cite{Fischer-14,Hinton-06,Larochelle-08}. This means that the variables in one layer are independent conditionally on the other layer: \begin{equation} \label{eq:bipartite} p_3({\bf t}|{\bf z}) = \prod_{j=1}^m p_3(t_j|{\bf z}), \qquad p_3({\bf z}|{\bf t}) = \prod_{i=1}^n p_3(z_i|{\bf t}), \end{equation} with, in this case, $$ p_3(t_j|{\bf z}) \propto L_{cj}(t_j, \mathbf{w}_j), \qquad p_3(z_i|{\bf t}) = p_3(z_i|\theta) \propto p(z_i|\theta)^{W_i(\theta)}, $$ where $W_i(\theta)$ is defined by $W_i(\theta_j)=w_{ij}$ if $j\in\{1,\ldots,m\}$ and $W_i(\theta_0)=\sum_{j=1}^m w_{ij}$. An essential difference with RBM, however, is that the generative distribution $p_3({\bf z}|{\bf t})$ is not intended to be used for model training. Instead, our core assumption is that the marginal feature distributions are learned ``before'' constructing the graph, a step that does not rely on assuming feature independence conditionally on~${\bf t}$ or, equivalently, $\theta$.
Therefore, despite satisfying the bipartite condition (\ref{eq:bipartite}), the joint distribution $p_3({\bf t},{\bf z})$ represented in Figure~\ref{fig:fgraph4} is asymmetrical in use, and serves the only purpose of defining a posterior distribution~$p_3(\theta|y)$ via~$p_3({\bf t}|{\bf z})$. Marginalizing out~${\bf t}$, we find: \begin{eqnarray*} p_3(\theta|y) & \propto & \pi(\theta) \sum_{{\bf t}\in \{0,1\}^m} \prod_{j=1}^m p_3(t_j|{\bf z}) \gamma_j(t_j,\theta) \\ & = & \pi(\theta) \sum_{t_1=0}^{1}\ldots \sum_{t_m=0}^{1} \prod_{j=1}^m L_{cj}(t_j, \mathbf{w}_j) \gamma_j(t_j,\theta) \\ & = & \pi(\theta) \prod_{i=1}^n \prod_{j=1}^m \ell_i \left( \mathbf{1}_{\{\theta_j\}}(\theta) \theta + [1-\mathbf{1}_{\{\theta_j\}}(\theta)]\theta_0 \right)^{w_{ij}} \\ & = & \pi(\theta) \prod_{i=1}^n \ell_i(\theta_0)^{\sum_{j=1}^m w_{ij}} \prod_{i=1}^n \left[ \frac{\ell_i(\theta)}{\ell_i(\theta_0)} \right]^{w_i(\theta)} , \end{eqnarray*} therefore: \begin{equation} \label{eq:new_pool} p_3(\theta|y) = K \frac{\pi(\theta)}{\pi(\theta_0)} \prod_{i=1}^n \left[ \frac{\ell_i(\theta)}{\ell_i(\theta_0)} \right]^{w_i(\theta)}, \end{equation} where $K$ is a normalizing factor (dependent on~$y$) and the functions $w_i(\theta)$ are defined by $w_i(\theta_j)=w_{ij}$ for $j=1,\ldots,m$ and $w_i(\theta_0)=0$ conventionally. As an opinion pooling rule, (\ref{eq:new_pool}) is more general than the log-linear pool~(\ref{eq:log_pool}) as it explicitly depends on~$\theta$ through the changing weights. Moreover, since the weights sum up to one for each~$\theta\not=\theta_0$, we have the equivalent expression: $$ p_3(\theta|y) = K \prod_{i=1}^n \left[ \frac{\pi(\theta)\ell_i(\theta)}{\pi(\theta_0)\ell_i(\theta_0)} \right]^{w_i(\theta)}, $$ showing that the pool does not change depending on whether the prior is incorporated before or after combining experts. In other words, (\ref{eq:new_pool}) is, again, externally Bayesian.
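As a quick numerical illustration of the pooling rule~(\ref{eq:new_pool}) and of its externally Bayesian property, the following Python sketch uses made-up feature likelihoods, a made-up prior, and hypothesis-dependent weights (all numerical values are illustrative assumptions, not taken from this paper):

```python
import numpy as np

# Toy setup: m = 2 alternative hypotheses plus the reference theta_0, and
# n = 2 clues.  Rows index clues i; columns index (theta_0, theta_1, theta_2).
# All values below are made up for illustration.
ell = np.array([[0.5, 0.9, 0.2],
                [0.5, 0.3, 0.8]])
prior = np.array([0.5, 0.3, 0.2])

# Hypothesis-dependent weights w_i(theta): each column j >= 1 sums to one;
# the reference theta_0 conventionally gets zero weights.
W = np.array([[0.0, 0.7, 0.1],
              [0.0, 0.3, 0.9]])

def scl_posterior(prior, ell, W):
    """Pooling rule (eq:new_pool): p(theta|y) proportional to
    (pi(theta)/pi(theta_0)) * prod_i [ell_i(theta)/ell_i(theta_0)]^{w_i(theta)}."""
    ratios = ell / ell[:, [0]]                  # ell_i(theta)/ell_i(theta_0)
    log_pool = (W * np.log(ratios)).sum(axis=0)
    p = (prior / prior[0]) * np.exp(log_pool)
    return p / p.sum()

post = scl_posterior(prior, ell, W)

# External Bayesianity: folding the prior into each expert's likelihood
# before pooling yields the same posterior, because the weights sum to one
# for each hypothesis distinct from theta_0.
ell_with_prior = ell * (prior / prior[0])
post2 = scl_posterior(np.ones(3), ell_with_prior, W)
assert np.allclose(post, post2)
```

The two normalized posteriors coincide, which is exactly the external Bayesianity argument made above.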
As in Section~\ref{sec:pool}, we recognize a Bayes rule-type expression: $p_3(\theta|y)\propto\pi(\theta){\cal L}_c(\theta,\mathbf{W})$, with: \begin{equation} \label{eq:super} {\cal L}_c(\theta,\mathbf{W}) \equiv \prod_{i=1}^n \left[ \frac{\ell_i(\theta)}{\ell_i(\theta_0)} \right]^{w_i(\theta)} , \end{equation} where~$\mathbf{W}$ denotes the $n\times m$ weight matrix with general element $w_{ij}$. We call~(\ref{eq:super}) a super composite likelihood (SCL). Note that the SCL evaluated at a particular hypothesis~$\theta_j$ boils down to a composite likelihood ratio, \begin{equation} \label{eq:super_ratio} {\cal L}_c(\theta_j,\mathbf{W}) = \frac{L_c(\theta_j,\mathbf{w}_j)}{L_c(\theta_0,\mathbf{w}_j)} . \end{equation} To see that SCL~(\ref{eq:super}) is indeed a generalization of composite likelihood, assume that $\mathbf{w}_j=\mathbf{w}$ is the same for all hypotheses. This implies that (\ref{eq:super_ratio}) simplifies to ${\cal L}_c(\theta,\mathbf{W})=L_c(\theta,\mathbf{w})/L_c(\theta_0,\mathbf{w})$, the denominator of which is independent of~$\theta$ and can therefore be safely ignored for inference about~$\theta$. In this special case, SCL is equivalent to standard composite likelihood regardless of the chosen reference~$\theta_0$. Nevertheless, $\theta_0$ plays a crucial normalization role whenever the columns of $\mathbf{W}$ are different, enabling complex, possibly sparse, weighting patterns. Consider, for instance, the case where each hypothesis gets evidence from a single clue, so that $\mathbf{W}$ has a single~1 in each column and all other elements~0. The SCL~(\ref{eq:super}) then reduces to a likelihood ratio involving a single, hypothesis-specific clue $z_{\iota(j)}$ determined by some mapping $\iota:\{1,\ldots,m\}\to \{1,\ldots,n\}$: \begin{equation} \label{eq:pdf_proj} {\cal L}_c(\theta_j,\mathbf{W}) = \frac{\ell_{\iota(j)}(\theta_j)}{\ell_{\iota(j)}(\theta_0)} = \frac{p(z_{\iota(j)}|\theta_j)}{p(z_{\iota(j)}|\theta_0)} .
\end{equation} This expression was already used in \cite{Baggenstoss-03,Minka-04,Baggenstoss-15}, motivated by a different argument, namely the {\em PDF projection theorem}, which characterizes the full data-generating model that maximizes relative entropy (with respect to a chosen reference distribution) under knowledge of the feature sampling distributions. As discussed in \cite{Baggenstoss-03}, (\ref{eq:pdf_proj}) closely approximates the true likelihood if each $z_{\iota(j)}$ is a near-sufficient statistic for~$\theta_j$ {\em vs.}~$\theta_0$, a condition that may be difficult to meet if the choice of clues is driven by computational efficiency. SCL provides an alternative interpretation of PDF~projection, while also extending it to multiple clues with unknown statistical dependences. \section{Weight optimization} \label{sec:weights} The general advantage of~SCL over standard composite likelihood is that it defines a broader class of pseudo-likelihood functions, hence with the potential to better approximate the true likelihood for a suitable choice of the weight matrix~$\mathbf{W}$. On the other hand, standard composite likelihood satisfies good asymptotic properties for any choice of unit sum positive weights, as shown in Appendix~\ref{sec:asymptotic}, but such properties do not extend in full generality to~SCL: a poor likelihood approximation is to be expected if~$\mathbf{W}$ is chosen at random, hence the importance of fine-tuning super composite weights. To this end, a conventional machine learning strategy may be used. Assume (for now) that a number of examples of inputs and responses $(y^k, \theta^k)$, for $k=1,\ldots,N$, are sampled independently. Given a tentative weight matrix~$\mathbf{W}$, examples elicit different SCL functions ${\cal L}_{ck}(\theta, \mathbf{W})$ as $y^k$ varies across examples.
By analogy with classical maximum likelihood model selection, we may consider rating the model's ability to predict examples depending on~$\mathbf{W}$ via the sample average logarithm of the~SCL: \begin{eqnarray*} \hat{\cal U}(\mathbf{W}) & = & \frac{1}{N} \sum_{k=1}^N \log {\cal L}_{ck}(\theta^k, \mathbf{W})\\ & = & \frac{1}{N} \sum_{k=1}^N \sum_{i=1}^n w_i(\theta^k) \log \frac{p(z_i^k|\theta^k)}{p(z_i^k|\theta_0)} , \end{eqnarray*} which essentially averages feature-based log-likelihood ratios relative to the chosen reference hypothesis~$\theta_0$. This utility measure turns out to be particularly appealing for tuning SCL weights, for two reasons. First, if examples are drawn from the true yet incompletely known joint distribution $p(y,\theta)=\pi(\theta)p(y|\theta)$, then, provided the limit exists, $\hat{\cal U}(\mathbf{W})$ converges in the limit of many examples to: \begin{eqnarray} {\cal U}(\mathbf{W}) & = & E\left[ \log {\cal L}_c(\theta, \mathbf{W}) \right] \nonumber \\ & = & \sum_{j=1}^m \pi(\theta_j) \sum_{i=1}^n w_{ij} \underbrace{\int p(z_i|\theta_j) \log \frac{p(z_i|\theta_j)}{p(z_i|\theta_0)} dz_i}_{u_{ij}}, \label{eq:discrim} \end{eqnarray} where $u_{ij}$ denotes the KL~divergence of~$p(z_i|\theta_0)$ from~$p(z_i|\theta_j)$, and can be interpreted as the utility of clue~$i$ regarding hypothesis~$j$. Since the $u_{ij}$'s are fully determined by the known feature-generating distributions~$p(z_i|\theta)$, (\ref{eq:discrim}) can be evaluated without further knowledge about~$p(y,\theta)$. Consequently, we do not need to constitute an actual training dataset! Second, let~${\cal U}_\star$ denote the expected utility associated with the true distribution~$p(y,\theta)$, considered as an SCL built from a single clue, the full data~$y$.
As is easy to check, ${\cal U}_\star$ is the conditional Kullback-Leibler divergence of~$p(y|\theta_0)$ from~$p(y|\theta)$: $$ {\cal U}_\star = \sum_{j=1}^m \pi(\theta_j) \int p(y|\theta_j) \log \frac{p(y|\theta_j)}{p(y|\theta_0)} dy = D[p(y|\theta)\|p(y|\theta_0)] . $$ Applying in~(\ref{eq:discrim}) the data reduction inequality recalled in Appendix~\ref{sec:reduction_inequality}, we obtain the intuitive result that~${\cal U}(\mathbf{W})$ is upper bounded by the expected utility achievable without data reduction: $$ {\cal U}(\mathbf{W})\leq {\cal U}_\star . $$ This means that the full data-generating model maximizes the expected log-SCL over the whole space of SCL functions ({\em i.e.}, SCL functions built from arbitrary clues in arbitrary number and using arbitrary weights). Therefore, (\ref{eq:discrim}) qualifies as a measure of goodness of fit to the true likelihood. Maximizing~(\ref{eq:discrim}) with respect to~$\mathbf{W}$ amounts to optimizing the fit over a subspace of SCL functions spanned by fixed user-specified features. Some constraints may be imposed on~$\mathbf{W}$, such as assuming identical columns as in the case of standard composite likelihood, or forcing some elements to zero if some feature likelihood functions are only known on subsets of~$\Theta$. We here focus on the case where no constraints are imposed besides, of course, that the columns of~$\mathbf{W}$ lie in the $n$-dimensional simplex. Clearly, the optimal weights are then determined independently for each hypothesis (and thus independently of the prior) as any positive weights satisfying, for all~$j\in\{1,\ldots,m\}$: \begin{equation} \label{eq:optimal_weights} w_{ij} = 0 \quad {\rm iff}\quad i \not\in \arg\max_{i'\in\{1,\ldots,n\}} u_{i'j}, \qquad {\rm and} \quad \sum_{i=1}^n w_{ij}=1 , \end{equation} yielding a sparsity-enforcing rule which assigns non-zero weights only to the clues that achieve maximal KL~utility for a particular hypothesis.
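For concreteness, the rule~(\ref{eq:optimal_weights}) can be implemented directly whenever the utilities $u_{ij}$ are available in closed form. The Python sketch below assumes unit-variance Gaussian clues with made-up means, for which $u_{ij}=(\mu_{ij}-\mu_{i0})^2/2$; ties are resolved by equal weight sharing, as recommended below:

```python
import numpy as np

# Hypothetical setup: n = 3 Gaussian clues z_i ~ N(mu[i, j], 1) under
# hypothesis theta_j (column 0 is the reference theta_0).  For unit-variance
# Gaussians, KL(N(a,1) || N(b,1)) = (a - b)**2 / 2, so the utilities u_{ij}
# of rule (eq:optimal_weights) are available in closed form.
mu = np.array([[0.0, 2.0, 0.5],
               [0.0, 0.5, 2.0],
               [0.0, 2.0, 1.0]])

u = 0.5 * (mu[:, 1:] - mu[:, [0]]) ** 2        # u_{ij}, shape (n, m)

def optimal_weights(u, tol=1e-12):
    """Assign equal nonzero weights to the KL-maximal clues of each
    hypothesis; all other clues get weight zero (sparsity-enforcing rule)."""
    W = np.zeros_like(u)
    for j in range(u.shape[1]):
        winners = np.flatnonzero(u[:, j] >= u[:, j].max() - tol)
        W[winners, j] = 1.0 / len(winners)
    return W

W = optimal_weights(u)
# Hypothesis 1: clues 1 and 3 tie at u = 2, so each gets weight 1/2;
# hypothesis 2: clue 2 wins alone and gets weight 1.
```

Each column of the returned matrix lies on the simplex, as required.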
If there is a unique KL-maximal, or ``most exhaustive'' clue for each hypothesis, the resulting~SCL function essentially boils down to the PDF~projection method \cite{Baggenstoss-03,Minka-04,Baggenstoss-15}. More generally, if several clues~$z_i$ maximize $u_{ij}$ for some hypothesis~$\theta_j$, then the optimal weights are not unique. In such a case, it is preferable to assign equal weights to the winning clues in order to ensure that utility is not only large on average, but also stable across examples ({\em i.e.}, low in variance). Also note that the weight optimality rule~(\ref{eq:optimal_weights}) implies that, for any $\theta_\star \in \Theta$, $$ E\left[ \log \frac{L_c(\theta_\star, \mathbf{w})}{L_c(\theta_0, \mathbf{w})} \right] \leq E\left[ \log \frac{L_c(\theta_\star, \mathbf{w}_\star)}{L_c(\theta_0, \mathbf{w}_\star)} \right], $$ where $\mathbf{w}_\star$ is the weight vector associated with~$\theta_\star$, $E$ stands for the expectation with respect to $p(y|\theta_\star)$, and~$\mathbf{w}$ is any weight vector. Moreover, the following inequality is shown in Appendix~\ref{sec:asymptotic}: $$ E\left[ \log \frac{L_c(\theta, \mathbf{w})}{L_c(\theta_0, \mathbf{w})} \right] \leq E\left[ \log \frac{L_c(\theta_\star, \mathbf{w})}{L_c(\theta_0, \mathbf{w})} \right] , $$ for any hypothesis~$\theta$ and weighting~$\mathbf{w}$ and, in particular, for the weight vector associated with~$\theta$ via~(\ref{eq:optimal_weights}). Therefore, provided that the weight matrix~$\mathbf{W}$ verifies~(\ref{eq:optimal_weights}), we have that: $$ E \left[\log {\cal L}_c(\theta,\mathbf{W}) \right] \leq E \left[\log {\cal L}_c(\theta_\star,\mathbf{W}) \right] , $$ meaning that the SCL is asymptotically consistent in the sense that the expectation of its logarithm is maximized by the ``true'' parameter value. This is a highly desirable frequentist property which, as already pointed out, holds with any choice of weights for standard composite likelihood but not, in general, for~SCL.
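This consistency property can be checked numerically in the Gaussian case, where the expectations involved have closed forms. The sketch below is a toy verification under assumed unit-variance Gaussian clues (all means are made up); it uses the identity $E[\log(\ell_i(\theta)/\ell_i(\theta_0))]=\frac{1}{2}[(\mu_\star-\mu_0)^2-(\mu_\star-\mu_\theta)^2]$ for data generated with clue mean $\mu_\star$:

```python
import numpy as np

# Toy check of consistency under assumed Gaussian clues z_i ~ N(mu[i, j], 1).
# Rows index clues; columns index (theta_0, theta_1, theta_2).  With these
# means, the KL-optimal weights of (eq:optimal_weights) put all weight on
# clue 1 for theta_1 and on clue 2 for theta_2.
mu = np.array([[0.0, 2.0, 0.5],
               [0.0, 0.5, 2.0]])
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def expected_log_scl(j_star, j):
    """Closed-form E[log L_c(theta_j, W)] when data are generated under
    theta_{j_star}: sum_i w_i(theta_j) * 0.5 * ((mu*_i - mu_i0)^2
    - (mu*_i - mu_ij)^2)."""
    mu_star, mu_0, mu_j = mu[:, j_star], mu[:, 0], mu[:, j]
    per_clue = 0.5 * ((mu_star - mu_0) ** 2 - (mu_star - mu_j) ** 2)
    return float(W[:, j - 1] @ per_clue)

# The expected log-SCL is maximized at the true hypothesis in both cases.
assert expected_log_scl(1, 1) > expected_log_scl(1, 2)
assert expected_log_scl(2, 2) > expected_log_scl(2, 1)
```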
\section{Further extensions} \subsection{Conditional super composite likelihood} \label{sec:conditional} The SCL derivation rests upon the definition of the feature-based likelihood as $\ell_i(\theta)=p(z_i|\theta)$. As a straightforward extension, $\ell_i(\theta)$ may be conditioned on an additional ``independent'' feature $z^c_i = f^c_i(y)$ considered as a predictor of the ``dependent'' feature, $z_i=f_i(y)$, yielding the more general form: \begin{equation} \label{eq:cond_feat_lik} \ell_i(\theta) = p(z_i|\theta,z^c_i). \end{equation} Conditioning may be useful if it is believed that $z^c_i$ carries little or no information about $\theta$ on its own, but can provide relevant information when considered jointly with $z_i$, as in the case of regression covariates, for instance. Standard composite likelihood (\ref{eq:comp_lik}) then amounts to {\em conditional composite likelihood} \cite{Varin-11}, a more general form of composite likelihood also including Besag's historical {\em pseudo-likelihood} \cite{Besag-74}, which was a major breakthrough in computer vision. Likewise, the above derivation remains valid when ``independent'' features are used, and we can thus define a conditional version of SCL by plugging likelihood functions of the form~(\ref{eq:cond_feat_lik}) into (\ref{eq:super}). \subsection{Nuisance parameters} \label{sec:nuisance_params} Most often, plausible feature-generating models involve unknown quantities of no direct interest. Such quantities may be estimated offline in a supervised learning phase if a suitable training dataset is available. However, if such ``pre-training'' is not feasible in practice, the feature-based likelihoods depend on an unknown parameter~$\psi$, in addition to the parameter of interest~$\theta$: $\ell_i(\theta,\psi)=p(z_i|\theta,\psi)$, or $\ell_i(\theta,\psi)=p(z_i|z^c_i,\theta,\psi)$ if ``independent'' features are used.
We then face two difficulties: \begin{itemize} \item {\em Parameter integration.} How to make the inference on~$\theta$ independent from~$\psi$ assuming known SCL~weights? \item {\em Weight optimization.} How to optimize the SCL~weights under unknown~$\psi$? \end{itemize} We address these two points in the sequel. \subsubsection{Parameter integration} \label{sec:nuisance_integration} When weighting clues independently from the hypotheses, a joint composite likelihood on both parameters may be derived: $$ L_c(\theta,\psi,\mathbf{w}) = \prod_i \ell_i(\theta,\psi)^{w_i}, $$ and further integrated with respect to some prior on the nuisance parameter to yield a function of~$\theta$ only, which we may call the {\em composite evidence}: \begin{equation} \label{eq:comp_evidence} \bar{L}_c(\theta, \mathbf{w}) = \int \pi(\psi) L_c(\theta,\psi,\mathbf{w}) d\psi . \end{equation} This corresponds to using the factor graph represented in Figure~\ref{fig:fgraph5} to approximate the message from the data to~$\theta$, where the factors $\beta_i$ are now defined by $\beta_i(z_i,\theta,\psi)=\ell_i(\theta,\psi)^{w_i}$. To compute the associated posterior $p_4(\theta|y)$, $\psi$ is treated as an auxiliary variable to be integrated out, leading to $p_4(\theta|y)\propto \pi(\theta)\bar{L}_c(\theta, \mathbf{w})$. \begin{figure}[!ht] \begin{center} \includegraphics[width=.6\textwidth]{fgraph5.pdf} \end{center} \caption{Extension of the composite likelihood factor graph in Figure~\ref{fig:fgraph2} to account for nuisance parameters.} \label{fig:fgraph5} \end{figure} More generally, when weighting clues depending on hypotheses, we may use essentially the same idea as in Section~\ref{sec:super}, {\em i.e.}, replicate subgraphs such as the one depicted in red and magenta in Figure~\ref{fig:fgraph5} in order to approximate the different messages sent from the data to each truncated variable~$t_j$, as defined in~(\ref{eq:binary}). 
Note that, by replicating the same graph structure across variables~$t_j$, the variable~$\psi$ is also replicated, so that the resulting factor graph in Figure~\ref{fig:fgraph6} does not represent a joint distribution of the form $p(y,\theta,\psi)$, but rather one of the form~$p(y,\theta,\psi_1,\ldots,\psi_m)$. This makes the approximation more flexible by taking advantage of the fact that a marginal distribution on~$\psi$ is not required. \begin{figure}[!ht] \begin{center} \includegraphics[width=.7\textwidth]{fgraph6.pdf} \end{center} \caption{Extension of the super composite likelihood factor graph in Figure~\ref{fig:fgraph4} to account for nuisance parameters. Here, the factors $\beta_{ij}(z_i,t_j,\psi_j)$ are defined by $\beta_{ij}(z_i,1,\psi_j)=\ell_i(\theta_j,\psi_j)^{w_{ij}}$ and $\beta_{ij}(z_i,0,\psi_j)=\ell_i(\theta_0,\psi_j)^{w_{ij}}$.} \label{fig:fgraph6} \end{figure} The posterior distribution encoded by this graph (integrated with respect to the replicates $\psi_1,\ldots,\psi_m$) is easily found to be: $$ p_5(\theta|y) \propto \pi(\theta) \left\{ \begin{array}{lll} \displaystyle \frac{\bar{L}_c(\theta_j,\mathbf{w}_j)}{\bar{L}_c(\theta_0,\mathbf{w}_j)} & {\rm if} & \theta=\theta_j \quad {\rm with} \quad j\in\{1,\ldots,m\}\\ 1 & {\rm if} & \theta=\theta_0 \end{array} \right. , $$ where $\bar{L}_c(.,\mathbf{w}_j)$ is the above-defined composite evidence function~(\ref{eq:comp_evidence}) associated with the weight vector~$\mathbf{w}_j$. This justifies defining the {\em super composite evidence} as: \begin{equation} \label{eq:super_comp_evidence} \bar{\cal L}_c(\theta_j,\mathbf{W}) = \left\{ \begin{array}{lll} \displaystyle \frac{\bar{L}_c(\theta_j,\mathbf{w}_j)}{\bar{L}_c(\theta_0,\mathbf{w}_j)} & {\rm if} & j\in\{1,\ldots,m\}\\ 1 & {\rm if} & j=0 \end{array} \right. . 
\end{equation} It is obvious from~(\ref{eq:super_comp_evidence}) that the super composite evidence conserves all odds $\theta_j$ {\em vs.}~$\theta_0$ integrated within their respective associated subgraphs, very much like its nuisance parameter-free version~(\ref{eq:super_ratio}). The conservation of odds implies that evaluating the SCL at a particular~$\theta$ is akin to computing a scaled Bayes factor, which may sometimes be efficiently approximated via a maximum likelihood ratio statistic. \subsubsection{Weight optimization} \label{sec:nuisance_weights} Consistent with the idea of nuisance parameter replication (see Figure~\ref{fig:fgraph6}), we may employ a nested strategy to optimize the SCL~weights in the presence of nuisance parameters: for each $j\in\{1,\ldots,m\}$, apply the weight optimization of Section~\ref{sec:weights} to the subgraph model connecting all clues~$z_i$ with the particular pair $(t_j,\psi_j)$. This leads to a set of linear programming problems: for each~$j$, find the weight vector~${\bf w}_j$ on the $n$-dimensional simplex that maximizes: $$ \bar{\cal U}_j({\bf w}_j) = E\left[\log \frac{L_c(\theta_j,\psi_j,{\bf w}_j)}{L_c(\theta_0,\psi_j,{\bf w}_j)}\right], $$ where the expectation is taken with respect to $(y,\psi_j)\sim p(y|\theta_j,\psi)\pi(\psi)$, a computation that only requires knowledge of the feature sampling distributions, $p(z_i|\theta,\psi)$ for $i=1,\ldots,n$, in addition to the prior $\pi(\psi)$. In this way, every ${\bf w}_j$ is optimal for the posterior of~$(t_j,\psi_j)$ or, more precisely, yields the best approximation to the true joint likelihood $L(\theta, \psi)$ for~$\theta$ restricted to the binary set~$\{\theta_0,\theta_j\}$. The ensuing marginal likelihood approximations are therefore as tight as possible for all hypothesis comparisons with~$\theta_0$, as is the goal of SCL~weight optimization.
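As a minimal illustration of the composite evidence~(\ref{eq:comp_evidence}) and of the odds~(\ref{eq:super_comp_evidence}), the following Python sketch integrates a weighted composite likelihood over a nuisance parameter by plain Monte Carlo; the Gaussian clues, the uniform prior on~$\psi$, and all numerical values are illustrative assumptions, not taken from this paper:

```python
import math
import random

random.seed(1)

# Made-up model: Gaussian clues z_i ~ N(theta, psi^2), where the standard
# deviation psi is a nuisance parameter with prior Uniform(0.5, 2).
def ell(z, theta, psi):
    """Feature likelihood p(z_i | theta, psi)."""
    return math.exp(-0.5 * ((z - theta) / psi) ** 2) / (psi * math.sqrt(2 * math.pi))

def composite_evidence(zs, theta, weights, psis):
    """Monte Carlo estimate of (eq:comp_evidence): average the weighted
    composite likelihood prod_i ell_i(theta, psi)^{w_i} over prior draws."""
    total = 0.0
    for psi in psis:
        lc = 1.0
        for z, w in zip(zs, weights):
            lc *= ell(z, theta, psi) ** w
        total += lc
    return total / len(psis)

zs, weights = [0.9, 1.2], [0.6, 0.4]       # two clues, unit-sum weights
psis = [random.uniform(0.5, 2.0) for _ in range(20000)]

# Super composite evidence for theta_1 = 1 vs the reference theta_0 = 0:
# a ratio of integrated composite likelihoods, as in (eq:super_comp_evidence).
odds = composite_evidence(zs, 1.0, weights, psis) / \
       composite_evidence(zs, 0.0, weights, psis)
assert odds > 1.0                          # data near 1 favor theta_1
```

Sharing the same prior draws between numerator and denominator keeps the comparison deterministic for a fixed seed.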
\section{Discussion} \label{sec:discussion} Composite likelihood is a relatively recent concept from computational statistics that has so far mainly been developed from a frequentist perspective as a surrogate for the maximum likelihood method. In this paper, we have shown (to the best of our knowledge, for the first time) a deep connection between composite likelihood and probabilistic opinion pooling, thereby establishing composite likelihood as a class of {\em discriminative models}\footnote{We here define a discriminative model as a model whose parameters describe the conditional distribution of an unobserved variable of interest given an observable variable, as opposed to a {\em generative} model, which involves parameters encoding the conditional distribution of an observable given an unobserved variable.} for statistical inference and machine learning. This connection is possible under the mild restriction that composite likelihood weights are chosen to sum up to one. \subsection{From na\"ive Bayes to super composite likelihood} In the probabilistic opinion pooling perspective, composite likelihood is essentially a reinterpretation and a generalization of the {\em na\"ive Bayes} paradigm that relaxes the associated ``na\"ive'' mutual feature independence assumption. In particular, when all features are given equal weight, the composite likelihood is the na\"ive Bayes likelihood raised to the power~$1/n$, where~$n$ is the number of selected features, yielding for instance the same maximum a posteriori (MAP) estimator if a flat prior is used. However, besides providing a simple justification for the wide use of na\"ive Bayes MAP algorithms, composite likelihood using unit sum weights also entails a conservative rescaling of credibility sets derived from na\"ive Bayes, which gets more drastic as the number of features increases. This comes as a consequence of relaxing feature independence.
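The equal-weight claim above is easy to check numerically. In the following Python toy example the per-feature likelihood values are made up; it verifies that the $1/n$-power composite likelihood preserves the na\"ive Bayes MAP decision under a flat prior while flattening the posterior:

```python
import math

# Per-feature likelihoods ell_i(theta) for two hypotheses and n = 3 features
# (all values made up for illustration).
lik = {
    "theta_a": [0.8, 0.7, 0.9],
    "theta_b": [0.4, 0.5, 0.3],
}
n = 3

def normalize(d):
    s = sum(d.values())
    return {k: v / s for k, v in d.items()}

# Naive Bayes posterior (flat prior) vs composite likelihood with equal
# weights 1/n, i.e. the naive Bayes likelihood raised to the power 1/n.
naive = normalize({t: math.prod(ls) for t, ls in lik.items()})
composite = normalize({t: math.prod(ls) ** (1 / n) for t, ls in lik.items()})

map_naive = max(naive, key=naive.get)
map_comp = max(composite, key=composite.get)
assert map_naive == map_comp == "theta_a"     # same MAP decision
assert composite["theta_a"] < naive["theta_a"]  # flatter, more conservative
```

The composite posterior assigns less mass to the winning hypothesis, which is the conservative rescaling of credibility sets discussed above.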
We further argued that this rescaling may, to some extent, be unduly conservative in multiclass problems if features are weighted uniformly over the space of unobserved labels, leading us to propose the more general concept of super composite likelihood (SCL). SCL essentially approximates likelihood ratios relative to a fixed reference hypothesis using locally weighted composite likelihood functions. Owing to weight adaptability, SCL describes a more general class of discriminative models than standard composite likelihood. The idea can also be understood in terms of approximating the factor graph in Figure~\ref{fig:fgraph3}, which encodes the true but intractable posterior distribution, by another factor graph of the type depicted in Figure~\ref{fig:fgraph4}. This substitution results in breaking into pieces both the observed and unobserved variable spaces, and assembling a series of concise {\em messages} passing from the former to the latter. Note that this is a pretty unusual case of approximate inference in factor graphs where the factors to be approximated are unknown (or need not be known). \subsection{Super composite likelihood training} Any SCL~model is fully determined by the marginal generative distributions of some pre-specified features, which may rely on a moderate number of parameters if low-dimensional features are chosen. There are two approaches to deal with these parameters. One is to estimate them beforehand by supervised learning, if an adequate training dataset is available. This could be compared with contrastive pre-training of RBMs \cite{Hinton-06,Fischer-14}, which also optimizes parameters for generation of observable features. An important difference, however, is that contrastive pre-training is unsupervised. On the other hand, SCL pre-training relies on weaker assumptions as it does not assume conditional feature independence unlike RBMs. 
Once pre-training is complete, the SCL~weights may be tuned by maximum~SCL, as shown in Section~\ref{sec:weights}: this second training stage is, again, supervised in essence, but does not require additional training examples since it is fully determined by the feature distributions learned in the pre-training step. \begin{figure}[!ht] \begin{center} \subfigure[Generative model]{\includegraphics[width=.25\textwidth]{dgraph1.pdf}\label{fig:dgraph1}} \hspace*{.05\textwidth} \subfigure[Classical discriminative model]{\includegraphics[width=.25\textwidth]{dgraph2.pdf}\label{fig:dgraph2}} \hspace*{.05\textwidth} \subfigure[Composite likelihood model]{\includegraphics[width=.25\textwidth]{dgraph3.pdf}\label{fig:dgraph3}} \caption{Belief networks representing, respectively: (a) a generative model, (b) a classical discriminative model, and (c) a composite likelihood model. Note the marginal independence between the data~$y$ and the nuisance parameter~$\psi$ in (b).} \label{fig:dgraph} \end{center} \end{figure} The other approach to dealing with feature distribution parameters is to consider them as nuisance parameters, as proposed in Section~\ref{sec:nuisance_params}, thereby avoiding pre-training. The method then becomes completely unsupervised. The weights can be tuned by maximizing the log-SCL integrated over the prior on nuisance parameters (see Section~\ref{sec:nuisance_weights}), which requires no training dataset whatsoever, and the nuisance parameters can be eliminated by substituting the~SCL function with a {\em super composite evidence} (see Section~\ref{sec:nuisance_integration}). In this version, SCL~essentially generalizes Bayesian integration. The potential of SCL for unsupervised learning is perhaps surprising considering that it is a discriminative model.
This stems from the fact that, unlike classical discriminative models, SCL~exploits direct information conveyed by the data about both the parameters of interest and the nuisance parameters, reflecting the generative nature of the underlying feature distribution models. This essential difference is illustrated in Figure~\ref{fig:dgraph}: the factor graph model underlying composite likelihood in its parametric version (see Figure~\ref{fig:fgraph5}) is not reducible to a belief network of the type represented in Figure~\ref{fig:dgraph2}, in which the data and the nuisance parameter are marginally independent. In contrast, the data is informative about the nuisance parameter in a composite likelihood model, as shown in Figure~\ref{fig:dgraph3}. \section{Conclusion} \label{sec:conclusion} In summary, (super) composite likelihood has the potential to yield weakly supervised or unsupervised Bayesian-like inference procedures depending on the particular task at hand. This property reflects the encoding of statistical relationships between the data and {\em all} unknown parameters. Composite likelihood thus appears as a trade-off between generative models, which are optimal for unsupervised learning but possibly intractable, and traditional discriminative models (logistic regression, Gaussian processes \cite{Rasmussen-06}, maximum entropy models \cite{BergerA-96}, etc.), which are inherently supervised. Composite likelihood models are discriminative models assembled from atomic generative models and, from this point of view, may be considered as {\em semi-generative} models.
2003.10180
\section{Introduction} \IEEEPARstart{T}{he} emerging paradigm of massive machine-type communications (mMTC) is identified as an indispensable component for enabling the massive access of machine-type devices (MTDs) in the emerging Internet-of-Things (IoT) \cite{overview1}. In stark contrast to conventional human-centric mobile communications, mMTC focuses on uplink-oriented communications serving massive numbers of MTDs and exhibits sporadic tele-traffic requiring low-latency and high-reliability massive access \cite{overview1}. The conventional grant-based access approach relies on complex time and frequency-domain resource allocation before data transmission, which would impose prohibitive signaling overhead and latency on mMTC \cite{overview1}. To support low-power MTDs at low latency, the emerging grant-free approach has attracted significant attention for massive access, since it simplifies the access procedure by directly delivering data without scheduling \cite{BWang1,YangDU1,Profshim2,BWang2,YangDU2,TWOLEVEL}. Specifically, by exploiting the block-sparsity of mMTC, the authors of \cite{BWang1} and \cite{YangDU1} proposed compressive sensing (CS) solutions for joint active device and data detection, while a maximum {\it a posteriori} probability based scheme was proposed in \cite{Profshim2} for improving performance. Furthermore, MTDs having slow-varying activity tend to exhibit partial block sparsity, hence a modified orthogonal matching pursuit solution was conceived in \cite{BWang2}, while a modified subspace pursuit algorithm was proposed in \cite{YangDU2}. It was shown that previously detected results can be exploited for enhancing the subsequent detection. However, the contributions \cite{BWang1,YangDU1,Profshim2,BWang2,YangDU2} only consider single-antenna configurations at both the MTDs and the BS.
To achieve higher efficiency and more reliable detection, multiple antennas using spatial modulation (SM) at the MTDs and massive multi-input multi-output (mMIMO) at the BS were considered in \cite{Gao,TWOLEVEL}, where a two-level sparse structure based CS (TLSSCS) detector and a structured CS detector were proposed in \cite{TWOLEVEL} and \cite{Gao}, respectively. However, increasing the data rate of SM by one bit requires doubling the number of antennas \cite{SM1,SM2}, which violates the low-cost requirement of MTDs. To improve the uplink (UL) throughput at a low cost and power-consumption, the authors of \cite{MBMMUD1,MBMMUD2} proposed to employ media modulation at the MTDs, where an iterative interference cancellation detector and a CS detector were employed for multi-user detection in \cite{MBMMUD1} and \cite{MBMMUD2}, respectively. However, they did not consider active user detection (AUD). {\color{black} To sum up, we provide a brief comparison of the related literature in Table I.} Against this background, we propose to adopt media modulation at the MTDs for improving the UL throughput and to employ a mMIMO scheme at the BS. Moreover, a CS-based active device and data detection solution is proposed by exploiting both the sporadic traffic and the block-sparsity of mMTC as well as the structured sparsity of media modulated symbols. Specifically, we first propose a structured orthogonal matching pursuit (StrOMP) algorithm for AUD, where the block-sparsity of UL access signals across the successive time slots and the structured sparsity of media modulated symbols are exploited. Additionally, a successive interference cancellation based structured subspace pursuit (SIC-SSP) algorithm is proposed for demodulating the data of the detected active MTDs, where the structured sparsity of media modulated symbols in each time slot is exploited for enhancing the decoding performance.
{\color{black} Note that the proposed CS-based StrOMP and SIC-SSP algorithms belong to the family of greedy algorithms. Due to their computational benefit and near-optimal performance, greedy algorithms have been widely used in mMTC scenarios \cite{Profshim2,BWang1,YangDU1,BWang2,YangDU2,TWOLEVEL,CSreview}.} Finally, our simulation results verify the superiority of the proposed scheme over cutting-edge benchmarks. {\textit {Notation}: Boldface lower and upper-case symbols denote column vectors and matrices, respectively. For a matrix ${\bf A}$, ${\bf A}^T$, ${\bf A}^H$, ${\bf A}^\dagger$, ${\left\| {\bf{A}} \right\|_F}$, and ${\bf{A}}_{[m,n]}$ denote the transpose, Hermitian transpose, pseudo-inverse, Frobenius norm, and the element in the $m$-th row and $n$-th column of ${\bf{A}}$, respectively. ${\bf{A}}_{[\Omega,:]}$ (${\bf{A}}_{[:,\Omega]}$) is the sub-matrix containing the rows (columns) of ${\bf{A}}$ indexed in the ordered set $\Omega$. ${\bf{A}}_{[\Omega,m]}$ is the $m$-th column of ${\bf{A}}_{[\Omega,:]}$. For a vector ${\bf x}$, ${\left\| {\bf x} \right\|_p}$, $[{\bf x}]_{m}$, $[{\bf x}]_{m:n}$, and $[{\bf x}]_{\Omega}$ are the ${l_p}$ norm, $m$-th element, $m$-th to $n$-th elements, and entries indexed in the ordered set $\Omega$ of ${\bf x}$, respectively. For an ordered set $\Gamma$ and its subset $\Omega$, $|\Gamma|_c$, $\Gamma[m]$, and $\Gamma\setminus \Omega$ are the cardinality of $\Gamma$, the $m$-th element of $\Gamma$, and the complement of the subset $\Omega$ in $\Gamma$, respectively. $[K]$ is the set $\{1,2,...,K\}$.
\begin{table}[!t] \centering \captionsetup{font = {normalsize, color = {black}}, labelsep = period} \color{black}\caption*{Table I: A brief comparison of the related literature} \begin{tabular}{|c|c|c|c|c|c|c|} \Xhline{1.2pt} \multicolumn{2}{|c|}{\diagbox{Contents}{References}} & [2]-[6] & [7] & [8] & [11] & [12]\\% \Xhline{1.2pt} \multirow{2}*{BS} &Single antenna &\checkmark& & & & \\ \cline{2-7} &mMIMO & & \checkmark& \checkmark & \checkmark &\checkmark\\ \Xhline{1.2pt} \multirow{3}*{MTDs} &Single antenna&\checkmark& & & &\\ \cline{2-7} &SM & & \checkmark& \checkmark & &\\ \cline{2-7} &Media modulation & & & & \checkmark& \checkmark\\ \Xhline{1.2pt} \multicolumn{2}{|c|}{AUD}&\checkmark & \checkmark& & &\\ \Xhline{1.2pt} \multicolumn{2}{|c|}{Data detection}&\checkmark & \checkmark& \checkmark & \checkmark &\checkmark\\ \Xhline{1.2pt} \end{tabular} \end{table} \vspace{-2mm} \section{System Model} We first introduce the proposed media modulation based mMTC scheme and then focus on our massive access technique relying on joint active device and data detection at the BS. \vspace{-3.5mm} \subsection{Proposed Media Modulation Based mMTC Scheme} As illustrated in Fig. \ref{fig:MTC-MBM-joint}, we propose that all $K$ MTDs adopt media modulation for enhanced UL throughput and the BS employs mMIMO using $N_r$ receive antenna elements for reliable massive access. In the UL, each symbol consists of a conventional modulated symbol and a media modulated symbol, and each MTD relies on a single conventional antenna and $M_r$ extra radio frequency (RF) mirrors \cite{MBMMUD1,MBMMUD2,MBM1,MBM2,MBM3}. By adjusting the binary on/off status of the $M_r$ RF mirrors, we have $N_t=2^{M_r}$ mirror activation patterns (MAPs), and the media modulated symbol is obtained by mapping $\log_2(N_t)=M_r$ bits to one of the $N_t$ MAPs.
Therefore, if the conventional $M$-QAM symbol is adopted, the overall UL throughput of an MTD is $\eta = M_r + \log_2 M$ bits per channel use (bpcu). {\color{black}By contrast, to convey the extra bits, the SM technique uses an RF chain and multiple transmit antennas, selecting one of the transmit antennas for UL transmission \cite{SM1,SM2}. Hence, to achieve the same extra throughput, media modulation requires a single UL transmit antenna and a linearly increasing number of RF mirrors, but SM requires an exponentially increasing number of antennas \cite{MBMMUD1,MBMMUD2,SM1,SM2,MBM1,MBM2,MBM3}. Clearly, media modulation is more attractive for mMTC owing to its increased UL throughput at a negligible power consumption and hardware cost \cite{MBM1,MBM2,MBM3}.} Moreover, employing an mMIMO receiver at the BS is particularly compelling. By leveraging the substantial diversity gain gleaned from hundreds of antennas, the mMIMO BS is expected to achieve high-reliability UL multi-user detection in the context of mMTC. By integrating the complementary benefits of media modulation at the MTDs and mMIMO reception at the BS into mMTC, we arrive at an attractive massive access solution. \vspace{-3.5mm} \subsection{Massive Access of the Proposed mMTC Scheme} As shown in Fig. \ref{fig:MTC-MBM-joint}, we assume that the activity patterns of the $K$ MTDs remain unchanged in a frame, which consists of $J$ successive time slots. Hence we only focus our attention on the massive access for a given frame.
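As an illustration of the mapping just described, the following Python sketch (ours, not the authors' code; the helper name is hypothetical) forms one media modulated UL symbol and its throughput for $M_r=2$ mirrors and 4-QAM:

```python
import numpy as np

# Illustrative sketch: one UL symbol of a media modulation MTD with
# M_r RF mirrors and conventional M-QAM.
M_r = 2                      # number of RF mirrors
N_t = 2 ** M_r               # number of mirror activation patterns (MAPs)
M = 4                        # conventional QAM order
eta = M_r + int(np.log2(M))  # overall throughput in bpcu

def media_modulate(map_bits, qam_symbol, active=True):
    """Map log2(N_t) bits to a one-hot MAP vector d and scale by the
    conventional symbol g; returns x = a * g * d (cf. the system model)."""
    idx = int(''.join(str(b) for b in map_bits), 2)  # bits -> MAP index
    d = np.zeros(N_t, dtype=complex)
    d[idx] = 1.0                                     # ||d||_0 = ||d||_2 = 1
    a = 1.0 if active else 0.0                       # activity indicator
    return a * qam_symbol * d

g = (1 + 1j) / np.sqrt(2)        # a unit-energy 4-QAM symbol
x = media_modulate([1, 0], g)    # MAP index 2 carries the extra 2 bits
```

With $M_r=2$ and $M=4$ this reproduces the paper's operating point of $\eta=4$ bpcu.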
Specifically, the signal received at the BS in the $j$-th $(\forall j\in [J])$ time slot, denoted by ${\bf{y}}^j\in\mathbb{C}^{N_r \times 1}$, can be expressed as \begin{equation}\label{eq:system} \begin{array}{l} {\bf y}^j=\sum\limits_{k = 1}^K a_k g_k^j {\bf H}_k {\bf d}_k^j+{\bf w}^j=\sum\limits_{k = 1}^K {\bf H}_k {\bf x}_k^j+{\bf w}^j ={\bf H} \widetilde{{\bf x}}^j + {\bf w}^j, \end{array} \end{equation} where the activity indicator $a_k$ is set to one (zero) if the $k$-th MTD is active (inactive), while ${g_k^{j}}\in\mathbb{C}$, ${\bf d}_k^j\in\mathbb{C}^{N_t \times 1}$, and ${\bf x}_k^j=a_k g_k^j {\bf d}_k^j \in\mathbb{C}^{N_t \times 1}$ are the conventional modulated symbol, media modulated symbol, and equivalent UL access symbol of the $k$-th MTD in the $j$-th time slot, respectively. Furthermore, ${\bf H}_k\in\mathbb{C}^{N_r\times N_t}$ is the multi-input multi-output (MIMO) channel matrix associated with the $k$-th MTD, ${\bf w}^j \in \mathbb{C}^{N_r \times 1}$ is the noise with elements obeying the independent and identically distributed (i.i.d.) complex Gaussian distribution ${\cal C}{\cal N}( {0,\sigma_w^2} )$, while ${{\bf{ H}}} = [{\bf H} _1, {\bf H} _2,...,{\bf H} _K] \in \mathbb{C}^{N_r \times (K N_t)}$ and $\widetilde{\bf x}^j = [({\bf x}^j _1)^T,({\bf x}^j _2)^T,...,({\bf x}^j_K)^T]^T \in \mathbb{C}^{(K N_t) \times 1}$ are the aggregate MIMO channel matrix and UL access signal in the $j$-th time slot, respectively. 
\begin{figure} \centering \includegraphics[width=8.7cm,height=5.5cm, keepaspectratio]{SystemModel.eps} \captionsetup{font={footnotesize}, name={Fig.},labelsep=period} \caption{Proposed media modulation based mMTC scheme, where the UL access signal exhibits block-sparsity in a frame and structured sparsity in each time slot.} \label{fig:MTC-MBM-joint} \vspace*{-5mm} \end{figure} Note that for any ${\bf d}_k^j$ with $\forall j\in[J]$ and $\forall k\in[K]$, only one of its entries is one and the others are all zeros, i.e., \begin{equation}\label{eq:d} \begin{array}{l} {\rm supp}\{{\bf d}_k^j\}\in [N_t],~~\parallel\!{{\bf d}_k^j}\!\parallel_0=1,~~\parallel\!{{\bf d}_k^j}\!\parallel_2=1, \end{array} \end{equation} where ${\rm supp\{\cdot\}}$ is the support set of its argument. Furthermore, we consider the Rayleigh MIMO channel model, hence the elements in ${\bf H}_k$ for $\forall k\!\in\![K]$ follow the i.i.d. complex Gaussian distribution ${\cal C}{\cal N}(0,1)$. We assume that the channels remain time-invariant for a relatively long period in typical IoT scenarios, hence $\{{\bf H}_k\}_{k=1}^K$ can be accurately estimated at the BS via periodic updates. \section{Proposed CS-Based Massive Access Solution} In typical IoT scenarios, the MTDs generate sporadic tele-traffic \cite{Profshim2,BWang1,YangDU1,BWang2,YangDU2,TWOLEVEL}, which indicates that ${\bf a}\!\!=\!\![a_1,a_2,...,a_K]^T\!\in\!\mathbb{C}^{K\!\times\!1}$ is a sparse vector and $K_a\!\!\!=\parallel\!\!{\bf a}\!\!\parallel_0\ll\!\!\!K$. Moreover, this activity pattern exhibits the block-sparsity, since ${\bf a}$ typically remains unchanged in $J$ successive time slots within a frame \cite{BWang1,YangDU1,Profshim2,TWOLEVEL}. Furthermore, ${\bf x}_k^j\!=\!a_k g_k^j {\bf d}_k^j$ for $\forall j\!\!\in\!\![J]$ exhibits the structured sparsity \cite{MBMMUD1,MBMMUD2}, owing to the sparse nature of the media modulated symbols, as illustrated in (\ref{eq:d}).
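The block-sparsity and structured sparsity just described can be made concrete with a small forward-model sketch (our illustration, using the parameter values quoted in the simulation section):

```python
import numpy as np

# Sketch of the UL model y^j = H x^j + w^j: K MTDs, K_a active,
# i.i.d. Rayleigh channels, one active MAP per active MTD per slot.
rng = np.random.default_rng(0)
K, K_a, N_t, N_r, J = 100, 8, 4, 50, 12

a = np.zeros(K)
a[rng.choice(K, K_a, replace=False)] = 1          # sparse activity pattern
H = (rng.standard_normal((N_r, K * N_t)) +
     1j * rng.standard_normal((N_r, K * N_t))) / np.sqrt(2)  # CN(0,1) entries

qam = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # 4-QAM
X = np.zeros((K * N_t, J), dtype=complex)
for j in range(J):                       # activity fixed over the frame
    for k in np.flatnonzero(a):          # -> block-sparsity of X
        g = rng.choice(qam)              # conventional symbol g_k^j
        d = rng.integers(N_t)            # MAP index: one nonzero per block
        X[k * N_t + d, j] = g            # -> structured sparsity of x_k^j

sigma_w = 0.1
W = sigma_w * (rng.standard_normal((N_r, J)) +
               1j * rng.standard_normal((N_r, J))) / np.sqrt(2)
Y = H @ X + W                            # received frame
```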
The block-sparsity and structured sparsity of the UL signals inspire us to invoke CS theory to detect the active devices and demodulate the data at the BS. To exploit the block-sparsity of active MTD patterns, we first rewrite the received signals within a frame as \vspace{-1mm} \begin{equation}\label{eq:systemModel} \begin{array}{l} \bf{Y}=\bf{ H}\bf{X}+\bf{W}, \end{array} \end{equation} where we have ${\bf Y}\!=\![{\bf y}^1, {\bf y}^2, ..., {\bf y}^J]\in\mathbb{C}^{N_r \times J}$, ${{\bf H}}\in\mathbb{C}^{N_r \times (K N_t)}$, ${\bf X}\!=\![\widetilde{\bf x}^1, \widetilde{\bf x}^2, ..., \widetilde{\bf x}^J]\in\mathbb{C}^{(K N_t) \times J}$, and ${\bf W}=[{\bf w}^1, {\bf w}^2, ..., {\bf w}^J]\in\mathbb{C}^{N_r \times J}$. Thus the massive access problem can be formulated as the following optimization problem \begin{equation}\label{eq:OPTproblem} \begin{split} &\min\nolimits_{\bf X} \parallel {\bf Y}-{\bf HX}\parallel_F^2=\min\nolimits_{\{{\bf \widetilde x}^{j}\}_{j=1}^{J}}\sum\nolimits_{j=1}^J\parallel {\bf y}^j-{\bf H}{\bf \widetilde x}^j\parallel_2^2\\ &=\min\nolimits_{\{a_k,{\bf d}_k^j,g_k^j\}_{j=1,k=1}^{J,K}} \,\, \sum\nolimits_{j=1}^J\parallel {\bf y}^j-\sum\nolimits_{k = 1}^K a_k g_k^j {\bf H}_k{\bf d}_k^j\parallel_2^2\\ &~{\rm s.t.}~~ (2)~~{\rm and}~~\parallel\!\!{\bf a}\!\!\parallel_0\ll K. \end{split} \end{equation} \SetAlFnt{\scriptsize} \SetAlCapFnt{\normalsize} \SetAlCapNameFnt{\normalsize} \begin{algorithm}[tp!] \caption{{\color{black}Proposed StrOMP Algorithm}} \label{Algorithm:1} \KwIn{${\bf Y}\in\mathbb{C}^{N_r \times J}$, ${\bf H}\in\mathbb{C}^{N_r \times (K N_t)}$, and threshold $P_{\rm th}$.} \KwOut{The index set of estimated active MTDs ${\Gamma}\subseteq [K]$, $\widehat{K_a}=|\Gamma|_c$.} {\bf Initialization}: The iterative index $i$=1, the residual matrix ${\bf R}^{(0)}\!\!=\!\!{\bf Y}$, $\Gamma^{(0)}\!\!=\!\!\emptyset$. We define ${\bf m}\!\!\in\!\!\mathbb{C}^{K\!\times\!1}$ as an intermediate block correlation variable. 
For possible active MTDs given their temporary index set $\Lambda$, their MAP's index set is $\widetilde\Lambda\!\!=\!\!\{\widetilde \Lambda_n\}_{n=1}^{|\Lambda|_c}$, where $\widetilde\Lambda_n\!\!=\!\!\{N_t(\Lambda[n]\!\!-\!\!1)\!+\!u\}_{u=1}^{N_t}$ is the MAP's index set of the $n$-th MTD in $\Lambda$ for $ n\in[|\Lambda|_c]$\; \While{$1$}{ $[{\bf m}]_k=\sum\nolimits_{l=(k-1)N_t+1}^{kN_t}\sum\nolimits_{j=1}^J|({\bf H}_{[:,l]})^H{\bf R}_{[:,j]}^{(i-1)}|^2,~{\rm for}~k\in [K]$\; \label{block sparsity} $k^{\star}={\rm arg\mathop{max}\nolimits}_{\widehat{k}\in[K]}{[{\bf m}]_{\widehat k}}$\; \label{max} $\Lambda=\Gamma^{(i-1)}\cup k^{\star}$;~~~\{Possible support estimate\}\\ \label{support estimate} ${\bf B}_{[\widetilde\Lambda,:]}\!=\!({\bf H}_{[:,\widetilde\Lambda]})^\dagger{\bf Y},{\bf B}_{[[KN_t]\setminus \widetilde\Lambda, :]}=0$;\{Coarse signal estimate via LS\}\\ \label{LS1} \color{black}{ $\eta^{\star}_{n,j}={\rm arg\mathop{max}\nolimits}_{\widehat{\eta}_{{n,j}}\in\widetilde\Lambda_{n}}|{\bf B}_{[{\widehat{\eta}_{n,j}},j]}|^2,~{\rm for}~n\in[|\Lambda|_c],~j\in [J]$\; \label{structure1} $\Omega^{(j)}=\{\eta^{\star}_{n,j}\}_{n=1}^{|\Lambda|_c},~{\rm for}~j\in [J]$\; ${\bf A}_{[\Omega^{(j)},j]}\!\!=\!\!({\bf H}_{[ :,\Omega^{(j)}]})^\dagger{\bf Y}_{[:,j]},{\bf A}_{[[KN_t]\!\setminus\!{\Omega^{(j)}}, j]}=0,~{\rm for}~j\!\!\in\!\! 
[J]$;~~\{Fine signal estimate via LS\}\\ \label{LS2} ${\bf R}^{(i)}={\bf Y}-{\bf H}{\bf A}$;~~~\{Residue Update\}\\ \label{Residual Update} \uIf {$\left\|{\bf R}^{(i-1)}\right\|_F - \left\|{\bf R}^{(i)}\right\|_F < P_{\rm th}$}{ \label{stop cretiria} {\bf break};~~~\{Terminates the while-loop\}\\ }\Else{ $\Gamma^{(i)}=\Lambda$;~~~\{Support estimate update\}\\ \label{supportupdate} $i=i+1$\; \label{iterationupdate} } } } \KwResult{$\Gamma=\Gamma^{(i-1)}$ , ~$\widehat{K_a}=|\Gamma|_c$.} \label{Algorithm end 1} \end{algorithm} In the following subsections, we will first utilize the proposed StrOMP algorithm to determine the indices of active devices. On that basis, the associated data is further detected based on the proposed SIC-SSP algorithm. {\color{black} Finally, we will discuss the computational complexity of the proposed algorithms.} \SetAlFnt{\scriptsize} \SetAlCapFnt{\normalsize} \SetAlCapNameFnt{\normalsize} \begin{algorithm}[tp!] \caption{Proposed SIC-SSP Algorithm} \label{Algorithm:2} \KwIn{${\bf Y}\!=\![{\bf y}^1, {\bf y}^2, ..., {\bf y}^J]\in\mathbb{C}^{N_r \times J}$, ${\bf H}\in\mathbb{C}^{N_r \times (K N_t)}$, and the output of Algorithm 1: $\Gamma$, $\widehat{K_a}$.} \KwOut{Reconstructed UL access signal ${\bf X}\!=\![\widetilde{\bf x}^1, \widetilde{\bf x}^2, ...,\widetilde{\bf x}^J]$.} \For{$j={1}:J$}{ \label{outerloop} \For{$s={1}:\widehat{K_a}$}{ \label{innerloop} \If {$s=1$}{ ${\bf v}={\bf y}^j$,~$\Lambda=\Gamma$, where ${\bf v}$ is the measurement vector and $\Lambda$ is the remaining set of MTDs to be decoded, and the definitions of $\widetilde{\Lambda}$ and $\widetilde{\Lambda}_n$ are the same as those in Algorithm 1;\{Initialization\}\\ } $i=1,$~$\Psi^{(0)}=\emptyset$,~${\bf r}^{(0)}={\bf v}$;~~~\{Initialization\}\\ \While{$1$}{ \label{while} $[{\bf p}]_{\widetilde\Lambda}=({\bf H}_{[:,\widetilde\Lambda]})^H{\bf r}^{(i-1)}$, $[{\bf p}]_{[KN_t]\setminus \widetilde\Lambda}=0$;~~~\{Correlation\}\\ $\tau^{\star}_n={\rm 
arg\mathop{max}\nolimits}_{\widehat{\tau}_n\in \widetilde{\Lambda}_n}|[{\bf p}]_{\widehat{\tau}_n}|^2,~{\rm for}~n\in [|\Lambda|_c]$\; \label{structuredSpar1} $\Omega\!=\!\{\tau^{\star}_n+(\Lambda[n]-1)N_t\}_{n=1}^{|\Lambda|_c}$;\{$|\Lambda|_c$ most likely MAPs\}\\ $\Omega'=\Omega\cup\Psi^{(i-1)}$;\{Preliminary support estimate\}\\ $[{\bf e}]_{\Omega'}\!=\!({\bf H}_{[:,\Omega']})^\dagger{\bf r}^{(0)},[{\bf e}]_{[KN_t]\setminus \Omega'}=0$;\{Coarse LS\}\\ $\eta^{\star}_n={\rm arg\mathop{max}\nolimits}_{\widehat{\eta}_n\in \widetilde{\Lambda}_n}|[{\bf e}]_{\widehat{\eta}_n}|^2,~{\rm for}~n\in [|\Lambda|_c]$\; \label{structuredSpar2} $\Psi^{(i)}=\{\eta^{\star}_n+(\Lambda[n]-1)N_t\}_{n=1}^{|\Lambda|_c}$;~~~\{Pruning support set\}\\ $[{\bf e}]_{\Psi^{(i)}}\!=\!({\bf H}_{[:,\Psi^{(i)}]})^\dagger{\bf r}^{(0)}\!,\![{\bf e}]_{[KN_t]\setminus \Psi^{(i)}}\!=\!0$;\{Fine LS\}\\ \label{estimation} ${\bf r}^{(i)}={\bf r}^{(0)}-{\bf H}{\bf e}$;\{Residue Update\}\\ \If {$i\geq \widehat{K_a}$~~{\rm or}~~$\Psi^{(i)}=\Psi^{(i-1)}$}{ \label{startSIC} $\Psi=\Psi^{(i)}$, $n^\star={\rm arg\mathop{max}\nolimits}_{\widehat n\in [|\Lambda|_c]}|[{\bf e}]_{\Psi[\widehat{n}]}|^2$;\\ \label{SIC1} ${\bf v}\!=\!{\bf v}\!-\!{\bf H}_{[:,\Psi[n^\star]]}[{\bf e}]_{\Psi[n^\star]}$;\{Measurement vector update\}\\ \label{SIC2} $[\widetilde{\bf x}^j]_{\Psi[n^\star]}=[{\bf e}]_{\Psi[n^\star]}$,~~~$\Lambda=\Lambda\setminus\{\Lambda[n^\star]\}$;\\ \label{SIC3} ${\bf break}$;~~~\{Terminates the while-loop\}\\ \label{break} } \label{endSIC} $i=i+1$\; }\label{endwhile} }\label{innnerloopend} }\label{outterloopend} \KwResult{${\bf X}\!=\![\widetilde{\bf x}^1, \widetilde{\bf x}^2, ...,\widetilde{\bf x}^J]$.} \label{Algorithm end 2} \end{algorithm} \vspace{-3mm} {\color{black} \subsection{ Proposed StrOMP Algorithm for AUD} } {\color{black}The proposed StrOMP algorithm listed in Algorithm \ref{Algorithm:1}, is developed from the orthogonal matching pursuit (OMP) algorithm of \cite{OMP}. 
Specifically, line \ref{block sparsity} calculates the sum correlation ${\bf m}$ associated with all $N_t$ MAPs in $J$ time slots for each MTD; line \ref{support estimate} combines $k^{\star}$ (i.e., the most likely active MTD) with $\Gamma^{(i-1)}$ to update the possible support set $\Lambda$; in line \ref{LS1}, the coarse signal estimate is obtained by the least squares (LS) algorithm; lines \ref{structure1}$\sim$\ref{LS2} exploit the structured sparsity of media modulated symbols to estimate the possible MAPs based on the coarsely estimated signal ${\bf B}$, and then the fine signal estimate is obtained in line \ref{LS2} for improved robustness to noise; line \ref{Residual Update} updates the residual by using the finely estimated signal ${\bf A}$. In line \ref{stop cretiria}, if the energy difference of the residual in adjacent iterations, $\left\|{\bf R}^{(i-1)}\right\|_F - \left\|{\bf R}^{(i)}\right\|_F$, falls below a predefined threshold, the loop stops; otherwise, the iteration continues.} {\color{black}The classical OMP algorithm requires the sparsity level $K_a$, whereas the proposed StrOMP algorithm adaptively acquires the number of active MTDs without knowing $K_a$.
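The steps just described can be condensed into a compact sketch (ours, not the authors' code); for brevity it replaces the MAP-pruning refinement of lines 6--8 by a plain block-LS fit, but keeps the block correlation and the adaptive residual-based stopping rule:

```python
import numpy as np

# Simplified paraphrase of the StrOMP idea: per-MTD block correlation,
# greedy support growth, LS refit, and adaptive stopping without K_a.
def stromp_aud(Y, H, N_t, P_th):
    K = H.shape[1] // N_t
    R, Gamma = Y.copy(), []
    while True:
        m = np.abs(H.conj().T @ R) ** 2              # correlations (line 2)
        m = m.reshape(K, N_t, -1).sum(axis=(1, 2))   # per-MTD block energy
        k_star = int(np.argmax(m))                   # most likely MTD (line 3)
        Lam = sorted(set(Gamma) | {k_star})          # candidate support (line 4)
        cols = [k * N_t + u for k in Lam for u in range(N_t)]
        B = np.linalg.pinv(H[:, cols]) @ Y           # LS estimate (line 5)
        R_new = Y - H[:, cols] @ B                   # residual update (line 9)
        if np.linalg.norm(R, 'fro') - np.linalg.norm(R_new, 'fro') < P_th:
            return Gamma                             # adaptive stop (line 10)
        Gamma, R = Lam, R_new

# Toy noiseless instance: 3 of 20 MTDs active, one unit symbol per slot.
rng = np.random.default_rng(1)
K, K_a, N_t, N_r, J = 20, 3, 4, 32, 8
H = (rng.standard_normal((N_r, K * N_t)) +
     1j * rng.standard_normal((N_r, K * N_t))) / np.sqrt(2)
active = sorted(rng.choice(K, K_a, replace=False).tolist())
X = np.zeros((K * N_t, J), dtype=complex)
for j in range(J):
    for k in active:
        X[k * N_t + rng.integers(N_t), j] = 1.0
est = stromp_aud(H @ X, H, N_t, P_th=1.0)
```

In this noiseless toy case the residual improvement collapses once all active blocks are included, so the loop stops and returns the true support without knowing $K_a$.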
Compared to the OMP algorithm, the proposed StrOMP achieves an improved detection performance by exploiting the block-sparsity (line \ref{block sparsity}) and the structured sparsity (lines \ref{structure1}$\sim$\ref{LS2}) of the UL access signals.} \begin{figure*}[!t] \centering \subfigure{ \begin{minipage}[t]{0.5\linewidth} \centering \label{fig:PeSNR} \includegraphics[width=3.2in]{PeSNRnew3.eps}\\ \vspace{0.02cm} \end{minipage}} \subfigure{ \begin{minipage}[t]{0.5\linewidth} \centering \label{fig:BERSNR} \includegraphics[width=3.2in]{BERSNRnew3.eps}\\ \vspace{0.02cm} \end{minipage}} \centering \setlength{\abovecaptionskip}{-1.8mm} \captionsetup{font={footnotesize, color = {black}}, name={Fig.},labelsep=period} \caption{Performance comparison of different solutions versus the SNR ($N_r=50$, $J=12$): (a) AUD performance; (b) BER performance.} \label{fig:PerforSNR} \end{figure*} \subsection{Proposed SIC-SSP Algorithm for Data Detection} Based on the estimated active MTDs $\Gamma$ obtained from Algorithm 1, the data detection problem in formula (\ref{eq:OPTproblem}) reduces to the same CS problem as in \cite{Gao} (i.e., Eq. (10) for $J=1$ in \cite{Gao}), which can be solved by the group subspace pursuit (GSP) algorithm of \cite{Gao}. To further improve the performance, the proposed SIC-SSP algorithm, as listed in Algorithm \ref{Algorithm:2}, intrinsically integrates the idea of successive interference cancellation (SIC) with the GSP algorithm. Specifically, the outer for-loop recovers $\{\widetilde{\bf x}^j\}_{j=1}^{J}$ separately. For each $\widetilde{\bf x}^j$ with $j\!\!\in\!\![J]$, the inner for-loop recovers a structured sparse signal with $\widehat{K_a}$ sparsity by performing ($\widehat{K_a}-1$) SIC operations. In contrast to the existing GSP algorithm, the inner for-loop of the proposed algorithm incorporates the SIC operation (lines \ref{startSIC}$\sim$\ref{endSIC}).
Specifically, line \ref{SIC1} selects the index of the maximum element of the finely estimated signal ${\bf e}$ and subsequently line \ref{SIC2} eliminates it from the measurement vector ${\bf v}$; line \ref{SIC3} records the maximum element in $\widetilde{\bf x}^j$ ($j\!\!\in\!\![J]$) and reduces the size of the remaining set of active MTDs $\Lambda$ by 1, which corresponds to reducing the column dimension of the channel matrix in the next iteration for improving the data detection performance. Moreover, lines \ref{structuredSpar1} and \ref{structuredSpar2} improve the performance by exploiting the signal's structured sparsity. Finally, the algorithm is terminated when ${\bf X}$ is fully reconstructed. \vspace{-3mm} \color{black}\subsection{Computational Complexity} \begin{enumerate}[] \item {The computational complexity of the proposed StrOMP algorithm (Algorithm 1) in the $i$-th iteration mainly depends on the following operations. {\bf Signal correlation} (line \ref{block sparsity}): The matrix multiplication involved has the complexity on the order of $\mathcal{O}(JKN_tN_r)$. {\bf Coarse signal estimate via LS} (line \ref{LS1}): Coarse LS solution has the computational complexity on the order of $\mathcal{O}(J(2N_r(iN_t)^2+(iN_t)^3))$. {\bf Fine signal estimate via LS} (line \ref{LS2}): Fine LS solution has the computational complexity on the order of $\mathcal{O}(J(2N_ri^2+i^3))$. {\bf Residue update} (line \ref{Residual Update}): Since signal ${\bf A}$ acquired in line \ref{LS2} is a sparse matrix, the complexity of computing the residual is $\mathcal{O}(JN_ri)$.} \item {The computational complexity of the proposed SIC-SSP algorithm (Algorithm 2) in the $s$-th ($1\leq s \leq \widehat{K_a}$) inner for-loop mainly depends on the following operations. {\bf Correlation} (line 8): The matrix multiplication involved has the complexity on the order of $\mathcal{O}((\widehat{K_a}-s+1)N_tN_r)$. 
{\bf Coarse LS} (line 12): Coarse LS solution has the computational complexity on the order of $\mathcal{O}(2N_r(2(\widehat{K_a}-s+1))^2+(2(\widehat{K_a}-s+1))^3)$. {\bf Fine LS} (line 15): Fine LS solution has the computational complexity on the order of $\mathcal{O}(2N_r(\widehat{K_a}-s+1)^2+(\widehat{K_a}-s+1)^3)$. {\bf Residue update} (line 16): The complexity of computing the residual is $\mathcal{O}((\widehat{K_a}-s+1)N_r)$.} \end{enumerate} \begin{figure*}[!t] \centering \subfigure{ \begin{minipage}[t]{0.5\linewidth} \centering \label{fig:PeT} \includegraphics[width=3.2in]{PeTnew3.eps}\\ \vspace{0.02cm} \end{minipage}} \subfigure{ \begin{minipage}[t]{0.5\linewidth} \centering \label{fig:BERT} \includegraphics[width=3.2in]{BERTnew3.eps}\\ \vspace{0.02cm} \end{minipage}} \centering \setlength{\abovecaptionskip}{-1.8mm} \captionsetup{font={footnotesize, color = {black}}, name={Fig.},labelsep=period} \caption{Performance comparison of different solutions versus the frame length $J$ (SNR~=~2 dB, $N_r=50$): (a) AUD performance; (b) BER performance.} \label{fig:PerforT} \end{figure*} \begin{figure*}[!t] \centering \subfigure{ \begin{minipage}[t]{0.5\linewidth} \centering \label{fig:PeNr} \includegraphics[width=3.2in]{PeNrnew3.eps}\\ \vspace{0.02cm} \end{minipage}} \subfigure{ \begin{minipage}[t]{0.5\linewidth} \centering \label{fig:BERNr} \includegraphics[width=3.2in]{BERNrnew3.eps}\\ \vspace{0.02cm} \end{minipage}} \centering \setlength{\abovecaptionskip}{-1.8mm} \captionsetup{font={footnotesize, color = {black}}, name={Fig.},labelsep=period} \caption{Performance comparison of different solutions versus the number of receive antennas $N_r$ (SNR~=~2 dB, $J=12$): (a) AUD performance; (b) BER performance.} \label{fig:PerforNr} \end{figure*} \color{black}\section{Simulation Results} Let us now evaluate {\color{black}the AUD error rate ($P_{\rm e}$) and} the bit error rate (BER) for the proposed CS-based massive access
solution. Here {\color{black}$P_{\rm e}=\frac{E_u+E_f}{K}$}, and ${\rm BER}\!=\!\frac{E_u J\eta+B_m+B_c}{K_a J\eta}$, where $E_u$ is the number of active MTDs missed by activity detection, {\color{black}$E_f$ is the number of falsely detected inactive MTDs}, $B_m$ and $B_c$ are the total number of error bits in the media modulated symbols and conventional symbols for detected active MTDs within a frame, respectively, and $K_a J\eta$ is the total number of bits transmitted by $K_a$ active MTDs within a frame. In our simulations, the total number of MTDs is $K=100$ with $K_a=8$ active MTDs. Furthermore, each media modulation based MTD adopts $M_r=2$ RF mirrors and 4-QAM ($M=4$), hence the overall throughput becomes $\eta=M_r+\log_2 M=4$ bpcu. {\color{black}Finally, $P_{\rm th}$ in the proposed StrOMP algorithm is set to 2, which is selected experimentally.} For comparison, we consider the following benchmarks. {\bf Benchmark 1}: Zero forcing multi-user detector for the traditional mMIMO UL \cite{Gao} with $K_a$ single-antenna users adopting 16-QAM to achieve the same 4 bpcu. {\bf TLSSCS}: The TLSSCS detector of \cite{TWOLEVEL}, with the scaling factor $\alpha=4$ (i.e., $\alpha$ in Eq. (6) of \cite{TWOLEVEL}). {\color{black}{\bf StrOMP+GSP}: The proposed StrOMP algorithm and the existing GSP algorithm of \cite{Gao} are successively used to detect the active MTDs and the data. {\bf AUD lower bound}: A modified StrOMP algorithm relying on the perfect knowledge of $K_a$, which performs the iterations including lines \ref{block sparsity}$\sim$\ref{Residual Update} and lines \ref{supportupdate}$\sim$\ref{iterationupdate} $K_a$ times, and the output estimated support set is $\Gamma^{(K_a)}$ containing $K_a$ elements.} {\bf BER lower bound}: The Oracle LS based detector relying on the perfectly known index set of active MTDs and the support set of media modulated symbols is considered as the BER lower bound of the proposed mMTC scheme. {\color{black} From Fig.
\ref{fig:BERSNR}, Fig. \ref{fig:BERT}, and Fig. \ref{fig:BERNr}, it is obvious that the BER performance of the proposed mMTC scheme outperforms that of the traditional mMIMO UL (Benchmark 1) for the same throughput when $P_{\rm e}$ is small enough, thanks to the extra bits introduced by media modulation. Note that the BER comparison with Benchmark 1 is actually unfair to the proposed scheme, since the latter benchmark is not subject to AUD errors. } {\color{black} Fig. \ref{fig:PeSNR} and Fig. \ref{fig:BERSNR} compare the AUD performance and BER performance versus the signal-to-noise ratio (SNR), respectively. It is clear that the AUD performance of the proposed StrOMP algorithm is better than that of the TLSSCS algorithm, and closer to the AUD lower bound. We find that our ``StrOMP+SIC-SSP'' solution outperforms the TLSSCS detector and the ``StrOMP+GSP'' solution in terms of BER, which demonstrates the efficiency of the proposed solution. Moreover, compared to the ``StrOMP+GSP'' solution, the BER of our ``StrOMP+SIC-SSP'' solution improves ever more markedly as the SNR increases, which confirms the efficiency of the SIC operation.
} \begin{table*}[!t] \centering \captionsetup{font = {normalsize, color = {black}}, labelsep = period} \color{black}\caption*{Table II: Computational complexity comparison of different algorithms} \begin{threeparttable} \begin{tabular}{|p{2cm}|p{3cm}|p{6cm}|p{2cm}|p{2cm}|} \Xhline{1.2pt} \multicolumn{2}{|c|}{\multirow{2}*{{\bf Algorithms}}} & \multirow{2}*{{\bf Computational complexity}}& \multicolumn{2}{|c|}{{\bf Complex-valued multiplications\tnote{1} ($10^6$)}} \\% \cline{4-5} \multicolumn{2}{|c|}{~} & ~& $N_r=50$ &$N_r=100$ \\% \Xhline{1.2pt} \multirow{3}*{AUD} ~&Proposed StrOMP &$\mathcal{O}((K_a+1)JKN_tN_r+\sum\nolimits_{s=1}^{K_a+1}[JN_r(s+2s^2+2(sN_t)^2)+J(s^3+(sN_t)^3)])$ & 9.6 & 17.6\\ \cline{2-5} ~&AUD part of TLSSCS [7] & $\mathcal{O}((K_a+1)[{N_r}^2(KN_t+J)+N_rJKN_t]+\sum\nolimits_{s=1}^{K_a+1}[{N_r}^2+2N_r(sN_t)^2+(sN_t)^3])$& 12.5 & 44.2\\ \cline{2-5} ~& AUD lower bound & $\mathcal{O}(K_aJKN_tN_r+\sum\nolimits_{s=1}^{K_a}[JN_r(s+2s^2+2(sN_t)^2)+J(s^3+(sN_t)^3)])$&7.1 & 13.2\\ \Xhline{1.2pt} \multirow{5}*{Data detection} ~&Proposed SIC-SSP&$\mathcal{O}(J\sum\nolimits_{s=1}^{K_a}[2sN_r(N_t+1)+14N_rs^2+11s^3])$&2.1 &4.0\\ \cline{2-5} ~&Data detection part of TLSSCS [7] &$\mathcal{O}(JN_rK_aN_t+2N_r(K_aN_t)^2+(K_aN_t)^3)$ &0.15 & 0.28\\ \cline{2-5} ~& GSP [8] & $\mathcal{O}(J[2sN_r(N_t+1)+14N_r{K_a}^2+11{K_a}^3])$ &0.65 &1.2\\ \cline{2-5} ~& BER lower bound & $\mathcal{O}(JN_rK_a+2N_r{K_a}^2+{K_a}^3)$ &0.01 &0.02\\ \cline{2-5} ~& Benchmark 1 & $\mathcal{O}(JN_rK_a+2N_r{K_a}^2+{K_a}^3)$&0.01&0.02 \\ \Xhline{1.2pt} \end{tabular} \begin{tablenotes} \footnotesize \item[1] The number of the complex-valued multiplications is calculated under the parameters $J=12$, $N_t=4$, $K=100$, $K_a=8$. \end{tablenotes} \end{threeparttable} \end{table*} \vspace{3mm} {\color{black} Fig. \ref{fig:PeT} and Fig. \ref{fig:BERT} compare the AUD performance and BER performance versus the frame length $J$, respectively. 
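The complexity expressions of Table II can be checked numerically; evaluating the StrOMP entry (our arithmetic check, not part of the paper) under the stated parameters $J=12$, $N_t=4$, $K=100$, $K_a=8$ reproduces the quoted operation counts:

```python
# Evaluating the StrOMP complexity expression of Table II:
# O((K_a+1) J K N_t N_r + sum_{s=1}^{K_a+1} [J N_r (s + 2s^2 + 2(sN_t)^2)
#                                            + J (s^3 + (sN_t)^3)]).
def stromp_complexity(K, K_a, N_t, N_r, J):
    total = (K_a + 1) * J * K * N_t * N_r           # correlations over iterations
    for s in range(1, K_a + 2):                      # per-iteration LS costs
        total += J * N_r * (s + 2 * s**2 + 2 * (s * N_t)**2)
        total += J * (s**3 + (s * N_t)**3)
    return total

ops_50 = stromp_complexity(100, 8, 4, 50, 12)        # ~9.6 x 10^6 (Table II)
ops_100 = stromp_complexity(100, 8, 4, 100, 12)      # ~17.6 x 10^6 (Table II)
```

Doubling $N_r$ roughly doubles the dominant terms, consistent with the linear-in-$N_r$ scaling argued in the text.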
Owing to the exploitation of the block sparsity, it can be seen that the AUD performance of the proposed StrOMP improves with the increase of $J$. Furthermore, in terms of AUD performance, the advantage of the proposed StrOMP algorithm over the TLSSCS algorithm becomes more obvious with the increase of $J$. We also find that, except for the Oracle LS (BER lower bound), the proposed ``StrOMP+SIC-SSP'' solution has the lowest BER floor for sufficiently large $J$. } {\color{black} Fig. \ref{fig:PeNr} and Fig. \ref{fig:BERNr} compare the AUD performance and BER performance versus the number of receive antennas $N_r$, respectively. Observe from Fig. \ref{fig:PerforNr} that when $N_r$ becomes large, both the AUD and BER performance of the proposed ``StrOMP+SIC-SSP'' solution are better than those of the TLSSCS detector and the ``StrOMP+GSP'' solution, which indicates the superiority of the proposed solution for mMIMO. } {\color{black} The computational complexity of the different solutions in our simulations is compared in Table II, where the algorithms are divided into two parts based on their functions (i.e., AUD or data detection). It is obvious that the number of complex-valued multiplications of the proposed StrOMP algorithm is slightly smaller than that of the AUD part of the TLSSCS algorithm (i.e., lines 1-14 of Algorithm 1 in \cite{TWOLEVEL}) when $N_r=50$. If $N_r$ is doubled, the number of complex-valued multiplications of the proposed StrOMP algorithm increases linearly with $N_r$, whereas the complexity of the AUD part of the TLSSCS algorithm is nearly proportional to the square of $N_r$. Hence, it is clear that our StrOMP algorithm is more suitable for mMIMO with large antenna arrays. Furthermore, after obtaining the active MTDs, the data detection part of the TLSSCS algorithm becomes an LS operation (i.e., line 15 of Algorithm 1 in \cite{TWOLEVEL}) with limited BER performance for the media modulated signal.
Hence, our proposed SIC-SSP algorithm sacrifices some computational complexity for a much better data detection performance. } \section{Conclusions} A media modulation based mMTC UL scheme relying on mMIMO detection at the BS was proposed for achieving reliable massive access with an enhanced throughput. The sparse nature of the mMTC traffic motivated us to propose a CS-based solution. First, the StrOMP algorithm was proposed to detect the active MTDs by exploiting the block-sparsity and structured sparsity of the UL signals, which improved the detection performance. Then, the SIC-SSP algorithm was proposed for detecting the data of the detected active MTDs by further exploiting the structured sparsity of the media modulated symbols. {\color{black}Furthermore, we analysed the computational complexity of the proposed algorithms.} Finally, our simulation results quantified the benefits of the proposed solution.
\section{Introduction} Pattern formation in many systems is governed by competing interactions~\cite{Vedmedenko2007}. Examples of such systems are: the pasta phase in neutron stars\cite{PhysRevC.69.045804}, ferrofluids\cite{PhysRevE.67.021402,PhysRevE.64.041506,Rosensweig1983127,Tsebers1980}, Langmuir monolayers\cite{Kaganer1999,Suresh1988}, magnetic garnet thin films, type-I superconductors~\cite{PhysRevLett.103.267002}, colloids and gels\cite{PhysRevLett.97.078301,Lu2008,PhysRevLett.94.208301,PhysRevLett.104.165702,PhysRevE.77.031608}, etc. In general, there is a strong correlation between the pattern formation and the inter-particle interaction. Attraction favors aggregation, while repulsion favors low local densities. This competition between repulsive and attractive interactions leads to a rich variety of phases, such as stripes, clusters, bubbles, etc.\cite{Seul27011995}. These phases have been observed in many diverse systems. In neutron stars, the competition between the short-range nuclear attraction and the long-range Coulomb repulsion leads to complex pasta phases\cite{PhysRevC.69.045804}. In ferrofluid systems, rich phases due to the competition between dipolar forces and short-range forces opposing density variations were found experimentally\cite{PhysRevE.67.021402} and theoretically\cite{PhysRevE.64.041506}. Pattern formation was extensively studied in colloidal systems, where the colloid-colloid interaction is characterized by the competition between the hardcore excluded-volume interaction, on the one hand, and the polarization of the particles, on the other. The interaction potential can be further controlled by adding other components.
As a result, rich configurations, such as clusters, repulsive or attractive glassy states, and gels, were found numerically \cite{Reichhardt2003,PhysRevLett.90.026401,Reichhardt2005,Sciortino2005,PhysRevLett.93.055701,PhysRevB.83.014501}, analytically \cite{PhysRevLett.96.075702,PhysRevE.75.011410} and experimentally \cite{PhysRevLett.104.165702,Lu2008} in colloidal systems with short-range attractive interactions. The properties of isolated clusters formed by the short-range attractive and long-range repulsive interaction were studied\cite{Mossa2004}. By controlling the interaction, the growth of the cluster was shown to change from nearly spherical to one-dimensional. The one-dimensional growth of the clusters facilitated the collective packing into columnar or lamellar phases \cite{Mossa2004}. The columnar and lamellar phases in three dimensions were also studied by molecular dynamics (MD) simulations \cite{PhysRevE.74.010403}. In superconductors, the vortex-vortex interaction is usually considered to be either repulsive (in type-II superconductors, where the Ginzburg-Landau parameter $\kappa$, i.e., the ratio of the magnetic field penetration depth $\lambda$ to the coherence length $\xi$, satisfies $\kappa>1/\sqrt{2}$) or attractive (in type-I superconductors, where $\kappa<1/\sqrt{2}$ and vortices are unstable), while vortices do not interact with each other at the so-called ``dual point'' when $\kappa=1/\sqrt{2}$ \cite{PhysRevB.3.3821}. However, a deeper analysis of the inter-vortex interaction in type-II superconductors near the dual point revealed an attractive tail~\cite{Brandt1987,Brandt2011}. This repulsive-attractive inter-vortex interaction was used to explain unusual patterns in the intermediate state in low-$\kappa$ superconductors (e.g., Nb): islands of Meissner phase surrounded by vortex phase or vice versa, i.e., vortex clusters surrounded by Meissner phase~\cite{Brandt1987,Brandt2011}.
The recent discovery of ``type-1.5'' superconductors \cite{PhysRevLett.102.117001} induced a new wave of interest (see, e.g., Refs.~\cite{PhysRevB.83.214523,lucia2}) in systems with non-monotonic interactions, due to the fact that the observed vortex patterns in those superconductors revealed a clear signature of the repulsive-attractive inter-vortex interaction. In particular, several properties (e.g., vortex lattice with voids, the nearest-neighbor distribution) of the observed vortex patterns were explained using a simple model that involved a non-monotonic inter-vortex interaction based on a more general approach to multi-order-parameter condensates~\cite{PhysRevB.72.180502}. In this paper, we consider a model competing interaction potential which is repulsive at short range and attractive at long range. This form of the interaction potential could be used as a model for different systems with non-monotonic inter-particle interaction, e.g., atoms or molecules (i.e., the Lennard-Jones potential), or vortices in two-band superconductors, depending on specific parameters of the potential. We study pattern formation in two-dimensional systems, for different interaction potential profiles, i.e., we distinguish ``soft-core'' and ``hard-core'' interactions and analyze the transitions between different phases. Based on this, we construct a phase diagram for different interaction parameters and particle densities. We propose a new approach to characterize the different phases: instead of qualitative characterization of the phases (e.g., clusters or labyrinths), we introduce a number of quantitative criteria to distinguish them. In particular, different phases are analyzed in terms of the Radial Distribution Function (RDF) and additional quantities characterizing, e.g., the local density of particles in clusters. The paper is organized as follows. In Sec.~II, we describe the model. The pattern formation for different interaction parameters is discussed in Sec.~III.
In Sec.~IV, we analyze the different phases using the RDF and discuss criteria for the identification of different patterns. The conclusions are given in Sec.~V. \section{Model} The inter-particle interaction potential is taken to be of the following form: \begin{equation} \label{eq-V_interaction_1} V_{ij}=V_0\left(\frac{a}{b}K_0(b\,r_{ij}/\lambda)-K_0(r_{ij}/\lambda)\right). \end{equation} Here, $K_0$ is the zeroth-order modified Bessel function of the second kind, $r_{ij}=|\textbf{r}_i-\textbf{r}_j|$ is the inter-vortex distance, and $V_0$ and $\lambda$ are the units of energy and length, respectively. [Note that in the case of a type-II superconductor the appropriate units of length and energy are the magnetic field penetration depth $\lambda$ and $ V_{0} = \Phi_{0}^{2} / 8 \pi^{2} \lambda^{2}, $ where $\Phi_{0} = hc/2e$ (see, e.g., Ref.~\cite{PhysRevB.82.184512})]. In dimensionless form, the interaction potential (\ref{eq-V_interaction_1}) reads \begin{equation} \label{eq-V_interaction_1dl} V^{\prime}_{ij}=\frac{V_{ij}}{V_0}=\frac{a}{b}K_0(b\,r^{\prime}_{ij})-K_0(r^{\prime}_{ij}), \end{equation} where the dimensionless length is defined as $r^{\prime}_{ij}=r_{ij}/\lambda$. Further on, we will omit the primes and use the dimensionless form of the potential (\ref{eq-V_interaction_1dl}). The interaction force is then given by \begin{equation} \label{eq-interaction_1} \textbf{F}_{ij}=-\mathbf{\nabla}V_{ij}=\left(a\,K_1(b\,r_{ij})-K_1(r_{ij})\right)\hat{\textbf{r}}_{ij}, \end{equation} where $K_1$ is the first-order modified Bessel function of the second kind, $\hat{\textbf{r}}_{ij}=(\textbf{r}_i-\textbf{r}_j)/r_{ij}$, and $a$ and $b$ are two positive coefficients. The interaction potential Eq.~(\ref{eq-V_interaction_1dl}) is generic: by choosing suitable parameters, it can be used as a model of non-monotonic interactions in different systems, for example, the well-known interatomic (intermolecular) Lennard-Jones (LJ) potential: $\mu_{LJ}(r)=\mu_0[(r/\sigma)^{-12}-(r/\sigma)^{-6}]$. 
In Fig.~\ref{fig-Lennard_Jones}(a), we compare the LJ potential for a particle of diameter $\sigma=2.762\lambda$ and the energy unit $\mu_0=0.1V_0$ with the model potential given by Eq.~(\ref{eq-V_interaction_1dl}) with the parameters $a=1.045 \times 10^7$ and $b=5.896$. The corresponding interaction forces are presented in Fig.~\ref{fig-Lennard_Jones}(b). The comparison shows that the potentials and the corresponding forces (i.e., for the model potential (\ref{eq-V_interaction_1}) and the LJ potential) agree fairly well with each other. On the other hand, one can easily see that the interaction potential Eq.~(\ref{eq-V_interaction_1}) is a generalized form of the inter-vortex interaction in type-II superconductors, which, as shown by Kramer~\cite{PhysRevB.3.3821}, can be presented in the form $V(r)=d_1(\kappa)K_0(r/\lambda)-d_2(\kappa)K_0(r/\xi)$. The first term of Eq.~(\ref{eq-interaction_1}) is repulsive while the second term describes an attractive interaction force. Indeed, for $r\rightarrow\infty$, $K_1(x)\rightarrow\sqrt{\pi/(2x)}e^{-x}$, so the interaction force (\ref{eq-interaction_1}) has a repulsive (attractive) tail when $b<1$ ($b>1$). For $r\rightarrow 0$, $K_1(x)\rightarrow 1/x$ and thus $\textbf{F}_{ij}\rightarrow (a/b-1)/r$. Therefore, at short range the interaction force (\ref{eq-interaction_1}) is repulsive (attractive) when $a>b$ ($a<b$), and we only consider the case $a>b$ since an attractive interaction at short distances would result in a collapse of our system of point particles. When $a>b$ and $b<1$, the interaction is always repulsive, and particles form a Wigner crystal structure. The most interesting case is realized when $a>b$ and $b>1$, when the interaction has a repulsive core and an attractive tail. In this case, there exists a critical distance $r_c$ where the inter-particle interaction energy (\ref{eq-V_interaction_1}) reaches a minimum (and the interaction force (\ref{eq-interaction_1}) changes sign). 
By setting the force equal to zero at $r=r_c$, the coefficient $a$ is given by \begin{equation} \label{eq-coefficent_a} a=\frac{K_1(r_c)}{K_1(b\,r_c)}. \end{equation} The pattern formation is determined by the coefficients $b$, $r_c$, and the particle density $n$. We study pattern formation in a system of interacting particles using Langevin equations. The dynamics of a single particle $i$ obeys the following overdamped equation of motion: \begin{equation} \label{Md} \eta \textbf{v}_i=\textbf{F}_i=\sum_{j\neq i} \textbf{F}_{ij}+\textbf{F}_{i}^T. \end{equation} Here, $\eta$ is the viscosity, which is set to unity, $\textbf{F}_{ij}$ is the interparticle interaction force defined by Eq.~(\ref{eq-interaction_1}), and $\textbf{F}_{i}^T$ is the stochastic thermal force. The thermal stochastic term $\textbf{F}_{i}^T$ in Eq.~(\ref{Md}) obeys the following conditions: \begin{eqnarray} \langle F_i^T(t)\rangle =0 \end{eqnarray} and \begin{eqnarray} \langle F_i^T(t) F_j^T(t^\prime)\rangle =2\eta k_B T\delta_{ij}\delta(t-t^\prime). \end{eqnarray} \begin{figure} \begin{center} \includegraphics[width=0.96\columnwidth]{fig-01}\\ \caption{ (Color online) The LJ potential and the model potential given by Eq.~(\ref{eq-V_interaction_1}) with the following parameters: $a=1.045 \times 10^7$, $b=5.896$, $\sigma=2.762\lambda$, and $\mu_0=0.1V_0$ (a). The corresponding interaction forces (b).} \label{fig-Lennard_Jones} \end{center} \end{figure} We consider a two-dimensional (2D) square simulation region $L_{x} \times L_{y}$ in the $xy$-plane and apply periodic boundary conditions in the $x$ and $y$ directions. Since the interaction force given by Eq.~(\ref{eq-interaction_1}) decays exponentially at large distances, we use a cut-off for $r>8$. The length of the square cell $L_{x} = L_{y} = L$ was varied from $60$ to $180$ to examine finite-size effects, and we set $L=120$ to optimize the calculation speed without influencing the results. 
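The zero-force condition Eq.~(\ref{eq-coefficent_a}) is easy to verify numerically. The following is an illustrative Python sketch (not part of the simulation code; the helper names and the quadrature-based evaluation of $K_1$ are our own choices, and a library routine such as \texttt{scipy.special.k1} would serve equally well):

```python
# Sanity check of the dimensionless force a*K1(b*r) - K1(r):
# with a = K1(r_c)/K1(b*r_c), the force vanishes at r = r_c,
# is repulsive (positive) for r < r_c and attractive (negative) for r > r_c.
import math

def bessel_k1(x, tmax=12.0, steps=4000):
    """K_1(x) from the integral representation
    K_nu(x) = int_0^inf exp(-x*cosh(t)) * cosh(nu*t) dt,
    evaluated with a simple trapezoidal rule (stdlib only)."""
    h = tmax / steps
    s = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        s += w * math.exp(-x * math.cosh(t)) * math.cosh(t)
    return s * h

def force(r, a, b):
    """Dimensionless radial pair force; positive = repulsive."""
    return a * bessel_k1(b * r) - bessel_k1(r)

b, r_c = 1.1, 2.3                          # soft-core parameters used in the text
a = bessel_k1(r_c) / bessel_k1(b * r_c)    # zero-force condition
# force(r_c, a, b) ~ 0; the force changes sign from + to - across r_c
```

The same routine can be reused to tabulate the force profiles of the kind shown in Fig.~\ref{fig-force_profile}.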
To obtain stable particle patterns, we performed simulated annealing simulations of interacting particles. For this purpose, particles were initially randomly distributed inside the simulation region at a suitable non-zero temperature (depending on the inter-particle interaction). The temperature was then gradually reduced to zero, and the simulation was continued until a stable state was reached. \begin{figure} \vspace*{0.5cm} \begin{center} \includegraphics[width=0.9\columnwidth]{fig-02}\\ \caption{ (Color online) The profile of the inter-particle interaction force versus the distance. The main panel and the inset show the change in the force profile due to an increase of $r_c$ and $b$, respectively. } \label{fig-force_profile} \end{center} \end{figure} \section{Patterns and phases} In this section, we study pattern formation and identify different phases depending on the parameters of the interaction, $r_c$ and $b$, and the particle density $n$. In Fig.~\ref{fig-force_profile}, we illustrate the change of the interaction force profile due to an increase of $r_c$ and $b$. On the one hand, the interaction force is very sensitive to $r_c$, since its increase directly leads to an increase of the repulsive part in Eq.~(\ref{eq-interaction_1}) (see the main panel of Fig.~\ref{fig-force_profile}). On the other hand, an increase of $b$ leads to an increase of both the repulsive and attractive parts (see the inset of Fig.~\ref{fig-force_profile}). However, the repulsive interaction increases much faster for $r<r_c$ than the attractive interaction for $r>r_c$. Thus, with increasing $b$ it becomes hard to reduce the inter-particle distance below $r_{c}$. In other words, the increase of $b$ results in a hardening of the core of the interaction force (\ref{eq-interaction_1}), i.e., the interaction changes continuously from a ``soft-core'' to a ``hard-core'' interaction. 
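Before turning to the resulting patterns, the annealing protocol described above can be sketched in a few lines of Python (an illustrative sketch, not the production code: the toy force kernel, system size, and schedule are placeholders, and the thermal force is drawn per component with variance $2\eta k_BT/\Delta t$, in units where $k_B=1$, consistent with the correlator above):

```python
# Overdamped Langevin dynamics with a linear cooling schedule and
# periodic boundaries (minimum-image convention). Plug in the real
# repulsive-attractive pair force in place of the toy kernel below.
import math, random

def anneal(pos, L, pair_force, dt=0.01, steps=1000, T0=0.5, eta=1.0):
    N = len(pos)
    for step in range(steps):
        T = T0 * (1.0 - step / steps)            # linear cooling to zero
        sigma = math.sqrt(2.0 * eta * T / dt)    # thermal-force amplitude
        new = []
        for i in range(N):
            fx = fy = 0.0
            for j in range(N):
                if i == j:
                    continue
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                dx -= L * round(dx / L)          # minimum image
                dy -= L * round(dy / L)
                r = math.hypot(dx, dy)
                if r > 1e-12:
                    f = pair_force(r)
                    fx += f * dx / r
                    fy += f * dy / r
            fx += sigma * random.gauss(0.0, 1.0)
            fy += sigma * random.gauss(0.0, 1.0)
            x = (pos[i][0] + dt * fx / eta) % L  # overdamped update
            y = (pos[i][1] + dt * fy / eta) % L
            new.append((x, y))
        pos = new
    return pos

random.seed(1)
L = 20.0
start = [(random.uniform(0, L), random.uniform(0, L)) for _ in range(30)]
# toy repulsive-attractive kernel with a zero crossing at r = 2
final = anneal(start, L, lambda r: math.exp(-r) * (2.0 - r), steps=200)
```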
\subsection{Soft-core interaction} To analyze the soft-core regime, we set the coefficient $b=1.1$. We define the critical density $n_c=2r_c^{-2}/\sqrt{3}$, which is the density of an ideal (hexagonal) Wigner crystal with the lattice constant $a=r_c$. The density is defined as $n=N/S$, where $S=120\times120$ is the area of the simulation region and $N$ is the number of particles (further on, when analyzing different patterns, we will refer either to the number of particles $N$ in the simulation cell or to the density $n=N/14400$). For the case $n<n_c$, which is considered here, the Wigner crystal is not stable due to the attractive interaction. In Fig.~\ref{fig-pattern_1500}, we show patterns formed by $N=1500$ particles when $r_c$ increases from $1.9$ (a) to $2.9$ (f). Note that the condition $n<n_c$ is fulfilled for all values of $r_c$ in this range. We found that for $r_c<2.1$ particles form clusters, similar to the ``clump'' phase found in Refs.~\cite{Reichhardt2003,PhysRevLett.90.026401,Reichhardt2005} (see Figs.~\ref{fig-pattern_1500}(a) and (b)). The main difference from the patterns found in Refs.~\cite{Reichhardt2003,PhysRevLett.90.026401,Reichhardt2005} is that the relatively softer core in our case is compressed due to the attractive interaction, and the clusters acquire a circular shape. In addition, the interaction between the clusters (decaying exponentially at long distances) becomes negligible for inter-cluster distances of the order of a few $r_{c}$. Therefore, the clusters can be considered as non-interacting at low densities (although they still do not approach each other), contrary to the situation of Refs.~\cite{Reichhardt2003,PhysRevLett.90.026401,Reichhardt2005}, where a super-lattice is formed due to the long-range cluster-cluster repulsion. When $r_c$ increases, the clusters expand. 
In particular, for $r_c> 2.1$, the clusters start to elongate, which is an indication of an instability with respect to the transition to the stripe phase. For $2.1<r_c<2.3$, a mixed state with both stripes and clusters is observed (see Fig.~\ref{fig-pattern_1500}(c)). A further increase of $r_c$ gradually destroys the cluster phase and leads to the formation of the labyrinth phase (see Figs.~\ref{fig-pattern_1500}(d) to (f)). In order to investigate the influence of the density, we gradually increase the number of particles from $1500$ to $10500$ in our simulation cell. First, in Fig.~\ref{fig-pattern_5500}, we present patterns formed by $N=5500$ particles for varying $r_{c}$. Note that $n>n_c$ for $N=5500$. As compared to the lower density case shown in Fig.~\ref{fig-pattern_1500}, in the cluster phase the additional particles lead to an expansion of the clusters (see Figs.~\ref{fig-pattern_5500}(a) and (b)). A mixture of clusters and stripes is formed when the additional particles form bridges connecting the clusters (see Fig.~\ref{fig-pattern_5500}(c)), which will be discussed in detail below. For the labyrinth phase (shown in Figs.~\ref{fig-pattern_1500}(d) to (f) for $N=1500$), the additional particles fill the empty regions (voids), resulting in the formation of a triangular lattice with varying local density (Fig.~\ref{fig-pattern_5500}(d)) and, finally, a regular triangular lattice (see Figs.~\ref{fig-pattern_5500}(e) and (f)). Note that the lattice with varying local density (Fig.~\ref{fig-pattern_5500}(d)) is not stable in systems with purely repulsive interaction, such as vortices in type-II superconductors. We analyzed in detail the intermediate regime (i.e., corresponding to the transition from clusters to stripes, see Fig.~\ref{fig-pattern_1500}(c)) where $r_c\approx2.3$. The patterns for varying density are shown in Fig.~\ref{fig-rc2.3}. 
Fig.~\ref{fig-rc2.3}(a) displays a configuration at low density, $N=1500$, when many individual clusters are formed. When increasing the density (see Fig.~\ref{fig-rc2.3}(b)), the clusters connect with each other and form a stripe phase. A further increase of the number of particles (Fig.~\ref{fig-rc2.3}(c)) results in the formation of a mixed phase of interconnected stripes with voids. A very interesting and counter-intuitive evolution is observed when the number of particles increases from $N=5500$ to $N=6500$ (see Fig.~\ref{fig-rc2.3}(d)): in contrast to the gradual transition from a void-rich configuration to a lattice-rich configuration when $N$ changes from $N=3500$ (b) to $N=5500$ (c), in the case of $N=6500$ we observe a ``reentrant'' behavior, i.e., the void-rich phase starts to recover, which is compensated by an increase of the local density in the stripes. However, a further increase of the density results in the expansion of the stripes into the empty regions, which is accompanied by a decrease of the local density in the stripes, as shown in Fig.~\ref{fig-rc2.3}(e). The distribution of particles becomes more uniform, with only a few small voids. Finally, for $N=9500$ (Fig.~\ref{fig-rc2.3}(f)), we obtain a deformed triangular lattice characterized by a varying local density with only one small void. Our calculations show that the obtained phases are very sensitive to variations in $r_c$. Thus, if $r_c$ slightly decreases (e.g., $r_c=2.25$), the number of clusters greatly increases as compared to the case $r_c=2.3$, for the same density of particles. For $r_c=2.25$, we also observe the transition from a void-rich configuration to a lattice-rich configuration. However, since the decrease of $r_c$ increases the attractive component of the inter-particle interaction, this occurs at a much higher density. For even smaller values of $r_c$, i.e., $r_c<2.1$, we do not observe the lattice phase, even for an extremely large number of particles (up to $N=20000$). 
\begin{figure} \includegraphics[width=0.9\columnwidth]{fig-03}\\ \caption{ (Color online) Different patterns for $N=1500$ particles in a unit cell $L\times L$ with $L=120$ for varying $r_c$: $r_c=1.9$ (a), $2.1$ (b), $2.3$ (c), $2.5$ (d), $2.7$ (e), $2.9$ (f). } \label{fig-pattern_1500} \end{figure} \begin{figure} \includegraphics[width=0.9\columnwidth]{fig-04}\\ \caption{ (Color online) Patterns for $N=5500$ particles in a unit cell $L\times L$ with $L=120$ for varying $r_c$: $r_c=1.9$ (a), $2.1$ (b), $2.3$ (c), $2.5$ (d), $2.7$ (e), $2.9$ (f). } \label{fig-pattern_5500} \end{figure} \begin{figure} \includegraphics[width=0.9\columnwidth]{fig-05}\\ \caption{ (Color online) Patterns for $r_c=2.3$ and varying number of particles $N$ (in the computational unit cell $L\times L$ with $L=120$): $N=1500$ (a), 3500 (b), 5500 (c), 6500 (d), 7500 (e), 9500 (f). } \label{fig-rc2.3} \end{figure} \subsection{Phase Diagram} Based on the above analysis of the patterns and phases, we constructed a phase diagram in the plane of $r_c$ and the number density $n$ (see Fig.~\ref{fig-phase}). For extremely low density ($n<0.03$), particles form small clusters which are well separated, and the patterns are rather insensitive to variations of $r_c$. However, the phases become richer when the density $n$ increases (i.e., $n>0.1$). Low values of $r_c$ (i.e., $r_c<2.1$) still favor cluster formation for a broad range of densities, although the clusters become denser for large $n$. With increasing $r_c$, clusters become unstable with respect to the formation of ``bridges'' between separate clusters, which is a precursor of the formation of stripes. Stripes are formed in a rather narrow range of $r_c$ when $r_c>2.1$ (see Fig.~\ref{fig-phase}). The stripe phase is represented by two sub-phases, I and II, i.e., the void-rich phase (see Fig.~\ref{fig-phase}(i)) and the void-poor (lattice-rich) phase (Fig.~\ref{fig-phase}(j)). For larger $r_c$ and $n<n_c$, particles form labyrinth structures. 
However, when increasing the density, the additional particles fill the empty regions and finally form a deformed triangular lattice with varying density. Depending on $r_c$ and $n$, the deformed lattice is characterized either by a varying local density or by the appearance of voids. Correspondingly, we distinguish two regions (1 and 2 in Fig.~\ref{fig-phase}). Note that in the vicinity of the phase boundaries the patterns are always mixtures of the two phases (e.g., clusters and stripes), except for the phase transition from the stripe phase to the deformed triangular lattice. Near this phase boundary, particles either form stripes with high local density or deformed triangular lattices with lower local density. The probability of finding the lattice configuration greatly increases when $r_c$ or $n$ increases. Finally, for very large density, when the average interparticle distance becomes smaller than $r_c$, the repulsive interaction prevails, resulting in the formation of an almost perfect triangular lattice. The phases presented in Fig.~\ref{fig-phase} are found in many real systems. For example, the cluster phase is found in such systems as colloids~\cite{PhysRevLett.97.078301,Sciortino2005,PhysRevLett.94.208301,PhysRevLett.93.055701} and neutron stars~\cite{PhysRevC.69.045804}. The obtained stripe patterns are very similar to those in Langmuir monolayers~\cite{Suresh1988}. Several of the calculated phases were found in superconductors, e.g., the stripe phase in the intermediate state of type-I superconductors, clusters of Meissner phase or vortex clusters in the intermediate state of low-$\kappa$ type-II superconductors~\cite{Brandt1987,Brandt2011}, and the labyrinth phase (i.e., a vortex lattice with voids) or vortex clusters in type-1.5 superconductors~\cite{PhysRevLett.102.117001}. 
Despite the variety of these physical systems, with length scales ranging from nano- and micro-objects to cosmic objects, their common feature is a competing attractive-repulsive interparticle interaction, which allows us to analyze them within the same approach. \begin{figure*} \includegraphics[width=1.8\columnwidth]{fig-06}\\ \caption{ (Color online) The phase diagram in the plane ``critical radius $r_{c}$ $-$ density $n$'' (n) and representative patterns (a to m) for the different phases. For extremely low density ($n<0.03$), particles form small well separated clusters. For $r_c<2.1$, particles form clusters. For larger $r_c$, the size of the clusters increases (see the change from (d) to (e) and from (b) to (c)). The increase of the density results in an increase of the size of the clusters (see the change from (d) to (b) and from (e) to (c)). A further increase of $r_c$ leads to elongation of the clusters and the formation of ``bridges'' between them (a). Thus, the configurations change gradually from the cluster phase to the stripe phase (i). When $r_c$ or $n$ becomes even larger, stripes interconnect and form patterns with voids (j). Thus the stripe phase is divided into two sub-phases, I and II. When $r_c$ becomes even larger, labyrinth-like configurations are formed for $n<n_c$ (see (g) and (f)). With a further increase of $r_c$ or $n$, deformed triangular lattices with voids (k) or without voids (l) are formed. The deformation of the triangular lattice is reduced for even larger values of $r_c$ and $n$ (m). } \label{fig-phase} \end{figure*} \subsection{Hard-core interaction} Let us now analyze the influence of the coefficient $b$ on the pattern formation. As mentioned above (see Fig.~\ref{fig-force_profile}), an increase in $b$ changes the potential from a soft-core to a hard-core one. Hardening the repulsive core of the interaction potential generally leads to a decreasing compressibility of the inner parts of the patterns, where particles are closely packed. 
Correspondingly, the patterns change as compared to the soft-core regime. As shown in Fig.~\ref{fig-pattern_1500_2}, for $b=4$ and low density ($N=1500$), all the patterns are clusters of different shapes. Thus, for $r_c=1.15$ and $r_c=1.3$, the clusters are of circular shape, similar to those found in the soft-core regime, although for $r_c=1.3$ some clusters composed of a small number of particles have polygonal shapes. With increasing $r_c$, polygonal clusters become more favorable, which allows us to identify them as short stripes. For even larger $r_c$, the clusters become much larger. They represent separate islands of triangular lattice. For higher density ($N=6500$), the variety of phases is much richer. Although for $r_c=1.15$ circular clusters are still observed (see Fig.~\ref{fig-pattern_6500_2}(a)), which grow in size for $r_c=1.3$ and slightly deviate from a circular shape (Fig.~\ref{fig-pattern_6500_2}(b)), for larger $r_c=1.45$ the deviations of the cluster shape from circular become more pronounced (see Fig.~\ref{fig-pattern_6500_2}(c)). With a further increase of $r_c$, i.e., $r_c=1.6$, stripes are formed (see Fig.~\ref{fig-pattern_6500_2}(d)), followed by lattices with voids for $r_c=1.75$ and $r_c=1.9$. In the hard-core regime, deformations of lattices occur via the appearance of voids rather than via a varying local density (as in the soft-core regime). \begin{figure} \includegraphics[width=0.9\columnwidth]{fig-07}\\ \caption{ (Color online) Patterns for $N=1500$ particles in a unit cell $L\times L$ with $L=120$, $b=4$ and for varying $r_c$: $r_c=1.15$ (a), $1.3$ (b), $1.45$ (c), $1.6$ (d), $1.75$ (e), $1.9$ (f). } \label{fig-pattern_1500_2} \end{figure} \begin{figure} \includegraphics[width=0.9\columnwidth]{fig-08}\\ \caption{ (Color online) Patterns for $N=6500$ particles in a unit cell $L\times L$ with $L=120$, $b=4$ and for varying $r_c$: $r_c=1.15$ (a), $1.3$ (b), $1.45$ (c), $1.6$ (d), $1.75$ (e), $1.9$ (f). 
} \label{fig-pattern_6500_2} \end{figure} \section{Analysis of the patterns} \subsection{Radial Distribution Function} Although the different patterns and phases studied above are qualitatively well distinguished, it is highly desirable to build a set of solid criteria which would allow us to identify them in a quantitative manner. For this purpose, we analyze here the different phases by means of the Radial Distribution Function (RDF). The RDF, $g(r)$, describes the variation of the atomic (particle) density as a function of the distance from one particular atom (particle). If we define the average density as $n = N/V$ (where $V$ is the volume, or the surface area in the 2D case), then the local density at distance $r$ is $ng(r)$. The knowledge of the RDF is important since one can measure $g(r)$ experimentally using neutron scattering or x-ray diffraction~\cite{Hajdu1976}. Moreover, macroscopic thermodynamic quantities can be calculated from $g(r)$~\cite{Frenkel2002}. In our calculations, we define the RDF $g_i(r)$ as follows: \begin{equation} \label{eq-rdf} g_i(r)=\frac{\Delta N/\Delta r}{2\pi r n}. \end{equation} Here, the lower index indicates that the RDF is centered at the position of the $i$th particle, and $\Delta N$ is the number of particles whose distance to the $i$th particle is between $r$ and $r+\Delta r$. The average $g(r)$ is given by \begin{equation} \label{eq-rdfa} g(r)=\frac{1}{N}\sum_{i=1}^{N}g_i(r). \end{equation} In Fig.~\ref{fig-rdf1}(a), we plot the function $g(r)$ calculated for the low-density cluster configuration shown in Fig.~\ref{fig-pattern_1500}(b). The function $g(r)$ has two well-pronounced peaks. The first peak corresponds to the average distance to the first coordination sphere (nearest neighbors), while the second one is located at approximately twice the distance of the first peak, which shows short-range periodicity. 
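The estimator behind Eqs.~(\ref{eq-rdf}) and (\ref{eq-rdfa}) can be sketched in a few lines of Python (an illustrative, stdlib-only sketch with our own function names; the normalization by the mean density $n$ makes $g(r)\rightarrow 1$ for an uncorrelated configuration):

```python
# Averaged RDF of a 2D configuration with periodic boundaries.
# Each unordered pair is counted twice, once per center, so the sum
# over centers divided by N reproduces the average of Eq. (9).
import math, random

def rdf(pos, L, dr=0.25, r_max=8.0):
    N = len(pos)
    n = N / (L * L)                          # mean density
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(N):
        for j in range(i + 1, N):
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            dx -= L * round(dx / L)          # minimum image
            dy -= L * round(dy / L)
            r = math.hypot(dx, dy)
            if r < r_max:
                hist[int(r / dr)] += 2       # pair seen from both centers
    g = []
    for k in range(nbins):
        r_mid = (k + 0.5) * dr
        shell = 2.0 * math.pi * r_mid * dr   # exact annulus area at r_mid
        g.append(hist[k] / (N * shell * n))
    return g

random.seed(2)
L, N = 40.0, 1200
pts = [(random.uniform(0, L), random.uniform(0, L)) for _ in range(N)]
g = rdf(pts, L)
# for this uncorrelated configuration g(r) fluctuates around 1
```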
For $r$ larger than the size of the cluster and smaller than the inter-cluster distance, $g(r)<1$, since only few particles are located inside this range. As for the RDF obtained for the pasta phase in neutron stars~\cite{PhysRevC.69.045804}, the decaying tail in our case also indicates a strong aggregation of the particles in the cluster phase. The minimum of $g(r)$ is very close to zero since most of the clusters have circular symmetry and are well separated. The position of the minimum of $g(r)$ (marked by the gray arrow in Fig.~\ref{fig-rdf1}(a)) gives an estimate of the average diameter of the clusters. For the stripe phase (see Fig.~\ref{fig-rdf1}(b) for the pattern shown in Fig.~\ref{fig-pattern_1500}(d)), there are also two peaks indicating short-range periodicity, similar to the cluster phase. However, unlike in the case of clusters, the minimum of $g(r)$ is non-zero since the stripes are generally not separated. For the labyrinth phase (see Fig.~\ref{fig-rdf1}(c) for the pattern shown in Fig.~\ref{fig-pattern_1500}(f)), only short-range periodic ordering exists, and $g(r)$ becomes uniform for larger $r$. \begin{figure} \vspace*{0.5cm} \includegraphics[width=1\columnwidth]{fig-09}\\ \caption{ (Color online) The average RDF of the patterns formed at low density ($N=1500$). (a), (b), and (c) correspond to the patterns shown in Figs.~\ref{fig-pattern_1500}(b), (d), and (f), respectively. The peaks marked by dark arrows show the short-range periodicity. The gray arrow shows the minimum of the RDF. } \label{fig-rdf1} \end{figure} In Fig.~\ref{fig-rdf2}, we plot $g(r)$ for the patterns formed at high density ($N=5500$). For clusters, the increase in the density leads to an increasing variation of the cluster size. Note, however, that the sizes of individual clusters are rather insensitive to moderate variations of the number of particles in the clusters, due to the strong compressibility of the core. 
As a result, the function $g(r)$ for the high density clusters loses the short-range periodicity, and the second peak disappears (see Fig.~\ref{fig-rdf2}(a)). Strikingly, this peak still exists for the void-rich phase (see Fig.~\ref{fig-rdf2}(b)). The decrease of the tail of $g(r)$ becomes very slow, which shows a minor aggregation of the particles. For the lattice phase (see Fig.~\ref{fig-rdf2}(c)), the position of the first peak is $r_1\approx a=\sqrt{2/(\sqrt{3}n)}$, where $a$ is the distance between two neighboring particles in the ideal triangular lattice with the density $n$. The positions of the second and third peaks are at $r_2\approx \sqrt{3}a$ and $r_3\approx 2a$, respectively, corresponding to those in a triangular lattice. Fourth, fifth, and even sixth peaks appear, which shows that this phase is much more ordered. However, due to the variation of the local density, these peaks are strongly broadened, and they are actually a combination of many neighboring peaks. These peaks become clearer for larger $r_c$ or $n$, which implies that the lattices become more regular. \begin{figure} \vspace*{0.5cm} \includegraphics[width=1\columnwidth]{fig-10}\\ \caption{ (Color online) The average RDF of the patterns formed at high density ($N=5500$). (a), (b), and (c) correspond to the patterns shown in Figs.~\ref{fig-pattern_5500}(b), (d), and (f), respectively. The arrows have the same meaning as in Fig.~\ref{fig-rdf1}. } \label{fig-rdf2} \end{figure} It is interesting to compare the results for the function $g(r)$ for the hard-core interaction with those for the soft-core interaction. Although the clusters shown in Figs.~\ref{fig-pattern_6500_2}(a) and (b) have shapes very similar to those of the clusters shown in Figs.~\ref{fig-pattern_5500}(a) and (b), the analysis using the RDF shows that most of the clusters shown in Fig.~\ref{fig-pattern_6500_2}(b) have hexagonally ordered cores. 
The peaks in $g(r)$ appear to be much better separated than for, e.g., the deformed triangular lattice formed in the soft-core case (see Fig.~\ref{fig-rdf2}(c)). For the stripe and void-rich phases, the RDF shows much clearer triangular lattice ordering. Therefore, the analysis using the RDF allowed us to establish quantitative criteria for the different phases and reveal the differences in the structure of patterns (which look similar, e.g., clusters) in the case of soft- and hard-core interparticle interactions. \begin{figure} \includegraphics[width=1\columnwidth]{fig-11}\\ \caption{ (Color online) The average RDF of the patterns formed in the hard-core case. (a), (b), (c), (d), and (e) correspond to the patterns (a), (b), (c), (d), and (e) in Fig.~\ref{fig-pattern_6500_2}, respectively. } \label{fig-rdf_h} \end{figure} \subsection{Local density} \begin{figure} \includegraphics[width=0.9\columnwidth]{fig-12}\\ \caption{ (Color online) The average local density $I$ versus the particle density $n$ for $b=1.1$ (a). Curves A, B, C, D, E, and F correspond to $r_c=1.9$, $2.1$, $2.3$, $2.5$, $2.7$, and $2.9$, respectively. For the cluster phase, $I > 20$; for the stripe phase, $10 < I < 20$; for the labyrinth phase, $6 < I < 10$; $I=6$ for lattice and deformed lattice configurations. The local density $I$ versus the number of particles $N$ for $b=4$ (b). Curves A, B, C, D, E, and F correspond to $r_c=1.15$, $1.3$, $1.45$, $1.6$, $1.75$, and $1.9$, respectively. } \label{fig-order_I} \end{figure} Let us define, using the RDF, an ``order parameter'' to characterize the different phases. We define the local density as \begin{equation} \label{eq-rdf_integral} I_i=\int_0^\xi 2\pi r n g_i(r)dr= N_\xi-1. \end{equation} Here, $N_\xi\approx \pi \xi^2 n$ is the average number of particles within the circle of radius $\xi$ centered at the $i$th particle. 
Since in an ideal triangular lattice one particle has six nearest neighbors, we take $N_\xi=7$, so that $\xi=\sqrt{N_\xi/(\pi n)}=\sqrt{7/(\pi n)}$. Thus, for any configuration characterized by small local density fluctuations, the average local density is $I = \langle I_i \rangle = 6$. From the definition given by Eq.~(\ref{eq-rdf_integral}), we can see that the presence of a large fraction of empty regions can considerably increase $I$, since only the ``shell'' of those regions of thickness $\xi$ is considered. In Fig.~\ref{fig-order_I}(a), we plot the function $I$ for different $r_c$ and $N$ for the soft-core interaction with $b=1.1$. We see that the function $I$ can be used to characterize the different phases. Thus, we found that, for clusters, $I$ is always large ($I>20$) since the aggregation is strong. For stripes, the aggregation is weaker, and thus $I$ is smaller ($10<I<17$). For labyrinths, the regions free of particles are relatively small; therefore, $I$ ranges between $6$ and $9$. Finally, for the lattice phase, $I$ is always $6$. Therefore, the function $I$ serves as a measure of the aggregation of particles and allows us to effectively distinguish the different phases. The function $I$ also provides a tool to analyze the stability of patterns with increasing density. This is demonstrated in Fig.~\ref{fig-order_I}(b), where we plot $I$ for the hard-core case with $b=4$. For $r_c=1.15$, $I>20$ for all values of the density, and thus the clusters are stable. However, the situation changes for $r_c=1.3$: while clusters are formed at low densities up to $N \approx 7500$, for larger density $I$ decreases below the value $I=20$, which means that clusters elongate and interconnect, giving rise to the onset of the stripe phase. For $r_c=1.45$, the particles start to form stripes at even lower density ($N=3500$). For higher density, the additional particles fill in the empty regions and finally form the lattice phase. 
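As a consistency check of the definition of $I$, one can verify directly that an ideal triangular lattice gives $I_i=6$: the circle of radius $\xi=\sqrt{7/(\pi n)}\approx 1.39\,a$ encloses exactly the first coordination shell. The sketch below is our own discrete construction (open boundaries, with sites near the edge of the finite patch excluded rather than using periodic boundaries):

```python
# Count neighbors within xi for every interior site of a finite
# triangular-lattice patch; the count should be exactly 6 everywhere.
import math

a = 1.0                                    # lattice constant
rows, cols = 20, 20
pts = [(c * a + 0.5 * a * (r % 2), r * a * math.sqrt(3) / 2)
       for r in range(rows) for c in range(cols)]
n = 2.0 / (math.sqrt(3) * a * a)           # triangular-lattice density
xi = math.sqrt(7.0 / (math.pi * n))        # ~1.39*a: first shell only

def local_I(p, pts, xi):
    """Discrete analogue of I_i: neighbors of p strictly inside xi."""
    return sum(1 for q in pts if q != p and math.dist(p, q) < xi)

# keep only sites at least xi away from the patch boundary, so that
# all their neighbors are actually present in the finite patch
xmin = min(x for x, y in pts); xmax = max(x for x, y in pts)
ymin = min(y for x, y in pts); ymax = max(y for x, y in pts)
interior = [p for p in pts
            if xmin + xi < p[0] < xmax - xi and ymin + xi < p[1] < ymax - xi]
I_values = [local_I(p, pts, xi) for p in interior]
```

The second coordination shell sits at $\sqrt{3}\,a\approx 1.73\,a>\xi$, which is why the count is insensitive to the exact choice of $\xi$ within this window.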
For $r_c>1.6$, the island-like clusters formed at low densities are very unstable with respect to an increase in density. The additional particles rapidly occupy the empty regions, and the patterns change from clusters to stripes and from stripes to lattices with increasing density. Therefore, the phenomenological description of the pattern evolution with increasing density given in Sec.~III.A has been verified in a quantitative manner using the local density function approach. \subsection{Occupation factor} \begin{figure} \includegraphics[width=0.9\columnwidth]{fig-13}\\ \caption{ (Color online) The occupation factor $A=(r_1/a)^2$ for $b=1.1$ (a). Curves A, B, C, D, E, and F correspond to $r_c=1.9$, $2.1$, $2.3$, $2.5$, $2.7$, and $2.9$, respectively. The occupation factor $A$ for $b=4$ (b). Curves A, B, C, D, E, and F correspond to $r_c=1.15$, $1.3$, $1.45$, $1.6$, $1.75$, and $1.9$, respectively. } \label{fig-order_II} \end{figure} We demonstrated that the (average) RDF $g(r)$ (Eq.~(\ref{eq-rdfa})) and the local density function $I$ (Eq.~(\ref{eq-rdf_integral})) allow us to unambiguously characterize the different patterns and phases. At the same time, it is still hard to distinguish, using the above tools, between the labyrinth phase (or a lattice with voids) and the lattice phase (especially in the soft-core regime, where the corresponding RDFs do not differ much from each other, and both phases are characterized by small values of $I$). However, these phases can be easily distinguished by employing another simple criterion, namely, the occupation factor, which characterizes the ratio of the space occupied by particles to the total space. Let us assume that all the patterns are formed by islands of triangular lattice with the average distance between two nearest-neighbor particles $r=r_1$, where $r_1$ is the position of the first peak of the RDF. 
Then the ratio of the area occupied by the particles to the whole simulation area is $A=(r_1/a)^2$. As shown in Fig.~\ref{fig-order_II}(a), this ratio can be used to distinguish the labyrinth phase from the lattice phase, since for the lattice phase $A\approx 1$. We also found that circular clusters in the case of soft-core interaction are very stable with respect to an increase in the density. In this case, the additional particles cannot efficiently increase the occupation factor due to the increase in the local density of the core. However, for the stripe and labyrinth phases, the occupation factor $A$ increases with the density (see Fig.~\ref{fig-order_II}(a)). These phases are not stable at high density. Thus the large jump in $A(N)$ at $N=6500$ in curve C in Fig.~\ref{fig-order_II}(a) marks the phase transition from the stripe phase to the lattice phase. As one can expect, in the case of hard-core interaction, the occupation factor increases nearly linearly with the density (see Fig.~\ref{fig-order_II}(b)). Therefore, the polygon-shaped and island-like clusters are not stable: with increasing density they evolve to the lattice phase (the plateau in $A(N)$ in Fig.~\ref{fig-order_II}(b)). In addition, let us introduce another useful quantity which characterizes the degree of ``perfection'' of a lattice. Let us define particles separated by a distance shorter than $\xi$ as neighboring particles. Then \begin{align}\label{eq-rdf_displacement} \varepsilon&=\frac{1}{a}\sqrt{\frac{\int_0^\xi 2\pi r g(r) n (r-a)^2 dr}{\int_0^\xi 2\pi r g(r) n dr}}\nonumber\\ &=\frac{1}{a}\sqrt{\frac{\int_0^\xi r g(r) (r-a)^2 dr}{\int_0^\xi r g(r) dr}} \end{align} is the average displacement of the distance between two neighboring particles (measured in units of $a$), which is independent of the density. The function $\varepsilon$ is non-zero for a deformed triangular lattice and $0$ for the ideal triangular lattice.
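For a tabulated RDF, both the occupation factor $A=(r_1/a)^2$ and the perfection parameter $\varepsilon$ reduce to simple quadratures. A hedged sketch, assuming a uniform grid in $r$ and that the first RDF peak is also its global maximum; names are illustrative:

```python
import numpy as np

def order_parameters(r, g, a, xi):
    """Occupation factor A = (r1/a)^2 and lattice-perfection parameter
    eps from an RDF g(r) tabulated on a uniform grid r.
    `a` is the expected lattice spacing, `xi` the neighbour cutoff."""
    r1 = r[np.argmax(g)]                   # first RDF peak (assumed global maximum)
    A = (r1 / a) ** 2
    m = r <= xi                            # integrate only up to the cutoff xi
    w = r[m] * g[m]                        # common weight r g(r) of both integrals
    # Riemann sums; the uniform grid spacing cancels in the ratio.
    eps = np.sqrt(np.sum(w * (r[m] - a) ** 2) / np.sum(w)) / a
    return A, eps
```

A narrow RDF peak centred on $r=a$ gives $A\approx 1$ and $\varepsilon\approx\sigma/a$ for a peak of width $\sigma$, so $\varepsilon\to 0$ for an ideal lattice, as stated above.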
Thus, $\varepsilon$ quantitatively measures the degree of perfection of a lattice. Note that $\varepsilon$ is only used for lattices, as an auxiliary tool to distinguish ordered lattices from less ordered ones (see Table~\ref{criterion}). \section{Conclusions} Using molecular-dynamics simulations, we analyzed the pattern formation and identified different phases in a system of particles interacting via a non-monotonic potential, with a repulsive short-range part and an attractive long-range part. The form of the interaction potential is generic: it describes, depending on specific parameters, the interparticle interaction in a variety of physical systems ranging from, e.g., atoms and molecules (Lennard-Jones potential) to colloids and neutron stars. It can also be used as a model of inter-vortex interaction in low-$\kappa$ type-II superconductors and in the recently discovered so-called ``type-1.5'' superconductors. The different phases obtained were summarized in a phase diagram in the plane ``critical radius $r_{c}$ $-$ density $n$'' ($r_{c}$ is the critical radius where the interaction force changes its sign). We also analyzed the influence of the hardness of the ``core'', i.e., the strength of the repulsive core part of the interaction potential, on the pattern formation. We developed a set of criteria in order to unambiguously identify the obtained phases using the following approaches: (i) the Radial Distribution Function (RDF) $g(r)$, (ii) the local density function $I$, and (iii) the occupation factor $A$. In addition, we introduced a parameter $\varepsilon$ which characterizes the degree of perfection of a lattice. Employing these approaches, we elaborated the criteria for the identification of the different phases, which are summarized in Table~\ref{criterion}.
\begin{table}[h] \begin{center} {\footnotesize \begin{tabular}{|c|c|c|c|c|} \hline Patterns & $g(r)$ & $I$ & $A$ & $\varepsilon$ \\ \hline clusters & peak at $r_{1}$($r_{2}$), & $>20$ & & \\ & $g(r)_{min}=0$ & & & \\ \hline stripes & peak at $r_{1}$($r_{2}$), & $10\sim 20$ & & \\ & $g(r)_{min}>0$ & & & \\ \hline labyrinths & peak at $r_{1}$($r_{2}$), & $6 \sim 10$ & $<1$ & \\ & $g(r)\approx$ const, $r>r_{2}$ & & & \\ \hline lattice & several peaks: $r_{1}$, $r_{2}$\ldots & $\approx 6$ &$\approx 1 $ & 0 \\ \hline deformed & several peaks: $r_{1}$, $r_{2}$\ldots & $\approx 6$ &$\approx 1 $ & $>0$ \\ lattice & & & & \\ \hline \end{tabular} } \end{center} \caption{ The set of criteria used to quantitatively identify the different phases in terms of the RDF $g(r)$, the local density function $I$, the occupation factor $A$, and the parameter $\varepsilon$ (for details, see the text). } \label{criterion} \end{table} \section{Acknowledgments} We acknowledge fruitful discussions with Ernst Helmut Brandt. This work was supported by the ``Odysseus'' Program of the Flemish Government and the Flemish Science Foundation (FWO-Vl), the Interuniversity Attraction Poles (IAP) Programme --- Belgian State --- Belgian Science Policy, and the FWO-Vl.
\section{Introduction} \label{sect:intro} Black hole X-ray binaries (BHBs; $\hbox{$M_{\rmn{BH}}$} \sim 10 \hbox{$\thinspace M_{\odot}$}$) display both low frequency ($\sim 0.1 - 30$\,Hz) and high frequency ($\sim 40-450$\,Hz) quasi-periodic oscillations (LFQPOs; HFQPOs) in their X-ray power spectra (PSD; see e.g. \citealt{RemillardMcClintock06} for a review). HFQPOs are the fastest coherent features observed in accreting black holes and their high frequencies suggest an origin in the innermost regions of the accretion flow. Understanding this phenomenon will then provide important information on the BH mass and spin as well as the structure of the strongly-curved spacetime close to the event horizon (e.g. \citealt{MilsomTaam97}; \citealt{NowakETAL97}; \citealt{Wagoner99}; \citealt{stella99}; \citealt{AbramowiczKluzniak01}; \citealt{RezzollaETAL03}; \citealt{DasCzerny11}). A scale invariance of the accretion process (e.g. \citealt{shaksuny73}; \citealt{mushotzky93}) implies that QPOs should also be present in active galactic nuclei (AGN; $\hbox{$M_{\rmn{BH}}$} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 10^{6} \hbox{$\thinspace M_{\odot}$}$). For a BH mass ratio $M_{\rm BHB} / M_{\rm AGN} = 10^{-5}$ the expected frequency of LFQPOs in AGN is $f_{\rm LFQPO} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 10^{-5}$\,Hz (i.e. timescales of $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 100$\,ks). LFQPOs are therefore not expected to be easily detected in AGN with existing data (see \citealt{VaughanUttley05}). The analogous HFQPOs in AGN are expected to occur at $f_{\rm HFQPO} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 5 \times 10^{-3}$\,Hz (i.e. timescales of $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 200$\,s), well within the temporal passband of e.g. {\it XMM-Newton~\/}.
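The mass scaling invoked here is simply $f \propto 1/\hbox{$M_{\rmn{BH}}$}$, so the expected AGN frequencies follow from a one-line rescaling. An illustrative sketch (the 450\,Hz input is the upper end of the BHB HFQPO range quoted above, not a measurement of any particular source):

```python
def scaled_frequency(f_bhb_hz, m_bhb, m_agn):
    """Scale a BHB QPO frequency to an AGN of mass m_agn (both masses
    in the same units), assuming the characteristic frequency scales
    inversely with black-hole mass."""
    return f_bhb_hz * (m_bhb / m_agn)

# A ~450 Hz HFQPO around a 10 M_sun BHB, scaled to a 10^6 M_sun AGN,
# gives ~4.5e-3 Hz, i.e. a period of a few hundred seconds.
f_hfqpo_agn = scaled_frequency(450.0, 10.0, 1e6)
```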
If detected in AGN, the longer periodicity means we can study the QPO on the level of individual periods, given sufficient data quality, providing a better window into the HFQPO phenomenon across the BH mass scale. QPOs have been notoriously difficult to detect in AGN, with many early `detections' disfavoured due to an inadequacy in modelling the underlying broad band noise (\citealt{vaughan05a}; \citealt{VaughanUttley06}; \citealt{GonzalezMartinVaughan12} and references therein). A $\sim 200$\,s QPO (most likely a HFQPO) was detected in the tidal disruption event (TDE) Swift J164449.3+573451 (\citealt{ReisETAL13}). Recently, a $\sim 3.8$\,hr QPO was reported in 2XMM J123103.2+110648 (\citealt{LinETAL13}). The low black hole mass ($\hbox{$M_{\rmn{BH}}$} \sim 10^{5} \hbox{$\thinspace M_{\odot}$}$) and 50 per cent rms variability (typically $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 15$ per cent for LFQPOs in BHBs, e.g. \citealt{RemillardMcClintock06}) led the authors to associate this with the LFQPO phenomenon. The first robust AGN HFQPO detection came from the Seyfert galaxy RE J1034+396, with a $\sim 1$\,hr periodicity (\citealt{gierlinski08}). Recently, we showed that the QPO is present in 5 years of {\it XMM-Newton~\/} observations of RE J1034+396~(\citealt[][hereafter A14]{alston14b}). The frequency of the QPO has remained constant in this time, although it is now only detected in the 1.0--4.0\,keV band. This strengthens the association of the QPO with the primary (hot, optically thin) Comptonising corona, as observed in BHBs (e.g. \citealt{RemillardMcClintock06}). Accreting BHs display hard lags at low frequencies --- where variations in harder energy bands are delayed with respect to softer energy bands (\citealt{miyamoto89}; \citealt{nowak99}a; \citeyear{nowak99b}b; \citeauthor[][2003a]{vaughan03a}; \citealt{mchardy04}; \citealt{arevaloetal08}; \citealt{kara13b}; \citealt{alston14a}; \citealt{LobbanETAL14}). 
The leading model for the origin of the hard lags is the radial propagation of random accretion rate fluctuations through a stratified corona (e.g. \citealt{Lyubarskii97}; \citealt{churazov01}; \citealt{kotov01}, \citealt{arevalouttley06}). A switch from hard (\emph{propagation}) lags at lower frequencies to soft (\emph{reverberation}) lags at higher frequencies has now been observed in $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 20$ AGN (e.g. \citealt{fabian09}; \citealt{emmanoulopoulos11}; \citealt{zoghbi11a}; \citealt{alston13b}; \citealt{cackett13}; \citealt{demarco13lags}; \citealt{kara13a}; \citealt{alston14a}). A consistent picture is emerging where the soft lag represents the \emph{reverberation} signal as the primary X-ray emission is reprocessed by the inner accretion disc itself (e.g. \citealt{fabian09}, see \citealt{uttley14rev} for a review). Further evidence for this scenario comes from the corresponding detection of high frequency iron K$\alpha$ lags (e.g. \citealt{zoghbi12a}; \citealt{zoghbi13a}; \citealt{kara13c}) and Compton hump reverberation lags above 10.0\,keV observed with {\it NuSTAR}~(\citealt{zoghbi14}; \citealt{kara15a}). High frequency soft lags have also been observed in an ultra-luminous X-ray source (\citealt{HeilVaughan10}; \citealt{demarco13b}) and one hard state BHB (\citealt{uttley2011}). The time lags of HFQPOs in BHBs have also been studied, with \citet{Cui99} finding a hard lag in the 67\,Hz QPO in GRS 1915+105. Recently, \citet{Mendez13} performed a systematic study of HFQPO time lags in a sample of 4 BHBs. They found a hard lag in all QPOs with the exception of the 35\,Hz QPO in GRS 1915+105 which displays a soft lag that increases with increasing energy separation. The physical meaning of these HFQPO time lags is still uncertain. However, they provide an extra diagnostic for identifying HFQPOs and for understanding their physical origin. 
In this paper we present the significant detection of a QPO in the AGN MS 2254.9--3712~and explore several aspects of the QPO spectral variability. MS 2254.9--3712~is a nearby ($z = 0.039$; \citealt{stocke91}), X-ray bright (log~$(L_{\rm X}/{\rm erg~s^{-1}}) = 43.29$; \citealt{grupe04}) and `unabsorbed' ($N_{\rm H} < 2 \times 10^{22}~{\rm cm}^{-2}$; \citealt{grupe04}) radio-quiet (\citealt{shields03}) narrow line Seyfert 1 (NLS1) galaxy (FWHM(H$\beta) \sim 1500$\,km s$^{-1}$; \citealt{grupe04}). The central BH mass in MS 2254.9--3712~derived from the empirical $R_{\rm BLR}-\lambda L_{\lambda}(5100 {\rm \AA})$ relation (e.g. \citealt{kaspi2000}) is $\hbox{$M_{\rmn{BH}}$} \sim 4 \times 10^{6} \hbox{$\thinspace M_{\odot}$}$ (\citealt{grupe04}; \citealt{grupe10}). The BH mass derived from the $\hbox{$M_{\rmn{BH}}$} - \sigma$ relation (\citealt{tremaine02}) using $\sigma$(OIII) is estimated as $\hbox{$M_{\rmn{BH}}$} \sim 10^{7} \hbox{$\thinspace M_{\odot}$}$ (\citealt{shields03}). The Eddington ratio estimated from $\lambda L_{\lambda}(5100 {\rm \AA})$ is $L_{\rm Bol}/\hbox{$L_{\rm Edd}$} = 0.24$ (\citealt{grupe04}; \citealt{grupe10}). However, \citet{wang03} suggest MS 2254.9--3712~is accreting at a super-Eddington rate ($\hbox{$\dot M$} / \hbox{$\dot m_{\rm Edd}$} > 1$). They found that super-Eddington accretion can lead to a limit relation between the BH mass and the FWHM of the broad lines, indicating that super-Eddington accretors radiate close to their Eddington luminosity, but accrete above the Eddington limit (see also \citealt{CollinKawaguchi04}). The structure of this paper is as follows: in Section~\ref{sec:obs} we describe the observations and data reduction, in Section~\ref{sec:psd} we present the power spectral analysis and QPO identification. We explore the time delays, frequency dependent energy spectra and principal component analysis in Section~\ref{sec:var}. In Section~\ref{sec:disco} we discuss these results and the QPO identification.
\section{Observations and data reduction} \label{sec:obs} \begin{figure} \centering \includegraphics[width=0.44\textwidth,angle=0]{ltcrv_pnmos.eps} \caption{Background-subtracted source (blue) and background (grey) EPIC-pn light curves for the 0.3--0.7\,keV (a), 0.7--1.2\,keV (b), 1.2--5.0\,keV (c) and 5.0--10.0\,keV (d) bands. A binsize of $\Delta t = 200$\,s is used for plotting purposes. The QPO-filtered 1.2--5.0\,keV band light curve (red) and original (black) are shown in panel (e). A bandpass filter with width $\pm 20 \%$ of the QPO frequency was applied, see Section~\ref{sec:psd} for details. Panel (f) shows the combined EPIC-MOS 1+2 background-subtracted source (blue) and background (grey) 1.2--5.0\,keV band light curves.} \label{fig:ltcrv} \end{figure} We make use of the single $\sim 70$\,ks {\it XMM-Newton~\/} observation of MS 2254.9--3712~from 2005 ({\textsc OBS ID:} 0205390101). For the timing analysis in this paper we use both the EPIC-pn (European Photon Imaging Camera; \citealt{struder01}) and EPIC-MOS data. These observations were taken in small window mode. The MOS data is split into 3 exposures, of which we use only exposure 3, with a duration of 62\,ks. We processed the Observation Data Files (ODFs) following standard procedures using the {\it XMM-Newton~\/}\ Science Analysis System (SAS v13.5.0), with the most recent calibration files as of October 2014. We processed the data using the filtering conditions {\tt PATTERN} = 0--4 ($\leq 12$ for MOS) and {\tt FLAG} = 0 (\#XMMEA\_EM for MOS). We extract source light curves from a 20 arcsec circular region. The background was taken from a large rectangular region on the same chip, approximately 15 times larger than the source region and placed away from the chip edges. Various background regions were also used in the following analysis, and the choice of background region was found to have no significant effect on the results.
For the PSD analysis in Section~\ref{sec:psd} we filtered soft proton flares using a threshold of $0.5~{\rm ct~s}^{-1}$ in the 10.0--12.0\,keV background light curve. We linearly interpolate across any gaps less than 500\,s and add Poisson noise. This background rate cut removes the period of high background flaring at the start of the observation, giving 58\,ks of high-quality data where the EPIC-pn and EPIC-MOS overlap. This ensures the highest signal-to-noise (S/N) light curves in the PSD analysis, which is most sensitive to uncorrelated (Poisson) noise. After accounting for the flares at the beginning of the observation, the interpolation fraction is negligible in the remaining 58\,ks. For the remaining analysis in Section~\ref{sec:var} onwards we filter the data for soft proton flares using a threshold of $2~{\rm ct~s}^{-1}$ in the 10.0--12.0\,keV background light curve. Again, we linearly interpolate across any short gaps and add Poisson noise, although the interpolation fraction was negligible using this rate cut. This rate cut allows for the full 70\,ks of EPIC-pn data to be used in the cross-spectral analysis in Section \ref{sec:var}, enabling us to probe to lower frequencies and increasing the frequency resolution. The resulting full background-subtracted source light curves (with $\Delta t = 200$\,s for plotting purposes) for several energy bands are shown in Fig.~\ref{fig:ltcrv} (blue), as well as the background light curve (grey). We show only the $1.2-5.0$\,keV MOS band for illustrative purposes. With a mean count rate of $1.35 {\rm ~ct s^{-1}}$ in the 0.3--10.0\,keV band, pile-up is negligible in this observation. A binsize $\Delta t = 100$\,s is used in the timing analysis throughout. \section{Energy-dependent power spectrum} \label{sec:psd} The PSD was estimated using the standard method of calculating the periodogram (e.g. \citealt{priestley81}; \citealt{PercivalWalden93}), with an $\rmn{[rms/mean]}^2$ normalisation (e.g.
\citeauthor[][2003a]{vaughan03a}). Motivated by the PSD analysis in A14 we estimated the periodogram in four energy bands; 0.3--0.7\,keV, 0.7--1.2\,keV, 1.2--5.0\,keV and 5.0--10.0\,keV. The PSD of the 1.2--5.0\,keV band is shown in Fig.~\ref{fig:psd1}. The energy bands were chosen in order to investigate the association of a QPO feature with a particular spectral component, whilst maintaining a high signal-to-noise ratio (S/N) PSD. \begin{figure} \centering \includegraphics[width=0.3\textwidth,angle=90]{psdmod_1200_5000_pnmos.eps} \caption{The 1.2--5.0\,keV band PSD and model fits are shown in panel (a), for model 1 (red) and model 2 (blue). The data/model residuals for models 1 and 2 are shown in panels (b) and (c), respectively.} \label{fig:psd1} \end{figure} \begin{figure} \centering \includegraphics[width=0.3\textwidth,angle=90]{psdmod_300_700_pnmos.eps} \caption{The 0.3--0.7\,keV band PSD and model fits are shown in panel (a), for model 1 (red) and model 2 (blue). The data/model residuals for models 1 and 2 are shown in panels (b) and (c), respectively.} \label{fig:psd2} \end{figure} \begin{figure} \centering \includegraphics[width=0.3\textwidth,angle=90]{psdmod_700_1200_pnmos.eps} \caption{The 0.7--1.2\,keV band PSD and model fits are shown in panel (a), for model 1 (red) and model 2 (blue). The data/model residuals for models 1 and 2 are shown in panels (b) and (c), respectively.} \label{fig:psd3} \end{figure} \begin{table*} \centering \caption{Results of model fits to the pn+MOS PSDs. Column (1) shows the energy range, column (2) shows the posterior predictive $p$-values (ppp) for the LRT between the \textit{null} hypothesis and alternative hypothesis. Columns (3) and (4) show the ppp for $T_{\rmn{SSE}}$ and $T_{\rmn R}$, respectively. Column (5) shows the ratio $ R_j = 2I_j/S_j$, where $j$ is the QPO frequency, $f_{\rm QPO} = 1.5 \times 10^{-4}$\,Hz and (6) is the absolute rms at $f_{\rm QPO}$. 
Columns (7), (8), (9) and (10) show the best fit model parameters with their 68.3 per cent confidence intervals. For Model 2, column (8) shows the $\alpha_{\rm high}$ parameter.} \begin{tabular}{l ccc ccc ccc} \hline {\it En} band & $p_{\rmn{LRT}}$ & $p_{\rmn{SSE}}$ & $p_{\rmn R}$ & $R_j$ & rms($f_{\rm QPO}$) & log(N) & $\alpha$ & log($\nu_{\rm bend}$) & $P_{\rm noise}$ \\ \vspace{-0.3cm}\\ keV & \multicolumn{4}{c}{} & $\%$ & & & Hz&\\ (1) & (2) &(3) &(4) &(5) &(6) &(7) &(8) &(9) &(10) \\ \hline \multicolumn{10}{c}{Model 1} \\ 0.3--0.7 & 0.1762 & 0.7750 & 0.2182 & 5 & 3 & $-9.2\substack{+1.1 \\ -1.2}$ & $2.6\substack{+0.4 \\ -0.3}$ & - & $0.75\substack{+0.04 \\ -0.04}$ \\ 0.7--1.2 & 0.1196 & 0.0435 & 0.0748 & 9 & 4 & $-8.5\substack{+1.3 \\ -1.5}$ & $2.4\substack{+0.4 \\ -0.3}$ & - & $1.07\substack{+0.05 \\ -0.05}$ \\ 1.2--5.0 * & 0.0012 & 0.0010 & 0.0010 & 18 & 6 & $-9.2\substack{+1.2 \\ -1.3}$ & $2.5\substack{+0.4 \\ -0.3}$ & - & $0.85\substack{+0.04 \\ -0.04}$ \\ \multicolumn{10}{c}{Model 2} \\ 0.3--0.7 & 0.2270 & 0.7760 & 0.7828 & 5 & 3 & $-2.2\substack{+0.3 \\ -0.4}$ & $4.6\substack{+0.4 \\ -0.7}$ & $-4.09\substack{+0.10 \\ -0.15}$ & $0.76\substack{+0.04 \\ -0.03}$ \\ 0.7--1.2 & 0.6448 & 0.0452 & 0.0757 & 6 & 4 & $-2.5\substack{+0.4 \\ -0.4}$ & $4.1\substack{+0.6 \\ -0.7}$ & $-3.92\substack{+0.11 \\ -0.15}$ & $1.06\substack{+0.04 \\ -0.05}$ \\ 1.2--5.0 & 0.5230 & 0.0252 & 0.0384 & 10 & 6 & $-2.7\substack{+0.4 \\ -0.4}$ & $4.9\substack{+0.7 \\ -0.7}$ & $-3.74\substack{+0.24 \\ -0.25}$ & $0.85\substack{+0.03 \\ -0.04}$ \\ \hline \end{tabular} \label{fitresults} \end{table*} We fitted the PSDs with simple continuum models and searched for significant data/model outliers using the maximum likelihood method of \citet[][hereafter V10]{vaughan10}. The fitting procedure distinguishes between continuum models before testing the preferred continuum model for deficiencies that indicate the presence of a significant narrow coherent feature. 
In this way we are sensitive to QPOs that are constrained to one frequency bin width. The details of the model fitting are given in A14 and we refer the reader to V10 (and references therein) for a full discussion. A likelihood ratio test (LRT) statistic (eq. 22 of V10) was used to select between the continuum models, $H_0$ and $H_1$ (e.g. \citealt{ProtassovETAL02}; V10). Following V10, the \textit{null}-hypothesis model $H_0$ was rejected using the criterion $p_{\rm LRT} < 0.01$, which the simulation results of V10 suggest is a conservative estimate. Once the preferred continuum model has been selected, the presence of narrow coherent features is investigated using two test statistics. Markov Chain Monte Carlo (MCMC) simulations were used to find the test statistic distribution and the associated posterior predictive $p$-value (ppp). The overall model fit is assessed using the summed square error, $T_{\rm SSE}$ (eq. 21 of V10), which is analogous to the traditional chi-square statistic. A small $p_{\rm SSE}$ indicates an inadequacy in the continuum modelling. Significant outliers are investigated using $T_{\rm R} = {\rm max}_j \hat{R}_j$, where $\hat{R}_j = 2I_{j} / \hat{S}_j$ and $I_j$ is the observed periodogram and $S_j$ is the model power spectrum at frequency $\nu_j$. A small $p_{\rm R}$ indicates that the largest outlier is unusual under the best-fitting continuum model and the presence of a QPO is inferred. Following A14 (and references therein) we use two simple continuum models: a power law plus constant (Model 1): \begin{equation} \label{eqn:pl} P(\nu) = N \nu^{- \alpha} + C \end{equation} \smallskip \noindent with normalisation $N$; and the slightly more complex Model 2, a bending power law (e.g.
\citealt{mchardy04}): \begin{equation} \label{eqn:bendpl} P(\nu) = \frac{N \nu^{-{\alpha}_{\rm low}}}{1 + (\nu / \nu_{\rm bend})^{{\alpha}_{\rm high}-{\alpha}_{\rm low}}} + C \end{equation} \smallskip \noindent where ${\alpha}_{\rm high}$ is the high frequency slope and ${\alpha}_{\rm low}$ is the slope below the bend frequency, $\nu_{\rm bend}$. In both models, the Poisson noise level is described in the fitting process using the non-negative, additive constant, $C$. In Model 2, $\nu_{\rm bend}$ was initially set to the value $1.5 \times 10^{-4}$\,Hz, the best fitting value found in \citet[][hereafter GMV12]{GonzalezMartinVaughan12}. The prior distributions on all model parameters have a $3 \sigma$ dispersion around the mean level (see V10, section 9.4). The results of the PSD fitting are shown in Table~\ref{fitresults}, along with the 68.3 per cent (1$\sigma$) confidence intervals on the model parameters. The 1.2--5.0\,keV band is the only energy band to display a significant outlier at $\sim 1.5 \times 10^{-4}$\,Hz ($p_{\rmn{SSE}} = 0.001$; $p_{\rmn R} = 0.001$), with Model 1 preferred ($p_{\rmn{LRT}} = 0.0012$). Fig.~\ref{fig:psd1} shows the best fitting models to the 1.2--5.0\,keV band. Despite Model 1 being preferred, the $p$-values of Model 2 are moderately low ($p_{\rmn{SSE}} = 0.03$; $p_{\rmn R} = 0.03$), indicating the outlier at $\sim 1.5 \times 10^{-4}$\,Hz is unusual even under the best fitting Model 2. In Appendix~\ref{ap:psd} we show the individual pn and MOS 1.2--5.0\,keV PSDs. The best fitting models to the 0.3--0.7\,keV and 0.7--1.2\,keV bands are shown in Figs.~\ref{fig:psd2} and \ref{fig:psd3}, respectively. Although no formally significant outlier was detected in these two bands, some structure in the PSD can be seen at $\sim 1.5 \times 10^{-4}$\,Hz. The 5.0--10.0\,keV band PSD is dominated by Poisson noise and hence we do not show it here. The QPO at $\sim 1.5 \times 10^{-4}$\,Hz is confined to one frequency bin, making it highly coherent.
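For reference, the two continuum models can be written as short functions. The sketch below implements the bending power law in its standard form, with $P \propto \nu^{-\alpha_{\rm low}}$ well below $\nu_{\rm bend}$ and $P \propto \nu^{-\alpha_{\rm high}}$ well above it; the constant $C$ absorbs the Poisson noise level, as in the fits described above. Function names are illustrative, not from the fitting code used here.

```python
import numpy as np

def model1(nu, N, alpha, C):
    """Model 1: power law plus constant, P(nu) = N nu^-alpha + C."""
    return N * nu ** (-alpha) + C

def model2(nu, N, a_low, a_high, nu_bend, C):
    """Model 2: bending power law (McHardy et al. 2004 form).
    Slope -a_low well below nu_bend, -a_high well above it."""
    return N * nu ** (-a_low) / (1.0 + (nu / nu_bend) ** (a_high - a_low)) + C
```

The limiting log-log slopes of `model2` are $\alpha_{\rm low}$ and $\alpha_{\rm high}$, which is a quick sanity check on the sign conventions.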
The quality factor $Q = \nu / \Delta \nu$ is $\sim 8$. The QPO rms fractional variability for each band is given in column 6 of Table~\ref{fitresults}. A value of $6 \%$ is observed in the 1.2--5.0\,keV band. If we assume the narrow outliers at $\sim 1.5 \times 10^{-4}$\,Hz in the two softer bands also indicate the presence of a QPO, then we observe an increase in the QPO rms with increasing energy. The QPO is not as apparent in the light curve as those observed in BHBs or in the $1.0-4.0$\,keV band of RE J1034+396~(see A14 Fig.~1). To illustrate the QPO we apply a bandpass filter to the 1.2--5.0\,keV light curve, with a frequency width $\pm 30 \%$ of the QPO frequency. This removes the variations outside of the filter window, allowing the variations on the timescale of the filter bandpass to be seen. The filtered light curve is plotted in Fig.~\ref{fig:ltcrv} panel (e), where the quasi-periodic nature of the light curve is now apparent. We note here, however, that a narrow filter applied to a pure noise signal will also produce a quasi-sinusoidal time series, except that the amplitude of the oscillation will be greatly reduced. The deviation from a quasi-periodic signal in the $1.2-5.0$\,keV light curve is most likely due to the red-noise nature of the broadband variability, which contributes a substantial amount of power at low frequencies (see Fig.~\ref{fig:psd1}). These long-timescale trends then wash out the quasi-periodic signal. The amplitude of the broadband noise below the observed QPO in the $1.0-4.0$\,keV band of RE J1034+396~is much smaller than in MS 2254.9--3712, which is why the QPO is apparent in the light curve of RE J1034+396~(see A14 Fig.~1). \section{The frequency dependent variability} \label{sec:var} \subsection{The Cross-Spectrum} \label{sec:cspec} \begin{figure} \centering \includegraphics[width=0.4\textwidth,angle=0]{cs.eps} \caption{Cross-spectral products for the $0.3-0.7$\,keV and $1.2-5.0$\,keV bands.
The data are binned as described in Sec.~\ref{sec:cspec}. Panel (a) shows the PSDs with Poisson noise level estimates from the best fitting models described in Sec.~\ref{sec:psd}. Panel (b) shows the Poisson-noise corrected coherence between the two bands. The dashed line is the function $\gamma^2(f) = \exp(-f / 5 \times 10^{-4}$\,Hz). Panel (c) shows the time delays, where a positive value indicates a hard band lag. The open symbols highlight the estimates at the QPO and harmonic frequencies.} \label{fig:cs} \end{figure} In this section we explore the cross-spectral products (PSDs, coherence and time delays) between the 1.2--5.0\,keV band and the two softer bands. In this way we can study the frequency dependent correlations between the QPO and any components dominating at softer energies (see \citealt{uttley14rev} for a review). In this section and in the remaining analysis we use the 70\,ks EPIC-pn observation only (see Section~\ref{sec:obs} for details). This allows us to probe down to lower frequencies and provides more data for segment averaging and frequency binning. Following the method outlined in \citet{vaughannowak97} we calculate the cross-spectrum in $M$ non-overlapping time series segments, then average over the $M$ estimates at each Fourier frequency. To improve the signal-to-noise (S/N) in the resulting cross-spectra we averaged over neighbouring frequency bins, with each bin increasing geometrically by a factor 1.15 in frequency. In the following analysis we use a segment length of 35\,ks and $\Delta t = 20$\,s. The segment size and frequency binning were chosen in order to maximise the number of data points in each frequency bin whilst maintaining sufficient frequency resolution to pick out any interesting features in the cross-spectral products. Fig.~\ref{fig:cs} panel (a) shows the PSD for the $0.3-0.7$\,keV (black) and $1.2-5.0$\,keV (grey) bands. The $1.2-5.0$\,keV band shows the QPO at $\sim 1.5 \times 10^{-4}$\,Hz (open symbol).
The binned $1.2-5.0$\,keV band also shows tentative evidence for a 3:2 harmonic at $\sim 2 \times 10^{-4}$\,Hz (open symbol). From here on we refer to this frequency as the {\it harmonic}. From the cross-spectrum we obtain the coherence $\gamma^2$ (or {\it squared coherency}) between the two bands (e.g. \citealt{bendatpiersol86}). The coherence gives a measure of the linear correlation between the two bands, i.e. how much of one band can be predicted from the other. It is defined on the interval [0,1], where 1 is perfectly coherent and 0 is perfectly incoherent. Fig.~\ref{fig:cs} panel (b) shows the coherence between the $0.3-0.7$\,keV and $1.2-5.0$\,keV bands. The coherence of the broadband noise component is high ($\sim 1$) at low frequencies and decreases with increasing frequency. This pattern is typically observed in Seyferts (e.g. \citealt{alston13b}). At the QPO and harmonic frequencies we see $\sim$\,unity coherence (open symbols), which sits above the falling broadband noise coherence. Following \citeauthor[][2003b]{vaughan03b} we illustrate this by overlaying the function $\gamma^2(f) = \exp(-f / 5 \times 10^{-4}$\,Hz) in Fig.~\ref{fig:cs} panel (b). This describes the coherence of the broadband noise fairly well, whilst the coherence at the QPO and harmonic frequencies is clearly distinct from this component. From the cross-spectrum we also obtain a phase lag estimate at each frequency, $\phi(f)$, which is transformed into the corresponding time lag $\tau(f) = \phi(f) / (2 \pi f)$, with errors estimated following \citet{vaughannowak97} and \citet{bendatpiersol86}. We have previously performed extensive Monte Carlo simulations to check that this method produces reliable error estimates when the contribution of Poisson noise is large (see \citealt{alston13b}; \citealt{alston14a}).
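The cross-spectral quantities used in this section follow directly from segment-averaged Fourier transforms. A minimal sketch (omitting the Poisson-noise corrections and geometric frequency binning described above; the sign convention here is that a positive lag means band $y$ lags band $x$, and all names are illustrative):

```python
import numpy as np

def cross_spectrum(x, y, dt, nseg):
    """Segment-averaged PSDs, squared coherence and time lags between
    two simultaneous, evenly sampled light curves x and y."""
    n = len(x) // nseg                          # samples per segment
    segs = lambda a: np.split(a[:n * nseg], nseg)
    X = np.array([np.fft.rfft(s - s.mean()) for s in segs(x)])[:, 1:]
    Y = np.array([np.fft.rfft(s - s.mean()) for s in segs(y)])[:, 1:]
    f = np.fft.rfftfreq(n, dt)[1:]              # drop the zero-frequency bin
    Pxx = (np.abs(X) ** 2).mean(axis=0)
    Pyy = (np.abs(Y) ** 2).mean(axis=0)
    Cxy = (X * np.conj(Y)).mean(axis=0)         # cross-spectrum
    coh = np.abs(Cxy) ** 2 / (Pxx * Pyy)        # squared coherency, in [0, 1]
    lag = np.angle(Cxy) / (2.0 * np.pi * f)     # tau(f) = phi(f) / (2 pi f)
    return f, coh, lag
```

Feeding in two sinusoids separated by a known delay recovers unity coherence and that delay at the sinusoid frequency, which is a useful check of the phase convention.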
Fig.~\ref{fig:cs} panel (c) shows the frequency dependent time lags between the $0.3-0.7$\,keV and $1.2-5.0$\,keV bands, where we follow the convention of using a positive time lag to indicate a hard band lag. A hard lag at low frequencies is typically seen in variable Seyferts (e.g. \citealt{demarco13lags}). We find no significant hard lag in MS 2254.9--3712, but we do see a $\sim 3 \sigma$ negative lag (hereafter soft lag) at the lowest frequency. A highly significant ($\sim 5 \sigma$) soft lag is also seen at the QPO and harmonic frequencies (open symbols). We also measured the cross-spectral products between the $0.7-1.2$\,keV and $1.2-5.0$\,keV bands, but we do not show them here. The coherence between the two bands is $\sim 1$ up to $\sim 3 \times 10^{-4}$\,Hz (i.e. above the harmonic frequency), above which the Poisson noise dominates. No significant lag is found at the QPO or harmonic frequency. \subsection{Time delays as a function of energy} \begin{figure} \centering \includegraphics[width=0.4\textwidth,angle=0]{lagen_allen.eps} \caption{The lag-energy spectrum for the noise ($0.7 - 1.0 \times 10^{-4}$\,Hz), QPO ($1.3 - 1.6 \times 10^{-4}$\,Hz) and harmonic ($1.8 - 2.1 \times 10^{-4}$\,Hz) frequencies. The open circles in panel (c) are the same data but with higher energy resolution.} \label{fig:lagen} \end{figure} A related technique is to study the time delays at a particular frequency as a function of energy. The \emph{lag-energy} spectrum can be calculated by estimating the cross-spectrum between a comparison energy band and a broad (in energy) reference band (e.g. \citealt{zoghbi11a}; \citealt{alston14a}). If the comparison energy band falls within the reference band it is subtracted from the reference band, in order to avoid correlated errors. We use the $0.3-5.0$\,keV band as the reference band due to its high S/N.
We compute the lag-energy spectrum at a range of frequencies, including the broadband noise ($0.7 - 1.0 \times 10^{-4}$\,Hz), QPO ($1.3 - 1.6 \times 10^{-4}$\,Hz) and harmonic ($1.8 - 2.1 \times 10^{-4}$\,Hz) frequencies, and show these in Fig.~\ref{fig:lagen}. A positive value indicates that the band lags the broad reference band on average. The broadband noise in panel (a) shows a lag that increases log-linearly between $\sim 1.0$ and 10.0\,keV. This trend is observed at low frequencies in BHBs (e.g. \citealt{miyamoto89}; \citealt{nowak99}) and AGN (e.g. \citealt{papadakis01}; \citeauthor[][2003a]{vaughan03a}; \citealt{mchardy04}; \citealt{arevaloetal08}; \citealt{kara13c}; \citealt{alston14a}; \citealt{LobbanETAL14}). The lag-energy spectrum shows zero lag between 0.3 and $\sim 1$\,keV. The QPO lag-energy spectrum in Fig.~\ref{fig:lagen} panel (b) shows, on average, the softer bands lagging behind the harder bands. This is consistent with the soft lag seen in the lag-frequency spectrum in Fig.~\ref{fig:cs} panel (c). A similar lag-energy shape is observed at high frequencies in several Seyfert 1s (e.g. \citealt{kara13c}; \citealt{alston14a}). Many of these sources also display a lag between the primary continuum (e.g. $1.0-4.0$\,keV) and the iron K$\alpha$ band at 6.4\,keV. In the QPO lag-energy spectrum the error bars are large above $\sim 5$\,keV and no clear lag in the iron K$\alpha$ band is seen, hence we bin over the $5.0-10.0$\,keV range. This is most likely due to the low number of cross-spectral estimates being averaged in this frequency band. The lag-energy spectrum for the harmonic frequency is shown in Fig.~\ref{fig:lagen} panel (c). A significant ($\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 4 \sigma$) lag is observed between the $\sim 1 - 4$\,keV continuum band and the $\sim 5-7$\,keV band, which contains the iron K$\alpha$ band.
To show that the iron K$\alpha$ lag is not sensitive to the choice of energy binning we compute the lag-energy spectrum at higher resolution (open symbols), which clearly follows the same shape. Despite the PSD being dominated by Poisson noise above 5\,keV, we are able to pick out a significant lag at energies higher than this due to the significant power in the broad reference band and the high coherence between the energy bands. The average lag between the $0.3-0.7$\,keV and $1.2-5.0$\,keV bands is soft, consistent with the lag-frequency spectrum at the harmonic frequency in Fig.~\ref{fig:cs} panel (c). A dip in the lag-energy spectrum at $3-4$\,keV compared to the remaining bands can also be seen at the harmonic in panel (c). This feature has now been observed in the high frequency lag-energy spectra of several NLS1s (e.g. \citealt{kara13b}), but is yet to be explained. \subsection{Frequency dependent energy spectrum} \begin{figure} \centering \includegraphics[width=0.42\textwidth,angle=0]{fvar.eps} \caption{The frequency resolved variability spectrum for the low-frequency noise ($1.4 - 4.3 \times 10^{-5}$\,Hz), QPO ($1.3 - 1.6 \times 10^{-4}$\,Hz) and harmonic ($1.9 - 2.1 \times 10^{-4}$\,Hz) frequencies. Panel (a) shows the absolute rms spectra as well as the mean (time-averaged) energy spectrum unfolded to a power law with index 0 and normalisation 1 (i.e. `fluxed' spectra). Panel (b) shows the fractional rms spectrum ($F_{\rm var} = {\rm rms / mean}$).} \label{fig:rms} \end{figure} Using frequency-resolved rms spectra we investigate the energy dependence of the variability on different timescales (e.g. \citealt{Edelson2002}; \citealt{MarkowitzETAL03b}; \citealt{vaughan03a}). We calculate the rms in a given energy band by integrating the noise subtracted PSD (using an rms normalisation) over the frequency range of interest (bounded by $1/T$ and $1/(2 \Delta t)$). This gives the rms spectrum in absolute units.
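The band-limited rms estimate described above can be sketched in a few lines; this is an illustration of the standard procedure (rms-normalised periodogram, noise subtraction, integration over the band), applied to a synthetic light curve rather than the data analysed here. The noise level, bin size and band edges are arbitrary choices of ours.

```python
import numpy as np

def band_rms(rate, dt, fmin, fmax, noise_level=0.0):
    """Absolute rms in a frequency band from the noise-subtracted,
    fractional-rms-squared-normalised periodogram."""
    n = len(rate)
    mean = rate.mean()
    freq = np.fft.rfftfreq(n, d=dt)[1:]
    psd = 2 * dt / (n * mean**2) * np.abs(np.fft.rfft(rate - mean)[1:])**2
    df = freq[1] - freq[0]
    sel = (freq >= fmin) & (freq < fmax)
    var = np.sum(psd[sel] - noise_level) * df   # fractional variance in the band
    return np.sqrt(max(var, 0.0)) * mean        # divide by mean for F_var

# A sinusoid of amplitude A contributes variance A^2/2, so the recovered
# absolute rms should be A/sqrt(2).
t = np.arange(1024)
rate = 100 + 5 * np.sin(2 * np.pi * 32 * t / 1024)   # exactly Fourier-resolved
rms = band_rms(rate, dt=1.0, fmin=0.02, fmax=0.05)
```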
The fractional rms spectrum is obtained by dividing the rms spectra by the mean count rate in each energy band. Following \citet{PoutanenETAL08} we calculate errors using Poisson statistics. Energy bands are made sufficiently broad such that no time bins have zero counts. We compute the rms spectra in three frequency bands: $1.4 - 4.3 \times 10^{-5}$\,Hz (noise), $1.3 - 1.6 \times 10^{-4}$\,Hz (QPO) and $1.9 - 2.1 \times 10^{-4}$\,Hz (harmonic). Fig.~\ref{fig:rms} shows the rms spectra in absolute units (panel a) and fractional units (panel b). The noise has a soft spectral shape, whereas the QPO and harmonic are both spectrally hard. This same dependence of spectral shape on frequency is observed in the NLS1 galaxies PG 1244+026~and RE J1034+396, which are also believed to be accreting close to the Eddington rate (\citealt{MiddletonETAL09}; \citealt{middletonetal11}; \citealt{jinETAL13}). The time averaged spectrum is also shown in Fig.~\ref{fig:rms} panel (a). We obtain a good fit to the data ($\chi^{2} = 773$ for $784$ degrees of freedom) with a spectral model consisting of two absorbed power laws (PL) and a neutral reflection component {\sc pexmon} (\citealt{NandraETAL07}). The spectral indices of the PL components are $2.84 \pm 0.04$ and $1.43 \pm 0.06$, respectively, consistent with the values reported in \citet{BianchiETAL09}. We use {\sc tbabs} (\citealt{wilms2000tbabs}) for the total absorption and find a value of $N_{\rm H} = 1.7 \times 10^{20}~{\rm cm}^{-2}$, consistent with the value of neutral absorption from \citet{WillingaleETAL13}. The {\it XMM-Newton~\/} RGS spectrum shows no signatures of ionised absorption. We fit the absolute rms spectra at each frequency with a single absorbed PL, with $N_{\rm H}$ fixed as above. The spectral index of the low-frequency noise is $2.7 \pm 0.1$, consistent with the soft component in the mean spectrum. The spectral indices of the QPO and harmonic are $2.0 \pm 0.1$ and $1.5 \pm 0.2$, respectively.
The spectral index of the harmonic is consistent with the hard PL required in the time averaged spectrum. The smooth energy dependence of the QPO and harmonic variability suggests the QPO process is indeed present in the softer energy bands, despite the non-detection of the QPO in the PSD. A related method for studying the variable energy spectra on a given timescale is covariance spectra (\citealt{WilkinsonUttley09}). Using a high S/N reference band, the correlated variability is picked out in a given comparison band, thus improving the S/N of the variability spectrum compared to the rms spectrum. The covariance spectra can also be used to investigate the correlated variability between a given energy band and the remaining individual energy bands. We compute the covariance spectra in the Fourier domain (\citealt{uttley2011}; \citealt{CassatellaETAL2012}; \citealt{uttley14rev}) in the same frequency bands used for the rms spectra. We use two reference energy bands to compute the covariance spectra: $0.3-0.7$ and $1.2-5.0$\,keV. The covariance spectra from both reference bands are identical to the rms spectra shown in Fig.~\ref{fig:rms}. The same is true for any reference band investigated. This is unsurprising given the $\sim$~unity coherence observed between the soft and hard energy bands in Sec.~\ref{sec:cspec}. \subsection{Principal component analysis} \begin{figure} \centering \includegraphics[width=0.42\textwidth,angle=0]{rej_comparison.eps} \caption{The first (top) and second (bottom) principal components (PCs) for MS 2254.9--3712~and RE J1034+396. PC1 contains $\sim 90 \%$ of the variability in each source and PC2 $\sim 5 \%$. The shapes of the PCs are remarkably similar in the two sources, and are themselves different to the two primary PCs seen in other Seyferts (\citealt{parker15a}).} \label{fig:pca} \end{figure} A complementary way to investigate the spectral variability is Principal Component Analysis (PCA; e.g.
\citealt{VaughanFabian04}; \citealt{ParkerETAL14a}). PCA decomposes a dataset into a set of orthogonal eigenvectors, or principal components (PCs; \citealt{Kendall75}). When applied to X-ray data, the variability of the source spectrum is broken down into a set of variable spectral components. If the source variability consists of a linear sum of uncorrelated and spectrally distinct physical components, then an exact description of the physical components is obtained. Whereas rms and covariance spectra are somewhat model dependent, PCA in principle produces the individual variable spectral components in a model independent way. Fig.~\ref{fig:pca} shows the two PCA components. The majority of the variability is dominated by a component (PC1) that increases linearly with energy above $\sim 1$\,keV. The second component (PC2) accounts for $\sim 5 \%$ of the variability and has a soft spectral shape. Fig.~\ref{fig:pca} also shows the first two PCA components for RE J1034+396~(from \citealt{parker15a}). The primary PCA components are practically identical in shape and amplitude. The second PCA components both show a similar soft spectral dependence; however, RE J1034+396~is softer below $\sim 2.0$\,keV and spectrally harder above $\sim 3$\,keV. Typical NLS1s (e.g. MCG--6--30--15; \citealt{ParkerETAL14a}) have a PC1 which can be described by a soft power law (\citealt{parker15a}). Their PC2 is due to spectral pivoting, and higher order PCs are associated with ionised reflection. The PCA of absorption-dominated Seyfert 1s also has a distinct spectral variability shape (e.g. NGC 1365; \citealt{ParkerETAL14b}). \section{Discussion and Conclusions} \label{sec:disco} We have presented an analysis of the energy dependent variability in the Seyfert 1 galaxy MS 2254.9--3712, based on a $\sim 70$\,ks {\it XMM-Newton~\/} observation. We found a significant ($\sim 3.3 \sigma$) QPO at $\sim 1.5 \times 10^{-4}$\,Hz in the PSD of the $1.2-5.0$\,keV band.
The QPO is coherent, $Q \sim 8$, and has an rms of $\sim 6$ per cent. No significant QPO is observed in softer energy bands, although there is evidence in the PSD for some structure at the QPO frequency. A highly coherent soft lag is seen between the $1.2-5.0$\,keV and $0.3-0.7$\,keV bands at the QPO frequency and at the frequency $\sim 2 \times 10^{-4}$\,Hz. This strongly suggests the presence of a harmonic QPO component in a frequency ratio 3:2, although it does not constitute a detection. The coherence of the broadband noise is high at low frequencies and drops off at higher frequencies. The highly coherent soft lag suggests that the weak periodic feature seen in the $0.3-0.7$\,keV band is actually the reprocessed hard QPO emission. An iron K$\alpha$ lag is seen at the harmonic frequency. If this frequency is indeed related to the QPO, then this is the first time this reverberation signature responding to a QPO modulation has been reported in the literature. An iron K$\alpha$ reverberation lag responding to the QPO process was first observed in RE J1034+396~(\citeauthor[][{\it in prep}]{Markeviciute14prep}), making this feature unique to these two sources. A soft lag is observed at the QPO frequency in MS 2254.9--3712~but the data are insufficient to detect a clear iron K$\alpha$ lag at this frequency. If it is really absent at the QPO frequency, then the lack of an iron K$\alpha$ reverberation lag suggests some geometrical dependence of the QPO and harmonic components: the disc can only respond to the harmonic oscillation, but not the QPO. The QPO and harmonic variability have a hard energy dependence and are associated with the hard power law spectral component. No significant lag is observed between the $0.7-1.2$\,keV and $1.2-5.0$\,keV bands at any frequency. This could be due to the same spectral component, modulated by the QPO process at higher frequencies, dominating the spectrum across these energies.
The X-ray and broadband energy spectrum will be investigated in detail in a follow up paper. The similarities in the PCA of MS 2254.9--3712~and RE J1034+396~indicate that the same variability process is occurring in these two sources. The primary PCA component in both sources has a hard spectral shape. The QPO is also preferentially detected at harder energies, indicating the QPO has an intrinsically hard spectral shape. \citeauthor{parker15a} measured the PCA in a sample of 26 objects and found the PCA of RE J1034+396~to be different to other well studied Seyfert 1s. They performed extensive simulations to account for the wide range of spectral variability. However, the hard shape of the primary PCA component in RE J1034+396~could not be reproduced. This suggests the QPO variability is modulating the spectral components in a different way to the variability process dominating in other Seyfert 1s. \subsection{Comparison with previous results} GMV12 analysed the $0.2-2.0$\,keV, $2.0-10.0$\,keV and $0.2-10.0$\,keV PSD of MS 2254.9--3712~and searched for the presence of QPOs. They reported no significant QPO in these energy bands. Our lack of a significant detection in the $0.3-0.7$\,keV and $0.7-1.2$\,keV bands is consistent with their results. Our detection of a significant QPO in the $1.2-5.0$\,keV band is most likely due to the $5.0-10.0$\,keV band being dominated by Poisson noise, thus affecting the detectability of the QPO over the broader energy band. Indeed, we repeated our analysis using the energy bands of GMV12 and find no significant QPOs. GMV12 found a very weak preference for Model 2. However, their best fit parameters were very unusual for this model (e.g. $\alpha_{\rm high} \sim -8$ for the $0.2-2.0$\,keV band). Our value of $\alpha_{\rm high} \sim -4.5$ is marginally consistent with the mean slope of $\sim 3.1$ found for 15 sources with a strongly detected bend in GMV12.
Our larger value could indicate that the presence of the QPO is distorting the continuum modelling. Our stricter data selection is most likely responsible for the difference from the best fit model parameters found by GMV12. \subsection{Understanding the time delays} An approximately log-linear hard lag is observed at low frequencies ($\sim 9 \times 10^{-5}$\,Hz) where the broadband noise dominates. This is consistent with the observed time lags at low frequencies in BHBs and AGN, which are currently best explained by the model of radial propagation of random accretion rate fluctuations through a stratified corona (e.g. \citealt{arevalouttley06}). At the lowest frequencies investigated ($\sim 3 \times 10^{-5}$\,Hz) a tentative soft lag is observed. This soft lag at very low frequencies has only been observed in a handful of Seyferts: the low-flux observations of NGC 4051 (\citealt{alston13b}), MCG--6--30--15 (\citealt{kara14b}) and NGC 1365 (\citealt{kara15a}). The origin of this low-frequency soft lag is still unclear, as is whether the same mechanism is responsible in the three sources in which it has so far been detected. \citet{zoghbi11b} detected a soft lag at the QPO frequency in RE J1034+396, and found the soft lag to be broader in frequency than the QPO. We observe the QPO and soft lag in MS 2254.9--3712~to have the same frequency width (see Fig.~\ref{fig:cs}). A14 found evidence for a QPO harmonic component in RE J1034+396. The broader lag observed in RE J1034+396~may then be due to the lag at the harmonic frequency, but the current data are insufficient to individually resolve this feature. The time lags at the HFQPO frequencies in a sample of BHBs were presented by \citet{Mendez13}. With the exception of the 35\,Hz QPO in GRS 1915+105, all of the lags detected were hard lags, different to what we observe in MS 2254.9--3712. The sign of the observed lag in accreting BHs could be due to the relative fluxes of the intrinsic and lagging components (e.g.
the primary power-law and soft reflection). Indeed, \citet{alston13b} found a strong dependence of the lag direction on source flux in the NLS1 galaxy NGC 4051, with the flux changes dominated by changes in the primary continuum normalisation (e.g. \citealt{vaughan11a}). Alternatively, the sign of the observed lag may be due to some other system parameter, such as inclination. It is also possible that the hard lags observed by \citet{Mendez13} are in fact the same lagging process seen above 10\,keV in AGN with {\it NuSTAR}, which is inferred as the Compton hump lagging the primary continuum (\citealt{zoghbi14}; \citealt{kara14b}). \subsection{QPO identification} In this section we discuss the identification of the QPO in terms of LF or HF type. HFQPOs in BHBs typically have a fractional rms $\sim 5 \%$ (\citealt{RemillardMcClintock06}). The 6 per cent QPO rms in MS 2254.9--3712~is consistent with the value observed in BHB HFQPOs, and the $\sim 8$ per cent observed in RE J1034+396~(A14). A $Q \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 2$ is typically observed for HFQPOs in BHBs (e.g. \citealt{casella04}), consistent with our value of $Q \sim 8$. HFQPOs in BHBs are observed to display harmonic components in their power spectra, with an integer ratio of 3:2 (e.g. \citealt{remillard2002, remillard2003prec}; \citealt{RemillardMcClintock06}). We find strong evidence for the presence of a harmonic component with ratio 3:2, which strongly suggests this is a HFQPO in MS 2254.9--3712. The QPO in MS 2254.9--3712~displays many similar timing properties to the HFQPO in RE J1034+396~(\citealt{middletonetal11}) and Swift J164449.3+573451 (\citealt{ReisETAL13}). The QPOs in these two sources are also dominated by the hard X-ray component, suggesting a similar origin to the QPO in MS 2254.9--3712. The current best estimate of the mass of RE J1034+396~is $\hbox{$M_{\rmn{BH}}$} \sim 1 - 4 \times 10^{6} \hbox{$\thinspace M_{\odot}$}$ (\citealt{BianHuang10}).
The factor $\sim 2$ in QPO frequency observed in these two AGN is then consistent with the factor $\sim 2$ in BH mass, if the QPO is caused by the same process, which then scales linearly with black hole mass. HFQPOs are a common feature of the very high/intermediate state (steep power-law state) in BHBs (e.g. \citealt{RemillardMcClintock06}). These states are also characterised by mass accretion rates at or near Eddington (e.g. \citealt{Nowak1995}; \citealt{vanderklis95}). The BHB GRS 1915+105~displays HFQPOs at 35 and 67 Hz when in a super-Eddington state (e.g. \citealt{morgan97}; \citealt{Cui99}; \citealt{belloni06}; \citealt{ueda09}; \citealt{MiddletonDone10}). \citet{wang03} have suggested that MS 2254.9--3712~is a super-Eddington accretor. RE J1034+396~(\citealt{middletonetal11}) and Swift J164449.3+573451 (\citealt{ReisETAL13}) are also believed to be accreting at or around Eddington. It is then natural to associate MS 2254.9--3712~with the high accretion rate states of BHBs, arguing in favour of a HFQPO in this source. The HFQPO fundamental in BHBs approximately follows the relation $\nu_0 = 931(\hbox{$M_{\rmn{BH}}$} / \hbox{$\thinspace M_{\odot}$} )^{-1}$\,Hz, where $\nu_0$ is the (often unobserved) QPO fundamental (\citealt{RemillardMcClintock06}). If the QPO we detect at $\sim 1.5 \times 10^{-4}$\,Hz is the harmonic $2 \nu_0$ then we estimate $\hbox{$M_{\rmn{BH}}$} \sim 6 \times 10^{6} \hbox{$\thinspace M_{\odot}$}$. This is consistent with the value $\hbox{$M_{\rmn{BH}}$} \sim 4 \times 10^{6} \hbox{$\thinspace M_{\odot}$}$ determined from the $R_{\rm BLR}-\lambda L_{\lambda}(5100 {\rm \AA})$ relation (\citealt{grupe04}). \citet{demarco13lags} found a close relation between the soft lag frequency and BH mass in a sample of Seyfert 1s. Assuming the same reverberation process is occurring in MS 2254.9--3712, the frequency of the QPO soft lag would then indicate $\hbox{$M_{\rmn{BH}}$} \sim 2 \times 10^7 \hbox{$\thinspace M_{\odot}$}$.
This is consistent with the higher BH mass estimate $\hbox{$M_{\rmn{BH}}$} \sim 10^{7} \hbox{$\thinspace M_{\odot}$}$ (\citealt{shields03}), suggesting a HFQPO in MS 2254.9--3712. Alternatively, if the low-frequency soft lag is caused by the same reverberation process seen in \citet{demarco13lags}, a mass of $\hbox{$M_{\rmn{BH}}$} \sim 10^8 \hbox{$\thinspace M_{\odot}$}$ is inferred, again consistent with a HFQPO. LFQPOs have been observed up to $\sim 30$\,Hz in BHBs (\citealt{RemillardMcClintock06}). If the timescale of this process scales linearly with BH mass then we estimate an upper limit of $\hbox{$M_{\rmn{BH}}$} < 1 \times 10^6 \hbox{$\thinspace M_{\odot}$}$. Harmonic components to LFQPOs are often reported in BHBs; however, the frequencies are related by the ratio 2:1. The QPO in 2XMM J123103.2+110648 (\citealt{LinETAL13}) is only detected below $\sim 2$\,keV and no evidence for a hard PL component is seen. The QPO rms in this source is $\sim 25 - 50$ per cent, which is consistent with the rms of LFQPOs in BHBs. When LFQPOs are present in BHBs, the broadband noise component has a flat shape, with $\alpha \sim 1$ (e.g. \citealt{RemillardMcClintock06}). This is inconsistent with the shape of the broadband noise observed in MS 2254.9--3712, making it unlikely to be a LFQPO. MS 2254.9--3712~currently lacks a reverberation mapping mass estimate (e.g. \citealt{Peterson04}), which is required before the exact QPO mechanism can be robustly identified. From the arguments above we propose that the QPO observed in MS 2254.9--3712~is indeed the same as the HFQPO phenomenon observed in several BHBs. The origin of HFQPOs is still highly uncertain, but it is clear that they must arise from a physical process occurring in the direct vicinity of the BH. A long observation of MS 2254.9--3712~is required to independently confirm the presence of the QPO and harmonic component.
If indeed the observed iron K$\alpha$ reverberation is responding to the QPO process, this will allow us to understand both these processes in better detail, and provide an important constraint for any theoretical model for the origin of HFQPOs. \section*{Acknowledgements} We thank the anonymous referee for constructive feedback that helped improve the manuscript. WNA, ACF and EK acknowledge support from the European Union Seventh Framework Programme (FP7/2013--2017) under grant agreement n.312789, StrongGravity. This paper is based on observations obtained with {\it XMM-Newton}, an ESA science mission with instruments and contributions directly funded by ESA Member States and the USA (NASA). \footnotesize{ \bibliographystyle{mn2e}
\section{Introduction} Particle-based simulations are widely used, for example to study fluid flows or plasmas. The physical particles of interest are often not simulated individually, but as groups of particles, called \emph{super-particles} or \emph{macro-particles}. Most systems contain so many particles that simulating them individually would be very slow or impossible. And for many macroscopic properties of a system, individual particle behavior is not important. On the other hand, a sufficient number of particles is required to limit stochastic fluctuations. The weight of a simulation particle indicates how many physical particles it represents. Traditionally, particles had a fixed weight~\cite{Birdsall71926, Hockney88}. More recently, Lapenta and Brackbill~\cite{Lapenta2002317, Lapenta1994213, Lapenta1995139}, Assous et al.~\cite{Assous2003550}, Welch et al.~\cite{Welch2007143} and others have introduced methods that adapt the weight of particles during a simulation. As discussed in~\cite{Assous2003550}, adaptive methods have significant advantages if: \begin{enumerate} \item Many new particles are created in the simulation. Adaptive re-weighting is required to limit the total number of particles. Examples can be found in~\cite{Chanrion2008} and~\cite{Li20121020}. \item The system has a multiscale nature. In some regions more macro-particles are required, especially if some type of mesh refinement is employed, see for example~\cite{lapentaArXiv}. \item Control is needed over the number of particles per cell, for example to limit stochastic noise to a realistic value. \end{enumerate} The goal of these methods is to change the number of particles to a desired value, while keeping the distribution of particles intact. Most methods operate on a single grid cell at a time; arguments for this approach are given in~\cite{Lapenta1995139}. There are different ways to change the number of particles. 
One option is to merge two (or sometimes three) particles, to form particles with higher weights. Conversely, splitting can be performed to reduce weights. Another option is to replace all the particles in a cell by a new set of particles, with different weights. We will use the name `adaptive particle management', introduced in~\cite{Welch2007143}, for such algorithms. We present a technique for the merging of particles that extends earlier work of Lapenta~\cite{Lapenta2002317}. This method can operate independently of the mesh, and in any space dimension. The main idea is to store the particle coordinates (typically position and velocity) in a $k$-d tree. A $k$-d tree is a space partitioning data structure that, given $N$ points, enables searching for neighbors in $O(\log N)$ time~\cite{Bentley1975}. We can then efficiently locate pairs of particles with similar coordinates, and these pairs can be merged. Because the merged particles are similar, the total distribution of particles is not significantly altered. In section \ref{sec:general} we briefly discuss the general principles of particle management and $k$-d trees. The implementation of the new particle management algorithm is discussed in section \ref{sec:method}. In section \ref{sec:numtests} we provide numerical examples to demonstrate the method, and we compare different ways of merging particles. \section{Adaptive particle management and $k$-d trees} \label{sec:general} As stated in the introduction, it is typically impossible to simulate all the physical particles in a system individually. Therefore super-particles are used, representing multiple physical particles. Often, the simulation can run faster or give more accurate results if the weight of these super-particles is controlled adaptively.
Different names have been introduced for these algorithms: `adaptive particle management'~\cite{Welch2007143}, `control of the number of particles'~\cite{Lapenta1995139}, `particle coalescence'~\cite{Assous2003550}, `particle resampling'~\cite{Chanrion2008}, `particle remapping'~\cite{moss2006}, `particle rezoning'~\cite{Lapenta2002317}, `(particle) number reduction method'~\cite{Shon2001322} and probably others. There seem to be many independent findings, with independent names. We will use the name `adaptive particle management' (APM), introduced in~\cite{Welch2007143}, to describe this class of algorithms. \subsection{Conservation properties} If weights of particles are adjusted, then the `microscopic details' of a simulation are changed. But the relevant macroscopic quantities should be conserved as much as possible. To specify these macroscopic quantities, we consider a very common type of particle simulation: the particle in cell (PIC) method, also known as the particle mesh (PM) method~\cite{Hockney88, Birdsall71926}. In PIC simulations, particles are mapped to moments on a grid. From the grid moments the fields acting on the particles are computed, and the particles move accordingly. For example, in an electrostatic code, the charge density is used to compute the electric field. An APM algorithm typically changes a set of $N_\mathrm{in}$ particles to a new set of $N_\mathrm{out}$ particles. If the two sets give rise to the same grid moments, they give rise to the same fields. Therefore, most algorithms are designed to (approximately) conserve the relevant grid moments. Only conserving the grid moments is not enough, because the dynamics of a system are not fully determined by the fields. For example, the results of a simulation can be very sensitive to changes in the momentum or energy distribution. Therefore, some methods try to preserve the shape of these distributions. 
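To make the grid moments referred to above concrete, the following sketch deposits particle weights onto a 1D grid with the standard cloud-in-cell (linear) weighting used in PIC codes. The grid size, particle data and periodic boundaries are illustrative choices of ours, not taken from any particular code.

```python
import numpy as np

def deposit_cic(x, w, nx, dx):
    """Zeroth grid moment (density) via cloud-in-cell (linear) weighting.
    Each particle's weight is shared between its two nearest grid points,
    with periodic wrapping at the domain edges."""
    rho = np.zeros(nx)
    cell = np.floor(x / dx).astype(int)
    frac = x / dx - cell                      # fractional position in the cell
    np.add.at(rho, cell % nx, w * (1 - frac)) # unbuffered scatter-add
    np.add.at(rho, (cell + 1) % nx, w * frac)
    return rho

rng = np.random.default_rng(3)
nx, dx = 16, 1.0
x = rng.uniform(0, nx * dx, 1000)
w = np.ones(1000)
rho = deposit_cic(x, w, nx, dx)
```

Because each particle's weight is split between exactly two grid points, the deposit conserves the total weight by construction; an APM step that preserves this moment leaves the fields derived from it unchanged.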
More generally, we would like to keep the important aspects of the particle distribution function $f(\vec{x}, \vec{v}, t)$ the same. The changes to $f(\vec{x}, \vec{v}, t)$ should not be significantly larger than the fluctuations that naturally occur. For example, in a collision dominated plasma, particles frequently change direction. Not conserving the momentum distribution in each direction might have little effect on the overall results. But for a collisionless plasma, a change in the momentum distribution might lead to significant differences. Similarly, due to the finite number of particles, fluctuations in the local particle density occur naturally. Therefore, keeping the particle density exactly the same on each grid point might not be necessary, as long as the total number of particles is conserved. \subsection{Merging and splitting particles} \label{sec:mergesplit} A set of $N_\mathrm{in}$ particles can be transformed to a new set of $N_\mathrm{out}$ particles in many ways. If $N_\mathrm{in} > N_\mathrm{out}$, we use a pairwise coalescence algorithm that merges two particles into a single new one. Compared to algorithms that transform multiple particles at the same time, pairwise coalescence has two advantages. First, it is a more local operation, because only the closest neighbors in phase space are selected. This ensures that the distribution of particles is not changed very much. Second, it involves fewer degrees of freedom, which makes it simpler to set the properties for new particles. The pairwise coalescence of particles is illustrated in figure~\ref{fig:part_bef_after}. In $D$ dimensions, the momentum $\vec{p}$ of the new particle has $D$ degrees of freedom. Imposing momentum and energy conservation puts $D+1$ constraints on $\vec{p}$. Therefore, it is in general not possible to conserve both energy and momentum in pairwise coalescence.
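To make the constraint counting concrete, consider a merge that conserves the weight and the momentum (taking the momentum of a super-particle as its weight times its velocity), $w = w_1 + w_2$ and $\vec{v} = (w_1 \vec{v}_1 + w_2 \vec{v}_2)/w$. A standard identity then gives the kinetic energy deficit
\[
\Delta E = \tfrac{1}{2} w_1 v_1^2 + \tfrac{1}{2} w_2 v_2^2 - \tfrac{1}{2} w v^2
         = \frac{w_1 w_2}{2(w_1 + w_2)} \, |\vec{v}_1 - \vec{v}_2|^2 ,
\]
which vanishes only when $\vec{v}_1 = \vec{v}_2$: merging particles that are close in velocity space therefore keeps the energy error small.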
This means that there is no single best way to merge particles, as different applications require the conservation of different properties. We consider several coalescence schemes, which are discussed in section~\ref{sec:mergeschemes}. The situation would be very different if $N_\mathrm{in}$ particles are merged at the same time to form multiple new particles. We still have $D+1$ constraints, but now $D \cdot N_\mathrm{out}$ degrees of freedom in the momenta of the $N_\mathrm{out}$ new particles. The system is under-determined, and additional information about the particles has to be used. This leads to more complicated algorithms, see for example~\cite{Assous2003550, Welch2007143}. If $N_\mathrm{in} < N_\mathrm{out}$, particles have to be split. Several methods for particle splitting have been compared by Lapenta in~\cite{Lapenta2002317}. As shown there, choosing the right splitting method can be important, depending on the type of simulation. Here, we will not consider this problem in detail, as our focus is on the merging of particles. We will simply split single particles into two new ones with the same properties, but half the weight. This can be viable if the simulation includes random collisions, so that the new particles will undergo different collisions and spread out. \begin{figure} \centering \begin{minipage}{0.45\textwidth} \centering \footnotesize \input{./figures/particle_bef_after} \caption{Example showing the merging of particles close in space and velocity (velocity is not shown). The particles that were removed after merging are shown as green crosses, particles that were not merged as green filled circles, and the newly formed merged particles as red empty circles. The latter have weight 2, the rest weight 1. 
} \label{fig:part_bef_after} \end{minipage} \hspace{0.05\textwidth} \begin{minipage}{0.45\textwidth} \centering \footnotesize \input{./figures/kdtree.eps_tex} \caption{Schematic example of how a $k$-d tree is generated for points in the plane (indicated as black dots). At every step (indicated by the numbers), boxes are split in two parts. The split is located on a point, which is added to the tree. The direction of splitting alternates between vertical and horizontal. } \label{fig:kdtree_working} \end{minipage} \end{figure} \subsection{$k$-d trees} \label{sec:kdtrees} To locate particles with similar coordinates we use a $k$-d tree~\cite{Bentley1975}, which is a space partitioning data structure. A $k$-d tree can be used to organize a set of points in a $k$-dimensional space, for any $k\geq1$. The tree consists of nodes that contain data (the coordinates of one of the points) and links to at most two `child'-nodes. The starting point of the tree is the root node, and the tree contains as many nodes as there are points. We will briefly explain how such a $k$-d tree can be generated. To help with the explanation, we let nodes have a \textbf{todo} list that contains points that need to be processed. Suppose we have a collection of points in the $(x,y)$ plane. Initially all points are in the \textbf{todo} list of the root node. Then the following algorithm, which is illustrated in figure \ref{fig:kdtree_working}, creates the $k$-d tree: \begin{enumerate}\compresslist \item Pick a splitting coordinate, either $x$ or $y$. A simple choice is to alternate between them. \item For each node with a non-empty \textbf{todo} list: \begin{enumerate}\compresslist \item \label{kdtree:sorting} Sort the points in the list along the splitting coordinate. The point in the middle of the list is the median. If the list contains an even number of points, pick one of the two middle points as the median. \item The point corresponding to the median is assigned to the node.
\item The remaining points are moved to the \textbf{todo} lists of (at most) two new child nodes. The first one gets the points below the median, the second one those above the median. \end{enumerate} \item If there are still points in \textbf{todo} lists, go back to step one. Otherwise, the tree is completed. \end{enumerate} In $k$ dimensions, the only difference would be that there are now $k$ choices for the splitting coordinate. The computational complexity of creating a $k$-d tree like this is $O(N \log^2 N)$, with $N$ the number of points in the tree. This can be reduced to $O(N \log N)$ if a linear-time median finding algorithm is used instead of sorting at step~\ref{kdtree:sorting}. Searching for the nearest neighbor to a location $\vec{r}$ can be done in $O(\log N)$ time. The basic idea is to first traverse the tree down from the root node, at each step selecting the side of the tree that $\vec{r}$ lies in. (If $\vec{r}$ happens to lie exactly on a splitting plane, it is a matter of convention which side to pick.) During the search, the closest neighbor found so far is stored. Then going upward in the tree, at every step determine whether a closer neighbor could lie on the other side of the splitting plane. If so, also traverse that other part of the tree down (but only where it can contain a closer neighbor). Typically, only a small number of these extra traversals is required. When the algorithm ends up at the root node again, the overall closest neighbor is found. For the numerical tests presented in section~\ref{sec:numtests}, we have used the Fortran 90 version of the \texttt{KDTREE2}~\cite{Kennel8067K} library. \section{Implementation} \label{sec:method} We will discuss the implementation of our adaptive particle management algorithm in section~\ref{sec:implementationapm}. Different schemes that can be used for particle merging are given in section~\ref{sec:mergeschemes}.
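The construction and nearest-neighbor search described in section~\ref{sec:kdtrees} can be sketched in a few lines. This toy version (our own illustration, not the \texttt{KDTREE2} library used for the actual tests) alternates the splitting coordinate with depth and prunes subtrees that cannot contain a closer point.

```python
# Minimal k-d tree: median split along alternating coordinates,
# plus the pruned nearest-neighbor search described in the text.

class Node:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build(points, depth=0):
    if not points:
        return None
    axis = depth % len(points[0])             # alternate splitting coordinate
    points = sorted(points, key=lambda p: p[axis])
    m = len(points) // 2                      # median index
    return Node(points[m], axis,
                build(points[:m], depth + 1),
                build(points[m + 1:], depth + 1))

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest(node, target, best=None):
    if node is None:
        return best
    if best is None or sq_dist(node.point, target) < sq_dist(best, target):
        best = node.point
    diff = target[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, target, best)
    # Only search the far side if the splitting plane is closer than the
    # best match found so far (the pruning that gives O(log N) search).
    if diff ** 2 < sq_dist(best, target):
        best = nearest(far, target, best)
    return best

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build(pts)
closest = nearest(tree, (9, 2))   # (8, 1) is the closest stored point
```

Merging all nearest-neighbor pairs then amounts to one such query per particle, marking merged particles inactive along the way.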
\subsection{Adaptive particle management algorithm} \label{sec:implementationapm} Suppose that we have particles with positions $\vec{x}_i$, velocities $\vec{v}_i$ and weights $w_i$. Furthermore, assume there is some function $W_\mathrm{opt}(i)$ that gives the user-determined optimal weight for particle $i$. Then the APM algorithm works as follows: \begin{enumerate}\compresslist \item \textit{ Create a list \textbf{merge} with all the particles for which $w_i < \tfrac{2}{3} W_\mathrm{opt}(i)$. Similarly, create a list \textbf{split} with particles for which $w_i > \tfrac{3}{2} W_\mathrm{opt}(i)$.} The factors $\tfrac{2}{3}$ and $\tfrac{3}{2}$ ensure that merged particles are not directly split again, and vice versa. A good choice of $W_\mathrm{opt}(i)$ will often depend on the application. We typically want to keep the number of particles per cell close to a desired value $N_\mathrm{ppc}$, and use $W_\mathrm{opt}(i) = \max\left(1, N_\mathrm{phys}(i)/ N_\mathrm{ppc}\right)$. Here $N_\mathrm{phys}(i)$ denotes the number of physical particles in the cell of particle $i$. This increases the number of particles in regions with finer grids. \item \textit{For the particles in \textbf{merge}:} \begin{enumerate}\compresslist \item \textit{ Create a $k$-d tree with the (transformed) coordinates of the particles as input.} We construct the $k$-d trees in two ways: using the coordinates $(\vec{x}, \lambda_v \vec{v})$ or using the coordinates $(\vec{x}, \lambda_v \vectornorm{\vec{v}})$, where $\lambda_v$ is a scaling parameter. We will refer to them as the `full coordinate $k$-d tree' and the `velocity norm $k$-d tree', and we will denote them with a superscript $^{\vec{x},\vec{v}}$ and $^{\vec{x},\vectornorm{\vec{v}}}$, respectively. The scaling is necessary because the nearest neighbor search uses the Euclidean distance between points.
There is some freedom in the choice of $\lambda_v$, which should express the ratio of a typical length to a typical velocity. With higher values, differences in velocity become more important than spatial distances. \item \textit{ Search the nearest neighbor of each particle in the $k$-d tree. If the distance between particles $i$ and $j$ is smaller than $d_\mathrm{max}$, merge them. Particles should not be merged multiple times during the execution of the algorithm, so mark them inactive.} We let $d_\mathrm{max}$ be proportional to the grid spacing $\Delta x$, so particles in finer grids need to be closer together to be merged. There is no single optimal way to merge two particles. Several schemes for merging are discussed below in section \ref{sec:mergeschemes}. \end{enumerate} \item \textit{ Split each of the particles in \textbf{split} into two new particles.} The new particles have the same position and velocity as the original particle $i$, and weights $w_i/2$ and $(w_i+1)/2$ (both rounded down). As was discussed in section \ref{sec:mergesplit}, for some applications a different method should be used. \end{enumerate} \subsection{Merge schemes} \label{sec:mergeschemes} When two particles are merged, it is generally not possible to conserve both energy and momentum. Therefore we consider different schemes that conserve momentum, energy or other properties. The performance of these schemes is compared in section \ref{sec:numtests}. We have not used ternary schemes, which merge three particles into two. As discussed in~\cite{Lapenta2002317}, such schemes do not necessarily perform better, although they can conserve both momentum and energy. Furthermore, they are more complicated to construct in 2D or 3D. When particles $i$ and $j$ are merged, the weight of the new particle is always the sum of the weights, $w_\mathrm{new} = w_i + w_j$. For the new position we consider two choices.
It can be the weighted average $\vec{x}_\mathrm{new} = (w_i \vec{x}_i + w_j \vec{x}_j)/(w_i+w_j)$. It can also be picked randomly as either $\vec{x}_i$ or $\vec{x}_j$, with probabilities proportional to the weights. If we take the weighted average, then we introduce a (slight) bias in the spatial distribution. On the other hand, picking the position randomly increases stochastic fluctuations. For example, suppose we have a cluster of particles, and particles are being merged until there is only one left. If we use the weighted average position, then we always end up at the center of mass. So the spatial distribution of particles has become very different: a single peak at the center. With the probabilistic method we also end up with a single peak, located at the position of one of the original particles. But now the probability of ending up at particle $i$ is proportional to $w_i$. Therefore, the `average' spatial distribution has the same shape as before the merging. Below we list several schemes for picking a new velocity $\vec{v}_\mathrm{new}$. For convenience of notation, let \begin{align*} \vec{v}_\mathrm{avg} &= (w_i \vec{v}_i + w_j \vec{v}_j) / (w_i + w_j),\\ v^2_\mathrm{avg} &= (w_i \vectornorm{\vec{v}_i}^2 + w_j \vectornorm{\vec{v}_j}^2) / (w_i + w_j), \end{align*} so $\vec{v}_\mathrm{avg}$ is the weighted average velocity and $v^2_\mathrm{avg}$ is the weighted average of the squared velocity norm. The schemes are indicated by the following symbols: \begin{itemize} \item[p:] Conserve momentum strictly by taking $\vec{v}_\mathrm{new} = \vec{v}_\mathrm{avg}$. Because $\vectornorm{\vec{v}_\mathrm{avg}}^2 \leq v^2_\mathrm{avg}$, the kinetic energy is reduced by an amount $\tfrac{1}{2} m w_\mathrm{new} \left(v^2_\mathrm{avg} - \vectornorm{\vec{v}_\mathrm{avg}}^2\right)$, where $m$ is the mass of a particle with weight one.
\item[$\varepsilon$:] Conserve energy strictly by taking $\vec{v}_\mathrm{new} = \sqrt{v^2_\mathrm{avg}} \cdot \vec{\hat{v}}_\mathrm{avg}$ (the hat denotes a unit vector). Because the energy is kept the same, the momentum increases by $m w_\mathrm{new}\left(\sqrt{v^2_\mathrm{avg}} - \vectornorm{\vec{v}_\mathrm{avg}}\right) \cdot \vec{\hat{v}}_\mathrm{avg}$. \item[$\vec{v}_r$:] Conserve both momentum and energy on average, by randomly taking the velocity of one of the particles. The probability of choosing the velocity of particle $i$ is proportional to its weight $w_i$. \item[$\vec{v}_r\varepsilon$:] Randomly take the velocity of one of the particles, but scale it to strictly conserve energy. The expected change in momentum is $m w_\mathrm{new} \left(\sqrt{v^2_\mathrm{avg}}(w_i\vec{\hat{v}}_{i}+w_j\vec{\hat{v}}_{j})/w_\mathrm{new} - \vec{v}_\mathrm{avg}\right)$, which is small if $\vectornorm{\vec{v}_i} \approx \vectornorm{\vec{v}_j}$. \end{itemize} Although they are quite simple, we are not aware of other authors who have used schemes with randomness. It is possible to use multiple schemes, where the choice of scheme depends on the properties of the particles to be merged. \section{Numerical tests and results} \label{sec:numtests} It is difficult to come up with a general test of the performance of an APM algorithm. The algorithm should not significantly alter the simulation results, compared to a run without super-particles. At the same time, it should decrease the computational cost as much as possible. But whether these criteria are met depends on the particular simulation that is performed. Therefore we perform tests on a simplified system, and we focus on the effects of the coalescence algorithm on the particle distribution. As stated before, our method works in 1D, 2D, 3D or any other dimension. For testing, we use a 2D domain with periodic boundary conditions. The domain consists of $2 \times 2$ cells, each of size $1\times 1$.
(We let lengths and velocities be of order unity, and give them without units.) Initially, particles with weight $1$ are distributed uniformly over the domain. Then the coalescence algorithm is performed once, with the desired weight of the particles set to $2$. We compare how the different merge schemes change the momentum and energy distribution. We also measure their effect on the density, momentum and energy grid moments. \subsection{Effect of the merge schemes on the energy and momentum distribution} \subsubsection{First test} In the first test, there are 400 particles with a Gaussian velocity distribution. Both components of the velocity have mean $1$ and a standard deviation of $1/4$. The resulting energy and momentum distribution functions are shown in the top row of figure~\ref{fig:df1_incr_comparison}. We show the distribution of momentum along the first coordinate, not the total momentum of the particles; therefore we label it $x$-momentum. To convert the velocity of a particle to momentum, we multiply it by the weight of the particle, which represents the mass. Initially, the particles have weight 1, and a desired weight of 2. Then the particles are coalesced according to a merge scheme, and the changes in the energy, momentum and density distribution are recorded. The whole procedure is repeated $10^5$ times for each scheme, using different random numbers, to reduce stochastic fluctuations. We have used both the velocity norm $k$-d tree (containing $\vec{x}, \lambda_v\vectornorm{\vec{v}}$) and the full coordinate $k$-d tree (containing $\vec{x}, \lambda_v\vec{v}$). Somewhat arbitrarily, we took $\lambda_v = 4/5$, as the mean velocity plus the standard deviation in velocity was $5/4$.
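For reference, the velocity updates of the merge schemes compared in these tests reduce to a few lines each. The following Python sketch is our own illustration (function and variable names are ours) of the four schemes defined in section 3.2:

```python
import math
import random

def merge_velocity(scheme, wi, vi, wj, vj):
    """Return the velocity of the particle produced by merging
    particles i and j, for the schemes p, eps, vr and vr_eps."""
    w = wi + wj
    # Weighted average velocity and weighted average squared velocity norm
    v_avg = [(wi * a + wj * b) / w for a, b in zip(vi, vj)]
    v2_avg = (wi * sum(a * a for a in vi) + wj * sum(b * b for b in vj)) / w
    if scheme == "p":  # conserve momentum strictly
        return v_avg
    if scheme == "eps":  # conserve energy: rescale v_avg to length sqrt(v2_avg)
        return [a * math.sqrt(v2_avg) / math.hypot(*v_avg) for a in v_avg]
    # Stochastic schemes: pick one velocity, probability proportional to weight
    pick = vi if random.random() < wi / w else vj
    if scheme == "vr":  # conserve momentum and energy on average
        return list(pick)
    if scheme == "vr_eps":  # random direction, rescaled to conserve energy
        return [a * math.sqrt(v2_avg) / math.hypot(*pick) for a in pick]
    raise ValueError(scheme)
```

For example, with $w_i = w_j$ and perpendicular unit velocities, scheme p returns the mean velocity (halving the kinetic energy contribution of the transverse components), while eps and vr\_eps return a velocity of unit norm.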
\begin{figure}[!t] \centering \begin{minipage}{0.49\textwidth} \centering \footnotesize \input{./figures/eedf_before_incr_1} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \footnotesize \input{./figures/emdf_before_incr_1} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \footnotesize \input{./figures/eedf_diff_incr_1} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \footnotesize \input{./figures/emdf_diff_incr_1} \end{minipage} \caption{ Results for the first test. Top row: the initial energy (left) and momentum (right) distribution of the particles. The integrated or cumulative curves are also shown (dashed). Bottom row: the effect of various merge schemes on the cumulative energy (left) and momentum (right) distribution function. The schemes are indicated by the following symbols. $\varepsilon$: conserve energy, p: conserve momentum, $\vec{v}_r$: conserve energy and momentum on average, $\vec{v}_r\varepsilon$: take velocity from one of the particles at random, scale to conserve energy, $^{\vec{x},\vectornorm{\vec{v}}}$: velocity norm $k$-d tree, $^{\vec{x},\vec{v}}$: full coordinate $k$-d tree. } \label{fig:df1_incr_comparison} \end{figure} The bottom row of figure~\ref{fig:df1_incr_comparison} shows the effects of the merge schemes on the cumulative energy and momentum distribution function. The schemes are indicated by the same symbols as in section~\ref{sec:mergeschemes}: \begin{itemize}\compresslist \item[p:] conserve momentum \item[$\varepsilon$:] conserve energy \item[$\vec{v}_r$:] take velocity of one of the particles at random \item[$\vec{v}_r\varepsilon$:] take velocity from one of the particles at random, scale to conserve energy \item[$^{\vec{x},\vectornorm{\vec{v}}}$:] velocity norm $k$-d tree \item[$^{\vec{x},\vec{v}}$:] full coordinate $k$-d tree \end{itemize} We present the cumulative differences because they are less noisy and reveal trends more clearly.
The schemes $\varepsilon^{\vec{x},\vectornorm{\vec{v}}}$ and $\vec{v}_r\varepsilon^{\vec{x},\vectornorm{\vec{v}}}$ have the same effect on the energy distribution, so they are shown together there as $(\vec{v}_r)\varepsilon^{\vec{x},\vectornorm{\vec{v}}}$. The schemes $\vec{v}_r^{\vec{x},\vectornorm{\vec{v}}}$ and $\vec{v}_r^{\vec{x},\vec{v}}$ are also shown together, as $\vec{v}_r$. They take the new velocity randomly from one of the original particles; therefore, on average, neither changes the shape of the energy and momentum distribution. The other schemes move particles from the tails of the distribution towards the center. To see this in the cumulative distribution functions, note that particles are removed where the slope is negative, and moved to where the slope is positive. This happens because these schemes take averages, which are more likely to lie towards the center of the distribution. Results are not shown for the velocity norm $k$-d tree with the momentum-conserving scheme, $\text{p}^{\vec{x},\vectornorm{\vec{v}}}$. This combination leads to large changes in the energy distribution. For all the merge schemes, on average about $40\%$ of the particles are merged. The number is below $50\%$ because the $k$-d tree is created only once, in a static way. When a particle is merged, it is not removed from the tree, but marked as inactive. It might therefore later be the nearest neighbor of another particle that still has to be merged. In that case, the second particle is not merged, and the algorithm moves on to the next particle. Another option would be to search for the second closest neighbor, and so on. But then merging would happen over greater distances towards the end of the algorithm. \subsubsection{Second test} The second test is performed in the same way as the first test, but now the particles have a different velocity distribution. Both components of the velocity have a mean of $1/4$ and a standard deviation of $1$.
The resulting energy and momentum distribution functions are shown in the top row of figure~\ref{fig:df2_incr_comparison}. Because it is more isotropic, the second velocity distribution poses a bigger challenge for the merge schemes. The bottom row of figure~\ref{fig:df2_incr_comparison} shows the effects of the merge schemes on the cumulative energy and momentum distribution function. Again, the schemes $\vec{v}_r$ perform best, as the other schemes move particles from the tail of the distribution towards the center. Note that the schemes $\vec{v}_r\varepsilon^{\vec{x},\vectornorm{\vec{v}}}$ and $\varepsilon^{\vec{x},\vec{v}}$ also move particles away from zero momentum. As in the first test, on average about $40\%$ of the particles are merged. \begin{figure}[!t] \centering \begin{minipage}{0.49\textwidth} \centering \footnotesize \input{./figures/eedf_before_incr_2} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \footnotesize \input{./figures/emdf_before_incr_2} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \footnotesize \input{./figures/eedf_diff_incr_2} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \footnotesize \input{./figures/emdf_diff_incr_2} \end{minipage} \caption{ Results for the second test. Top row: the initial energy (left) and momentum (right) distribution of the particles. The integrated or cumulative curves are also shown (dashed). Bottom row: the effect of various merge schemes on the cumulative energy (left) and momentum (right) distribution function. The legend is the same as for figure~\ref{fig:df1_incr_comparison}. In the right figure, the peak for scheme 3$\varepsilon$ is cut off; it extends to $-0.018$. } \label{fig:df2_incr_comparison} \end{figure} \subsection{Effect on grid moments} In many particle simulations, a grid (or mesh) is used. Grid moments are defined at the grid points, and provide local averages from which the fields acting on the particles can be computed.
For example, the first grid moment gives the particle density, the second the current density or momentum density, the third the energy density, and so on. Particles can be mapped to grid moments in different ways; here we use first-order interpolation, also known as cloud-in-cell (CIC)~\cite{Birdsall71926, Hockney88}. Using the data of the second test, we now look at the effect of the merge schemes on the first three grid moments. An APM algorithm should not induce large differences in these grid moments. The mean difference is often zero, because the corresponding quantity is conserved. Therefore, we also look at the relative standard deviation, $\sigma / \mu$, where $\sigma$ is the standard deviation of a random variable with mean $\mu$. This is a measure of the relative size of the fluctuations. We measure these fluctuations at a single grid point, as they would be correlated between multiple grid points. In table~\ref{tab:schemetable} the changes in the grid moments are given for the various schemes. The schemes are labeled by the same symbols as before. In addition, $\vec{x}_r$ indicates that the new position is picked randomly from one of the merged particles. The bottom part of the table concerns cell-by-cell merging, which is discussed in section~\ref{sec:cellbycell}. The APM fluctuations should be compared to those resulting from advancing the particles in time. Therefore, the table includes entries that list the effect of taking a timestep $\Delta t$ without any merging. Since we have included no collisions, the particles simply move with a constant velocity during this timestep. The average deviation in particle density $\rho$ is zero for all the schemes, because they conserve the total weight of the particles. Therefore this quantity is not included in table~\ref{tab:schemetable}. The induced fluctuations in the grid moments can differ by almost an order of magnitude between the schemes.
As expected, conserving momentum reduces the mean energy, and conserving energy increases the momentum. This is especially problematic when the velocity norm $k$-d tree is used: the mean deviations are then larger than $10\%$. The full coordinate $k$-d tree in combination with the energy-conserving scheme, $\varepsilon^{\vec{x},\vec{v}}$, gives good results regarding energy and momentum conservation. Schemes that select the new velocity at random do not lead to systematic differences in the energy and momentum grid moments. With the $\vec{v}_r^{\vec{x},\vectornorm{\vec{v}}}$ scheme, the fluctuations in momentum can be relatively large. The $\vec{v}_r^{\vec{x},\vec{v}}$ scheme leads to much smaller fluctuations. This scheme performs well: on average it conserves the grid moments and also the shapes of the energy/momentum distribution functions, and it does not create big fluctuations. Taking the new position at random from one of the original particles ($\vec{x}_r$) increases the fluctuations in particle density. For all the schemes, the fluctuations in density, momentum and energy are smaller than those resulting from a timestep of $\Delta t = 0.4$.
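As an aside, the cloud-in-cell weighting used to compute these grid moments can be sketched in one dimension (a simplified sketch with names of our own choosing; the tests above use a 2D grid):

```python
def deposit_cic(positions, weights, nx, dx):
    """First-order (cloud-in-cell) deposition: each particle's weight
    is shared linearly between its two nearest grid points, with
    periodic boundaries as in the test domain. Assumes x >= 0."""
    rho = [0.0] * nx
    for x, w in zip(positions, weights):
        s = x / dx       # position in units of the grid spacing
        i = int(s)       # index of the grid point to the left
        f = s - i        # fractional distance to that grid point
        rho[i % nx] += w * (1.0 - f)
        rho[(i + 1) % nx] += w * f
    return rho
```

Because the shared fractions sum to one for every particle, the total weight, and hence the zeroth grid moment, is conserved exactly by this deposition.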
\begin{table} \centering {\small \begin{tabular*}{0.8\textwidth}{@{\extracolsep{\fill}} c | c c r r r r r} Method & $N_\mathrm{merge}$ & $d_\mathrm{avg}$ & $\sigma_{\rho}$ & $\Delta p_x$ & $\sigma_{p_x}$ & $\Delta \varepsilon$ & $\sigma_\varepsilon$ \\ \hline $\Delta t = 0.1$ & - & - & $1.6\%$ & $0.0\%$ & $9\%$ & $0.0\%$ & $3.8\%$ \\ $\Delta t = 0.2$ & - & - & $2.9\%$ & $0.0\%$ & $16\%$ & $0.0\%$ & $6.7\%$ \\ $\Delta t = 0.4$ & - & - & $4.9\%$ & $0.0\%$ & $25\%$ & $0.0\%$ & $9.4\%$ \\%[0.5em] \hline $\varepsilon^{\vec{x},\vectornorm{\vec{v}}}$ & 39\% & 0.16 & $0.3\%$ & $12\%$ & $16\%$ & $0.0\%$ & $0.8\%$ \\ p$^{\vec{x},\vectornorm{\vec{v}}}$ & 39\% & 0.16 & $0.3\%$ & $0.0\%$ & $4\%$ & $-37\%$ & $5.0\%$ \\ $\vec{v}_r^{\vec{x},\vectornorm{\vec{v}}}$ & 39\% & 0.16 & $0.3\%$ & $0.0\%$ & $24\%$ & $0.0\%$ & $1.2\%$ \\ $\vec{v}_r\varepsilon^{\vec{x},\vectornorm{\vec{v}}}$ & 39\% & 0.16 & $0.3\%$ & $0.4\%$ & $25\%$ & $0.0\%$ & $0.8\%$ \\ $\vec{v}_r\vec{x}_r^{\vec{x},\vectornorm{\vec{v}}}$ & 39\% & 0.16 & $1.0\%$ & $0.0\%$ & $24\%$ & $0.0\%$ & $2.2\%$ \\%[0.5em] $\varepsilon^{\vec{x},\vec{v}}$ & 40\% & 0.38 & $0.7\%$ & $0.1\%$ & $4\%$ & $0.0\%$ & $1.5\%$ \\ p$^{\vec{x},\vec{v}}$ & 40\% & 0.38 & $0.7\%$ & $0.0\%$ & $4\%$ & $-1.2\%$& $1.5\%$ \\%[0.5em] $\vec{v}_r^{\vec{x},\vec{v}}$ & 40\% & 0.38 & $0.7\%$ & $0.0\%$ & $6\%$ & $0.0\%$ & $2.4\%$ \\ \hline p$^\vec{v}$, cell & 40\% & 0.19 & $2.8\%$ & $0.0\%$ & $12\%$ & $-0.9\%$ & $3.8\%$ \\ $\vec{v}_r^{\vec{x},\vectornorm{\vec{v}}}$, cell & 39\% & 0.17 & $0.3\%$ & $0.0\%$ & $23\%$ & $0.0\%$ & $1.5\%$ \\ $\vec{v}_r\vec{x}_r^{\vec{x},\vectornorm{\vec{v}}}$, cell & 39\% & 0.17 & $1.0\%$ & $0.0\%$ & $23\%$ & $0.0\%$ & $2.2\%$\\ $\varepsilon^{\vec{x},\vec{v}}$, cell & 38\% & 0.40 & $0.8\%$ & $0.5\%$ & $5\%$ & $0.0\%$ & $1.8\%$ \end{tabular*} } \caption{The induced differences and fluctuations in the grid moments by the various merge schemes, using the second test distribution. 
Legend: $N_\mathrm{merge}$ is the fraction of merged particles, and $d_\mathrm{avg}$ is the average distance between merged particles. The relative differences in grid moments are indicated by $\Delta p_x$ (momentum) and $\Delta \varepsilon$ (energy), and relative standard deviations by $\sigma_{\rho}$ (density), $\sigma_{p_x}$ (momentum) and $\sigma_\varepsilon$ (energy). Both are given relative to the mean value. The rows starting with $\Delta t$ show the fluctuations in the grid moments resulting from a timestep (no merging). The merge schemes are indicated by the following symbols. $\varepsilon$: conserve energy, p: conserve momentum, $\vec{v}_r$: random velocity, $\vec{v}_r\varepsilon$: random velocity, scale to conserve energy, $\vec{x}_r$: random position, $^{\vec{v}}$: $k$-d tree contains only the velocity, $^{\vec{x},\vectornorm{\vec{v}}}$: velocity norm $k$-d tree, $^{\vec{x},\vec{v}}$: full coordinate $k$-d tree, cell: perform the merging cell-by-cell. } \label{tab:schemetable} \end{table} \subsection{Cell-by-cell merging} \label{sec:cellbycell} Using $k$-d trees, there is no need to do cell-by-cell merging. But because this type of merging is commonly used, we briefly evaluate its effects. The bottom part of table~\ref{tab:schemetable} shows results for cell-by-cell merging, for various schemes, using the second test distribution. The notation is the same as before, and a superscript $^{\vec{v}}$ indicates that only the velocity was used in the $k$-d tree, not the position. The fluctuations are mostly similar if the particles are merged locally (cell-by-cell) instead of globally. With fewer particles per cell, the differences would be larger though, as close neighbors are more likely to lie in other cells. The average spatial distribution of particles directly after merging is shown in figure~\ref{fig:densdist}. Only the type of $k$-d tree matters for this effect, because all the merge schemes shown take the average position.
From left to right: with the velocity norm $k$-d tree, the spatial distribution of particles is affected close to the cell boundaries. With the full coordinate $k$-d tree, the effect is similar to that of the velocity norm $k$-d tree. With the $k$-d tree that includes only the velocity, particles are moved to the centers of the cells, and the spatial distribution is severely affected. Furthermore, the fluctuations in particle density are higher, as can be seen in table~\ref{tab:schemetable}. If particles were merged globally (not cell-by-cell), the spatial distribution would be uniform. \begin{figure} \centering \begin{minipage}{0.32\textwidth} \centering \footnotesize \input{./figures/density_c3_5} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \footnotesize \input{./figures/density_c5_5} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \footnotesize \input{./figures/density_c7_5} \end{minipage} \caption{Results for cell-by-cell merging. Shown are the relative differences in particle density, for different ways of creating the $k$-d tree. The schemes are indicated by the same symbols as in table~\ref{tab:schemetable} and figure~\ref{fig:df1_incr_comparison}. Cell-by-cell merging leads to clear artifacts, especially when the $k$-d tree contains only the velocity. } \label{fig:densdist} \end{figure} \subsection{Computational costs of $k$-d trees} The goal of an APM algorithm is to speed up a simulation, so the algorithm itself should not take too much time. Theoretically, the computational complexity of creating a $k$-d tree is $O(N_\mathrm{p} \log N_\mathrm{p})$, with $N_\mathrm{p}$ the number of points in the tree. The average cost of a random search in the tree is $O(\log N_\mathrm{p})$. We have tested the practical performance of the \texttt{KDTREE2} library on an Intel i7-2600 CPU. In figure~\ref{fig:kdtreeperf} the creation time and the average search time are shown for $k$-d trees of various sizes.
Neighbors can be found faster if the $k$-d tree is constructed in fewer dimensions. Note that the average search time is given for uncorrelated searches, which are performed at random locations. This is the worst-case scenario, as the CPU cannot do efficient data caching. If the next search location is picked close to the previous one, search times in 5D decrease by more than $80\%$. \begin{figure} \centering \begin{minipage}{0.49\textwidth} \centering \footnotesize \input{./figures/kdtree_creation} \end{minipage} \begin{minipage}{0.49\textwidth} \centering \footnotesize \input{./figures/kdtree_lookup} \end{minipage} \caption{Performance figures for $k$-d trees in 2D-5D with $N_\mathrm{p}$ points, using the \texttt{KDTREE2}~\cite{Kennel8067K} library. Left: the time it takes to create the $k$-d tree. Right: the time it takes to find a nearest neighbor (for uncorrelated searches). The calculations were performed on an Intel i7-2600 CPU. } \label{fig:kdtreeperf} \end{figure} \section{Conclusion} Adaptively adjusting the weights of simulated particles can greatly improve the efficiency of simulations. We follow Welch et al.~\cite{Welch2007143} and call algorithms that do this `adaptive particle management' (APM) algorithms. In this work, we have focused on the pairwise merging of particles. We found that the use of a $k$-d tree offers several important advantages over present methods. First, only particles that are `close together' are merged, where `close together' can be defined as desired (for example, close in both position and velocity). This ensures that the distribution of particles is not significantly altered. Second, the merging can be performed completely independently of the numerical mesh used in the simulation. The algorithm works in the same way whether the simulation is in 1D or in any higher dimension. Third, with a $k$-d tree, the closest neighbors can be located efficiently. Therefore, the method can be used for simulations with millions of particles.
Fourth, from a practical point of view, the use of a $k$-d tree library greatly simplifies the implementation of pairwise merging. Two particles can be merged in different ways, and we have compared various merge schemes. An interesting option is to select properties for the merged particle at random from the original particles. With these stochastic schemes the fluctuations increase, but on average both momentum and energy can be conserved. The optimal scheme depends on the application. In general, it is more important to preserve the essential characteristics of the particle distribution function than to exactly conserve grid moments. A scheme that conserves energy or momentum should typically be used with a full coordinate $k$-d tree (containing $\vec{x},\vec{v}$). A velocity norm $k$-d tree (containing $\vec{x},\vectornorm{\vec{v}}$) can be used with a stochastic scheme. The advantage of a velocity norm $k$-d tree is that it can be constructed and searched faster than one with the full coordinates. The combination of a stochastic scheme with a full coordinate $k$-d tree seems a good choice: on average, the shape of the energy and momentum distribution functions is conserved, while the induced fluctuations in the grid moments are relatively small. \section*{Acknowledgement} J. Teunissen was supported by STW-project 10755. \bibliographystyle{elsarticle-num}
\section{Introduction} A black hole is the most strongly bound system. If we could extract energy from a black hole, it would be much more efficient than nuclear energy. However, because of the black hole area theorem\cite{Hawking:1971}, we cannot extract energy from a Schwarzschild black hole. For a rotating black hole, however, Penrose suggested using the ergoregion to extract energy\cite{Penrose:1969pc}. A particle can have negative energy in the ergoregion. One therefore supposes that a particle plunged into the ergoregion breaks up into two particles such that one particle has negative energy and falls into the black hole, while the other particle, with positive energy larger than the input energy, escapes to infinity. As a result, we can extract energy from a rotating black hole; this is called the Penrose process. It was pointed out that the Penrose process could play a key role in the energy emission mechanism of jets and/or X-rays from astrophysical objects \cite{Wheeler:1970}. It has become one of the most interesting and important mechanisms in astrophysics as well as in general relativity. However, some earlier works \cite{Bardeen:1972fi,Wald:1974kya,kovetz1975efficiency} showed that the incident particle or the break-up particles must be relativistic, which implies that the Penrose process is rare in astrophysics and cannot serve as a realistic astrophysical mechanism. Disintegration of a plunged particle may also not be practical for extracting energy from a black hole. Hence two more plausible methods have been proposed. One is superradiance, in which propagating waves are used instead of particles \cite{Zeldovich:1971, Zeldovich:1972, Misner:1972, Press:1972, Bekenstein:1973}. A wave impinging on a rotating black hole is amplified in a certain range of frequencies when it is scattered (see \cite{Brito:2015oca} for recent progress).
The other is the collisional Penrose process, in which two particles plunge into a black hole and collide in the ergoregion, instead of a single particle disintegrating\cite{Piran1975}. One expects that it may give a more efficient mechanism in astrophysical situations. Unfortunately, the efficiency of the energy extraction, which is the ratio of the extracted energy to the input energy, turns out to be as modest as in the original Penrose process \cite{piran1977upper}. Recently this process has again attracted much attention because Ba\~nados, Silk and West\cite{Banados:2009pr} showed that the center-of-mass energy of two particles can be arbitrarily large when the angular momentum of one incident particle is tuned and the collision occurs near the horizon of an extreme Kerr black hole. This is referred to as the BSW effect. If the center-of-mass energy is large enough, new unknown particles, if any exist, could be created, which may reveal new physics. It could also play an important role in astrophysics. There have been many studies on the BSW effect since this finding\cite{Jacobson:2009zg,Berti:2009bk,PhysRevD.82.103005,Banados:2010kn,Zaslavskii:2010jd,Zaslavskii:2010aw,Grib:2010dz,Lake:2010bq,Harada:2010yv,Kimura:2010qy,Patil:2011yb,Abdujabbarov:2013qka,Tsukamoto:2013dna,Toshmatov:2014qja,Tsukamoto:2014swa,Armaza:2015eha}. Since the interaction between a black hole spin and the angular momentum of a particle is essential for the Penrose process and the BSW effect, it is interesting to discuss the collision of spinning particles. As we will summarize in the text, the 4-momentum of a spinning particle is not always parallel to its 4-velocity, which can lead to violation of the timelike condition of the orbit. As a result, although the BSW effect for the collision of spinning particles in the nonrotating Schwarzschild spacetime can take place near the horizon, the motion of the spinning particles becomes superluminal before the collision point\cite{Armaza:2015eha}.
On the other hand, if the particle energy satisfies $E<\sqrt{3}\mu /6 $, in which case such a particle cannot plunge from infinity, the timelike condition is preserved up to the horizon\cite{Zaslavskii:2016dfh}. Of course, the BSW effect is also found for the collision of spinning particles in a rapidly rotating Kerr (or Kerr-Newman) black hole\cite{Guo:2016vbt, Zhang:2016btg}. We are also interested in the efficiency of energy extraction from a black hole, which is defined by $\eta=$(output energy)/(input energy). Even when the center-of-mass energy becomes arbitrarily large near the horizon, a resulting particle may not necessarily escape to infinity. Thus, it is also important to study how large the efficiency of energy extraction from a black hole can be. When two massive particles collide near the horizon on the equatorial plane and are converted into massless particles (photons), Bejger et al.\cite{Bejger:2012yb} showed numerically that the maximal efficiency is about 1.29. This result was confirmed analytically by Harada, Nemoto and Miyamoto\cite{Harada:2012ap}. However, as Schnittman showed numerically\cite{Schnittman:2014zsa}, the maximal efficiency becomes 13.92 when an outgoing fine-tuned massless particle collides with a massive particle near the horizon. Leiderschneider and Piran\cite{Leiderschneider:2015kwa} then derived the maximal efficiency analytically for several possible processes. They analyzed not only collisions on the equatorial plane but also more general off-plane orbits. They concluded that the maximal efficiency is $(2+\sqrt{3})^2\approx 13.93$, which is found in the case of the Compton scattering (collision of massless and massive particles) on the equatorial plane. Similar analytic approaches were performed in \cite{Ogasawara:2015umo} and \cite{Zaslavskii:2016unn}.
These results agree with the numerical result by Schnittman\cite{Schnittman:2014zsa}. A more efficient way of extracting the energy from a black hole, which is called the super-Penrose process, has been proposed in \cite{Berti:2014lva,Patil:2015fua}, but it is still under debate \cite{Leiderschneider:2015kwa}. The essential problem is how to create the particles which cause the super-Penrose process. Zaslavskii\cite{Zaslavskii:2015fqy} pointed out that it is difficult to prepare a suitable initial state only by preceding mechanical collisions. One natural question may arise: How is the efficiency of the collisional Penrose process enhanced when the particles are spinning? Recently this subject was discussed in \cite{MUKHERJEE201854}. However, the timelike condition was not properly taken into account there; the adopted value of the spin is too large for the orbit to be timelike. Here we study the effect of the particle spin on the efficiency of energy extraction in detail. We consider the collision of two massive spinning particles and the Compton or inverse Compton scattering (collision of one massless and one massive particle). In Sec. II, we briefly review the equations of motion of a spinning particle in a Kerr black hole and provide the timelike condition of the orbit. In Sec. III, we study the collision of two spinning particles in an extreme Kerr geometry, including the collision of one spinning massive particle and one massless particle (the Compton and inverse Compton scatterings). In Sec. IV, we analyze the maximal efficiency. Section V is devoted to concluding remarks. Throughout this paper, we use the geometrical units $c=G=1$ and follow \cite{misner1971gravitation} for the notation. \\[2em] \section{Basic Equations} \label{basic_equations} \subsection{Equations of Motion of a Spinning Particle} We consider a spinning particle in the Kerr geometry.
The equations of motion of a spinning particle were first derived by Papapetrou\cite{papapetrou1951spinning} by the use of the pole-dipole approximation of an extended body, and then reformulated by Dixon\cite{dixon1970:1,dixon1970:2,dixon1979isolated}. The equations of motion are \begin{eqnarray*} && {Dp^\mu\over d\tau}=-{1\over 2} R^\mu_{~\nu\rho\sigma} v^\nu S^{\rho\sigma}\\ && {DS^{\mu\nu}\over d\tau}=p^\mu v^\nu-p^\nu v^\mu \end{eqnarray*} where $p^\mu, v^\mu=dz^\mu/d\tau, $ and $S^{\mu\nu}$ are the 4-momentum, the 4-velocity and the spin tensor of the particle, respectively. $\tau$ is the proper time and $z^\mu(\tau)$ is the orbit of the particle. We need a set of supplementary conditions \begin{eqnarray*} S^{\mu\nu}p_\nu=0 \,, \end{eqnarray*} which fixes the center of mass of the particle. Defining the particle mass $\mu (>0)$ by $\mu^2=-p^\mu p_\mu$, we also use a specific 4-momentum $u^\mu$, which is defined by \begin{eqnarray*} u^\mu={p^\mu\over \mu}\,. \end{eqnarray*} The normalized magnitude of spin $s$ is defined by \begin{eqnarray*} S^{\mu\nu}S_{\mu\nu}=2\mu^2 s^2 \,. \end{eqnarray*} We also normalize the affine parameter $\tau$ as \begin{eqnarray*} u^\mu v_\mu=-1\,. \end{eqnarray*} We then find the relation between the 4-velocity and the specific 4-momentum as \begin{eqnarray*} v^\mu-u^\mu={S^{\mu\nu}R_{\nu\rho\sigma\lambda}u^\rho S^{\sigma\lambda} \over 2(\mu^2+{1\over 4}R_{\alpha \beta\gamma\delta}S^{\alpha \beta}S^{\gamma\delta})} \,, \end{eqnarray*} which means that the 4-velocity $v^\mu$ and the 4-momentum $p^\mu$ are not always parallel. \subsection{Conserved Quantities} If we have a Killing vector $\xi_\mu$ in a background geometry, we obtain the conserved quantity \begin{eqnarray*} Q_\xi=p^\mu\xi_\mu+{1\over 2}S^{\mu\nu}\nabla_\mu \xi_\nu \,. 
\end{eqnarray*} In the Kerr geometry, there are two Killing vectors: \begin{eqnarray*} && \xi_{\mu}^{(t)}=-\left(\sqrt{\Delta\over \Sigma}e_\mu^{(0)}+{a\sin \theta\over \sqrt{\Sigma}}e_\mu^{(3)}\right) \\ && \xi_{\mu}^{(\phi)}=a\sqrt{\Delta\over \Sigma}\sin^2\theta e_\mu^{(0)} +{(r^2+a^2)\sin \theta\over \sqrt{\Sigma}}e_\mu^{(3)} \,, \end{eqnarray*} where \begin{eqnarray*} \Delta&=& r^2-2Mr+a^2 \\ \Sigma&=& r^2+a^2\cos^2\theta \,, \end{eqnarray*} and the tetrad basis $e_\mu^{~(a)} $ is defined by \[ e_\mu^{~(a)} = \left( \begin{array}{cccc} \sqrt{\Delta\over \Sigma} & 0 & 0& -a\sqrt{\Delta\over \Sigma}\sin^2\theta\\ 0 &\sqrt{\Sigma\over \Delta} & 0& 0\\ 0& 0 & \sqrt{\Sigma}& 0\\ -{a\over \sqrt{\Sigma}} \sin\theta & 0 & 0&{(r^2+a^2) \over \sqrt{\Sigma}}\sin\theta \end{array} \right) \,. \] Hence there are two conserved quantities in Kerr geometry, which are the energy $E$ and the $z$ component of the total angular momentum $J$ given by \begin{eqnarray*} E&:= &-Q_{\xi^{(t)}} \\ &=&\sqrt{\Delta\over \Sigma} p^{(0)}+{a\sin\theta\over \sqrt{\Sigma}}p^{(3)} \\ ~~&& +{M(r^2-a^2\cos^2\theta)\over \Sigma^2}S^{(1)(0)}+ {2Mar\cos\theta\over \Sigma^2}S^{(2)(3)} \\ J&:= &Q_{\xi^{(\phi)}} \\ &=&a\sin^2\theta \sqrt{\Delta\over \Sigma} p^{(0)}+{(r^2+a^2)\sin\theta\over \sqrt{\Sigma}}p^{(3)} \\ ~~&& +{a\sin^2\theta \over \Sigma^2}[(r-M)\Sigma+2Mr^2]S^{(1)(0)} \\ ~~&& +{a\sqrt{\Delta}\sin\theta \cos\theta\over \Sigma}S^{(2)(0)} +{r\sqrt{\Delta}\sin\theta \over \Sigma}S^{(1)(3)} \\ ~~&&+ {\cos\theta\over \Sigma^2}[(r^2+a^2)^2-a^2\Delta\sin^2\theta]S^{(2)(3)} \,. 
\end{eqnarray*} \subsection{Equations of Motion in the Equatorial Plane} We introduce a specific spin vector $s^{(a)}$ by \begin{eqnarray*} s^{(a)}=-{1\over 2\mu}\epsilon^{(a)}_{~(b)(c)(d)}u^{(b)}S^{(c)(d)} \,, \end{eqnarray*} which is inverted as \begin{eqnarray*} S^{(a)(b)}=\mu\epsilon^{(a)(b)}_{~~~~(c)(d)}u^{(c)}s^{(d)} \,, \end{eqnarray*} where $\epsilon_{(a)(b)(c)(d)}$ is the totally antisymmetric tensor with $\epsilon_{(0)(1)(2)(3)}=1$. In what follows, we consider only the particle motion in the equatorial plane ($\theta=\pi/2$)\cite{Saijo:1998mn}. From this constraint, we find that the spin direction is always perpendicular to the equatorial plane. Hence only one component of $s^{(a)}$ is nontrivial, i.e., \begin{eqnarray*} s^{(2)}=-s \,. \end{eqnarray*} If $s>0$, the particle spin is parallel to the black hole rotation, while when $s<0$, it is antiparallel. As a result, the spin tensor is described as \begin{eqnarray*} S^{(0)(1)}=- s p^{(3)}\,,~~S^{(0)(3)}= s p^{(1)}\,,~~S^{(1)(3)}= s p^{(0)}\,. \end{eqnarray*} We then obtain the conserved quantities as \begin{eqnarray*} E&=&{\sqrt{\Delta}\over r}p^{(0)}+{(ar+Ms)\over r^2}p^{(3)} \label{Eu0u3} \\ J&=& {\sqrt{\Delta}\over r}(a+s)p^{(0)}+{r(r^2+a^2)+as (r+M) \over r^2}p^{(3)}. ~~~~~~~~ \label{Ju0u3} \end{eqnarray*} From these equations, we find \begin{eqnarray*} u^{(0)}&=& {\left[ (r^3+a(a+s)r+aMs)E-(ar+Ms)J\right]\over \mu r^2\sqrt{\Delta}\left(1-{Ms^2\over r^3}\right)}~~~~~~~~~~ \label{u0EJ} \\ u^{(3)}&=& {\left[ J-(a+s)E\right]\over \mu r\left(1-{Ms^2\over r^3}\right)} \,. \label{u3EJ} \end{eqnarray*} The specific 4-momentum satisfies the normalization condition $u_\mu u^{\mu}=-1$, i.e., \begin{eqnarray*} -(u^{(0)})^2+(u^{(1)})^2+(u^{(3)})^2=-1 \,. \end{eqnarray*} Hence we have \begin{eqnarray*} u^{(1)}&=&\sigma \sqrt{(u^{(0)})^2-(u^{(3)})^2-1}\,, \end{eqnarray*} where $\sigma=\pm 1$ correspond to the outgoing and ingoing motions, respectively.
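As a quick sanity check of the relations above, the map between the tetrad momentum components $(p^{(0)},p^{(3)})$ and the conserved quantities $(E,J)$ can be inverted numerically; a minimal sketch (the parameter values are illustrative and not taken from the paper):

```python
import math

# Consistency check: compute E and J from given tetrad components p^(0), p^(3)
# via Eqs. (Eu0u3)-(Ju0u3), then recover u^(0), u^(3) via Eqs. (u0EJ)-(u3EJ).
# All parameter values below are illustrative.
M, a, s, mu = 1.0, 0.9, 0.3, 1.0
r = 4.0
Delta = r**2 - 2*M*r + a**2
p0, p3 = 1.5, 0.4          # arbitrary tetrad components p^(0), p^(3)

E = math.sqrt(Delta)/r*p0 + (a*r + M*s)/r**2*p3
J = math.sqrt(Delta)/r*(a + s)*p0 + (r*(r**2 + a**2) + a*s*(r + M))/r**2*p3

fac = 1.0 - M*s**2/r**3
u0 = ((r**3 + a*(a + s)*r + a*M*s)*E - (a*r + M*s)*J)/(mu*r**2*math.sqrt(Delta)*fac)
u3 = (J - (a + s)*E)/(mu*r*fac)

# inversion recovers the specific momentum components p^(a)/mu
assert abs(u0 - p0/mu) < 1e-12 and abs(u3 - p3/mu) < 1e-12
```

The inversion is exact, so the recovery holds to machine precision for any admissible parameters.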
The relation between the 4-velocity $v^{(a)}$ and the specific 4-momentum $u^{(a)}$ is given by \begin{eqnarray*} v^{(0)}&=&\Lambda_s^{-1} u^{(0)}\,,~~ \\ v^{(1)}&=&\Lambda_s^{-1} u^{(1)}\,,~~ \\ v^{(3)}&=&{\left(1+{2Ms^2\over r^3}\right)\over \left(1-{Ms^2\over r^3}\right)} \Lambda_s^{-1} u^{(3)}\,, \end{eqnarray*} where \begin{eqnarray*} \Sigma_s&=&r^2\left(1-{Ms^2\over r^3}\right) \\ \Lambda_s&=&1-{3Ms^2r[J-(a+s)E]^2\over \mu^2 \Sigma_s^3} \,. \end{eqnarray*} Hence we obtain \begin{eqnarray*} {dt\over d\tau}&:=&v^{0}= {r^2+a^2\over r\sqrt{\Delta}}v^{(0)} +{a\over r} v^{(3)} \\&&={1\over r \Lambda_s}\left({r^2+a^2\over \sqrt{\Delta}}u^{(0)} +a{1+{2Ms^2\over r^3}\over 1-{Ms^2\over r^3}} u^{(3)}\right)\,,~~ \\ {dr\over d\tau}&:=&v^{1}= {\sqrt{\Delta}\over r}v^{(1)}= {\sqrt{\Delta}\over r\Lambda_s }u^{(1)}\,,~~ \\ {d\phi\over d\tau}&:=&v^{3}= {a\over r\sqrt{\Delta}}v^{(0)} +{1\over r} v^{(3)} \\&&={1\over r \Lambda_s}\left({a\over \sqrt{\Delta}}u^{(0)} +{1+{2Ms^2\over r^3}\over 1-{Ms^2\over r^3}} u^{(3)}\right)\, . \end{eqnarray*} \begin{widetext} We finally obtain the equations of motion of the spinning particle as \begin{eqnarray*} \Sigma_s \Lambda_s \mu {dt\over d\tau}&=& {\Sigma_s \mu \over r }\left({r^2+a^2\over \sqrt{\Delta}}u^{(0)} +a{1+{2Ms^2\over r^3}\over 1-{Ms^2\over r^3}} u^{(3)}\right)= a\left(1+{3Ms^2\over r\Sigma_s}\right)[ J-(a+s)E]+{r^2+a^2\over \Delta}P_s \\ \Sigma_s \Lambda_s \mu {dr\over d\tau}&=&{\Sigma_s \mu\sqrt{\Delta}\over r }u^{(1)}= \sigma \sqrt{R_s} \\ \Sigma_s \Lambda_s \mu {d\phi\over d\tau}&=& { \Sigma_s \mu \over r }\left({a\over \sqrt{\Delta}}u^{(0)} +{1+{2Ms^2\over r^3}\over 1-{Ms^2\over r^3}} u^{(3)}\right)= \left(1+{3Ms^2\over r\Sigma_s}\right)[ J-(a+s)E]+{a\over \Delta}P_s \end{eqnarray*} \end{widetext} where \begin{eqnarray*} P_s&=&\left[r^2+a^2+{as\over r}(r+M)\right] E-\left(a+{Ms\over r}\right)J \\ R_s&=&P_s^2-\Delta\left[{\mu^2\Sigma_s^2\over r^2}+\left[-(a+s)E+J\right]^2\right] \,. 
\end{eqnarray*} Note that \begin{eqnarray} u^{(1)}=\sigma{r\sqrt{R_s} \over \mu \sqrt{\Delta}\Sigma_s } \,. \label{specific_radial_momentum} \end{eqnarray} Now we introduce the dimensionless variables as \begin{eqnarray*} \tilde E={E\over \mu}\,,~~\tilde J={J\over \mu M}\,,~~\tilde s={s\over M} \,, \end{eqnarray*} \begin{eqnarray*} \tilde t={t\over M}\,,~~\tilde r={r\over M}\,,~~a_*={a\over M}\,,~~\tilde \tau={\tau\over M}\,, \end{eqnarray*} and \begin{eqnarray*} &&\hskip -2cm \tilde \Delta=\tilde r^2-2\tilde r+a_*^2\,, \end{eqnarray*} \begin{eqnarray*} && \tilde \Sigma_s={\Sigma_s\over M^2}= \tilde r^2\left(1-{\tilde s^2\over \tilde r^3}\right)\,, \\ && \tilde P_s={P_s\over \mu M^2} \\ && ~~~ =\left[\tilde r^2+a_*^2+{a_*\tilde s\over\tilde r}(\tilde r+1)\right] \tilde E-\left(a_*+{\tilde s\over \tilde r}\right)\tilde J\,, \\ && \tilde R_s={R_s\over \mu^2 M^4} \\ && ~~~=\tilde P_s^2-\tilde \Delta \left[{\tilde \Sigma_s^2\over \tilde r^2}+\left[-(a_*+\tilde s)\tilde E+\tilde J\right]^2\right]\,. \end{eqnarray*} The equations of motion are then given by \begin{eqnarray*} \tilde \Sigma_s \Lambda_s{d\tilde t\over d\tilde \tau}&=& a_*\left(1+{3\tilde s^2\over \tilde r\tilde \Sigma_s}\right)[ \tilde J-(a_*+\tilde s)\tilde E]+{\tilde r^2+a_*^2\over \tilde \Delta}\tilde P_s \\ \tilde \Sigma_s \Lambda_s {d\tilde r\over d\tilde \tau}&=&\pm \sqrt{\tilde R_s} \\ \tilde \Sigma_s \Lambda_s {d\phi\over d\tilde \tau}&=& \left(1+{3\tilde s^2\over \tilde r\tilde \Sigma_s}\right)[ \tilde J-(a_*+\tilde s)\tilde E]+{a_*\over \tilde \Delta}\tilde P_s \,. \end{eqnarray*} \begin{widetext} \subsection{Constraints on the Orbits} In what follows, we drop the tilde just for brevity. 
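The radial relation (\ref{specific_radial_momentum}) can be checked numerically against the normalization condition: computing $u^{(0)}$ and $u^{(3)}$ from $E$ and $J$, the combination $(u^{(0)})^2-(u^{(3)})^2-1$ should equal $r^2 R_s/(\Delta\Sigma_s^2)$. A minimal sketch in the dimensionless units (illustrative values, not from the paper's code):

```python
import math

# Dimensionless check (tilde dropped, mu = 1) that (u^(0))^2 - (u^(3))^2 - 1
# equals r^2 R_s / (Delta Sigma_s^2), i.e. Eq. (specific_radial_momentum).
# All parameter values are illustrative.
a, s, E, J, r = 0.9, 0.3, 1.0, 2.0, 4.0
Delta = r**2 - 2*r + a**2
Sigma_s = r**2*(1 - s**2/r**3)
P_s = (r**2 + a**2 + a*s*(r + 1)/r)*E - (a + s/r)*J
R_s = P_s**2 - Delta*(Sigma_s**2/r**2 + (J - (a + s)*E)**2)

u0 = r*P_s/(math.sqrt(Delta)*Sigma_s)   # from Eq. (u0EJ)
u3 = r*(J - (a + s)*E)/Sigma_s          # from Eq. (u3EJ)
assert abs((u0**2 - u3**2 - 1) - r**2*R_s/(Delta*Sigma_s**2)) < 1e-10
```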
In order to find an orbit reaching the horizon $r_H:=1+\sqrt{1-a_*^2}$, the radial function $R_s$ must be nonnegative for $r\geq r_H$. Introducing an ``impact'' parameter $b:=J/E$, this condition reduces to \begin{eqnarray*} && \Big{\{} \left[ r^3+a_*(a_*+s-b)r+(a_*-b) s\right] ^2 -r^2 \Delta \left(a_*+ s-b\right)^2 \Big{\}}E^2 \geq \Delta \Sigma_s^2 \,. \end{eqnarray*} There exists a critical value of the impact parameter $b_{\rm cr}$, beyond which the orbit cannot reach the event horizon. The particle bounces off at the turning point $dr/d\tau=0$, whose radius is larger than $r_H$. The turning point of the critical orbit with $b=b_{\rm cr}$ is located just at the horizon radius. From the condition $R_s(r_H)=0$, we find \begin{eqnarray*} b_{\rm cr} &=&{r_H^3+a_*(a_*+s)r_H+a_*s\over a_*r_H+s} = a_*+s+{r_H^3-s^2\over a_* r_H +s} \,. \end{eqnarray*} Hence the condition $b\leq b_{\rm cr}$ is required for the orbit to reach the horizon. There exists one more important physical condition: the 4-velocity must be timelike, which is explicitly written as \begin{eqnarray*} v^\mu v_\mu&=&-(v^{(0)})^2+(v^{(1)})^2+(v^{(3)})^2 ={\left[\left(1-X\right)^2\left( -(u^{(0)})^2+(u^{(1)})^2\right)+\left(1+2 X\right)^2(u^{(3)})^2\right]\over \left[1-X\left(1+3(u^{(3)})^2\right)\right]^2}<0\,, \end{eqnarray*} where $X={s^2\over r^3}$. It gives \begin{eqnarray*} \left(1-X\right)^2\left( -(u^{(0)})^2+(u^{(1)})^2\right)+\left(1+2 X\right)^2(u^{(3)})^2<0 \,. \end{eqnarray*} \end{widetext} Since $-(u^{(0)})^2+(u^{(1)})^2+(u^{(3)})^2=-1$, this condition reduces to \begin{eqnarray*} -(1-X)^2+3X(2+X)(u^{(3)})^2<0 \,. \end{eqnarray*} From \begin{eqnarray*} u^{(3)}={X^{1/3}\over s^{2/3}(1-X)}[J-(a_*+s)E] \,, \end{eqnarray*} we obtain the timelike condition of $v^\mu$ as \begin{eqnarray} {(1-X)^4\over (2+X)X^{5/3}}>{3[J-(a_*+s)E]^2\over s^{4/3}} \,.
\label{timelike_condition0} \end{eqnarray} This condition must be satisfied outside of the event horizon, $r\geq r_H$. Note that the timelike condition is always satisfied for $s=0$. Since $s^2\leq 1$, $X$ is always smaller than unity outside of the horizon, and since the function on the left-hand side of the inequality (\ref{timelike_condition0}) is monotonically decreasing with respect to $X$, the above condition reduces to \begin{eqnarray} {(1-X_H)^4\over (2+X_H)X_H^{5/3}}>{3[J-(a_*+s)E]^2\over s^{4/3}} \,, \label{timelike_general} \end{eqnarray} where $X_H:=s^2/r_H^3$. In terms of the impact parameter $b$, the above timelike condition is written as \begin{eqnarray*} E^2<{s^{4/3}(1-X_H)^4\over 3(b-a_*-s)^2 (2+X_H)X_H^{5/3}} \,, \end{eqnarray*} which gives a constraint on the particle energy $E$. It can also be regarded as a constraint on the impact parameter $b$ for a given energy $E$, i.e., \begin{eqnarray} a_*+s-{F(s,r_H) \over E}<b< a_*+s+{F(s,r_H)\over E} \,, \label{timelike_condition} \end{eqnarray} where \begin{eqnarray*} F(s,r_H):={s^{2/3}(1-X_H)^2\over \sqrt{3(2+X_H)}X_H^{5/6}} \,. \end{eqnarray*} For the critical orbit with $J=J_{\rm cr}$, it becomes \begin{eqnarray} E^2<{s^{4/3}(1-X_H)^4\over 3(b_{\rm cr}-a_*-s)^2 (2+X_H)X_H^{5/3}} \,. \label{non_extreme_constraint} \end{eqnarray} In what follows, we mainly consider the extreme Kerr black hole ($a_*=1, r_H=1$), especially when we discuss the collisional Penrose process in the next section. For the extreme black hole, we find $b_{\rm cr}=2$, which does not depend on the spin $s$. If the particle is not critical, setting $b=2(1+\zeta)$, the timelike condition (\ref{timelike_condition}) is rewritten as \begin{eqnarray} && -{(1-s)\over 2}-{(1-s^2)^2\over 2E\sqrt{3s^2(2+s^2)}}<\zeta \nonumber \\ && ~~~~~<-{(1-s)\over 2}+{(1-s^2)^2\over 2E\sqrt{3s^2(2+s^2)}} \,. \label{timelike_noncritical} \end{eqnarray} This gives a constraint on $\zeta$ (or the impact parameter $b=J/E$).
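As a minimal numerical sketch (illustrative sample spins, not the paper's own code), one can verify that for the extreme Kerr case ($a_*=1$, $r_H=1$, $b_{\rm cr}=2$) the critical-orbit bound above reduces to $E^2<(1-s)^2(1+s)^4/[3s^2(2+s^2)]$:

```python
# Check that the general critical-orbit timelike bound reduces, for a_* = 1,
# r_H = 1, b_cr = 2, to the closed extreme-Kerr form.  Sample spins are
# illustrative; the algebra is independent of their values.
def bound_general(s, a_star=1.0, rH=1.0, b=2.0):
    XH = s**2/rH**3
    # F(s, r_H)^2 written with s^2 powers so that negative s is handled
    F2 = (s**2)**(2.0/3.0)*(1 - XH)**4/(3*(2 + XH)*XH**(5.0/3.0))
    return F2/(b - a_star - s)**2

def bound_extreme(s):
    return (1 - s)**2*(1 + s)**4/(3*s**2*(2 + s**2))

for s in (0.1, 0.3, 0.449, -0.27):
    assert abs(bound_general(s) - bound_extreme(s)) < 1e-12*bound_extreme(s)
```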
On the other hand, for the critical particle with $b_{\rm cr}=2$, from Eq. (\ref{non_extreme_constraint}) we obtain the timelike condition \begin{eqnarray} E^2<{(1-s)^2(1+s)^4\over 3s^2 (2+s^2)} \label{timelike_critical} \,. \end{eqnarray} If the particle plunges from infinity, $E\geq 1$, which gives the constraint $s_{\rm min} < s < s_{\rm max}$ on the spin, where $s_{\rm min}$ and $s_{\rm max}$ are the solutions of the equation \begin{eqnarray*} s^6 + 2 s^5 - 4 s^4- 4s^3 - 7 s^2 + 2 s + 1 = 0 \,, \end{eqnarray*} with the constraint $s^2\leq 1$. We find $s_{\rm min}\approx -0.2709$ and $s_{\rm max}\approx 0.4499$. Eq. (\ref{timelike_critical}) also gives the constraint on the spin $s$ for a given particle energy $E$, which is shown in Fig. \ref{spinconstraint}. It shows that a high-energy particle cannot reach the horizon if its spin is too large. \begin{figure}[h] \includegraphics[width=6cm]{spinconstraint.eps} \caption{The allowed region of the spin $s$ and the energy $E$ with which the particle can reach the event horizon.} \label{spinconstraint} \end{figure} When we discuss a collision in the next section, we will find that the direction of the particle motion is important. Since we assume that the two particles plunge from infinity, those particles are ingoing. However, if $b>b_{\rm cr}$, a particle falling from infinity will find a turning point and then bounce back to infinity. Such a particle is moving outward. Hence we consider both directions of the particle motion at the collision. Solving $dr/d\tau=0$ in the extreme case for the angular momentum $J$, we find $J=J_{\pm}(r,E,\mu,s)$, where \begin{widetext} \begin{eqnarray*} J_{\pm} = {E \{-2 r^4 + r^2 (r^3 - 3r^2 -2) s - r(r + 1) s^2 \} \pm (r-1) (r^3 -s^2) \sqrt{ E^2 r^4 - \mu^2 (r^2 + s) (r^2 - 2r - s) } \over r (r^2 + s) (r^2 - 2r - s) }\,, \end{eqnarray*} which gives the bounce point $r$ for a given value of $b=b_\pm:=J_\pm/E$. \end{widetext} Fig.~\ref{turning_diagram} shows the turning points for various values of the spin $s$ for $E=1$.
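Two quick numerical checks of the formulas above (a sketch with illustrative values, not the paper's code): the root bounds of the sextic, and that $J_\pm$ indeed make the radial function $R_s$ vanish in the extreme case:

```python
import math
import numpy as np

# (i) The spin range allowing a critical particle with E = 1 to stay timelike
# down to the extreme-Kerr horizon is bounded by the real roots, with
# s^2 <= 1, of s^6 + 2s^5 - 4s^4 - 4s^3 - 7s^2 + 2s + 1 = 0.
roots = np.roots([1, 2, -4, -4, -7, 2, 1])
real = sorted(z.real for z in roots if abs(z.imag) < 1e-10 and abs(z.real) <= 1)
s_min, s_max = real[0], real[-1]
assert abs(s_min + 0.2709) < 1e-3 and abs(s_max - 0.4499) < 1e-3

# (ii) J_pm from the turning-point formula (a_* = 1, mu = 1, dimensionless
# units) should annihilate the radial function R_s, i.e. dr/dtau = 0 there.
def R_s(r, E, J, s):
    Delta = (r - 1.0)**2                       # r^2 - 2r + a_*^2 with a_* = 1
    Sigma = r**2*(1.0 - s**2/r**3)
    P = (r**2 + 1.0 + s*(r + 1.0)/r)*E - (1.0 + s/r)*J
    return P**2 - Delta*(Sigma**2/r**2 + (J - (1.0 + s)*E)**2)

def J_pm(r, E, s, mu=1.0):
    num = E*(-2*r**4 + r**2*(r**3 - 3*r**2 - 2)*s - r*(r + 1)*s**2)
    disc = (r - 1)*(r**3 - s**2)*math.sqrt(E**2*r**4
                                           - mu**2*(r**2 + s)*(r**2 - 2*r - s))
    den = r*(r**2 + s)*(r**2 - 2*r - s)
    return (num + disc)/den, (num - disc)/den

for s in (0.0, 0.3, -0.27):
    for J in J_pm(4.0, 1.0, s):
        assert abs(R_s(4.0, 1.0, J, s)) < 1e-6
```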
\begin{figure}[h] \includegraphics[width=8.5cm,height=5cm]{turning_diagram.eps} \caption{The relation between the turning point $r$ and the impact parameter $b$ for a spinning particle with $E=1$. A particle with $b>b_{\rm cr}$ or $b<{\rm max}(b_-)$ falling from infinity will bounce at the turning point and escape to infinity, while an outgoing particle with $r<r_{\rm max}$ and $b<{\rm max}(b_-)$ will bounce at the turning point and go back to the horizon, where ${\rm max}(b_-)=-4.97, -4.82, {\rm and~}-4.54$ and $r_{\rm max}=5.48, 5.82, {\rm and~}6.30$ for $s=-0.27, 0, {\rm and~}0.449$, respectively. } \label{turning_diagram} \end{figure} We find that ${\rm min}(b_+)=b_{\rm cr}$. Then, if the particle is near-critical ($b\approx b_{\rm cr}$) but $b>b_{\rm cr}$, the particle bounces back near the horizon. For a negative value of $b$, when $b<{\rm max}(b_-)$, an outgoing particle near the horizon will bounce back to the horizon, while a particle coming from infinity will bounce back to infinity. We find ${\rm max}(b_-) \approx -4.97, -4.82, {\rm and~} -4.54$ for $s=-0.27, 0, {\rm and~} 0.449$, respectively. For a nonextreme black hole, from Eq. (\ref{non_extreme_constraint}), the timelike condition for the critical orbit with $E\geq 1$ gives the necessary conditions on the parameters $(s,a_*)$, which are shown in Fig. \ref{timelike}. \begin{figure}[h] \includegraphics[width=6cm]{timelike.eps} \caption{The parameter region of $(s,a_*)$ for which the timelike critical orbit with $E\geq 1$ exists down to the event horizon.} \label{timelike} \end{figure} For $a_*=0.9$, $E\geq 1$ gives $-0.3179< s <0.5497$, a range slightly wider than that in the extreme case. For $a_*=0$ (a Schwarzschild black hole), no such region exists because there is no critical orbit. \subsection{Orbit of a Massless Particle on the Equatorial Plane} Since we also discuss the scattering of a massless particle later, we shall describe its orbit on the equatorial plane in the Kerr geometry.
A massless particle is not spinning ($s=0$). Hence, the conserved energy and the $z$-component of the angular momentum of the massless particle are defined by \begin{eqnarray*} E=-p^\mu \xi^{(t)}_{\mu}\,,~{\rm and}~~J=p^\mu\xi^{(\phi)}_\mu \,. \end{eqnarray*} Then we find \begin{eqnarray*} p^{(0)}= {\left[ (r^2+a^2)E-aJ\right]\over r\sqrt{\Delta}} \,,~{\rm and}~~ p^{(3)}= {\left[ J-aE\right]\over r} \,. \end{eqnarray*} This gives \begin{eqnarray*} p^{(1)}&=&\sigma\sqrt{(p^{(0)})^2-(p^{(3)})^2} \\ &=&{\sigma\over r\sqrt{\Delta}}\sqrt{\left[ ((r^2+a^2)E-aJ)^2-(J-aE)^2\Delta\right]} \,. \end{eqnarray*} When we discuss the orbit, we have to look at the 4-velocity $v^\mu={dz^\mu\over d\lambda}$, where $\lambda$ is an affine parameter. The 4-momentum $p^\mu$ and the 4-velocity $v^\mu$ are proportional. By choosing the affine parameter $\lambda$ appropriately, we can set \begin{eqnarray*} p^\mu=E v^\mu\,. \end{eqnarray*} As a result, we find \begin{eqnarray*} \left({dr\over d\lambda}\right)^2&=&{\Delta\over r^2}\left(v^{(1)}\right)^2 = {\Delta\over r^2}{\left(p^{(1)}\right)^2\over E^2} \\ &=& {1\over r^4 E^2}\left[ ((r^2+a^2)E-aJ)^2-(J-aE)^2\Delta\right] \,. \end{eqnarray*} Using the ``impact'' parameter $b=J/E$, we find the critical value \begin{eqnarray*} b_{\rm cr}={r_H^2+a^2\over a}={2Mr_H\over a} \,, \end{eqnarray*} beyond which the photon orbit bounces before reaching the horizon. For the extreme black hole, we find the same critical value $b_{\rm cr}=2$ as that for the massive particle. \section{Collision of Spinning Particles} \label{collision} Now we discuss the collision of two particles moving in the extreme Kerr geometry ($a_*=1$), in which we expect the maximal energy extraction. Two particles 1 and 2, whose 4-momenta are $p_1^\mu$ and $p_2^\mu$, move toward the rotating black hole and collide just before the horizon.
After the collision, the particle 3 with the 4-momentum $p_3^\mu$ escapes to infinity, while the particle 4 with the 4-momentum $p_4^\mu$ falls into the black hole. We assume that the sums of the two momenta and of the spins, if any, are conserved at the collision, i.e., \begin{eqnarray*} p_1^\mu+p_2^\mu&=&p_3^\mu+p_4^\mu \\ S_1^{\mu\nu}+S_2^{\mu\nu}&=&S_3^{\mu\nu}+S_4^{\mu\nu} \,. \end{eqnarray*} From these conservation laws together with the Killing vectors, we find the conservation of the energy and of the total angular momentum, \begin{eqnarray*} E_1+E_2&=&E_3+E_4 \\ J_1+J_2&=&J_3+J_4 \,. \end{eqnarray*} The sums of the spins and of the radial components of the 4-momenta are also conserved at the collision: \begin{eqnarray*} \mu_1 s_1+\mu_2 s_2&=&\mu_3 s_3+\mu_4 s_4 \\ p_1^{(1)}+p_2^{(1)}&=&p_3^{(1)}+p_4^{(1)} \,. \end{eqnarray*} In what follows, we discuss two cases: {\bf [A]} collision of two massive particles ({\bf MMM}), and {\bf [B]} collision of massless and massive particles; the Compton scattering ({\bf PMP}) and the inverse Compton scattering ({\bf MPM}). We use the symbols {\bf MMM}, {\bf PMP}, and {\bf MPM} following \cite{Leiderschneider:2015kwa}; {\bf P} and {\bf M} denote a massless particle (a photon) and a massive particle, respectively. The first and the second letters denote the colliding particles, while the third letter denotes the escaping particle. For the case {\bf [A]} {\bf MMM}, we assume that all the masses of the particles are the same, i.e., $\mu_1=\mu_2=\mu_3=\mu_4=\mu$. Hence the conservation equations hold for the dimensionless specific variables: \begin{eqnarray} \tilde E_1+\tilde E_2&=&\tilde E_3+\tilde E_4 \label{collision_condition1} \\ \tilde J_1+\tilde J_2&=&\tilde J_3+\tilde J_4 \label{collision_condition2} \\ \tilde s_1+\tilde s_2&=&\tilde s_3+\tilde s_4 \label{collision_condition3} \\ u_1^{(1)}+u_2^{(1)}&=&u_3^{(1)}+u_4^{(1)} \,.
\label{collision_condition4} \end{eqnarray} For the case {\bf [B]}{\bf PMP}, we assume that the particles 1 and 3 are massless and nonspinning, corresponding to a photon, while the particles 2 and 4 have the same mass, i.e., $\mu_2=\mu_4=\mu$. We then have \begin{eqnarray} \tilde s_2&=&\tilde s_4 \label{Compton_condition3} \\ p_1^{(1)}+p_2^{(1)}&=&p_3^{(1)}+p_4^{(1)} \,, \label{Compton_condition4} \end{eqnarray} in addition to the two conservation equations (\ref{collision_condition1}) and (\ref{collision_condition2}). In the case of {\bf [B]}{\bf MPM}, the particles 2 and 4 are massless and nonspinning, while the particles 1 and 3 are massive with the same mass, i.e., $\mu_1=\mu_3=\mu$, and Eq. (\ref{Compton_condition3}) is replaced by \begin{eqnarray} \tilde s_1&=&\tilde s_3 \,. \label{inverse_Compton_condition3} \end{eqnarray} As we showed, there exists a critical orbit, which satisfies $J=J_{\rm cr}=2 E$ in the extreme Kerr spacetime. This orbit will reach the event horizon and then bounce there. If $J<J_{\rm cr}$, the orbit falls into the black hole, while when $J>J_{\rm cr}$, the orbit bounces back before reaching the horizon. We assume that the particles 1 and 2 starting from infinity are falling toward the black hole and collide near the event horizon, i.e., the collision point $r_c=1/(1-\epsilon)$ ($0<\epsilon \ll 1$) is very close to the horizon ($r_H=1$). Hence the leading order of the radial component of the 4-momentum $p^{(1)}$ is \begin{eqnarray*} p^{(1)}\approx \sigma {|2E-J|\over \epsilon(1-s)}+\cdots \,. \end{eqnarray*} The momentum conservation equation ($p_1^{(1)}+p_2^{(1)}=p_3^{(1)}+p_4^{(1)}$) yields \begin{widetext} \begin{eqnarray} && \sigma_1 {|2E_1-J_1|\over 1-s_1} +\sigma_2 {|2E_2-J_2|\over 1-s_2} =\sigma_3{|2E_3-J_3|\over 1-s_3} +\sigma_4{|2E_4-J_4|\over 1-s_4} +O(\epsilon) \,. \label{leading_order} \end{eqnarray} \end{widetext} In what follows, we consider only the case in which the particle 1 is critical ($J_1=2E_1$).
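The near-horizon estimate of $p^{(1)}$ can be checked against the exact radial momentum of a massive spinning particle in the extreme Kerr spacetime (written out explicitly in the next subsection); a minimal sketch with illustrative values of $E$, $J$ and $s$:

```python
import math

# For a noncritical orbit (2E - J = O(1)) in the extreme Kerr spacetime, the
# exact radial momentum u^(1) should approach |2E - J|/[eps (1 - s)] as the
# collision point r_c = 1/(1 - eps) tends to the horizon r_H = 1.
def u1_exact(r, E, J, s):
    inner = (r**2*((r**3 + (1 + s)*r + s)*E - (r + s)*J)**2
             - (r - 1)**2*((r**3 - s**2)**2 + r**4*(J - (1 + s)*E)**2))
    return math.sqrt(inner)/((r - 1)*(r**3 - s**2))

E, J, s, eps = 1.0, -1.0, 0.3, 1e-5
r_c = 1.0/(1.0 - eps)
leading = abs(2*E - J)/(eps*(1 - s))
# the ratio deviates from unity only at O(eps)
assert abs(u1_exact(r_c, E, J, s)/leading - 1.0) < 1e-3
```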
To classify the cases, we consider two types of particle orbits: one is near-critical ($J=2 E+O(\epsilon)$), and the other is noncritical ($J-2E=O(\epsilon^0)$). Since we consider the collision near the horizon, a noncritical orbit must have a smaller angular momentum, $J<2E$. From Eq. (\ref{leading_order}), we find the following four cases: \\ (1) Both the particles 2 and 3 are near-critical. In this case there is no constraint on $\sigma_2, \sigma_3$ and $\sigma_4$. \\ (2) The particle 2 is near-critical but the particle 3 is noncritical ($J_3<2E_3$). In this case, using the conservation equations (\ref{collision_condition1}) and (\ref{collision_condition2}), we find \begin{eqnarray*} \left[{\sigma_3\over 1-s_3}+{\sigma_4 \over 1-s_4}\right] (J_3-2 E_3)= O(\epsilon) \,. \end{eqnarray*} Hence $\sigma_4=-\sigma_3$ and $s_4=s_3=s$. For the case {\bf [B]}, since $s_3=0$ or $s_4=0$, the massive particles are also nonspinning. \\ (3) The particle 3 is near-critical but the particle 2 is noncritical ($J_2<2E_2$). In this case, we find \begin{eqnarray*} \left[{\sigma_4\over 1-s_4}-{\sigma_2 \over 1-s_2}\right] (J_2-2 E_2)= O(\epsilon) \,. \end{eqnarray*} Hence $\sigma_4=\sigma_2$ and $s_4=s_2$. We then have to impose $s_3=s_1$. \\ (4) Both the particles 2 and 3 are noncritical ($J_2<2E_2$ and $J_3<2E_3$). In this case there is no constraint on $\sigma_2, \sigma_3$ and $\sigma_4$. Here we shall analyze only the case (3), because it gives a good efficiency, as we will show below. We do not discuss the other three cases in this paper: the cases (1) and (2) do not seem to give a good efficiency, while for the case (4) the super-Penrose process could be possible, but it cannot be analyzed by our present method. Since we consider the collision of the particles 1 and 2, the noncritical particle 2 with $J_2<2E_2$ must be ingoing ($\sigma_2=-1$). So we assume $\sigma_4=\sigma_2=-1$.
On the other hand, the critical particle 1 can be either ingoing ($\sigma_1=-1$) or outgoing after a bounce near the horizon ($\sigma_1=1$). Strictly speaking, an exactly critical particle cannot bounce: in order for the particle 1 to bounce, it must be supercritical, i.e., $J_1=2E_1+\delta$ with $\delta>0$. We then take the limit $\delta\rightarrow 0$, which gives the ``critical orbit'' with a bounce. Since we also have a small parameter $\epsilon$, we have to take the limit $\delta\rightarrow 0$ first, which implies $\delta\ll \epsilon$. The above setting gives \begin{eqnarray} J_1&=&2 E_1 \\ J_3&=&2 E_3 (1+\alpha_3\epsilon+\beta_3 \epsilon^2+\cdots) \,, \label{case3_1} \end{eqnarray} where $\alpha_3$ and $\beta_3$ are parameters of $O(\epsilon^0)$. As for the particle 2, we assume \begin{eqnarray} J_2&=&2 E_2 (1+\zeta) \,, \label{case3_2} \end{eqnarray} where $\zeta<0$ with $\zeta=O(\epsilon^0)$. From the conservation laws, we find \begin{eqnarray} E_4=E_1+E_2-E_3\,,~~~J_4=J_1+J_2-J_3 \,, \label{conservation} \end{eqnarray} giving \begin{eqnarray*} J_4=2 E_4\left(1+{E_2\over E_4}\zeta +\cdots\right) \,. \end{eqnarray*} Now we evaluate $E_2$ and $E_3$ for the cases {\bf [A]} and {\bf [B]} separately. \begin{widetext} \subsection{Case {\bf [A] MMM} {\rm (Collision of two massive particles)}} For a massive particle, the radial component of the specific 4-momentum is written as \begin{eqnarray} &&u^{(1)}=\sigma {r\sqrt{R_s}\over \Sigma_s\sqrt{\Delta}} \nonumber \\ && ={\sigma\sqrt{r^2\left[(r^3+(1+s)r+s)E-(r+s)J\right]^2-(r-1)^2\left[(r^3-s^2)^2+r^4(J-(1+s)E)^2\right]} \over (r-1)(r^3-s^2)} \label{radial_momentum} \,. \end{eqnarray} Plugging the conditions (\ref{case3_1}) and (\ref{case3_2}) into Eq.
(\ref{radial_momentum}), and using the conservation equations (\ref{conservation}) we find \begin{eqnarray} u_1^{(1)}&=&\sigma_1\Big[{f(s_1,E_1,0)\over (1 - s_1^2)} -\epsilon{E_1^2 h(s_1)\over (1 - s_1^2)^2 f(s_1,E_1,0)}+O(\epsilon^2) \Big] \label{momentum1} \\ u_2^{(1)}&=&\epsilon^{-1}{2 E_2 (1+s_2) \zeta\over 1-s_2^2} -{E_2(2+s_2)(1-s_2+2\zeta)\over (1-s_2)^2(1+s_2)} \nonumber \\ && -\epsilon {(1-s_2)^4(1+s_2)^2+E_2^2\left(1-s_2+2\zeta\right)\left[(1-s_2)^3-2(1+2s_2)(1+4s_2+s_2^2)\zeta \right]\over 4 (1 - s_2)^3 (1 + s_2)^2 E_2\zeta}+O(\epsilon^2) \label{momentum2} \end{eqnarray} \begin{eqnarray} u_3^{(1)}&=&\sigma_3\Big{\{}{f(s_1,E_3,\alpha_3)\over (1 - s_1^2)} -\Big{[}{\epsilon E_3^2\over (1-s_1^2)^2f(s_1,E_3,\alpha_3)} \times \Big(h(s_1) - 2 (1 + s_1)^2(2 + s_1) g_2(s_1,\alpha_3) \nonumber \\ && +2\beta_3 (1+s_1) (1 - s_1^2) g_1(s_1,\alpha_3) \Big) \Big{]}+O(\epsilon^2) \Big{\}} \label{momentum3} \\ u_4^{(1)}&=&\epsilon^{-1}{2 E_2 (1+s_2) \zeta\over 1-s_2^2} -{[E_1 (1-s_2)(2+s_2) -E_3 (1- s_2) g_1(s_2,\alpha_3)+ E_2 (2 + s_2) (1 - s_2 + 2 \zeta)] \over (1 - s_2)^2 (1 + s_2) } \nonumber \\ && -{\epsilon\over 4 (1 - s_2)^3 (1 + s_2)^2 E_2 \zeta }\Big{[} (1 - s_2)^4 [(E_1-E_3)^2 + (1 + s_2)^2] \nonumber \\ && - 2 E_2 (1 - s_2)\{4(1+s_2)E_3\zeta[ \alpha_3 (2 +s_2)- \beta_3 (1 - s_2^2) ] + (E_3-E_1 ) [(1-s_2)^3-2s_2(2+s_2)^2\zeta]\} \nonumber \\ && + E_2^2(1-s_2+2\zeta)[ (1-s_2)^3-2 (1+2s_2)(1+4s_2+s_2^2)\zeta] \Big{]} +O(\epsilon^2) \label{momentum4} \,, \end{eqnarray} where \begin{eqnarray*} f(s,E,\alpha)&:=&\sqrt{E^2 [3 - 2 \alpha (1+s)][1 + 2 s - 2 \alpha (1+s)] - (1 - s^2)^2} \,, \\ g_1(s,\alpha)&:=&2+s-2\alpha(1+s) \,, \\ g_2(s,\alpha)&:=&\alpha (2 + s-2 \alpha) \,, \\ h(s)&:=&1 + 7 s + 9 s^2 + 11 s^3 - s^4 \end{eqnarray*} Since $u_1^{(1)}+u_2^{(1)}=u_3^{(1)}+u_4^{(1)}$, we find the leading order of $\epsilon^{-1}$ is trivial. 
From the next-to-leading order, $O(\epsilon^0)$, we find \begin{eqnarray*} && \sigma_3{f(s_1,E_3,\alpha_3)\over 1-s_1^2} =\sigma_1{ f(s_1,E_1,0)\over 1-s_1^2} +{\left[E_1 (2+s_2) -E_3 g_1(s_2,\alpha_3) \right]\over 1-s_2^2} \,, \end{eqnarray*} which reduces to \begin{eqnarray} {\cal A}E_3^2-2{\cal B}E_3+{\cal C}=0 \,, \label{eq_E3} \end{eqnarray} where \begin{eqnarray} {\cal A}&=& -[3-2\alpha_3(1+s_1)][1+2s_1-2\alpha_3(1+s_1)] +{(1-s_1^2)^2\over (1-s_2^2)^2}g_1^2(s_2,\alpha_3) \label{calA} \\ {\cal B}&=&g_1(s_2,\alpha_3){(1-s_1^2)\over (1-s_2^2)}\left[ (2+s_2){(1-s_1^2)\over (1-s_2^2)} E_1+\sigma_1f(s_1,E_1,0) \right] \label{calB} \\ {\cal C}&=&E_1\left[ \left({3(1+2s_1)(1-s_2^2)^2+(1-s_1^2)^2(2+s_2)^2\over (1-s_2^2)^2}\right) E_1 +2\sigma_1{(1-s_1^2)(2+s_2)\over (1-s_2^2)} f(s_1,E_1,0) \right] \label{calC} \,, \end{eqnarray} with the condition $E_3\leq E_{3, {\rm cr}}$ for $\sigma_3=1$, or $E_3\geq E_{3, {\rm cr}}$ for $\sigma_3=-1$, where \begin{eqnarray*} E_{3, {\rm cr}}:={1\over g_1(s_2,\alpha_3)}\left[ (2+s_2)E_1+\sigma_1{(1-s_2^2)\over (1-s_1^2)}f(s_1,E_1,0) \right] \,. \end{eqnarray*} Here we focus on the case $\sigma_3=-1$. We should stress that for the outgoing particle 3 after the collision ($\sigma_3=1$), the energy $E_3$ has the upper bound $E_{3, {\rm cr}}$, whose magnitude is of the order of $E_1$; hence we cannot expect a large efficiency. We present the concrete analysis for the case $\sigma_3=1$ in Appendix \ref{appendix}, in which we find that the efficiency is indeed not so high. Since the particle 3 is ingoing after the collision, the orbit must be supercritical, i.e., $J_3>2 E_3$, which means either $\alpha_3 >0$ or $\alpha_3=0$ with $\beta_3>0$.
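A numerical sketch of this step (illustrative parameters, not the paper's optimal values): solving the quadratic for $E_3$ and checking that the larger root $E_{3,+}$ satisfies the unsquared $O(\epsilon^0)$ equation on the $\sigma_3=-1$ branch:

```python
import math

# Solve A E3^2 - 2 B E3 + C = 0 and verify that the larger root satisfies the
# unsquared momentum-conservation equation with sigma_3 = -1.  The parameter
# values (s_1, s_2, alpha_3, E_1) are illustrative.
def f(s, E, alpha):
    return math.sqrt(E**2*(3 - 2*alpha*(1 + s))*(1 + 2*s - 2*alpha*(1 + s))
                     - (1 - s**2)**2)

s1, s2, alpha3, E1, sigma1 = 0.2, 0.1, 0.05, 1.0, 1
g1 = 2 + s2 - 2*alpha3*(1 + s2)
rho = (1 - s1**2)/(1 - s2**2)
f1 = f(s1, E1, 0.0)

A = -(3 - 2*alpha3*(1 + s1))*(1 + 2*s1 - 2*alpha3*(1 + s1)) + rho**2*g1**2
B = g1*rho*(rho*(2 + s2)*E1 + sigma1*f1)
C = E1*((3*(1 + 2*s1) + rho**2*(2 + s2)**2)*E1 + 2*sigma1*rho*(2 + s2)*f1)

E3 = (B + math.sqrt(B**2 - A*C))/A          # larger root E_{3,+}
lhs = -f(s1, E3, alpha3)/(1 - s1**2)        # sigma_3 = -1 branch
rhs = sigma1*f1/(1 - s1**2) + (E1*(2 + s2) - E3*g1)/(1 - s2**2)
assert abs(lhs - rhs) < 1e-6*abs(rhs)
```

Since ${\cal A}$ is small and positive here, $E_{3,+}$ comes out much larger than $E_1$, illustrating why the larger root is the interesting one.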
Once we give $\alpha_3$, the value of $E_3$ is fixed in terms of $s_1$, $s_2$ and $E_1$ by \begin{eqnarray} E_3&=&E_{3,+}:={{\cal B}+\sqrt{{\cal B}^2-{\cal A}{\cal C}} \over {\cal A}} \label{energy_E3} \,, \end{eqnarray} where we have chosen the larger root because it gives the larger extracted energy as it turns out that ${\cal A}$ is always positive. The next leading order terms give \begin{eqnarray} {\cal P}E_2 = (1 - s_2)^3 (E_1-E_3)^2 \,, \label{energy_E2} \end{eqnarray} where \begin{eqnarray} {\cal P}:&=& 2 (E_3-E_1 )(1-s_2)^3+ 4\zeta\Big[ {(1 - s_2^2)^2 \over (1-s_1^2)^2} {\cal Q} + 2(1+s_2)E_3[ \alpha_3 (2 +s_2)- \beta_3 (1 - s_2^2) ] -s_2(2+s_2)^2 (E_3-E_1 ) \Big] \,.~~~ \label{calP} \end{eqnarray} with \begin{eqnarray*} {\cal Q}&:=& \sigma_1{E_1^2 h(s_1)\over f(s_1,E_1,0)} -\sigma_3\Big{[}{ E_3^2\over f(s_1,E_3,\alpha_3)} \times \Big(h(s_1) - 2 (1 + s_1)^2(2 + s_1)g_2(s_1,\alpha_3) +2\beta_3 (1+s_1) (1 - s_1^2)g_1(s_1,\alpha_3)\Big) \Big{]} \label{calQ} \end{eqnarray*} Since this fixes the value of $E_2$, we obtain the efficiency by \begin{eqnarray*} \eta={E_3\over E_1+E_2} \,, \end{eqnarray*} when $\alpha_3, \beta_3$ and $\zeta$ are given. \subsection{Case {\bf [B]}} \subsubsection{{\bf [B] PMP} {\rm (Compton scattering)}} For the massless particle, we normalize the 4-momentum, the energy and the angular momentum by the mass $\mu$ of the massive particle. The radial component of the normalized 4-momentum is written as \begin{eqnarray} p^{(1)} ={\sigma\sqrt{r\left[(r+1)E-J\right]\left[(r^2-r+2)E+(r-2)J\right]} \over r(r-1)} \label{massless_radial_momentum} \,, \end{eqnarray} where $E$ and $J$ are normalized by $\mu$ and $\mu M$ just as those of the massive particle. For the momenta of the massive particles 2 and 4, Eqs. 
(\ref{momentum2}) and (\ref{momentum4}) do not change, while for the massless particles 1 and 3, we find \begin{eqnarray} p_1^{(1)}&=&\sigma_1\Big[\sqrt{3}E_1 -\epsilon{E_1 \over \sqrt{3}}+O(\epsilon^2) \Big] \label{photon1} \\ p_3^{(1)}&=&\sigma_3\Big{\{}E_3\sqrt{(3-2\alpha_3)(1-2\alpha_3)} -\epsilon E_3\Big{[}{ [1 - 4(2\alpha_3 -\beta_3) (1- \alpha_3) ] \over \sqrt{(3-2\alpha_3)(1-2\alpha_3)}} \Big{]}+O(\epsilon^2) \Big{\}} \label{photon3} \,. \end{eqnarray} From the conservation of the radial components of the 4-momenta, we find \begin{eqnarray} && E_3={\cal S}E_1 \,, \label{Compton_energy_E3} \end{eqnarray} where the magnification factor ${\cal S}$ is given by \begin{eqnarray*} {\cal S}:={\sigma_1\sqrt{3}(1-s_2^2)+2+s_2 \over\sigma_3\sqrt{(3-2\alpha_3)(1-2\alpha_3)}(1-s_2^2) +2+s_2-2\alpha_3(1+s_2) } \label{Compton_magnification} \end{eqnarray*} and \begin{eqnarray} {\cal P}E_2 = (1 - s_2)^3 (E_1-E_3)^2 \,, \label{Compton_energy_E2} \end{eqnarray} where ${\cal P}$ is given by Eq. (\ref{calP}) with $s_1=0$ but with ${\cal Q}$ replaced by ${\cal T}$, which is defined by \begin{eqnarray*} {\cal T}&:=& \sigma_1{E_1 \over \sqrt{3}} -\sigma_3 E_3\Big{[}{ 1 - 4(2\alpha_3 -\beta_3) (1- \alpha_3) \over \sqrt{(3-2\alpha_3)(1-2\alpha_3)}} \Big{]} \,. \end{eqnarray*} \subsubsection{Case {\bf [B] MPM} {\rm (Inverse Compton scattering)}} For the momenta of the massive particles 1 and 3, Eqs. (\ref{momentum1}) and (\ref{momentum3}) do not change, while for the massless particles 2 and 4, we find \begin{eqnarray} p_2^{(1)}&=&2\epsilon^{-1}E_2\zeta -2E_2(1+2\zeta) -\epsilon{E_2 (1-4\zeta^2)\over 4\zeta}+O(\epsilon^2) \label{photon2} \\ p_4^{(1)}&=&2\epsilon^{-1}E_2\zeta-2\left[E_4+2E_2\zeta+E_3\alpha_3\right] -\epsilon{E_4^2-8E_2E_3(2\alpha_3-\beta_3)\zeta-4E_2^2\zeta^2\over 4 E_2\zeta} +O(\epsilon^2) \label{photon4} \,, \end{eqnarray} \end{widetext} where $E_4=E_1+E_2-E_3$. From the conservation of the radial components of the 4-momenta, we find \begin{eqnarray} E_3&=& \left.
{{\cal B}+\sqrt{{\cal B}^2-{\cal A}{\cal C}} \over {\cal A}} \right|_{s_2=0} \label{Inverse_Compton_energy_E3} \,, \end{eqnarray} and \begin{eqnarray} E_2&=& \left.{(E_1-E_3)^2 \over {\cal P}} \right|_{s_2=0} \,, \label{Inverse_Compton_energy_E2} \end{eqnarray} where ${\cal A}, {\cal B}, {\cal C}$ and ${\cal P}$ are given by Eqs. (\ref{calA}), (\ref{calB}), (\ref{calC}) and (\ref{calP}), which should be evaluated with $s_2=0$. As a result, $E_2$ and $E_3$ coincide with those found for the collision of a spinning massive particle and a nonspinning massive particle. \section{The maximal efficiency} \label{max_efficiency} \subsection{Efficiency of Collision of Massive Particles} Now we discuss the necessary conditions to find the maximal efficiency. As we showed, given the energy of the particle 1 ($E_1$) and the two particle spins ($s_1$ and $s_2$), we find the energies of the particles 3 and 2 in terms of the orbital parameters of the particles 2 and 3 ($\alpha_3$, $\beta_3$ and $\zeta$). In order to obtain a large efficiency, we must find a large extracted energy, i.e., a large energy of the particle 3 ($E_3$), for given values of the energies $E_1$ and $E_2$ of the ingoing particles. Although $E_1$ is arbitrary, the energy of the particle 2 ($E_2$) is fixed in our approach. Hence we also have to find the possible minimum value of $E_2$. Since we consider two particles plunging from infinity, we have the constraints $E_1\geq 1$ and $E_2\geq 1$. We then assume that $E_1=1$ and $\sigma_1=1$, and find the maximal value of $E_3$ as well as the minimum value of $E_2$. Note that we do not find a good efficiency for $\sigma_1=-1$, although off-plane orbits may give a slightly better efficiency\cite{Leiderschneider:2015kwa}. First we analyze $E_3$, which is determined by Eq. (\ref{energy_E3}) for a given value of $\alpha_3$.
Since the orbit of the particle 3 is near critical, we have two constraints: $E_3\geq E_{3, {\rm cr}}$ for $\sigma_3=-1$ and the timelike condition (\ref{timelike_critical}). In order to find a large value of $E_3$, we see from the timelike condition that the spin magnitude $s_3(=s_1)$ must be small (see Fig. \ref{spinconstraint}). Hence we first set $s_1=0$. We then show the contour map of $E_3$ in terms of $\alpha_3$ and $s_2$ in Fig. \ref{a3s2}. We find that $\alpha_3 \approx 0$ gives the largest efficiency. Hence we next set $\alpha_3=0+$ and analyze the maximal efficiency. Here $0+$ means that we assume $\alpha_3>0$ but take the limit $\alpha_3\rightarrow 0$ after taking the limit $\epsilon\rightarrow 0$. This is justified because $E_2$ and $E_3$ change smoothly when we take the limit $\alpha_3\rightarrow 0$. \begin{figure}[h] \includegraphics[width=6cm]{a3s2.eps} \caption{The contour map of $E_3$ in terms of $\alpha_3$ and $s_2$ with $s_1=0$. $E_3$ changes smoothly with respect to the two parameters $\alpha_3$ and $s_2$, and $\alpha_3\rightarrow 0$ and small $s_2$ give a larger value of $E_3$.} \label{a3s2} \end{figure} Assuming $\alpha_3=0+$, we look for the maximal value of $E_3$ for given $s_1$ and $s_2$. In Fig. \ref{maxE3}, we show the contour map of $E_3$ in terms of $s_1$ and $s_2$. The red point, which is $(s_1, s_2)\approx (0.01379, s_{\rm min})$, gives the maximal value of $E_3$. \begin{figure}[h] \includegraphics[width=6cm]{maxE3.eps} \caption{The contour map of $E_3$ in terms of $s_1$ and $s_2$. The timelike condition for the particle 3 orbit is satisfied in the light green shaded region. As a result, the maximal value of $E_3=E_{3,{\rm max}}\approx 30.02$ is obtained when $s_2=s_{\rm min}\approx-0.2709$ and $s_1\approx 0.01379$ (the red point in the figure).} \label{maxE3} \end{figure} Since $E_2\geq 1$ when the particle 2 plunges from infinity, if $E_2=1$ is possible, the maximal value of $E_3$ gives the maximal efficiency.
However, $E_2$ is fixed in our approach. Hence we have to check whether $E_2=1$ is possible, and if so, which conditions are required. \begin{figure}[h] \includegraphics[width=4cm]{E2.eps} \caption{The relation between $\zeta$ and $\beta_3$ for $E_2=1$. The other parameters are chosen to give the maximal value of $E_3$. The timelike condition for the particle 2 orbit gives the constraint $\zeta_{\rm min}<\zeta<0$ with $\zeta_{\rm min}\approx -1.271$.} \label{E2} \end{figure} \begin{figure}[h] \includegraphics[width=6cm]{efficiency.eps} \caption{The contour map of the maximal efficiency for given $s_1$ and $s_2$. The green shaded region is the constraint from the timelike condition of the particle 3. The red point, $(s_1, s_2)\approx (0.01379, s_{\rm min})$, gives the maximal efficiency $\eta_{\rm max}\approx 15.01$.} \label{efficiency} \end{figure} \begin{figure}[h] \vskip .5cm \includegraphics[width=5cm]{eff_max.eps} \caption{The efficiency in terms of $s_2$ for fixed values of $s_1=-2.111\times 10^{-2}, 0$ and $1.379\times 10^{-2}$.} \label{eff_max} \end{figure} The condition $E_2=1$ in Eq. (\ref{energy_E2}) gives the relation between $\zeta$ and $\beta_3$, which is a linear equation in $\beta_3$. Hence we always find a real solution for $\beta_3$. Meanwhile, the timelike condition of the particle 2 gives the constraint on $\zeta$, which is Eq. (\ref{timelike_noncritical}) with $E=1$, i.e., \begin{eqnarray*} \zeta_{\rm min}<\zeta <0 \,, \label{timelike_E2} \end{eqnarray*} where \begin{eqnarray*} \zeta_{\rm min}:= -{(1-s_2)\over 2}\left[1+{(1-s_2)(1+s_2)^2\over \sqrt{3s_2^2(2+s_2^2)}}\right] \end{eqnarray*} since the upper bound in Eq. (\ref{timelike_noncritical}) is always positive for the range $s_{\rm min}<s_2<s_{\rm max}$. For the parameters giving the maximal value of $E_3$, we find the relation between $\zeta$ and $\beta_3$, which is shown in Fig. \ref{E2}.
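The closed-form expression for $\zeta_{\rm min}$ above can be checked numerically; a small sketch, evaluated at $s_2=s_{\rm min}\approx -0.2709$ as quoted in the text, reproduces $\zeta_{\rm min}\approx -1.271$:

```python
import math

def zeta_min(s2):
    # zeta_min = -(1 - s2)/2 * [1 + (1 - s2)(1 + s2)^2 / sqrt(3 s2^2 (2 + s2^2))]
    bracket = 1.0 + (1.0 - s2) * (1.0 + s2)**2 / math.sqrt(3.0 * s2**2 * (2.0 + s2**2))
    return -(1.0 - s2) / 2.0 * bracket

zmin = zeta_min(-0.2709)  # s2 = s_min from the text
```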
From the timelike condition for the particle 2 orbit, we have the constraint $\zeta_{\rm min}<\zeta<0$ with $\zeta_{\rm min}\approx -1.271$. Since there exists a possible range of parameters with $E_2=1$, the maximal efficiency is given by $\eta_{\rm max}=E_{3,{\rm max}}/2\approx 15.01$. Hence we find the maximal efficiency $\eta_{\rm max}=E_3/2$ for given $s_1$ and $s_2$, which is shown in Fig. \ref{efficiency}. We also show the efficiency in terms of $s_2$ for fixed values of $s_1=-2.111\times 10^{-2}, 0$ and $1.379\times 10^{-2}$ in Fig. \ref{eff_max}. The efficiency gets larger as $s_2$ approaches the minimum value $s_{\rm min}$. This shows that the effect of spin is very important. Note that we obtain the maximal efficiency $\eta_{\rm max}\approx 6.328$ for the nonspinning case, which is consistent with \cite{Leiderschneider:2015kwa}. \subsection{Efficiency of Compton scattering} We find the efficiency $\eta$ from \begin{eqnarray*} \eta&=&{E_3\over E_1+E_2} = {{\cal S}\over 1+{({\cal S}-1)^2(1-s_2)^3\over {\cal P}/E_1}} \end{eqnarray*} where \begin{eqnarray*} {\cal P}/E_1&=&2({\cal S}-1)(1-s_2)^3 \\ &+& 4\zeta\Big{[} (1-s_2^2)^2{\cal T}/E_1+2(1+s_2){\cal S} [\alpha_3(2+s_2) \\ &-& \beta_3(1-s_2^2)]-s_2(2+s_2)^2({\cal S}-1) \Big{]} \end{eqnarray*} with \begin{eqnarray*} {\cal T}/E_1={\sigma_1\over \sqrt{3}}-\sigma_3{\cal S} \Big{[}{1 - 4(2\alpha_3 -\beta_3) (1- \alpha_3) \over \sqrt{(3-2\alpha_3)(1-2\alpha_3)}} \Big{]} \,. \end{eqnarray*} Although the extracted photon energy depends on the input photon energy $E_1$, the efficiency does not depend on $E_1$ and $E_2$. It is determined by the orbital parameters $\alpha_3$, $\beta_3$ and $\zeta$ as well as the spin $s_2$. We first look for the largest value of $E_3$, i.e., of the magnification factor ${\cal S}$, which is determined by $\alpha_3$ and $s_2$. In Fig. \ref{Compton_E3}, we show the magnification factor ${\cal S}$ in terms of $\alpha_3$ and $s_2$.
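As a numerical cross-check of the magnification factor ${\cal S}$, one can evaluate the formula above at $\alpha_3\rightarrow 0$ with $\sigma_1=1$ and $\sigma_3=-1$ (the branch relevant for a large ${\cal S}$); a short sketch reproduces both the nonspinning value ${\cal S}\approx 13.93$ and the value ${\cal S}\approx 26.85$ at $s_2=s_{\rm min}\approx -0.2709$:

```python
import math

def magnification(s2, a3=0.0, sigma1=1, sigma3=-1):
    # S = [sigma1 sqrt(3)(1-s2^2) + 2 + s2] /
    #     [sigma3 sqrt((3-2a3)(1-2a3))(1-s2^2) + 2 + s2 - 2 a3 (1+s2)]
    num = sigma1 * math.sqrt(3.0) * (1.0 - s2**2) + 2.0 + s2
    den = (sigma3 * math.sqrt((3.0 - 2.0 * a3) * (1.0 - 2.0 * a3)) * (1.0 - s2**2)
           + 2.0 + s2 - 2.0 * a3 * (1.0 + s2))
    return num / den

S_nonspin = magnification(0.0)   # (2+sqrt(3))/(2-sqrt(3)) = 7 + 4 sqrt(3) ~ 13.93
S_max = magnification(-0.2709)   # ~ 26.85 at s2 = s_min
```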
Just as in the case {\bf [A]}, $\alpha_3\rightarrow 0$ and small $s_2$ give a larger value of ${\cal S}$. The maximal value is ${\cal S}_{\rm max}\approx 26.85$ at $\alpha_3=0+$ and $s_2=s_{\rm min}\approx -0.2709$. \begin{figure}[h] \includegraphics[width=6cm]{Compton_E3.eps} \caption{The contour map of ${\cal S}$ in terms of $\alpha_3$ and $s_2$. ${\cal S}$ changes smoothly with respect to the two parameters $\alpha_3$ and $s_2$, and $\alpha_3\rightarrow 0$ and small $s_2$ give a larger value of ${\cal S}$.} \label{Compton_E3} \end{figure} Since the maximal value of ${\cal S}$ is obtained when $\alpha_3\rightarrow 0$ and $s_2=s_{\rm min}$, setting $\alpha_3=0+$ and $s_2=s_{\rm min}$, we show the contour map of the efficiency $\eta$ in terms of $\beta_3$ and $\zeta$ in Fig. \ref{Compton_eff}. \begin{figure}[h] \includegraphics[width=6cm]{Compton_eff1.eps} \caption{The contour map of the efficiency in terms of $\beta_3$ and $\zeta$. Fixing $\zeta$ with $0>\zeta>\zeta_{\rm min}(\approx -3.890)$, in the limit of $\beta_3\rightarrow -\infty$, we find the maximal efficiency $\eta_{\rm max}\approx 26.85$.} \label{Compton_eff} \end{figure} Although $\beta_3$ is arbitrary as long as $\alpha_3>0$, $\zeta$ is constrained as $\zeta_{\rm min}<\zeta<0$ in order for the particle 2 to reach the horizon, where the minimum value $\zeta_{\rm min}$ depends on the spin $s_2$. For $s_2=s_{\rm min}$, we find $\zeta_{\rm min}\approx -3.890$. We then obtain the maximal efficiency for the Compton scattering as $\eta_{\rm max}\approx 26.85$ in the limit of $\beta_3\rightarrow -\infty$. If $s_2=0$, the maximal efficiency is $\eta_{\rm max}\approx 13.93$, which is consistent with the results by Schnittman\cite{Schnittman:2014zsa} and Leiderschneider-Piran\cite{Leiderschneider:2015kwa}. \\[1em] \subsection{Efficiency of inverse Compton scattering} Since the particles 1 and 2 plunge from infinity, we have the constraints $E_1\ge 1$ and $E_2\ge 0$.
We then assume that $E_1=1$, and find the maximal value of $E_3$ as well as the minimal value of $E_2$. Since $E_3$ is determined only by $\alpha_3$ and $s_1$, we first discuss $E_3$. \begin{figure}[h] \includegraphics[width=6cm]{Inverse_Compton_E3.eps} \caption{The contour map of ${E_3}$ in terms of $\alpha_3$ and $s_1$. The timelike condition for the particle 3 is satisfied in the light-green shaded region. The maximal value of $E_3=E_{3,{\rm max}}\approx 15.64$ is obtained at the red point $(\alpha_3, s_1)=(0, 0.02679)$.} \label{Inverse_Compton_E3} \end{figure} In Fig. \ref{Inverse_Compton_E3}, we show the contour map of $E_3$ in terms of $\alpha_3$ and $s_1$. The red point, which is $(\alpha_3,s_1)=(0, 0.02679)$, gives the maximal value of $E_3$. If $E_2\rightarrow 0$ is possible, it gives the minimal value of $E_2$, and then the maximal efficiency is given by $\eta_{\rm max}=E_{3,{\rm max}}$. Hence, assuming $\alpha_3=0+$ and $s_1=0.02679$, we analyze whether $E_2\rightarrow 0$ is possible or not. From Eq. (\ref{calP}), we find the asymptotic behavior of ${\cal P}$ as \begin{eqnarray*} {\cal P}\approx 8E_3 \zeta \beta_3\left[{E_3(2+s_1)\over (1-s_1)f(s_1, E_3, 0)}-1\right] \,, \end{eqnarray*} if $\zeta\beta_3\rightarrow \infty$, which gives $E_2\rightarrow 0$. Here $\zeta$ is constrained as $-\infty<\zeta<0$ because the particle 2 is nonspinning, while $\beta_3$ is arbitrary as long as $\alpha_3>0$. As a result, $E_2\rightarrow 0$ is obtained in the limit of $\zeta \beta_3\rightarrow \infty$; since $\zeta<0$, $\beta_3$ must be negative. Hence, we find the maximal efficiency $\eta_{\rm max}\approx 15.64$ for the inverse Compton scattering. For $s_1=0$, the maximal efficiency becomes $\eta_{\rm max}=7+4\sqrt{2} \approx 12.66$, which is consistent with the result by Leiderschneider and Piran\cite{Leiderschneider:2015kwa}.
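The quoted nonspinning bound can be verified by direct arithmetic; a one-line check of $\eta_{\rm max}=7+4\sqrt{2}\approx 12.66$ and of the modest enhancement by spin, using the value $15.64$ quoted above:

```python
import math

# Nonspinning inverse Compton bound quoted above: eta_max = 7 + 4*sqrt(2) ~ 12.66
eta_nonspin = 7.0 + 4.0 * math.sqrt(2.0)
eta_spin = 15.64  # maximal efficiency for s_1 ~ 0.02679 quoted in the text
enhancement = eta_spin / eta_nonspin  # only a ~24% gain from spin
```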
\section{Concluding Remark} \label{Concluding_Remark} We have analyzed the maximal efficiency of the energy extraction from an extreme Kerr black hole by the collisional Penrose process of spinning test particles. We summarize our results in Table \ref{summary}. For the collision of two massive particles ({\bf MMM}+), we find that the maximal efficiency is $\eta_{\rm max}\approx 15.01$, which is more than twice as large as in the case of the collision of non-spinning particles. It is realized when the particle 1 with $E_1=\mu $, $J_1=2\mu M$ and $s_1\approx0.01379\mu M$ and the particle 2 with $E_2=\mu $, $-0.5418\mu M<J_2<2\mu M$ and $s_2=s_{\rm min}\approx -0.2709\mu M$ plunge from infinity and collide near the horizon. After the collision, the particle 3 with $E_3\approx 30.02\mu $ and $J_3\approx 60.03\mu M$ escapes to infinity, while the particle 4 with $E_4\approx -28.02\mu $ and $-58.57\mu M<J_4<-56.03\mu M$ falls into the black hole. As for the collision of a massless and a massive particle, we obtain the maximal efficiency $\eta_{\rm max}\approx 26.85$ for the case of {\bf PMP}+ (the Compton scattering), which is almost twice as large as in the nonspinning case. In the case of {\bf MPM}+ (the inverse Compton scattering), however, we find $\eta_{\rm max}\approx 15.64$, which is not much larger than in the nonspinning case. This is because the timelike condition does not allow a large spin magnitude for an energetic spinning particle.
Although we have presented some examples giving a large efficiency of the energy extraction from a rotating black hole, the following cases should also be studied: \begin{widetext} \begin{table}[H] \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline \raisebox{-6pt}{collisional process}& spin &input energy& output energy &maximal\\ [-0.4em] &$(s_1,s_2)$&$(E_1,E_2)$&($E_3$)&efficiency\\ \hline \hline {\bf MMM}+& non-spinning &\raisebox{-6pt}{$(\mu,\mu)$}&$12.66 \mu$&$6.328$ \\[-.4em] \cline{2-2}\cline{4-5} (Collision of Two Massive Particles) & $(0.01379\mu M, -0.2709\mu M)$&&$30.02 \mu$ &$15.01$ \\ [.1em] \hline {\bf PMP}+& non-spinning&\raisebox{-6pt}{$(+\infty,\mu)$}&$+\infty$&$13.93$ \\[-.4em] \cline{2-2}\cline{4-5} (Compton Scattering) & $(0, -0.2709\mu M)$&&$+\infty$ &$26.85$\\ [.1em] \hline {\bf MPM}+ & non-spinning&\raisebox{-6pt}{$(\mu,0)$}&$12.66 \mu$&$12.66$ \\[-.4em] \cline{2-2}\cline{4-5} (Inverse Compton Scattering) & $(0.02679\mu M, 0)$&&$15.64 \mu$ &$15.64$\\ \hline \end{tabular} \caption{The maximal efficiencies and energies for the three processes. We include the nonspinning case obtained by \cite{Leiderschneider:2015kwa} as a reference. The maximal efficiencies and maximal energies are enhanced by a factor of two or more when the spin effect is taken into account. Following \cite{Leiderschneider:2015kwa}, we use the symbols {\bf MMM}+, {\bf PMP}+, {\bf MPM}+ for each process, where + denotes the case of $\sigma_1=1$. } \label{summary} \end{center} \end{table} \end{widetext} ${\bf [1]}$ \underline{Nonextreme black hole}\\ The spin of an astrophysical black hole may not exceed $a/M=0.998$, as pointed out by Thorne\cite{thorne1974disk}. Hence we should also analyze the efficiency for a nonextreme black hole.\\[.2em] ${\bf [2]}$ \underline{Super-Penrose process }\\ We have not analyzed the case (4): Collision of two subcritical particles.
If $\sigma_1=1$, which is not a natural initial condition for a subcritical particle, there is no upper bound for the efficiency\cite{Leiderschneider:2015kwa}. This super-Penrose process may be interesting to study for spinning particles too, although there still exists a question about its initial setup\cite{Zaslavskii:2015fqy}. Recently it was discussed in \cite{Liu:2018myg}, but the timelike condition was not taken into account.\\[.2em] ${\bf [3]}$ \underline{Spin transfer}\\ Since the spin plays an important role in the efficiency, it is also interesting to discuss a transfer of spins, for example, from $s_1=s_2=s_{\rm min}\approx -0.27$ to $s_3=0$ and $s_4=2s_{\rm min}\approx -0.54$.\\[.2em] ${\bf [4]}$ \underline{Collision of particles in off-equatorial-plane orbits}\\ In \cite{Leiderschneider:2015kwa}, the collision of particles in off-plane orbits was also analyzed, which gives the maximal efficiency for the case of $\sigma_1=-1$. Although it may be interesting to analyze orbits not in the equatorial plane, the equations of motion for a spinning particle are not integrable. As a result, such an analysis would be very difficult. \\[.2em] ${\bf [5]}$ \underline{Back reaction effect}\\ In this paper, we have adopted a test particle approximation. However, because of the lack of back reaction, it may not reveal the proper upper bound on the efficiency of the energy extraction. In the Reissner-Nordstr\"{o}m spacetime, we could perform such an analysis for the collision of charged shells \cite{Nakao:2017xwe}. However, it would be difficult to analyze the back reaction effect in the Kerr black hole background, although it is important. Finally, one may ask how large the magnitude of the spin can be in a realistic astrophysical system, since we have assumed theoretically (or logically) allowed values of the spin in this paper.
The orbital angular momentum is given by $|\vect{L}|=|\vect{r}\times \vect{p}|\sim R_{\rm orbit} \times \mu v\sim O(\mu M)$, while the spin angular momentum is $s\sim R_{\rm body}\times \mu v\sim O(\mu^2)$. Hence the ratio $s/L\sim R_{\rm body}/R_{\rm orbit}$ should be small for a test particle approximation. In fact, if a test particle is a black hole ($s\leq \mu^2$), we find $s/\mu M = s/\mu^2 \times (\mu/M)\ll 1$. Hence the value assumed here may be too large for astrophysical objects. However, for a fast rotating star, $s$ can be much larger than $\mu^2$. For example, we find $s/\mu^2\sim 500$ for the fast rotator $\alpha$ LEONIS (REGULUS) \cite{0004-637X-628-1-439}. Hence the validity of the test particle approximation would be marginal in this case. The present spin effect might become important when we extend the analysis beyond the test particle limit, including nonlinear or nonperturbative processes. \section*{Acknowledgments} We would like to thank Tomohiro Harada and Kota Ogasawara for useful discussions. This work was supported in part by JSPS KAKENHI Grants No. JP16K05362 (KM) and No. JP17H06359 (KM). ~~\\ \newpage
1804.07338
\section{Introduction} The first interpretation of nuclear fission was given about eight decades ago, though many features of this process are still at a rudimentary stage of understanding. The discovery of nuclear fission \cite{hanh39} was interpreted as an evolution of the nuclear shape in which a single compound nucleus splits into two receding fragments \cite{meit39,bohr39}. This conceptual framework, within the macroscopic-microscopic approach to the calculation of nuclear binding energies, provides a powerful theoretical tool for studies of low-energy fission dynamics. Further analysis, from microscopic theories to the exploration of the fission dynamics, is also a prime objective of present-day nuclear physics. In order to explain the fission properties of superheavy nuclei, it is essential to determine the shape (i.e., the height and width) of the barriers and the shape degrees of freedom \cite{bohr39,hof00,oga07,oga10,moll09,liu11}. In the early days, the fission shapes were investigated by minimizing the sum of the Coulomb and surface energies using an expansion of the radius in the Liquid Drop Model (LDM). Recently, fusion studies have shown that the effects of the nuclear forces in the neck region (i.e., the gap between the two fragments) of the deformed valley are indeed needed for optimizing the proximity energy of the fission process. This goal is more or less reached by studies within the macroscopic-microscopic (mic-mac) model \cite{moll01,dobr07,iva09,kowa10,roye12,zhong14}, the extended Thomas-Fermi with Strutinsky integral (ETFSI) method \cite{mamd98,ghys15}, the non-relativistic Skyrme-Hartree-Fock approach \cite{bur04,samy07,mina09,pei09,godd15,zhu16}, the Gogny force \cite{egi00,ward05,ward12,ber17}, and relativistic mean field models \cite{bend03,lu06,sat10,bhu11,lu12,prasa13,lu14,bhu15,schu16}. The use of the adiabatic approximation in the fission process leads to an interpretation in terms of the potential energy surface (PES), an analogue of the classical phase space of Lagrangian and Hamiltonian mechanics.
The fission point of a nucleus can be determined from the total nuclear potential energy as a function of the shape coordinates, relative to the ground state, at the most favorable saddle point, where the configuration evolves from a single nucleus into two separated fragments. The current way to deal with the splitting fragments relies on the most relevant collective variables of the nuclear shape, such as the elongation, reflection asymmetry and neck structure, which can be described by multipole deformations \cite{ward02,ward05,lu14,bhu15}. Furthermore, a critical feature of the fission process is the multiplicity of neutrons and/or small N=Z nuclei emitted from the two fragments at the post-scission point, after the fragments are accelerated by the mutual Coulomb repulsion \cite{mad85,sama95,sat08,sat10}. In this process, the neck is believed to be neutron rich and more favorable for neutron emission than for proton and/or $\alpha$-particle emission. At present, it is not possible to ascertain the true composition of the neck experimentally, although it has the potential to reveal many important aspects of the fission dynamics. The PES spanned by the relevant degrees of freedom of a fissile nucleus can be used to reveal a static fission path, the fission lifetime, the masses of the fragments and also many features of the fission dynamics \cite{stas99,berg00,ward02,ward12a,ber17,sat10,lu14,bhu15,schu16}. Generating the neck structure of actinide nuclei and determining the constituents of the neck (i.e., the average neutron-proton asymmetry and the neutron multiplicity) quantitatively can be used to benchmark the predictive power of theoretical models \cite{mad85,koep88,fink89,sat10,bhu11,bhu15,lu14,ber17}.
Such a study would be a step forward in the understanding of the fission dynamics of actinide nuclei \cite{sat10,lu14,schu16} and of the synthesis processes in the experimental laboratories at present available and/or under construction around the world \cite{leino95,gross00,sun03,wink08,sakurai08,muller91,geissel92,rodin03,thoe10}. Further, the composition of the neck in the fission state of actinide nuclei may carry information regarding the formation of the elements in the rapid neutron capture process (i.e., the $r$-process) of nucleosynthesis in stellar evolution \cite{gori11,koro12,just15}. In the present study we examine the properties of the fission state of actinides using the axially deformed relativistic mean field (RMF) model. This paper is organized as follows: in Sec. II we outline our scheme of calculation within the relativistic mean field approach. The calculations and results are given in Sec. III. Finally, a summary and brief conclusions are given in Sec. IV. \section{Theoretical formalisms} The microscopic self-consistent mean field calculation is one of the standard tools to investigate the properties of infinite nuclear matter and nuclear structure phenomena \cite{bur04,samy07,stas99,godd15,zhu16,berg00,ward12a,ber17,lu14,bhu15,schu16,bogu77,sero86,ring86}. The relativistic mean field (RMF) approach is one of the most popular and widely used formalisms among them. It starts with a basic Lagrangian that describes nucleons as Dirac spinors interacting through different meson fields.
The relativistic mean field Lagrangian density, which includes several modifications of the original Walecka Lagrangian \cite{bogu77,sero86} to account for its limitations in describing a nucleon-meson many-body system \cite{bogu77,sero86,ring86,lala99c,bhu09,rein89,ring96,vret05,meng06,niks11,logo12,zhao12,lala09,ring96a,niko92,bur02,fuch95,niks02,bro92,ring90,lala97}, is \begin{eqnarray} {\cal L}&=&\overline{\psi}\{i\gamma^{\mu}\partial_{\mu}-M\}\psi +{\frac12}\partial^{\mu}\sigma \partial_{\mu}\sigma \nonumber \\ && -{\frac12}m_{\sigma}^{2}\sigma^{2}-{\frac13}g_{2}\sigma^{3} -{\frac14}g_{3}\sigma^{4} -g_{s}\overline{\psi}\psi\sigma \nonumber \\ && -{\frac14}\Omega^{\mu\nu}\Omega_{\mu\nu}+{\frac12}m_{w}^{2}\omega^{\mu}\omega_{\mu} -g_{w}\overline\psi\gamma^{\mu}\psi\omega_{\mu} \nonumber \\ &&-{\frac14}\vec{B}^{\mu\nu}\cdot\vec{B}_{\mu\nu}+\frac{1}{2}m_{\rho}^2\vec{\rho}^{\mu}\cdot\vec{\rho}_{\mu} -g_{\rho}\overline{\psi}\gamma^{\mu}\vec{\tau}\psi\cdot\vec{\rho}^{\mu} \nonumber \\ &&-{\frac14}F^{\mu\nu}F_{\mu\nu}-e\overline{\psi} \gamma^{\mu} \frac{\left(1-\tau_{3}\right)}{2}\psi A_{\mu}. \label{lag} \end{eqnarray} Here $\psi$ is the Dirac spinor for the nucleon, whose third component of isospin is denoted by $\tau_{3}$. The quantities $g_{s}$, $g_{w}$, $g_{\rho}$ and $\frac{e^2}{4\pi}$ are the coupling constants for the $\sigma$-, $\omega$-, $\rho$-mesons and the photon, respectively. The constants $g_2$ and $g_3$ describe the self-interacting non-linear $\sigma$-meson field. The masses of the $\sigma$-, $\omega$-, $\rho$-mesons and of the nucleons are $m_{\sigma}$, $m_{w}$, $m_{\rho}$ and $M$, respectively. The quantity $A_{\mu}$ stands for the electromagnetic field.
The field tensors for the photon, the $\omega^{\mu}$ field and the $\vec{\rho}^{\mu}$ field are given by \begin{eqnarray} F_{\mu\nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu} \\ \Omega_{\mu\nu} = \partial_{\mu} \omega_{\nu} - \partial_{\nu} \omega_{\mu} \end{eqnarray} and \begin{eqnarray} \vec{B}_{\mu\nu} = \partial_{\mu} \vec{\rho}_{\nu} - \partial_{\nu} \vec{\rho}_{\mu}, \end{eqnarray} respectively. From the above Lagrangian, we obtain the field equations for the nucleons and mesons. These equations are solved by expanding the upper and lower components of the Dirac spinors and the boson fields in an axially deformed harmonic oscillator basis with an initial deformation $\beta_{0}$. The set of coupled equations is solved numerically by a self-consistent iteration method \cite{horo81,boguta81,price87,fink89}. The center-of-mass motion energy correction is estimated by the harmonic oscillator formula $E_{c.m.}=\frac{3}{4}(41A^{-1/3})$. The quadrupole deformation parameter $\beta_2$ is evaluated from the resulting proton and neutron quadrupole moments as \begin{eqnarray} Q=Q_n+Q_p=\sqrt{\frac{16\pi}5} (\frac3{4\pi} AR^2\beta_2). \end{eqnarray} The root mean square (rms) matter radius is obtained from \begin{eqnarray} \langle r_m^2\rangle=\frac{1}{A}\int\rho(r_{\perp},z) r^2d\tau, \end{eqnarray} where $A$ is the mass number and $\rho(r_{\perp},z)$ is the axially deformed density. We obtain the potentials, nucleon densities, single-particle energy levels, nuclear radii, quadrupole deformations and binding energies for a given nucleus. A converged ground state along with various constrained solutions can be obtained at different deformations, including the fission state of a nucleus (see the potential energy surface). To deal with the nuclear bulk properties of open-shell nuclei, one has to consider the pairing correlations \cite{karat10}.
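As a concrete illustration of the center-of-mass correction formula above, a short sketch evaluates $E_{c.m.}=\frac{3}{4}(41A^{-1/3})$ (in MeV) for the mass numbers considered in this work:

```python
def e_cm(A):
    # Harmonic oscillator estimate of the center-of-mass energy correction (MeV)
    return 0.75 * 41.0 * A ** (-1.0 / 3.0)

# E_c.m. decreases slowly with mass number across the Cm and Cf isotopes:
corrections = {A: round(e_cm(A), 2) for A in (242, 244, 248, 252, 254)}
```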
There are various methods, such as the BCS approach, the Bogoliubov transformation and the particle-number-conserving method, that have been developed to treat the pairing effects in the study of nuclear properties, including fission barriers \cite{zeng83,moli97,hao12}. The Bogoliubov transformation is a widely used method to take the pairing correlations into account in the drip-line region \cite{vret05,ring96a,meng06,lala99a}. For nuclei not too far from the $\beta$-stability line, the constant-gap BCS pairing approach provides a reasonably good description of pairing \cite{doba84}. The present analysis is based on the superheavy mass nuclei around the $\beta$-stability line, hence the relativistic mean field results with the BCS treatment should be applicable. Further, to avoid difficulties in the calculations, we have employed the constant-gap BCS approach for the present mass region \cite{mad81,moll88,bhu09,bhu15,bhu18}. \begin{table*} \caption{The RMF (NL3$^*$) results for the binding energy (BE), root-mean-square charge radii $r_{ch}$ and the quadrupole deformation parameter $\beta_2$ for $^{242,244,246,248}$Cm and $^{248,250,252,254}$Cf nuclei. The ground state and the constrained minima for the first, second and fission states are given in the 1$^{st}$, 2$^{nd}$, 3$^{rd}$, and 4$^{th}$ rows for each nucleus. The Finite-Range-Droplet-Model \cite{moll95,moll97}, Hartree-Fock + BCS \cite{gori01} and the experimental data \cite{audi13,angeli13,raman01} for the ground state configurations are given for comparison, wherever available. The energies are in MeV and the radii in fm.} \renewcommand{\tabcolsep}{0.12cm} \renewcommand{\arraystretch}{1.45} \begin{tabular}{cccccccccccccccccc} \hline \hline Nucleus & \multicolumn{3}{c}{Binding Energy} & \multicolumn{3}{c}{Charge Radius} & \multicolumn{3}{c}{Quadrupole Deformation} \\ \hline & RMF & Expt. \cite{audi13} & FRDM \cite{moll95} & RMF & Expt. \cite{angeli13} & HFBCS \cite{gori01} & RMF & Expt.
\cite{raman01} & FRDM \cite{moll97} & HFBCS \cite{gori01} \\ \hline $^{242}$Cm & 1823.92 & 1823.3 & 1823.05 & 5.933 & 5.8285& 5.90 & 0.287 & $--$ & 0.224 & 0.25\\ & 1822.82 & & & 6.560 & & & 0.969 & & & \\ & 1822.51 & & & 8.143 & & & 2.313 & & & \\ & 1693.64 & & & 11.089& & & 5.036 & & & \\ $^{244}$Cm & 1836.24 & 1835.8 & 1835.79 & 5.946 & 5.8429& 5.91 & 0.293 & 0.2972(17)& 0.234 & 0.25 \\ & 1835.12 & & & 6.554 & & & 0.959 & & & \\ & 1821.33 & & & 8.455 & & & 2.475 & & & \\ & 1704.21 & & & 11.086& & & 5.010 & & & \\ $^{246}$Cm & 1847.34 & 1847.8 & 1847.86 & 5.947 & 5.8475& 5.93 & 0.293 & 0.2983(19)& 0.234 & 0.27 \\ & 1845.75 & & & 6.553 & & & 0.921 & & & \\ & 1833.16 & & & 8.449 & & & 2.464 & & & \\ & 1714.82 & & & 10.982& & & 4.984 & & & \\ $^{248}$Cm & 1860.63 & 1859.2 & 1859.28 & 5.959 & 5.8562& 5.94 & 0.290 & 0.2972(19)& 0.235 & 0.28 \\ & 1859.31 & & & 6.556 & & & 0.916 & & & \\ & 1844.72 & & & 8.474 & & & 2.453 & & & \\ & 1724.72 & & & 10.965& & & 4.957 & & & \\ $^{248}$Cf & 1861.11 & 1857.8 & 1857.82 & 5.990 & $--$ & 5.95 & 0.288 & $--$ & 0.235 & 0.25 \\ & 1859.83 & & & 6.624 & & & 0.969 & & & \\ & 1847.22 & & & 8.554 & & & 2.490 & & & \\ & 1726.41 & & & 11.115& & & 4.973 & & & \\ $^{250}$Cf & 1872.90 & 1870.0 & 1870.29 & 6.001 & $--$ & 5.96 & 0.285 & 0.299 (15)& 0.245 & 0.28 \\ & 1871.81 & & & 6.641 & & & 0.967 & & & \\ & 1859.51 & & & 8.568 & & & 2.479 & & & \\ & 1736.83 & & & 11.076& & & 4.945 & & & \\ $^{252}$Cf & 1883.82 & 1881.3 & 1881.32 & 6.011 & $--$ & 5.97 & 0.278 & $--$ & 0.236 & 0.25\\ & 1882.64 & & & 6.681 & & & 1.081 & & & \\ & 1871.61 & & & 8.581 & & & 2.461 & & & \\ & 1710.73 & & & 10.972& & & 4.884 & & & \\ $^{254}$Cf & 1893.25 & 1892.2 & 1891.69 & 6.022 & $--$ & 5.97 & 0.272 & $--$ & 0.226 & 0.24 \\ & 1891.96 & & & 6.987 & & & 1.083 & & & \\ & 1820.45 & & & 8.593 & & & 2.460 & & & \\ & 1820.73 & & & 10.843& & & 4.838 & & & \\ \hline \hline \end{tabular} \label{tab1} \end{table*} \begin{figure} \begin{center} 
\includegraphics[width=1.0\columnwidth]{potential.pdf} \caption{\label{fig1} (Color online) The RMF (NL3$^*$) potential energy surfaces (PES) of $^{242,248}$Cm and $^{248,252}$Cf as a function of the quadrupole deformation parameter $\beta_2$ are displayed with the empirical values \cite{capo09} for the first and second barrier heights. Note that reflection symmetry is assumed in the present calculation. Heights are in MeV. See text for details.} \end{center} \end{figure} \section{Calculations and Results} In the relativistic mean field model, we performed self-consistent calculations for a maximum boson major shell number $N_B$ = 20, varying the maximum nucleon major shell number $N_F$ from 14 to 24, to verify the convergence of the solutions for different inputs of the initial deformation $\beta_0$ for the ground state \cite{lala97,lala09,ring90,bhu09}. From the results obtained, we found that the relative variations of the ground state solutions are $\leq$ 0.004$\%$ for the binding energy and 0.002$\%$ for the nuclear radius. In the case of the fission state solutions, the binding energy and nuclear radius vary by $\leq$ 0.01$\%$ and 0.006$\%$, respectively, over the range of fermion major shell number $N_F$ from 16 to 28 for $N_B$ = 24. Hence, we fixed the number of major shells for fermions and bosons at $N_F$ = $N_B$ = 20 for the ground state and $N_F$ = $N_B$ = 24 for the fission state of the considered mass region. The numbers of mesh points for the Gauss-Hermite and Gauss-Laguerre integrations are $20$ and $24$, respectively. We have used the recently developed NL3$^*$ force \cite{lala09} for the present analysis, which is a version of the NL3 force \cite{lala97} refitted to improve the description of the properties of neutron- and/or proton-rich exotic and superheavy nuclei \cite{lala09,bhu09,bhu11}. For a given nucleus, we find various constrained solutions, including the fission state, along with the ground state (see the potential energy curve in Fig. 
\ref{fig1}). The calculated bulk properties such as the binding energy (BE), root-mean-square (rms) charge radius, and quadrupole deformation $\beta_2$ for the ground state, the first and second constrained minima, and the fission solution are given in the first, second, third and fourth rows for a given nucleus, respectively. The results obtained with the NL3$^*$ force are listed together with the predictions of the Finite-Range-Droplet-Model (FRDM) \cite{moll95,moll97}, Hartree-Fock + BCS (HFBCS) \cite{gori01} and the experimental data \cite{audi13,angeli13,raman01}. Since BE values are not available from the HFBCS predictions, we have listed only the rms charge radius $r_{ch}$ and the quadrupole deformation $\beta_2$ for comparison. We find that the ground state binding energies, charge radii and $\beta_2$ values agree well with the available experimental data \cite{audi13,angeli13,raman01} and the theoretical predictions \cite{moll95,moll97,gori01}. As discussed above, all the isotopes of Cm and Cf are shown to have several intrinsic minima, each corresponding to a different quadrupole deformation. For example, the ground state (g.s.), first excited state, second excited state and fission state deformations $\beta_2$ for $^{242}$Cm are 0.287, 0.969, 2.313 and 5.036, respectively. Similarly, the values are 0.288, 0.969, 2.490, and 4.973 for $^{248}$Cf. All other isotopes and their deformations for the various minima, including the fission state, are listed in Table \ref{tab1}. The solutions corresponding to the highly deformed (hyper-deformed) configuration of $\beta_2 \sim$ 2.4 provide a clear picture of the pre-fission state for all isotopes. In other words, smooth hyper-deformed solutions are followed by the fission configurations for all the isotopes considered in the present study. Further, the rms charge radius $r_{ch}$ gradually increases with increasing quadrupole deformation for a given nucleus. 
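The constant-gap BCS treatment used here reduces, for each nucleus, to fixing the chemical potential from the particle-number condition at a fixed pairing gap $\Delta$. A minimal numerical sketch of this step, with an illustrative equidistant single-particle spectrum and a gap of order $12/\sqrt{A}$ MeV (both are assumptions for demonstration, not the actual RMF spectrum):

```python
import numpy as np

# Constant-gap BCS occupations:
#   v_k^2 = (1/2) * [1 - (e_k - lam) / sqrt((e_k - lam)^2 + Delta^2)],
# with the chemical potential lam fixed by sum_k 2 v_k^2 = N
# (each level k holds a time-reversed pair).
def bcs_occupations(energies, gap, n_particles):
    def number(lam):
        v2 = 0.5 * (1.0 - (energies - lam) / np.sqrt((energies - lam) ** 2 + gap ** 2))
        return 2.0 * v2.sum()

    lo, hi = energies.min() - 10 * gap, energies.max() + 10 * gap
    for _ in range(200):                  # bisection: number(lam) is monotone in lam
        mid = 0.5 * (lo + hi)
        if number(mid) < n_particles:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    v2 = 0.5 * (1.0 - (energies - lam) / np.sqrt((energies - lam) ** 2 + gap ** 2))
    return lam, v2

# Hypothetical equidistant spectrum (MeV); gap ~ 12/sqrt(A) MeV for A ~ 244
eps = np.linspace(-20.0, 20.0, 21)
lam, v2 = bcs_occupations(eps, gap=0.77, n_particles=22)
```

Bisection suffices because the particle number is strictly increasing in the chemical potential; deep hole states come out fully occupied and states far above the Fermi surface essentially empty.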
\begin{table} \caption{The RMF (NL3$^*$) results for the first and second barrier heights of even-even isotopes of Cm and Cf nuclei are compared with the empirical values (Emp.) \cite{capo09}. Note that the reflection symmetry is assumed. Heights are in MeV.} \renewcommand{\tabcolsep}{0.25cm} \renewcommand{\arraystretch}{1.45} \begin{tabular}{cccccccccccccccccc} \hline \hline Nucleus & \multicolumn{2}{c}{First barrier} & \multicolumn{2}{c}{Second barrier} \\ & RMF & Emp. \cite{capo09} & RMF & Emp. \cite{capo09} \\ \hline $^{242}$Cm & 7.92 & 6.65 & 5.76 & 5.10 \\ $^{244}$Cm & 7.75 & 6.18 & 5.17 & 5.00 \\ $^{246}$Cm & 7.13 & 6.00 & 5.00 & 4.80 \\ $^{248}$Cm & 6.84 & 5.80 & 4.93 & 4.80 \\ $^{248}$Cf & 8.13 & $--$ & 3.33 & $--$ \\ $^{250}$Cf & 8.06 & 5.60 & 2.83 & 3.80 \\ $^{252}$Cf & 7.98 & 5.30 & 2.53 & 3.50 \\ $^{254}$Cf & 7.56 & $--$ & 1.79 & $--$ \\ \hline \hline \end{tabular} \label{tab2} \end{table} \subsection{Potential Energy Surface} The potential energy surface (PES) is calculated by using the relativistic mean field formalism in a constrained procedure \cite{bhu09,bhu11,bhu15,flocard73,koepf88,kara10,lu14}, i.e., instead of minimizing $H_0$ we minimize $H'=H_0-\lambda Q_{2}$, where $\lambda$ is a Lagrange multiplier and $Q_2$ is the quadrupole moment. The term $H_0$ is the Dirac mean field Hamiltonian of the RMF model (the notation is standard and its form can be found in Refs. \cite{ring90,bhu15}). In other words, we obtain the constrained solution from the minimization of $\sum_{ij}\frac{\langle\psi_i|H_0-\lambda Q_2|\psi_j\rangle}{\langle\psi_i|\psi_j\rangle}$ and calculate the constrained binding energy using $H_0$. The unconstrained energy is obtained from the minimization of $\sum_{ij}\frac{\langle\psi_i|H_0|\psi_j\rangle}{\langle\psi_i|\psi_j\rangle}$. The converged energy solution does not depend on the initial guess of the basis deformation $\beta_0$ as long as it is close to the minimum in the PES. 
However, it converges to a different local minimum when $\beta_0$ is drastically different, and in this way we obtain the various intrinsic isomeric states of a given nucleus. Note that reflection symmetry is assumed in the calculation of the potential energy surface for the even$-$even isotopes of the Cm and Cf nuclei considered. The potential energy surfaces for the $^{242,246}$Cm (left panel) and $^{250,252}$Cf (right panel) nuclei are shown in Fig. \ref{fig1} for a wide range of $\beta_2$, from the spherical to the hyperdeformed prolate configuration. The cross (X) signs in both panels represent the empirical values \cite{capo09} of the first and second barrier heights of the respective nucleus. We find a multi-minima structure in the PES for each isotope. In Fig. \ref{fig1}, we show the PES of $^{242,246}$Cm and $^{250,252}$Cf as representative cases. From the figure, one can notice that two major minima exist at $\beta_2\approx$ 0.29 and 0.95 for both the $^{242}$Cm and $^{246}$Cm nuclei (see left panel of Fig. \ref{fig1}). Similarly, minima appear for the $^{250,252}$Cf nuclei at $\beta_2\approx$ 0.28 and 0.95. We found similar results for all the considered isotopes of Cm and Cf nuclei. The calculated first and second barrier heights for all the isotopes, along with the empirical values \cite{capo09}, are listed in Table \ref{tab2}. We notice that the quadrupole deformation parameters and the barrier heights obtained from our calculations agree reasonably with the empirical values \cite{angeli13,capo09} for the isotopic chains of Cm and Cf nuclei, wherever available. For example, the obtained first and second barrier heights for $^{242}$Cm are 7.92 and 5.76 MeV, respectively (see Table \ref{tab2}). Similarly, the values are 8.06 and 2.83 MeV, respectively, for $^{250}$Cf (see Table \ref{tab2}). 
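Barrier heights of this kind are read off from the PES as energy differences between an interior maximum and the ground-state minimum. A toy sketch of this extraction on an illustrative double-well curve (a stand-in polynomial, not the constrained RMF energy):

```python
import numpy as np

# Illustrative one-dimensional PES with two minima and a barrier in between;
# stands in for the constrained energy E(beta_2), purely for demonstration.
def pes(beta):
    return 100.0 * ((beta - 0.3) * (beta - 0.95)) ** 2

beta = np.linspace(0.0, 1.2, 1201)
energy = pes(beta)

# Interior local minima and maxima of the sampled curve
interior = np.arange(1, len(beta) - 1)
minima = interior[(energy[interior] < energy[interior - 1])
                  & (energy[interior] < energy[interior + 1])]
maxima = interior[(energy[interior] > energy[interior - 1])
                  & (energy[interior] > energy[interior + 1])]

ground = minima[np.argmin(energy[minima])]
barrier_height = energy[maxima[0]] - energy[ground]   # analogue of the "first barrier"
```

On this toy curve the minima sit at $\beta \approx 0.30$ and $0.95$ and the barrier at $\beta \approx 0.625$; in practice the same bookkeeping would be applied to the constrained RMF energies sampled over the $\beta_2$ grid.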
The corresponding empirical values of the first and second barrier heights are 6.65 and 5.10 MeV for $^{242}$Cm, and 5.60 and 3.80 MeV for $^{250}$Cf, respectively. Moreover, the calculated minima and/or barriers in the PES shift slightly towards larger values of the deformation $\beta_2$ along the isotopic chains. \begin{figure} \begin{center} \includegraphics[width=1.2\columnwidth]{Figure1.pdf} \caption{\label{fig2}(Color online) The evolution of static fission for the isotopes of $^{242}$Cm (left) and $^{248}$Cf (right) for different deformations $\beta_2$ corresponding to the possible minima obtained in the RMF formalism using the NL3$^*$ force parameter set. See text for details.} \vspace{-0.6cm} \end{center} \end{figure} \begin{figure} \vspace{-1.0cm} \begin{center} \includegraphics[width=1.15\columnwidth]{Figure2.pdf} \vspace{-2.5cm} \caption{\label{fig3} (Color online) The RMF (NL3$^*$) total (neutron + proton) matter density distribution for the fission states of the $^{242,244,246,248}$Cm nuclei. See text for details.} \end{center} \end{figure} \subsection{Nuclear Density Distribution} The nuclear structure and sub-structure described by the present calculations depend on the density distributions of the protons and neutrons in each corresponding state. The density distribution of the nucleus is influenced by the nuclear deformations, which play a prominent role in the fission study. Here, we calculate the densities in the positive quadrant of the plane parallel to the $z$-axis (i.e. the symmetry axis) and evaluate them in the $zr_{\perp}$ plane, where $x^2 + y^2 = r_{\perp}^2$. The space reflection symmetries about both the $z$ and $r_{\perp}$ axes are conserved in our formalism. The results for the density in the positive quadrant can therefore be reflected into the other quadrants to obtain a complete picture of the nucleus in the $zr_{\perp}$ plane. 
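The reflection of the positive-quadrant density into the full $zr_{\perp}$ plane is a simple array operation. A schematic sketch with a model axially deformed Gaussian density (illustrative only, not the RMF output):

```python
import numpy as np

# Model density on the positive quadrant (z >= 0, r_perp >= 0); a deformed
# Gaussian stands in for the RMF matter density, for illustration only.
z = np.linspace(0.0, 12.0, 121)
r = np.linspace(0.0, 8.0, 81)
Z, R = np.meshgrid(z, r, indexing="ij")
rho_quadrant = 0.16 * np.exp(-(Z / 6.0) ** 2 - (R / 4.0) ** 2)

# Reflect about both axes (the conserved space-reflection symmetries):
# mirror in z, then in r_perp, dropping the duplicated axis line each time.
rho_z = np.concatenate([rho_quadrant[:0:-1, :], rho_quadrant], axis=0)   # full z range
rho_full = np.concatenate([rho_z[:, :0:-1], rho_z], axis=1)              # full r range
```

The reflected array is symmetric under both mirror operations by construction, which is exactly the property the conserved reflection symmetries guarantee for the physical density.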
The unbroken space reflection symmetries of our numerical procedure eliminate the odd-multipole (octupole, etc.) shape degrees of freedom. In other words, the approach cannot describe an asymmetric partition of particles, which therefore will not be properly clustered in the asymptotic limit. Nevertheless, the present study demonstrates the applicability of the RMF for studying the nuclear fission phenomenon and provides scope for understanding the nuclear structure of even-even nuclei. Further, it furnishes an indication of the nuclear structure and the various sub-structures of the deformed states, including the fission state. The present calculations are performed in an axially deformed coordinate space; consideration of a more general coordinate space might resolve some of these issues and throw more light on the sub-structure of nuclei, which may be an interesting topic for future work. In Fig. \ref{fig2}, we present typical examples of the matter density distributions of the $^{242}$Cm and $^{248}$Cf nuclei for all possible solutions, starting from the ground state up to the static fission configuration with a neck. The shapes of the $^{242}$Cm and $^{248}$Cf nuclei follow the deformed ground state solution around $\beta_2\approx$ 0.29, with the super-deformed and hyper-deformed prolate solutions obtained around $\beta_2\approx$ 0.97 and 2.35, respectively. Further, a well-defined dumbbell shape of the neck configuration is reproduced in the RMF study as a solution of the microscopic nuclear many-body Hamiltonian around $\beta_2 \approx$ 4.50, in agreement with the age-old classical liquid drop picture of the fission process. The physical characteristics of the neck structures for the isotopic chains of the Cm and Cf systems emerging from this study will be discussed later. From Fig. 
\ref{fig2}, the internal configurations of the $^{242}$Cm and $^{248}$Cf nuclei are quite evident, and similar structures can be found for all the considered isotopes of Cm and Cf. The color code runs from deep red at the maximum density to blue at the minimum density, so one can analyze the distribution of nucleons inside the various isotopes for the various shapes (in black-and-white prints, deep black corresponds to the maximum and light gray to the minimum density). The density ranges from a minimum of 0.001 $fm^{-3}$ up to a maximum of 0.16 $fm^{-3}$ for all the shapes (see Fig. \ref{fig2}). One notices that the central density region ($\rho \approx$ 0.16 $fm^{-3}$) becomes elongated with deformation instead of changing in magnitude (see Table \ref{tab1} and Fig. \ref{fig2}). Here, we also find neck structures (i.e. an elongated shape with a clear-cut neck before scission) similar to those of the microscopic studies using the constrained method with the Gogny interaction \cite{dubr08} and the Skyrme-Hartree-Fock approach \cite{bonn06}. In other words, the fissioning systems energetically favor splitting into two separate fragments by developing an elongated shape with a neck. Since our objective has been to critically study the neck configurations, we present the matter density distributions for the fission states of the four isotopes of Cm and of Cf in Figs. \ref{fig3} and \ref{fig4}, respectively. The binding energies, rms charge radii and quadrupole deformations of the neck configurations for $^{242,244,246,248}$Cm and $^{248,250,252,254}$Cf can be seen in Table \ref{tab1}. As can be seen in Fig. \ref{fig1}, the neck configurations lie $\approx$ 15 MeV below the respective ground states, in conformity with the expectation and in agreement with our general notion of fission dynamics. 
Further, the rms charge radii of the neck configurations are nearly twice those of the ground state, around 11 fm, as expected. From Figs. \ref{fig3} and \ref{fig4}, it is clear that all the isotopes undergo symmetric fission, which is a limitation of the present model. Here, we see how far the neck structure of these isotopes conforms to reality from the calculated values of the first and second barrier heights, which agree reasonably with the empirical values (see Fig. \ref{fig1} and Table \ref{tab2}). \begin{figure} \vspace{-1.0cm} \begin{center} \includegraphics[width=1.13\columnwidth]{Figure3.pdf} \vspace{-2.5cm} \caption{\label{fig4}(Color online) The RMF (NL3$^*$) total (neutron + proton) matter density distribution for the fission states of the $^{248,250,252,254}$Cf nuclei. See text for details.} \end{center} \end{figure} \begin{table*} \caption{The RMF(NL3$^*$) characteristics of the neck configurations, such as the quadrupole deformation ($\beta_2$), the charge radius $r_{ch}^{nk}$ of the fission state, the average neutron ($\overline{\rho}_{n}^{nk}$) and proton ($\overline{\rho}_{p}^{nk}$) densities and their ratio ($\frac{\overline{\rho}_{n}^{nk}}{\overline{\rho}_{p}^{nk}}$) in the neck region, the dimension and length (L$^{nk}$) of the neck, and the numbers of neutrons (N$^{nk}$) and protons (Z$^{nk}$) in the neck for $^{242,244,246,248}$Cm and $^{248,250,252,254}$Cf are presented. 
See text for details.} \renewcommand{\tabcolsep}{0.25cm} \renewcommand{\arraystretch}{1.45} \begin{tabular}{ccccccccccccccccccc} \hline \hline Nucleus & $\beta_2$ & $r_{ch}^{nk}$ & $\overline{\rho}_{n}^{nk}$ & $\overline{\rho}_{p}^{nk}$ & $\frac{\overline{\rho}_{n}^{nk}}{\overline{\rho}_{p}^{nk}}$ & Range & $L^{nk}$ & $N^{nk}$ & $Z^{nk}$ & $\frac{N^{nk}}{Z^{nk}}$ & Nucleus$^{nk}$ \\ & & & & & & ($r_1$,$r_2$; $z_1$,$z_2$) & & & \\ \hline $^{242}$Cm& 5.036& 11.089& 0.032& 0.035& 0.91& $\pm 2.28; \pm 1.25$& 4.56& 2.01& 2.01& 1.00& $^4$He \\ $^{244}$Cm& 5.010& 11.086& 0.041& 0.034& 1.21& $\pm 2.28; \pm 1.25$& 4.56& 2.09& 2.05& 1.02& $^4$He \\ $^{246}$Cm& 4.984& 10.982& 0.047& 0.033& 1.42& $\pm 2.28; \pm 1.25$& 4.56& 2.02& 2.01& 1.01& $^4$He \\ $^{248}$Cm& 4.957& 10.965& 0.052& 0.033& 1.57& $\pm 2.28; \pm 1.25$& 4.56& 2.06& 2.01& 1.02& $^4$He \\ $^{248}$Cf& 4.973& 11.115& 0.034& 0.037& 0.92& $\pm 2.27; \pm 1.26$& 4.52& 1.01& 0.94& 1.07& $^2$H \\ $^{250}$Cf& 4.945& 11.076& 0.046& 0.036& 1.28& $\pm 2.27; \pm 1.26$& 4.52& 1.05& 0.98& 1.07& $^2$H \\ $^{252}$Cf& 4.884& 10.972& 0.051& 0.035& 1.46& $\pm 2.27; \pm 1.26$& 4.52& 2.08& 2.01& 1.03& $^4$He \\ $^{254}$Cf& 4.838& 10.843& 0.055& 0.034& 1.62& $\pm 2.27; \pm 1.26$& 4.52& 2.09& 2.01& 1.04& $^4$He \\ \hline \hline \end{tabular} \label{tab3} \end{table*} \subsection{The Neck Characteristics} The calculated yields of the total numbers of neutrons $N^{nk}$ and protons $Z^{nk}$ contained in the neck are obtained by integrating the corresponding densities over the physical dimension of the neck. The numbers of nucleons in the neck region can be calculated as \begin{eqnarray} N^{nk}=\int\int \rho_n^{nk} (r_{\perp},z) d\tau, \label{countN} \end{eqnarray} and \begin{eqnarray} Z^{nk}=\int\int \rho_p^{nk} (r_{\perp},z) d\tau, \label{countZ} \end{eqnarray} where $\rho_n^{nk}$ and $\rho_p^{nk}$ are the calculated RMF neutron and proton density distributions of the nucleus in the neck configuration, respectively. 
We also define the mean neutron and proton densities of the neck as \begin{eqnarray} \overline{\rho}_{n,p}^{nk} = \frac{\int \rho_{n,p}^{nk} d\tau}{\int d\tau}. \label{avr} \end{eqnarray} From Eq. \ref{avr}, we estimate the average neutron $\overline{\rho}_{n}^{nk}$ and proton $\overline{\rho}_{p}^{nk}$ densities and their ratio $\overline{\rho}_{n}^{nk}/\overline{\rho}_{p}^{nk}$ for the neck region. The estimates for the neutron and/or proton constituents and their asymmetry are listed in Table \ref{tab3} for the $^{242,244,246,248}$Cm and $^{248,250,252,254}$Cf nuclei. As expected, $\overline{\rho}_{p}^{nk}$ remains similar for all the isotopes of both elements, being around 0.035 $fm^{-3}$ (see Table \ref{tab3}), while $\overline{\rho}_{n}^{nk}$ gradually increases with the neutron number along the isotopic chains of the Cm and Cf nuclei. Consequently, the neutron-to-proton density ratio $\overline{\rho}_{n}^{nk}/\overline{\rho}_{p}^{nk}$ also increases gradually with the neutron number, as expected. In the isotopic chain of the Cm nuclei, the ratio increases from 0.91 for $^{242}$Cm to 1.57 for $^{248}$Cm; the corresponding values are 0.92 for $^{248}$Cf and 1.62 for $^{254}$Cf (see Table \ref{tab3}). We have also estimated the length of the neck in the fission state, which is quite important for determining the neck constituents. The length of the neck $L^{nk}$ is the distance between the facing surfaces of the two nascent fragments. The width of the neck is less important for the estimation of the constituents via Eqs. \ref{countN} \& \ref{countZ}, because it only averages out the matter densities within $L^{nk}$. The length of the neck $L^{nk}$ and its constituents are listed in Table \ref{tab3}. From Table \ref{tab3}, one can also read off the charge radii of the neck configurations for all the isotopes, which are about 11 $fm$, with a well-defined neck and a fairly extended mass distribution evident in all cases. 
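Eqs. (\ref{countN})--(\ref{avr}) are cylindrical integrals with volume element $d\tau = 2\pi r_{\perp}\,dr_{\perp}\,dz$ over the neck region. A numerical sketch of the counting, validated on a uniform test density over neck dimensions like those quoted in Table \ref{tab3} (the uniform density and grid are assumptions; the actual input would be the RMF $\rho_{n,p}^{nk}$ on a grid):

```python
import numpy as np
try:                                     # NumPy >= 2.0 renamed trapz
    from numpy import trapezoid
except ImportError:
    from numpy import trapz as trapezoid

# Nucleons in the neck: N = int dz int 2*pi*r*rho(r, z) dr over
# 0 <= r <= r_max, z1 <= z <= z2 (axial symmetry; rho indexed as rho[z, r]).
def neck_count(rho, r, z):
    inner = trapezoid(2.0 * np.pi * r * rho, r, axis=1)   # integrate over r_perp
    return trapezoid(inner, z)                            # then over z

def neck_average(rho, r, z):
    return neck_count(rho, r, z) / neck_count(np.ones_like(rho), r, z)

# Check with a uniform density rho0 over r <= 2.28 fm, |z| <= 1.25 fm:
# exactly N = rho0 * pi * r_max^2 * L for a uniform cylinder.
r = np.linspace(0.0, 2.28, 229)
z = np.linspace(-1.25, 1.25, 251)
rho0 = 0.035                                              # fm^-3, typical neck density
rho = np.full((len(z), len(r)), rho0)
n_exact = rho0 * np.pi * 2.28 ** 2 * 2.5
```

For the uniform test density the trapezoidal rule is exact (the radial integrand is linear in $r_{\perp}$), so the numerical count reproduces the analytic cylinder result to machine precision.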
It is indeed interesting that heavy and superheavy nuclei acquire such an extended dumbbell configuration, supported by the nucleon-nucleon force \cite{sat08,brink96}. As we move from $^{242}$Cm to $^{248}$Cm, the neck neutron and proton numbers remain unchanged. A similar trend is seen for the Cf isotopes. It may be noticed that the magnitude of the ratio $N^{nk}/Z^{nk}$ is somewhat different from that of the average neutron-to-proton neck densities $\overline{\rho}_{n}^{nk}/\overline{\rho}_{p}^{nk}$ (see Table \ref{tab3}). This shows that the effective volume distributions of neutrons and protons are different in the neck region. The neutron-to-proton number ratio in the neck region found in our present calculation is about $1.02$ for all the isotopes of Cm and Cf nuclei. Hence, the neck can be considered a quasi-bound transient state of an N = Z nucleus; the neck nucleus correlated with this transient state is $^4$He for all the isotopes of Cm. In the case of Cf, the effective nucleus is $np$ for $^{248,250}$Cf and $^4$He for $^{252,254}$Cf. \section{Summary and Conclusions} In the present study, we have investigated the mechanism of fission decay and the shape of the nucleus by following the static fission path up to the configuration just before breakup. The well-established microscopic many-body nuclear Hamiltonian of the RMF theory is employed, and it reproduces the classical liquid-drop picture of the fission state. The actinide isotopes of Cm and Cf near the valley of stability have been studied in view of their relevance to stellar evolution. We found a deformed prolate configuration for the ground states of the isotopic chains of the Cm and Cf nuclei. Furthermore, a highly deformed configuration with a neck is found by using a very large basis consisting of as many as 24 oscillator shells, while for the ground state 20 shells are adequate. 
This study has revealed the anatomy of the neck in the fission state, such as its average neutron-proton asymmetry, its length and its composition. We found that the average neutron-to-proton ratio of the neck region progressively increases with the neutron number along the isotopic chains of the Cm and Cf nuclei. The neutron-to-proton number ratio found in our calculation is $1.02$, which may correlate with a quasi-bound and/or resonance state of a light N = Z nucleus and/or an $\alpha$-particle. The necks found in the calculations for these exotic nuclei suggest that, along with the two heavy fragments, an $\alpha$-particle might be emitted at scission for the considered isotopes of Cm and Cf, except $^{248,250}$Cf. In the case of $^{248,250}$Cf, we found the neck constituents to be $np$, together with the two symmetric fragments in the fission. Due to the symmetry in the neutron-proton ratio of the neck, it cannot be absorbed into the two fragments at scission, but instead breaks up by emitting these nucleons, which might be observed in scission mass-yield studies. This would have strong implications for the energy generation of the $r$-process nucleosynthesis in stellar evolution. \section*{Acknowledgments} This work has been supported by the FAPESP Project Nos. (2014/26195-5 \& 2017/05660-0), INCT-FNA Project No. 464898/2014-5, and by the CNPq - Brasil. The authors thank Shan-Gui Zhou for many fruitful discussions throughout this work.
\section{Introduction} Let $\mc G$ be a connected reductive group over a field $K$. It is well-known that conjugacy classes of parabolic $K$-subgroups correspond bijectively to subsets of the set of simple roots (relative to $K$). Further, two parabolic $K$-subgroups are $\mc G (K)$-conjugate if and only if they are conjugate by an element of $\mc G (\overline K)$. In other words, rational and geometric conjugacy classes coincide. By a Levi $K$-subgroup of $\mc G$ we mean a Levi factor of some parabolic $K$-subgroup of $\mc G$. Such groups play an important role in the representation theory of reductive groups, via parabolic induction. Conjugacy of Levi subgroups, also known as association of parabolic subgroups, has been studied less. Although their rational conjugacy classes are known (see \cite[Proposition 1.3.4]{Cas}), it appears that so far these have not been compared with geometric conjugacy classes. Let $\Delta_K$ be the set of simple roots for $\mc G$ with respect to a maximal $K$-split torus $\mc S$. For every subset $I_K \subset \Delta_K$ there exists a standard Levi $K$-subgroup $\mc L_{I_K}$. We will prove: \begin{thmintro}\label{thm:A} Let $\mc G$ be a connected reductive $K$-group. Every Levi $K$-subgroup of $\mc G$ is $\mc G (K)$-conjugate to a standard Levi $K$-subgroup. For two standard Levi $K$-subgroups $\mc L_{I_K}$ and $\mc L_{J_K}$ the following are equivalent: \begin{itemize} \item $I_K$ and $J_K$ are associate under the Weyl group $W(\mc G,\mc S)$; \item $\mc L_{I_K}$ and $\mc L_{J_K}$ are $\mc G (K)$-conjugate; \item $\mc L_{I_K}$ and $\mc L_{J_K}$ are $\mc G (\overline K)$-conjugate. \end{itemize} \end{thmintro} The first claim and the first equivalence are folklore and not hard to show. The meat of the theorem is the equivalence of $\mc G (K)$-conjugacy and $\mc G (\overline K)$-conjugacy, that is, of rational conjugacy and geometric conjugacy. 
Our proof of that equivalence involves reduction steps and a case-by-case analysis for quasi-split absolutely simple groups. It occupies Section \ref{sec:1} of the paper.\\ Our main result is a generalization of Theorem \ref{thm:A} to arbitrary connected linear algebraic groups. There we replace the notion of a Levi subgroup by that of a \emph{pseudo-Levi subgroup}. By definition, a pseudo-Levi $K$-subgroup of $\mc G$ is the intersection of two opposite pseudo-parabolic $K$-subgroups of $\mc G$. We refer to \cite[\S 2.1]{CGP} and the start of Section \ref{sec:2} for more background. For reductive groups, pseudo-Levi subgroups are the same as Levi subgroups. When $\mc G$ does not admit a Levi decomposition, these pseudo-Levi subgroups are the best analogues. In the representation theory of pseudo-reductive groups over local fields (of positive characteristic), these pseudo-Levi subgroups play a key role \cite[\S 4.1]{Sol}. We prove that Theorem \ref{thm:A} has a natural analogue in the "pseudo"-setting: \begin{thmintro}\label{thm:B} Let $\mc G$ be a connected linear algebraic $K$-group. Every pseudo-Levi $K$-subgroup of $\mc G$ is $\mc G (K)$-conjugate to a standard pseudo-Levi $K$-subgroup. For two standard pseudo-Levi $K$-subgroups $\mc L_{I_K}$ and $\mc L_{J_K}$ the following are equivalent: \begin{itemize} \item $I_K$ and $J_K$ are associate under the Weyl group $W(\mc G,\mc S)$; \item $\mc L_{I_K}$ and $\mc L_{J_K}$ are $\mc G (K)$-conjugate; \item $\mc L_{I_K}$ and $\mc L_{J_K}$ are $\mc G (\overline K)$-conjugate. \end{itemize} \end{thmintro} Our arguments rely mainly on the structure theory of linear algebraic groups and pseudo-reductive groups developed by Conrad, Gabber and Prasad \cite{CGP,CP}. The first claim and the first equivalence are quickly dealt with in Lemma \ref{lem:2.2}. Like for reductive groups, the hard part is the equivalence of rational and geometric conjugacy. 
The proof of that constitutes the larger part of Section \ref{sec:2}, from Theorem \ref{thm:2.7} onwards. We make use of Theorem \ref{thm:A} and of deep classification results about absolutely pseudo-simple groups \cite{CP}.\\ \textbf{Acknowledgements.} We thank Jean-Loup Waldspurger for explaining to us important steps in the proof of Theorem \ref{thm:A} and Gopal Prasad for pointing out some subtleties in \cite{CGP}. \vspace{5mm} \section{Connected reductive groups} \label{sec:1} Let $K$ be a field with an algebraic closure $\overline K$ and a separable closure $K_s \subset \overline K$. Let $\Gamma_K$ be the Galois group of $K_s / K$. Let $\mc G$ be a connected reductive $K$-group. Let $\mc T$ be a maximal torus of $\mc G$ with character lattice $X^* (\mc T)$. Let $\Phi (\mc G, \mc T) \subset X^* (\mc T)$ be the associated root system. We also fix a Borel subgroup $\mc B$ of $\mc G$ containing $\mc T$, which determines a basis $\Delta$ of $\Phi (\mc G, \mc T)$. For every $\gamma \in \Gamma_K$ there exists a $g_\gamma \in \mc G (K_s)$ such that \[ g_\gamma \gamma (\mc T) g_\gamma^{-1} = \mc T \quad \text{and} \quad g_\gamma \gamma (\mc B) g_\gamma^{-1} = \mc B . \] One defines the $\mu$-action of $\Gamma_K$ on $\mc T$ by \begin{equation}\label{eq:1.2} \mu_{\mc B}(\gamma) (t) = \mr{Ad}(g_\gamma) \circ \gamma (t) . \end{equation} This also determines an action $\mu_{\mc B}$ of $\Gamma_K$ on $\Phi (\mc G,\mc T)$, which stabilizes $\Delta$. Let $\mc S$ be a maximal $K$-split torus in $\mc G$. By \cite[Theorem 13.3.6.(i)]{Spr} applied to $Z_{\mc G}(\mc S)$, we may assume that $\mc T$ is defined over $K$ and contains $\mc S$. Then $Z_{\mc G}(\mc S)$ is a minimal $K$-Levi subgroup of $\mc G$. Let \[ \Delta_0 := \{ \alpha \in \Delta : \mc S \subset \ker \alpha \} \] be the set of simple roots of $(Z_{\mc G}(\mc S), \mc T)$. 
It is known that $\Delta_0$ is stable under $\mu_{\mc B}(\Gamma_K)$ \cite[Proposition 15.5.3.i]{Spr}, so $\mu_{\mc B}$ can be regarded as a group homomorphism $\Gamma_K \to \mr{Aut}(\Delta,\Delta_0)$. The triple $(\Delta ,\Delta_0, \mu_{\mc B})$ is called the index of $\mc G$ \cite[\S 15.5.5]{Spr}. Recall from \cite[Lemma 15.3.1]{Spr} that the root system $\Phi (\mc G, \mc S)$ is the image of $\Phi (\mc G, \mc T)$ in $X^* (\mc S)$, without 0. The set of simple roots $\Delta_K$ of $(\mc G, \mc S)$ can be identified with $(\Delta \setminus \Delta_0 ) / \mu_{\mc B}(\Gamma_K)$. The Weyl group of $(\mc G, \mc S)$ can be expressed in various ways: \begin{equation}\label{eq:1.1} \begin{aligned} W(\mc G,\mc S) & = N_{\mc G}(\mc S) / Z_{\mc G}(\mc S) \cong N_{\mc G (K)}(\mc S(K)) / Z_{\mc G (K)}(\mc S (K)) \\ & \cong N_{\mc G}(\mc S,\mc T) / N_{Z_{\mc G}(\mc S)}(\mc T) = \big( N_{\mc G}(\mc S,\mc T) / \mc T \big) \big/ \big( N_{Z_{\mc G}(\mc S)}(\mc T) / \mc T \big) \\ & \cong \mr{Stab}_{W(\mc G,\mc T)} (\mc S) / W(Z_{\mc G}(\mc S), \mc T) . \end{aligned} \end{equation} Let $\mc P_{\Delta_0} = Z_{\mc G}(\mc S) \mc B$ be the minimal parabolic $K$-subgroup of $\mc G$ associated to $\Delta_0$. It is well-known \cite[Theorem 15.4.6]{Spr} that the following sets are canonically in bijection: \begin{itemize} \item $\mc G (K)$-conjugacy classes of parabolic $K$-subgroups of $\mc G$; \item standard (i.e. containing $\mc P_{\Delta_0}$) parabolic $K$-subgroups of $\mc G$; \item subsets of $(\Delta \setminus \Delta_0 ) / \mu_{\mc B}(\Gamma_K)$; \item $ \mu_{\mc B}(\Gamma_K)$-stable subsets of $\Delta$ containing $\Delta_0$. \end{itemize} Comparing these criteria over $K$ and over $\overline K$, we see that two parabolic $K$-subgroups of $\mc G$ are $\mc G (K)$-conjugate if and only if they are $\mc G (\overline K)$-conjugate. By a parabolic pair for $\mc G$ we mean a pair $(\mc P,\mc L)$, where $\mc L \subset \mc P$ is a parabolic subgroup and $\mc L$ is a Levi factor of $\mc P$. 
We say that the pair is defined over $K$ if both $\mc P$ and $\mc L$ are so. By a Levi subgroup of $\mc G$ we mean a Levi factor of some parabolic subgroup of $\mc G$. Equivalently, a Levi $K$-subgroup of $\mc G$ is the centralizer of a $K$-split torus in $\mc G$. With \cite[Lemma 15.4.5]{Spr} every $\mu_{\mc B}(\Gamma_K)$-stable subset $I \subset \Delta$ containing $\Delta_0$ gives rise to a standard Levi $K$-subgroup $\mc L_I$ of $\mc G$, namely the group generated by $Z_{\mc G}(\mc S)$ and the root subgroups for roots in $\Z I \cap \Phi (\mc G,\mc T)$. By construction $\mc L_I$ is a Levi factor of the standard parabolic $K$-subgroup $\mc P_I$ of $\mc G$. In the introduction we denoted $\mc L_I$ by $\mc L_{I_K}$, where $I_K = (I \setminus \Delta_0) / \mu_{\mc B}(\Gamma_K)$. Two parabolic $K$-subgroups of $\mc G$ are called associate if their Levi factors are $\mc G (K)$-conjugate. As Levi factors are unique up to conjugation (see the proof of Lemma \ref{lem:1}.a below), there is a natural bijection between the set of $\mc G (K)$-conjugacy classes of Levi $K$-subgroups of $\mc G$ and the set of association classes of parabolic $K$-subgroups of $\mc G$. The explicit description of these sets is known, for instance from \cite[Proposition 1.3.4]{Cas}. Unfortunately we could not find a complete proof of these statements in the literature, so we provide it here. \begin{lem}\label{lem:1} \enuma{ \item Every Levi $K$-subgroup of $\mc G$ is $\mc G (K)$-conjugate to a standard Levi $K$-subgroup of $\mc G$. \item For two standard Levi $K$-subgroups $\mc L_I$ and $\mc L_J$ the following are equivalent: \begin{enumerate}[(i)] \item $\mc L_I$ and $\mc L_J$ are $\mc G (K)$-conjugate; \item $(I \setminus \Delta_0) / \mu_{\mc B}(\Gamma_K)$ and $(J \setminus \Delta_0) / \mu_{\mc B}(\Gamma_K)$ are $W (\mc G, \mc S)$-associate. \end{enumerate} } \end{lem} \begin{proof} (a) Let $\mc P$ be a parabolic $K$-subgroup of $\mc G$ with a Levi factor $\mc L$ defined over $K$. 
Since $\mc P$ is $\mc G (K)$-conjugate to a standard parabolic subgroup $\mc P_I$ \cite[Theorem 15.4.6]{Spr}, $\mc L$ is $\mc G (K)$-conjugate to a Levi factor of $\mc P_I$. By \cite[Proposition 16.1.1]{Spr} any two such factors are conjugate by an element of $\mc P_I (K)$. In particular $\mc L$ is $\mc G (K)$-conjugate to $\mc L_I$.\\ (b) Suppose that (ii) is fulfilled, that is, \[ w(I \setminus \Delta_0) / \mu_{\mc B}(\Gamma_K) = (J \setminus \Delta_0) / \mu_{\mc B}(\Gamma_K) \quad \text{for some } w \in W(\mc G,\mc S) . \] Let $\bar w \in N_{\mc G (K)}(\mc S(K))$ be a lift of $w$. Then $\bar w \mc L_I {\bar w}^{-1}$ contains $Z_{\mc G} (\mc S)$ and \[ \Phi (\bar w \mc L_I {\bar w}^{-1} ,\mc S) = w \Phi (\mc L_I,\mc S) = \Phi (\mc L_J ,\mc S) . \] Hence $\bar w \mc L_I {\bar w}^{-1} = \mc L_J$, showing that (i) holds. Conversely, suppose that (i) holds, so $g \mc L_I g^{-1} = \mc L_J$ for some $g \in \mc G (K)$. Then $g \mc S g^{-1}$ is a maximal $K$-split torus of $\mc L_J$. By \cite[Theorem 15.2.6]{Spr} there is a $l \in \mc L_J (K)$ such that $l g \mc S g^{-1} l^{-1} = \mc S$. Thus $(lg) \mc L_I (l g)^{-1} = \mc L_J$ and $lg \in N_{\mc G}(\mc S)$. Let $w_1$ be the image of $l g$ in $W(\mc G,\mc S)$. Then $w_1 (\Phi (\mc L_I,\mc S)) = \Phi (\mc L_J,\mc S)$, so $w_1 \big( (I \setminus \Delta_0) / \mu_{\mc B}(\Gamma_K) \big)$ is a basis of $\Phi (\mc L_J,\mc S)$. Any two bases of a root system are associate under its Weyl group, so there exists a $w_2 \in W(\mc L_J,\mc S) \subset W(\mc G,\mc S)$ such that \[ w_2 w_1 \big( (I \setminus \Delta_0) / \mu_{\mc B}(\Gamma_K) \big) = (J \setminus \Delta_0) / \mu_{\mc B}(\Gamma_K) . \qedhere \] \end{proof} When $\mc G$ is $K$-split, $\Delta_0$ is empty and the action of $\Gamma_K$ is trivial. Then Lemma \ref{lem:1} says that $\mc L_I$ and $\mc L_J$ are $\mc G (K)$-conjugate if and only if $I$ and $J$ are $W(\mc G,\mc T)$-associate. With $\overline K$ instead of $K$ we would obtain the same criterion. 
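For a split group the criterion just obtained can be made completely explicit in type $A$: for $\mathrm{GL}_n$ the standard Levi subgroups are the block-diagonal subgroups indexed by compositions of $n$, and two subsets of simple roots are $W$-associate exactly when their block sizes agree as multisets, i.e. determine the same partition of $n$. A small illustrative check of this standard description (the $\mathrm{GL}_n$ combinatorics is assumed here, not taken from the text):

```python
# Simple roots of GL_n are alpha_1, ..., alpha_{n-1}; a subset I of their
# indices cuts {1, ..., n} into blocks: each i not in I separates slot i
# from slot i+1, so the block sizes form a composition of n.
def levi_blocks(n, subset):
    cuts = [0] + sorted(i for i in range(1, n) if i not in subset) + [n]
    return [b - a for a, b in zip(cuts, cuts[1:])]

# W(GL_n, T) = S_n can permute the blocks, so I and J are associate
# (equivalently, L_I and L_J are conjugate) iff the block-size
# multisets coincide.
def associate(n, subset_i, subset_j):
    return sorted(levi_blocks(n, subset_i)) == sorted(levi_blocks(n, subset_j))
```

For instance, in $\mathrm{GL}_4$ the subsets $\{\alpha_1\}$ and $\{\alpha_3\}$ give blocks $(2,1,1)$ and $(1,1,2)$ and are associate, while $\{\alpha_1,\alpha_2\}$ (blocks $(3,1)$) and $\{\alpha_1,\alpha_3\}$ (blocks $(2,2)$) are not.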
In particular $\mc L_I$ and $\mc L_J$ are $\mc G (K)$-conjugate if and only if they are $\mc G (\overline K)$-conjugate.\\ We want to prove that rational conjugacy and geometric conjugacy of Levi subgroups are equivalent. More precisely: \begin{thm}\label{thm:2} Let $\mc L, \mc L'$ be two Levi $K$-subgroups of $\mc G$. Then $\mc L$ and $\mc L'$ are $\mc G(K)$-conjugate if and only if they are $\mc G (\overline K)$-conjugate. \end{thm} The proof consists of several steps: \begin{itemize} \item Reduction from reductive to quasi-split $\mc G$. \item Reduction from reductive (quasi-split) to absolutely simple (quasi-split) $\mc G$. \item Proof for absolutely simple, quasi-split groups. \end{itemize} The first of these three steps is due to Jean-Loup Waldspurger. Let $\mc G^*$ be a quasi-split $K$-group with an inner twist $\psi : \mc G \to \mc G^*$. Thus $\psi$ is an isomorphism of $K_s$-groups and there exists a map $u : \Gamma_K \to \mc G^* (K_s)$ such that \begin{equation}\label{eq:1.7} \psi \circ \gamma \circ \psi^{-1} = \mr{Ad}(u(\gamma)) \circ \gamma^* \qquad \forall \gamma \in \Gamma_K . \end{equation} Here $\gamma^*$ denotes the $\Gamma_K$-action which defines the $K$-structure of $\mc G^*$. We fix a Borel $K$-subgroup $\mc B^*$ of $\mc G^*$ and a maximal $K$-torus $\mc T^* \subset \mc B^*$ which is maximally $K$-split. In other words, $(\mc B^*, \mc T^*)$ is a minimal parabolic pair of $\mc G^*$, defined over $K$. In $\mc G^*$ we also have the parabolic pair \[ (\mc P_{\Delta_0}^* , \mc L_{\Delta_0}^*) := (\psi (\mc P_{\Delta_0}), \psi (\mc L_{\Delta_0})) , \] which is defined over $K_s$. By the conjugacy of minimal parabolic pairs, there exists a $g_0 \in \mc G^* (K_s)$ such that \[ g_0 \psi (\mc P_{\Delta_0}) g_0^{-1} \supset \mc B^* \quad \text{ and } \quad g_0 \psi (\mc L_{\Delta_0}) g_0^{-1} \supset \mc T^* . \] Replacing $\psi$ by Ad$(g_0) \circ \psi$, we may assume that $\mc P_{\Delta_0}^* \supset \mc B^*$ and $\mc L_{\Delta_0}^* \supset \mc T^*$.
\begin{lem}\label{lem:6} \enuma{ \item The parabolic pair $(\mc P_{\Delta_0}^*,\mc L_{\Delta_0}^*)$ is defined over $K$. \item $u(\gamma) \in \mc L_{\Delta_0}^* (K_s)$ for all $\gamma \in \Gamma_K$. \item Let $\mc H$ be a $K_s$-subgroup of $\mc G$ containing $\mc L_{\Delta_0}$. Then $\mc H$ is defined over $K$ if and only if $\psi (\mc H)$ is defined over $K$. } \end{lem} \begin{proof} (a) Recall that a $K_s$-subgroup of $\mc G$ is defined over $K$ if and only if it is $\Gamma_K$-stable. Applying that to $\mc P_{\Delta_0}$ and $\mc L_{\Delta_0}$, we see from \eqref{eq:1.7} that Ad$(u(\gamma)) \circ \gamma^*$ stabilizes $(\mc P_{\Delta_0}^*,\mc L_{\Delta_0}^*)$. In other words, Ad$(u(\gamma))$ sends $(\gamma^* \mc P_{\Delta_0}^*,\gamma^* \mc L_{\Delta_0}^*)$ to $(\mc P_{\Delta_0}^*,\mc L_{\Delta_0}^*)$. By the above setup both $(\mc P_{\Delta_0}^*,\mc L_{\Delta_0}^*)$ and $(\gamma^* \mc P_{\Delta_0}^*, \gamma^* \mc L_{\Delta_0}^*)$ are standard, that is, contain $(\mc B^*,\mc T^*)$. But two conjugate standard parabolic pairs of $\mc G^*$ are equal, so $\gamma^*$ stabilizes $(\mc P_{\Delta_0}^*,\mc L_{\Delta_0}^*)$. Hence this parabolic pair is defined over $K$.\\ (b) By part (a), also Ad$(u(\gamma))$ stabilizes $(\mc P_{\Delta_0}^*,\mc L_{\Delta_0}^*)$. As every parabolic subgroup is its own normalizer: \[ u(\gamma) \in N_{\mc G^* (K_s)}(\mc P_{\Delta_0}^*,\mc L_{\Delta_0}^*) = N_{\mc P_{\Delta_0}^* (K_s)}(\mc L_{\Delta_0}^*) = \mc L_{\Delta_0}^* (K_s) . \] (c) By part (b), Ad$(u(\gamma))$ stabilizes $\psi (\mc H)$, for any $\gamma \in \Gamma_K$. From \eqref{eq:1.7} we see now that $\gamma$ stabilizes $\mc H$ if and only if it stabilizes $\psi (\mc H)$. \end{proof} We thank Jean-Loup Waldspurger for showing us the proof of the next result. \begin{lem}\label{lem:7} Suppose that Theorem \ref{thm:2} holds for all quasi-split $K$-groups. Then it holds for all reductive $K$-groups $\mc G$.
\end{lem} \begin{proof} By Lemma \ref{lem:1}.a it suffices to consider two standard Levi $K$-subgroups $\mc L_I, \mc L_J$ of $\mc G$. We assume that they are $\mc G (\overline K)$-conjugate. By Lemma \ref{lem:1}.b this depends only on the Weyl group of $(\mc G,\mc T)$, so we can pick $w \in N_{\mc G (K_s)}(\mc T)$ with $w \mc L_I w^{-1} = \mc L_J$. We denote the images of these objects (and of $\mc P_I,\mc P_J$) under $\psi$ by a *, e.g. $\mc L^*_I = \psi (\mc L_I)$. Then $w^* \mc L_I^* {w^*}^{-1} = \mc L_J^*$ and by Lemma \ref{lem:6}.c the parabolic pairs $(\mc P_I^*,\mc L_I^*)$ and $(\mc P_J^*,\mc L_J^*)$ are defined over $K$. Using the hypothesis of the lemma for $\mc G^*$, we pick an $h^* \in \mc G^* (K)$ with $h^* \mc L_I^* {h^*}^{-1} = \mc L_J^*$. Write $\mc P^* = h^* \mc P_I^* {h^*}^{-1}$, $h = \psi^{-1}(h^*)$ and $\mc P := \psi^{-1}(\mc P^*)$. Here $\mc P^*$ is defined over $K$ because $\mc P_I^*$ and $h^*$ are. Furthermore \[ \mc P^* \supset \mc L_J^* \supset \mc L_{\Delta_0}^* \quad \text{and} \quad \mc P \supset \mc L_J \supset \mc L_{\Delta_0} , \] so by Lemma \ref{lem:6}.c $\mc P$ is defined over $K$. Thus the parabolic $K$-subgroups $\mc P_I$ and $\mc P$ of $\mc G$ are conjugate by $h \in \mc G (K_s)$. Hence they are also $\mc G (K)$-conjugate, say $g \mc P g^{-1} = \mc P_I$ with $g \in \mc G (K)$. Now $g \mc L_J g^{-1}$ is a Levi factor of $\mc P_I$ defined over $K$. By \cite[Proposition 16.1.1]{Spr} $g \mc L_J g^{-1}$ is $\mc P_I (K)$-conjugate to $\mc L_I$, so $\mc L_I$ and $\mc L_J$ are $\mc G(K)$-conjugate. \end{proof} \begin{lem}\label{lem:3} Suppose that Theorem \ref{thm:2} holds for all absolutely simple $K$-groups. Then it holds for all reductive $K$-groups $\mc G$. Similarly, if Theorem \ref{thm:2} holds for all absolutely simple, quasi-split $K$-groups, then it holds for all quasi-split reductive $K$-groups $\mc G$.
\end{lem} \begin{proof} The set of standard Levi $K$-subgroups of $\mc G$ does not change when we divide out any central $K$-subgroup $\mc Z$ of $\mc G$. In Lemma \ref{lem:1} the criterion (ii) also does not change if we divide out $\mc Z$, because $W(\mc G / \mc Z, \mc S / \mc Z) \cong W(\mc G,\mc S)$. Therefore we may assume that $\mc G$ is of adjoint type. Now $\mc G$ is a direct product of $K$-simple groups of adjoint type. If Theorem \ref{thm:2} holds for $\mc G'$ and $\mc G''$, then it clearly holds for $\mc G' \times \mc G''$. Thus we may further assume that $\mc G$ is $K$-simple and of adjoint type. Then there are simple adjoint $K_s$-groups $\mc G_i$ such that \begin{equation}\label{eq:1.4} \mc G \cong \mc G_1 \times \cdots \times \mc G_d \qquad \text{as } K_s \text{-groups.} \end{equation} Since $\mc G$ is $K$-simple, the action of $\Gamma_K$ (which defines the $K$-structure) permutes the $\mc G_i$ transitively. Write $\mc T_i = \mc T \cap \mc G_i$, so that $\mc T = \mc T_1 \times \cdots \times \mc T_d$ and \begin{align}\label{eq:1.3} W(\mc G, \mc T) = W(\mc G_1,\mc T_1) \times \cdots \times W(\mc G_d, \mc T_d) , \\ \Phi (\mc G, \mc T) = \Phi (\mc G_1,\mc T_1) \sqcup \cdots \sqcup \Phi (\mc G_d, \mc T_d) . \end{align} Put $\Delta^i = \Delta \cap \Phi (\mc G_i,\mc T_i)$ and $\Delta_0^i = \Delta_0 \cap \Phi (\mc G_i,\mc T_i)$. Let $\Gamma_i$ be the $\Gamma_K$-stabilizer of $\mc G_i$. By \cite[Proposition 15.5.3]{Spr} $\mu_{\mc B}(\Gamma_i)$ stabilizes $\Delta_0^i$ and $\mu_{\mc B}(\Gamma_K) \Delta_0^i = \Delta_0$. Select $\gamma_i \in \Gamma_K$ with $\gamma_i (\mc G_1) = \mc G_i$ and $\gamma_1 = 1$. Note that $\mc B_i := \gamma_i (\mc B \cap \mc G_1)$ is a Borel subgroup of $\mc G_i$. To simplify things a little bit, we replace $\mc B$ by $\mc B_1 \times \cdots \times \mc B_d$.
With this new $\mc B$: \begin{equation}\label{eq:1.6} \mu_{\mc B}(\gamma_i) \Delta^1 = \gamma_i (\Delta^1) = \Delta^i \quad \text{and} \quad \mu_{\mc B}(\gamma_i) \Delta_0^1 = \gamma_i (\Delta_0^1) = \Delta_0^i . \end{equation} By Lemma \ref{lem:1}.a it suffices to prove Theorem \ref{thm:2} for standard Levi $K$-subgroups $\mc L_I ,\mc L_J$ of $\mc G$, where $\Delta_0 \subset I,J \subset \Delta$ and $I,J$ are $\mu_{\mc B}(\Gamma_K)$-stable. We suppose that $\mc L_I$ and $\mc L_J$ are $\mc G (\overline K)$-conjugate, and we have to show that they are also $\mc G (K)$-conjugate. By \eqref{eq:1.4} the groups $\mc L_I \cap \mc G_i$ and $\mc L_J \cap \mc G_i$ are $\mc G_i (K_s)$-conjugate, for $i = 1,\ldots,d$. The absolutely simple group $\mc G_i$ is defined over the field $K_i := K_s^{\Gamma_i}$. By the assumption of the current lemma, $\mc L_I \cap \mc G_i$ and $\mc L_J \cap \mc G_i$ are $\mc G_i (K_i)$-conjugate. Let $\mc S_i$ be the maximal $K_i$-split torus of $\mc G_i$ such that \[ \mc S = \mc S_1 \times \cdots \times \mc S_d \qquad \text{as } K_s \text{-groups.} \] Then $\Gamma_i$ acts trivially on $W(\mc G_i,\mc S_i)$, because the latter is generated by $\Gamma_i$-invariant reflections \cite[Lemma 15.3.7.ii]{Spr}. Consider the $\mu_{\mc B}(\Gamma_i)$-stable sets $I^i = I \cap \Phi (\mc G_i,\mc T_i)$ and $J^i = J \cap \Phi (\mc G_i,\mc T_i)$. By Lemma \ref{lem:1}.b the sets $I^i \setminus \Delta_0^i$ and $J^i \setminus \Delta_0^i$ are $W(\mc G_i,\mc S_i)$-associate. Pick $w_1 \in W(\mc G_1,\mc S_1)$ with \[ w_1 ( J^1 \setminus \Delta_0^1 ) = I^1 \setminus \Delta_0^1 . \] The analogue of \eqref{eq:1.3} for $\mc S$ reads \begin{equation}\label{eq:1.5} W(\mc G,\mc S) = \big( W(\mc G_1,\mc S_1) \times \cdots \times W(\mc G_d, \mc S_d) \big)^{\Gamma_K} . \end{equation} Put $w_i = \gamma_i (w_1) \in W(\mc G_i,\mc S_i)$. From \eqref{eq:1.5} we see that $w := w_1 \times \cdots \times w_d$ lies in $W(\mc G,\mc S)$.
By \eqref{eq:1.6} and by the $\mu_{\mc B}(\Gamma_K)$-stability of $I$ and $J$: \[ w_i ( J^i \setminus \Delta_0^i ) = I^i \setminus \Delta_0^i \qquad \text{for } i = 1,\ldots,d . \] Hence $w (J \setminus \Delta_0 ) = I \setminus \Delta_0$. Now Lemma \ref{lem:1}.b says that $\mc L_I$ and $\mc L_J$ are $\mc G (K)$-conjugate. Finally, we take a closer look at the special case where the initial group $\mc G$ was quasi-split over $K$. Then the group $\mc G_i$ from \eqref{eq:1.4} is quasi-split over $K_i$, for instance because it admits the $\Gamma_i$-stable Borel subgroup $\mc B_i$. So in the above proof of Theorem \ref{thm:2} for a quasi-split group $\mc G$, we only need to assume it for the quasi-split absolutely simple groups $\mc G_i$. \end{proof} When $\mc G$ is quasi-split over $K$, $\Delta_0$ is empty and we can choose $\mc B$ and $\mc T$ defined over $K$, that is, $\Gamma_K$-stable. Then the $\mu$-action of $\Gamma_K$ agrees with the action defining the $K$-structure, and it is known from \cite[Proposition 2.4.2]{SiZi} that \begin{equation}\label{eq:7} W(\mc G,\mc S) = W(\mc G,\mc T)^{\Gamma_K} . \end{equation} In this case every $\Gamma_K$-stable subset $I$ of $\Delta$ gives rise to a standard Levi $K$-subgroup $\mc L_I$ of $\mc G$. Lemma \ref{lem:1}.b says that $\mc L_I$ and $\mc L_J$ are \begin{itemize} \item $\mc G(\overline K)$-conjugate if and only if $I$ and $J$ are $W(\mc G,\mc T)$-associate; \item $\mc G(K)$-conjugate if and only if $I$ and $J$ are $W(\mc G,\mc T)^{\Gamma_K}$-associate. \end{itemize} \begin{lem}\label{lem:4} Theorem \ref{thm:2} holds when $\mc G$ is absolutely simple and quasi-split (over $K$). \end{lem} \begin{proof} By Lemma \ref{lem:1} and the remarks after its proof, Theorem \ref{thm:2} holds for $K$-split reductive groups. Thus it suffices to consider quasi-split, non-split, absolutely simple $K$-groups. In view of Lemma \ref{lem:1}.a, we may assume that $\mc L = \mc L_I$ and $\mc L' = \mc L_J$ are standard Levi $K$-subgroups of $\mc G$.
By the above criteria for conjugacy, the only things that matter are the root system $\Phi (\mc G,\mc T)$, its Weyl group and the Galois action on those. These reductions make a case-by-case consideration feasible. In each case, we suppose that $\mc L_I$ and $\mc L_J$ are $\mc G (\overline K)$-conjugate and we have to show that $w I = J$ for some $w \in W(\mc G,\mc S) = W(\mc G,\mc T)^{\Gamma_K}$.\\ \textbf{Type $A_n^{(2)}$.} The $\Gamma_K$-stable subset $I \subset A_n^{(2)}$ has the form \[ {A_{n_1}}^2 \times \cdots \times {A_{n_k}}^2 \times A_{n_0}^{(2)} , \] where $n_0$ has the same parity as $n$ and \[ n_1 + \cdots + n_k + k \leq (n - n_0) / 2 . \] Here the connected component $A_{n_0}^{(2)}$ lies in the middle of the Dynkin diagram, and all the connected components $A_{n_i}$ occur twice, symmetrically around the middle. Similarly $J$ looks like \[ {A_{m_1}}^2 \times \cdots \times {A_{m_l}}^2 \times A_{m_0}^{(2)} . \] Lemma \ref{lem:1}.b tells us that $I$ and $J$ are associate by an element $w$ of $W(\mc G,\mc T) \cong S_{n+1}$. Hence the multisets $(n_1,n_1,\ldots,n_k,n_k,n_0)$ and $(m_1,m_1,\ldots,m_l,m_l,m_0)$ are equal. Only the element $n_0$ (resp. $m_0$) occurs with odd multiplicity, so $n_0 = m_0$. Composing $w$ inside $S_{n+1}$ with a suitable permutation of the components $A_{n_0}$ of $I$, we may assume that $w$ fixes the subset $A_{m_0}^{(2)} = A_{n_0}^{(2)}$ of $A_n^{(2)}$. In ${A_{(n - n_0 - 2)/2}}^2$, the complement of $A_{n_0}^{(2)}$ and the two adjacent simple roots, the sets \[ I' := ( A_{n_1} \times \cdots \times A_{n_k} )^2 \quad \text{and} \quad J' := ( A_{m_1} \times \cdots \times A_{m_l} )^2 \] are associated by $w$. In particular $k = l$. With the group $(S_{(n - n_0) / 2}^2)^{\Gamma_K} \cong S_{(n - n_0) / 2}$ we can sort $I'$ and $J'$, so that $n_1 \geq \cdots \geq n_k$ and $m_1 \geq \cdots \geq m_k$. As $I'$ and $J'$ came from the same multiset, they become equal after sorting.
This shows that $w' I' = J'$ for some $w' \in (S_{(n - n_0) / 2}^2)^{\Gamma_K} \subset W(\mc G,\mc T)^{\Gamma_K}$. In view of \eqref{eq:7}, this says $w' I = J$ with $w' \in W(\mc G,\mc S)$.\\ \textbf{Type $D_n^{(2)}$.} The $\Gamma_K$-stable subset $I \subset D_n^{(2)}$ has the type \[ A_{n_1} \times \cdots \times A_{n_k} \times D_{n_0}^{(2)} \quad \text{with } n_0 \geq 2 \text{ and }n_1 + \cdots + n_k + k + n_0 \leq n , \] or (when $n_0 = 0$) \[ A_{n_1} \times \cdots \times A_{n_k} \quad \text{with } n_1 + \cdots + n_k + k + 1 \leq n . \] Similarly we write \[ J = A_{m_1} \times \cdots \times A_{m_l} \times D_{m_0}^{(2)} \quad \text{with } m_0 \neq 1 . \] By assumption there exists a $w \in W(D_n)$ such that $w (I) = J$. Suppose that $n_0 \geq 2$ and $w D_{n_0}^{(2)}$ is a component $A_{n_0}$ of $J$. In the standard construction of the root system $D_n$ in $\Z^n$, the subset $D_{n_0}^{(2)}$ involves precisely $n_0$ coordinates, whereas $A_{n_0}$ involves $n_0 + 1$ coordinates (irrespective of where it is located in the Dynkin diagram). As $W(D_n) \subset S_n \ltimes \{ \pm 1 \}^n$, applying $w$ to a set of simple roots does not change the number of involved coordinates. This contradiction shows that $w$ must map $D_{n_0}^{(2)}$ to $D_{m_0}^{(2)}$ if $n_0 \geq 2$. For the same reason, if $m_0 \geq 2$, then $w^{-1} D_{m_0}^{(2)}$ must be contained in $D_{n_0}^{(2)}$. Hence $n_0 = m_0$ and $w D_{n_0}^{(2)} = D_{m_0}^{(2)}$ whenever $n_0 \geq 2$ or $m_0 \geq 2$. Obviously the same conclusion holds in the remaining case $n_0 = m_0 = 0$. Consider the sets of simple roots \[ I' := A_{n_1} \times \cdots \times A_{n_k} \quad \text{and} \quad J' := A_{m_1} \times \cdots \times A_{m_l} . \] They are associated by $w \in W(D_n)$, so $(n_1,\ldots,n_k) = (m_1,\ldots,m_l)$ as multisets. Then there exists a $w' \in S_{n - n_0 - 1}$ (or in $S_{n - 2}$ if $n_0 = 0$) with $w' I' = J'$.
Such a $w'$ commutes with the diagram automorphism, so $w' I = J$ with $w' \in W(D_n)^{\Gamma_K} = W(\mc G,\mc S)$.\\ \textbf{Type $D_4^{(3)}$.} The cardinality of $I$ is 0, 1, 3 or 4, and for all these sizes there is a unique $\Gamma_K$-stable subset of the Dynkin diagram $D_4^{(3)}$. Hence $\mc L_I$ is completely characterized by its rank $|I| = \mr{rk}(\Phi (\mc L_I,\mc T))$. For each possible rank there is a unique $\mc G (K)$-conjugacy class of Levi $K$-subgroups, and those Levi subgroups definitely cannot be $\mc G (\overline K)$-conjugate to Levi subgroups of other ranks.\\ \textbf{Type $E_6^{(2)}$.} We label the Dynkin diagram as \[ \begin{array}{c} \alpha_2 \\ \mid \\ \alpha_1 - \alpha_3 - \alpha_4 - \alpha_5 - \alpha_6 \end{array} \] The nontrivial automorphism $\gamma$ exchanges $\alpha_1$ with $\alpha_6$ and $\alpha_3$ with $\alpha_5$. Since $\mc L_I$ and $\mc L_J$ are $\mc G (\overline K)$-conjugate, they have the same rank $|I| = |J|$. When $|I| = 0$ or $|I| = 6$, this already shows that $J = I$. For the remaining ranks, we will check that the $W(E_6)$-association classes of $\Gamma_K$-stable subsets of $E_6$ of that rank are exactly the $W(E_6)^{\Gamma_K}$-association classes. That suffices, for it implies that the $W(E_6)$-associate sets $I$ and $J$ are already associated by an element of $W(E_6)^{\Gamma_K}$. For $|I| = 1$, the options are $\{\alpha_2\}$ and $\{\alpha_4\}$. These sets are associated by an element $w_2 \in \langle s_{\alpha_2},s_{\alpha_4} \rangle \cong S_3$. As $\alpha_2$ and $\alpha_4$ are fixed by $\Gamma_K$, $w_2 \in W(E_6)^{\Gamma_K}$. Hence there is only one $W(E_6)^{\Gamma_K}$-association class of $I$'s of rank 1. When $|I| = 2$, the possible sets of simple roots are \[ I_{2,1} = \{\alpha_2,\alpha_4\} ,\; I_{2,2} = \{ \alpha_3, \alpha_5\} ,\; I_{2,3} = \{ \alpha_1, \alpha_6\}. \] Among these $I_{2,1} \cong A_2$ is the only connected Dynkin diagram, so it is not $W(E_6)$-associate to the other two.
Pick $w_1 \in \langle s_{\alpha_1},s_{\alpha_3} \rangle \cong S_3$ with $w_1 (\alpha_1) = \alpha_3$. Then $(\gamma (w_1))(\alpha_6) = \alpha_5$ and $w_1 \gamma (w_1) \in W(E_6)^{\Gamma_K}$. We conclude that the $W(E_6)$-association classes on $\{I_{2,1}, I_{2,2}, I_{2,3} \}$ are exactly the $W(E_6)^{\Gamma_K}$-association classes. In the case $|I| = 3$, the possibilities are \[ I_{3,1} = \{ \alpha_3, \alpha_4, \alpha_5\},\; I_{3,2} = \{ \alpha_2, \alpha_3, \alpha_5\},\; I_{3,3} = \{ \alpha_1, \alpha_2, \alpha_6\},\; I_{3,4} = \{ \alpha_1, \alpha_4, \alpha_6\}. \] Among these $I_{3,1} \cong A_3$ is the only connected diagram, so it is not $W(E_6)$-associate to the other three. The sets $I_{3,2}$ and $I_{3,3}$ are associated via $w_1 \gamma (w_1)$, while the sets $I_{3,3}$ and $I_{3,4}$ are associated via $w_2$ (as above). Hence $\{ I_{3,2}, I_{3,3}, I_{3,4}\}$ forms one $W(E_6)^{\Gamma_K}$-association class and one $W(E_6)$-association class. If $I$ has rank 4, it is one of \begin{align*} & \{\alpha_1,\alpha_3,\alpha_5,\alpha_6\} \cong A_2 \times A_2 ,\\ & \{\alpha_1,\alpha_2,\alpha_4,\alpha_6\} \cong A_2 \times A_1 \times A_1,\\ & \{\alpha_2,\alpha_3,\alpha_4,\alpha_5\} \cong D_4 . \end{align*} These three are mutually non-isomorphic, so they form three association classes, both for $W(E_6)$ and for $W(E_6)^{\Gamma_K}$. When $|I| = 5$, we have the options \[ E_6 \setminus \{\alpha_2\} \cong A_5 \quad \text{and} \quad E_6 \setminus \{\alpha_4\} \cong A_2 \times A_2 \times A_1 . \] These are not isomorphic, so they form two association classes, both for $W(E_6)$ and for $W(E_6)^{\Gamma_K}$. \end{proof} \vspace{2mm} \section{Connected linear algebraic groups} \label{sec:2} The previous results about reductive groups can be generalized to all linear algebraic groups. This relies mainly on the theory initiated by Borel and Tits \cite{BoTi}, and worked out much further by Conrad, Gabber and Prasad \cite{CGP,CP}. Let $\mc G$ be a connected linear algebraic $K$-group.
We recall from \cite[Theorem 4.3.7]{Spr} that $\mc G$ is irreducible and smooth as $K$-variety. In particular it is a smooth affine group -- the terminology used in \cite{CGP}. When $\mc G$ has a Levi decomposition, it is clear how Levi subgroups of $\mc G$ can be defined: as a Levi subgroup (in the sense of the previous section) of a Levi factor of $\mc G$. However, there exist linear algebraic groups that do not admit any Levi decomposition, even over $\overline K$ \cite[Appendix A.6]{CGP}. For those we do not know a good notion of Levi subgroups. Instead we investigate a closely related kind of subgroups, already present in \cite{Spr}. Fix a $K$-rational cocharacter $\lambda : GL_1 \to \mc G$ and put \begin{align*} & \mc P_{\mc G}(\lambda) = \{ g \in \mc G : \lim_{a \to 0} \lambda (a) g \lambda (a)^{-1} \text{ exists in } \mc G \} , \\ & \mc U_{\mc G}(\lambda) = \{ g \in \mc G : \lim_{a \to 0} \lambda (a) g \lambda (a)^{-1} = 1 \} ,\\ & Z_{\mc G}(\lambda) = \mc P_{\mc G}(\lambda) \cap \mc P_{\mc G}(\lambda^{-1}) . \end{align*} These are $K$-subgroups of $\mc G$ \cite[Lemma 2.1.5]{CGP}. Moreover $\mc U_{\mc G}(\lambda)$ is $K$-split unipotent \cite[Proposition 2.1.10]{CGP}, and there is a Levi-like decomposition \cite[Proposition 2.1.8]{CGP} \begin{equation}\label{eq:2.1} \mc P_{\mc G}(\lambda) = Z_{\mc G}(\lambda) \ltimes \mc U_{\mc G}(\lambda) . \end{equation} By \cite[Lemma 2.1.5]{CGP} $Z_{\mc G}(\lambda)$ is the (scheme-theoretic) centralizer of $\lambda (GL_1)$, a $K$-split torus in $\mc G$. More generally, if $\mc S'$ is any $K$-split torus in $\mc G$, $Z_{\mc G}(\mc S')$ is of the form $Z_{\mc G}(\lambda)$, namely for any $K$-rational cocharacter $\lambda : GL_1 \to \mc S'$ whose image does not lie in the kernel of any of the roots of $(\mc G,\mc S')$. Let $\mc R_{u,K}(\mc G)$ denote the unipotent $K$-radical of $\mc G$.
By definition, a pseudo-parabolic $K$-subgroup of $\mc G$ is a group of the form \[ \mc P_\lambda := \mc P_{\mc G}(\lambda) \mc R_{u,K}(\mc G) \quad \text{for some $K$-rational cocharacter } \lambda : GL_1 \to \mc G . \] Similarly we define \[ \mc L_\lambda := \mc P_\lambda \cap \mc P_{\lambda^{-1}} = Z_{\mc G}(\lambda) \mc R_{u,K}(\mc G) . \] We call $\mc L_\lambda$ a pseudo-Levi subgroup of $\mc G$. Just like a Levi subgroup of a reductive group is the intersection of a parabolic subgroup with an opposite parabolic, a pseudo-Levi subgroup is the intersection of a pseudo-parabolic subgroup with an opposite pseudo-parabolic. We note that $\mc L_\lambda$ contains the centralizer of the $K$-split torus $\lambda (GL_1)$, but it may be strictly larger than the latter. Unfortunately the groups $\mc P_\lambda$ and $\mc L_\lambda$ do not in general fit in a decomposition like \eqref{eq:2.1}, because $\mc U_{\mc G}(\lambda)$ may intersect $\mc R_{u,K}(\mc G)$ nontrivially. When $\mc G$ is pseudo-reductive over $K$ (that is, $\mc R_{u,K}(\mc G) = 1$), the groups $\mc P_\lambda$ and $\mc L_\lambda$ coincide with $\mc P_{\mc G}(\lambda)$ and $Z_{\mc G}(\lambda)$, respectively. In view of the remarks after \eqref{eq:2.1}, the pseudo-Levi $K$-subgroups of a pseudo-reductive group are precisely the centralizers of the $K$-split tori in that group. More specifically, when $\mc G$ is reductive, the $\mc P_\lambda$ are precisely the parabolic subgroups of $\mc G$ \cite[Proposition 2.2.9]{CGP}, the $\mc L_\lambda$ are the Levi subgroups of $\mc G$ and \eqref{eq:2.1} is an actual Levi decomposition of $\mc P_\lambda$. This justifies our terminology ``pseudo-Levi subgroup''. The notions pseudo-parabolic and pseudo-Levi are preserved under separable extensions of the base field $K$ \cite[Proposition 1.1.9]{CGP}, but not necessarily under inseparable base-change. This is caused by the corresponding behaviour of the unipotent $K$-radical.
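To make these notions concrete in the reductive case (a standard example, included only for illustration), let $\mc G = GL_n$, so that $\mc R_{u,K}(\mc G) = 1$, and consider the cocharacter \[ \lambda (a) = \mr{diag}(a^{m_1} \mr{Id}_{n_1}, \ldots, a^{m_r} \mr{Id}_{n_r}) \qquad \text{with } m_1 > \cdots > m_r . \] Conjugation by $\lambda (a)$ scales the $(i,j)$-block of a matrix by $a^{m_i - m_j}$, so the limit as $a \to 0$ exists precisely for block upper triangular matrices. Hence $\mc P_\lambda = \mc P_{\mc G}(\lambda)$ is the corresponding standard parabolic subgroup, $\mc U_{\mc G}(\lambda)$ is its unipotent radical and $\mc L_\lambda = Z_{\mc G}(\lambda) \cong GL_{n_1} \times \cdots \times GL_{n_r}$ is its block diagonal Levi factor, so that \eqref{eq:2.1} is the usual Levi decomposition.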
We consider the $K$-group $\mc G' := \mc G / \mc R_{u,K}(\mc G)$, the maximal pseudo-reductive quotient of $\mc G$. \begin{lem}\label{lem:2.1} There is a natural bijection between the sets of pseudo-parabolic $K$-subgroups of $\mc G$ and of $\mc G'$. It remains a bijection if we take $K$-rational conjugacy classes on both sides. \end{lem} \begin{proof} The map sends $\mc P_\lambda$ to $\mc P'_\lambda := \mc P_\lambda / \mc R_{u,K}(\mc G)$. It is bijective by \cite[Proposition 2.2.10]{CGP}. According to \cite[Proposition 3.5.7]{CGP} every pseudo-parabolic subgroup of $\mc G$ (or of $\mc G'$) is its own scheme-theoretic normalizer. Hence the variety of $\mc G (K)$-conjugates of $\mc P_\lambda$ is $\mc G (K) / \mc P_\lambda (K)$. By \cite[Lemma C.2.1]{CGP} this is isomorphic with $(\mc G / \mc P_\lambda ) (K)$. Next \cite[Proposition 2.2.10]{CGP} tells us that the $K$-varieties $\mc G / \mc P_\lambda$ and $\mc G' / \mc P'_\lambda$ can be identified. We obtain \[ \mc G (K) / \mc P_\lambda (K) \cong (\mc G / \mc P_\lambda ) (K) \cong (\mc G' / \mc P_\lambda' ) (K) \cong \mc G' (K) / \mc P'_\lambda (K) , \] where the right hand side can be interpreted as the variety of $\mc G' (K)$-conjugates of $\mc P'_\lambda$. It follows that two pseudo-parabolic $K$-subgroups $\mc P_\lambda$ and $\mc P_\mu$ are $\mc G (K)$-conjugate if and only if $\mc P'_\lambda$ and $\mc P'_\mu$ are $\mc G' (K)$-conjugate. \end{proof} The setup from the start of Section \ref{sec:1} (with $\mc S, \mc T, \Delta_0, \ldots$) remains valid for the current $\mc G$, when we reinterpret $\mc B$ as a minimal pseudo-parabolic $K_s$-subgroup of $\mc G$. (Also, the $K$-group $Z_{\mc G}(\mc S)$ is not always pseudo-Levi in $\mc G$; for that we still have to add $\mc R_{u,K}(\mc G)$ to it.) We refer to \cite[Proposition C.2.10 and Theorem C.2.15]{CGP} for the proofs in this generality.
The set of simple roots $\Delta_K$ for $(\mc G,\mc S)$ can again be identified with $(\Delta \setminus \Delta_0 ) / \mu_{\mc B}(\Gamma_K)$. For every $\mu_{\mc B}(\Gamma_K)$-stable subset $I$ of $\Delta$ containing $\Delta_0$ we get a standard pseudo-parabolic $K$-subgroup $\mc P_I$ of $\mc G$. By Lemma \ref{lem:2.1} and \cite[Theorem 15.4.6]{Spr} every pseudo-parabolic $K$-subgroup is $\mc G (K)$-conjugate to a unique such $\mc P_I$. The unicity implies that two pseudo-parabolic $K$-subgroups of $\mc G$ are $\mc G (K)$-conjugate if and only if they are $\mc G (K_s)$-conjugate. (Recall that by \cite[Proposition 3.5.2.ii]{CGP} pseudo-parabolicity is preserved under base change from $K$ to $K_s$.) By \cite[Proposition 3.5.4]{CGP} (which can only be guaranteed when the fields are separably closed, as pointed out to us by Gopal Prasad), $\mc G (K_s)$-conjugacy of pseudo-parabolic subgroups is equivalent to $\mc G (K)$-conjugacy. Write $\mc P_I = \mc P_{\lambda_I}$ for some $K$-rational homomorphism $\lambda_I : GL_1 \to \mc S$. It is easy to see (from \cite[Lemma 15.4.4]{Spr} and Lemma \ref{lem:2.1}) that $\mc P_{\lambda_I^{-1}}$ does not depend on the choice of $\lambda_I$, and we may denote it by $\mc P_{-I}$. Then we define \[ \mc L_I := \mc P_I \cap \mc P_{-I} = \mc P_{\lambda_I} \cap \mc P_{\lambda_I^{-1}} = \mc L_{\lambda_I} . \] We call $\mc L_I$ a standard pseudo-Levi subgroup of $\mc G$. It is the inverse image, with respect to the quotient map $\mc G \to \mc G'$, of the (standard pseudo-Levi) $K$-subgroup of $\mc G'$ called $L_I$ in \cite[Lemma 15.4.5]{Spr}. In the introduction we called this $\mc L_{I_K}$, which relates to $\mc L_I$ by $I_K = (I \setminus \Delta_0) / \mu_{\mc B}(\Gamma_K)$. We are ready to generalize Lemma \ref{lem:1}. \begin{lem}\label{lem:2.2} \enuma{ \item Every pseudo-Levi $K$-subgroup of $\mc G$ is $\mc G (K)$-conjugate to a standard pseudo-Levi $K$-subgroup of $\mc G$.
\item For two standard pseudo-Levi $K$-subgroups $\mc L_I$ and $\mc L_J$ the following are equivalent: \begin{enumerate}[(i)] \item $\mc L_I$ and $\mc L_J$ are $\mc G (K)$-conjugate; \item $(I \setminus \Delta_0) / \mu_{\mc B}(\Gamma_K)$ and $(J \setminus \Delta_0) / \mu_{\mc B}(\Gamma_K)$ are $W (\mc G, \mc S)$-associate. \end{enumerate} } \end{lem} \begin{proof} (a) Let $\mc L_\lambda$ be a pseudo-Levi $K$-subgroup of $\mc G$. Because $\mc P_\lambda$ is $\mc G (K)$-conjugate to a standard pseudo-parabolic $K$-subgroup $\mc P_I$ of $\mc G$, we may assume that \[ \mc L_\lambda \subset \mc P_\lambda = \mc P_I. \] Since all maximal $K$-split tori of $\mc P_I$ are $\mc P_I (K)$-conjugate \cite[Theorem C.2.3]{CGP}, we may further assume that the image of $\lambda$ is contained in $\mc S$. By \cite[Corollary 2.2.5]{CGP} the $K$-split unipotent radical $\mc R_{us,K}(\mc P_I)$ equals both $\mc U_{\mc G}(\lambda_I) \mc R_{u,K}(\mc G)$ and $\mc U_{\mc G}(\lambda) \mc R_{u,K}(\mc G)$. By \cite[Lemma 15.4.4]{Spr} the Lie algebra of $\mc P_I / \mc R_{u,K}(\mc G)$ can be analysed in terms of the weights for the adjoint action Ad$(\lambda)$ of $GL_1$ on the Lie algebra of $\mc G'$. Namely, $\mc P_I / \mc R_{u,K}(\mc G)$ corresponds to the sum of the subspaces on which $GL_1$ acts by characters $a \mapsto a^n$ with $n \in \Z_{\geq 0}$. The Lie algebra of the subgroup \[ \mc R_{us,K}(\mc P_I) \mc R_{u,K}(\mc G) / \mc R_{u,K}(\mc G) \] is the sum of the subspaces on which Ad$(\lambda)$ acts as $a \mapsto a^n$ with $n \in \Z_{>0}$. From \eqref{eq:2.1} inside $\mc G'$ we deduce that the Lie algebra of $\mc L_I / \mc R_{u,K}(\mc G)$ is the direct sum of the Lie algebra of $Z_{\mc G'}(\mc S)$ and the root spaces for roots $\alpha$ with $\langle \alpha ,\, \lambda \rangle = 0$.
This holds for both $\lambda$ and $\lambda_I$, from which we conclude that $\mc L_\lambda = \mc L_I$.\\ (b) This can be shown just as Lemma \ref{lem:1}.b, using in particular that the natural map $N_{\mc G}(\mc S)(K) \to W(\mc G,\mc S)$ is surjective \cite[Proposition C.2.10]{CGP}. \end{proof} \begin{lem}\label{lem:2.7} There is a natural bijection between the sets of pseudo-Levi $K$-subgroups of $\mc G$ and of $\mc G'$. It remains a bijection if we take $K$-rational conjugacy classes on both sides. \end{lem} \begin{proof} The map sends $\mc L_\lambda$ to $\mc L'_\lambda := \mc L_\lambda / \mc R_{u,K}(\mc G)$. This map is bijective for the same reason as with pseudo-parabolic subgroups: $\mc G$ and $\mc G'$ have essentially the same tori, see \cite[Proposition 2.2.10]{CGP}. By \cite[Theorem C.2.15]{CGP} the $K$-groups $\mc G$ and $\mc G'$ have the same root system and the same Weyl group. Then Lemma \ref{lem:2.2}.b says that the conjugacy classes of pseudo-Levi $K$-subgroups are parametrized by the same data for both groups. Hence the map $\mc L_I = \mc L_{\lambda_I} \mapsto \mc L'_{\lambda_I} = \mc L'_I$ also induces a bijection between these sets of conjugacy classes. \end{proof} In case $\mc G'$ is reductive, Lemmas \ref{lem:2.1} and \ref{lem:2.7} furnish bijections \begin{equation}\label{eq:2.2} \begin{array}{ccc} \hspace{-3mm} \{ \text{parabolic $K$-subgroups of } \mc G' \} & \longleftrightarrow & \{ \text{pseudo-parabolic $K$-subgroups of } \mc G \} \hspace{-5mm} \\ \mc P_\lambda / \mc R_{u,K}(\mc G) = \mc P_{\mc G'}(\lambda) & \leftrightarrow & \mc P_\lambda \\ \\ \{ \text{Levi $K$-subgroups of } \mc G' \} & \longleftrightarrow & \{ \text{pseudo-Levi $K$-subgroups of } \mc G \} \hspace{-5mm} \\ \mc L_\lambda / \mc R_{u,K}(\mc G) = Z_{\mc G'}(\lambda) & \leftrightarrow & \mc L_\lambda \end{array} \end{equation} which induce bijections between the $K$-rational conjugacy classes on both sides.
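For a simple non-reductive illustration of these bijections (again included only for orientation), one can take $\mc G = GL_n \ltimes V$ with $V$ the standard $n$-dimensional representation, regarded as a vector group. Here $\mc R_{u,K}(\mc G) = V$ and $\mc G' = GL_n$. For any $K$-rational cocharacter $\lambda : GL_1 \to GL_n$ one checks that \[ \mc P_\lambda = \mc P_{GL_n}(\lambda) \ltimes V \quad \text{and} \quad \mc L_\lambda = Z_{GL_n}(\lambda) \ltimes V . \] In particular every pseudo-parabolic and every pseudo-Levi $K$-subgroup of $\mc G$ contains the full unipotent $K$-radical $V$, in accordance with \eqref{eq:2.2}.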
We will now start to work towards the main result of this section: \begin{thm}\label{thm:2.7} Let $\mc G$ be a connected linear algebraic $K$-group. Any two pseudo-Levi $K$-subgroups of $\mc G$ which are $\mc G (\overline K)$-conjugate are already $\mc G (K)$-conjugate. \end{thm} The main steps of our argument are: \begin{itemize} \item Reduction from the general case to absolutely pseudo-simple $K$-groups with trivial centre. \item Proof when $\mc G$ is quasi-split over $K$ (i.e. $\Delta_0$ is empty). \item Proof for absolutely pseudo-simple $K$-groups with trivial centre (using the quasi-split case). \end{itemize} \begin{lem}\label{lem:2.6} Suppose that Theorem \ref{thm:2.7} holds for all absolutely pseudo-simple groups with trivial centre. Then it holds for all connected linear algebraic groups. \end{lem} \begin{proof} By Lemma \ref{lem:2.7} we may just as well consider the pseudo-reductive group $\mc G' = \mc G / \mc R_{u,K}(\mc G)$. The derived group $\mc D (\mc G')$ has the same root system and Weyl group as $\mc G'$, both over $K$ and over $K_s$, by \cite[Proposition 1.2.6 or Theorem C.2.15]{CGP}. In view of Lemma \ref{lem:2.2}, we may replace $\mc G'$ by $\mc D (\mc G')$. In particular $\mc G'$ is now pseudo-semisimple \cite[Remark 11.2.3]{CGP}. Since the centre of $\mc G'$ is contained in every pseudo-Levi subgroup, we may divide it out. Thus we may assume that $Z(\mc G') = 1$, while retaining pseudo-reductivity \cite[Proposition 4.1.3]{CP}. Let $\{ \mc G'_j \}_j$ be the finite collection of normal pseudo-simple $K$-subgroups of $\mc G'$, as in \cite[Proposition 3.1.8]{CGP}. The root system and Weyl group of $\mc G' (K)$ decompose as products of these objects for the $\mc G'_j (K)$. Combining that with Lemma \ref{lem:2.2} we see that it suffices to prove the theorem for each of the $\mc G'_j$. To simplify the notation, we assume from now on that $\mc G$ is a pseudo-simple $K$-group.
Let $\{ \mc G_i \}_i$ be the finite collection of normal pseudo-simple $K_s$-subgroups of $\mc G$. These subgroups generate $\mc G$ as a $K_s$-group \cite[Lemma 3.1.5]{CGP} and $\Gamma_K$ permutes them transitively. This serves as a slightly weaker analogue of \eqref{eq:1.4}. Next we can argue exactly as in the proof of Lemma \ref{lem:3}, only replacing some parts by their previously established ``pseudo''-analogues. As a consequence, it suffices to prove the theorem for the absolutely pseudo-simple groups $\mc G_i$ (over the field $K_i = K_s^{\Gamma_i}$). If necessary, we can still divide out the centre of $\mc G_i$, as observed above for $\mc G'$. \end{proof} Following \cite[\S C.2]{CP} we say that a connected linear algebraic group $\mc G$ is quasi-split (over $K$) if a minimal pseudo-parabolic $K$-subgroup of $\mc G$ is also minimal as a pseudo-parabolic $K_s$-subgroup. In view of the classification of conjugacy classes of pseudo-parabolic $K_s$-subgroups, this condition is equivalent to $\Delta_0 = \emptyset$. \begin{prop}\label{prop:2.3} Theorem \ref{thm:2.7} holds when $\mc G$ is quasi-split over $K$. \end{prop} \begin{proof} In view of Lemma \ref{lem:2.7} we may assume that $\mc G$ is pseudo-reductive. Consider the reductive $\overline K$-group $\mc G^\red := \mc G / \mc R_u (\mc G)$. The image of $\mc T$ in $\mc G^\red$ is a maximal torus of $\mc G^\red$. It is isomorphic to $\mc T$ via the projection map, and we may identify it with $\mc T$. Thus $\mc G^\red$ has a reduced (integral) root system $\Phi (\mc G^\red,\mc T)$. The maximal $K$-torus $\mc T$ of $\mc G$ splits over $K_s$. In the terminology of \cite[Definition 2.3.1]{CGP}, $\mc G$ is pseudo-split over $K_s$. This is somewhat weaker than split -- the root system $\Phi (\mc G,\mc T)$ is integral but not necessarily reduced. (It can only be non-reduced if $K$ has characteristic 2.)
By \cite[Proposition 2.3.10]{CGP} the quotient map $\mc G \to \mc G^\red$ induces a bijection between $\Phi (\mc G^\red,\mc T)$ and $\Phi (\mc G,\mc T)$, provided that the latter is reduced. In general $\Phi (\mc G^\red,\mc T)$ can be identified with the system of non-multipliable roots in $\Phi (\mc G,\mc T)$. In particular these two root systems have the same Weyl group, and there is a $W(\mc G,\mc T)$-equivariant bijection \[ \begin{array}{ccc} \{ \text{parabolic subsystems of } \Phi (\mc G,\mc T) \} & \longrightarrow & \{ \text{parabolic subsystems of } \Phi (\mc G^\red,\mc T) \} \\ R & \mapsto & R \cap \Phi (\mc G^\red,\mc T) \end{array}. \] This induces a bijection between the sets of simple roots for these root systems, say $I \longleftrightarrow I^\red$. We note that \begin{equation}\label{eq:2.3} I,J \text{ are } W(\mc G,\mc T) \text{-associate} \; \Longleftrightarrow \; I^\red ,J^\red \text{ are } W(\mc G^\red,\mc T) \text{-associate}. \end{equation} By Lemma \ref{lem:2.2}.a it suffices to prove the proposition for standard pseudo-Levi $K$-subgroups $\mc L_I ,\mc L_J$. We assume that $\mc L_I$ and $\mc L_J$ are $\mc G (\overline K)$-conjugate. Then the Levi $\overline K$-subgroups \[ \mc L^\red_I = \mc L_I \mc R_u (\mc G) / \mc R_u (\mc G) \quad \text{and} \quad \mc L^\red_J = \mc L_J \mc R_u (\mc G) / \mc R_u (\mc G) \] of $\mc G^\red$ are conjugate. By Lemma \ref{lem:2.2}.b the associated sets of simple roots $I^\red$ and $J^\red$ are $W(\mc G^\red,\mc T)$-associate. Then \eqref{eq:2.3} and Lemma \ref{lem:2.2}.b entail that $\mc L_I$ and $\mc L_J$ are $\mc G (K_s)$-conjugate. As $\mc G$ is quasi-split over $K$, the root system $\Phi (\mc G,\mc S)$ can be obtained by a simple form of Galois descent: it consists of the $\Gamma_K$-orbits in $\Phi (\mc G,\mc T)$. We know from \cite[Lemma 15.3.7]{Spr} that $W(\mc G,\mc S)$ is generated by the reflections $s_\alpha$ with $\alpha \in \Phi (\mc G,\mc S)$.
Let $\mc H$ be a quasi-split reductive $K$-group with the same root datum as $\mc G^\red$, and the same $\Gamma_K$-action on it. By \cite[Proposition 2.4.2]{SiZi} (applied to $\mc H$), the aforementioned reflections generate the subgroup $W(\mc G,\mc T)^{\Gamma_K}$ of $W(\mc G,\mc T)$. Thus \eqref{eq:7} holds again. We already showed that the $\Gamma_K$-stable subsets $I$ and $J$ of $\Delta$ are $W(\mc G,\mc T)$-associate. By Lemma \ref{lem:1}.b the corresponding Levi $K$-subgroups $\mc L_I^{\mc H} ,\mc L_J^{\mc H}$ of $\mc H$ are $\mc H (\overline K)$-conjugate. Then Theorem \ref{thm:2} says that $\mc L_I^{\mc H}$ and $\mc L_J^{\mc H}$ are also $\mc H (K)$-conjugate. Again using Lemma \ref{lem:1}.b, we deduce that $I$ and $J$ are associate under $W(\mc G,\mc T)^{\Gamma_K} = W(\mc G,\mc S)$. Finally Lemma \ref{lem:2.2}.b tells us that $\mc L_I$ and $\mc L_J$ are $\mc G (K)$-conjugate. \end{proof} To go beyond quasi-split linear algebraic groups, we would like to use arguments like Lemmas \ref{lem:6} and \ref{lem:7}. However, the usual notion of an inner form (for reductive groups) is not flexible enough for pseudo-reductive groups \cite[\S C]{CP}. Better results are obtained by allowing inner twists involving a $K$-group of automorphisms called $(\mr{Aut}^{sm}_{\mc D (\mc G) / K} )^\circ$ in \cite[\S C.2]{CP}. This leads to the notion of pseudo-inner forms of pseudo-reductive groups. Every pseudo-reductive $K$-group admits a quasi-split pseudo-inner form, apart from some exceptions that can only occur if char$(K) = 2$ and $[K : K^2] > 4$ \cite[Theorem C.2.10]{CP}. \begin{lem}\label{lem:2.4} Let $\mc G$ be a pseudo-reductive $K_s$-group and let $\lambda : GL_1 \to \mc G$ be a $K_s$-rational cocharacter. Suppose that $\phi \in (\mr{Aut}^{sm}_{\mc D (\mc G) / K_s} )^\circ (K_s)$ stabilizes the $K_s$-subgroups $\mc P_\lambda$ and $\mc L_\lambda$. Then $\phi$ stabilizes every $K_s$-subgroup of $\mc G$ that contains $\mc L_\lambda$.
\end{lem} \begin{proof} Since the centre of $\mc G$ is contained in $\mc L_\lambda$, we may divide it out. Thus we may assume that $Z(\mc G) = 1$, while retaining pseudo-reductivity \cite[Proposition 4.1.3]{CP}. The derived group $\mc D (\mc G)$ is pseudo-semisimple \cite[Remark 11.2.3]{CGP} and $\mc G = Z_{\mc G}(\mc T) \mc D (\mc G)$ \cite[Proposition 1.2.6]{CGP}. By \cite[Lemma 1.2.5.ii]{CGP} the centre of $\mc D (\mc G)$ centralizes $\mc T$. Since $Z_{\mc G}(\mc T)$ is commutative \cite[Proposition 1.2.4]{CGP}, $Z(\mc D (\mc G))$ commutes with it. Hence \[ Z(\mc D (\mc G)) = Z( \mc G ) \cap \mc D (\mc G) = 1. \] We recall from \cite[\S 4.1.2]{CP} that $\mr{Aut}_{\mc G,Z_{\mc G}(\mc T)}$ is the $K_s$-group of automorphisms of $\mc G$ which restrict to the identity on the Cartan $K_s$-subgroup $Z_{\mc G}(\mc T)$. The maximal smooth closed $K_s$-subgroup $Z_{\mc G,Z_{\mc G}(\mc T)}$ of $\mr{Aut}_{\mc G,Z_{\mc G}(\mc T)}$ is connected \cite[Proposition 6.1.4]{CP}. The same holds with $\mc D (\mc G)$ and $\mc C := Z_{\mc D (\mc G)}(\mc T)$ instead of $\mc G$ and $Z_{\mc G}(\mc T)$. In fact, by \cite[Proposition 6.1.7]{CP} there is a natural isomorphism \begin{equation}\label{eq:2.6} Z_{\mc G,Z_{\mc G}(\mc T)} \to Z_{\mc D (\mc G),\mc C} . \end{equation} Embedding $\mc C$ diagonally in $\mc D(\mc G) \rtimes Z_{\mc D (\mc G),\mc C}$, one forms the $K_s$-group $(\mc D (\mc G) \rtimes Z_{\mc D(\mc G),\mc C}) / \mc C$. It naturally acts on $\mc G$, the factor $\mc D (\mc G)$ acting by conjugation and $Z_{\mc D (\mc G),\mc C}$ via \eqref{eq:2.6}. According to \cite[Proposition 6.2.4]{CP}, which we may apply because $\mc D (\mc G)$ is pseudo-semisimple, there is an isomorphism of $K_s$-groups \[ (\mc D (\mc G) \rtimes Z_{\mc D (\mc G),\mc C}) / \mc C \to (\mr{Aut}^{sm}_{\mc D (\mc G) / K_s} )^\circ , \] which preserves the actions on $\mc G$.
Furthermore \cite[Proposition 6.2.4]{CP} also says that the homomorphism \begin{equation}\label{eq:2.4} \mc D (\mc G) (K_s) \rtimes Z_{\mc D (\mc G),\mc C}(K_s) \to (\mr{Aut}^{sm}_{\mc D (\mc G) / K_s} )^\circ (K_s) \end{equation} is surjective. By \cite[Proposition 6.1.4]{CP} there is a decomposition \[ Z_{\mc D (\mc G),\mc C} \cong \prod\nolimits_{\alpha \in \Delta} Z_{\mc G_\alpha,\mc C_\alpha} \qquad \text{as } K_s\text{-groups}. \] Taking into account that $Z(\mc D (\mc G)) = 1$, \cite[Lemma 6.1.3]{CP} and \cite[Proposition 9.8.15]{CGP} show that each of the $K_s$-groups $Z_{\mc G_\alpha,\mc C_\alpha}$ is a subtorus of $\mc T \cap \mc D (\mc G)$ which acts on $\mc G$ by conjugation. Combining that with \eqref{eq:2.4}, we deduce that $\phi$ can be realized as Ad$(g)$ for some $g \in \mc D (\mc G) (K_s)$. As $\mc P_\lambda$ is its own normalizer \cite[Proposition 3.5.7]{CGP}, we must have $g \in \mc P_\lambda (K_s)$. A nontrivial element $u$ of $\mc U_{\mc G}(\lambda)(K_s)$ cannot normalize $\mc L_\lambda$, because \[ \lambda (a) u \lambda^{-1}(a) u^{-1} \in \mc U_{\mc G}(\lambda)(K_s) \setminus \{1\} \] for generic (i.e. not a root of unity) $a \in K^\times$. In view of \eqref{eq:2.1}, this implies that the normalizer of $\mc L_\lambda$ in $\mc P_\lambda$ is $\mc L_\lambda$ itself. Thus the assumptions of the lemma even entail $g \in \mc L_\lambda (K_s)$. Now it is clear that $\phi = \mr{Ad}(g)$ stabilizes every $K_s$-subgroup of $\mc G$ that contains $\mc L_\lambda$. \end{proof} Suppose that $\mc G^*$ is a quasi-split pseudo-reductive group and that $\psi : \mc G \to \mc G^*$ is a pseudo-inner twist. (This forces $\mc G$ to be pseudo-reductive as well.) The setup leading to Lemma \ref{lem:6} remains valid if we replace all objects by their pseudo-versions. \begin{lem}\label{lem:2.5} Let $\mc H$ be a $K_s$-subgroup of $\mc G$ containing $\mc L_{\Delta_0}$. Then $\mc H$ is defined over $K$ if and only if $\psi (\mc H)$ is defined over $K$. 
\end{lem} \begin{proof} Exactly as in the proof of Lemma \ref{lem:6} one shows that $(\mc P^*_{\Delta_0},\mc L^*_{\Delta_0})$ is defined over $K$ and stable under Ad$(u(\gamma))$ for all $\gamma \in \Gamma_K$. Next Lemma \ref{lem:2.4} says that Ad$(u(\gamma)) \in (\mr{Aut}^{sm}_{\mc D (\mc G) / K} )^\circ (K_s)$ stabilizes $\psi (\mc H)$. Then \eqref{eq:1.7} shows that $\psi (\mc H)$ is $\Gamma_K$-stable if and only if $\mc H$ is $\Gamma_K$-stable. \end{proof} Now we can finish the proof of our main result. \begin{prop}\label{prop:2.8} Theorem \ref{thm:2.7} holds for absolutely pseudo-simple $K$-groups with trivial centre. \end{prop} \begin{proof} By Lemma \ref{lem:2.2}.a it suffices to consider two standard pseudo-Levi subgroups $\mc L_I, \mc L_J$ which are $\mc G (\overline K)$-conjugate. As $\mc G$ becomes pseudo-split over $K_s$, Proposition \ref{prop:2.3} tells us that there exists a $w \in \mc G (K_s)$ with $w \mc L_I w^{-1} = \mc L_J$. By \cite[Proposition 4.1.3 and Theorem 9.2.1]{CP} $\mc G$ is generalized standard, in the sense of \cite[Definition 9.1.7]{CP}. With \cite[Definition 9.1.5]{CP} we see that (at least) one of the following conditions holds: \begin{enumerate}[(i)] \item The characteristic of $K$ is not 2, or char$(K) = 2$ and $[K : K^2] \leq 4$. \item The group $\mc G$ is standard \cite[Definition 2.1.3]{CP} or exotic \cite[Definitions 2.2.2 and 2.2.3]{CP}. \item The root system of $\mc G$ over $K_s$ has type $B_n, C_n$ or $BC_n$ with $n \geq 1$. \end{enumerate} \textbf{(i) and (ii).} In the cases (i) and (ii) with $\mc G$ standard, \cite[Theorem C.2.10]{CP} tells us that $\mc G$ has a quasi-split pseudo-inner form. If we are in case (ii) with $\mc G$ non-standard and char$(K) = 2$, then $\mc G$ is an exotic pseudo-reductive group with root system (over $K_s$) of type $B_n ,C_n$ or $F_4$. By \cite[Proposition C.1.3]{CP} it has a pseudo-split $K_s / K$-form. 
Since the Dynkin diagram of $\mc G$ admits no nontrivial automorphisms, the group $\mr{Aut}^{sm}_{\mc G / K}$ is connected and every $K_s/K$-form of $\mc G$ is pseudo-inner \cite[Proposition 6.3.4]{CP}. Thus, in the cases (i) and (ii) $\mc G$ has a quasi-split pseudo-inner form. Now we argue as in the proof of Lemma \ref{lem:7}, using Lemma \ref{lem:2.5} instead of Lemma \ref{lem:6}.c. The hypothesis in Lemma \ref{lem:7} is fulfilled for quasi-split pseudo-reductive groups, by Proposition \ref{prop:2.3}. This shows that $\mc L_J$ is $\mc G (K)$-conjugate to a pseudo-Levi factor of $\mc P_I$. In the proof of Lemma \ref{lem:2.2}.a we checked that all such pseudo-Levi factors are $\mc P_I (K)$-conjugate, so $\mc L_J$ is $\mc G (K)$-conjugate to $\mc L_I$. \textbf{(iii).} The three types can be dealt with in the same way, so we only consider root systems $\Phi (\mc G,\mc T)$ of type $B_n$. Since this Dynkin diagram does not admit any nontrivial automorphisms, the action $\mu_{\mc B}$ of $\Gamma_K$ is trivial. Suppose first that $n \leq 2$. Then any two different subsets of $\Delta$ are not $W(\mc G,\mc T)$-associate, as is easily checked. Hence $I = J$ and $\mc L_I = \mc L_J$ in this case. From now on we suppose that $\Phi (\mc G,\mc T)$ has type $B_n$ with $n > 2$. We realize the root system of type $B_n$ in the standard way in $\Z^n$. Let $\alpha_1, \ldots, \alpha_{n-1}, \alpha_n$ be the vertices of $\Delta$, where $\alpha_i = e_i - e_{i+1}$ for $i < n$ and $\alpha_n = e_n$ is the short simple root. By Lemma \ref{lem:2.2}.b there exists a $w \in W(\mc G,\mc T) = W(B_n)$ with $w I = J$. When $I$ or $J$ equals $\Delta$, we immediately obtain $I = J$. Hence we may assume that $I \subsetneq \Delta \supsetneq J$. Let $m \in \Z_{\geq 0}$ be the smallest number such that $\alpha_{n-m} \notin I$. For $j < m$, $\alpha_{n-j} \in I$ is the unique root in $\Delta$ which is connected to $\alpha_n$ by a string of length $j$.
As $\alpha_n$ is the unique short simple root and $w I \subset \Delta$, it follows that $w (\alpha_{n-j}) = \alpha_{n-j}$ for all $j < m$. The same considerations apply to $J$ and $w I = J$, so $\{ \alpha_{n+1-m},\ldots,\alpha_n \} \subset I \cap J$ is fixed pointwise by $w$ and $\alpha_{n-m} \notin I \cup J$. As \[ \mr{span}_\Z \{ \alpha_{n+1-m}, \ldots,\alpha_n \} = \mr{span}_\Z \{ e_n, e_{n-1},\ldots,e_{n+1-m} \}, \] $w$ must lie in $W(B_{n-m})$. Write $\Delta' = \{ \alpha_1, \ldots, \alpha_{n-1-m} \}$ and $\Delta'' = \{\alpha_{n+1-m}, \ldots, \alpha_n \}$, two orthogonal sets of simple roots. The standard pseudo-Levi $K$-subgroup $\mc L_{\Delta' \cup \Delta''}$ of $\mc G$ contains $\mc L_I$ and $\mc L_J$. Decomposing its root system into irreducible components gives \[ \mc L_{\Delta' \cup \Delta''} = \mc L_{\Delta'} \mc L_{\Delta''} . \] The index of $\mc L_{\Delta'}$ consists of $\Delta', \Delta' \cap \Delta_0$ and the trivial action of $\Gamma_K$. Here $\Delta'$ has type $A_{n-1-m}$ and by \cite[Lemma 15.5.8]{Spr} the subset $\Delta'_0 := \Delta' \cap \Delta_0$ is stable under the nontrivial automorphism of $A_{n-1-m}$. As shown in \cite[\S 3.3.2]{Tit}, this implies that there exists a divisor $d$ of $n-m$ such that \[ \Delta' \setminus \Delta'_0 = \Z d \cap [1,\ldots,n-1-m] . \] With \cite[\S 17.1]{Spr} we see that $(\Delta',\Delta'_0,\mr{triv})$ is the index of an inner form $\mc H$ of $GL_{n-m}$. Explicitly, we can take $\mc H (\Q) = GL_{(n-m)/d}(D)$ where $D$ is a division algebra whose centre equals the ground field $\Q$. As the maximal $\Q$-split torus $\mc S^{\mc H}$ we take the diagonal torus, so that $\mc S^{\mc H}(\Q)$ consists of the diagonal matrices with entries in $\Q^\times$. The isomorphism class of the Dynkin diagram $I' := I \cap \Delta'$ determines the isomorphism class of the standard Levi $\Q$-subgroup $\mc L_{I'}^{\mc H}$ of $\mc H$. Namely, $\mc L_{I'}^{\mc H}(\Q)$ is a direct product of groups $GL_{n_j}(D)$, where $\sum_j n_j = (n-m)/d$ and $I'$ has connected components of sizes $d n_j - 1$.
The set of simple roots $J' := J \cap \Delta'$ is associate to $I'$ by $w \in W(B_{n-m})$, and hence isomorphic to $I'$ as a Dynkin diagram. It follows that the standard Levi $\Q$-subgroups $\mc L_{I'}^{\mc H}$ and $\mc L_{J'}^{\mc H}$ of $\mc H$ are isomorphic. That is, $\mc L_{J'}^{\mc H}$ is also a direct product of the groups $GL_{n_j}(D)$, but maybe situated in a different (standard) position inside $GL_{(n-m)/d}(D)$. With a permutation $w'$ from $S_{(n-m)/d}$ we can bring them into the same position. Then $g \mc L_{I'}^{\mc H} g^{-1} = \mc L_{J'}^{\mc H}$ for some $g \in N_{\mc H (\Q)}(\mc S^{\mc H}(\Q))$ and $w' I' = J'$, where $w'$ is the image of $g$ in $W(\mc H,\mc S^{\mc H}) \cong S_{(n-m)/d}$. As $W(\mc H,\mc S^{\mc H}) = W(\mc L_{\Delta'},\mc S)$, we conclude that $I'$ and $J'$ are associate by an element of $W(\mc L_{\Delta'},\mc S) \subset W(\mc G,\mc S)$. Since $\Delta'$ and $\Delta''$ are orthogonal, $w'$ fixes $I \cap \Delta'' = J \cap \Delta''$ pointwise. Hence $w' I = J$ and by Lemma \ref{lem:2.2}.b $\mc L_I$ and $\mc L_J$ are $\mc G (K)$-conjugate. \end{proof} \newpage
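As an elementary sanity check of the claim used in case (iii) for $n \leq 2$ (this plays no role in the formal argument), one can enumerate $W(B_2)$ as the group of signed permutation matrices and verify that distinct subsets of $\Delta$ are never associate:

```python
import itertools
import numpy as np

# Simple roots of B_2: the long root e1 - e2 and the short root e2.
delta = [np.array([1, -1]), np.array([0, 1])]

# W(B_2) consists of the 8 signed permutation matrices of rank 2.
weyl = [np.diag(signs)[list(perm)]
        for signs in itertools.product([1, -1], repeat=2)
        for perm in itertools.permutations(range(2))]

def associate(I, J):
    """Is there a w in W(B_2) with w(I) = J as sets of roots?"""
    SJ = {tuple(r) for r in J}
    return any({tuple(w @ r) for r in I} == SJ for w in weyl)

subsets = [[], [delta[0]], [delta[1]], delta]
for i, I in enumerate(subsets):
    for j, J in enumerate(subsets):
        # Distinct subsets of Delta are never W(B_2)-associate.
        assert associate(I, J) == (i == j)
```

The reason is visible in the code: elements of $W(B_2)$ preserve root lengths, so the long and the short simple root can never be interchanged.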
1904.08835
\section{Introduction} Text-to-SQL generation is the task of translating a natural language question into the corresponding SQL. Recently, various deep learning approaches have been proposed based on the WikiSQL dataset \citep{wikisql}. However, because WikiSQL contains only very simple queries over just a single table, these approaches \citep{sqlnet, meta-learning, typesql, coarse-to-fine} cannot be applied directly to generate complex queries containing elements such as \texttt{JOIN}, \texttt{GROUP BY}, and nested queries. To overcome this limitation, \citet{spider} introduced \textit{Spider}, a new complex and cross-domain text-to-SQL dataset. It contains a large number of complex queries over different databases with multiple tables. It also requires a model to generalize to unseen database schemas, as different databases are used for training and testing. Therefore, a model should understand not only the natural language question but also the schema of the corresponding database to predict the correct SQL query. In this paper, we propose a novel SQL-specific clause-wise decoding neural network model to address the \textit{Spider} task. We first predict a sketch for each SQL clause (e.g., \texttt{SELECT}, \texttt{WHERE}) with text classification modules. Then, clause-specific decoders find the columns and corresponding operators based on the sketches. Our contributions are summarized as follows. \begin{itemize} \item We decompose the SQL decoding process clause-wise. We also modularize each of the clause-specific decoders into sub-modules based on the syntax of each clause. Our architecture enables the model to learn clause-dependent context and also ensures the syntactic correctness of the predicted SQL. \item Our model works recursively so that it can predict nested queries. \item We also introduce a self-attention based database schema encoder that enables our model to generalize to unseen databases.
\end{itemize} In our experiments on the \textit{Spider} dataset, we achieve 24.3\% and 28.8\% exact SQL matching accuracy on the test and dev sets respectively, which outperforms the previous state-of-the-art approach \citep{syntaxsqlnet} by 4.6\% and 9.8\%. In addition, we show that our approach is significantly more effective compared to previous work at predicting not only simple SQL queries, but also complex and nested queries. \section{Related Work} Our work is related to the grammar-based constrained decoding approaches for semantic parsing \citep{yin2017syntactic, ASN, iyer2018mapping}. While their approaches are focused on general-purpose code generation, we instead focus on SQL-specific grammar to address the text-to-SQL task. Our task differs from code generation in two aspects. First, it takes a database schema as an input in addition to natural language. To predict SQL correctly, a model should fully understand the relationship between the question and the schema. Second, as SQL is a non-procedural language, predictions of SQL clauses do not need to be done sequentially. For text-to-SQL generation, several SQL-specific approaches have been proposed \citep{wikisql, sqlnet, meta-learning, typesql, coarse-to-fine, yavuz2018takes} based on the WikiSQL dataset \citep{wikisql}. However, all of them are limited to the specific WikiSQL SQL sketch, which only supports very simple queries. The sketch includes only the \texttt{SELECT} and \texttt{WHERE} clauses, allows only a single expression in the \texttt{SELECT} clause, and works only over a single table. To predict more complex SQL queries, sequence-to-sequence \citep{iyer, improve-text-to-sql} and template-based \citep{improve-text-to-sql, lee2019one} approaches have been proposed. However, they focused only on specific databases such as ATIS \cite{atis} and GeoQuery \cite{geo}.
Because they only considered question and SQL pairs without requiring an understanding of database schema, their approaches cannot generalize to unseen databases. SyntaxSQLNet \citep{syntaxsqlnet} is the first and current state-of-the-art model for \textit{Spider} \citep{spider}, a complex and cross-domain text-to-SQL task. They proposed an SQL-specific syntax-tree-based decoder with SQL generation history. Our approach differs from their model in the following aspects. First, taking into account that SQL is a non-procedural language, we develop a clause-specific decoder for each SQL clause, whereas SyntaxSQLNet predicts SQL tokens sequentially. For example, in SyntaxSQLNet, a single column prediction module works in both the \texttt{SELECT} and \texttt{WHERE} clauses, depending on the SQL decoding history. In contrast, we define and train decoding modules separately for each SQL clause to fully utilize clause-dependent context. Second, we apply a sequence-to-sequence architecture to predict columns instead of using the sequence-to-set framework from SyntaxSQLNet, because correct ordering is essential for the \texttt{GROUP BY} and \texttt{ORDER BY} clauses. Finally, we introduce a self-attention mechanism \citep{self-attention} to efficiently encode database schema, which includes multiple tables. \begin{figure}[t] \centering\includegraphics[scale=0.31]{figure1.PNG} \caption{\label{figure1} Clause-wise and recursive SQL generation process.} \end{figure} \section{Methodology} We predict complex SQL queries clause by clause, as described in Figure~\ref{figure1}. Each clause is predicted consecutively by at most three different types of modules (sketch, column, operator). The same architecture recursively predicts nested queries, taking the intermediate predicted SQL as an additional input. \subsection{Question and Schema Encoding} \label{sec:encoding} We encode a natural language question with a bi-directional LSTM.
We denote $H_{Q} \in \mathbb{R}^{d \times \left\vert X \right\vert}$ as the question encoding, where $d$ is the number of LSTM units and $\left\vert X \right\vert$ is the number of tokens in the question. To encode a database schema, we consider each column in its tables as a concatenated sequence of words from the table name and column name with a separation token (e.g., [student, \texttt{[SEP]}, first, name]). First, we apply a bi-directional LSTM over this sequence for each column. Then, we apply the self-attention mechanism \citep{self-attention} over the LSTM outputs to form a summarized fixed-size vector for each column. For the $i$th column, its encoding $h_{col}^{(i)} \in \mathbb{R}^{d}$ is computed by a weighted sum of the LSTM output $o_{col}^{(i)} \in \mathbb{R}^{d \times \left\vert L \right\vert}$ as follows: \begin{gather} \alpha = \texttt{softmax}(w^T \texttt{tanh}(o_{col}^{(i)})) \\ h_{col}^{(i)} = o_{col}^{(i)} \, \alpha^T \end{gather} where $\left\vert L \right\vert$ is the number of tokens in the column and $w \in \mathbb{R}^{d}$ is a trainable parameter. We denote $H_{col} = [h_{col}^{(1)}, \ldots, h_{col}^{(\left\vert C \right\vert)}]$ as the columns encoding, where $\left\vert C \right\vert$ is the number of columns in the database. \subsection{Sketch Prediction} \label{sec:sketch} We predict the clause-wise sketch via 8 different text classification modules that predict the number of SQL expressions in each clause, the presence of a \texttt{LIMIT} clause, and the presence of \texttt{INTERSECT/UNION/EXCEPT}, as described in Figure~\ref{figure1}. All of them share the same model architecture but are trained separately. For the classification, we apply an attention-based bi-directional LSTM following \citet{attention-text-clf}. First, we compute a sentence representation $r_s \in \mathbb{R}^{d}$ by a weighted sum of the question encoding $H_{Q} \in \mathbb{R}^{d \times \left\vert X \right\vert}$.
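Both the column encoder and the sentence representation above rely on the same attention-pooling pattern (a softmax over scores $w^T \tanh(\cdot)$, followed by a weighted sum). A minimal numpy sketch, with toy dimensions and random arrays standing in for trained parameters and LSTM outputs:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a vector.
    e = np.exp(z - z.max())
    return e / e.sum()

d, n_tokens = 8, 4                          # toy hidden size and token count
rng = np.random.default_rng(0)
o_col = rng.standard_normal((d, n_tokens))  # stand-in for bi-LSTM outputs of one column name
w = rng.standard_normal(d)                  # stand-in for the trainable pooling vector

alpha = softmax(w @ np.tanh(o_col))         # attention weights over the tokens
h_col = o_col @ alpha                       # fixed-size column encoding

assert alpha.shape == (n_tokens,)
assert h_col.shape == (d,)
```

The same pooling applied to the question encoding in place of the column outputs yields the sentence representation used by the sketch classifiers.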
Then we apply the softmax classifier to choose the sketch as follows: \begin{gather} \alpha_{s} = \texttt{softmax}(w_{s}^T \texttt{tanh}(H_{Q})) \\ r_s = H_{Q} \, \alpha_{s}^T \\ P_{sketch} = \texttt{softmax}(W_{s}r_s + b_{s}) \end{gather} where $w_s \in \mathbb{R}^{d}, W_s \in \mathbb{R}^{n_s \times d}, b_s \in \mathbb{R}^{n_s}$ are trainable parameters and $n_s$ is the number of possible sketches. \subsection{Columns and Operators Prediction} \label{sec:col-and-op} To predict columns and operators, we use an LSTM decoder with the attention mechanism \citep{luong-attention} such that \textit{the number of decoding steps is decided by the sketch prediction module}. We train 5 different column prediction modules separately for each SQL clause, but they share the same architecture. In the column prediction module, the hidden state of the decoder at the $t$-th decoding step is computed as $d_{col}^{(t)} = \texttt{LSTM}(d_{col}^{(t-1)}, h_{col}^{(t-1)}) \in \mathbb{R}^{d}$, where $h_{col}^{(t-1)} \in \mathbb{R}^{d}$ is an encoding of the predicted column in the previous decoding step. The context vector $r^{(t)}$ is computed by a weighted sum of the question encoding $H_{Q} \in \mathbb{R}^{d \times \left\vert X \right\vert}$ based on attention weights as follows: \begin{gather} \alpha^{(t)} = \texttt{softmax}({d_{col}^{(t)}}^T \, H_{Q}) \\ r^{(t)} = H_Q \, {\alpha^{(t)}}^T \end{gather} Then, the attentional output of the $t$-th decoding step $a_{col}^{(t)}$ is computed as a linear combination of $d_{col}^{(t)} \in \mathbb{R}^{d}$ and $r^{(t)} \in \mathbb{R}^{d}$ followed by a \texttt{tanh} activation. \begin{equation}\label{eqn:attention} a_{col}^{(t)} = \texttt{tanh}(W_1 d_{col}^{(t)} + W_2 r^{(t)}) \end{equation} where $W_1,W_2 \in \mathbb{R}^{d \times d}$ are trainable parameters.
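One attentional decoding step just described can be sketched in numpy as follows (toy sizes; random arrays replace the learned weights and the LSTM state updates):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a vector.
    e = np.exp(z - z.max())
    return e / e.sum()

d, n_tokens = 8, 5
rng = np.random.default_rng(1)
H_Q = rng.standard_normal((d, n_tokens))    # question encoding (stand-in)
d_col = rng.standard_normal(d)              # decoder hidden state at step t (stand-in)
W1 = rng.standard_normal((d, d))            # trainable (stand-in)
W2 = rng.standard_normal((d, d))            # trainable (stand-in)

alpha_t = softmax(d_col @ H_Q)              # attention weights over question tokens
r_t = H_Q @ alpha_t                         # context vector
a_col = np.tanh(W1 @ d_col + W2 @ r_t)      # attentional output of the step

assert a_col.shape == (d,)
```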
Finally, the probability for each column at the $t$-th decoding step is computed as a dot product between $a_{col}^{(t)} \in \mathbb{R}^{d}$ and the encoding of each column in $H_{col} \in \mathbb{R}^{d \times \left\vert C \right\vert}$ followed by softmax. \begin{equation} P_{col}^{(t)} = \texttt{softmax}({a_{col}^{(t)}}^T \, H_{col}) \end{equation} To predict corresponding operators for each predicted column, we use a decoder of the same architecture as in the column prediction module. The only difference is that a decoder input at the $t$-th decoding step is an encoding of the $t$-th predicted column from the column prediction module. \begin{equation} d_{op}^{(t)} = \texttt{LSTM}(d_{op}^{(t-1)}, h_{col}^{(t)}) \end{equation} Attentional output $a_{op}^{(t)} \in \mathbb{R}^{d}$ is computed identically to Eq. (\ref{eqn:attention}). Then, the probability for operators corresponding to the $t$-th predicted column is computed by the softmax classifier as follows: \begin{equation} P_{op}^{(t)} = \texttt{softmax}(W_o a_{op}^{(t)} + b_o) \end{equation} where $W_o \in \mathbb{R}^{n_o \times d}$ and $b_o \in \mathbb{R}^{n_o}$ are trainable parameters and $n_o$ is the number of possible operators. \begin{table*}[h!] 
\small \centering \begin{tabular}{l|ccccc|c} \toprule & \multicolumn{5}{c}{Dev} & \multicolumn{1}{c}{Test} \\ Method & Easy & Medium & Hard & Extra Hard & All & All \\ \midrule SQLNet & 23.2\% & 8.6\% & 9.8\% & 0\% & 10.9\% & 12.4\% \\ TypeSQL & 18.8\% & 5.5\% & 4.6\% & 2.4\% & 8.0\% & 8.2\% \\ SyntaxSQLNet & 38.4\% & 15.0\% & 16.1\% & 3.5\% & 19.0\% & 19.7\% \\ \midrule Ours & \textbf{53.2\%} & \textbf{27.0\%} & \textbf{20.1\%} & \textbf{6.5\%} & \textbf{28.8\%} & \textbf{24.3\%} \\ -rec & 53.2\% & 27.0\% & 14.4\% & 2.9\% & 27.4\% & - \\ -rec - col-att & 46.4\% & 22.0\% & 12.1\% & 4.7\% & 23.4\% & - \\ -rec -col-att -sketch & 33.2\% & 18.6\% & 11.5\% & 4.7\% & 18.7\% & - \\ \bottomrule \end{tabular} \caption{\label{result-table-1} Accuracy of exact SQL matching with different hardness levels.} \end{table*} \begin{table*}[h!] \small \centering \begin{tabular}{l|ccccc} \toprule Method & SELECT & WHERE & GROUP BY & ORDER BY & KEYWORDS \\ \midrule SQLNet & 46.6\% & 20.6\% & 37.6\% & 49.2\% & 62.8\% \\ TypeSQL & 43.7\% & 14.8\% & 16.9\% & 52.1\% & 67.0\% \\ SyntaxSQLNet & 55.4\% & 22.2\% & 51.4\% & 50.6\% & 73.3\% \\ \midrule Ours & \textbf{68.7\%} & \textbf{39.0\%} & \textbf{63.1\%} & \textbf{63.5\%} & \textbf{76.5\%} \\ \bottomrule \end{tabular} \caption{\label{result-table-2} F1 scores of SQL component matching on the \textit{dev} set. } \end{table*} \subsection{From Clause Prediction} \label{sec:from} After the predictions of all the other clauses, we use a heuristic to generate the \texttt{FROM} clause. We first collect all the columns that appear in the predicted SQL, and then we \texttt{JOIN} tables that include these predicted columns. \subsection{Recursion for Nested Queries} To predict the presence of a sub-query, we train another module that has the same architecture as the operator prediction module. 
Instead of predicting corresponding operators for each column, it predicts whether each column is compared to a variable (e.g., \texttt{WHERE} age $>$ 3) or to a sub-query (e.g., \texttt{WHERE} age $>$ (\texttt{SELECT} \texttt{avg}(age) ..)). In the latter case, we add a temporary \texttt{[SUB\_QUERY]} token to the corresponding location in the SQL output. Additionally, if the sketch prediction module predicts one of the \texttt{INTERSECT/UNION/EXCEPT} operators, we add a \texttt{[SUB\_QUERY]} token after the operator. To predict a sub-query, our model takes the partially generated SQL with a \texttt{[SUB\_QUERY]} token as an input in addition to the natural language question, with a separator token \texttt{[SEP]} (e.g., What is ... \texttt{[SEP]} \texttt{SELECT} ... \texttt{INTERSECT} \texttt{[SUB\_QUERY]}). This input is encoded in the same way as the question encoding described in Section~\ref{sec:encoding}. Then, the rest of the SQL generation process is identical to that described in Sections~\ref{sec:sketch}--\ref{sec:from}. After the sub-query is predicted, it replaces the \texttt{[SUB\_QUERY]} token to form the final query. \section{Experiments} \subsection{Experimental Setup} We evaluate our model with \textit{Spider} \citep{spider}, a large-scale, complex and cross-domain text-to-SQL dataset. We follow the same database split as \citet{spider}, which ensures that any database schema that appears in the training set does not appear in the dev or test set. Through this split, we examine how well our model generalizes to unseen databases. Because the test set is not open to the public, we use the \textit{dev} set for the ablation analysis. For the evaluation metrics, we use 1) accuracy of exact SQL matching and 2) F1 score of SQL component matching, proposed by \citet{spider}. We also follow their query hardness criteria to understand the model performance on different levels of queries.
Our model and all the baseline models are trained on only the \textit{Spider} dataset, without data augmentation. \subsection{Model Configuration} We use the same hyperparameters for every module. For the word embedding, we apply deep contextualized word representations (ELMo) from \citet{elmo} and allow them to be fine-tuned during training. For the question and column encoders, we use a 1-layer 512-unit bi-directional LSTM. For the decoders in the columns and operators prediction modules, we use a 1-layer 1024-unit uni-directional LSTM. For the training, we use the Adam optimizer \citep{adam} with a learning rate of 1e-4 and use early stopping with 50 epochs. Additionally, we use dropout \citep{dropout} with a rate of 0.2 for regularization. \subsection{Result and Analysis} Table~\ref{result-table-1} shows the exact SQL matching accuracy of our model and previous models. We achieve 24.3\% and 28.8\% on the test and dev sets respectively, which outperforms the previous best model SyntaxSQLNet \citep{syntaxsqlnet} by 4.6\% and 9.8\%. Moreover, our model outperforms previous models on all query hardness levels. To examine how each technique contributes to the performance, we conduct an ablation analysis in three aspects: 1) without recursion, 2) without self-attention for database schema encoding, and 3) without sketch prediction modules that decide the number of decoding steps. Without recursive sub-query generation, the accuracy drops by 5.7\% and 3.6\% for hard and extra hard queries, respectively. This result shows that the recursion we use enables the model to predict nested queries. When using the final LSTM hidden state as in \citet{syntaxsqlnet} instead of using self-attention for schema encoding, the accuracy drops by 4.0\% on all queries. Finally, when using only an encoder-decoder architecture without sketch generation for columns prediction, the accuracy drops by 4.7\%.
For component matching, our model outperforms previous approaches on every SQL clause by a significant margin, as shown in Table~\ref{result-table-2}. Examples of predicted SQL from different models are shown in Appendix~\ref{sec:appendix-a}. \section{Conclusion} In this paper, we propose a recursive, SQL clause-wise decoding neural architecture to address the complex and cross-domain text-to-SQL task. We evaluate our model on the \textit{Spider} dataset, and the experimental results show that our model significantly outperforms previous work in generating not only simple queries, but also complex and nested queries. \section*{Acknowledgments} We thank Yongsik Lee, Jaesik Yoon, and Donghun Lee (SAP) for their reviews and support. We also thank Professor Sungroh Yoon, Jongyun Song, and Taeuk Kim (Seoul National University) for their insightful feedback, and the three anonymous reviewers for their helpful comments.
\section{Introduction} The intersymbol interference (ISI) channel has been widely studied in communication theory. Optimal detection schemes for the ISI channel must consider input-output sequences, rather than individual symbols~\cite{Forney}. Sequence detectors such as the \emph{Viterbi} detector only compute hard decisions~\cite{Viterbi}. On the other hand, modern coding techniques require detection schemes that also compute \emph{symbol reliabilities} (also known as \emph{soft-outputs}, \emph{log-likelihood ratios}, etc.)~\cite{Turbo,Ma,Lim}. Commonly cited detectors that perform this task include the \emph{soft-output Viterbi algorithm} (SOVA)~\cite{SOVA}, the \emph{Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm}~\cite{BCJR}, and the \emph{max-log-map} (MLM) detector~\cite{EquivSOVA}. Although these detectors have been in use for some time, the literature on their analysis is scarce. Recently, however, there has been renewed interest in the analysis of the MLM detector. The marginal symbol error probability has been derived for a 2-state convolutional code in~\cite{Yoshi}; this has been further extended to convolutional codes with constraint length two in~\cite{Lent}. Also, approximations for the MLM reliability distributions are obtained in~\cite{Reggiani,Avu}. In this paper, we consider the MLM receiver applied, using binary signaling, to an intersymbol interference (ISI) channel. In particular, we consider its \emph{sliding-window} implementation. An MLM receiver is termed $\L$-truncated if it considers only a signaling window of length $\L$ around the time instant of interest. We show that the analysis of $\L$-truncated MLM receivers is tractable: for any number $n$ of chosen time instants, we derive \emph{exact, closed-form} expressions for \emph{both} i) the \emph{joint} distribution of the symbol reliabilities, and ii) the \emph{joint} probability that the detected symbols are in error.
While past work considered only marginal distributions, we provide analytic expressions for joint MLM receiver statistics. Our derivation is short, and follows from a single key observation. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{CS.eps} \caption{Time evolution of the channel states. Given the state at time $t-1$, the channel input $A_t$ determines the \emph{new} state at time $t$. The channel output $Z_t$ clearly depends on the two neighboring states.} \label{fig:CS} \vspace*{-15pt} \end{figure} \textbf{Notation}: {Deterministic} quantities are denoted as follows. Bold fonts are used to distinguish both vectors and matrices (e.g. denoted~$\mat{a}$ and $ \Mat{A}$, respectively) from scalar quantities (e.g. denoted~$a$). Next, {random} quantities are denoted as follows. Scalars are denoted using upper-case italics (e.g. denoted $A$) and vectors are denoted using upper-case bold italics (e.g. denoted~$\randb{A}$). Note that we do not reserve specific notation for random matrices. Throughout the paper, both $t$ and $\tau$ are used to denote time indices. Sets are denoted using curly braces, e.g. $\{a_1, a_2, a_3, \cdots\}$. Also, both $\alpha$ and $\beta$ are used for auxiliary notation as needed. Finally, the maximization over the components of the size-$n$ vector $\mat{a} = [a_1,a_2,\cdots, a_n]^T$ may be written either explicitly as $\max_{i \in \{1,2,\cdots, n \}} a_i$, or concisely as $\max \mat{a}$. Events are denoted in curly brackets, e.g. $\Ev{A\leq a}$ is the event where $A $ is at most $a$. The probability of the event $\Ev{A\leq a}$ is denoted $\Pr{A\leq a}$. The letter $F$ is reserved to denote \emph{cumulative} distribution functions, i.e. $F_A(a) = \Pr{A \leq a}$. The expectation of $A$ is denoted $\mathbb{E}\{A\}$.
\section{The MLM Algorithm} A random sequence of symbols drawn from the set $\{-1, 1\}$, denoted as $\cdots, \rand{A}_{-2},\rand{A}_{-1}, \rand{A}_{0},\rand{A}_{1}, \rand{A}_{2}, \linebreak[1] \cdots$, is transmitted across the ISI channel. Let the random sequence $\cdots, \rand{Z}_{-2}, \rand{Z}_{-1}, \rand{Z}_{0}, \linebreak[1]\rand{Z}_{1}, \rand{Z}_{2}, \cdots$ denote the ISI \emph{channel output} sequence. Let $h_0, h_1, \cdots, h_\ell$ denote the ISI \emph{channel coefficients}, where $\ell$ is a non-negative integer. The input-output relationship of the ISI channel is given by the following equation\vspace*{-15pt} \begin{eqnarray} \rand{Z}_t &=& \sum_{i=0}^\ell h_i \rand{A}_{t-i} - \rand{W}_t, \label{eqn:chan1} \end{eqnarray} and we assume that the noise\footnote{To obtain neater expressions in the sequel, the Gaussian noise sample $\rand{W}_t$ in (\ref{eqn:chan1}) is subtracted. This differs from convention, where $\rand{W}_t$ is typically added~\cite{Forney}. Note there is no loss in generality when subtracting, because the Gaussian distribution is symmetric about its mean. }~samples $\cdots, \rand{W}_{-2},\rand{W}_{-1}, \rand{W}_{0}, \linebreak[1] \rand{W}_{1}, \rand{W}_{2}, \cdots$ are zero-mean and jointly Gaussian distributed (note that we \emph{do not} assume they are \emph{independent}). \begin{defn} \label{defn:state} The ISI \textbf{channel state} at time $t$ equals the (length-$\ell$) vector of input symbols $[\rand{A}_{t-\ell+1},\rand{A}_{t-\ell+2},\cdots, \rand{A}_{t}]^T$. The constant $\ell$ in (\ref{eqn:chan1}) is termed the ISI channel \textbf{memory length}. \end{defn} Figure \ref{fig:CS} depicts the time evolution of the ISI channel states. The total number of possible states is clearly $2^\ell$, which is exponential in the memory length $\ell$. \subsection{The $\L$-truncated max-log-map (MLM) detector} We proceed to describe the sliding-window MLM receiver.
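A minimal simulation of the channel model (\ref{eqn:chan1}), with assumed toy coefficients and i.i.d. Gaussian noise (the paper only requires the noise to be jointly Gaussian), is:

```python
import numpy as np

# Minimal simulation of the ISI channel: Z_t = sum_i h_i A_{t-i} - W_t,
# with binary inputs A_t in {-1, +1}; the noise is i.i.d. here purely for
# simplicity of illustration.
rng = np.random.default_rng(1)
h = np.array([1.0, 0.5, 0.25])          # channel coefficients h_0..h_ell
ell = len(h) - 1                        # channel memory length
A = rng.choice([-1, 1], size=50)        # transmitted symbol sequence
W = 0.1 * rng.normal(size=50)           # Gaussian noise samples
Z = np.zeros(50)
for t in range(50):
    for i in range(ell + 1):
        if t - i >= 0:                  # symbols before time 0 treated as 0
            Z[t] += h[i] * A[t - i]
    Z[t] -= W[t]                        # noise is subtracted, per the model
# The same output via convolution (with the same zero padding before t = 0):
Z_conv = np.convolve(A, h)[:50] - W
print(np.allclose(Z, Z_conv))           # True
```

The double loop makes the memory structure explicit; the convolution form is how one would actually generate long output sequences.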
At time instant $t$, the $\L$-truncated MLM detector considers the neighborhood of $2\L+\ell+1$ channel outputs $\randb{Z}_t \stackrel{\triangle}{=} [\rand{Z}_{t-\L},\rand{Z}_{t-\L+1},\cdots, \rand{Z}_{t+\L+\ell}]^T$. Define the symbol neighborhood $\randb{A}_t$ containing the following $2(\L+\ell)+1$ input symbols \begin{eqnarray} \randb{A}_t \stackrel{\triangle}{=} [\rand{A}_{t-\L-\ell}, \rand{A}_{t-\L-\ell+1},\cdots, \rand{A}_{t+\L+\ell}]^T. \label{eqn:sig_vect} \end{eqnarray} Both $\randb{A}_t$ and $\randb{Z}_t$ are depicted in Figure \ref{fig:Trellis}. Let $\mat{h}_i$ denote the following length-$(2\L+\ell+1)$ vector\vspace*{-5pt} \begin{eqnarray} \mat{h}_i &\stackrel{\triangle}{=}& [\overbrace{0,0,\cdots, 0}^{\L+i},h_0,h_1, \cdots, h_\ell, \overbrace{0,0,\cdots, 0}^{\L-i}]^T, \label{eqn:h_i} \end{eqnarray} where $i$ can take values $|i| \leq m$. Let $\mathbbb{0}$ denote an \emph{all-zeros} vector $\mathbbb{0}\stackrel{\triangle}{=} [0,0,\cdots,0]^T$. Let both $\Mat{H}$ and $\Mat{T}$ denote the size $2\L+\ell+1$ by $2(\L+\ell)+1$ matrices given as \begin{eqnarray} \mat{H} \!\!\!\!\! &\stackrel{\triangle}{=}& \!\!\![\overbrace{\mathbbb{0},\mathbbb{0},\cdots,\mathbbb{0}}^{\ell}, \overbrace{\mat{h}_{-\L},\mat{h}_{-\L+1},\cdots,\mat{h}_{\L}}^{2m+1},\overbrace{\mathbbb{0},\mathbbb{0},\cdots,\mathbbb{0}}^{\ell}], \nonumber\\ \mat{T} \!\!\!\!\! &\stackrel{\triangle}{=}& \!\!\! [\mat{T}_1, \mathbbb{0}, \mathbbb{0}, \cdots, \mathbbb{0}, \mat{T}_2], \label{eqn:HandT} \end{eqnarray} where the two \emph{submatrices} $\mat{T}_1$ and $\mat{T}_2$ equal \begin{eqnarray} \mat{T}_1 \!\!\!\! &=& \!\!\!\!\!\!
\left[ \begin{array}{@{}c@{}} \begin{array}{*{12}{@{\hspace{.5ex}}c@{ \hspace{.5ex} }}} h_\ell & h_{\ell-1} & \cdots & h_1 \\ & h_\ell & & \vdots \\ & & \ddots & \vdots \\ & & & h_\ell \\ \end{array} \\ \begin{array}{c} \\ \\ \\ \\ \\ \\ \end{array} \end{array} \right], \mat{T}_2 = \!\! \left[ \begin{array}{@{}c@{}} \begin{array}{c} \\ \\ \\ \\ \\ \\ \end{array}\\ \begin{array}{*{12}{@{\hspace{.5ex}}c@{ \hspace{.5ex} }}} h_0 \\ \vdots & \ddots \\ h_{\ell-2} & \cdots & h_0 \\ h_{\ell-1} & \cdots & h_1 & h_0 \end{array} \end{array} \right] \left. \begin{array}{@{}c@{}} \\ \\ \\ \\ \\ \\ \\ \\ \\\\ \end{array} \right\} \renewcommand{\arraystretch}{.7} \small \begin{array}{@{}c@{}} 2m \\ + \\ \ell \\ + \\ 1 \end{array}. \nonumber \end{eqnarray} Using (\ref{eqn:HandT}) and (\ref{eqn:chan1}), we can rewrite $\randb{Z}_t = [\rand{Z}_{t-\L},\rand{Z}_{t-\L+1}, \cdots, \rand{Z}_{t+\L+\ell}]^T$ in the following form \begin{eqnarray} \randb{Z}_t &=& \left( \Mat{H} + \Mat{T} \right) \randb{A}_t - \randb{W}_t, \label{eqn:isi_chan} \end{eqnarray} where $\randb{W}_t$ denotes the neighborhood of noise samples \begin{eqnarray} \randb{W}_t \stackrel{\triangle}{=} [\rand{W}_{t-\L},\rand{W}_{t-\L+1},\cdots,\rand{W}_{t+\L+\ell}]^T. \label{eqn:Wt} \end{eqnarray} \begin{defn} \label{defn:Sym} Let $\mathcal{M}$ denote the set containing the $\L$-truncated MLM \textbf{candidate} sequences \begin{eqnarray} \mathcal{M} &\stackrel{\triangle}{=}& \left\{ \mat{\a} \in \{-1,1\}^{2(\L+\ell)+1} : \a_i = 1 \mbox{ for all } |i| > \L \right\}. \nonumber\\ \label{eqn:Sym} \end{eqnarray} Each candidate $\mat{\a} \in \mathcal{M}$ has the following form \[ \mat{\a} = [\overbrace{1,1,\cdots, 1}^{\ell}, \a_{-\L}, \a_{-\L+1},\cdots, \a_{\L}, \overbrace{1,1,\cdots, 1}^{\ell}]^T, \] i.e.
candidates $\mat{\a} \in \mathcal{M}$ have boundary\footnote{Alternatively, the boundary symbols can be specified to be any sequence of choice in the set $\{-1,1\}^\ell$; here we choose the boundary sequence $[1,1,\cdots,1] = \mathbbb{1}$ simply for clearer exposition.}~symbols equal to $1$. \end{defn} An example of a candidate sequence in the set $\mathcal{M}$ is illustrated in Figure \ref{fig:Trellis}. The boundary symbols of the candidates $\mat{a} \in \mathcal{M}$ are fixed, because the boundary symbols of the transmitted sequence $\randb{A}_t$ are \emph{unknown} to the detector. The start/end states of $\randb{A}_t$ (colored \emph{black}) are shown (see Figure \ref{fig:Trellis}) to differ from the start/end states of the candidate $\mat{a} \in \mathcal{M}$ (colored \emph{white}). Let the following sequence $\cdots, \rand{B}_{-2},\rand{B}_{-1}, \linebreak[1] \rand{B}_{0},\rand{B}_{1}, \rand{B}_{2}, \cdots$ denote \emph{symbol decisions} on the channel inputs $\cdots, \rand{A}_{-2},\rand{A}_{-1}, \rand{A}_{0},\rand{A}_{1}, \rand{A}_{2}, \cdots$. Let $\mathbbb{1}$ denote the \emph{all-ones} vector $\mathbbb{1} \stackrel{\triangle}{=} [1,1,\cdots, 1]^T$. In the following, let $|\mat{a}|$ denote the Euclidean norm of the vector $\mat{a}$. \begin{defn} \label{defn:Bt} The symbol decision $\rand{B}_{t}$ on channel input $\rand{A}_{t}$ is obtained by i) computing the sequence $\randb{B}\bI{t}$ that achieves the following minimum \begin{eqnarray} \randb{B}\bI{t} &\stackrel{\triangle}{=}& \arg \min_{\mat{\a}\in\mathcal{M}} |\pmb{\rand{Z}}_t - (\Mat{H} + \Mat{T}) \mat{\a} |^2, \nonumber\\ &=& \arg \min_{\mat{\a}\in\mathcal{M}} |\pmb{\rand{Z}}_t - \Mat{T}\mathbbb{1} - \Mat{H}\mat{\a}|^2, \label{eqn:B1} \end{eqnarray} and ii) setting the symbol decision $\rand{B}_{t}$ to the $0$-th component of $\randb{B}\bI{t}$ in (\ref{eqn:B1}), i.e.
set $\rand{B}_{t} \stackrel{\triangle}{=} \rand{B}\bI{t}_0$, where the sequence $\randb{B}\bI{t} = [\mathbbb{1}, \rand{B}\bI{t}_{-\L}, \rand{B}\bI{t}_{-\L+1},\cdots, \rand{B}\bI{t}_{-1},\rand{B}\bI{t}_{0},\rand{B}\bI{t}_{1},\cdots, \rand{B}\bI{t}_{\L}, \mathbbb{1}]^T$. \end{defn} \begin{figure*}[!t] \centering \includegraphics[width=.7\linewidth]{Trellis3.eps} \caption{The $\L$-truncated Max-Log-Map (MLM) detector. Here we illustrate the case $m=6$ and $\ell=2$, where the time evolution of the ISI channel states is depicted as before in Figure \ref{fig:CS}. All $2^\ell=4$ possible states are shown. Channel states colored black and white correspond, respectively, to the symbol neighborhood $\pmb{\rand{A}}_t$ and a candidate sequence $\textbf{a}$ in the set $\mathcal{M}$ (see Definition \ref{defn:Sym}). As shown, $\pmb{\rand{A}}_t$ and $\textbf{a}$ may not have the same starting and/or end states.} \label{fig:Trellis} \vspace*{-10pt} \end{figure*} The sequence $\randb{B}\bI{t}$ in (\ref{eqn:B1}), and therefore the symbol decision $\rand{B}_{t}$, is obtained by considering the candidate sequences in the set $\mathcal{M}$; recall Definition \ref{defn:Sym} and refer to Figure \ref{fig:Trellis}. Note that $\randb{B}\bI{t}$ does not equal the MLM bit detection sequence $\cdots, \rand{B}_{-2},\rand{B}_{-1}, \rand{B}_{0},\rand{B}_{1},\linebreak[1] \rand{B}_{2}, \cdots$; only the $t$-th symbol $B_t$ is obtained from $\randb{B}\bI{t}$. To obtain $\randb{B}\bI{t}$, we compare the squared Euclidean distances of each candidate $\Mat{H}\mat{\a}$ from the received neighborhood $\pmb{\rand{Z}}_t - \Mat{T}\mathbbb{1}$. In addition to computing \emph{hard}, i.e., $\{-1,1\}$, symbol decisions $\cdots, \rand{B}_{-2},\rand{B}_{-1},\rand{B}_{0},\rand{B}_{1},\rand{B}_{2},\cdots$, the $\L$-truncated MLM also computes the symbol \emph{reliability} sequence, denoted as $\cdots, \rand{R}_{-2},\linebreak[1]\rand{R}_{-1},\linebreak[1] \rand{R}_{0},\rand{R}_{1},\linebreak[1] \rand{R}_{2},\cdots$. Consider the following log-likelihood approximation (see~\cite{EquivSOVA}) \begin{eqnarray} \log \frac{\Pr{A_t = B_t | \randb{Z}_t}}{\Pr{A_t \neq B_t| \randb{Z}_t}} \!\!\!\! &=& \!\!\!\! \log \frac{\sum_{\mat{a} \in \mathcal{M} : \; a_0 = B_t} \Pr{\randb{Z}_t|\randb{A}_t = \mat{a} }}{\sum_{\mat{a} \in \mathcal{M} : \; a_0 \neq B_t} \Pr{\randb{Z}_t| \randb{A}_t = \mat{a}} } \nonumber\\ &\approx& \!\!\!\!\!\! \mathop{\min_{\mat{a} \in \mathcal{M}}}_{a_0 \neq \rand{B}_t} \frac{1}{2\sigma^2} |\pmb{\rand{Z}}_t - \Mat{T}\mathbbb{1} - \Mat{H}\vec{\a} |^2 \nonumber\\ && \!\!\!\! - \mathop{\min_{\mat{a} \in \mathcal{M}}}_{a_0 = \rand{B}_t} \frac{1}{2\sigma^2} |\pmb{\rand{Z}}_t - \Mat{T}\mathbbb{1} - \Mat{H}\vec{\a} |^2, \label{eqn:llr} \end{eqnarray} where the first equality assumes uniform signal priors\footnote{The relaxation of this assumption is discussed in the latter-half of the upcoming Subsection \ref{ssect:33}, where we allow some of the probabilities $\Pr{\randb{A}_t=\mat{a}}$ to equal zero, i.e. in the case of modulation coding. We also comment on non-uniform signal priors in the upcoming Remark \ref{rem:ass}.}, i.e. $\Pr{\randb{A}_t=\mat{a}}=2^{-2(m+\ell)-1}$, see (\ref{eqn:sig_vect}). We also denote by $\sigma^2$ the worst-case noise variance\footnote{If $\rand{W}_t$ is stationary, then $\sigma^2 = \mathbb{E}\{\rand{W}_t^2 \}$.} \begin{eqnarray} \sigma^2 & \stackrel{\triangle}{=} & \sup_{t \in \mathbb{Z}} \mathbb{E} \{W_t^2\}. \label{eqn:snr} \end{eqnarray} We assume that $\sigma^2$ is bounded, i.e. $\sigma^2< \infty$. We want to set the ($\L$-truncated MLM) reliability $R_t$ to equal the log-likelihood approximation (\ref{eqn:llr}); before formally stating the expression for $R_t$, we first make another definition.
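The max-log step in (\ref{eqn:llr}) replaces each log-sum-exp by its dominant term; with toy squared distances (assumed numbers, not from the paper) the gap is small:

```python
import numpy as np

# The max-log approximation: the log-sum-exp of Gaussian likelihood
# exponents is replaced by its single dominant term, so the LLR becomes a
# difference of two minimum squared distances.  Toy numbers for illustration:
sigma2 = 0.5
d_wrong = np.array([4.0, 6.5, 9.0])     # squared distances, a_0 != B_t
d_right = np.array([1.0, 3.0, 7.5])     # squared distances, a_0  = B_t
exact = (np.log(np.exp(-d_right / (2 * sigma2)).sum())
         - np.log(np.exp(-d_wrong / (2 * sigma2)).sum()))
maxlog = d_wrong.min() / (2 * sigma2) - d_right.min() / (2 * sigma2)
print(exact, maxlog)   # the two LLRs agree up to the log-sum-exp gap
```

Here `maxlog` is exactly $3.0$, while the exact LLR is about $3.04$; the approximation error shrinks as the distance gaps between candidates grow relative to the noise variance.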
Denote the difference in the obtained squared Euclidean distances \begin{eqnarray} \Delta(\vec{\a},\vec{a'}) &=&\Delta(\mat{\a},\mat{a'}; \randb{Z}_t) \nonumber\\ &\stackrel{\triangle}{=}& \!\!\!\!\! |\pmb{\rand{Z}}_t - \Mat{T}\mathbbb{1} - \Mat{H}\vec{\a} |^2 - |\pmb{\rand{Z}}_t - \Mat{T}\mathbbb{1} - \Mat{H}\vec{a'} |^2 ,\label{eqn:D} \end{eqnarray} where both $\mat{\a}$ and $ \mat{a'} $ are arbitrary sequences in $\{-1,1\}^{2(\L+\ell)+1}$. Recalling (\ref{eqn:B1}), we write $R_t$ as follows. \begin{defn} \label{defn:relt} The non-negative $\L$-truncated MLM reliability $\rand{R}_t$ is defined as \begin{eqnarray} \rand{R}_t &\stackrel{\triangle}{=}& \BigMin{\a} \frac{1}{2\sigma^2} \Delta(\vec{\a},\pmb{\rand{B}}\bI{t}) , \label{relt} \end{eqnarray} where $\Delta(\vec{\a},\pmb{\rand{B}}\bI{t}) \geq 0$ is the difference in the obtained squared Euclidean distances corresponding to candidates $\mat{a},\pmb{\rand{B}}\bI{t} \in \mathcal{M}$, and $ \sigma^2 $ is the noise variance (\ref{eqn:snr}). \end{defn} Note that $\Delta(\vec{\a},\pmb{\rand{B}}\bI{t}) \geq 0$ for all $\mat{a} \in \mathcal{M}$, simply because $\pmb{\rand{B}}\bI{t}$ achieves the minimum squared Euclidean distance amongst all candidates in $\mathcal{M}$, see (\ref{eqn:B1}). \section{Key Observation and Statement of Main Result} This section contains three subsections. In the first subsection, we describe an important \emph{key observation}, from which the main result of this paper is derived. In the second subsection, we state the main result and give closed-form expressions for i) the joint reliability distribution $F_{R_{t_1}, R_{t_2},\cdots, R_{t_n}}(r_1,r_2,\cdots, r_n)$, and ii) the joint symbol error probability $\Pr{\rand{B}_{t_1}\neq \rand{A}_{t_1}, \cdots, \rand{B}_{t_n}\neq \rand{A}_{t_n}}$. The result holds for any number $n$ of arbitrarily chosen time instants $t_1,t_2,\cdots,t_n$.
The second subsection also gives a Monte-Carlo based procedure that evaluates these closed-form expressions. In the third subsection, we address two important points regarding this Monte-Carlo procedure, namely i) how to implement it efficiently, and ii) how it may be modified when one wishes to consider only a subset $\bar{\mathcal{M}} \subset \mathcal{M}$ of the candidates $\mathcal{M}$ (recall Definition \ref{defn:Sym}). \subsection{Key observation} For all times $t$, define the following two random variables $\rand{X}_t$ and $\rand{Y}_t$ as \begin{eqnarray} \rand{X}_t &\stackrel{\triangle}{=}& \mathop{\max_{\mat{a} \in \mathcal{M}}}_{a_0 \neq \rand{A}_t} \frac{1}{4} \Delta(\pmb{\rand{A}}_t,\vec{\a}), \nonumber\\ \rand{Y}_t &\stackrel{\triangle}{=}& \mathop{\max_{\mat{a} \in \mathcal{M}}}_{a_0 = \rand{A}_t} \frac{1}{4} \Delta(\pmb{\rand{A}}_t,\vec{\a}) \geq 0 , \label{eqn:XaY} \end{eqnarray} where $\Delta(\randb{A}_t,\mat{a})$ is the difference in obtained squared Euclidean distances, corresponding to the transmitted sequence $\randb{A}_t$ and a candidate $\mat{a} \in \mathcal{M}$, see (\ref{eqn:D}). Note that the random variable $\rand{Y}_t$ satisfies $\rand{Y}_t \geq 0 $, because there must exist a candidate $\mat{\a} \in \mathcal{M}$ that satisfies $\Delta(\pmb{\rand{A}}_t,\vec{\a}) =0$, see (\ref{eqn:D}); this particular candidate $\mat{a} \in \mathcal{M}$ satisfies $a_i= A_{t+i}$ for all values of $i$ satisfying $|i| \leq m$. \begin{pro}[Key Observation] \label{relprop} The $\L$-truncated MLM reliability $\rand{R}_t$ in (\ref{relt}) satisfies \begin{eqnarray} \rand{R}_t &=& \frac{2}{\sigma^2}|\rand{X}_t-\rand{Y}_t|, \label{eqn:relprop} \end{eqnarray} where both random variables $\rand{X}_t$ and $ \rand{Y}_t$ are given in (\ref{eqn:XaY}). \hspace*{\fill}\IEEEQEDopen \begin{proof} \rm Scale (\ref{relt}) by $\sigma^2/2$ and write \begin{eqnarray} \frac{\sigma^2}{2} \cdot \rand{R}_t \!\!
&=& \BigMin{a} \frac{\Delta(\vec{a},\pmb{\rand{A}}_t)}{4} + \frac{\Delta(\pmb{\rand{A}}_t,\pmb{\rand{B}}\bI{t})}{4} \nonumber\\ &=& \!\!\!\! \left(- \BigMax{a}{\neq} \frac{\Delta(\pmb{\rand{A}}_t,\vec{a})}{4} \right) + \frac{\Delta(\pmb{\rand{A}}_t,\pmb{\rand{B}}\bI{t})}{4}. \label{relt2} \end{eqnarray} To obtain the last equality in (\ref{relt2}), we used the relationship $\Delta(\randb{A}_t,\mat{\a}) = - \Delta(\vec{\a},\randb{A}_t)$, see (\ref{eqn:D}). Recall Definition \ref{defn:Bt}, which states the symbol decision $\rand{B}_t$. Because $\rand{B}_t$ is either $-1$ or $1$, we have either $\rand{B}_t \neq \rand{A}_t$ or $\rand{B}_t = \rand{A}_t$. Consider the former case $\rand{B}_t \neq \rand{A}_t$, in which (\ref{relt2}) reduces to \begin{eqnarray} \frac{\sigma^2}{2} \cdot \rand{R}_t &=& \left( - \mathop{\max_{\mat{a} \in \mathcal{M}}}_{a_0 = \rand{A}_t} \frac{\Delta(\pmb{\rand{A}}_t,\vec{a})}{4} \right) + \mathop{\max_{\mat{a} \in \mathcal{M}}}_{a_0 \neq \rand{A}_t} \frac{\Delta(\pmb{\rand{A}}_t,\mat{a})}{4}, \nonumber\\ &=& -\rand{Y}_t + \rand{X}_t =|\rand{X}_t - \rand{Y}_t|, \nonumber \end{eqnarray} where the second equality follows from (\ref{eqn:XaY}), and the third from the fact $\rand{R}_t \geq 0$, see Definition \ref{defn:relt}. We have thus shown (\ref{eqn:relprop}) for the case $\rand{B}_t \neq \rand{A}_t$. The same conclusion follows for the other case $\rand{B}_t = \rand{A}_t$ in a similar manner. \end{proof} \end{pro} Note that the expression (\ref{eqn:relprop}) for $R_t$ in Proposition \ref{relprop} cannot be computed in practice; it is developed purely for analysis purposes. This is because (\ref{eqn:relprop}) relies on the ability to compute $X_t$ and $Y_t$, which in turn requires knowledge of the transmitted sequence $\randb{A}_t$. Clearly, it is absurd to assume that the detector knows $\randb{A}_t$.
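Proposition \ref{relprop} is easy to check numerically with a brute-force $\L$-truncated detector. The sketch below uses assumed toy parameters ($m=1$, $\ell=1$, $h=(1,0.5)$) and, for simplicity, transmits windows whose boundary symbols are $+1$, so the transmitted window itself lies in $\mathcal{M}$ and $\Delta(\randb{A}_t,\mat{a})$ reduces to a difference of squared distances.

```python
import numpy as np
from itertools import product

m, ell = 1, 1                            # assumed toy window / memory sizes
h = np.array([1.0, 0.5])                 # assumed toy channel coefficients
sigma2 = 0.5
rng = np.random.default_rng(3)

def outputs(c):
    # Noiseless window outputs for a full window c (offsets -m-ell..m+ell),
    # computed directly from the convolution with h.
    return np.array([sum(h[i] * c[k - i + m + ell] for i in range(ell + 1))
                     for k in range(-m, m + ell + 1)])

# Candidate set M: boundary symbols fixed to +1, 2m+1 free middle symbols.
cands = [np.array([1] * ell + list(mid) + [1] * ell)
         for mid in product([-1, 1], repeat=2 * m + 1)]

for _ in range(100):
    A = cands[rng.integers(len(cands))]  # transmitted window (lies in M)
    Z = outputs(A) - np.sqrt(sigma2) * rng.normal(size=2 * m + ell + 1)
    d = np.array([np.sum((Z - outputs(c)) ** 2) for c in cands])
    ctr = m + ell                        # index of the offset-0 symbol
    B_t = cands[int(np.argmin(d))][ctr]  # hard decision
    R_t = (min(d[i] for i, c in enumerate(cands) if c[ctr] != B_t)
           - d.min()) / (2 * sigma2)     # reliability, computed from B^(t)
    dA = np.sum((Z - outputs(A)) ** 2)   # distance of the true window
    X_t = max((dA - d[i]) / 4 for i, c in enumerate(cands) if c[ctr] != A[ctr])
    Y_t = max((dA - d[i]) / 4 for i, c in enumerate(cands) if c[ctr] == A[ctr])
    # Key observation: R_t = (2 / sigma^2) |X_t - Y_t|, with X_t, Y_t
    # depending only on the transmitted window, never on B^(t).
    assert np.isclose(R_t, 2 / sigma2 * abs(X_t - Y_t))
print("Proposition verified on 100 random windows")
```

Note that `X_t` and `Y_t` are computed without ever consulting the detected sequence, which is exactly the point of the proposition.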
\begin{rem} From past literature (e.g.~\cite{Reggiani}), there seems to be a misconception that the reliability $\rand{R}_t$ must be expressed in terms of $\pmb{\rand{B}}\bI{t}$ (as in (\ref{relt})). However, as shown in Proposition \ref{relprop}, this is not true. The reliability $\rand{R}_t$ can be simply written as $\rand{R}_t = 2/\sigma^2 \cdot |\rand{X}_t - \rand{Y}_t|$, where we see from (\ref{eqn:XaY}) that both $\rand{X}_t$ and $\rand{Y}_t$ \textbf{depend only} on the transmitted sequence $\pmb{\rand{A}}_t$. In other words, the reliability $R_t$ can alternatively be computed using (\ref{eqn:relprop}), which does not require any knowledge of $\pmb{\rand{B}}\bI{t}$. \end{rem} As mentioned before, the key observation in Proposition \ref{relprop} will be used to prove the main result. However, before going into detailed derivations, we first state the main result. This is done in the next subsection; we believe that doing so better motivates the significance of this work. \subsection{Statement of main result} For any number $\kappa$ of arbitrarily chosen time instants $t_1, t_2, \cdots, t_\kappa$, we wish to obtain the distribution of the vector $\pmb{\rand{R}}_{\mat{t}_1^\kappa}$, containing the following reliabilities \begin{eqnarray} \pmb{\rand{R}}_{\mat{t}_1^\kappa} &\stackrel{\triangle}{=} &[\rand{R}_{t_1}, \rand{R}_{t_2}, \cdots, \rand{R}_{t_\kappa}]^T. \label{eqn:Rtk} \end{eqnarray} \begin{defn} \label{defn:E} Define the \textbf{binary vector} $\mat{e}_i$ of size $2(\L+\ell)+1$ as \begin{eqnarray} \mat{e}_i &\stackrel{\triangle}{=}& [\overbrace{0,0,\cdots,0}^{\L+\ell+i}, 1, \overbrace{0,0,\cdots,0}^{\L+\ell-i}]^T, \label{eqn:mate} \end{eqnarray} where $i$ can take values $|i| \leq \L + \ell$. Further define the matrix $\Mat{E}$ of size $2(\L+\ell) + 1$ by $2\L$ as \begin{eqnarray} \Mat{E} \stackrel{\triangle}{=} [\mat{e}_{-\L}, \mat{e}_{-\L+1},\cdots, \mat{e}_{-1},\mat{e}_{1}, \mat{e}_{2}, \cdots, \mat{e}_{\L}].
\end{eqnarray} \end{defn} \begin{defn} \label{defn:S} Define the matrix $\S$ of size $2\L$ by $2^{2\L}$ as \begin{eqnarray} \S \stackrel{\triangle}{=} [\mat{s}_0, \mat{s}_1, \cdots, \mat{s}_{2^{2\L}-1}], \label{eqn:S} \end{eqnarray} where the columns $\mat{s}_0, \mat{s}_1, \cdots, \mat{s}_{2^{2\L}-1}$ make up all $2^{2\L}$ possible length-$(2\L)$ binary vectors, i.e. $\Ev{\mat{s}_0, \mat{s}_1, \cdots, \mat{s}_{2^{2\L}-1}}= \{0,1\}^{2m}$. \end{defn} Let $\diag(\randb{A}_t)$ denote the \emph{diagonal matrix} whose diagonal equals the vector $\randb{A}_t$. Recall the size $2\L+\ell+1$ by $2(\L+\ell)+1$ channel matrix $\Mat{H}$ given in (\ref{eqn:HandT}). Define the matrix $\mat{G}(\randb{A}_t)$ of size $2\L + \ell +1$ by $2\L$ as \begin{eqnarray} \mat{G}(\randb{A}_t) &\stackrel{\triangle}{=}& \Mat{H} \diag(\randb{A}_t) \Mat{E}. \label{eqn:G} \end{eqnarray} Recall the noise neighborhood $\randb{W}_t$ from (\ref{eqn:Wt}). Let $\pmb{\rand{W}}_{\mat{t}_1^\kappa}$ denote the concatenation \begin{eqnarray} \pmb{\rand{W}}_{\mat{t}_1^\kappa} \stackrel{\triangle}{=} \ba{c} \randb{W}_{t_1} \\ \randb{W}_{t_2} \\ \vdots \\ \randb{W}_{t_n} \end{array}\right]. \label{eqn:Wtk} \end{eqnarray} \begin{defn} \label{defn:Kw} Define the noise covariance matrix \begin{eqnarray} \Mat{K}_{\pmb{\rand{W}}} &\stackrel{\triangle}{=}& \ba{ccc} \mathbb{E}\{\pmb{\rand{W}}_{t_1} \pmb{\rand{W}}_{t_1}^T \} & \cdots & \mathbb{E}\{\pmb{\rand{W}}_{t_1}\pmb{ \rand{W}}_{t_n}^T \} \\ \vdots & \ddots & \vdots \\ \mathbb{E}\{\pmb{\rand{W}}_{t_n} \pmb{\rand{W}}_{t_1}^T \} & \cdots & \mathbb{E}\{\pmb{\rand{W}}_{t_n} \pmb{\rand{W}}_{t_n}^T \} \end{array}\right] \nonumber\\ &=& \mathbb{E}\{ \pmb{\rand{W}}_{\mat{t}_1^\kappa} \pmb{\rand{W}}_{\mat{t}_1^\kappa}^T\}. \label{eqn:mx_cov} \end{eqnarray} Note that $\Mat{K}_{\pmb{\rand{W}}}$ is generally not Toeplitz, even if $\rand{W}_t$ is stationary.
\end{defn} Similarly to (\ref{eqn:Wtk}), let $\pmb{\rand{A}}_{\mat{t}_1^\kappa}$ denote the concatenation \begin{eqnarray} \pmb{\rand{A}}_{\mat{t}_1^\kappa} \stackrel{\triangle}{=} \ba{c} \randb{A}_{t_1} \\ \randb{A}_{t_2} \\ \vdots \\ \randb{A}_{t_n} \end{array}\right]. \label{eqn:Atk} \end{eqnarray} Let $\Mat{I}$ denote the identity matrix; in particular, $\Mat{I}_{2\L}$ has size $2\L$ by $2\L$. The matrix $\S\S^T$ can be verified to have the following simple expression \begin{eqnarray} \S\S^T &=& \sum_{k=0}^{2^{2\L}-1} \mat{s}_k \mat{s}_k^T = 2^{2(\L-1)} \cdot [\Mat{I}_{2\L} + \mathbbb{1}\mathbbb{1}^T], \label{eqn:SS} \end{eqnarray} where the vector $\mathbbb{1} \stackrel{\triangle}{=} [1, 1, \cdots, 1]^T$. Denote the matrix \emph{Kronecker product} using the operation $\otimes$. Let $\pmb{\diag}\left( \Mat{G}(\randb{A}_{t_1}),\Mat{G}(\randb{A}_{t_2}), \cdots, \Mat{G}(\randb{A}_{t_n}) \right)$ denote a \emph{block diagonal matrix}, whose block-diagonal entries equal $\Mat{G}(\randb{A}_{t_1}),\Mat{G}(\randb{A}_{t_2}), \cdots, \Mat{G}(\randb{A}_{t_n})$. \begin{defn} \label{defn:QD} Let the square matrix $\Mat{Q} = \Mat{Q}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ of size $2mn$ by $2mn$ satisfy the following two conditions: \begin{itemize} \item[i)] the matrix $\Mat{Q}$ \textbf{decomposes} the following size $2\L n$ matrix \begin{eqnarray} \!\!\!\! \!\!\!\! \Mat{Q}\pmb{\Lambda}^2\Mat{Q}^T \!\!\!\!\!\! &=& \!\!\!\!
\pmb{\diag}\left(\Mat{G}(\randb{A}_{t_1}),\Mat{G}(\randb{A}_{t_2}), \cdots, \Mat{G}(\randb{A}_{t_n})\right)^T \Mat{K}_{\pmb{\rand{W}}} \nonumber\\ && \!\cdot~\pmb{\diag}\left(\Mat{G}(\randb{A}_{t_1}),\Mat{G}(\randb{A}_{t_2}), \cdots, \Mat{G}(\randb{A}_{t_n})\right), \nonumber\\ \label{eqn:QDQ} \end{eqnarray} where $\pmb{\Lambda} =\pmb{\Lambda}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ on the l.h.s. of (\ref{eqn:QDQ}) is a diagonal matrix. The number of positive diagonal elements in the matrix $\pmb{\Lambda}$ equals the rank of the matrix on the r.h.s. of (\ref{eqn:QDQ}). \item[ii)] the matrix $\Mat{Q}$ \textbf{diagonalizes} the matrix $\Mat{I}_n \otimes \S\S^T$, i.e. the matrix $\Mat{Q}$ satisfies \begin{eqnarray} \Mat{Q}^T (\Mat{I}_n \otimes \S\S^T) \Mat{Q} = \Mat{I}, \end{eqnarray} noting that the matrix $\S\S^T$ is square of size $2m$. \end{itemize} \end{defn} Note that the matrix appearing in (\ref{eqn:F}), with elements $\mat{G}(\randb{A}_{t_i})$, can also be written as $\pmb{\diag}\left( \Mat{G}(\randb{A}_{t_1}),\Mat{G}(\randb{A}_{t_2}), \cdots, \Mat{G}(\randb{A}_{t_n}) \right)$. It is shown in Appendix \ref{app:spec} how to compute such a matrix $\Mat{Q} = \Mat{Q}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$, and also how to obtain the diagonal matrix $\pmb{\Lambda} = \pmb{\Lambda}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ in (\ref{eqn:QDQ}). We partition the matrix $\Mat{Q}$ into $n$ partitions of equal size $2m$ by $2mn$, i.e., \begin{eqnarray} \Mat{Q} = \ba{c} \Mat{Q}_1 \\ \Mat{Q}_2 \\ \vdots \\ \Mat{Q}_n \end{array}\right]. \label{eqn:Qpart} \end{eqnarray} Let $\diag(A_{t_1},A_{t_2},\cdots, A_{t_n}) $ denote the diagonal matrix whose diagonal equals $[A_{t_1},A_{t_2},\cdots, A_{t_n}]^T$.
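The identity (\ref{eqn:SS}) for $\S\S^T$, used repeatedly below, can be checked numerically for a small window size (here $m=3$, an assumed toy value):

```python
import numpy as np
from itertools import product

# Check: summing s s^T over all binary vectors s in {0,1}^{2m} gives
# 2^{2(m-1)} (I + 1 1^T).  Diagonal entries count vectors with bit i set
# (2^{2m-1} of them); off-diagonal entries count vectors with two given
# bits set (2^{2m-2} of them).
m = 3
S = np.array(list(product([0, 1], repeat=2 * m))).T   # 2m x 2^{2m}
lhs = S @ S.T
rhs = 2 ** (2 * (m - 1)) * (np.eye(2 * m) + np.ones((2 * m, 2 * m)))
print(np.array_equal(lhs, rhs))   # True
```

The counting argument in the comments is exactly why the sum collapses to a scaled identity-plus-ones matrix, independently of the ordering of the columns of $\S$.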
Define the size $n$ by $2\L n $ matrix $\Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ as \begin{eqnarray} \Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) &\stackrel{\triangle}{=}& \diag(A_{t_1},A_{t_2},\cdots, A_{t_n}) \otimes \mat{h}_0^T \Mat{K}_{\pmb{\rand{W}}} \nonumber\\ && \cdot \renewcommand{\arraystretch}{.7} \ba{@{}c@{}c@{}c@{}c@{}} \Mat{G}(\randb{A}_{t_1}) \\ & \Mat{G}(\randb{A}_{t_2}) \\ & & \ddots \\ & & & \Mat{G}(\randb{A}_{t_n}) \end{array}\right] \ba{c} \S\S^T\Mat{Q}_1 \\ \S\S^T\Mat{Q}_2 \\ \vdots \\ \S\S^T\Mat{Q}_n \end{array}\right] \pmb{\Lambda}^{\dagger}, \label{eqn:F} \nonumber\\ \end{eqnarray} where $\mat{h}_0$ is given in (\ref{eqn:h_i}), and $\pmb{\Lambda}^{\dagger}$ is formed by reciprocating only the \emph{non-zero} diagonal elements of $\pmb{\Lambda}$. Define the following length-$2^{2\L}$ vectors $\pmb{\mu}(\randb{A}_t)$ and $\pmb{\nu}(\randb{A}_t)$ as \begin{align} \pmb{\mu}(\randb{A}_t) =& [\mu_0, \mu_1, \cdots, \mu_{2^{2m}-1}]^T \nonumber\\ \stackrel{\triangle}{=} & [\Mat{G}(\randb{A}_t)\S]^T \cdot \Mat{T} (\mathbbb{1} - \randb{A}_t) \nonumber\\ & - \left[|\Mat{G}(\randb{A}_t)\mat{s}_0|^2, |\Mat{G}(\randb{A}_t)\mat{s}_1|^2,\cdots, |\Mat{G}(\randb{A}_t)\mat{s}_{2^{2\L}-1}|^2\right]^T, \nonumber\\ \label{eqn:muY} \\ \pmb{\nu}(\randb{A}_t) =& [\nu_0, \nu_1, \cdots, \nu_{2^{2m}-1}]^T \nonumber\\ \stackrel{\triangle}{=}& \pmb{\mu}(\randb{A}_t) - 2\rand{A}_t \cdot \mat{h}_0 ^T \Mat{G}(\randb{A}_t)\S \label{eqn:nuX}, \end{align} where $\mu_k = \mu_k(\randb{A}_t)$ and $\nu_k = \nu_k(\randb{A}_t)$ denote the $k$-th components of $ \pmb{\mu}(\randb{A}_t)$ and $\pmb{\nu}(\randb{A}_t) $ respectively, and $\Mat{T}$ is given in (\ref{eqn:HandT}). Let $\Phi_{\Mat{K}}(\mat{r})$ denote the distribution function of a zero-mean Gaussian random vector with covariance matrix $\Mat{K}$.
Finally, define the following length-$\kappa$ random vectors \begin{eqnarray} \pmb{\rand{X}}_{\mat{t}_1^\kappa} &\stackrel{\triangle}{=} &[\rand{X}_{t_1}, \rand{X}_{t_2}, \cdots, \rand{X}_{t_\kappa}]^T, \nonumber\\ \pmb{\rand{Y}}_{\mat{t}_1^\kappa} &\stackrel{\triangle}{=} &[\rand{Y}_{t_1}, \rand{Y}_{t_2}, \cdots, \rand{Y}_{t_\kappa}]^T, \end{eqnarray} where both $\rand{X}_{t_i}$ and $\rand{Y}_{t_i}$ are given in (\ref{eqn:XaY}). Let $\mathbb{R}$ denote the set of real numbers. We are now ready to state the main result. \newcommand{\U}{\pmb{\rand{U}}} \begin{thm} \label{thm:1} The distribution of $\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa}$ equals \begin{eqnarray} F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa}}(\mat{r}) \!\!\!\! &=& \!\!\!\! \mathbb{E}\left\{ \Phi_{\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})}\left(\mat{r} + \pmb{\d}(\U, \pmb{\rand{A}}_{\mat{t}_1^\kappa}) - \pmb{\eta}(\U, \pmb{\rand{A}}_{\mat{t}_1^\kappa})\right) \right\} \nonumber\\ \label{eqn:main_preview} \end{eqnarray} for all $\mat{r} \in \mathbb{R}^\kappa$, where the following random vectors and matrices appear in (\ref{eqn:main_preview}) \begin{itemize} \item $\U $ is a standard zero-mean, identity-covariance Gaussian random vector of length $2\L\kappa$. \item $\pmb{\d}(\U,\pmb{\rand{A}}_{\mat{t}_1^\kappa})=[\d_1,\d_2,\cdots,\d_\kappa]^T$ is a length-$\kappa$ vector in $\mathbb{R}^\kappa$, where \begin{eqnarray} \!\!\!\!\!\!\!\! \d_i = \d_i(\U, \pmb{\rand{A}}_{\mat{t}_1^\kappa}) \!\!\!\! &\stackrel{\triangle}{=}& \!\!\!\! \max( \S^T\Mat{Q}_i\pmb{\Lambda}\U + \pmb{\mu}(\randb{A}_{t_i})) \nonumber\\ && - \max ( \S^T\Mat{Q}_i\pmb{\Lambda}\U + \pmb{\nu}(\randb{A}_{t_i})).
\label{eqn:m} \end{eqnarray} \item $\pmb{\eta}(\U,\pmb{\rand{A}}_{\mat{t}_1^\kappa}) = [\eta_1,\eta_2,\cdots,\eta_\kappa]^T$ is a length-$\kappa$ vector in $\mathbb{R}^\kappa$, where \begin{eqnarray} \pmb{\eta}(\U,\pmb{\rand{A}}_{\mat{t}_1^\kappa}) &\stackrel{\triangle}{=}& \diag(\rand{A}_{t_1},\rand{A}_{t_2},\cdots, \rand{A}_{t_n}) \nonumber\\ && \cdot \left[\Mat{T}\left(\mathbbb{1}\cdot \mathbbb{1}^T - [\randb{A}_{t_1},\randb{A}_{t_2},\cdots, \randb{A}_{t_n}]\right)\right]^T \mat{h}_0 \nonumber\\ && -|\mat{h}_0 |^2 \cdot \mathbbb{1} + \Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) \U. \label{eqn:eta} \end{eqnarray} \item $\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ is the $n$ by $n$ matrix \begin{eqnarray} \Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) &\stackrel{\triangle}{=}& \diag(\rand{A}_{t_1},\rand{A}_{t_2},\cdots, \rand{A}_{t_n}) \otimes \mat{h}_0^T \Mat{K}_{\pmb{\rand{W}}} \nonumber\\ &&\cdot \diag(\rand{A}_{t_1},\rand{A}_{t_2},\cdots, \rand{A}_{t_n}) \otimes \mat{h}_0 \nonumber\\ && - \Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) \Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T. \label{eqn:nu} \end{eqnarray} \end{itemize} Refer to (\ref{eqn:h_i}), (\ref{eqn:S}), (\ref{eqn:QDQ}), (\ref{eqn:Qpart}), (\ref{eqn:F}), (\ref{eqn:muY}) and (\ref{eqn:nuX}) for clarifications of the notation used above.\hspace*{\fill}\IEEEQEDopen \end{thm} The proof of Theorem \ref{thm:1} is given in Subsection \ref{ssect:prf}. Both i) the joint distribution of the reliabilities $\pmb{\rand{R}}_{\mat{t}_1^\kappa} \stackrel{\triangle}{=} [\rand{R}_{t_1},\rand{R}_{t_2},\cdots, \rand{R}_{t_n}]^T$ in (\ref{eqn:Rtk}), and ii) the joint error probability $\Pr{\rand{B}_{t_1}\neq \rand{A}_{t_1}, \cdots, \rand{B}_{t_n}\neq \rand{A}_{t_n}}$, follow as corollaries from our main result Theorem \ref{thm:1}.
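Both corollaries rest on the standard inclusion--exclusion identity that expresses the joint distribution of the absolute differences $|\rand{X}_{t_i}-\rand{Y}_{t_i}|$ through signed evaluations of $F_{\pmb{\rand{X}}-\pmb{\rand{Y}}}$. The identity is easy to sanity-check numerically; the sketch below assumes, purely for illustration, that the differences are \emph{independent} standard Gaussians (which is not the case in the actual channel model, where the joint distribution comes from Theorem \ref{thm:1}):

```python
import math
from itertools import combinations

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def joint_cdf(r, signs):
    """F_D(alpha) for independent standard Gaussian components D_i,
    evaluated at alpha_i = signs[i] * r[i]."""
    p = 1.0
    for ri, si in zip(r, signs):
        p *= phi(si * ri)
    return p

def abs_cdf_incl_excl(r):
    """P(|D_i| <= r_i for all i) via inclusion-exclusion over index subsets:
    sum over subsets S of (-1)^|S| * F_D with r_i negated for i in S."""
    n, total = len(r), 0.0
    for j in range(n + 1):
        for subset in combinations(range(n), j):
            signs = [-1.0 if i in subset else 1.0 for i in range(n)]
            total += (-1) ** j * joint_cdf(r, signs)
    return total

# Under independence, P(|D_i| <= r_i for all i) factors as prod(2*phi(r_i) - 1).
r = [0.8, 1.3]
direct = math.prod(2.0 * phi(ri) - 1.0 for ri in r)
assert abs(abs_cdf_incl_excl(r) - direct) < 1e-12
```

The subset sum has $2^n$ terms, mirroring the structure of Corollary \ref{cor:main}; only the evaluation of the signed joint distribution changes when the Gaussian-mixture form of Theorem \ref{thm:1} is substituted for `joint_cdf`.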
In the following, we write an index subset $\{\tau_1,\tau_2,\cdots, \tau_j \} \subseteq \{t_1,t_2,\cdots,t_n\}$ of size $j$ compactly in vector form as $\pmb{\tau}_1^j=[\tau_1,\tau_2,\cdots, \tau_j]^T$. \begin{cor} \label{cor:main} The distribution of $\pmb{\rand{R}}_{\mat{t}_1^\kappa} \stackrel{\triangle}{=} 2/\sigma^2 \cdot |\pmb{\rand{X}}_{\mat{t}_1^\kappa} - \pmb{\rand{Y}}_{\mat{t}_1^\kappa}|$, see Proposition \ref{relprop}, is given as \begin{align} F_{\pmb{\rand{R}}_{\mat{t}_1^\kappa}}(\mat{r}) &= F_{|\pmb{\rand{X}}_{\mat{t}_1^\kappa} - \pmb{\rand{Y}}_{\mat{t}_1^\kappa}|}(\sigma^2 /2 \cdot \mat{r}) \nonumber\\ &= \sum_{j=0}^n\mathop{\sum_{\{\tau_1,\tau_2,\cdots,\tau_j \} \subseteq }}_{\{t_1,t_2,\cdots,t_n \}}\!\!\!\!\!\!\!\! (-1)^j \cdot F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa} - \pmb{\rand{Y}}_{\mat{t}_1^\kappa}}\left(\frac{\sigma^2}{2} \cdot \pmb{\alpha}(\pmb{\tau}_1^j, \mat{r}) \right) \nonumber \end{align} where the length-$n$ vector $\pmb{\alpha}(\pmb{\tau}_1^j, \mat{r})=[\alpha_1,\alpha_2,\cdots,\alpha_n]^T$ satisfies \[ \alpha_i =\alpha_i(\pmb{\tau}_1^j,r_i) = \left\{\begin{array}{rl} -r_i & \mbox{ if } t_i \in \{\tau_1,\tau_2,\cdots, \tau_j\}, \\ r_i & \mbox{ otherwise }, \end{array} \right. \] and $F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa} - \pmb{\rand{Y}}_{\mat{t}_1^\kappa}}\left(\frac{\sigma^2}{2}\cdot \pmb{\alpha}(\pmb{\tau}_1^j, \mat{r})\right)$ admits a closed form similar to that in Theorem \ref{thm:1}. \hspace*{\fill}\IEEEQEDopen \end{cor} Corollary \ref{cor:main} can be verified by recursion; for the $n$-th case we express \begin{eqnarray} F_{|\pmb{\rand{X}}_{\mat{t}_1^\kappa} - \pmb{\rand{Y}}_{\mat{t}_1^\kappa}|}(\mat{r}) &=& F_{|\randb{X}_{\mat{t}_1^{n-1}} - \randb{Y}_{\mat{t}_1^{n-1}}|, X_{t_n} - Y_{t_n}}(\mat{r}_1^{n-1}, r_n) \nonumber\\ && - F_{|\randb{X}_{\mat{t}_1^{n-1}} - \randb{Y}_{\mat{t}_1^{n-1}}|, X_{t_n} - Y_{t_n}}(\mat{r}_1^{n-1}, -r_n).
\nonumber \end{eqnarray} Observe that we may still apply Corollary \ref{cor:main} to each of the two terms on the r.h.s.; we apply it only to the variables $|\randb{X}_{\mat{t}_1^{n-1}} - \randb{Y}_{\mat{t}_1^{n-1}}|$, at the same time accounting for the (respective) joint events $\{X_{t_n} - Y_{t_n} \leq r_n \}$ and $\{X_{t_n} - Y_{t_n} \leq -r_n \}$. The desired expression is then obtained after some algebraic manipulation. \begin{algorithm}[!t] \SetAlgoLined \LinesNumbered \NoCaptionOfAlgo \SetKwInput{Init}{Initialize} \Init{Set $F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa}}(\mat{r}) := 0$ for all $\mat{r} \in \mathbb{R}^\kappa$;} \While{$F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa}}(\mat{r})$ not converged}{ Sample $\pmb{\rand{A}}_{\mat{t}_1^\kappa}=\mat{a}_1^\kappa$ using $\Pr{\pmb{\rand{A}}_{\mat{t}_1^\kappa} = \mat{a}_1^\kappa}$\; Sample the length-$n$, standard zero-mean identity-covariance Gaussian vector $\U = \mat{u}$\; Using the sampled realization $\pmb{\rand{A}}_{\mat{t}_1^\kappa}=\mat{a}_1^\kappa$, obtain the matrices $\Mat{Q} = \Mat{Q}(\mat{a}_1^\kappa)$ and $\pmb{\Lambda} = \pmb{\Lambda}(\mat{a}_1^\kappa)$ satisfying Definition \ref{defn:QD}, see Appendix \ref{app:spec}\; Compute $\d_i = \d_i(\mat{u}, \mat{a}_1^\kappa)$ for all $i \in \{1,2,\cdots, n\}$. For each $\d_i$ compute \begin{eqnarray} \max_{k\in\{0,1,\cdots,2^{2\L}-1\}} \mat{s}_k^T\Mat{Q}_i \pmb{\Lambda}\mat{u} + \mu_k(\mat{a}), \nonumber\\ \max_{k\in\{0,1,\cdots,2^{2\L}-1\}} \mat{s}_k^T\Mat{Q}_i \pmb{\Lambda}\mat{u} + \nu_k(\mat{a}), \nonumber \end{eqnarray} see (\ref{eqn:m}).
Here $\mat{a}$ is the sampled realization $\randb{A}_{t_i} = \mat{a}$, and both $\mu_k(\mat{a})$ and $\nu_k(\mat{a})$ are the $k$-th components of $\pmb{\mu}(\mat{a})$ and $\pmb{\nu}(\mat{a})$, see (\ref{eqn:muY}) and (\ref{eqn:nuX})\; Compute $\Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ in (\ref{eqn:F}); Also compute $\pmb{\eta}(\mat{u}, \mat{a}_1^\kappa)$ in (\ref{eqn:eta}) and $\Mat{K}_{\pmb{\rand{V}}}(\mat{a}_1^\kappa)$ in (\ref{eqn:nu})\; Update \begin{align} F&_{\pmb{\rand{X}}_{\mat{t}_1^\kappa} - \pmb{\rand{Y}}_{\mat{t}_1^\kappa}}(\mat{r}) \nonumber\\ &:= F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa} - \pmb{\rand{Y}}_{\mat{t}_1^\kappa}}(\mat{r}) + \Phi_{\Mat{K}_{\pmb{\rand{V}}}(\mat{a}_1^\kappa)} \left( \mat{r} + \pmb{\d}(\mat{u},\mat{a}_1^\kappa) - \pmb{\eta}(\mat{u}, \mat{a}_1^\kappa) \right) \nonumber \end{align} for all $\mat{r} \in \mathbb{R}^\kappa$\; } \caption{\textbf{Procedure 1}: Evaluating the Joint Distribution $F_{\pmb{\rand{X}}_{\textbf{t}_1^n}-\pmb{\rand{Y}}_{\textbf{t}_1^n}}(\textbf{r})$} \label{proce:pdfXmY} \end{algorithm} \begin{cor} \label{cor:main2} The probability $\Pr{\rand{B}_{t_1}\neq \rand{A}_{t_1}, \cdots, \rand{B}_{t_n}\neq \rand{A}_{t_n}}$ that \textbf{all} symbol decisions $\rand{B}_{t_1},\rand{B}_{t_2},\cdots, \rand{B}_{t_n}$ are in error, equals \begin{align} &\Pr{\rand{B}_{t_1}\neq \rand{A}_{t_1}, \cdots, \rand{B}_{t_n}\neq \rand{A}_{t_n}} = \Pr{\pmb{\rand{X}}_{\mat{t}_1^\kappa} \geq \pmb{\rand{Y}}_{\mat{t}_1^\kappa}} \nonumber\\ &= 1 + \sum_{j=1}^n \mathop{\sum_{\{\tau_1,\tau_2,\cdots,\tau_j \} \subseteq }}_{\{t_1,t_2,\cdots,t_n \} } (-1)^j \cdot F_{\randb{X}_{\pmb{\tau}_1^j}-\randb{Y}_{\pmb{\tau}_1^j}}(\mat{0}), \nonumber \end{align} where the probability \begin{eqnarray} F_{\randb{X}_{\pmb{\tau}_1^j}-\randb{Y}_{\pmb{\tau}_1^j}}(\mat{0}) = \Pr{\bigcap_{\tau \in \{\tau_1,\tau_2,\cdots, \tau_j\}} \{ X_{\tau} - Y_\tau \leq 0 \}} \nonumber \end{eqnarray} admits a closed form similar to that in Theorem \ref{thm:1}.
\hspace*{\fill}\IEEEQEDopen \begin{proof} \rm From (\ref{eqn:XaY}) we see that the event $\Ev{X_t \geq Y_t}$ indicates that the sequence $\pmb{B}\bI{t}$ in (\ref{eqn:B1}) will have its $0$-th component $B\bI{t}_0 \neq A_t$. Because the symbol decision $B_t$ is set to $B_t = B\bI{t}_0$, see Definition \ref{defn:Bt}, the event $\Ev{X_t \geq Y_t}$ indicates that $B_t \neq A_t$, which is exactly a symbol decision error occurring at time $t$. \end{proof} \end{cor} Denote the realizations of $\pmb{\rand{A}}_{\mat{t}_1^\kappa}$, $\randb{A}_t$ and $\pmb{\rand{U}}$ as $\pmb{\rand{A}}_{\mat{t}_1^\kappa} = \mat{a}_1^\kappa$, $\randb{A}_t = \mat{a}$, and $\pmb{\rand{U}} = \mat{u}$. The Monte-Carlo procedure used to evaluate the closed form of $F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa} - \pmb{\rand{Y}}_{\mat{t}_1^\kappa}}(\mat{r})$ in Theorem \ref{thm:1} is given in Procedure \ref{proce:pdfXmY}. The following Remarks \ref{rem:spec}-\ref{rem:conv} pertain to Procedure \ref{proce:pdfXmY}. \begin{rem} \label{rem:spec} We may reduce the number of computations used to obtain the matrices $\Mat{Q}=\Mat{Q}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ and $\pmb{\Lambda}=\pmb{\Lambda}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ in Line 3, by sampling $\pmb{\rand{U}} = \mat{u}$ multiple times for a fixed $\pmb{\rand{A}}_{\mat{t}_1^\kappa} = \mat{a}_1^\kappa$. \end{rem} \begin{rem} The matrix $\Mat{K}_{\pmb{\rand{V}}}(\mat{a}_1^\kappa)$ computed in Line 5 (also see (\ref{eqn:nu})) may not have full rank. Hence when evaluating the Gaussian distribution function $\Phi_{\Mat{K}_{\pmb{\rand{V}}}(\mat{a}_1^\kappa)} (\mat{r})$ with covariance matrix $\Mat{K}_{\pmb{\rand{V}}}(\mat{a}_1^\kappa)$ in Line 6, we may require techniques designed for rank-deficient covariances, see for example~\cite{Somer}.
\end{rem} \begin{rem} \label{rem:ass} Our proposed method requires no assumptions on the noise covariance matrix $\Mat{K}_{\pmb{\rand{W}}}$ in (\ref{eqn:mx_cov}), and can be applied even when the noise $W_t$ is correlated and/or non-stationary. At the end of the next subsection, we also present a modification of Procedure 1 which addresses certain cases where we do not want to consider all candidates in $\mathcal{M}$ (see Definition \ref{defn:Sym}), i.e. $\Pr{\pmb{\rand{A}}_{\mat{t}_1^\kappa}=\mat{a}_1^\kappa}=0$ for some $\mat{a}_1^\kappa$. This particular situation arises, for example, when we have a modulation code (see~\cite{RLL,Immink}) present in the system. Here we always assume that $\pmb{\rand{A}}_{\mat{t}_1^\kappa}$ is equally likely amongst all its realizations $\pmb{\rand{A}}_{\mat{t}_1^\kappa}=\mat{a}_1^\kappa$. Further modifications will be required to extend our method to the general case of non-uniform priors $\Pr{\pmb{\rand{A}}_{\mat{t}_1^\kappa}=\mat{a}_1^\kappa}$ (the first equality of (\ref{eqn:llr}) is not valid for such cases). \end{rem} \begin{rem} \label{rem:conv} Because we have that \[ 0 \leq \Phi_{\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})} \left( \mat{r} + \pmb{\d}(\pmb{\rand{U}},\pmb{\rand{A}}_{\mat{t}_1^\kappa}) - \pmb{\eta}( \pmb{\rand{U}},\pmb{\rand{A}}_{\mat{t}_1^\kappa}) \right) \leq 1, \] the well-known Hoeffding probability inequalities can be applied to obtain convergence guarantees, see~\cite{Hoef}. \end{rem} The main thrust of the next subsection is to address Line 4 of Procedure \ref{proce:pdfXmY}. At first glance, executing Line 4 of Procedure \ref{proce:pdfXmY} appears to require an exhaustive search over an exponential number $2^{2\L}$ of terms in order to perform the two maximizations. However, as we point out in the next subsection, these maximizations can be performed much more efficiently by utilizing dynamic programming optimization techniques.
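Line 6 of Procedure \ref{proce:pdfXmY} requires evaluating the Gaussian distribution function $\Phi_{\Mat{K}}(\mat{r})$ itself. In the full-rank case this can be done by elementary Monte-Carlo sampling through a Cholesky factor; the sketch below is a minimal illustration only (rank-deficient covariances need the specialized techniques of~\cite{Somer}, and practical implementations would use more accurate quadrature):

```python
import math, random

def cholesky(K):
    """Lower-triangular L with L @ L^T = K (K symmetric positive definite)."""
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(K[i][i] - s) if i == j else (K[i][j] - s) / L[j][j]
    return L

def gaussian_cdf_mc(K, r, n_samples=100_000, seed=1):
    """Monte-Carlo estimate of Phi_K(r) = P(V <= r componentwise), V ~ N(0, K):
    draw V = L u with u standard normal, and count component-wise hits."""
    L, n = cholesky(K), len(K)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        u = [rng.gauss(0.0, 1.0) for _ in range(n)]
        v = [sum(L[i][k] * u[k] for k in range(i + 1)) for i in range(n)]
        hits += all(vi <= ri for vi, ri in zip(v, r))
    return hits / n_samples
```

For instance, with the identity covariance in two dimensions $\Phi_{\Mat{I}_2}(\mat{0}) = 1/4$, and the estimate lands within the usual $O(1/\sqrt{N})$ Monte-Carlo error of that value.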
Also in the next subsection, we address the computation of $F_{\pmb{X}_{\mathbf{t}_1^n}- \pmb{Y}_{\mathbf{t}_1^n}}(\mathbf{r})$ in instances where one wishes to consider only a subset $\bar{\mathcal{M}} \subset \mathcal{M}$ of the candidates $\mathcal{M}$ (see Definition \ref{defn:Sym}). \subsection{On computing the closed-form of $F_{\pmb{X}_{\mathbf{t}_1^n}- \pmb{Y}_{\mathbf{t}_1^n}}(\mathbf{r})$ using Procedure \ref{proce:pdfXmY}} \label{ssect:33} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{CS2.eps} \caption{Time evolution of the dynamic programming states.} \label{fig:CS2} \end{figure} \renewcommand{\k}{\tau} To compute $\d_i$ in (\ref{eqn:m}) while executing Line 4 of Procedure \ref{proce:pdfXmY}, we need to perform the following two maximizations \begin{align} \max_{\mat{s} \in \{0,1\}^{2m}} &\; \mat{s}^T \Mat{Q}_i \pmb{\Lambda}\mat{u} + [\Mat{G}(\mat{a})\mat{s}]^T \cdot \Mat{T} (\mathbbb{1} - \mat{a}) - |\Mat{G}(\mat{a})\mat{s}|^2, \nonumber\\ \max_{\mat{s} \in \{0,1\}^{2m}} &\; \mat{s}^T \Mat{Q}_i \pmb{\Lambda}\mat{u} + [\Mat{G}(\mat{a})\mat{s}]^T \cdot [\Mat{T} (\mathbbb{1} - \mat{a})-2 a_{0} \cdot \mat{h}_0] \nonumber\\ &- |\Mat{G}(\mat{a})\mat{s}|^2 , \label{eqn:Max} \end{align} where both $\mat{a}$ and $\mat{u}$ are the realizations $\randb{A}_{t_i}=\mat{a}$ and $\pmb{\rand{U}} = \mat{u}$. Note that we obtain (\ref{eqn:Max}) from (\ref{eqn:m}), by substituting for both $\pmb{\mu}(\mat{a})$ and $\pmb{\nu}(\mat{a})$ using (\ref{eqn:muY}) and (\ref{eqn:nuX}) respectively. Index the realization $\randb{A}_{t_i}=\mat{a}$ as in Definition \ref{defn:Sym} \[ \mat{a} \stackrel{\triangle}{=} [a_{-m-\ell}, a_{-m-\ell+1}, \cdots, a_{m+\ell}]^T. \] Let $\diag(\mat{a})$ denote the diagonal matrix with diagonal $\mat{a}$. The matrix $\mat{G}(\mat{a})$ appearing in both maximization problems (\ref{eqn:Max}) has a distinctive structure, which we now clarify.
\begin{defn} \label{defn:gm} Let $\mat{g}_\k$ denote the length $2(\L+\ell) + 1$ vector \begin{align} &\mat{g}_\k \stackrel{\triangle}{=} \nonumber\\ \!\!\!\! &[\overbrace{0,0,\cdots, 0}^{\L+\k},h_\ell a_{\k-\ell} ,h_{\ell-1} a_{\k-(\ell-1)}, \cdots, h_0 a_\k, \overbrace{0,0,\cdots, 0}^{\L+\ell-\k}]^T, \nonumber \end{align} where $\k$ can take values $\k \in \{-m,-m+1,\cdots, m+\ell\}$. \end{defn} Using the $2m + \ell + 1$ vectors $\mat{g}_\k$, we rewrite $\mat{G}(\mat{a})$ as \begin{eqnarray} \mat{G}(\mat{a}) &\stackrel{\triangle}{=}& \Mat{H}\diag(\mat{a}) \Mat{E} = \ba{c} \mat{g}_{-m}^T \\ \mat{g}_{-m+1}^T \\ \vdots \\ \mat{g}_{m+\ell}^T \end{array}\right] \Mat{E}, \label{eqn:GG} \end{eqnarray} recall the definition of $\mat{G}(\mat{a})$ from (\ref{eqn:G}). From the structure of $\mat{g}_\k$, it is clear from (\ref{eqn:GG}) that $\mat{G}(\mat{a})$ is a \emph{sparse} matrix with many zero entries. The matrix $\mat{G}(\mat{a})$ is an $(\ell+1)$-banded matrix, see~\cite{Golub}, p. 16. As is well known in the literature on ISI channels, it is efficient to employ \emph{dynamic programming} techniques to solve both problems (\ref{eqn:Max}), by exploiting this $(\ell+1)$-banded sparsity~\cite{Viterbi}. \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\beta}{\beta} \renewcommand{\tt}{\tau} \renewcommand{\ss}{\bar{s}} \newcommand{\bar{\s}}{\bar{\mat{s}}} It is clear that the inner product $\mat{g}_\k^T \mat{e}_j$ extracts the $j$-th component of the vector $\mat{g}_\k$, i.e. \begin{eqnarray} \mat{g}_\k^T \mat{e}_{\k-j} &=& \left\{ \begin{array}{cc} h_j \cdot a_{\k-j} & \mbox{ if } 0 \leq j \leq \ell, \\ 0 & \mbox{ otherwise }, \end{array}\right. \label{eqn:GGG} \end{eqnarray} where $j$ satisfies $|j| \leq m + \ell$. Both problems (\ref{eqn:Max}) are optimized over all $\mat{s} \in \{0,1\}^{2m}$; we index \[ \mat{s}\stackrel{\triangle}{=} [s_{-\L}, s_{-\L+1}, \cdots, s_{-1}, s_1, s_2, \cdots , s_\L]^T.
\] It is clear that by using (\ref{eqn:GGG}), the following is true for all vectors $\mat{g}_\k^T$ given in Definition \ref{defn:gm} \begin{eqnarray} \mat{g}_\k^T \Mat{E} \mat{s} &=& \sum_{j=-\L-\ell}^{\L+\ell} (\mat{g}^T_\k\mat{e}_j) \cdot s_j \nonumber\\ &=& \sum_{j=0}^\ell h_j \cdot a_{\k-j} \cdot s_{\k-j}, \label{eqn:G4} \end{eqnarray} if we set $s_0 =0$ and $s_\k =0$ for all $|\k| > \L$. \begin{algorithm}[!t] \SetAlgoLined \LinesNumbered \NoCaptionOfAlgo \SetKwInput{Init}{Initialize} \SetKwInput{Input}{Input} \SetKwInput{Output}{Output} \SetKwInput{Conv}{\emph{Convention}} \SetKwInput{ConvT}{\mbox{ \hspace{40pt} }} \Conv{Set $\mathcal{C}_0 := -\infty$ and also set values $\mathcal{C}_j:=0 $ for all $|j| > \L$;} \ConvT{Denote the length-$\ell$ binary vector by $\bar{\s}\stackrel{\triangle}{=} [\ss_{\ell-1}, \ss_{\ell-2},\cdots, \ss_0]^T$;} \Input{Matrix $\mat{G}(\mat{a})$; Vector of constants $\pmb{\mathcal{C}} = [\mathcal{C}_{-\L}, \mathcal{C}_{-\L+1}, \cdots, \mathcal{C}_{-1}, \mathcal{C}_1, \mathcal{C}_2, \cdots, \mathcal{C}_\L]^T$;} \Output{Value stored in $\beta_{\L+\ell} (\bar{\s}) = \beta_{\L+\ell}(\mat{0})$; } \Init{For all $\bar{\s} \in \{0,1\}^{\ell} $, set the values \[ \renewcommand{\arraystretch}{.7} \beta_{-\L-1} (\bar{\s}) := \left\{ \begin{array}{cl} 0 & \mbox{ if } \bar{\s} = \mat{0} ,\\ -\infty & \mbox{ otherwise } . \end{array} \right. \]} \ForAll{$\tt \in \{-\L, -\L + 1, \cdots, \L+ \ell$ \}}{ \ForAll{$\bar{\s} \in \{0,1\}^{\ell} $}{ Set the value $\alpha = \alpha(\bar{\s}) := \sum_{j=0}^{\ell-1} h_j a_{\tt-j} \ss_{j}$. Set the states $\bar{\s}_0$ and $\bar{\s}_1$ as \mbox{}~~~~~~~~ $\begin{array}{cc} \bar{\s}_0 &:= [0, \ss_{\ell-1}, \cdots, \ss_2, \ss_1]^T, \\ \bar{\s}_1 &:= [1, \ss_{\ell-1}, \cdots, \ss_2, \ss_1]^T. 
\end{array}$\; Compute $\beta_\tt (\bar{\s}) := \max \{- \alpha^2 + \beta_{\tt-1} (\bar{\s}_0), \mathcal{C}_{\tt-\ell} - [h_\ell a_{\tt-\ell} + \alpha]^2 + \beta_{\tt-1} (\bar{\s}_1)\}$\; } } \caption{\textbf{Procedure 2}: Solving $\displaystyle \max_{\mathbf{s} \in \{0,1\}^{2m}} \mathbf{s}^T\pmb{\mathcal{C}} - |\mathbf{G}(\mathbf{a})\mathbf{s}|^2$ using Dynamic Programming} \label{proce:DP} \end{algorithm} Define the length-$(2m)$ vector $\pmb{\mathcal{C}} \stackrel{\triangle}{=} [\mathcal{C}_{-\L}, \mathcal{C}_{-\L+1}, \cdots, \linebreak[1]\mathcal{C}_{-1}, \mathcal{C}_1, \mathcal{C}_2, \cdots, \mathcal{C}_\L]^T$. Set $\mathcal{C}_0 := -\infty$ and $\mathcal{C}_\k :=0$ for all $|\k| > m$. By setting \begin{eqnarray} \pmb{\mathcal{C}} &:=& \Mat{Q}_i\pmb{\Lambda}\mat{u} + [\mat{G}(\mat{a})]^T \cdot \Mat{T} (\mathbbb{1}-\mat{a}) \nonumber \end{eqnarray} and \begin{eqnarray} \pmb{\mathcal{C}} &:=& \Mat{Q}_i\pmb{\Lambda}\mat{u} + [\mat{G}(\mat{a})]^T \cdot [\Mat{T} (\mathbbb{1}-\mat{a}) - 2a_0 \cdot \mat{h}_0 ], \nonumber \end{eqnarray} respectively, we can solve both problems (\ref{eqn:Max}) as \begin{align} \max&_{\mat{s} \in \{0,1\}^{2\L}} \; \mat{s}^T\pmb{\mathcal{C}} - |\mat{G}(\mat{a})\mat{s}|^2 \nonumber\\ &= \max_{\mat{s} \in \{0,1\}^{2\L}} \sum_{\k=-\L}^{\L+\ell} \mathcal{C}_\k \cdot s_\k - (\mat{g}^T_\tt \Mat{E} \mat{s})^2,\label{eqn:Max2} \end{align} where the $\tt$-th term satisfies $\mat{g}^T_\tt \Mat{E} \mat{s} = \sum_{j=0}^\ell h_j a_{\tt-j} s_{\tt-j}$, see (\ref{eqn:G4}). For the sake of completeness, we shall state the dynamic programming procedure that solves (\ref{eqn:Max2}). \begin{defn} \label{defn:DPstate} The \textbf{dynamic programming state} at time $\tau$ equals the length-$\ell$ vector of binary symbols $[s_{\tau-\ell+1},s_{\tau-\ell+2},\cdots, s_{\tau}]^T \in \{0,1\}^{\ell}$. \end{defn} For the benefit of readers knowledgeable in dynamic programming techniques, we illustrate the time evolution of the dynamic programming states in Figure \ref{fig:CS2}.
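To make the state recursion concrete, the following sketch solves a simplified variant of (\ref{eqn:Max2}): it maximizes $\mat{s}^T\pmb{\mathcal{C}} - |\mat{G}\mat{s}|^2$ over binary $\mat{s}$, where $\mat{G}$ is the $(\ell+1)$-banded convolution matrix of taps $h_0,\cdots,h_\ell$. The indexing is simplified to $0,1,\cdots,n-1$, with no excluded center coordinate and without the $a_j$ modulation of the taps, so it illustrates the recursion of Procedure \ref{proce:DP} rather than reproducing it verbatim; a brute-force check confirms the result.

```python
import itertools

def brute_force_max(c, h):
    """Exhaustively maximize s^T c - |G s|^2 over s in {0,1}^n, where G is
    the (len(h))-tap convolution matrix.  Touches all 2^n candidates."""
    n, ell = len(c), len(h) - 1
    best = float("-inf")
    for bits in itertools.product([0, 1], repeat=n):
        lin = sum(ci * si for ci, si in zip(c, bits))
        quad = sum(
            sum(h[j] * bits[t - j] for j in range(ell + 1) if 0 <= t - j < n) ** 2
            for t in range(n + ell)
        )
        best = max(best, lin - quad)
    return best

def dp_max(c, h):
    """Same maximization by dynamic programming: the state is the tuple of
    the last ell decided bits, giving O((n + ell) * 2^(ell + 1)) work."""
    n, ell = len(c), len(h) - 1
    states = {(0,) * ell: 0.0}               # all "past" bits before time 0 are 0
    for t in range(n + ell):
        nxt = {}
        for st, val in states.items():
            for b in ([0, 1] if t < n else [0]):  # bits beyond index n-1 are 0
                window = st + (b,)                # (s_{t-ell}, ..., s_t)
                conv = sum(h[j] * window[ell - j] for j in range(ell + 1))
                gain = (c[t] * b if t < n else 0.0) - conv * conv
                ns = window[1:]                   # shift the state forward
                if ns not in nxt or nxt[ns] < val + gain:
                    nxt[ns] = val + gain
        states = nxt
    return max(states.values())
```

With, say, $n=4$ symbols and taps `h = [1.0, 0.7, -0.4]`, the two routines agree, while the dynamic program visits only $2^{\ell}$ states per time step instead of all $2^{n}$ candidates, mirroring the linear-in-state-size complexity discussed for Procedure \ref{proce:DP}.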
Dynamic programs can be solved with complexity that is \emph{linear} in the state size~\cite{Viterbi}; in our case we have $2^\ell$ states. The dynamic programming procedure optimizing (\ref{eqn:Max2}) is given in Procedure \ref{proce:DP}. \newcommand{\bar{\Sym}}{\bar{\mathcal{M}}} The second part of this subsection addresses the following separate issue. Recall from Remark \ref{rem:ass} that Theorem \ref{thm:1} requires no assumptions on the distribution $\Pr{\pmb{\rand{A}}_{\mat{t}_1^\kappa}=\mat{a}_1^\kappa}$. In other words, the distribution $\Pr{\randb{A}_{t}= \mat{a}}$ for each time $t$ can be arbitrarily specified. Of particular interest are cases where some of the probabilities $\Pr{\randb{A}_{t}= \mat{a}}$ equal $0$; one example of such a case is where a modulation code is present in the system~\cite{RLL,Immink}. In these cases we do not want to consider candidates in the set $\mathcal{M}$ (see Definition \ref{defn:Sym}) that have zero probability of occurrence; instead we consider only the subset $ \bar{\Sym} \subset \mathcal{M} $ of admissible candidates, explicitly written as \begin{eqnarray} \bar{\Sym} = \bar{\Sym}_{t} \stackrel{\triangle}{=} \left\{ \mat{a} \in \mathcal{M} : \Pr{ \bigcap_{j=-m}^m \{ A_{t+j}=a_{j} \}} \neq 0 \right\} \label{eqn:excl} \end{eqnarray} for each time instant $t$. If we consider the subsets $ \bar{\Sym} \subset \mathcal{M} $ then Procedure \ref{proce:pdfXmY} has to be modified. The modification of Procedure \ref{proce:pdfXmY} is given as Procedure \ref{proce:FT}; this modification will be justified in the upcoming Section \ref{sect:dens}. \begin{rem} Line 4 of Procedure \ref{proce:FT} may also be efficiently solved using dynamic programming techniques. \end{rem} Thus far, we have completed the statement of our main result Theorem \ref{thm:1} and the two main Corollaries \ref{cor:main} and \ref{cor:main2}. We have given Procedures \ref{proce:pdfXmY}-\ref{proce:FT} (also see Appendix \ref{app:spec}), used to efficiently evaluate the given closed-form expressions.
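Regarding the convergence guarantee of Remark \ref{rem:conv}: since every Monte-Carlo summand in Procedures \ref{proce:pdfXmY} and \ref{proce:FT} lies in $[0,1]$, Hoeffding's inequality $\Pr\{|\bar{S}_N - \mathbb{E}\bar{S}_N| \geq \epsilon\} \leq 2e^{-2N\epsilon^2}$ immediately yields an explicit sample-size estimate. A minimal sketch (the helper name is illustrative):

```python
import math

def hoeffding_samples(eps, delta):
    """Smallest N such that averaging N i.i.d. samples bounded in [0, 1]
    gives |empirical mean - true mean| < eps with probability >= 1 - delta,
    by Hoeffding's inequality: 2 * exp(-2 * N * eps**2) <= delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
```

For example, an accuracy of $\epsilon = 0.01$ with confidence $1-\delta = 0.95$ requires $N = 18445$ iterations of the sampling loop, independently of the channel memory or noise correlation.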
The rest of this paper is organized as follows. In the following Section \ref{sect:dens}, we shall prove the correctness of both Theorem \ref{thm:1} and Procedure \ref{proce:FT}. A simple upper bound on the rank of $\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ in (\ref{eqn:nu}) will also be given. In Section \ref{sect:egs}, numerical computations will be presented for various ISI channels commonly cited in the magnetic recording literature~\cite{PR}. The computations are performed for various scenarios, so that we may demonstrate a range of applications of our results. We conclude in Section \ref{sect:con}. \begin{algorithm}[!t] \SetAlgoLined \LinesNumbered \NoCaptionOfAlgo \SetKwInput{Init}{Initialize} \Init{Set $F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa}}(\mat{r}) := 0$ for all $\mat{r} \in \mathbb{R}^\kappa$;} \While{$F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa}}(\mat{r})$ not converged}{ Perform Lines 2-3 of Procedure \ref{proce:pdfXmY}\; Compute $\d_i = \d_i(\mat{u}, \mat{a}_1^\kappa)$ for all $i \in \{1,2,\cdots, n\}$ by computing \begin{eqnarray} \mathop{\max}_{k : \; \pmb{\alpha}(\Mat{E}\mat{s}_k, \mat{a}) \in \bar{\Sym}_{t_i}} \mat{s}_k^T\Mat{Q}_i \pmb{\Lambda} \mat{u} + \mu_k(\mat{a}), \nonumber\\ \mathop{\max}_{k : \; \pmb{\alpha}(\Mat{E}\mat{s}_k + \mat{e}_0, \mat{a} )\in \bar{\Sym}_{t_i}} \mat{s}_k^T\Mat{Q}_i \pmb{\Lambda} \mat{u} + \nu_k(\mat{a}), \nonumber \end{eqnarray} see (\ref{eqn:m}), where $\mu_k(\mat{a})$ and $\nu_k(\mat{a})$ denote the $k$-th components of $\pmb{\mu}(\mat{a})$ and $\pmb{\nu}(\mat{a})$, see (\ref{eqn:muY}) and (\ref{eqn:nuX}). Both $\Mat{E}$ and $\mat{e}_0$ are given in Definition \ref{defn:E}.
Also, the vector $\pmb{\alpha}(\mat{e}, \mat{a}) = [\alpha_{-\L-\ell},$ $ \alpha_{-\L-(\ell-1)}, $$ \cdots, \alpha_{\L+\ell} ]^T$ satisfies \newline \mbox{}~~~~~~~~ $ \alpha_j =$$ \alpha_j(e_j,a_j) = \left\{\begin{array}{rl} -a_j & \mbox{ if } e_j = 1, \\ a_j & \mbox{ if } e_j = 0. \end{array} \right. $\; Perform Lines 5-6 of Procedure \ref{proce:pdfXmY}\; } \caption{\textbf{Procedure 3}: Evaluating $F_{\pmb{\rand{X}}_{\textbf{t}_1^n}-\pmb{\rand{Y}}_{\textbf{t}_1^n}}(\textbf{r})$, for candidate subsets $\bar{\Sym} \subset \mathcal{M}$, see (\ref{eqn:excl})} \label{proce:FT} \end{algorithm} \renewcommand{\pmb{\beta}}{\pmb{\Gamma}} \renewcommand{\alpha}{\theta} \renewcommand{\mat}[1]{\begin{bf} #1 \end{bf}} \section{Distribution of $\pmb{\rand{X}}_{\mathbf{t}_1^n}-\pmb{\rand{Y}}_{\mathbf{t}_1^n}$ and reliability $\pmb{\rand{R}}_{\mathbf{t}_1^n} = 2/\sigma^2 \cdot |\pmb{\rand{X}}_{\mathbf{t}_1^n}-\pmb{\rand{Y}}_{\mathbf{t}_1^n}|$} \label{sect:dens} \subsection{Proof of Theorem \ref{thm:1}} \label{ssect:prf} We begin by showing the correctness of Theorem \ref{thm:1}, which was stated in the previous section. Define the random variable \begin{eqnarray} \v_t &\stackrel{\triangle}{=}& \rand{A}_{t} \cdot \mat{h}_0^T\pmb{\rand{W}}_t. \label{eqn:v} \end{eqnarray} It is easy to verify that $\v_t$ is Gaussian: recall that $\pmb{\rand{W}}_t \stackrel{\triangle}{=} [\rand{W}_{t-M},\rand{W}_{t-M+1},\cdots,\rand{W}_{t+M+I}]^T$ is the neighborhood of (Gaussian) noise samples. To improve clarity, we shall introduce the following new notation, both of which are used only in this section \begin{eqnarray} \!\!\! \!\!\! \!\!\! \alpha(\randb{A}_t) \!\!\! &\stackrel{\triangle}{=}& \!\!\! \rand{A}_t \cdot [\Mat{T}(\mathbbb{1} - \randb{A}_t)]^T \mat{h}_0 - |\mat{h}_0|^2, \nonumber\\ \!\!\! \!\!\! \!\!\! \pmb{\beta} = \pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) \!\!\! &\stackrel{\triangle}{=}& \!\!\!
\pmb{\diag}\left(\Mat{G}(\randb{A}_{t_1}),\Mat{G}(\randb{A}_{t_2}), \cdots, \Mat{G}(\randb{A}_{t_n})\right). \label{eqn:short} \end{eqnarray} Recall that $\Mat{I}_n$ denotes a size-$n$ identity matrix, and that $\otimes$ denotes the matrix Kronecker product. Using (\ref{eqn:short}), we may now more compactly write \begin{eqnarray} \Mat{Q}\pmb{\Lambda}^2\Mat{Q}^T &=& \pmb{\beta}^T \Mat{K}_{\pmb{\rand{W}}} \pmb{\beta}, \nonumber\\ \Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) &=& \diag(A_{t_1},A_{t_2},\cdots, A_{t_n}) \otimes \mat{h}_0^T \Mat{K}_{\pmb{\rand{W}}} \pmb{\beta} \nonumber\\ &&\cdot [\Mat{I}_n \otimes \S\S^T] \cdot \Mat{Q} \pmb{\Lambda}^{\dagger}, \nonumber\\ \pmb{\eta}(\pmb{\rand{U}},\pmb{\rand{A}}_{\mat{t}_1^\kappa}) &=& [\alpha(\randb{A}_{t_1}),\alpha(\randb{A}_{t_2}),\cdots, \alpha(\randb{A}_{t_n})]^T + \Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})\pmb{\rand{U}}, \nonumber\\ \end{eqnarray} where the matrices $\Mat{Q}=\Mat{Q}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ and $\pmb{\Lambda}=\pmb{\Lambda}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ are given in Definition \ref{defn:QD}, the matrix $\Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ in (\ref{eqn:F}), and $\pmb{\eta}(\pmb{\rand{U}},\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ in (\ref{eqn:eta}). \begin{pro} \label{pro:XaYeqdist} The random variables $\rand{X}_t$ and $\rand{Y}_t$ in (\ref{eqn:XaY}) can be written as \begin{eqnarray} \rand{X}_t &=& \max \left( [\mat{G}(\randb{A}_t)\S]^T \pmb{\rand{W}}_t + \pmb{\nu}(\randb{A}_t) + [\v_t + \alpha(\randb{A}_t)]\cdot \mathbbb{1} \right),\nonumber\\ \rand{Y}_t &=& \max \left( [\mat{G}(\randb{A}_t)\S]^T \pmb{\rand{W}}_t + \pmb{\mu}(\randb{A}_t) \right),\nonumber \end{eqnarray} where $\alpha(\randb{A}_t) \stackrel{\triangle}{=} \rand{A}_t \cdot [\Mat{T}(\mathbbb{1} - \randb{A}_t)]^T \mat{h}_0 - |\mat{h}_0|^2$ as given in (\ref{eqn:short}).
\hspace*{\fill}\IEEEQEDopen \begin{proof} \rm We expand $\Delta(\pmb{\rand{A}}_t,\vec{a})$ in (\ref{eqn:D}) by substituting for $\pmb{\rand{Z}}_t$ using (\ref{eqn:isi_chan}) to get \begin{align} &\Delta(\pmb{\rand{A}}_t,\vec{a}) \nonumber\\ &= |\pmb{\rand{Z}}_t - \Mat{T}\mathbbb{1} - \Mat{H}\pmb{\rand{A}}_t |^2 - |\pmb{\rand{Z}}_t - \Mat{T}\mathbbb{1} - \Mat{H}\vec{a}|^2 \nonumber\\ &= |-\pmb{\rand{W}}_t + \Mat{T}(\pmb{\rand{A}}_t - \mathbbb{1})|^2 \nonumber\\ &~~~~- |-\pmb{\rand{W}}_t + \Mat{T}(\pmb{\rand{A}}_t - \mathbbb{1}) + \Mat{H}(\pmb{\rand{A}}_t - \vec{a})|^2 \nonumber\\ &= -2[-\pmb{\rand{W}}_t+ \Mat{T}(\pmb{\rand{A}}_t - \mathbbb{1})]^T\Mat{H}(\pmb{\rand{A}}_t - \vec{a}) - |\Mat{H}(\pmb{\rand{A}}_t - \vec{a})|^2. \nonumber\\\label{xmas} \end{align} We substitute (\ref{xmas}) into the definition of $\rand{X}_t$ and $\rand{Y}_t$ in (\ref{eqn:XaY}) to obtain \begin{eqnarray} \rand{X}_t &=& \mathop{\max_{\mat{a} \in \mathcal{M} }}_{a_0 \neq \rand{A}_t} [\pmb{\rand{W}}_t+ \Mat{T}(\mathbbb{1}-\randb{A}_t)]^T\left( \frac{1}{2} \cdot \Mat{H}(\randb{A}_t - \mat{a})\right)\nonumber\\ && - \left| \frac{1}{2} \cdot \Mat{H}(\randb{A}_t - \mat{a})\right|^2 , \nonumber\\ \rand{Y}_t &=& \mathop{\max_{\mat{a} \in \mathcal{M} }}_{a_0 = \rand{A}_t} [\pmb{\rand{W}}_t+ \Mat{T}(\mathbbb{1}-\randb{A}_t)]^T\left( \frac{1}{2} \cdot \Mat{H}(\randb{A}_t - \mat{a})\right) \nonumber\\ && - \left| \frac{1}{2} \cdot \Mat{H}(\randb{A}_t - \mat{a})\right|^2. 
\label{eqn:XaY01} \end{eqnarray} Using (\ref{eqn:mate}) and Definitions \ref{defn:Sym}, \ref{defn:E} and \ref{defn:S}, we establish the following equality of sets \begin{align} &\Ev{ \frac{1}{2} (\randb{A}_t - \mat{a}) : \mat{a} \in \mathcal{M}, a_0 \neq \rand{A}_t } \nonumber\\ &\quad\quad\quad= \Ev{\diag(\randb{A}_t) \Mat{E} \mat{s}_j + A_t \cdot \mat{e}_0 : 0 \leq j \leq 2^{2m}-1 }, \nonumber\\ &\Ev{ \frac{1}{2} (\randb{A}_t - \mat{a}) : \mat{a} \in \mathcal{M}, a_0 = \rand{A}_t } \nonumber\\ &\quad\quad\quad= \Ev{\diag(\randb{A}_t) \Mat{E} \mat{s}_j: 0 \leq j \leq 2^{2m}-1 }. \label{eqn:mod} \end{align} Next, we utilize both (\ref{eqn:mod}) and (\ref{eqn:G}) to rewrite (\ref{eqn:XaY01}) as \begin{eqnarray} \rand{X}_t \!\!&=& \!\!\!\!\!\!\! \max_{j \in \{0,1,\cdots,2^{2\L}-1\}} [\pmb{\rand{W}}_t+ \Mat{T}(\mathbbb{1}-\randb{A}_t)]^T [\Mat{G}(\randb{A}_t)\mat{s}_j + A_t \mat{h}_0] \nonumber\\ && - |\Mat{G}(\randb{A}_t)\mat{s}_j + A_t \mat{h}_0|^2, \nonumber\\ \rand{Y}_t\!\! &=& \!\!\!\!\!\!\!\max_{j \in \{0,1,\cdots,2^{2\L}-1\}} [\pmb{\rand{W}}_t+ \Mat{T}(\mathbbb{1}-\randb{A}_t)]^T [\Mat{G}(\randb{A}_t)\mat{s}_j ] \nonumber\\ && - |\Mat{G}(\randb{A}_t)\mat{s}_j |^2. \label{eqn:XaY02} \end{eqnarray} By the definition of $\pmb{\mu}(\randb{A}_t)$ in (\ref{eqn:muY}) and $\Mat{S}$ in Definition \ref{defn:S}, the expression for $Y_t$ in the proposition statement follows from (\ref{eqn:XaY02}).
For $X_t$, we continue to expand (\ref{eqn:XaY02}) to get \begin{align} &\rand{X}_t \nonumber\\ &= \max \Big( [\Mat{G}(\randb{A}_t)\Mat{S} ]^T\randb{W}_t + \overbrace{\pmb{\mu}(\randb{A}_t) - 2A_t \cdot \mat{h}_0^T \Mat{G}(\randb{A}_t)\S}^{\pmb{\nu}(\randb{A}_t)} \nonumber\\ & \quad \quad \quad + \underbrace{A_t \cdot \mat{h}_0^T \randb{W}_t}_{\v_t} \cdot \mathbbb{1} + \{\underbrace{A_t [\Mat{T}(\mathbbb{1}-\randb{A}_t)]^T\mat{h}_0 - |\mat{h}_0|^2}_{\alpha(\randb{A}_t)} \} \cdot \mathbbb{1} \Big), \nonumber \end{align} in the same form as in the proposition statement, where $\pmb{\nu}(\randb{A}_t)$ is defined in (\ref{eqn:nuX}), and $\v_t$ in (\ref{eqn:v}), and $\alpha(\randb{A}_t)$ in (\ref{eqn:short}). \end{proof} \end{pro} Recall $\Mat{Q} = \Mat{Q}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ and $\pmb{\Lambda} = \pmb{\Lambda}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ from Definition \ref{defn:QD}. To prove Theorem \ref{thm:1} we require the following lemma. \renewcommand{\zeta}{U} \begin{lem} \label{lem:eig} Let $\pmb{\zeta}$ denote a standard zero-mean identity-covariance Gaussian random vector of length-$(2\L\kappa)$. Recall $\pmb{\rand{W}}_{\mat{t}_1^\kappa}$ in (\ref{eqn:Wtk}). 
The following transformation of random vectors holds \begin{align} &\ba{c} \S^T \Mat{Q}_1(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) \\ \S^T \Mat{Q}_2(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) \\ \vdots \\ \S^T \Mat{Q}_n (\pmb{\rand{A}}_{\mat{t}_1^\kappa}) \end{array}\right] \pmb{\Lambda}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) \pmb{\zeta} \nonumber\\ &= \ba{@{}c@{}c@{}c@{}c@{}} \Mat{G}(\randb{A}_{t_1})\S \\ & \Mat{G}(\randb{A}_{t_2})\S \\ & & \ddots \\ & & & \Mat{G}(\randb{A}_{t_n})\S \end{array}\right]^T \ba{c} \pmb{\rand{W}}_{t_1} \\ \pmb{\rand{W}}_{t_2} \\ \vdots \\ \pmb{\rand{W}}_{t_n} \end{array}\right], \end{align} or more concisely we equivalently write \begin{align} (\Mat{I}_n \otimes \S^T)&\cdot \Mat{Q}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})\pmb{\Lambda}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})\pmb{\zeta} \nonumber\\ & = (\Mat{I}_n \otimes \S^T) \cdot \pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T \pmb{\rand{W}}_{\mat{t}_1^\kappa}. \label{eqn:eig} \end{align} Here $\Mat{Q}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ is given in (\ref{eqn:Qpart}), and $\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ in (\ref{eqn:short}). \hspace*{\fill}\IEEEQEDopen \begin{proof} \rm After conditioning on $\pmb{\rand{A}}_{\mat{t}_1^\kappa}$, both vectors appearing on either side of (\ref{eqn:eig}) are seen to be zero-mean Gaussian random vectors (recall that $W_t$ is zero mean). Therefore, to prove the lemma, we only need to verify that after conditioning on $\pmb{\rand{A}}_{\mat{t}_1^\kappa}$, the l.h.s. and r.h.s. of (\ref{eqn:eig}) have the same covariance matrix.
This is easily done by using property i) of $\Mat{Q} = \Mat{Q}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ in Definition \ref{defn:QD}, which yields \begin{eqnarray} \mathbb{E}\left\{\left.\Mat{Q}\pmb{\Lambda}\pmb{\zeta}\pmb{\zeta}^T\pmb{\Lambda}\Mat{Q}^T \right| \pmb{\rand{A}}_{\mat{t}_1^\kappa}\right\} &=& \Mat{Q}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})\pmb{\Lambda}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^2\Mat{Q}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T \nonumber\\ &=& \pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T\Mat{K}_{\pmb{\rand{W}}}\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}). \nonumber \end{eqnarray} \end{proof} \end{lem} We are now ready to prove Theorem \ref{thm:1}. The proof is split into the following two separate cases: \begin{itemize} \item $\rank[\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T \Mat{K}_{\pmb{\rand{W}}} \pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})]=2mn$, and \item $\rank[\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T \Mat{K}_{\pmb{\rand{W}}} \pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})] < 2mn$ for some realization $\pmb{\rand{A}}_{\mat{t}_1^\kappa}=\mat{a}_1^\kappa$. \end{itemize} We begin with the first case. \renewcommand{\Omega}{U} \begin{proof}[Proof of Theorem \ref{thm:1} when $\rank(\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T\Mat{K}_{\pmb{\rand{W}}}\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})) = 2\L\kappa$] \newline\indent We first derive the following equalities \begin{align} (\pmb{\Lambda}^{\dagger}\Mat{Q}^T) &(\Mat{I}_n \otimes \S \S^T) \pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T\pmb{\rand{W}}_{\mat{t}_1^\kappa} \nonumber\\ &= (\pmb{\Lambda}^{\dagger}\Mat{Q}^T) (\Mat{I}_n \otimes \S \S^T) \Mat{Q}\pmb{\Lambda}\pmb{\zeta} \nonumber\\ &= \pmb{\Lambda}^{\dagger} \pmb{\Lambda}\pmb{\zeta} = \pmb{\rand{U}}. \label{eqn:Vinv} \end{align} The first two equalities follow by respectively applying properties i) and ii) of the matrix $\Mat{Q} = \Mat{Q}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$.
The last equality holds by virtue of the assumption $\rank(\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T\Mat{K}_{\pmb{\rand{W}}}\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})) = 2mn$, under which $\pmb{\Lambda}^{\dagger}$ is exactly the inverse of $\pmb{\Lambda}$. Recall both $\v_{t_i} \stackrel{\triangle}{=} A_{t_i} \cdot \mat{h}_0^T \randb{W}_{t_i}$ and $\pmb{\rand{V}}_{\mat{t}_1^\kappa} \stackrel{\triangle}{=} [\v_{t_1},\v_{t_2},\cdots, \v_{t_n}]^T$. Taking (\ref{eqn:Vinv}) together with (\ref{eqn:v}), we have the following transformation \begin{eqnarray} \ba{c} \pmb{\rand{V}}_{\mat{t}_1^\kappa} \\ \pmb{\zeta} \end{array}\right] = \ba{c} \diag(A_{t_1},A_{t_2},\cdots, A_{t_n}) \otimes \mat{h}_0^T \\ (\pmb{\Lambda}^{\dagger}\Mat{Q}^T) (\Mat{I}_n \otimes \S \S^T) \pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T \end{array}\right] \pmb{\rand{W}}_{\mat{t}_1^\kappa}. \label{eqn:main01} \end{eqnarray} Consider the conditional event \begin{eqnarray} \Ev{\pmb{\rand{X}}_{\mat{t}_1^\kappa} - \pmb{\rand{Y}}_{\mat{t}_1^\kappa} \leq \mat{r} | \pmb{\rand{A}}_{\mat{t}_1^\kappa} , \pmb{\zeta }} \label{eqn:mainevt} \end{eqnarray} where $\mat{r}=[r_1,r_2,\cdots, r_n]^T \in \mathbb{R}^{n}$. It is clear from both Proposition \ref{pro:XaYeqdist} and (\ref{eqn:main01}) that, after conditioning on both $ \pmb{\rand{A}}_{\mat{t}_1^\kappa} $ and $ \pmb{\zeta }$ in (\ref{eqn:mainevt}), the only quantity that remains random is the Gaussian vector $\pmb{\rand{V}}_{\mat{t}_1^\kappa}$.
Using Lemma \ref{lem:eig}, we have the transformation \[ \S^T \Mat{Q}_i(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) \pmb{\Lambda}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) \pmb{\Omega} = [\mat{G}(\randb{A}_{t_i})\S]^T \randb{W}_{t_i}, \] therefore we may rewrite both $X_{t_i}$ and $Y_{t_i}$ from Proposition \ref{pro:XaYeqdist} as \begin{eqnarray} \rand{X}_{t_i} &=& \max \left(\S^T \Mat{Q}_i\pmb{\Lambda} \pmb{\Omega} + \pmb{\nu}(\randb{A}_{t_i}) \right) + \v_{t_i} + \alpha(\randb{A}_{t_i}),\nonumber\\ \rand{Y}_{t_i} &=& \max \left( \S^T\Mat{Q}_i\pmb{\Lambda} \pmb{\Omega} + \pmb{\mu}(\randb{A}_{t_i}) \right).\label{eqn:mod3} \end{eqnarray} The event (\ref{eqn:mainevt}) can then be written as \begin{align} & \Ev{\pmb{\rand{X}}_{\mat{t}_1^\kappa} - \pmb{\rand{Y}}_{\mat{t}_1^\kappa} \leq \mat{r} | \pmb{\rand{A}}_{\mat{t}_1^\kappa} , \pmb{\zeta }} = \bigcap_{1\leq i \leq \kappa } \Ev{X_{t_i} \leq r_i + Y_{t_i} | \pmb{\rand{A}}_{\mat{t}_1^\kappa} , \pmb{\zeta }} \nonumber\\ \!\!\!\! &= \!\!\!\! \bigcap_{1\leq i \leq \kappa }\left\{ \left. \max \left(\begin{array}{l} [\S^T \Mat{Q}_i \pmb{\Lambda}\pmb{\Omega} + \pmb{\nu}(\randb{A}_{t_i}) ] \\ + \v_{t_i} + \alpha(\randb{A}_{t_i}) \end{array} \right) \leq r_i + Y_{t_i} \right| \!\!\!\! \begin{array}{c} \pmb{\rand{A}}_{\mat{t}_1^\kappa} , \pmb{\zeta}\end{array} \!\!\!\! \right\} \nonumber\\ \!\! &= \! \!\!\!\! \bigcap_{1\leq i \leq \kappa } \!\!\!\left.\left\{ \!\! \begin{array}{l} \v_{t_i} + \\ \alpha(\randb{A}_{t_i}) \end{array} \!\!\! \leq \left( \!\!\! \begin{array}{c} r_i + \max \left[\S^T \Mat{Q}_i\pmb{\Lambda}\pmb{\Omega} + \pmb{\mu}(\randb{A}_{t_i})\right] \\ - \max \left[\S^T\Mat{Q}_i\pmb{\Lambda}\pmb{\Omega} + \pmb{\nu}(\randb{A}_{t_i})\right] \end{array} \!\!\! \right) \right| \!\!\!\! \begin{array}{c} \pmb{\rand{A}}_{\mat{t}_1^\kappa} , \pmb{\zeta}\end{array} \!\!\!\! 
\right\}.\nonumber\\ \label{eqn:main1} \end{align} Continuing from (\ref{eqn:main1}), we utilize (\ref{eqn:m}) to rewrite \begin{eqnarray} && \Ev{\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa} \leq \mat{r} | \pmb{\rand{A}}_{\mat{t}_1^\kappa}, \pmb{\Omega}} \nonumber\\ &=& \bigcap_{1\leq i \leq \kappa }\left\{ \v_{t_i} + \alpha(\randb{A}_{t_i }) \leq r_i + \delta_i(\pmb{\Omega},\pmb{\rand{A}}_{\mat{t}_1^\kappa}) |\pmb{\rand{A}}_{\mat{t}_1^\kappa} , \pmb{\Omega} \right\}.~~~~~ \label{eqn:main001} \end{eqnarray} We now determine both the mean and variance of $\pmb{\rand{V}}_{\mat{t}_1^\kappa}$, after conditioning on both $\pmb{\rand{A}}_{\mat{t}_1^\kappa}$ and $\pmb{\Omega}$. From (\ref{eqn:main01}), we derive the formula \begin{eqnarray} \mathbb{E}\{\pmb{\rand{V}}_{\mat{t}_1^\kappa} \pmb{\zeta}^T | \pmb{\rand{A}}_{\mat{t}_1^\kappa} \} &=& \diag(A_{t_1},A_{t_2},\cdots, A_{t_n}) \otimes \mat{h}_0^T \Mat{K}_{\pmb{\rand{W}}} \nonumber\\ &&\cdot~\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) (\Mat{I}_n\otimes\S\S^T)\Mat{Q} \pmb{\Lambda}^{\dagger} \nonumber\\ &\stackrel{\triangle}{=}& \Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}), \label{eqn:F2} \end{eqnarray} where $\Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ is given in (\ref{eqn:F}) . Next, we compute the conditional mean \begin{eqnarray} \mathbb{E} \left\{\pmb{\rand{V}}_{\mat{t}_1^\kappa} | \pmb{\rand{A}}_{\mat{t}_1^\kappa},\pmb{\zeta}\right\} &=& \mathbb{E}\{\pmb{\rand{V}}_{\mat{t}_1^\kappa} |\pmb{\rand{A}}_{\mat{t}_1^\kappa} \} + \mathbb{E}\{\pmb{\rand{V}}_{\mat{t}_1^\kappa} \pmb{\zeta}^T | \pmb{\rand{A}}_{\mat{t}_1^\kappa} \}\pmb{\rand{U}} \nonumber\\ &=& \Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) \pmb{\rand{U}},\label{eqn:m2} \end{eqnarray} where the second equality follows from $\mathbb{E}\{\pmb{\rand{V}}_{\mat{t}_1^\kappa} |\pmb{\rand{A}}_{\mat{t}_1^\kappa} \}=0$ (because $\pmb{\rand{W}}_{\mat{t}_1^\kappa}$ has zero mean, see (\ref{eqn:v})), and substituting (\ref{eqn:F2}). 
The conditional covariance matrix $\mbox{\rm $\mathbb{V}$} \left\{\pmb{\rand{V}}_{\mat{t}_1^\kappa} | \pmb{\rand{A}}_{\mat{t}_1^\kappa}, \pmb{\zeta} \right\} $ is obtained as follows \begin{align} & \mbox{\rm $\mathbb{V}$} \left\{\pmb{\rand{V}}_{\mat{t}_1^\kappa} | \pmb{\rand{A}}_{\mat{t}_1^\kappa}, \pmb{\zeta} \right\} \nonumber\\ &= \mathbb{E}\{\pmb{\rand{V}}_{\mat{t}_1^\kappa} \pmb{\rand{V}}_{\mat{t}_1^\kappa}^T| \pmb{\rand{A}}_{\mat{t}_1^\kappa}\} - \mathbb{E}\{\pmb{\rand{V}}_{\mat{t}_1^\kappa} \pmb{\zeta}^T| \pmb{\rand{A}}_{\mat{t}_1^\kappa}\} \cdot \mathbb{E}\{\pmb{\zeta} \pmb{\rand{V}}_{\mat{t}_1^\kappa}^T | \pmb{\rand{A}}_{\mat{t}_1^\kappa}\} \nonumber\\ &= \diag(A_{t_1},A_{t_2},\cdots, A_{t_n}) \otimes \mat{h}_0^T \Mat{K}_{\pmb{\rand{W}}} \nonumber\\ &~~~\cdot\diag(A_{t_1},A_{t_2},\cdots, A_{t_n}) \otimes \mat{h}_0 - \Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})\Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T \nonumber\\ &\stackrel{\triangle}{=} \Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) ,\label{eqn:nu2} \end{align} where $\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ is given in (\ref{eqn:nu}). 
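For reference, the two conditional moments just computed are instances of the standard conditioning formulas for jointly Gaussian vectors: conditionally on $\pmb{\rand{A}}_{\mat{t}_1^\kappa}$, the pair $(\pmb{\rand{V}}_{\mat{t}_1^\kappa}, \pmb{\zeta})$ is zero-mean jointly Gaussian, and
\begin{align}
\mathbb{E}\{\pmb{\rand{V}}_{\mat{t}_1^\kappa} \,|\, \pmb{\zeta}\} &= \pmb{\Sigma}_{V\zeta}\, \pmb{\Sigma}_{\zeta\zeta}^{-1}\, \pmb{\zeta}, \nonumber\\
\mbox{\rm $\mathbb{V}$}\{\pmb{\rand{V}}_{\mat{t}_1^\kappa} \,|\, \pmb{\zeta}\} &= \pmb{\Sigma}_{VV} - \pmb{\Sigma}_{V\zeta}\, \pmb{\Sigma}_{\zeta\zeta}^{-1}\, \pmb{\Sigma}_{\zeta V}, \nonumber
\end{align}
where $\pmb{\Sigma}$ denotes the relevant (cross-)covariance matrix, all covariances being taken conditionally on $\pmb{\rand{A}}_{\mat{t}_1^\kappa}$. Since $\pmb{\zeta}$ has identity covariance, $\pmb{\Sigma}_{\zeta\zeta}^{-1}$ degenerates to the identity, which is why no inverse covariance factor appears in (\ref{eqn:m2}) and (\ref{eqn:nu2}).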
The expression for $F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa}} (\mat{r})$ in Theorem \ref{thm:1} now follows easily from (\ref{eqn:main001}) \begin{align} \big\{\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa} \leq &\mat{r} | \pmb{\rand{A}}_{\mat{t}_1^\kappa}, \pmb{\Omega} \big\}\nonumber\\ &= \Big\{\pmb{\rand{V}}_{\mat{t}_1^\kappa} + [\alpha(\randb{A}_{t_1}),\alpha(\randb{A}_{t_2}),\cdots, \alpha(\randb{A}_{t_n})]^T \nonumber\\ & \quad\quad\; \leq \mat{r} + \pmb{\d}(\pmb{\rand{U}},\pmb{\rand{A}}_{\mat{t}_1^\kappa}) \Big|\pmb{\rand{A}}_{\mat{t}_1^\kappa}, \pmb{\rand{U}} \Big\} \nonumber \end{align} and noticing that the random vector \begin{eqnarray} \pmb{\rand{V}}_{\mat{t}_1^\kappa} + [\alpha(\randb{A}_{t_1}),\alpha(\randb{A}_{t_2}),\cdots, \alpha(\randb{A}_{t_n})]^T \end{eqnarray} is (conditionally on $\pmb{\rand{A}}_{\mat{t}_1^\kappa}$ and $\pmb{\rand{U}}$) Gaussian distributed with distribution function \[ \Phi_{\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})}(\mat{r} - \pmb{\eta}(\pmb{\rand{U}},\pmb{\rand{A}}_{\mat{t}_1^\kappa})), \] where the conditional mean $\pmb{\eta}(\pmb{\rand{U}},\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ and conditional covariance $\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ are given in (\ref{eqn:m2}) and (\ref{eqn:nu2}), respectively. \end{proof} Next we consider the other case, where $\rank[\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T\Mat{K}_{\pmb{\rand{W}}}\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})] < 2mn$ for some value of $\pmb{\rand{A}}_{\mat{t}_1^\kappa} = \mat{a}_1^\kappa$. In this case, the arguments of the preceding proof fail in equation (\ref{eqn:Vinv}): the final equality does not hold, because $\pmb{\Lambda}^{\dagger}$ is then no longer the inverse of $\pmb{\Lambda}$.
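This failure mode is easy to visualize numerically. The following sketch (numpy, with arbitrary illustrative values, not tied to any particular channel) shows that for a rank-deficient diagonal $\pmb{\Lambda}$, the product $\pmb{\Lambda}^{\dagger}\pmb{\Lambda}$ merely projects onto the support of $\pmb{\Lambda}$ rather than recovering the full vector:

```python
import numpy as np

# Rank-deficient diagonal matrix: one zero eigenvalue.
Lam = np.diag([2.0, 0.5, 0.0])

# Pseudoinverse of a diagonal matrix: reciprocate only the
# non-zero diagonal entries (as in the definition of Lambda-dagger).
Lam_pinv = np.diag([1.0 / d if d != 0.0 else 0.0 for d in np.diag(Lam)])

u = np.array([1.0, -3.0, 4.0])
proj = Lam_pinv @ Lam @ u  # Lam_pinv @ Lam projects onto the support of Lam

print(proj)  # the third coordinate of u is lost: [ 1. -3.  0.]
```

This is exactly why the truncated quantities $\U$, $\bar{\Q}$, $\bar{\Del}$ are introduced in the second part of the proof: on the support of $\pmb{\Lambda}$, the pseudoinverse does act as a true inverse.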
However, as we shall soon see, the expression for $F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa}}(\mat{r})$ in Theorem \ref{thm:1} still holds for this case. \renewcommand{\U}{\pmb{\Omega}_1^j} \newcommand{\bar{\Q}}{\bar{\Mat{Q}}} \newcommand{\bar{\Del}}{\bar{\pmb{\Lambda}}} \renewcommand{\O}{\pmb{\Omega}} \begin{proof}[Proof of Theorem \ref{thm:1} when $\rank(\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T\Mat{K}_{\pmb{\rand{W}}}\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})) < 2mn$ for some $\pmb{\rand{A}}_{\mat{t}_1^\kappa}= \mat{a}_1^\kappa$] \rm \newline\indent Recall that the matrix $[\pmb{\Lambda}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})]^{\dagger} = \pmb{\Lambda}^{\dagger}$ is formed by reciprocating only the non-zero diagonal elements of $\pmb{\Lambda}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) = \pmb{\Lambda}$. For a particular realization $\pmb{\rand{A}}_{\mat{t}_1^\kappa}=\mat{a}_1^\kappa$, let $j=\rank(\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T\Mat{K}_{\pmb{\rand{W}}}\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}))$ denote the rank of the matrix $\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T\Mat{K}_{\pmb{\rand{W}}}\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$, and consider what happens if $j < 2mn$. Without loss of generality, assume that all non-zero diagonal elements of $\pmb{\Lambda}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) = \pmb{\Lambda}$ are located at the first $j < 2mn$ diagonal positions of $\pmb{\Lambda}$. Define the following size-$j$ quantities: \begin{itemize} \item the random vector $\U= [\Omega_1,\Omega_2, \cdots, \Omega_j]^T$, a truncated version of $\pmb{\rand{U}}= [\Omega_1,\Omega_2, \cdots, \Omega_{2mn}]^T$, \item the size $2mn$ by $j$ matrix $\bar{\Q}$, containing the first $j$ columns of $\Mat{Q}$, see Definition \ref{defn:QD}, and \item the size-$j$ diagonal matrix $\bar{\Del}$, containing the $j$ positive diagonal elements of $\pmb{\Lambda}$, also see Definition \ref{defn:QD}.
\end{itemize} If we substitute the new quantities $\U$, $\bar{\Q}$ and $\bar{\Del}$ for $\pmb{\rand{U}}$, $\Mat{Q}$ and $\pmb{\Lambda}$ in equation (\ref{eqn:Vinv}), it is clear that (\ref{eqn:Vinv}) still holds true, i.e., \begin{align} (\bar{\Del}^{\dagger}\bar{\Q}^T)& (\Mat{I}_n \otimes \S\S^T ) \pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T\pmb{\rand{W}}_{\mat{t}_1^\kappa} \!\!\!\! \nonumber\\ &= \!(\bar{\Del}^{\dagger}\bar{\Q}^T) (\Mat{I}_n \otimes \S\S^T ) \bar{\Q}\bar{\Del}\U \nonumber\\ &= \bar{\Del}^{\dagger} \bar{\Del} \U = \U, \end{align} where we note from Definition \ref{defn:QD} that it must be true that $\bar{\Q}^T(\Mat{I}_n \otimes \S\S^T ) \bar{\Q} = \Mat{I}_j$, with $\Mat{I}_j$ the size-$j$ identity matrix. Hence, Theorem \ref{thm:1} clearly holds when we substitute $\U$, $\bar{\Q}$ and $\bar{\Del}$ for $\pmb{\rand{U}}$, $\Mat{Q}$ and $\pmb{\Lambda}$. Further, we can verify the following facts: \begin{itemize} \item $\bar{\Q}_i \bar{\Del}_i \U = \Mat{Q}_i\pmb{\Lambda} \pmb{\Omega}$, and therefore \item $\pmb{\d}(\U,\pmb{\rand{A}}_{\mat{t}_1^\kappa})= \pmb{\d}(\pmb{\Omega},\pmb{\rand{A}}_{\mat{t}_1^\kappa})$. Also, \item $\Mat{F}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ remains unaltered whether we use $\Mat{Q},\pmb{\Lambda}$ or $\bar{\Q}, \bar{\Del}$, therefore \item $\pmb{\eta}(\U,\pmb{\rand{A}}_{\mat{t}_1^\kappa})= \pmb{\eta}(\pmb{\Omega},\pmb{\rand{A}}_{\mat{t}_1^\kappa})$. Also, \item $\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ remains unaltered whether we use $\Mat{Q},\pmb{\Lambda}$ or $\bar{\Q}, \bar{\Del}$. \end{itemize} Thus we conclude that \begin{align} &\mathbb{E} \Ev{ \left.\Phi_{\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})}(\pmb{\d}(\U,\pmb{\rand{A}}_{\mat{t}_1^\kappa}) - \pmb{\eta}(\U,\pmb{\rand{A}}_{\mat{t}_1^\kappa})) \right|\pmb{\rand{A}}_{\mat{t}_1^\kappa} }\nonumber\\ & = \mathbb{E} \Ev{ \left.
\Phi_{\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})}(\pmb{\d}(\O,\pmb{\rand{A}}_{\mat{t}_1^\kappa}) - \pmb{\eta}(\O,\pmb{\rand{A}}_{\mat{t}_1^\kappa})) \right| \pmb{\rand{A}}_{\mat{t}_1^\kappa} }\nonumber \end{align} must hold, and thus Theorem \ref{thm:1} must be true even when $\rank [\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})^T\Mat{K}_{\pmb{\rand{W}}}\pmb{\beta}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})] < 2mn$ for certain values of $\pmb{\rand{A}}_{\mat{t}_1^\kappa} = \mat{a}_1^\kappa$. \end{proof} This completes the proof of Theorem \ref{thm:1}; we next derive an upper bound on the rank of the matrix $\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ in (\ref{eqn:nu2}). We point out that $\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ may even have rank $0$, i.e., $\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ may equal the zero matrix. \subsection{Other comments} \renewcommand{\pmb{\beta}}{\pmb{\beta}} \renewcommand{\alpha}{\alpha} The following proposition gives the upper bound on $\rank(\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}))$; the bound depends on both the chosen time instants $\{t_1,t_2,\cdots, t_\kappa\}$ and the MLM truncation length $\L$. \begin{pro} \label{pro:rankKv} The rank of $\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$ equals at most the number of time instants $t \in \{t_1,t_2,\cdots, t_\kappa\}$ that satisfy $|t - t'| > \L$ for all $t' \in \{t_1,t_2,\cdots, t_\kappa\}\setminus \{t \}$. \hspace*{\fill}\IEEEQEDopen \end{pro} Proposition \ref{pro:rankKv} is proved using the following lemma.
\newcommand{\mat{s}}{\mat{s}} \begin{lem} \label{lem:WandV} If two time instants $t_1$ and $t_2$ satisfy $|t_1 - t_2| \leq \L$, then observation of $[\mat{G} (\randb{A}_{t_1}) \S ]^T\pmb{\rand{W}}_{t_1}$ uniquely determines $\rand{V}_{t_2} \stackrel{\triangle}{=} A_{t_2}\cdot \mat{h}_0^T\pmb{\rand{W}}_{t_2}$ (and vice versa, observation of $[\mat{G} (\randb{A}_{t_2}) \S ]^T\pmb{\rand{W}}_{t_2}$ uniquely determines $\rand{V}_{t_1} \stackrel{\triangle}{=} A_{t_1} \cdot \mat{h}_0^T\pmb{\rand{W}}_{t_1}$). \hspace*{\fill}\IEEEQEDopen \begin{proof} \rm Recall that $\rand{V}_{t_2}$ equals \begin{eqnarray} \rand{V}_{t_2} \stackrel{\triangle}{=} A_{t_2} \cdot \mat{h}_0^T \pmb{\rand{W}}_{t_2} = A_{t_2} \cdot \left( h_0 \rand{W}_{t_2} + \cdots + h_I \rand{W}_{t_2+I} \right).\nonumber \end{eqnarray} If the condition $|t_1 - t_2| \leq \L$ is satisfied, then $\rand{W}_{t_2}, \cdots, \rand{W}_{t_2+I}$ is a length-$(I+1)$ subsequence of $\pmb{\rand{W}}_{t_1} \stackrel{\triangle}{=} [\rand{W}_{t_1-\L}, \rand{W}_{t_1-\L+1}, \linebreak[3]\cdots, \rand{W}_{t_1+\L+I}]^T$. From the definition of $\S$ (see Definition \ref{defn:S}), and because $|t_1 - t_2| \leq \L$, the matrix $\S$ must have a column $\mat{s}$ that satisfies $\Mat{E}\mat{s} = \mat{e}_{t_2-t_1}$, see Definition \ref{defn:E} for $\Mat{E}$ and its columns $\mat{e}_i$.
Then for this particular column $\mat{s}$ we have \begin{eqnarray} [\mat{G} (\randb{A}_{t_1}) \mat{s} ]^T\pmb{\rand{W}}_{t_1} \!\!\!&=& \!\!\![\Mat{H}\diag(\randb{A}_{t_1})\Mat{E}\mat{s}]^T\pmb{\rand{W}}_{t_1} \nonumber\\ &=& A_{t_2} \cdot [\Mat{H}\mat{e}_{t_2-t_1}]^T\pmb{\rand{W}}_{t_1} \nonumber\\ &=& A_{t_2} \cdot \mat{h}_0^T \pmb{\rand{W}}_{t_2} \stackrel{\triangle}{=} \rand{V}_{t_2}, \nonumber \end{eqnarray} where the second equality holds because $\mat{s}$ satisfies $\diag(\randb{A}_{t_1})\Mat{E}\mat{s} = \diag(\randb{A}_{t_1}) \mat{e}_{t_2-t_1} = A_{t_2}\cdot \mat{e}_{t_2-t_1}$, and also \begin{align} &[\Mat{H}\mat{e}_{t_2-t_1}]^T\pmb{\rand{W}}_{t_1} \nonumber\\ &= [\Mat{H}\mat{e}_{t_2-t_1}]^T [\rand{W}_{t_1-\L}, \rand{W}_{t_1-\L+1}, \cdots, \rand{W}_{t_1+\L+I}]^T \nonumber\\ &= h_0 \rand{W}_{t_2} + h_1 \rand{W}_{t_2+1} + \cdots + h_I \rand{W}_{t_2+I}. \nonumber \end{align} By symmetry, the same argument holds for $[\mat{G} (\randb{A}_{t_2}) \S ]^T\pmb{\rand{W}}_{t_2}$ and $\rand{V}_{t_1} \stackrel{\triangle}{=} A_{t_1}\cdot \mat{h}_0^T \pmb{\rand{W}}_{t_1}$. \end{proof} \end{lem} \begin{proof}[Proof of Proposition \ref{pro:rankKv}] Recall from (\ref{eqn:nu2}) that $\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}) \stackrel{\triangle}{=} \mbox{\rm $\mathbb{V}$}\{\pmb{\rand{V}}_{\mat{t}_1^\kappa} |\pmb{\rand{A}}_{\mat{t}_1^\kappa}, \pmb{\zeta } \}$ is the (conditional) covariance matrix of $\pmb{\rand{V}}_{\mat{t}_1^\kappa}$. After conditioning on $\pmb{\zeta}$, the vector $\Mat{Q}_i\pmb{\Lambda}\pmb{\zeta} = [\mat{G}(\randb{A}_{t_i})\S]^T \pmb{\rand{W}}_{t_i} $ is uniquely determined, see Lemma \ref{lem:eig}. Furthermore, by Lemma \ref{lem:WandV}, if $\Mat{Q}_i\pmb{\Lambda}\pmb{\zeta} = [\mat{G}(\randb{A}_{t_i})\S]^T \pmb{\rand{W}}_{t_i} $ is uniquely determined then $\rand{V}_{t_j} \stackrel{\triangle}{=} A_{t_j} \cdot \mat{h}_0^T \pmb{\rand{W}}_{t_j}$ is also determined whenever $|t_i - t_j | \leq \L$.
Thus we conclude that the only variables $\rand{V}_{t_i}$ that may contribute to the rank of $\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa})$, must be those with corresponding $t_i$ that are separated from all other $\{t_1,t_2,\cdots,t_\kappa \}\setminus \{ t_i\}$ by greater than $\L$. \end{proof} \renewcommand{\tt}{{t_i}} \begin{rem} \label{rem:smooth} From the expression for $F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa}}(\mat{r})$ in Theorem \ref{thm:1}, the distribution function $F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa}}(\mat{r})$ must be left-continuous~\cite{Papoulis:Text}, if the $\rank(\Mat{K}_{\pmb{\rand{V}}}(\pmb{\rand{A}}_{\mat{t}_1^\kappa}))=n$. \end{rem} \begin{table} \renewcommand{\arraystretch}{.8} \centering \caption{Various ISI channels in magnetic recording~\cite{PR}} \begin{tabular}[t]{|c|rrr|c|} \hline \multirow{2}{*}{Channel} & \multicolumn{3}{c|}{Coefficients} & Memory \\ & $h_0$ & $h_1$ & $h_2$ & Length $\ell$\\ \hline PR1 & $1$ & $1$ & - & 1\\ \hline Dicode & $1$ & $-1$ & - & 1\\ \hline PR2 & $1$ & $2$ & $1$ & 2\\ \hline PR4 & $1$ & $0$ & $-1$ & 2\\ \hline \end{tabular} \label{tab:1} \end{table} \begin{figure*}[!t] \centering \includegraphics[width=.8\linewidth]{marginal_dicode.eps} \caption{Marginal reliability distribution $F_{\rand{X}_t-\rand{Y}_t}(\sigma^2/2 \cdot r)$ computed for the PR1 channel (see Table \ref{tab:1}). Truncation lengths $\L$ are varied from $1$ to $5$. At SNR 3 dB, all curves are seen to be extremely close, with the exception of $\L=1$. At SNR 3 dB and choice of $\L=2$, the computed distribution appears close to the simulated distribution. Hence, $\L=2$ seems to be a good choice. 
At SNR 10 dB, a good choice appears to be $\L=5$.} \label{fig:marg_dicode} \end{figure*} We conclude this section by verifying the correctness of Procedure \ref{proce:FT}, used to evaluate $F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa}}(\mat{r})$ when candidate subsets $\bar{\Sym} \subset \mathcal{M}$ (see (\ref{eqn:excl})) are considered. The only difference between Procedures \ref{proce:pdfXmY} and \ref{proce:FT} is that Line 3 of Procedure \ref{proce:FT} replaces Line 4 of Procedure \ref{proce:pdfXmY}. First, verify that the following set equalities hold \begin{align} &\Ev{\mat{a} \in \bar{\Sym}_{t_i} : a_0 \neq A_{t_i}} \nonumber\\ &~~~= \Ev{\pmb{\alpha}(\Mat{E}\mat{s}_k + \mat{e}_0, \randb{A}_{t_i})\in \bar{\Sym}_{t_i} : 0 \leq k \leq 2^{2m}-1},\nonumber\\ &\Ev{\mat{a} \in \bar{\Sym}_{t_i} : a_0 = A_{t_i}} \nonumber\\ &~~~= \Ev{\pmb{\alpha}(\Mat{E}\mat{s}_k , \randb{A}_{t_i})\in \bar{\Sym}_{t_i} : 0 \leq k \leq 2^{2m}-1}, \label{eqn:mod2} \end{align} where the function $\pmb{\alpha}(\mat{e}, \randb{A}_{t_i})$ is given in Line 3 of Procedure \ref{proce:FT}. Next, perform the following verifications in the order presented: \begin{itemize} \item Replace $\mathcal{M}$ by $\bar{\Sym}_{t_i}$ in the definitions of $R_{t_i}$ in (\ref{relt}). Replace $\mathcal{M}$ by $\bar{\Sym}_{t_i}$ in both $X_{t_i}$ and $Y_{t_i}$ in (\ref{eqn:XaY}). The validity of Proposition \ref{relprop} remains unaffected. \item Replace $\mathcal{M}$ by $\bar{\Sym}_{t_i}$ in the proof of Proposition \ref{pro:XaYeqdist}. The change first affects the proof starting from (\ref{eqn:XaY01}), and (\ref{eqn:mod}) needs to be slightly modified using (\ref{eqn:mod2}).
The new Proposition \ref{pro:XaYeqdist} finally reads \begin{align} \rand{X}_\tt &= \max_{k :\; \pmb{\alpha}(\Mat{E}\mat{s}_k + \mat{e}_0, \randb{A}_\tt) \in \bar{\Sym}_\tt} \mat{s}_k^T [\mat{G}(\randb{A}_\tt)]^T \pmb{\rand{W}}_\tt\nonumber\\ & \quad+ \nu_k(\randb{A}_\tt) + \v_\tt + \theta(\randb{A}_\tt),\nonumber\\ \rand{Y}_\tt &= \max_{k :\; \pmb{\alpha}(\Mat{E}\mat{s}_k, \randb{A}_\tt) \in \bar{\Sym}_\tt} \mat{s}_k^T [\mat{G}(\randb{A}_\tt)]^T \pmb{\rand{W}}_\tt + \mu_k(\randb{A}_\tt).\nonumber \end{align} \item Utilize the new Proposition \ref{pro:XaYeqdist} in the proof of Theorem \ref{thm:1}. The change first affects the proof starting from (\ref{eqn:mod3}). Proceeding from (\ref{eqn:main1})-(\ref{eqn:main001}) we arrive at the new formulas \begin{eqnarray} \d_i &=& \d_i(\pmb{\rand{U}},\pmb{\rand{A}}_{\mat{t}_1^\kappa}) \nonumber\\ &=& \max_{k :\; \pmb{\alpha}(\Mat{E}\mat{s}_k + \mat{e}_0, \randb{A}_\tt) \in \bar{\Sym}_\tt} \mat{s}_k^T \Mat{Q}_i \pmb{\Lambda} \pmb{\rand{U}} + \nu_k(\randb{A}_\tt) \nonumber\\ && - \max_{k :\; \pmb{\alpha}(\Mat{E}\mat{s}_k, \randb{A}_\tt) \in \bar{\Sym}_\tt} \mat{s}_k^T \Mat{Q}_i \pmb{\Lambda} \pmb{\rand{U}} + \mu_k(\randb{A}_\tt). \nonumber \end{eqnarray} This is exactly the way $\d_i$ is computed in Procedure \ref{proce:FT}, Line 3. \end{itemize} This concludes our verification of Procedure \ref{proce:FT}. \section{Numerical Computations} \label{sect:egs} \newcommand{F_{\Xtk-\Ytk}(\sig^2/2\cdot \mat{r})}{F_{\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa}}(\sigma^2/2\cdot \mat{r})} We now present numerical computations performed for various ISI channels. To demonstrate the generality of our results, various cases will be considered. Both i) the reliability distribution $F_{\pmb{\rand{R}}_{\mat{t}_1^\kappa}}(\mat{r})$ and ii) the symbol error probability $\Pr{\rand{B}_{t_1}\neq \rand{A}_{t_1}, \cdots, \rand{B}_{t_n}\neq \rand{A}_{t_n}}$ will be graphically displayed in the following manner. 
Recall from Corollaries \ref{cor:main} and \ref{cor:main2} that we have $F_{\pmb{\rand{R}}_{\mat{t}_1^\kappa}}(\mat{r})=F_{|\pmb{\rand{X}}_{\mat{t}_1^\kappa}-\pmb{\rand{Y}}_{\mat{t}_1^\kappa}|}(\sigma^2/2\cdot \mat{r})$ (here $\sigma^2$ denotes the noise variance in (\ref{eqn:snr})) and $\Pr{\rand{B}_{t_1}\neq \rand{A}_{t_1}, \cdots, \rand{B}_{t_n}\neq \rand{A}_{t_n}} = \Pr{\pmb{\rand{X}}_{\mat{t}_1^\kappa} \geq \pmb{\rand{Y}}_{\mat{t}_1^\kappa}}$. Therefore, both quantities i) and ii) will be displayed utilizing a \emph{single} graphical plot of $F_{\Xtk-\Ytk}(\sig^2/2\cdot \mat{r})$. The chosen ISI channels for our tests are given in Table \ref{tab:1}; these are commonly cited channels in the magnetic recording literature~\cite{PR,Immink}. Define the signal-to-noise ratio (SNR) as $10\log_{10} ( \sum_{i=0}^\ell h_i^2 /\sigma^2)$. The input symbol distribution $\Pr{\randb{A}_t=\mat{a}}$ will always be uniform, i.e., $\Pr{\randb{A}_t=\mat{a}}=2^{-2(m+\ell)-1}$, see (\ref{eqn:sig_vect}), unless stated otherwise. \newcommand{F_{X_t-Y_t}(\sig^2/2 \cdot r)}{F_{X_t-Y_t}(\sigma^2/2 \cdot r)} \subsection{Marginal distribution $F_{X_t-Y_t}(\sig^2/2 \cdot r)$ when the noise is i.i.d.} \label{ssect:marg1} First, consider the case where the noise samples $\rand{W}_t$ are i.i.d., so that $\sigma^2 = \mathbb{E}\{\rand{W}_t^2\}$. Figure \ref{fig:marg_dicode} shows the marginal distribution $F_{X_t-Y_t}(\sig^2/2 \cdot r)$ computed for the PR1 channel (see Table \ref{tab:1}) with memory $\ell=1$. The distribution is shown for various truncation lengths $\L=1$ to $5$, and two different SNRs: 3 dB and 10 dB. At SNR 3 dB, we observe that, with the exception of $\L=1$, all curves appear to be extremely close. At SNR 3 dB, a good choice for the truncation length $\L$ appears to be $\L=2$; the computed distribution for $\L=2$ appears close to the simulated distribution. At SNR 10 dB, it appears that $\L=5$ is a good choice.
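As a quick numerical illustration of the SNR definition above (an illustrative sketch only; the channel taps are those of Table \ref{tab:1}):

```python
import numpy as np

def snr_db(h, sigma2):
    # SNR definition from the text: 10*log10(sum_i h_i^2 / sigma^2)
    return 10.0 * np.log10(np.sum(np.square(h)) / sigma2)

# Channel taps from Table I.
channels = {"PR1": [1, 1], "Dicode": [1, -1], "PR2": [1, 2, 1], "PR4": [1, 0, -1]}

# Noise variance that puts the PR1 channel at SNR 3 dB.
sigma2 = np.sum(np.square(channels["PR1"])) / 10.0 ** (3.0 / 10.0)
print(round(snr_db(channels["PR1"], sigma2), 9))  # 3.0
```

Note that for a fixed noise variance, channels with larger tap energy (e.g. PR2) operate at a higher SNR under this definition.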
The probability of symbol error $\Pr{\rand{B}_t \neq \rand{A}_t}=\Pr{\rand{X}_t \geq \rand{Y}_t} = 1 - F_{X_t - Y_t}(0)$ is observed to decrease as the truncation length $\L$ increases; this is expected. At SNR 3 dB, the (error) probability $\Pr{\rand{X}_t \geq \rand{Y}_t} = 1 - F_{X_t - Y_t}(0) \approx 1.4 \times 10^{-1}$ for truncation lengths $\L > 1$. For SNR 10 dB, the (error) probability $\Pr{\rand{X}_t \geq \rand{Y}_t} $ is seen to vary significantly between truncation lengths $\L=1$ and $5$; the probability $\Pr{\rand{X}_t \geq \rand{Y}_t} \approx 1.1 \times 10^{-1} $ and $1 \times 10^{-2}$ for $\L=1$ and $5$, respectively. For the PR1 channel and a fixed truncation length $\L = 5$, the marginal distributions $F_{X_t-Y_t}(\sig^2/2 \cdot r)$ are compared across various SNRs in Figure \ref{fig:marg_comp}. As SNR increases, the distributions $F_{X_t-Y_t}(\sig^2/2 \cdot r)$ appear to concentrate more probability mass over negative values of $X_t-Y_t$. This is intuitively expected, because as the SNR increases, the symbol error probability $\Pr{\rand{B}_t \neq \rand{A}_t}=\Pr{\rand{X}_t \geq \rand{Y}_t} = 1 - F_{X_t - Y_t}(0)$ should decrease. From Figure \ref{fig:marg_comp}, the (error) probabilities $\Pr{\rand{X}_t \geq \rand{Y}_t}$ are found to be approximately $ 1.2 \times 10^{-1}, 8 \times 10^{-2}, 3 \times 10^{-2}$, and $1 \times 10^{-2}$, respectively for SNRs 3 to 10 dB. \begin{figure}[!t] \centering \includegraphics[width=.9\linewidth]{marginal_comp.eps} \caption{Comparing the distributions $F_{X_t-Y_t}(\sig^2/2 \cdot r)$ across different SNRs, for a fixed truncation length $\L=5$. The channel is the PR1 channel, see Table \ref{tab:1}.
The probability mass shifts to the left as SNR increases, which is expected.} \label{fig:marg_comp} \end{figure} \newcommand{F_{\randb{X}_t-\randb{Y}_t}(\sig^2/2 \cdot [\stackrel{r_1}{r_2}])}{F_{\randb{X}_{\mathbf{t}_1^2}-\randb{Y}_{\mathbf{t}_1^2}}(\sigma^2/2 \cdot \mathbf{r})} \newcommand{F_{\randb{X}_{\mathbf{t}_1^2}-\randb{Y}_{\mathbf{t}_1^2}}(\sig^2/2 \cdot [r_1,r_2]^T)}{F_{\randb{X}_{\mathbf{t}_1^2}-\randb{Y}_{\mathbf{t}_1^2}}(\sigma^2/2 \cdot [r_1,r_2]^T)} \begin{figure*}[t] \centering \includegraphics[width=.7\linewidth]{joint_pr2.eps} \caption{Joint reliability distribution $F_{\randb{X}_t-\randb{Y}_t}(\sig^2/2 \cdot [\stackrel{r_1}{r_2}])$ computed for both the PR1 and PR2 channels, with chosen truncation lengths $\L=2$ and $5$. } \label{fig:joint_pr2} \end{figure*} \subsection{Joint distribution $F_{\randb{X}_t-\randb{Y}_t}(\sig^2/2 \cdot [\stackrel{r_1}{r_2}])$, here $n=2$, when the noise is i.i.d.} We consider again i.i.d.\ noise $\rand{W}_t$, and the PR1 and PR2 channels (see Table \ref{tab:1}). Here, we choose the SNR to be moderate at 5 dB. For the PR1 channel with memory length $\ell=1$, the truncation length is fixed to be $\L=2$. For the PR2 channel with $\ell=2$, we fix $\L=5$. Figure \ref{fig:joint_pr2} compares the joint distributions $F_{\randb{X}_t-\randb{Y}_t}(\sig^2/2 \cdot [\stackrel{r_1}{r_2}])$, computed for both the PR1 and PR2 channels and for both time lags $|t_1 - t_2 |=1$ (i.e., neighboring symbols) and $|t_1 - t_2 |=7$. The difference between the two cases $|t_1 - t_2 |=1$ and $7$ is subtle (but nevertheless inherent), as observed from the differently labeled points in the figure. For the PR1 channel, the joint symbol error probability $\Pr{\rand{B}_{t_1}\neq \rand{A}_{t_1},\rand{B}_{t_2}\neq \rand{A}_{t_2}}= \Pr{\randb{X}_{\mat{t}_1^2} \geq \randb{Y}_{\mat{t}_1^2}} $ is approximately $ 6 \times 10^{-2}$ and $2 \times 10^{-2} $ for the cases $|t_1 - t_2 |=1$ and $7$, respectively.
Similarly for the PR2 channel, the (error) probability is approximately $ 3 \times 10^{-2}$ and $1 \times 10^{-2}$ for the respective cases $|t_1 - t_2 |=1$ and $7$. Finally note that for the PR1 channel when $|t_1 - t_2 | = 7 $, both MLM reliability values $R_{t_1} = 2/\sigma^2 \cdot |X_{t_1} - Y_{t_1}|$ and $R_{t_2} = 2/\sigma^2 \cdot |X_{t_2} - Y_{t_2}|$ are \emph{independent}; this is because then $|t_1 - t_2 | = 7 > 2(\L+\ell) = 6$, refer to Figure \ref{fig:Trellis}. \begin{figure*}[t] \centering \includegraphics[width=.75\linewidth]{marginal_corr.eps} \caption{Marginal distribution $F_{X_t-Y_t}(\sig^2/2 \cdot r)$ for correlated noises, for the PR2 channel, at SNR $5$ dB. Truncation length $\L = 5$. This figure suggests that the $\L$-truncated MLM tolerates more noise in the frequency region where the signal power is high.} \label{fig:marg_corr} \vspace{-10pt} \end{figure*} \newcommand{\bar{t}}{\bar{t}} \newcommand{\E \Ev{W_t \cdot W_{t+1}} / \sig^2}{\mathbb{E} \Ev{W_t \cdot W_{t+1}} / \sigma^2} \subsection{Marginal distribution $F_{X_t-Y_t}(\sig^2/2 \cdot r)$ when the noise is correlated.} \label{ssect:corr} Consider again the PR2 channel, now in the case where the noise samples $\rand{W}_t$ are \emph{correlated}. For simplicity of argument we consider single-lag correlation, i.e., $\mathbb{E} \Ev{W_t \cdot W_{\bar{t}}} = 0$ for all $|t-\bar{t}| > 1$, and consider the following two cases: \begin{itemize} \item the \emph{correlation coefficient} $\E \Ev{W_t \cdot W_{t+1}} / \sig^2 = 0.5$, and \item the \emph{correlation coefficient} $\E \Ev{W_t \cdot W_{t+1}} / \sig^2 = -0.5$. \end{itemize} We consider a moderate SNR of 5 dB. Figure \ref{fig:marg_corr} shows the distributions $F_{X_t-Y_t}(\sig^2/2 \cdot r)$ computed for both cases. Also in Figure \ref{fig:marg_corr}, the \emph{power spectral densities} of the correlated noise samples $\rand{W}_t$ (see~\cite{Papoulis:Text}, p.~408) are shown for both cases.
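The two single-lag cases can be reproduced with a simple MA(1) construction (an illustrative numpy sketch, not the procedure used for the figures): $W_t = (Z_t + s\,Z_{t+1})/\sqrt{2}$ with i.i.d.\ standard Gaussian $Z_t$ has unit variance and lag-one correlation coefficient $s/2$, and its power spectral density is $S(f) = 1 + 2\rho\cos(2\pi f)$.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(200_001)

results = {}
for s, rho in [(1, 0.5), (-1, -0.5)]:
    # MA(1) noise: unit variance, lag-1 correlation coefficient rho = s/2.
    w = (z[:-1] + s * z[1:]) / np.sqrt(2.0)
    results[rho] = np.mean(w[:-1] * w[1:])  # empirical lag-1 correlation
    # Closed-form PSD S(f) = 1 + 2*rho*cos(2*pi*f) at f = 0 and f = 1/2.
    print(rho, results[rho], 1 + 2 * rho, 1 - 2 * rho)
```

Evaluating $S(f)$ at the band edges for $\rho = \pm 0.5$ makes the qualitative point concrete: the two noise spectra concentrate their power at opposite ends of the band.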
It is apparent that the truncated MLM detector performs better (i.e., a smaller symbol error probability) when the correlation coefficient $\E \Ev{W_t \cdot W_{t+1}} / \sig^2 = -0.5$. This is explained intuitively as follows. The detector should be able to tolerate more noise in the signaling frequency region. Observe the PR2 \emph{frequency response}~\cite{PR,Immink} displayed in Figure \ref{fig:marg_corr}. When the correlation coefficient equals $\E \Ev{W_t \cdot W_{t+1}} / \sig^2 = -0.5$, the noise power is strongest amongst signaling frequencies, and the symbol error probability $\Pr{\rand{B}_t \neq \rand{A}_t}= \Pr{\rand{X}_t \geq \rand{Y}_t}$ is observed to be the lowest (approximately $ 8 \times 10^{-2}$). On the other hand, when the correlation coefficient is $\E \Ev{W_t \cdot W_{t+1}} / \sig^2 = 0.5$, the noise is strongest at frequencies near the \emph{spectral null} of the PR2 channel, and the (error) probability $\Pr{\rand{X}_t \geq \rand{Y}_t} $ is the highest (approximately $ 1.6 \times 10^{-1}$). Note that in the former case, $\E \Ev{W_t \cdot W_{t+1}} / \sig^2 = -0.5$, the MLM performs even better than in the i.i.d.\ case, see Figure \ref{fig:marg_corr}. In the i.i.d.\ case, the error probability $\Pr{\rand{X}_t \geq \rand{Y}_t} \approx 1.3 \times 10^{-1}$. \begin{rem} One intuitively expects that similar observations will be made even for other (more complicated) choices for the noise covariance matrix $\Mat{K}_{\pmb{\rand{W}}}$, recall (\ref{eqn:mx_cov}). We stress that our results are general in the sense that we may arbitrarily specify $\Mat{K}_{\pmb{\rand{W}}}$; even if the noise samples $W_t$ are \textbf{non-stationary}, our methods still apply. \end{rem} \begin{figure*}[!t] \centering \includegraphics[width=.8\linewidth]{marginal_rll.eps} \caption{Marginal distributions $F_{X_t-Y_t}(\sig^2/2 \cdot r)$ computed for cases when a run-length limited (RLL) code is present. Here, we compare both the PR4 and dicode (see Table \ref{tab:1}) channels at SNR 5 dB.
The PR4 channel has a spectral null at the Nyquist frequency, but the dicode channel does not. We see how a simple RLL code, which prevents neighboring transitions, aids channels with spectral nulls at the Nyquist frequency.} \label{fig:marg_rll} \end{figure*} \subsection{Marginal distribution $F_{X_t-Y_t}(\sig^2/2 \cdot r)$ when the noise is i.i.d., and when run-length limited (RLL) codes are used.} \label{ssect:RLL} We demonstrate Procedure \ref{proce:FT} from Subsection \ref{ssect:33}, which is used to compute the distribution $F_{X_t-Y_t}(\sig^2/2 \cdot r)$ when a modulation code is present in the system. In particular, consider a \emph{run-length limited (RLL)} code; we test the simple RLL code that prevents neighboring symbol transitions~\cite{RLL,Immink}. This code improves transmission over ISI channels that have spectral nulls near the Nyquist frequency~\cite{Immink}; one such channel is the PR4, see Table \ref{tab:1}. Figure \ref{fig:marg_rll} shows $F_{X_t-Y_t}(\sig^2/2 \cdot r)$ computed for both the PR4 and the dicode channels; see Table \ref{tab:1}. The PR4 channel has a spectral null at the Nyquist frequency (recall Subsection \ref{ssect:corr}), but the dicode channel does not. It is clearly seen from Figure \ref{fig:marg_rll} that the RLL code improves the performance when used on the PR4 channel. For the PR4 channel, the distribution $F_{X_t-Y_t}(\sig^2/2 \cdot r)$ appears to concentrate more probability mass over negative values of $X_t - Y_t$ (similar to the observations made in Figure \ref{fig:marg_comp} when there is an SNR increase). The error probability $\Pr{\rand{B}_t\neq\rand{A}_t}=\Pr{\rand{X}_t \geq \rand{Y}_t} = 1 - F_{X_t -Y_t}(0) $ decreases by roughly a factor of 2, dropping from approximately $9.5 \times 10^{-2}$ to $4 \times 10^{-2}$. On the other hand, the RLL code worsens the performance when applied to the dicode channel.
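The RLL constraint used here forbids transitions at two consecutive time instants. A small sketch makes the constraint concrete (illustrative only; the text does not specify the encoder beyond this rule), including a brute-force count of admissible sequences:

```python
from itertools import product

def no_adjacent_transitions(bits):
    """True iff the sequence never flips on two consecutive time steps,
    i.e. there are no neighboring symbol transitions (simple RLL constraint)."""
    flips = [bits[i] != bits[i + 1] for i in range(len(bits) - 1)]
    return not any(flips[i] and flips[i + 1] for i in range(len(flips) - 1))

def count_valid(n):
    """Count the length-n binary sequences satisfying the constraint (brute force)."""
    return sum(no_adjacent_transitions(s) for s in product((0, 1), repeat=n))

print(no_adjacent_transitions([0, 0, 1, 1, 0, 0]))  # True
print(no_adjacent_transitions([0, 1, 0, 0]))        # False: flips at t=0 and t=1
```

Counting admissible words versus length gives the code rate penalty paid in exchange for suppressing the troublesome transition patterns.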
For the dicode channel, $F_{\rand{{X}}_t-{\rand{Y}}_t}(r)$ concentrates more probability mass over positive values of $X_t - Y_t$ (similar to the observations made in Figure \ref{fig:marg_comp} when there is an SNR decrease), and the (error) probability $\Pr{\rand{X}_t \geq \rand{Y}_t} $ increases from approximately $8.8 \times 10^{-2}$ to $1.35\times 10^{-1}$. \subsection{Marginal distribution of $2/\sigma^2\cdot (\rand{X}_t - \rand{Y}_t)$, when conditioning on neighboring error events $\Ev{\rand{B}_{t-1}\neq \rand{A}_{t-1}}$ and $\Ev{\rand{B}_{t+1} \neq \rand{A}_{t+1}}$} \newcommand{\Xtt}[1]{\randb{X}_{#1}} \newcommand{\Ytt}[1]{\randb{Y}_{#1}} \newcommand{r}{r} Here we consider three \emph{neighboring} symbol reliabilities, i.e. we consider $\randb{R}_{\mat{t}_1^3} = [\rand{R}_{t-1},\rand{R}_t, \rand{R}_{t+1}]^T$. We consider the following two conditional distributions: \begin{align} \mbox{(a)}&~~~~\Pr{\rand{X}_t - \rand{Y}_t \leq r | \rand{X}_{t-1} < \rand{Y}_{t-1}, \rand{X}_{t+1} < \rand{Y}_{t+1}} \nonumber\\ &= \frac{1}{\mathcal{C}_1} \cdot F_{\Xtt{\mat{t}_1^3}-\Ytt{\mat{t}_1^3}}(0,r,0),~\mbox{and }\nonumber\\ \mbox{(b)}&~~~~\Pr{\rand{X}_t - \rand{Y}_t \leq r | \rand{X}_{t-1} \geq \rand{Y}_{t-1}, \rand{X}_{t+1} \geq \rand{Y}_{t+1}}\nonumber\\ &= \frac{1}{\mathcal{C}_2} \left(F_{\rand{X}_t-\rand{Y}_t}(r) - F_{\Xtt{\mat{t}_2^3}-\Ytt{\mat{t}_2^3}}(r,0)\right. \nonumber\\ &~~~~~~~~~~ \left. - F_{\Xtt{\mat{t}_1^2}-\Ytt{\mat{t}_1^2}}(0,r) + F_{{\Xtt{\mat{t}_1^3}-\Ytt{\mat{t}_1^3}}}(0,r,0) \right), \nonumber \end{align} where the normalization constants $\mathcal{C}_1$ and $\mathcal{C}_2$ equal the probabilities of the (respective) events that were conditioned on. Distribution (a) is conditioned on the event that both neighboring symbols are \emph{correct}, i.e. $\Ev{\rand{B}_{t-1} = \rand{A}_{t-1}, \rand{B}_{t+1} = \rand{A}_{t+1}}$. Distribution (b) is conditioned on the event that both neighboring symbols are \emph{wrong}, i.e.
$\Ev{\rand{B}_{t-1} \neq \rand{A}_{t-1}, \rand{B}_{t+1} \neq \rand{A}_{t+1}}$. For the PR1, PR2 and PR4 channels, both conditional distributions (a) and (b) are shown in Figures \ref{fig:triple_comp2} and \ref{fig:triple_comp}. We compare two different SNRs, 3 and 10 dB. For comparison purposes, we also show the \emph{unconditioned} distribution $F_{\Xtk-\Ytk}(\sig^2/2\cdot \mat{r})$ in both Figures \ref{fig:triple_comp2} and \ref{fig:triple_comp}. We make the following observations. In all considered cases, distribution (a) is seen to be similar to the {unconditioned} distribution. However, distribution (b) is observed to vary for all the considered cases. Take for example the PR2 channel: we see from Figure \ref{fig:triple_comp} that distribution (b) has probability mass concentrated to the right of the unconditioned $F_{X_t-Y_t}(\sig^2/2 \cdot r)$. This is true for both SNRs 3 and 10 dB. In contrast, for the PR1 channel the MLM detector behaves differently at the two SNRs. We see from Figure \ref{fig:triple_comp2} that at SNR 10 dB, the distribution (b) has a lower symbol error probability than that of the unconditioned $F_{X_t-Y_t}(\sig^2/2 \cdot r)$. At SNR 3 dB, however, the opposite is observed, i.e. the symbol error probability is higher than that of the distribution $F_{X_t-Y_t}(\sig^2/2 \cdot r)$. This is because at SNR 10 dB, errors occur \emph{sparsely}, interspersed with correct symbols; it is uncommon to encounter \emph{consecutive} symbols in error. Hence, conditioned on the adjacent symbols $B_{t-1}$ and $B_{t+1}$ being wrong, it is uncommon for $B_t$ also to be wrong, as this is the event in which three consecutive symbols are erroneous. Finally, the observations made for the PR4 channel are again different. We notice that both distributions (a) and (b) practically equal the unconditioned distribution $F_{X_t-Y_t}(\sig^2/2 \cdot r)$. This is because the even/odd output subsequences of the PR4 channel are independent of each other.
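Distribution (b) above is assembled from joint CDFs by inclusion-exclusion. As a sanity check of that bookkeeping, one can evaluate the same combination for three independent standard-normal toy differences (the true reliability differences are of course correlated; independence is assumed here only so that every joint CDF factorizes into products of the normal CDF $\Phi$):

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_cdf_b(r):
    """Inclusion-exclusion form of distribution (b) for three i.i.d. N(0,1)
    toy differences D_{t-1}, D_t, D_{t+1}; checks only the combinatorics."""
    F1 = Phi(r)                   # marginal F_{X_t - Y_t}(r)
    F2 = Phi(r) * Phi(0.0)        # both pair CDFs, (t-1, t) and (t, t+1)
    F3 = Phi(0.0) ** 2 * Phi(r)   # triple CDF
    C2 = (1.0 - Phi(0.0)) ** 2    # P(both neighbors in error)
    return (F1 - 2.0 * F2 + F3) / C2

# For independent differences the conditioning has no effect: result is Phi(r).
print(abs(conditional_cdf_b(1.0) - Phi(1.0)) < 1e-12)  # True
```

That the conditioned and unconditioned CDFs coincide under independence is exactly the behavior the paper reports for the PR4 channel, whose even/odd subsequences decouple.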
\begin{figure}[!t] \centering \includegraphics[width=.9\linewidth]{triple_comp2.eps} \caption{Marginal distributions of ${\rand{X}_t-\rand{Y}_t}$ computed for the PR1 channel, obtained when conditioning on either of the events $\Ev{\rand{B}_{t-1} \neq \rand{A}_{t-1}, \rand{B}_{t+1} \neq \rand{A}_{t+1}}$ or $\Ev{\rand{B}_{t-1} = \rand{A}_{t-1}, \rand{B}_{t+1} = \rand{A}_{t+1}}$. These two events correspond to error (or non-error) events at neighboring time instants $t-1$ and $t+1$. The solid black line represents the unconditioned marginal distribution of ${\rand{X}_t-\rand{Y}_t}$. } \label{fig:triple_comp2} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=.8\linewidth]{triple_comp.eps} \caption{Marginal distributions of ${\rand{X}_t-\rand{Y}_t}$ computed for both the PR2 and PR4 channels, obtained when conditioning on either of the events $\Ev{\rand{B}_{t-1} \neq \rand{A}_{t-1}, \rand{B}_{t+1} \neq \rand{A}_{t+1}}$ or $\Ev{\rand{B}_{t-1} = \rand{A}_{t-1}, \rand{B}_{t+1} = \rand{A}_{t+1}}$. These two events correspond to error (or non-error) events at neighboring time instants $t-1$ and $t+1$. The solid black line represents the unconditioned marginal distribution of $2/\sigma^2 \cdot (\rand{X}_t-\rand{Y}_t)$. } \label{fig:triple_comp} \end{figure*} \section{Conclusion} \label{sect:con} In this paper, we derived closed-form expressions for both i) the reliability distributions $F_{\Xtk-\Ytk}(\sig^2/2\cdot \mat{r})$, and ii) the symbol error probabilities $\Pr{\rand{B}_{t_1}\neq \rand{A}_{t_1}, \cdots, \rand{B}_{t_n}\neq \rand{A}_{t_n}}$, for the $\L$-truncated MLM detector. Our results hold jointly for any number $n$ of arbitrarily chosen time instants $t_1,t_2,\cdots, t_n$. The general applicability of our results has been demonstrated for a variety of scenarios. Efficient Monte-Carlo procedures that utilize dynamic programming simplifications have been given; these can be used to numerically evaluate the closed-form expressions.
It would be interesting to further generalize the exposition to consider \emph{infinite impulse response} (IIR) filters, such as those encountered in \emph{convolutional codes}.
\section{Introduction} \label{sect:Intro} The field of ultracold atomic gases has been rapidly growing during the past decades. One of the main sources of growth is the large degree of tunability, which allows ultracold gases to be employed as model quantum systems \cite{varenna08,blochdalibard08}. In particular the strength of the two-body interaction parameter, captured by the scattering length $a$, can be tuned over many orders of magnitude. A quantum system can be made repulsive ($a>0$), attractive ($a<0$), non-interacting ($a=0$) or strongly interacting ($|a|\rightarrow \infty$) in a continuous manner by means of Feshbach resonances~\cite{chin08}. These resonances are induced by external fields: magnetically induced Feshbach resonances are conveniently used for alkali-metal atoms, while optically induced Feshbach resonances seem more promising for e.g.~alkaline-earth atoms. In this paper we consider magnetically induced resonances only. Feshbach resonances depend crucially on the existence of an internal atomic structure, which can be modified by external fields. For alkali-metal atoms, this structure is initiated by the hyperfine interaction, which can be energetically modified by a magnetic field via the Zeeman interaction. For a given initial spin state, its collision threshold and its two-body bound states depend in general differently on the magnetic field. A Feshbach resonance occurs when the threshold becomes degenerate with a bound state. Accurate knowledge of the Feshbach resonance structure is crucial for experiments. The two-body system has to be solved to obtain the bound state solutions. Since the interactions have both orbital and spin degrees of freedom, this results in a set of radially coupled Schr\"odinger equations in the spin basis. The set of equations is referred to as the Coupled Channels equations~\cite{stoof88}, and can be solved numerically.
Quite often it is far from trivial to obtain reliable predictions for the two-body problem, for several reasons: the ab-initio interaction potentials are usually not accurate enough to describe ultracold collisions. Therefore these potentials have to be modelled by adding and modifying potential parameters. A full calculation for all spin combinations and all potential variations is very time-consuming. Moreover, one can easily overlook some features of the bound state spectrum due to numerical issues such as grid sizes and numerical accuracy. This is also due to a lack of insight into the general resonance structure, which is often not obvious from the numerical results. Given the above, there is certainly a need for fast and simple models to predict and describe Feshbach resonances, which allow for a detailed insight into the resonance structure. In the last decade various simple models have been developed for ultracold collisions \cite{houbiers98,vogels98,hanna09}, which vary significantly in terms of complexity, accuracy and applicability. In all these models the radial equation plays a central role in describing the Feshbach resonances. In this paper we present in detail the Asymptotic Bound-state Model (ABM). This model, briefly introduced in Ref.~\cite{wille} and extended in Ref.~\cite{dePRL}, was successfully applied to the Fermi-Fermi mixture of $^{6}$Li and $^{40}$K. In Ref.~\cite{wille} the observed loss features were assigned to 13 Feshbach resonances with high accuracy, and the obtained parameters served as an input to a full coupled channels analysis. The ABM builds on an earlier model by Moerdijk et al.~\cite{moerdijk} for homonuclear systems, which was also applied by Stan et al.~\cite{stan} to heteronuclear systems. This earlier model neglects the mixing of singlet and triplet states, thereby allowing the use of uncoupled orbital and spin states.
In the ABM we make use of the radial singlet and triplet eigenstates and include the coupling between them. This crucial improvement makes the whole approach in principle exact, and it allows for a high degree of accuracy given a limited number of parameters. We show how we can systematically extend the most simple version of the ABM to predict the width of the Feshbach resonances by including threshold behavior. Additionally, we allow for the inclusion of multiple vibrational levels and a parameter for the spatial wavefunction overlap. The fact that the ABM is computationally light provides the possibility to map out the available Feshbach resonance positions and widths for a certain system, as has been shown in Ref. \cite{dePRL}. Throughout the paper we will use the $^{6}$Li-$^{40}$K mixture as a model system to illustrate all introduced aspects. Additionally, we present ABM calculations on the $^{40}$K-$^{87}$Rb mixture to demonstrate its validity on a more complex system, comparing it with accurate coupled channel calculations \cite{juliennePrivate09}. The case of metastable helium atoms, where each atom has an electron spin of $s=1$ and the interaction occurs through singlet, triplet and quintet interaction potentials, is discussed elsewhere \cite{heliumABM}. In the following we describe the ABM (Sec. \ref{sect:ABM}) and various methods to obtain the required input parameters. In Sec. \ref{sect:Application} the ABM is applied to the three physical systems and in Sec. \ref{sect:OpenChannel} we introduce the coupling to the open channel to predict the width of Feshbach resonances. In Section \ref{sect:Discussion} we summarize our findings and comment on further extensions of the model. \section{Asymptotic Bound-state Model\label{sect:ABM}} In this section we give a detailed description of the asymptotic bound state model.
In Section \ref{sect:ABMoverview} we start with a general overview of the model, which is described in more detail in the subsequent Sections \ref{sec:InternalEnergy} to \ref{sect:ABS}. \subsection{Overview\label{sect:ABMoverview}} In the ABM we consider two atoms, $\alpha $ and $\beta $, in their electronic ground-state. To search for Feshbach resonances we use the effective Hamiltonian \cite{tiesinga} \begin{equation} \mathcal{H}=\mathcal{H}^{\mathrm{rel}}+\mathcal{H}^{\mathrm{int}}. \label{1} \end{equation}Here $\mathcal{H}^{\mathrm{rel}}=\mathbf{p}^{2}/2\mu +\mathcal{V}$ describes the relative motion of the atoms in the center of mass frame: the first term is the relative kinetic energy, with $\mu $ the reduced mass, and the second term is the effective interaction potential $\mathcal{V}$. The Hamiltonian $\mathcal{H}^{\mathrm{int}}$ stands for the internal energy of the two atoms. We will represent $\mathcal{H}^{\mathrm{int}}$ by the hyperfine and Zeeman contributions to the internal energy (Section \ref{sec:InternalEnergy}). Therefore, $\mathcal{H}^{\mathrm{int}}$ is diagonal in the Breit-Rabi pair basis $\{|\alpha \beta \rangle \}$ with eigen-energies $E_{\alpha \beta }$ and typically dependent on the magnetic field $B$. The internal states $|\alpha \beta \rangle $ in combination with the quantum number $l$ for the angular momentum of the relative motion define the \textit{scattering channels} $\left( \alpha \beta ,l\right) $. Because the effective potential $\mathcal{V}$ is in general not diagonal in the pair basis $\{|\alpha \beta \rangle \}$, the internal states of the atoms can change in collisions. To include the coupling of the channels by $\mathcal{V}$, we transform from the pair basis to a spin basis $\{|\sigma \rangle \}$ in which $\mathcal{H}^{\mathrm{rel}}$ is diagonal.
We will restrict ourselves (Section \ref{sec:RelativeHamiltonian}) to effective potentials $\mathcal{V}$ which are diagonal in $S$, the quantum number of the total electron spin $\mathbf{S=s}_{\alpha }\mathbf{+s}_{\beta }$, where $\mathbf{s}_{\alpha }$ and $\mathbf{s}_{\beta }$ are the electron spins of the colliding atoms. The effective potential can thus be written as $\mathcal{V}(r)=\sum_{S}|S\rangle V_{S}(r)\langle S|$, where $r$ is the interatomic separation. The examples discussed in this paper are alkali atoms $\left( s=1/2\right)$, which lead to a decomposition into singlet $(S=0)$ and triplet $(S=1)$ potentials. The eigenstates of $\mathcal{H}^{\mathrm{rel}}$ (bound-states and scattering states) are solutions of the Schr\"{o}dinger equations for a given value of $l$, using effective potentials $V_{S}^{l}(r)$ in which the centrifugal forces are included (Section \ref{sec:RelativeHamiltonian}). Since the effective potentials are central interactions, a separation of variables can be performed to describe the wavefunction as a product of a radial and angular part, $|\Psi\rangle = |\psi\rangle | Y^l_{m_l} \rangle$. The ABM solves the Schr\"{o}dinger equation for the Hamiltonian (\ref{1}) starting from a restricted set of (typically just a few) \textit{discrete} eigenstates $|\psi _{\nu}^{Sl}\rangle |Y^l_{m_l}\rangle $ of $\mathcal{H}^{\mathrm{rel}}$, using their binding energies $\epsilon _{\nu}^{Sl} $ as free parameters. The continuum states are not used in the model. The set $\{|\psi _{\nu}^{Sl}\rangle \}$ corresponds to the bound-state wavefunctions $\psi _{\nu}^{Sl}(r)=\langle r|\psi _{\nu}^{Sl}\rangle $ in the effective potentials $V_{S}^{l}(r)$, with $\nu $ and $l$ being the vibrational and rotational quantum numbers, respectively. The ABM solutions are obtained by diagonalization of the Hamiltonian (\ref{1}) using the restricted set of bound states $\{|\psi _{\nu}^{Sl}\rangle |\sigma \rangle \}$.
This is equivalent to solving the secular equation \begin{equation} \det |(\epsilon _{\nu}^{Sl}-E_{b})\delta _{\nu l\sigma, \nu ^{\prime} l^{\prime }\sigma ^{\prime }} +\langle \psi _{\nu}^{Sl}|\psi _{\nu^{\prime }}^{S^{\prime }l}\rangle \langle \sigma |\mathcal{H}^{\mathrm{int}} |\sigma ^{\prime }\rangle |=0, \label{eq:secular} \end{equation} where we have used the orthonormality of $|Y^l_{m_l}\rangle$. The roots $E_{b}$ represent the eigenvalues of $\mathcal{H}$ which are shifted with respect to the \textit{bare} levels $\epsilon _{\nu}^{Sl}$ due to the presence of the coupling term $\mathcal{H}^{\mathrm{int}}$. The roots $E_{b}$ will be accurate as long as the influence of the continuum solutions is small. Since the bound-state wavefunctions $\psi _{\nu}^{Sl}(r) $ are orthonormal for a given value of $S$ and $l$, the Franck-Condon factors are $\langle \psi _{\nu}^{Sl}|\psi _{\nu^{\prime }}^{S l}\rangle =\delta _{\nu \nu ^{\prime }}$. The eigenstates of $\mathcal{H}$ define bound states in the system of \textit{coupled channels}. We define the \textit{entrance channel} $\left( \alpha \beta ,l\right) _{0}$ by the internal states $|\alpha \beta \rangle $ for which we want to find Feshbach resonances with a given angular momentum state of $l=0,1,\cdots $. The energy $E_{\alpha \beta }^{0}(B)$ of two free atoms at rest in the entrance channel defines the \textit{threshold energy}, which separates the continuum of scattering states from the discrete set of bound states. In the ABM we define $\mathcal{H}$ relative to this energy. With this convention the threshold energy always corresponds to $E=0$, irrespective of the magnetic field. Further, we consider only entrance channels that are stable against spin exchange relaxation. Since in the ABM we only consider bound states, and therefore are in the regime $E<0$, all channels are energetically \textit{closed}. 
Collisions in the entrance channel would have a collision energy of $E>0$ and the entrance channel would be energetically \textit{open}, i.e. the atoms are not bound and have a finite probability of reaching $r=\infty$ in this channel. Although all channels are closed in the ABM, we will refer to the entrance channel as the open channel, anticipating the inclusion of threshold effects in Section \ref{sect:OpenChannel}. In Sections \ref{sec:InternalEnergy}-\ref{sect:ABS} we discuss the ABM in its simplest form, where level broadening by coupling to the continuum is neglected \cite{wille}. In this approximation, Feshbach resonances are predicted for magnetic fields $B_{0}$ where a bound level crosses the threshold, $E_{b}=\mu _{rel}(B-B_{0})$ with $\mu _{rel}\equiv \partial E_{b}/\partial B|_{B=B_{0}}$, and where coupling to the continuum is in principle allowed by conservation of the angular momentum. To determine the crossings, the diagonalization (\ref{eq:secular}) has to be carried out as a function of magnetic field. The procedure becomes particularly simple when the coupling strength $\mathcal{H}^{\mathrm{int}}$ is small compared to the separation of the ro-vibrational levels in the various potentials $V_{S}^{l}(r)$ because in this case the basis set can be restricted to only the least bound level in each of the potentials $V_{S}^{l}(r)$. In this case the set of levels $\{\epsilon _{\nu}^{Sl}\}$ reduces to a small number, $\{\epsilon ^{Sl}\}$, with $|s_{\alpha }-s_{\beta }|\leq S\leq s_{\alpha }+s_{\beta }$. In the case of the alkalis only two levels, $\epsilon ^{0l}$ and $\epsilon ^{1l}$, are relevant for each value of $l$.
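With the basis restricted to the least-bound singlet and triplet levels, the secular problem reduces to diagonalizing a small matrix at each field. A minimal two-level sketch (all numbers illustrative, echoing the fictitious example discussed later in the text rather than any real atom pair; the Zeeman shift is taken linear on the triplet entry, and the off-diagonal coupling is the hyperfine matrix element scaled by a Franck-Condon factor):

```python
import numpy as np

def abm_levels(B, eps_s=-10.0, eps_t=-5.0, a_hf=1.0, eta=0.9, mu_rel=1.0):
    """Toy two-level secular problem: bare singlet/triplet binding energies
    on the diagonal, hyperfine coupling scaled by the Franck-Condon factor
    eta off the diagonal. Returns the coupled levels E_b in ascending order."""
    H = np.array([[eps_s,       eta * a_hf],
                  [eta * a_hf, eps_t + mu_rel * B]])
    return np.linalg.eigvalsh(H)

# Sweep the field and locate the threshold crossing E_b = 0, i.e. the
# magnetic field B0 at which a Feshbach resonance is predicted.
Bs = np.linspace(0.0, 10.0, 2001)
upper = np.array([abm_levels(B)[1] for B in Bs])
B0 = Bs[np.argmin(np.abs(upper))]
```

For these toy numbers the upper coupled level crosses threshold near $B_0 \approx 4.9$ (in the toy units), slightly below the bare-triplet crossing at $B=5$ because the coupling pushes the level down.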
Further, as will be shown in Section \ref{sect:ABS}, the least bound states have the long-range behavior of \textit{asymptotically bound states}, which makes it possible to estimate the value of Franck-Condon factors $\langle \psi _{\nu}^{Sl}|\psi _{\nu^{\prime }}^{S^{\prime }l}\rangle $ without detailed knowledge of the short-range behavior of the potentials $V_{S}^{l}(r)$. This reduces the diagonalization (\ref{eq:secular}) to the diagonalization of a spin Hamiltonian. Treating the $\{\epsilon ^{Sl}\}$ as fitting parameters, one can determine their values by comparison with a handful of experimentally observed resonances. Once these $\{\epsilon ^{Sl}\}$ are known, the positions of all Feshbach resonances associated with these levels can be predicted. As this procedure does not involve a numerical solution of the Schr\"{o}dinger equation for the relative motion, it provides an enormous simplification over coupled channels calculations. In Section \ref{sect:OpenChannel} we turn to the extended version of the ABM in which also the coupling to the open channel is taken into account. The presence of such a coupling gives rise to a shift $\Delta$ of the uncoupled levels and, above threshold, to a broadening $\Gamma$ \cite{dePRL}. The width of a Feshbach resonance is related to the lifetime $\tau =\hbar /\Gamma $ of the bound-state above threshold and provides a measure for the coupling to the continuum. In magnetic field units the width $\Delta B$ is related to the scattering length by the expression \cite{moerdijk} \begin{equation} \label{eq:dispersiveA} a(B)=a_{bg}\left( 1-\frac{\Delta B}{B-B_{0}}\right), \end{equation} where $a_{bg}$ is the background scattering length. Interestingly, the width $\Delta B$ can also be determined with the same restricted basis set $\{|\psi _{\nu}^{Sl}\rangle \}$, which does not include continuum states.
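The dispersive formula for $a(B)$ is straightforward to evaluate numerically; the sketch below uses invented parameter values purely for illustration:

```python
def scattering_length(B, a_bg, B0, dB):
    """Dispersive scattering length a(B) = a_bg * (1 - dB / (B - B0)) near an
    isolated Feshbach resonance: it diverges at B = B0 and crosses zero at
    B = B0 + dB. All parameters in consistent (here invented) units."""
    return a_bg * (1.0 - dB / (B - B0))

# Illustrative numbers only: a_bg = 50, B0 = 500, width dB = 10 (field units).
print(scattering_length(510.0, 50.0, 500.0, 10.0))  # 0.0 (the zero crossing)
print(scattering_length(495.0, 50.0, 500.0, 10.0))  # 150.0
```

Note the asymmetry around $B_0$: on one side of the pole $a(B)$ is enhanced above $a_{bg}$, on the other it passes through zero, which is what makes the width $\Delta B$ experimentally meaningful.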
In Section \ref{sect:OpenChannel} this is shown for the simplest case where only a single level is resonant and the resonance width can be found from the coupling of two bound-state levels: the resonant level and the least-bound level in the entrance channel. \subsection{Internal energy\label{sec:InternalEnergy}} To describe the internal energy of the colliding atoms we restrict the atomic Hamiltonian to the hyperfine and Zeeman interactions \begin{eqnarray} \mathcal{H}^{\mathrm{A}} &=&\mathcal{H}^{\mathrm{hf}} + \mathcal{H}^{\mathrm{Z}}\\ & = & \frac{a_{\mathrm{hf}}}{\hbar ^{2}}\mathbf{i}\cdot \mathbf{s}+(\gamma _{e}\mathbf{s}-\gamma _{i}\mathbf{i})\cdot \mathbf{B}, \end{eqnarray} where $\mathbf{s}$ and $\mathbf{i}$ are the electron and nuclear spins respectively, $\gamma _{e}$ and $\gamma _{i}$ are their respective gyromagnetic ratios, $a_{\mathrm{hf}}$ is the hyperfine energy and $\mathbf{B}$ is the externally applied magnetic field. The hyperfine interaction couples the electron and nuclear spins, which add to a total angular momentum $\mathbf{f}=\mathbf{s}+\mathbf{i}$. In Fig.~\ref{fig:LiKhyperfine} the well-known Breit-Rabi diagrams of $^{6}$Li and $^{40}$K are shown; the curves correspond to the eigenvalues of $\mathcal{H}^{\mathrm{A}}$. The one-atom hyperfine states are labeled $|fm_{f}\rangle $, although $f$ is only a good quantum number in the absence of an external magnetic field. \begin{figure}[ht!] \includegraphics[width=8.3088cm]{LiKhyperfine} \caption{The single atom hyperfine diagrams for $^{6}$Li and $^{40}$K.
The curves correspond to the eigenvalues of $\mathcal{H}^{\mathrm{A}}$ and are labeled by the zero-field quantum numbers $|fm_{f}\rangle $.} \label{fig:LiKhyperfine} \end{figure} By labeling the colliding atoms with $\alpha $ and $\beta $, the two-body internal Hamiltonian becomes $\mathcal{H}^{\mathrm{int}}=\mathcal{H}_{\mathrm{\alpha }}^{\mathrm{A}}+\mathcal{H}_{\mathrm{\beta }}^{\mathrm{A}}$ and the spin state of the colliding pair can be described in the Breit-Rabi pair basis $|\alpha \beta \rangle \equiv |f_{\alpha }m_{f_{\alpha }},f_{\beta }m_{f_{\beta }}\rangle \equiv |f,m_{f}\rangle _{\alpha }\otimes |f,m_{f}\rangle _{\beta }$. The corresponding energy of two free atoms at rest defines the $B$-dependent threshold energy introduced in Section \ref{sect:ABMoverview}. \subsection{Relative Hamiltonian\label{sec:RelativeHamiltonian}} The bound eigenstates of $\mathcal{H}^{\mathrm{rel}}$ play a central role in the determination of the coupled bound states of $\mathcal{H}$ responsible for the Feshbach resonances. The relative Hamiltonian includes the effective interaction $\mathcal{V}$ resulting from all Coulomb interactions between the nuclei and electrons of both atoms \footnote{The much weaker magnetic dipole-dipole interactions are neglected.}. This effective interaction is isotropic and depends only on the quantum number $S$ associated with the total electron spin. For these central potentials the two-body solutions will depend on the orbital quantum number $l$, but not on its projection $m_{l}$. In the absence of any anisotropic interaction both $l$ and $m_{l}$ are good quantum numbers of $\mathcal{H}^{\mathrm{rel}}$ and $\mathcal{H}$. We specify the ABM basis states as $\{|\psi _{\nu}^{Sl}\rangle |\sigma \rangle \}$.
Here the spin basis states $|\sigma \rangle \equiv |SM_{S}\mu _{\alpha }\mu _{\beta }\rangle $ are determined by the spin quantum number $S $ and the magnetic quantum numbers $M_{S}$, $\mu _{\alpha }$, and $\mu _{\beta }$ of the $\mathbf{S}$, $\mathbf{i}_{\alpha }$ and $\mathbf{i}_{\beta }$ operators, respectively. The sum $M_{F}=M_{S}+\mu _{\alpha }+\mu _{\beta }$ is conserved by the Hamiltonian $\mathcal{H}$. This limits the number of spin states which have to be included in the set $\{|\sigma \rangle\}$. The bound-state wavefunctions $\psi _{\nu}^{Sl}(r)$ for the singlet and triplet potentials, characterized by the vibrational and rotational quantum numbers $\nu $ and $l $, satisfy the reduced radial wave equation of $\mathcal{H}^{\mathrm{rel}}$ for specific values of $S$ and $l$, \begin{equation} \left[ -\frac{\hbar ^{2}}{2\mu }\frac{d^{2}}{dr^{2}}+V_{S}^{l}(r)\right] \psi _{\nu}^{Sl}(r)=\epsilon _{\nu}^{Sl}\psi _{\nu}^{Sl}(r). \label{eq:Hrel1D} \end{equation}Here $V_{S}^{l}(r)\equiv V_{S}(r)+l(l+1)\hbar ^{2}/(2\mu r^{2})$ represents the interaction potentials including the centrifugal forces. The corresponding binding energies are given by $\epsilon _{\nu}^{Sl}$. In this paper we mainly focus on heteronuclear systems; however, the ABM works equally well for homonuclear systems. In the latter case one would rather use a symmetrized spin basis $|\sigma \rangle \equiv |SM_{S}IM_{I}\rangle $, where $I$ is the total nuclear spin and $M_{I}$ is the magnetic quantum number for $\mathbf{I} = \mathbf{i}_{\alpha} + \mathbf{i}_{\beta}$ as described in Ref.~\cite{moerdijk}.
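The reduced radial equation above can be solved on a grid by finite differences. The sketch below uses a toy Lennard-Jones well in place of a real singlet or triplet potential (all units and parameter values are illustrative) and returns the negative eigenvalues as the bound-level binding energies $\epsilon_{\nu}^{Sl}$:

```python
import numpy as np

def radial_bound_states(V, mu=1.0, hbar=1.0, r_min=0.5, r_max=10.0, n=1000, l=0):
    """Finite-difference eigenvalues of the reduced radial equation
    [-hbar^2/(2 mu) d^2/dr^2 + V(r) + l(l+1) hbar^2/(2 mu r^2)] psi = eps psi
    with hard walls at r_min and r_max. Returns only the bound levels eps < 0."""
    r = np.linspace(r_min, r_max, n)
    h = r[1] - r[0]
    kin = hbar ** 2 / (2.0 * mu * h ** 2)
    diag = 2.0 * kin + V(r) + l * (l + 1) * hbar ** 2 / (2.0 * mu * r ** 2)
    off = -kin * np.ones(n - 1)
    eps = np.linalg.eigvalsh(np.diag(diag) + np.diag(off, 1) + np.diag(off, -1))
    return eps[eps < 0.0]

# Toy Lennard-Jones potential (well depth 100, range 1) standing in for V_S(r).
lj = lambda r: 4.0 * 100.0 * (r ** -12 - r ** -6)
bound = radial_bound_states(lj)   # binding energies epsilon_nu < 0, ascending
```

The hard wall at `r_min` is harmless here because the toy potential is strongly repulsive at short range; for realistic potentials one would instead impose an accumulated-phase boundary condition, as discussed below.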
\subsection{Diagonalization of $\mathcal{H}$\label{sec:FullHamiltonian}} In the ABM basis $\{|\psi _{\nu}^{Sl}\rangle |\sigma \rangle \}$ the Zeeman term $\mathcal{H}_{\mathrm{Z}}$ is diagonal with \begin{equation} E_{\sigma }^{Z} = \langle \sigma |\mathcal{H}_{\mathrm{Z}}|\sigma \rangle=\hbar (\gamma _{e}M_{S}-\gamma _{\alpha }\mu _{\alpha }-\gamma _{\beta }\mu _{\beta })B \end{equation} the Zeeman energy of state $|\sigma \rangle$. As the orbital angular momentum is conserved, we can solve Eq.~(\ref{eq:secular}) separately for every $l$ subspace. Since the set $\{|\psi _{\nu}^{Sl}\rangle |\sigma \rangle \}$ is orthonormal, the secular equation takes (for a given value of $l$) the form \begin{equation} \det |(\epsilon _{\nu}^{Sl}+E_{\sigma}^{Z} -E_{b})\delta _{\nu \sigma ,\nu ^{\prime} \sigma ^{\prime }}+\eta _{\nu ,\nu^{\prime }}^{S,S ^{\prime }}\langle \sigma |\mathcal{H}_{\mathrm{hf}}|\sigma ^{\prime }\rangle |=0, \end{equation} where $\eta _{\nu ,\nu^{\prime }}^{S,S ^{\prime }}=\langle \psi _{\nu}^{Sl}|\psi _{\nu^{\prime }}^{S^{\prime } l}\rangle $ are Franck-Condon factors between the different $S$ states, which are numbers in the range $0\leq |\eta _{\nu ,\nu^{\prime}}^{S,S ^{\prime }}|\leq 1$ for $ S\neq S^{\prime } $ and $\eta _{\nu ,\nu'}^{S,S}= \delta_{\nu,\nu'}$. Repeating the procedure as a function of magnetic field, one obtains the energy level diagram of all bound states in the system of coupled channels. It is instructive to separate the hyperfine contribution into two parts, $\mathcal{H}_{\mathrm{hf}}=\mathcal{H}_{\mathrm{hf}}^{+}+\mathcal{H}_{\mathrm{hf}}^{-}$, where \begin{equation} \mathcal{H}_{\mathrm{hf}}^{\pm }=\frac{a_{\mathrm{hf}}^{\alpha }}{2\hbar ^{2}}\left( \mathbf{s}_{\alpha }\pm \mathbf{s}_{\beta }\right) \cdot \mathbf{i}_{\alpha }\pm \frac{a_{\mathrm{hf}}^{\beta }}{2\hbar ^{2}}\left( \mathbf{s}_{\alpha }\pm \mathbf{s}_{\beta }\right) \cdot \mathbf{i}_{\beta }.
\label{eq:HF+/-} \end{equation} Because $\mathcal{H}_{\mathrm{hf}}^{+}$ conserves $S$, it couples the ABM states only within the singlet and triplet manifolds. The term $\mathcal{H}_{\mathrm{hf}}^{-}$ is off-diagonal in the ABM basis and hence couples singlet to triplet. Accordingly, the hyperfine term in the secular equation also separates into two parts \begin{equation} \eta _{\nu ,\nu^{\prime }}^{S,S ^{\prime }}\langle \sigma |\mathcal{H}_{\mathrm{hf}}|\sigma ^{\prime }\rangle =\delta _{\nu ,\nu ^{\prime}} \langle \sigma |\mathcal{H}_{\mathrm{hf}}^{+}|\sigma ^{\prime }\rangle +\eta _{\nu ,\nu^{\prime }}^{S,S ^{\prime }}\langle \sigma |\mathcal{H}_{\mathrm{hf}}^{-}|\sigma ^{\prime }\rangle . \label{eq:HFseparation} \end{equation} Note that the second term of Eq.\thinspace (\ref{eq:HFseparation}) is zero \textit{unless} $S\neq S^{\prime }$. This term was neglected in the models of Refs.~\cite{moerdijk,stan}. This is a good approximation if no Feshbach resonances occur near magnetic fields where the energy difference between singlet and triplet levels is on the order of the hyperfine energy. However, for a generic case this term cannot be neglected. To demonstrate the procedure of identification of Feshbach resonances we show in Fig.\thinspace \ref{fig:explot} the ABM solutions for a simple fictitious homonuclear system with $S=1$ and $I=2$ for an entrance channel with $M_{F}=M_{S}+M_{I}=0$ and $l=0$. The example has the spin structure of $^{6}$Li but we use ABM parameters, $\epsilon ^{0}$, $\epsilon ^{1}$ and $\eta ^{01}$, with values chosen for convenience of illustration. The field dependence of the threshold energy of the entrance channel $E_{\alpha \beta }^{0} $ is shown here explicitly (dashed line). The energies $E_b$ (solid lines) are labeled by their high-field quantum numbers $|SM_{S}IM_{I}\rangle $ and the binding energies in the singlet and triplet potentials are chosen to be $\epsilon ^{0}=-10$ and $\epsilon ^{1}=-5$.
The avoided crossings around $B=0$ are caused by the hyperfine interaction and are proportional to $a_{\mathrm{hf}}$; the avoided crossing between the singlet and triplet levels is proportional to the wavefunction overlap $\eta^{01}$. Four $s$-wave Feshbach resonances occur, indicated at the crossings $1$, $2$ and $3$ (double resonance). The resonances at $1$ and $2$ are mostly determined by the triplet binding energy $\epsilon^1$ and the resonances at $3$ by the singlet binding energy $\epsilon^0$. \begin{figure}[ht!] \includegraphics[width=3.0441in]{expl} \caption{ABM calculation for a fictitious homonuclear system with $S=1$\ and $I=2$ for an entrance channel with $M_{F}=M_{S}+M_{I}=0$ and $l=0$. The threshold energy of the entrance channel $E_{\protect\alpha \protect\beta }^{0}$ is shown here explicitly as the dashed line. The energies $E_b$ (solid lines) are labeled by their high-field quantum numbers $|SM_{S}IM_{I}\rangle $. The binding energies of the least bound states in the singlet and triplet potentials are chosen to be $\protect\epsilon ^{0}=-10$ and $\protect\epsilon ^{1}=-5$. The avoided crossing around $B=0$ is proportional to the hyperfine interaction $a_{hf}$ and those between the singlet and triplet levels to the wavefunction overlap $\protect\eta ^{01}$ and the hyperfine interaction $a_{hf}$. Four Feshbach resonances occur, indicated at the crossings $1$, $2$ and $3$ (double resonance). } \label{fig:explot} \end{figure} \subsection{Free parameters\label{sect:vLevels}} The free parameters of the ABM are the binding energies $\epsilon _{\nu}^{Sl}$ and the Franck-Condon factors $\eta _{\nu ,\nu^{\prime }}^{S,S ^{\prime }}$. These parameters can be obtained in a variety of ways. Here we discuss three methods, two of which will be demonstrated in Sect. \ref{sect:Application} and the third in Ref. \cite{heliumABM}.
First, if the scattering potentials $V_{S}^{l}(r)$ are very well known, the bound state wavefunctions of the vibrational levels can be obtained by solving equation (\ref{eq:Hrel1D}) for $\epsilon _{\nu}^{Sl}<0$. Numerical values of the Franck-Condon factors follow from the obtained eigenfunctions. This method is very accurate and can be extended to deeply bound levels; however, accurate model potentials are available for only a limited number of systems. A second method can be used when the potentials are not well known or only partially known. For large interatomic distances the potentials can be parameterized by the dispersion potential \begin{equation} V(r)=-\frac{C_{6}}{r^{6}}. \label{eq:VanderWaals} \end{equation}Since this expression is not correct for short distances, we account for the inaccurate inner part of the potential by a boundary condition based on the accumulated phase method \cite{accphase}. This boundary condition has a one-to-one relationship to the interspecies $s$-wave singlet and triplet scattering lengths. This approach requires only three input parameters: the van der Waals $C_{6}$ coefficient and the singlet ($a_{S}$) and triplet ($a_{T}$) scattering lengths. For an accurate description involving deeper bound states the accumulated phase boundary condition can be refined by including additional parameters \cite{accphase}. The third method to obtain the free parameters is by direct comparison of ABM predictions with experimentally observed Feshbach resonances, for instance obtained in a search for loss features in an ultracold atomic gas. A loss feature spectrum can be obtained by measuring the number of atoms remaining after a fixed holding time as a function of the magnetic field. The ABM parameters then follow from a fitting procedure yielding the best match of the predicted threshold crossing fields with the observed loss feature spectrum.
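The third, fitting-based method amounts to a least-squares search over the binding energies. The sketch below is purely illustrative: `predicted_crossings` is a made-up linear stand-in for a real ABM calculation (which would diagonalize the spin Hamiltonian at each field value), and all numbers and search windows are invented for the example.

```python
import itertools
import numpy as np

def predicted_crossings(eps0, eps1):
    """Stand-in for the ABM: fields where bound states cross threshold.

    A real implementation would diagonalize the spin Hamiltonian;
    here a made-up linear map just illustrates the fitting loop.
    """
    return np.array([0.10 * eps0 + 0.02 * eps1,
                     0.05 * eps0 + 0.08 * eps1,
                     0.01 * eps0 + 0.12 * eps1])

# Synthetic "observed loss features", generated from known parameters.
B_obs = predicted_crossings(716.0, 425.0)

# Brute-force chi^2 minimization on a coarse grid around the optimum;
# a real fit would scan wider or use a proper optimizer.
grid0 = np.arange(700.0, 730.0, 1.0)   # candidate eps0 values (MHz)
grid1 = np.arange(410.0, 440.0, 1.0)   # candidate eps1 values (MHz)
best = min(itertools.product(grid0, grid1),
           key=lambda p: ((predicted_crossings(*p) - B_obs) ** 2).sum())
print(best)  # recovers eps0 = 716, eps1 = 425
```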
We applied this method in Ref.~\cite{wille}, where it proved to be very powerful for rapid assignment of Feshbach resonances in the $^6$Li-$^{40}$K mixture, owing to the small computational time required to diagonalize a spin Hamiltonian. The number of fit parameters is determined by the number of bound states which have to be considered. Depending on the atomic species and the magnetic field, only a selected number of vibrational levels $\epsilon _{\nu}^{Sl}$ have to be taken into account. This number can be estimated by considering the maximum energy range involved. An upper bound results from comparing the maximum dissociation energy of the least bound vibrational level $D^{\ast }$ with the maximum internal energy of the atom pair $E_{\mathrm{int,max}}$. The maximum dissociation energy of the $\nu$-th vibrational level can be estimated semi-classically \cite{LeRoy70} \begin{equation} D^{\ast }=\left( \frac{\nu \zeta \hbar }{\mu ^{1/2}C_{6}^{1/6}}\right) ^{3} \label{eq:Dstar} \end{equation}where $\zeta =2\left[ \Gamma (1+1/6)/\Gamma (1/2+1/6)\right] \simeq 3.434$ and $\nu$ is counted from the dissociation limit, i.e., $\nu=1$ labels the least bound state. The maximum internal energy is given by the sum of the hyperfine splitting of each of the two atoms at zero field, the maximum Zeeman energy for the free atom pair and the maximum Zeeman energy for the molecule \begin{equation} \label{eq:Eintmax} E_{\mathrm{int,max}}\simeq E_{hf}^{\alpha }+E_{hf}^{\beta }+2(s_{\alpha }+s_{\beta })g_{S}\mu _{B} B, \end{equation} where $E_{hf}^{\alpha ,\beta }=a_{hf}^{\alpha ,\beta }(i_{\alpha ,\beta }+s_{\alpha ,\beta })$ and we have neglected the nuclear Zeeman effect.
Comparing equations (\ref{eq:Dstar}) and (\ref{eq:Eintmax}) gives us an expression for the number of vibrational levels $N_{\nu}$ which have to be considered \begin{equation} N_{\nu}\simeq \left\lceil \frac{\mu ^{1/2}C_{6}^{1/6}}{\hbar \zeta }E_{\mathrm{int,max}}^{1/3}\right\rceil \label{eq:Nstates} \end{equation}where $\lceil x\rceil $ denotes the smallest integer not less than the argument $x$. The maximum magnetic field $B_{max}$ at which a Feshbach resonance can still be encountered can be estimated from Eq.~(\ref{eq:Dstar}), neglecting the hyperfine energy, as \begin{equation} B_{max} \simeq \frac{D^{\ast }}{(s_{\alpha }+s_{\beta })g_{S}\mu _{B}}. \end{equation} If the hyperfine energy is comparable to or larger than the vibrational level splitting $D^{\ast }$, this expression for $B_{max}$ overestimates the field of the lowest-field Feshbach resonance. \subsection{Asymptotic bound states\label{sect:ABS}} The most crucial ABM parameters are the binding energies $\epsilon _{\nu}^{Sl}$. However, for accurate predictions of the Feshbach resonance positions the Franck-Condon factors also have to be accurate. For weakly bound states these factors are mainly determined by the difference in binding energy of the overlapping states, rather than by the potential shape. Therefore good approximations can be made with little knowledge of the scattering potential. For very weakly bound states the outer classical turning point $r_{c}$ is found at distances $r_{c}\gg r_{0}$; i.e., far beyond the van der Waals radius of the interaction potential \begin{equation} r_{0}=\frac{1}{2}\left( \frac{2\mu C_{6}}{\hbar ^{2}}\right) ^{1/4}. \label{eq:rvdW} \end{equation}These states are called \emph{halo states} \cite{haloref}.
Because in this case most of the probability density of the wavefunction is found outside the outer classical turning point, these states can be described by a zero-range potential with a wavefunction of the type $\psi (r)\sim e^{-\kappa r}$, where $\kappa =(-2\mu \epsilon /\hbar ^{2})^{1/2}$ is the wavevector corresponding to a bound state with binding energy $\epsilon $. The Franck-Condon factor of two halo states with wavevectors $\kappa _{0}$ and $\kappa _{1}$ is given by \begin{equation} \langle \psi ^{0}|\psi ^{1}\rangle =2\frac{\sqrt{\kappa _{0}\kappa _{1}}}{\kappa _{0}+\kappa _{1}}. \label{eq:overlapContact} \end{equation} This approximation is valid for binding energies $|\epsilon |\ll C_{6}/r_{0}^{6}$. The calculation of the Franck-Condon factors can be extended to deeper bound states by including the dispersive van der Waals tail. For distances $r\gg r_{X}$, where $r_{X}$ is the exchange radius, the potential is well described by Eq. (\ref{eq:VanderWaals}), and the Franck-Condon factors can be calculated by numerically solving the Schr\"{o}dinger equation (\ref{eq:Hrel1D}) for the van der Waals potential (\ref{eq:VanderWaals}) on the interval $r_{X}<r<\infty $ \cite{Gao98}. The exchange radius $r_{X}$ is defined as the distance where the van der Waals interaction equals the exchange interaction. This method can be used for \emph{asymptotic bound states}, which we define by the condition $r_{c}>r_{X}$. If even deeper bound states, with $r_{c}<r_{X}$, have to be taken into account, the potential can be extended by including the exchange interaction \cite{smirnov65}, or by using full model potentials. To illustrate the high degree of accuracy achieved by using asymptotic bound states, we calculate the Franck-Condon factor for a contact potential (halo states), a van der Waals potential (asymptotic bound states) and a full model potential including short-range behavior, derived from Refs. \cite{salami07,aymar05}.
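In the contact-potential limit, equation (\ref{eq:overlapContact}) can be evaluated in a few lines; since both wavevectors carry the same factor $\sqrt{2\mu}/\hbar$, only the ratio of binding energies enters. A minimal sketch (the function name is ours, purely for illustration):

```python
import math

def halo_overlap(eps0, eps1):
    """Franck-Condon factor of two halo states, eq. (overlapContact).

    kappa = sqrt(2*mu*|eps|)/hbar, but the common prefactor
    sqrt(2*mu)/hbar cancels in the ratio, so binding energies
    (here in MHz) suffice.
    """
    k0 = math.sqrt(abs(eps0))
    k1 = math.sqrt(abs(eps1))
    return 2.0 * math.sqrt(k0 * k1) / (k0 + k1)

# 6Li-40K least bound states: eps0/h = 716 MHz, eps1/h = 425 MHz
eta = halo_overlap(716.0, 425.0)
print(f"eta_11^01 = {eta:.3f}")  # close to 0.99
```

For equal binding energies the overlap is exactly unity, as it must be for identical halo wavefunctions.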
Figure \ref{fig:overlap} shows the Franck-Condon factor $\eta _{11}^{01}$ for $^{6}$Li-$^{40}$K calculated numerically for the model potential, the van der Waals potential, and analytically using equation (\ref{eq:overlapContact}). The van der Waals coefficient used is $C_{6}=2322E_{h}a_{0}^{6}$ \cite{derevianko}, where $E_{h}=4.35974\times 10^{-18}~\mathrm{J}$ and $a_{0}=0.05291772~\mathrm{nm}$. The value of $\eta _{11}^{01}$ has been plotted as a function of the triplet binding energy $\epsilon ^{1}$ for three different values of the singlet binding energy $\epsilon ^{0}$. It is clear that the contact potential is only applicable for $\epsilon /h\lesssim 100~\mathrm{MHz}$, hence only for systems with resonant scattering in the singlet and triplet channels. The approximation based on the $C_{6}$ potential yields good agreement down to binding energies of $\epsilon/h \lesssim 40~\mathrm{GHz}$, which is much larger than the maximum possible vibrational level splitting of the least bound states ($D^{\ast }/h= 8.2~\mathrm{GHz}$), hence is sufficient to describe Feshbach resonances originating from the least bound vibrational level. The black circle indicates the actual Franck-Condon factor for the least bound state of $^{6}$Li-$^{40}$K. For the contact, van der Waals and model potentials we find $\eta _{11}^{01}=0.991$, $\eta _{11}^{01}=0.981$ and $\eta _{11}^{01}=0.979$, respectively. \begin{figure}[ht!] \includegraphics[width=8.3088cm]{overlapPlots} \caption{(Color online) The Franck-Condon factor $\eta^{01}_{\nu,\nu'}$ for the least bound states of the $^6$Li-$^{40}$K system, calculated as a function of the triplet binding energy $\epsilon^1$ for three different values of the singlet binding energy: $\epsilon^0/h=7.16$ MHz, $716$ MHz and $7.16\times 10^4$ MHz. $\eta^{01}_{\nu,\nu'}$ is calculated for the model potential (dashed blue), the $-C_6/r^6$ potential (solid red) and the contact potential, equation (\ref{eq:overlapContact}) (dash-dotted green).
The black circle indicates the actual value for the least bound state of $^{6}$Li-$^{40}$K ($\protect\epsilon ^{0}/h=716$ MHz and $\protect\epsilon ^{1}/h=425$ MHz). The nodes in $\protect\eta^{01}_{\nu,\nu'}$ correspond approximately to the appearance of deeper-lying vibrational states, i.e., for $\epsilon^1/h \gtrsim 10^4~\mathrm{MHz}$, $\nu>1$.} \label{fig:overlap} \end{figure} \begin{figure}[ht!] \includegraphics[width=8.3066cm]{lik3_thick_2} \caption{(Color online) The energies of all the coupled bound states for $^{6}$Li -$^{40}$K with total spin $M_{F}=\pm 3$. The black solid line indicates the threshold energy of the entrance channel $|1/2,+1/2\rangle _{\mathrm{Li}}\otimes |9/2,+5/2\rangle _{\mathrm{K}}$ for $B<0$ and $|1/2,+1/2\rangle _{\mathrm{Li}}\otimes |9/2,-7/2\rangle _{\mathrm{K}}$ for $B>0$. The grey area represents the scattering continuum and the (colored) lines indicate the coupled bound states. Feshbach resonances occur when a bound state crosses the threshold energy. The color scheme indicates the admixture of singlet and triplet contributions in the bound states obtained from the eigenstates of the Hamiltonian (\ref{1}). The strong admixture near the threshold crossings at $B\simeq 150$ $\mathrm{G}$ demonstrates the importance of the singlet-triplet mixing in describing Feshbach resonance positions accurately. Since in these calculations the coupled bound states are not coupled to the open channel, they exist even for energies above the threshold.} \label{fig:LiKSpectrum} \end{figure} \section{Application to various systems\label{sect:Application}} In this section we demonstrate the versatility of the ABM by applying it to two different systems, using the different approaches discussed in Section \ref{sect:vLevels}. \subsection{$^{6}$Li -$^{40}$K} Both $^{6}$Li and $^{40}$K have electron spin $s=1/2$; therefore, the total electron spin can be singlet ($S=0$) or triplet ($S=1$).
We intend to describe all loss features observed in Ref.\thinspace \cite{wille}. Since all these features were observed for magnetic fields below $300~$G, we find, by use of Eq.~(\ref{eq:Nstates}), that it is sufficient to take into account only the least bound state ($\nu =1$) of the singlet and triplet potentials. This reduces the set of fit parameters to $\epsilon _{1}^{0,l}$ and $\epsilon _{1}^{1,l}$. Subsequently, we calculate the rotational shifts by parameterizing the $l>0$ bound state energies with the aid of model potentials \footnote{Note that this procedure can also be applied with only a $C_{6}$ coefficient by utilizing the accumulated phase method.} as described in Refs.~\cite{salami07,aymar05}. This allows us to reduce the number of binding energies to be considered to only two: $\epsilon _{1}^{0,0}\equiv \epsilon ^{0}$ and $\epsilon _{1}^{1,0}\equiv \epsilon ^{1}$. We now turn to the Franck-Condon factor $\eta _{11}^{01}$ of the two bound states. As discussed in Section \ref{sect:ABS}, its value is $\eta_{11}^{01}=0.979$; it can either be included in the calculation or approximated by unity. We first consider the case $\eta _{11}^{01}\equiv 1$, which reduces the total number of fit parameters to only two. We fit the positions of the threshold crossings to the 13 observed loss features reported in Ref.\thinspace \cite{wille} by minimizing the $\chi ^{2}$ value while varying $\epsilon ^{0}$ and $\epsilon ^{1}$. We obtain optimal values of $\epsilon ^{0}/h=716(15)~\mathrm{MHz}$ and $\epsilon ^{1}/h=425(5)~\mathrm{MHz}$, where the error bars indicate one standard deviation. In Fig.\thinspace \ref{fig:LiKSpectrum}, the threshold and the spectrum of coupled bound states with $M_{F}=+3(-3)$ are shown for positive (negative) magnetic field values. The color scheme indicates the admixture of singlet and triplet contributions in the bound states.
Feshbach resonances will occur at magnetic fields where the energy of the coupled bound states and the scattering threshold match. The strong admixture of singlet and triplet contributions at the threshold crossings emphasizes the importance of including the singlet-triplet mixing term $\mathcal{H}_{\mathrm{hf}}^{-}$ in the Hamiltonian. All 13 calculated resonance positions are in good agreement with the coupled channel calculations described in Ref.\thinspace \cite{wille}, verifying that the ABM yields a good description of the threshold behavior of the $^{6}$Li$-^{40}$K system for the studied field values. We repeat the $\chi^2$ fitting procedure, now including the numerical value of the overlap. The values of $\eta^{01}_{11}$ for both the $s$-wave and $p$-wave bound states are calculated numerically while varying $\epsilon^0$ and $\epsilon^1$. This fit results in a slightly larger $\chi^2$ value, with correspondingly increased discrepancies in the resonance positions. However, all of the calculated resonance positions are within the experimental widths of the loss features. Therefore, the analyses with $\eta^{01}_{11}\equiv 1$ and $\eta^{01}_{11}=0.979$ can safely be considered to yield the same results within the experimental accuracy. \subsection{$^{40}$K -$^{87}$Rb} We now turn to the $^{40}$K -$^{87}$Rb mixture to demonstrate the application of the ABM to a system with multiple (three) vibrational levels in each potential and correspondingly non-trivial values of the Franck-Condon factors. We consider $s$-wave ($l=0$) resonances. Although accurate K-Rb scattering potentials are known \cite{PashovKRb}, we choose to use the accumulated phase method discussed in Section \ref{sect:vLevels}, with only three ABM parameters, to demonstrate the accuracy of the ABM for a more complex system like $^{40}$K -$^{87}$Rb.
We solve the reduced radial wave equation (\ref{eq:Hrel1D}) for $V_{S}(r)=-C_{6}/r^{6}$ and the continuum state $E =\hbar ^{2}k^{2}/2\mu $ in the limit $k\rightarrow 0$. We obtain the accumulated phase boundary condition at $r_{in}=18~a_{0}$ from the boundary condition at $r\rightarrow \infty $ using the asymptotic $s$-wave scattering phase shift $\delta _{0}=\arctan (-ka)$, where $a$ is the known singlet or triplet scattering length. Subsequently, we obtain binding energies for the three last bound states of the singlet and triplet potentials by solving the same equation (\ref{eq:Hrel1D}), but now using the accumulated phase at $r=r_{in}$ and $\psi (r\rightarrow \infty )=0$ as boundary conditions. We numerically calculate the Franck-Condon factors by normalizing the wavefunctions on the interval $r_{in}<r<\infty$, thereby neglecting the wavefunction in the inner part of the potential ($0<r<r_{in}$). This approximation becomes less valid for more deeply bound states. We use as input parameters $C_{6}=4274~E_{h}a_{0}^{6}$ \cite{derevianko}, $a_{S}=-111.5~a_{0}$ and $a_{T}=-215.6~a_{0}$ \cite{PashovKRb}. Figure \ref{fig:KRbSpectrum} shows the spectrum of bound states with respect to the threshold energy for the spin mixture of $|f,m_{f}\rangle =|9/2,-9/2\rangle _{\mathrm{K}}$ and $|1,1\rangle _{\mathrm{Rb}}$ states. The red curves indicate the ABM results and the blue curves correspond to full coupled channel calculations \cite{JulienneKRb}. The agreement between the two models is satisfactory, especially for the weakest bound states close to the threshold. A conceptually different analysis of the K-Rb system, also using only three input parameters, has been performed by Hanna \emph{et al.} \cite{hanna09}. \begin{figure}[ht!]
\includegraphics[width=8.3088cm]{KRbplot} \caption{(Color online) The bound state spectrum for $^{40}$K -$^{87}$Rb for $M_{F}=-7/2$ plotted with respect to the threshold energy $E_{\mathrm{K},\mathrm{Rb}}^{0}$ of the $|9/2,-9/2\rangle _{\mathrm{K}}$ + $|1,1\rangle _{\mathrm{Rb}}$ mixture. Solid red lines are ABM calculations and the blue dashed lines are numerical coupled channels calculations. Good agreement between the two calculations is found, in particular for the weakest bound levels.} \label{fig:KRbSpectrum} \end{figure} \section{Feshbach Resonance Width} \label{sect:OpenChannel} \subsection{Overview} So far, the asymptotic bound state model has been used to determine the positions of the Feshbach resonances, but not their widths. As is well known from standard Feshbach theory, the width of $s$-wave resonances depends on the coupling between the resonant level and the continuum \cite{fesh1,fesh2}. For resonances with $l>0$ the width is determined by a physically different process, namely tunneling through the centrifugal barrier. Here we discuss only the width of $s$-wave resonances. We determine the resonance width by analyzing the shift of the resonant level close to threshold due to the coupling with the least bound state of the open channel. This is possible by once again using the restricted basis set of bound states $\{|\psi _{\nu}^{Sl}\rangle |\sigma \rangle \}$ introduced in Section \ref{sect:ABM}. Obtaining the resonance width from this shift is plausible because, near a resonance, the scattering behavior in the zero-energy limit is closely related to the threshold behavior of the bound state. To reveal the coupling as contained in the ABM approximation, we partition the total Hilbert space of the Hamiltonian (\ref{1}) into two orthogonal subspaces $\mathcal{P}$ and $\mathcal{Q}$. The states of the open channels are located in $\mathcal{P}$ space, those of the closed channels in $\mathcal{Q}$ space \cite{moerdijk}.
This splits the Hamiltonian $\mathcal{H}$ into four parts (cf. Section \ref{sect:tailFeshbach}); $\mathcal{H} = \mathcal{H}_{PP}+\mathcal{H}_{PQ}+\mathcal{H}_{QP}+\mathcal{H}_{QQ}$. Here $\mathcal{H}_{PP}$ and $\mathcal{H}_{QQ}$ describe the system within each subspace, and $\mathcal{H}_{PQ}(=\mathcal{H}_{QP}^\dag)$ describes the coupling between the $\mathcal{P}$ and $\mathcal{Q}$ spaces, thus providing a measure for the coupling between the open channels in $\mathcal{P}$ space and the closed channels in $\mathcal{Q}$ space. The scattering channels are defined by the Breit-Rabi pair states $|\alpha \beta \rangle =|f_{\alpha }m_{f_{\alpha }},f_{\beta }m_{f_{\beta }}\rangle $. In the associated pair basis the diagonal matrix elements of the Hamiltonian $\mathcal{H}$ correspond to the ``bare'' binding energies of the pair states; i.e., the binding energies in the absence of coupling between the channels by $V(r)$. Restricting ourselves, for purposes of introduction, to a single open channel and to the least bound states in the interaction potentials, $\mathcal{H}_{PP}$ is a single matrix element on the diagonal of $\mathcal{H}$, corresponding to the \textit{bare} binding energy of the least bound state of the entrance channel, $\epsilon _{P}=-\hbar ^{2}\kappa _{P}^{2}/2\mu $.
The energy $\epsilon _{P}$ can be readily calculated by projecting the pair state onto the spin basis $\{|\sigma \rangle =|SM_{s}\mu _{\alpha }\mu _{\beta }\rangle \}$, and is given by \begin{equation} \epsilon _{P}=\sum_{S}\epsilon _{\nu}^{Sl}\!\!\!\sum_{M_{S},\mu _{\alpha },\mu _{\beta }}\langle SM_{s}\mu _{\alpha }\mu _{\beta }|f_{\alpha }m_{f_{\alpha }},f_{\beta }m_{f_{\beta }}\rangle ^{2}. \label{eq:epsilonPDIS2} \end{equation}In the following sections we will show that the width $\Delta B$ of the resonance is a function of the bare binding energy $\epsilon _{P}$ of the entrance channel and a single matrix element of $\mathcal{H}_{PQ}$, denoted by $\mathcal{K}$, representing the coupling of the level $\epsilon _{P}$ to the resonant level in $\mathcal{Q}$ space. We will show that the width is given by the expression \begin{equation} \mu _{rel}\Delta B=\frac{\mathcal{K}^{2}}{2a_{bg}\kappa _{P}|\epsilon _{P}|}. \end{equation} Hence, once the ABM parameters are known, the width of the resonances follows from an additional unitary transformation of the ABM matrix, which yields the coupling coefficient $\mathcal{K}$. In view of the orthogonality of the subspaces $\mathcal{P}$ and $\mathcal{Q}$, the submatrix $\mathcal{H}_{QQ}$, corresponding to all closed channels, can be diagonalized, leaving $\mathcal{H}_{PP}$ unaffected but changing the $\mathcal{H}_{PQ}$ and $\mathcal{H}_{QP}$ submatrices. In diagonalized form the $\mathcal{H}_{QQ}$ submatrix contains the energies $\epsilon _{Q}$ of all bound levels in $\mathcal{Q}$ space and includes the coupling of all channels except the coupling to $\mathcal{P}$ space. This transformation allows us to identify the resonant bound state and the corresponding off-diagonal matrix element $\mathcal{K}$ in $\mathcal{H}_{PQ}$, which is a measure for the resonance width. Thus we can obtain the coupling between the open and closed channels without the actual use of continuum states.
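The procedure just described (diagonalize $\mathcal{H}_{QQ}$, leave $\mathcal{H}_{PP}$ untouched, read off the transformed coupling row) reduces to elementary linear algebra. A toy numpy sketch with a one-dimensional $\mathcal{P}$ space and arbitrary illustrative numbers (not $^6$Li-$^{40}$K parameters):

```python
import numpy as np

# Toy Hamiltonian in the pair basis |alpha beta>: first channel open (P),
# remaining two channels closed (Q). All numbers are arbitrary.
H = np.array([[-2.0, 0.3, 0.1],
              [ 0.3, -5.0, 0.4],
              [ 0.1,  0.4, -9.0]])

H_PP = H[:1, :1]   # bare open-channel bound-state energy eps_P
H_QQ = H[1:, 1:]   # closed-channel block
H_PQ = H[:1, 1:]   # open-closed coupling row

# Diagonalize Q space only; the P block is untouched by this rotation.
eps_Q, U = np.linalg.eigh(H_QQ)

# The transformed coupling row gives one matrix element K_i per Q-space
# bound state |phi_Qi>; K_i^2 = A_i sets the width of resonance i.
K = (H_PQ @ U).ravel()

eps_P = H_PP[0, 0]
print("eps_P =", eps_P)
print("eps_Q =", eps_Q)
print("K     =", K)
```

Because the rotation acts only on $\mathcal{Q}$ space, the total coupling strength $\sum_i \mathcal{K}_i^2$ equals the squared norm of the original $\mathcal{H}_{PQ}$ row.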
In Section \ref{sect:tailFeshbach} we present the Feshbach theory tailored to suit the ABM. We give a detailed description of the resonant coupling, and demonstrate with a two-channel model how the ABM bound state energy $E_b$ compares to the associated $\mathcal{P}$-space bare energy $\epsilon_{P}$ and to the dressed level in the entrance channel, from which one can deduce the resonance width. In Section \ref{sect:dressABM} we generalize the results such that the width of the Feshbach resonances can be obtained for the general multi-channel case. For a more thorough treatment of the Feshbach formalism we refer the reader to \cite{fesh1,fesh2}, and for its application to cold atom scattering to, e.g., \cite{moerdijk}. \subsection{Tailored Feshbach theory} \label{sect:tailFeshbach}By introducing the projection operators $P$ and $Q$, which project onto the subspaces $\mathcal{P}$ and $\mathcal{Q}$, respectively, the two-body Schr\"{o}dinger equation can be split into a set of coupled equations \cite{moerdijk} \begin{eqnarray} (E-\mathcal{H}_{PP})|\Psi _{P}\rangle &=&\mathcal{H}_{PQ}|\Psi _{Q}\rangle , \label{eq:P} \\ (E-\mathcal{H}_{QQ})|\Psi _{Q}\rangle &=&\mathcal{H}_{QP}|\Psi _{P}\rangle , \label{eq:Q} \end{eqnarray}where $|\Psi _{P}\rangle \equiv P|\Psi \rangle $, $|\Psi _{Q}\rangle \equiv Q|\Psi \rangle $, $\mathcal{H}_{PP}\equiv P\mathcal{H}P$, $\mathcal{H}_{PQ}\equiv P\mathcal{H}Q$, etc. Within $\mathcal{Q}$ space the Hamiltonian $\mathcal{H}_{QQ}$ is diagonal, with eigenstates $|\phi _{Q}\rangle $ corresponding to two-body bound states with eigenvalues $\epsilon _{Q}$. The energy $E=\hbar ^{2}k^{2}/2\mu $ is defined with respect to the open channel dissociation threshold. We consider one open channel and assume that near a resonance it couples to a single closed channel.
This allows us to write the $S$ matrix of the effective problem in $\mathcal{P}$ space as \cite{moerdijk} \begin{equation} \label{eQ:SmatrixSRA} S(k)=S_P(k)\Bigg(1-2\pi i\frac{\vert\langle\phi_Q\vert \mathcal{H}_{QP}\vert\Psi_P^+\rangle\vert^2}{E-\epsilon_Q-\mathcal{A}(E)}\Bigg), \end{equation} where $|\Psi_P^+\rangle$ are scattering eigenstates of $\mathcal{H}_{PP}$ and $S_P(k)$ is the direct scattering matrix, describing the scattering process in $\mathcal{P}$ space in the absence of coupling to $\mathcal{Q}$ space. The complex energy shift $\mathcal{A}(E)$ describes the dressing of the bare bound state $|\phi_Q\rangle$ by the coupling to $\mathcal{P}$ space and is given by \begin{equation} \label{eQ:AEcomplex} \mathcal{A}(E)=\langle \phi_Q\vert \mathcal{H}_{QP}\frac{1}{E^+-\mathcal{H}_{PP}}\mathcal{H}_{PQ}\vert \phi_Q\rangle, \end{equation} where $E^+=E+i\delta$ with $\delta$ approaching zero from positive values. Usually the open channel propagator $[E^+-\mathcal{H}_{PP}]^{-1}$ is expanded in a complete set of eigenstates of $\mathcal{H}_{PP}$, where the dominant contribution comes from scattering states. To circumvent the use of scattering states we expand the propagator in terms of Gamow resonance states, which leads to a Mittag-Leffler expansion \cite{bout} \begin{equation} \label{eq:Mittag} \frac{1}{E^+-\mathcal{H}_{PP}}=\frac{\mu}{\hbar^2}\sum_{n=1}^\infty\frac{|\Omega_n\rangle\langle\Omega_n^D|}{k_n(k-k_n)}, \end{equation} where $n$ runs over all poles of the $S_P$ matrix. The Gamow state $|\Omega_n\rangle$ is an eigenstate of $\mathcal{H}_{PP}$ with eigenvalue $\epsilon_{P_n}=\hbar^2k_n^2/(2\mu)$. Correspondingly, the dual state $|\Omega_n^D\rangle \equiv |\Omega_{n}\rangle^*$ is an eigenstate of $\mathcal{H}_{PP}^\dag$ with eigenvalue $(\epsilon_{P_n})^*$. Using these dual states, the Gamow states form a biorthogonal set such that $\langle \Omega_n^D\vert \Omega_{n^{\prime }}\rangle=\delta_{nn^{\prime }}$.
For bound-state poles $k_n=i\kappa_n$, where $\kappa_n>0$, the Gamow states correspond to properly normalized bound states. We assume the scattering in the open channel is dominated by a single bound state ($k_{n}=i\kappa _{P}$). This allows us to write the direct scattering matrix in Eq.~(\ref{eQ:SmatrixSRA}) as \begin{equation} S_{P}(k)=e^{-2ika_{\mathrm{bg}}}=e^{-2ika_{\mathrm{bg}}^{P}}\frac{\kappa _{P}-ik}{\kappa _{P}+ik} \end{equation}where $a_{\mathrm{bg}}$ is the open channel scattering length, and the $P$-channel background scattering length $a_{\mathrm{bg}}^{P}$ is on the order of the range of the interaction potential, $a_{\mathrm{bg}}^{P}\approx r_{0}$. Since we only have to consider one bound state pole (with energy $\epsilon _{P}$) in $\mathcal{P}$ space, the Mittag-Leffler series Eq.~(\ref{eq:Mittag}) reduces to a single term. Therefore, the complex energy shift Eq.~(\ref{eQ:AEcomplex}) reduces to \begin{equation} \mathcal{A}(E)=\frac{\mu }{\hbar ^{2}}\frac{-i\mathrm{A}}{\kappa _{P}(k-i\kappa _{P})}, \label{eq:ADR} \end{equation}where $\mathrm{A}\equiv \langle \phi _{Q}|\mathcal{H}_{QP}|\Omega _{P}\rangle \langle \Omega _{P}^{D}|\mathcal{H}_{PQ}|\phi _{Q}\rangle $ is a positive constant. The coupling matrix element between the open-channel bound state and the closed-channel bound state responsible for the Feshbach resonance is related to $\mathrm{A}$. The complex energy shift can be decomposed into real and imaginary parts such that $\mathcal{A}(E)=\Delta _{\mathrm{res}}(E)-\frac{i}{2}\Gamma (E)$. For energies $E>0$ the unperturbed bound state becomes a quasi-bound state: its energy undergoes a shift $\Delta _{\mathrm{res}}$ and acquires a finite width $\Gamma $. For energies below the open-channel threshold, i.e., $E<0$, $\mathcal{A}(E)$ is purely real and $\Gamma (E)=0$.
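The single-pole form of the complex energy shift, Eq.~(\ref{eq:ADR}), can be checked numerically: below threshold it is purely real, while above threshold it acquires a negative imaginary part $-\Gamma/2$. A sketch in reduced units $\hbar=\mu=1$ with arbitrary values for $\mathrm{A}$ and $\kappa_P$:

```python
import cmath

A = 2.0        # coupling constant A (arbitrary, positive)
kappa_P = 1.5  # open-channel bound-state pole (hbar = mu = 1)

def energy_shift(E):
    """Complex energy shift, single-pole Mittag-Leffler form (eq. ADR)."""
    k = cmath.sqrt(2 * E)  # principal branch: k = i*|k| for E < 0
    return -1j * A / (kappa_P * (k - 1j * kappa_P))

below = energy_shift(-0.5)
above = energy_shift(+0.5)
print(below)  # purely real below threshold: Gamma = 0
print(above)  # negative imaginary part above threshold: -Gamma/2
```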
In the low-energy limit, $k\rightarrow 0$, Eq.~(\ref{eq:ADR}) reduces to \begin{equation} \mathcal{A}(E)=\Delta -iCk, \end{equation}where $C$ is a constant characterizing the coupling strength between $\mathcal{P}$ and $\mathcal{Q}$ space \cite{moerdijk}, given by $C=\mathrm{A}(2\kappa _{P}|\epsilon _{P}|)^{-1}$, and $\Delta =\mathrm{A}(2|\epsilon _{P}|)^{-1}$. Note that if the direct interaction is resonant, $|a_{\mathrm{bg}}|\gg r_{0}$, the energy dependence of the complex energy shift is given by \cite{marcelis06} $\mathcal{A}(E)=\Delta -iCk(1+ika^{P})^{-1}$, where $a^{P}=\kappa _{P}^{-1}$, yielding an energy-dependent resonance shift.\begin{figure}[ht!] \includegraphics[width=8.3088cm]{ABMpbs} \caption{(Color online) Illustration of the threshold behavior in a (fictitious) two-channel version of the dressed ABM. The threshold behavior is determined by the coupling between the least bound level in the open channel in $\mathcal{P}$ space and the resonant bound level in $\mathcal{Q}$ space. The uncoupled levels are shown as the blue ($\epsilon_P$) and red ($\epsilon_Q$) dash-dotted lines, with $\epsilon_Q$ crossing the threshold at $\widetilde{B}_{0}$. The solid black lines represent the dressed levels, with the upper branch crossing the threshold at $B_0$. Near the threshold, the dressed level shows the characteristic quadratic dependence on ($B-B_0$) (see inset). For pure ABM levels (dotted gray) no threshold effects occur and the coupled bound state crosses the threshold at $B_{0}^{\prime }$.} \label{SRplot} \end{figure} Since we consider one open channel, the (elastic) $S$-matrix element can be written as $e^{2i\delta (k)}$, where $\delta (k)$ is the scattering phase shift. The scattering length, defined as the limit $a\equiv -\tan \delta (k)/k,\,(k\rightarrow 0)$, is found to be given by Eq.~(\ref{eq:dispersiveA}), which shows the well-known dispersive behavior.
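This dispersive behavior can be written in the standard textbook form $a(B)=a_{\mathrm{bg}}\left(1-\Delta B/(B-B_{0})\right)$ (we quote the generic parametrization here, not Eq.~(\ref{eq:dispersiveA}) itself); a short numerical sketch with purely illustrative values:

```python
def a_dispersive(B, a_bg, B0, dB):
    """Standard dispersive form of the scattering length near a
    Feshbach resonance: a(B) = a_bg * (1 - dB / (B - B0))."""
    return a_bg * (1.0 - dB / (B - B0))

# Illustrative values (a in units of a_0, B in G); not fitted parameters.
a_bg, B0, dB = -215.6, 150.0, 0.5

print(a_dispersive(B0 + 2 * dB, a_bg, B0, dB))  # a_bg / 2
print(a_dispersive(B0 + dB, a_bg, B0, dB))      # zero crossing at B0 + dB
```

The scattering length diverges at $B=B_{0}$ and passes through zero one width $\Delta B$ away from the pole.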
The direct scattering process is described by the scattering length $a_{\mathrm{bg}}=a_{\mathrm{bg}}^{P}+a^{P}$. At the magnetic field value $B_{0}$, where the \emph{dressed} bound state crosses the threshold of the entrance channel, the scattering length has a singularity. The dressed state can be considered as a (quasi-) bound state of the total scattering system. The energy of these states is obtained by finding the poles of the total $S$ matrix Eq.~(\ref{eQ:SmatrixSRA}). This amounts to solving \begin{equation} (k-i\kappa _{P})\left( E-\epsilon _{Q}-\mathcal{A}(E)\right) =0, \label{eq:PoleEqn} \end{equation}for $k$. Due to the underlying assumptions, this equation is only valid for energies around threshold, where the open and closed channel poles dominate. Near threshold, the shifted energy of the uncoupled molecular state, $\epsilon _{Q}+\Delta $, can be approximated by $\mu _{rel}(B-B_{0})$. This allows us to solve Eq.~(\ref{eq:PoleEqn}) for $E$, and we readily obtain \begin{equation} E=-\left( \frac{2|\epsilon _{P}|^{3/2}\mu _{rel}(B-B_{0})}{\mathrm{A}}\right) ^{2}, \end{equation} retrieving the characteristic quadratic threshold behavior of the dressed level as a function of ($B-B_0$). The energy of the molecular state close to resonance is also given by $E=-\hbar ^{2}/(2\mu a^{2})$; this allows us to express the field width of the resonance as $\Delta B=C(a_{\mathrm{bg}}\mu _{rel})^{-1}$. We apply the above Feshbach theory to a (fictitious) two-channel version of the ABM; the results are shown in Fig.~\ref{SRplot}. This two-channel system is represented by a symmetric $2\times 2$ Hamiltonian matrix with one open and one closed channel. The open and closed channel binding energies $\epsilon _{P}$ and $\epsilon _{Q}$, respectively, are given by the diagonal matrix elements, while the coupling is represented by the (identical) off-diagonal matrix elements.
The closed channel bound state is made linearly dependent on the magnetic field, while the coupling is taken to be constant. In addition to $\epsilon _{P}$ and $\epsilon _{Q}$, we plot the corresponding ABM solution, which in this case is equivalent to a typical two-level avoided crossing solution. The figure illustrates the evolution from the ABM to the dressed ABM approach; the latter solutions are the two physical solutions of Eq.~(\ref{eq:PoleEqn}), which are also plotted. Since the dressed ABM solutions account for threshold effects, they show the characteristic quadratic bending towards threshold as a function of magnetic field. From this curvature the resonance width can be deduced. \begin{figure}[ht!] \includegraphics[width=8.3088cm]{ABMplusLiKmf3} \caption{(Color online) Dressed bound states for $^6$Li-$^{40}$K for $M_{F}=-3$ (black lines, see also Table \protect\ref{tab:ABMplusresult}). The uncoupled $\mathcal{Q}$ and $\mathcal{P}$ bound states ($\mathcal{H}_{PQ}=\mathcal{H}_{QP}=0$) are represented by the dot-dashed lines (red and blue, respectively). The gray shaded area is shown in detail in Fig. \ref{fig:ABMplusLiKmf3scat}.} \label{fig:ABMplusLiKmf3} \end{figure} \subsection{The dressed Asymptotic Bound state Model} \label{sect:dressABM} To illustrate the presented model for a realistic case, we will use throughout this section the example of the $^6$Li-$^{40}$K system prepared in the $|f_{\mathrm{Li}} m_{f_{\mathrm{Li}}},f_\mathrm{K} m_{f_\mathrm{K}}\rangle= |1/2,+1/2,9/2,-7/2\rangle$ two-body hyperfine state. This particular mixture is the energetically lowest spin combination of the $M_F=-3$ manifold, allowing us to consider only one open channel. We note that the model can be extended to cases containing more open channels. In order to calculate the width of a Feshbach resonance using the method presented in Sect.
\ref{sect:tailFeshbach} three quantities are required: the binding energy $\epsilon_P$ of the open channel, the binding energy $\epsilon_Q$ of the closed channel responsible for the Feshbach resonance, and the coupling matrix element $\mathcal{K}$ between the two channels. In the following we describe how to obtain these quantities from the ABM by two simple basis transformations. For ultracold collisions the hyperfine and Zeeman interactions determine the thresholds of the various channels, and thus the partitioning of the Hilbert space into the subspaces $\mathcal{P}$ and $\mathcal{Q}$; a natural basis for our tailored Feshbach formalism therefore consists of the eigenstates of $\mathcal{H}^\mathrm{int}$. Experimentally, a system is prepared in an eigenstate $|\alpha \beta \rangle$ of the internal Hamiltonian $\mathcal{H}^\mathrm{int}$, which we refer to as the entrance channel (cf. Section \ref{sect:ABMoverview}). Performing a basis transformation from the $\{|\sigma \rangle\}$ basis to the pair basis $\{|\alpha \beta \rangle\}$ allows us to identify the open and closed channel subspaces. The open channel has the same spin structure as the entrance channel. We now perform a second basis transformation which diagonalizes within $\mathcal{Q}$ space without affecting $\mathcal{P}$ space. We obtain the eigenstates of $\mathcal{H}_{QQ}$ and are able to identify the bound state responsible for a particular Feshbach resonance. The bare bound states of $\mathcal{Q}$ space are defined as $\{|\phi _{Q_{1}}\rangle ,|\phi _{Q_{2}}\rangle ,\ldots \}$ with binding energies $\{\epsilon _{Q_{1}},\epsilon _{Q_{2}},\ldots \}$. For the one-dimensional $\mathcal{P}$ space, which is unaltered by this transformation, the bare bound state $|\Omega _{P}\rangle $ of $\mathcal{H}_{PP}$ is readily identified, with binding energy $\epsilon _{P}$.
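The two basis transformations described above amount to block-diagonalizing $\mathcal{H}_{QQ}$ while leaving the one-dimensional $\mathcal{P}$ space untouched. A minimal linear-algebra sketch (Python/NumPy; the $4\times4$ symmetric matrix below is an arbitrary illustrative example, not the actual $^6$Li-$^{40}$K Hamiltonian):

```python
import numpy as np

# Toy symmetric Hamiltonian in the pair basis: index 0 is the single open
# channel (P space), indices 1..3 span the closed channels (Q space).
# All numbers are arbitrary illustrative values.
H = np.array([[-1.0, 0.3, 0.2, 0.1],
              [ 0.3, 2.0, 0.5, 0.4],
              [ 0.2, 0.5, 3.0, 0.6],
              [ 0.1, 0.4, 0.6, 4.0]])

# Second transformation: diagonalize H_QQ only; U acts as the identity on P.
evals_Q, V = np.linalg.eigh(H[1:, 1:])
U = np.eye(4)
U[1:, 1:] = V
Ht = U.T @ H @ U

# Ht[0, 0] is the unchanged open-channel energy eps_P; the diagonal of
# Ht[1:, 1:] holds the bare Q-space energies eps_{Q_i}; the entries
# Ht[0, 1:] play the role of the couplings <phi_{Q_i}|H_QP|Omega_P>.
print(np.round(Ht, 6))
```

After this transformation each diagonal entry of the $\mathcal{Q}$ block, together with its coupling to the open channel, feeds one resonance into the pole equation.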
In the basis of eigenstates of $\mathcal{H}_{PP}$ and $\mathcal{H}_{QQ}$ we easily find the coupling matrix elements $\langle \phi _{Q_{i}}|\mathcal{H}_{QP}|\Omega _{P}\rangle $ between the $i$-th $\mathcal{Q}$ space bound state and the open channel bound state. This gives the coupling constant $\mathrm{A_{i}}=\langle \phi _{Q_{i}}|\mathcal{H}_{QP}|\Omega _{P}\rangle \langle \Omega _{P}^{D}|\mathcal{H}_{PQ}|\phi _{Q_{i}}\rangle =\mathcal{K}^{2}$ that determines the resonance field $B_{0}$ by solving Eq.~(\ref{eq:PoleEqn}) at threshold, \begin{equation} \epsilon _{Q_{i}}\epsilon _{P}=\frac{\mathcal{K}^{2}}{2}. \end{equation}The field width of this Feshbach resonance is proportional to the magnetic field difference between the crossings of the dressed ($B_{0}$) and uncoupled $\mathcal{Q}$ bound states ($\widetilde{B}_{0}$) with threshold, since \begin{equation} \Delta B=\frac{a^{P}}{a_{\mathrm{bg}}}(B_{0}-\widetilde{B}_{0})=\frac{1}{a_{\mathrm{bg}}}\frac{\mathcal{K}^{2}}{2\kappa _{P}|\epsilon _{P}|\mu _{rel}}. \end{equation}\begin{figure}[ht!] \includegraphics[width=8.3088cm]{ABMLiKmf3Scat} \caption{(Color online) A zoom of the dressed ABM spectrum shown in Fig.~\ref{fig:ABMplusLiKmf3}. The dressed molecular states are shown near threshold (black). The field width of a resonance is related to the difference between the magnetic fields at which the dressed and uncoupled $\mathcal{Q}$ bound states cross the threshold.} \label{fig:ABMplusLiKmf3scat} \end{figure} We illustrate the dressed ABM for $^6$Li-$^{40}$K in Figs.~\ref{fig:ABMplusLiKmf3} and \ref{fig:ABMplusLiKmf3scat}, for $M_{F}=-3$. To demonstrate the effect of $\mathcal{H}_{PQ}$, we plotted for comparison both the uncoupled and dressed bound states \footnote{For clarity only one of the two physical solutions is shown.}. Details of the near-threshold behavior (gray shaded area in Fig.~\ref{fig:ABMplusLiKmf3}) are shown in Fig.~\ref{fig:ABMplusLiKmf3scat}, together with the obtained scattering length.
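Taking $\epsilon_{Q}$ linear in $B$, the threshold condition $\epsilon_{Q_{i}}\epsilon_{P}=\mathcal{K}^{2}/2$ and the width formula above can be evaluated directly. A hedged numerical sketch in Python (reduced units with $\hbar=2\mu=1$; every parameter value below is an arbitrary placeholder, not a fitted Li-K quantity):

```python
import math

# Placeholder parameters in reduced units (hbar = 2*mu = 1); none of these
# are fitted 6Li-40K values.
eps_P   = -1.0    # open-channel binding energy
mu_rel  = 1.0     # relative magnetic moment of the Q channel
K       = 0.3     # coupling matrix element
a_bg    = 2.0     # background scattering length
B0_bare = 150.0   # field where the *uncoupled* Q state crosses threshold

kappa_P = math.sqrt(abs(eps_P))   # from |eps_P| = hbar^2 kappa_P^2 / (2 mu)

# Threshold condition eps_Q(B0)*eps_P = K^2/2, with eps_Q(B) = mu_rel*(B - B0_bare):
B0 = B0_bare + K**2 / (2.0 * eps_P * mu_rel)

# Field width Delta B = (1/a_bg) * K^2 / (2 kappa_P |eps_P| mu_rel):
def width(K):
    return K**2 / (2.0 * a_bg * kappa_P * abs(eps_P) * mu_rel)

print(B0 - B0_bare, width(K))   # dressed resonance is shifted; width scales as K^2
```

Both the shift $B_{0}-\widetilde{B}_{0}$ and $\Delta B$ scale as $\mathcal{K}^{2}$, so doubling the coupling quadruples the width.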
We solved the pole equation of the total $S$-matrix, Eq.~(\ref{eq:PoleEqn}), for each $Q$-state and plotted only the physical solutions which cause Feshbach resonances. The dressed bound states show the characteristic quadratic bending near the threshold. We have used $C_{6}$ to determine $r_{0}$ ($\approx a_{\mathrm{bg}}^{P}$) from Eq.~(\ref{eq:rvdW}). Table \ref{tab:ABMplusresult} summarizes the results of the dressed ABM for the $^6$Li-$^{40}$K mixture. Note that the positions of the Feshbach resonances will be slightly different compared to the results from the regular ABM, for equal values of $\epsilon_\nu^{S,0}$. Therefore, we have again performed a $\chi^2$ analysis and found new values of the binding energies, $\epsilon^0/h=713~\mathrm{MHz}$ and $\epsilon^1/h=425~\mathrm{MHz}$, which yield a lower $\chi^2$ minimum as compared to the regular ABM calculation. The obtained value of $\Delta B$ generally underestimates the field width of a resonance. This originates from the fact that only the dominant bound state pole corresponding to $a^P$ has been taken into account. By including the pole of the dominant \emph{virtual} state in the Mittag-Leffler expansion, the coupling between the open and closed channel will increase and, hence, $\Delta B$ will increase. \begin{table} \caption{\label{tab:ABMplusresult} The positions of all experimentally observed $s$-wave Feshbach resonances of $^6$Li$-^{40}$K. Column 2 gives the $^6$Li ($m_{f_{Li}}$) and $^{40}$K ($m_{f_K}$) hyperfine states. For all resonances $f_{Li}=1/2$ and $f_{K}=9/2$. Note that the experimental width of the loss feature $\Delta B_\mathrm{exp}$ is not the same as the field width $\Delta B$ of the scattering length singularity. The resonance positions $B_0$ and widths $\Delta B$ obtained by the dressed ABM result from minimizing $\chi^2$. The last two columns show the results of full coupled channels (CC) calculations. All magnetic fields are given in Gauss.
The experimental and CC values for $M_F<0$ and $M_F>0$ are taken from Refs.~\cite{wille} and \cite{dePRL}, respectively. The resonances marked with $^{\ast}$ have also been studied in Refs. \cite{voigt09,spiegelhalder10}.} \begin{tabular}{cccccccc} \hline \hline & & \multicolumn{2}{c}{Experiment} & \multicolumn{2}{c}{ABM+} & \multicolumn{2}{c}{CC}\\ $M_F$ & $m_{f_{Li}}$,$m_{f_{K}}$ & $B_0$ & $\Delta B_\mathrm{exp}$ & $B_0$ & $\Delta B$ & $B_0$ & $\Delta B$ \\ \hline \\ [-2ex] -5 & $-\frac{1}{2}$, $-\frac{9}{2}$ & 215.6 & 1.7 & 216.2 & 0.16 & 215.6 & 0.25\\ [.5ex] -4 & $+\frac{1}{2}$, $-\frac{9}{2}$ & 157.6 & 1.7 & 157.6 & 0.08 & 158.2 & 0.15 \\ [.5ex] -4 & $+\frac{1}{2}$, $-\frac{9}{2}$ & 168.2$^{\ast}$ & 1.2 & 168.5 & 0.08 & 168.2 & 0.10 \\ [.5ex] -3 & $+\frac{1}{2}$, $-\frac{7}{2}$ & 149.2 & 1.2 & 149.1 & 0.12 & 150.2 & 0.28 \\ [.5ex] -3 & $+\frac{1}{2}$, $-\frac{7}{2}$ & 159.5 & 1.7 & 159.7 & 0.31 & 159.6 & 0.45 \\ [.5ex] -3 & $+\frac{1}{2}$, $-\frac{7}{2}$ & 165.9 & 0.6 & 165.9 & 0.0002 & 165.9 & 0.001 \\ [.5ex] -2 & $+\frac{1}{2}$, $-\frac{5}{2}$ & 141.7 & 1.4 & 141.4 & 0.12 & 143.0 & 0.36 \\ [.5ex] -2 & $+\frac{1}{2}$, $-\frac{5}{2}$ & 154.9$^{\ast}$ & 2.0 & 154.8 & 0.50 & 155.1 & 0.81 \\ [.5ex] -2 & $+\frac{1}{2}$, $-\frac{5}{2}$ & 162.7 & 1.7 & 162.6 & 0.07 & 162.9 & 0.60 \\ [.5ex] +5 & $+\frac{1}{2}$, $+\frac{9}{2}$ & 114.47(5) & 1.5(5) & 115.9 & 0.91 & 114.78& 1.82 \\ [.5ex] \hline \hline \end{tabular} \end{table} \section{Summary and Conclusion} \label{sect:Discussion} We have presented a model to accurately describe Feshbach resonances. The model allows for fast and accurate prediction of resonance positions and widths with very little experimental input. The reduction of the basis to only a few states allows us to describe Feshbach resonances in a large variety of systems without accurate knowledge of the scattering potentials.
Using the ABM in combination with the accumulated phase method allows us to describe Feshbach resonances in alkali systems with a large degree of accuracy, using only three input parameters. Additionally, the fast computation time of the model allows us to map all available Feshbach resonances in a system and select the optimal resonance required to perform a certain experiment. For the $^6$Li-$^{40}$K mixture we have utilized this ability to find a broad resonance, as presented in Ref. \cite{dePRL}. In addition, locating e.g. overlapping resonances in multi-component (spin) mixtures can be performed easily using the ABM. An additional important feature is that the model can be extended stepwise to include more phenomena, allowing more complex systems to be described. For example, a possible extension would be to include the contribution of the dominant virtual state in the Mittag-Leffler expansion; this would allow for an accurate description of the resonance widths for systems with a large and negative $a_{\mathrm{bg}}$. Additionally, including the dipole-dipole interaction allows one to describe systems where Feshbach resonances occur due to dipole-dipole coupling. Finally, it has already been shown by Tscherbul et al. \cite{tscherbul10} that the ABM can be successfully extended by including coupling to bound states by means of an externally applied radio-frequency field. The approach as described in the ABM is in principle not limited to two-body systems. Magnetic field induced resonances in e.g. dimer-dimer scattering have already been experimentally observed \cite{chin05}. For few-body systems an approach that avoids solving the coupled radial Schr\"odinger equations is very favorable. This large variety of unexplored features illustrates the richness of the model.
This work is part of the research program on Quantum Gases of the Stichting voor Fundamenteel Onderzoek der Materie (FOM), which is financially supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO). \bibliographystyle{apsrev}
\section{Introduction} \end{center} In recent years, there have been many works on self-similar automorphism groups of the rooted tree $X^{*}$ (\cite{LGN}, \cite{Gri}, \cite{Nekrashevych}). The adding machine group is a typical example of self-similarity. We denote this group by $A$. $A$ is a cyclic group generated by \begin{equation*} a=(\underset{p-1 \ \text{times}}{\underbrace{1,1,\ldots,1}},a)\sigma \end{equation*} where $a$ is an automorphism of the $p-$ary rooted tree and $\sigma =(012\ldots (p-1))$ is a permutation on $X=\{0,1,2,\ldots,(p-1)\}$. Thus, $A$ is isomorphic to $\mathbb{Z}$. On the other hand, one can consider the automorphism $a$ as adding one to a $p-$adic integer. That is why the term adding machine is used (\cite{Gri}). In \cite{Holly}, the $p-$adic integers are pictured on a tree. This picture reflects the fact that any ultrametric space can be represented on a tree. In this paper, we equip $Aut(X^{*})$ with a natural metric and prove that the group of $p-$adic integers is both isometric and isomorphic to the closure $\overline{A}$ of the adding machine group, a subgroup of the automorphism group of the $p-$ary rooted tree. First we recall basic definitions and notions. \noindent \textit{$p-$adic integers:} A $p-$adic integer is a formal series \begin{equation*} \sum_{i\geq0}a_{i}p^{i} \end{equation*} where each $a_{i}\in \{0,1,2,\ldots, (p-1)\}$; the set of all $p-$adic integers is denoted by $\mathbb{Z}_{p}$. Suppose that $a=\sum_{i\geq 0}a_{i}p^{i}$ and $b=\sum_{i\geq 0}b_{i}p^{i}$ are elements of $\mathbb{Z}_{p}$. Then the sum $c=\sum_{i\geq 0}c_{i}p^{i}$ of $a$ and $b$ is determined for each $m\in \{0,1,2,\ldots\}$ by \begin{equation*} \sum_{i= 0}^{m}c_{i}p^{i}\equiv\sum_{i= 0}^{m}(a_{i}+b_{i})p^{i} \ \ \ (\mathrm{mod}\ p^{m+1}) \end{equation*} where $c_{i}\in \{0,1,\ldots,(p-1)\}$. $\mathbb{Z}_{p}$ is a group under this operation and is called the group of $p-$adic integers. Let $a=\sum_{i\geq0}a_{i}p^{i}$ be a nonzero element of $\mathbb{Z}_{p}$.
Then there is a first index $v(a)\geq 0$ such that $a_{v(a)}\neq0$. This index is called the order of $a$ and is denoted by $ord_{p}(a)$. If $a_{i}=0$ for $i=0,1,2,\ldots$ then $ord_{p}(a)=\infty$. On the other hand, the $p-$adic value of $a$ is given by \begin{equation*} |a|_{p}=\left\{ \begin{array}{lll} 0 & , & \text{if} \ a_{i}=0 \ \text{for} \ i=0,1,2,\ldots, \\ p^{-ord_{p}(a)} & , & \text{otherwise} \end{array} \right. \end{equation*} and $d_{p}(a,b)=|a-b|_{p}$ for $a,b \in \mathbb{Z}_{p}$ is a metric on $\mathbb{Z}_{p}$ (for details see \cite{Fernando}, \cite{Robert} and \cite{Sch}). \noindent \textit{The automorphism group of the rooted tree:} Let $X$ be a finite set (alphabet) and let \begin{equation*} X^{\ast }=\{x_{1}x_{2}\ldots x_{n}\ |\ x_{i}\in X,\ n\geqslant 0\} \end{equation*} be the set of all finite words. The length of a word $v=x_{1}x_{2}\ldots x_{n}\in X^{\ast }$ is the number of its letters and is denoted by $|v|$. The product of $v_{1},v_{2}\in X^{\ast }$ is naturally defined by concatenation $v_{1}v_{2}$. One can think of $X^{\ast }$ as the vertex set of a rooted tree. \begin{figure}[h] \label{tree} \centering \pstree[nodesep=0.75pt]{\TR[name=R]{$\emptyset$} }{ \pstree{ \TR{0} }{ \pstree{ \TR{00} } { \TR[name=d3]{000} \TR{001} } \pstree{ \TR{01} } { \TR{010} \TR{011} } } \pstree{ \TR{1} }{ \pstree { \TR{10} } { \TR{100} \TR{101} } \pstree { \TR{11} } { \TR{110} \TR[name=d2]{111} } } } \caption{The first three levels of the binary rooted tree $X^{\ast }$ for $X=\{0,1\}$} \end{figure} The set $X^{n}=\{v\in X^{\ast } \ | \ |v|=n \}$ is called the $n$th level of $X^{\ast }$. The empty word $\emptyset$ is the root of the tree $X^{\ast }$. Two words are connected by an edge if and only if they are of the form $v,vx$ where $v\in X^{\ast }$ and $x\in X$. A map $f:X^{\ast }\rightarrow X^{\ast }$ is an endomorphism of the tree $X^{\ast }$ if it preserves the root and adjacency of the vertices.
An automorphism is a bijective endomorphism. The group of all automorphisms of the tree $X^{\ast }$ is denoted by $Aut(X^{\ast })$. If $G\leq Aut(X^{\ast })$ is an automorphism group of the rooted tree $X^{\ast }$ then for $v\in X^{\ast }$, the subgroup \begin{equation*} G_{v} =\{g\in G\text{ }|\text{ } g(v)=v\} \end{equation*} is called the vertex stabilizer. The $n$th level stabilizer is the subgroup \begin{equation*} St_{G}(n) =\bigcap_{v\in X^{n}} G_{v}. \end{equation*} We need a useful way to express automorphisms of the rooted tree $X^{\ast }$. For this aim, we give a definition and a proposition from \cite{Nekrashevych}. \begin{definition}[\cite{Nekrashevych}] Let $H$ be a group acting (from the right) by permutations on a set $X$ and let $G$ be an arbitrary group. Then the (permutational) wreath product $G\wr H$ is the semi-direct product $G^{X}\rtimes H$, where $H$ acts on the direct power $G^{X}$ by the respective permutations of the direct factors. \end{definition} Let $|X|=d$. The multiplication rule for the elements $(g_{1},g_{2},...,g_{d})h\in G\wr H$ is given by the formula \begin{equation*} (g_{1},g_{2},...,g_{d})\alpha(h_{1},h_{2},...,h_{d})\beta=(g_{1}h_{\alpha(1)},g_{2}h_{\alpha(2)},...,g_{d}h_{\alpha(d)})\alpha \beta \end{equation*} where $g_{i},h_{i}\in G,\alpha,\beta\in H$ and $\alpha(i)$ is the image of $i$ under the action of $\alpha$. \begin{proposition}[\cite{Nekrashevych}] Denote by $S(X)$ the symmetric group of all permutations of $X$. Fix some indexing $\{x_{1},x_{2},...,x_{d}\}$ of $X$.
Then we have an isomorphism \begin{equation*} \psi:Aut(X^{\ast })\rightarrow Aut(X^{\ast})\wr S(X), \end{equation*} given by \begin{equation*} \psi(g)=(g|_{x_{1}},g|_{x_{2}},...,g|_{x_{d}})\alpha, \end{equation*} where $\alpha$ is the permutation equal to the action of $g$ on $X\subset X^{\ast}.$ \end{proposition} Thus, $g\in Aut(X^{\ast })$ is identified with the image $\psi(g)\in Aut(X^{\ast})\wr S(X)$ and it is written as \begin{equation*} g=(g|_{x_{1}},g|_{x_{2}},...,g|_{x_{d}})\alpha. \end{equation*} \noindent \textit{The adding machine group:} Let $a$ be the transformation on $X^{\ast }$ defined by the wreath recursion \begin{equation*} a=(\underset{p-1 \ \text{times}}{\underbrace{1,1,\ldots,1}},a)\sigma \end{equation*} where $\sigma =(012\ldots (p-1))$ is an element of the symmetric group on $X=\{0,1,2,\ldots,(p-1)\}$. The transformation $a$ generates an infinite cyclic group on $X^{\ast }$. This group is called the adding machine group and we denote it by $A$. \begin{figure}[h] \centering \includegraphics[scale=0.75]{adding} \caption{Portrait of the transformation $a$ for $X=\{0,1\}$ and $X=\{0,1,...,p-1\}$} \end{figure} For example, using the permutational wreath product we obtain that \begin{equation*} \begin{array}{ccl} a^{p} & = & (1,\ldots,1,a)\sigma (1,\ldots,1,a)\sigma \ldots(1,\ldots,1,a)\sigma \\ & = & (a,a,\ldots,a)\sigma ^{p} \\ & = & (a,a,\ldots,a) \end{array} \end{equation*} (for details see \cite{LGN}, \cite{Nekrashevych}). \begin{center} \renewcommand{\thefigure}{\arabic{section}.\arabic{figure}} \setcounter{figure}{0} \renewcommand{\therc}{\arabic{section}.\arabic{rc}} \setcounter{rc}{0} \section{The Metric Space ($Aut(X^{\ast })$, $d$)} \end{center} We define a natural metric on the automorphism group of the $p-$ary rooted tree $X^{\ast}$, where $X=\{0,1,2,...,p-1\}$. This metric is also used in \cite{Sunic}.
\begin{definition} \label{metric} Let $g_{1},g_{2}\in Aut(X^{\ast })$. \begin{equation*} d(g_{1},g_{2})=\left\{ \begin{array}{lll} \frac{1}{p^{k}} & & \text{for} \ g_{1}^{-1}g_{2}\in St_{Aut(X^{\ast })}(k) \ \text{and} \ g_{1}^{-1}g_{2}\notin St_{Aut(X^{\ast })}(k+1), \\ 0 & & \text{for} \ g_{1}=g_{2}. \end{array} \right. \end{equation*} In other words, if $g_{1}$ and $g_{2}$ agree on all vertices of level $k$ but do not agree on at least one vertex of level $k+1$ of the tree $X^{\ast }$, then the distance between $g_{1}$ and $g_{2}$ is $\frac{1}{p^{k}}$. \end{definition} ($Aut(X^{\ast })$, $d$) is a metric space. Moreover, it can easily be shown that the metric space ($Aut(X^{\ast })$, $d$) is compact. \begin{proposition} $Aut(X^{\ast })$ is a topological group. \end{proposition} \begin{proof} First we prove that \begin{equation*} \begin{array}{ccccc} \psi & : & Aut(X^{\ast }) \times Aut(X^{\ast })& \longrightarrow & Aut(X^{\ast }) \\ & & (g,h) & \longmapsto & gh \end{array} \end{equation*} is a continuous map. We take an arbitrary $(g_{0},h_{0})\in Aut(X^{\ast })\times Aut(X^{\ast })$. Let $U$ be a neighborhood of $g_{0}h_{0}$. There exists an integer $n$ such that \begin{equation*} B\Big(g_{0}h_{0},\frac{1}{p^{n}}\Big)=\Big\{f\mid d(f,g_{0}h_{0})< \frac{1}{p^{n}}\Big\}\subseteq U. \end{equation*} We take an open set \begin{equation*} V=V_{1}\times V_{2}=\{(g,h) \ | \ g\in V_{1}, h\in V_{2} \} \end{equation*} of $Aut(X^{\ast })\times Aut(X^{\ast })$ such that \begin{equation*} V_{1}=B\Big(g_{0},\frac{1}{p^{n}}\Big)=\Big\{g \ | \ d(g,g_{0})<\frac{1}{p^{n}}\Big\} \end{equation*} and \begin{equation*} V_{2}=B\Big(h_{0},\frac{1}{p^{n}}\Big)=\Big\{h \ | \ d(h,h_{0})<\frac{1}{p^{n}}\Big\}. \end{equation*} Now, we show that $\psi(V)\subseteq U$ where \begin{equation*} \psi(V)=\psi(V_{1}\times V_{2})=\{gh \ | \ g\in V_{1}, h\in V_{2} \}. \end{equation*} Let $gh\in \psi(V)$. Thus, we have $g\in V_{1}$ and $h\in V_{2}$.
Namely, we obtain that \begin{equation} \label{denk} g^{-1}g_{0}\in St_{Aut(X^{\ast })}({n+1}) \ \text{and} \ h^{-1}h_{0}\in St_{Aut(X^{\ast })}({n+1}). \end{equation} Furthermore, using (\ref{denk}) we get \begin{equation*} (gh)^{-1}g_{0}h_{0}=h^{-1}(g^{-1}g_{0})h_{0}\in St_{Aut(X^{\ast })}({n+1}). \end{equation*} Thus, $gh\in U$ and $\psi$ is continuous. Similarly, we prove that \begin{equation*} \begin{array}{ccccc} \varphi & : & Aut(X^{\ast }) & \longrightarrow & Aut(X^{\ast }) \\ & & g & \longmapsto & g^{-1} \end{array} \end{equation*} is continuous. We take an arbitrary $g_{0}\in Aut(X^{\ast })$. Let $U$ be a neighborhood of $g_{0}^{-1}$. So, there exists an integer $n$ such that \begin{equation*} B\Big(g_{0}^{-1},\frac{1}{p^{n}}\Big)=\Big\{f \ | \ d(f,g_{0}^{-1})<\frac{1}{p^{n}}\Big\}\subseteq U. \end{equation*} We take a neighborhood $V$ of $g_{0}$ in $Aut(X^{\ast })$ such that \begin{equation*} V = B\Big(g_{0},\frac{1}{p^{n}}\Big)=\Big\{g \ | \ d(g,g_{0})< \frac{1}{p^{n}}\Big\}. \end{equation*} Now, we show that $\varphi(V)\subseteq U$. Let $g^{-1}\in \varphi(V)$. Thus, we have $g\in V$. In other words, \begin{equation*} gg_{0}^{-1}\in St_{Aut(X^{\ast })}({n+1}). \end{equation*} Due to the definition of $U$, $g^{-1}\in U$. That is, $\varphi$ is continuous. \end{proof} \begin{proposition} $\overline{A}$ is a subgroup of $Aut(X^{\ast })$. \end{proposition} \begin{proof} We show that $gh\in \overline{A}$ and $g^{-1}\in \overline{A}$ for all $g,h\in \overline{A}$. Suppose that $g,h\in \overline{A}$. This means that there are sequences $(g_{n})$, $(h_{n})$ in $A$ such that \begin{equation*} \lim_{n\rightarrow \infty}g_{n}=g \ \text{and} \ \lim_{n\rightarrow \infty}h_{n}=h. \end{equation*} Thus, it follows that $\lim_{n\rightarrow \infty}(g_{n},h_{n})=(g,h)$.
On the other hand, we proved that \begin{equation*} \begin{array}{ccccc} \psi & : & Aut(X^{\ast }) \times Aut(X^{\ast })& \longrightarrow & Aut(X^{\ast }) \\ & & (g,h) & \longmapsto & gh \end{array} \end{equation*} is continuous. Hence, we have \begin{equation*} \lim_{n\rightarrow \infty}g_{n}h_{n}=gh. \end{equation*} It follows that $gh\in\overline{A}$ since the sequence $(g_{n}h_{n})$ is in $A$. Similarly, because \begin{equation*} \begin{array}{ccccc} \varphi & : & Aut(X^{\ast }) & \longrightarrow & Aut(X^{\ast }) \\ & & g & \longmapsto & g^{-1} \end{array} \end{equation*} is continuous, we obtain \begin{equation*} \lim_{n\rightarrow \infty}g_{n}^{-1}=g^{-1}. \end{equation*} That is, $g^{-1}\in \overline{A}$. Thus, $\overline{A}$ is a subgroup of $Aut(X^{\ast })$. \end{proof} \begin{center} \renewcommand{\thefigure}{\arabic{section}.\arabic{figure}} \setcounter{figure}{0} \renewcommand{\therc}{\arabic{section}.\arabic{rc}} \setcounter{rc}{0} \section{Embedding of the Group of $p-$adic Integers into the Automorphism Group of the $p-$ary Rooted Tree} \end{center} Now we give a formula for the distance between two elements of the adding machine group. Notice that this expression is similar to the distance between two $p-$adic integers. \begin{proposition} \label{psel} For $a^{n},a^{m}\in A$, the distance $d(a^{n},a^{m})$ is given by \begin{equation*} \begin{array}{lllll} d & : & A\times A & \rightarrow & \mathbb{R} \\ & & (a^{n},a^{m}) & \mapsto & d(a^{n},a^{m})=\left\{ \begin{array}{lll} 0 & & \text{for} \ n=m,\\ \frac{1}{p^{k}} & & \text{for} \ n-m=tp^{k} \end{array} \right. \end{array} \end{equation*} where $t,k\in\mathbb{Z}$ with $k\geq 0$, $p$ is a prime number and $(p,t)=1.$ \end{proposition} \begin{proof} First we compute $St_{A}(1)$. Using the permutational wreath product we obtain that \begin{equation*} \begin{array}{ccl} a^{p} & = & (1,1,\ldots,a)\sigma (1,1,\ldots,a)\sigma \ldots(1,1,\ldots,a)\sigma \\ & = & (a,a,\ldots,a) \end{array} \end{equation*} Thus, $St_{A}(1)=\langle a^{p} \rangle$.
Moreover, we get \begin{equation*} \begin{array}{ccl} a^{p^{2}} & = & a^{p}a^{p}\ldots a^{p} \\ & = & (a,a,\ldots,a)(a,a,\ldots,a)\ldots(a,a,\ldots,a) \\ & = & (a^{p},a^{p},\ldots,a^{p}) \end{array} \end{equation*} We have $a^{p^{2}}\in St_{A}(2)$ because $a^{p}\in St_{A}(1)$. Therefore, $St_{A}(2)=\langle a^{p^{2}} \rangle$. By proceeding in a similar manner, we compute $St_{A}(k)=\langle a^{p^{k}} \rangle$. So, the elements of $A$ which are in $St_{A}(1)$ but are not in $St_{A}(2)$ can be expressed as \begin{equation*} St_{A}(1)-St_{A}(2)=\{a^{tp}:(p,t)=1 \} \end{equation*} and in general, we have \begin{equation*} St_{A}(k)-St_{A}(k+1)=\{a^{tp^{k}}:(p,t)=1\}. \end{equation*} Let us take arbitrary $a^{n},a^{m}\in A$. If $n=m$ then $a^{n}=a^{m}$ and $d(a^{n},a^{m})=0$. Assume $n\neq m$. Then there is a unique expression $n-m=tp^{k}$ such that $(p,t)=1$. We obtain \begin{equation*} a^{-m}a^{n}=a^{n-m}=a^{tp^{k}}\in St_{A}(k)-St_{A}(k+1) \end{equation*} and $d(a^{n},a^{m})=\frac{1}{p^{k}}$. \end{proof} \begin{proposition} \label{yak} Let $\sum_{i \geq 0} \alpha_{i}p^{i}\in \mathbb{Z}_{p}$. Then the sequence \begin{equation*} a^{\alpha_{0}}, a^{\alpha_{0}+\alpha_{1}p}, a^{\alpha_{0}+\alpha_{1}p+\alpha_{2}p^{2}},\ldots \end{equation*} is convergent. \end{proposition} \begin{proof} For any $\varepsilon>0$, there is a positive integer $n_{0}$ such that $\frac{1}{p^{n_{0}}}<\varepsilon$. If $k>l$ and $k,l\geq n_{0}$, then the difference of the exponents is divisible by $p^{l+1}$, so we obtain \begin{equation*} d(a^{\alpha_{0}+\alpha_{1}p+...+\alpha_{k}p^{k}},a^{\alpha_{0}+\alpha_{1}p+\ldots+\alpha_{l}p^{l}})\leq\frac{1}{p^{l+1}}<\varepsilon \end{equation*} by Proposition \ref{psel}. Thus, it is a Cauchy sequence. Because $Aut(X^{\ast })$ is a complete metric space, this sequence is convergent.
\end{proof} Now we give our main proposition: \begin{proposition} {\normalsize \label{gomme} } We define {\normalsize \begin{equation*} \varphi : \mathbb{Z}_{p} \rightarrow \overline{A} \end{equation*} such that $\varphi(\sum_{i \geq 0} \alpha_{i}p^{i})$ is the limit of the sequence $a^{\alpha_{0}}, a^{\alpha_{0}+\alpha_{1}p}, a^{\alpha_{0}+\alpha_{1}p+\alpha_{2}p^{2}},\ldots$. Then $\varphi$ is both an isometry and a group isomorphism.} \end{proposition} {\normalsize \begin{proof} By Proposition \ref{yak}, $\varphi$ is well-defined. Now we show that $\varphi$ is an isometry. In other words, we show that $d_{p}(\alpha, \beta)=d(\varphi(\alpha), \varphi(\beta))$ for every $ \alpha, \beta \in \mathbb{Z}_{p}$. Let $\alpha=\sum_{i \geq 0} \alpha_{i}p^{i}$ and $\beta=\sum_{i \geq 0} \beta_{i}p^{i}$. If $d_{p}(\alpha, \beta)=0$ then we obtain $d(\varphi(\alpha), \varphi(\beta))=0$ since $\alpha_{i}=\beta_{i}$ for $i=0,1,2,\ldots$. If $d_{p}(\alpha, \beta)=\frac{1}{p^{k}}$ then $\alpha_{i}=\beta_{i}$ for $i< k$ and $\alpha_{k}\neq \beta_{k}$. We must show that $d(\varphi(\alpha), \varphi(\beta))=\frac{1}{p^{k}}$. Because $\varphi(\alpha)$ and $\varphi(\beta)$ are the limits of the sequences $a^{\alpha_{0}}, a^{\alpha_{0}+\alpha_{1}p}, a^{\alpha_{0}+\alpha_{1}p+\alpha_{2}p^{2}},\ldots$ and $a^{\beta_{0}}, a^{\beta_{0}+\beta_{1}p}, a^{\beta_{0}+\beta_{1}p+\beta_{2}p^{2}},\ldots$ respectively, we obtain \begin{equation*} \lim_{k\rightarrow\infty}(a^{\alpha_{0}+\alpha_{1}p+...+\alpha_{k}p^{k}},a^{\beta_{0}+\beta_{1}p+...+\beta_{k}p^{k}})=(\varphi(\alpha),\varphi(\beta)). \end{equation*} Since any metric function is continuous, \begin{equation*} d(a^{\alpha_{0}},a^{\beta_{0}}), d(a^{\alpha_{0}+\alpha_{1}p},a^{\beta_{0}+\beta_{1}p}),\ldots\rightarrow d(\varphi(\alpha),\varphi(\beta)).
\end{equation*} From Proposition \ref{psel}, we get \begin{equation*} 0,0,...,0,\frac{1}{p^{k}},\frac{1}{p^{k}},\ldots,\frac{1}{p^{k}},\ldots\rightarrow \frac{1}{p^{k}}. \end{equation*} So, we get $d(\varphi(\alpha),\varphi(\beta))=\frac{1}{p^{k}}$. That is, $\varphi$ is an isometry. Moreover, $\varphi$ is injective since it is an isometry. Now we show that $\varphi$ is surjective. Let $b\in \overline{A}$ be arbitrary. Then there exists a sequence \begin{equation*} \label{dizi} a^{n_{0}},a^{n_{1}},\ldots,a^{n_{k}},\ldots\rightarrow b \end{equation*} whose elements are in $A$. Furthermore, every integer $n_{k}$ can be expressed in $\mathbb{Z}_{p}$ as \begin{equation} \label{i} \begin{array}{ccc} n_{0} & = & \alpha _{0}^{0}+\alpha _{1}^{0}p+\alpha _{2}^{0}p^{2}+\ldots \\ n_{1} & = & \alpha _{0}^{1}+\alpha _{1}^{1}p+\alpha _{2}^{1}p^{2}+\ldots \\ & \vdots & \\ n_{k} & = & \alpha _{0}^{k}+\alpha _{1}^{k}p+\alpha _{2}^{k}p^{2}+\ldots \\ & \vdots & \end{array} \end{equation} At least one of the numbers $0,1,2,...,(p-1)$ occurs infinitely many times in the sequence $(\alpha_{0}^{k})_{k}$. We choose one of them and denote it by $\beta_{0}$. Let $(\alpha_{1}^{k_{l}})_{l}$ be a subsequence of $(\alpha_{1}^{k})_{k}$ such that $\alpha_{0}^{k_{l}}=\beta_{0}$ for $l=0,1,2,\ldots$. Similarly, we denote by $\beta_{1}$ any one of the numbers that appears infinitely many times in the sequence $(\alpha_{1}^{k_{l}})_{l}$. Proceeding in this manner, we obtain a sequence \begin{equation*} \label{dizi2} a^{\beta_{0}},a^{\beta_{0}+\beta_{1}p},\ldots,a^{\beta_{0}+\beta_{1}p+\ldots+\beta_{k}p^{k}},\ldots. \end{equation*} By Proposition \ref{yak}, this sequence is convergent. Now we show that this sequence converges to $b$.
Due to the construction of (\ref{i}), there exists a subsequence $(n_{k_{s}})$ of the sequence $(n_{k})$ whose $s$th term has the $p-$adic expression \begin{equation*} \beta_{0}+\beta_{1}p+\beta_{2}p^{2}+\ldots+\beta_{s}p^{s}+\gamma_{s+1}p^{s+1}+\gamma_{s+2}p^{s+2}+\ldots. \end{equation*} Hence, since \begin{equation*} \lim_{s\rightarrow\infty}d(a^{\beta_{0}+\beta_{1}p+\ldots+\beta_{s}p^{s}},a^{n_{k_{s}}})=0 \end{equation*} and by the triangle inequality, the sequence $(a^{\beta_{0}+\beta_{1}p+\ldots+\beta_{k}p^{k}})$ converges to $b$. So, $\varphi(\sum_{i\geq0}\beta_{i}p^{i})=b$ and $\varphi$ is surjective. Finally, we prove that $\varphi$ is a homomorphism. In other words, we prove that \begin{equation*} \varphi(\alpha+\beta)=\varphi(\alpha)\varphi(\beta) \end{equation*} for every $\alpha,\beta\in\mathbb{Z}_{p}$. Let $\alpha=\alpha _{0}+\alpha _{1}p+\alpha _{2}p^{2}+\ldots$, $\beta=\beta_{0}+\beta_{1}p+\beta_{2}p^{2}+\ldots$ and \begin{equation*} \alpha+\beta=\gamma_{0}+\gamma_{1}p+\gamma_{2}p^{2}+\ldots. \end{equation*} From the definition of $\varphi$, \begin{equation*} a^{\gamma_{0}},a^{\gamma_{0}+\gamma_{1}p},a^{\gamma_{0}+\gamma_{1}p+\gamma_{2}p^{2}},...\rightarrow \varphi(\alpha+\beta). \end{equation*} Moreover, it follows that \begin{equation*} a^{(\alpha_{0}+\beta_{0})},a^{(\alpha_{0}+\beta_{0})+(\alpha_{1}+\beta_{1})p},a^{(\alpha_{0}+\beta_{0})+(\alpha_{1}+\beta_{1})p+(\alpha_{2}+\beta_{2})p^{2}},\ldots\rightarrow \varphi(\alpha)\varphi(\beta) \end{equation*} since $Aut(X^{*})$ is a topological group, \begin{equation*} a^{\alpha_{0}},a^{\alpha_{0}+\alpha_{1}p},a^{\alpha_{0}+\alpha_{1}p+\alpha_{2}p^{2}},\ldots\rightarrow \varphi(\alpha) \end{equation*} and \begin{equation*} a^{\beta_{0}},a^{\beta_{0}+\beta_{1}p},a^{\beta_{0}+\beta_{1}p+\beta_{2}p^{2}},\ldots\rightarrow \varphi(\beta).
\end{equation*} In $\mathbb{Z}_{p}$, \begin{equation*} \begin{array}{ccl} \alpha _{0}+\beta_{0} & = & \gamma _{0}+ \overline{\gamma_{0}}p+0p^{2}+0p^{3}+\ldots \\ \alpha _{0}+\beta_{0} + (\alpha _{1}+\beta_{1})p & = & \gamma _{0}+\gamma _{1}p+\overline{\gamma_{1}}p^{2}+0p^{3}+0p^{4}+\ldots \\ & \vdots & \\ \alpha _{0}+\beta_{0} +\ldots+(\alpha _{k}+\beta_{k})p^{k} & = & \gamma _{0}+\gamma _{1}p+\ldots+\gamma _{k}p^{k}+\overline{\gamma_{k}}p^{k+1}+0p^{k+2}+0p^{k+3}+\ldots \\ &\vdots & \end{array} \end{equation*} where $\overline{\gamma_{k}}\in\{0,1\}$ is the carry out of the $k$th digit. Let $x= \alpha _{0}+\beta_{0} +\ldots+(\alpha _{k}+\beta_{k})p^{k}$ and $y=\gamma _{0}+\gamma _{1}p+\ldots+\gamma _{k}p^{k}$. Then $x-y=\overline{\gamma_{k}}p^{k+1}$, and we have \begin{equation*} d(a^{x}, a^{y})=\left\{ \begin{array}{lll} \frac{1}{p^{k+1}} & &\text{if} \ \overline{\gamma_{k}}\neq 0, \\ 0 & &\text{if} \ \overline{\gamma_{k}}=0. \end{array} \right. \end{equation*} So we get $\varphi(\alpha+\beta)=\varphi(\alpha)\varphi(\beta)$ since \begin{equation*} d(a^{\alpha _{0}+\beta_{0}},a^{\gamma _{0}}),d(a^{\alpha _{0}+\beta_{0} + (\alpha _{1}+\beta_{1})p},a^{\gamma _{0}+\gamma _{1}p}),\ldots\rightarrow d(\varphi(\alpha)\varphi(\beta), \varphi(\alpha+\beta)) \end{equation*} and \begin{equation*} \lim_{k\rightarrow \infty}d(a^{x}, a^{y})=0. \end{equation*} Thus the proof is completed. \end{proof} } Consequently, the group of $p-$adic integers $\mathbb{Z}_{p}$ can be isometrically embedded into the metric space $Aut(X^{*})$ since $\overline{A}\subseteq Aut(X^{*})$. \begin{example} We show $\varphi(-1)$ for $p=2$ in Figure \ref{aaa}. It is well-known that \begin{equation*} -1=1+1\cdot 2^{1}+1\cdot 2^{2}+\ldots+1\cdot 2^{k}+\ldots\in \mathbb{Z}_{2}. \end{equation*} Due to the definition of $\varphi$, $\varphi(-1)$ is the limit of the sequence \begin{equation*} a^{1},a^{1+1\cdot 2^{1}},a^{1+1\cdot 2^{1}+1\cdot 2^{2}},\ldots \end{equation*} in $A$ for $X=\{0,1\}$. This limit equals $a^{-1}=(a^{-1},1)\sigma$ by Proposition \ref{psel}.
\begin{figure}[h] \centering \includegraphics[scale=0.70]{add} \caption{The image of $-1\in \mathbb{Z}_{2}$ under the map $\varphi$} \label{aaa} \end{figure} \end{example}
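The isometry established above can be checked computationally for integer exponents: on level-$\ell$ words the adding machine acts as addition of $1$ with carry (least significant letter first), and the tree distance between $a^{n}$ and $a^{m}$ reproduces the $p-$adic distance $|n-m|_{p}$. A minimal Python sketch (the tuple encoding of words and the finite search depth are implementation choices, not from the paper):

```python
from itertools import product

def a(word, p=2):
    """Adding machine a = (1,...,1,a)sigma: add 1 with carry, least significant letter first."""
    if not word:
        return word
    x, rest = word[0], word[1:]
    return ((x + 1,) + rest) if x < p - 1 else ((0,) + a(rest, p))

def a_pow(word, n, p=2):
    """Apply a to `word` n times (n >= 0)."""
    for _ in range(n):
        word = a(word, p)
    return word

def norm_p(n, p):
    """p-adic norm |n|_p of an integer n."""
    if n == 0:
        return 0.0
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return p ** (-v)

def tree_distance(n, m, p=2, depth=6):
    """d(a^n, a^m): 1/p^k if the actions agree on all of level k but not on level k+1."""
    for level in range(1, depth + 1):
        for w in product(range(p), repeat=level):
            if a_pow(w, n, p) != a_pow(w, m, p):
                return p ** (-(level - 1))
    return 0.0  # indistinguishable down to `depth` levels

# n -> a^n is an isometry from (Z, d_p) into (Aut(X*), d):
for n, m in [(1, 5), (0, 8), (3, 3), (2, 7)]:
    assert tree_distance(n, m, p=2) == norm_p(n - m, 2)

# The example phi(-1) = a^{-1}: adding 2^3 - 1 = 7 subtracts 1 on level 3.
print(a_pow((0, 0, 0), 2**3 - 1))   # (1, 1, 1), i.e. 0 - 1 = -1 = 7 mod 8
```

The last line mirrors the example: on level $k$, $a^{2^{k}-1}$ acts exactly as $a^{-1}$, since $2^{k}-1\equiv-1 \pmod{2^{k}}$.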
\section{Introduction} It has been essentially two years since a superconducting transition in the vicinity of $200$ K was first reported in hydrogen sulfide.\cite{drozdov14} Since this time, however, experimental results concerning this system have been few, and to our knowledge, only one as-yet unpublished report has independently confirmed high temperature superconductivity via the Meissner effect.\cite{huang16} Nonetheless, the crystal structure has now been determined \cite{einaga16} to be one of two variations of body-centred-cubic (BCC), and is associated with the stoichiometry H$_3$S. An optical spectroscopy study has also appeared,\cite{capitani16} which claims to provide significant support for an electron-phonon-based mechanism for superconductivity. Much of the work to date on this compound has been on the theoretical side. Remarkably, even before the experimental discovery of superconducting H$_3$S, a Density Functional Theory (DFT) calculation \cite{duan14} predicted the correct high pressure structure, and a crude estimate based on the Allen-Dynes-McMillan formula\cite{mcmillan68} suggested $T_c \approx 200$ K. Follow-up DFT calculations confirmed this work.\cite{duan15,errea15,papa15,bianconi15,flores-livas15} Several of these authors furthermore emphasized the electron-phonon interaction as the mechanism for superconductivity, primarily through the high frequency optical modes affiliated with the hydrogen atoms. These authors disagree, however, on the importance of anharmonicity, with Errea et al. and Papaconstantopoulos et al. finding evidence for large anharmonic effects, while Flores-Livas et al. do not. In the meantime, Hirsch and one of the present authors \cite{hirsch15} have suggested that it is the conduction by holes through the sulfur ions that plays a primary role in the superconductivity.
The theoretical framework for the mechanism involved is expanded upon in earlier work,\cite{hirsch89,marsiglio90} and will not be further discussed here. The point we wish to make in this paper is that, somewhat independent of the mechanism, a large density of states near the Fermi level will enhance superconducting $T_c$. This point has been made repeatedly in the past, starting with the A15 compounds in the 1960s and continuing with the cuprates over the past three decades. A survey of the effects of van Hove singularities in two and three dimensions on superconducting $T_c$ was published recently.\cite{souza16} Here we wish to emphasize that the three dimensional BCC structure, pertinent to superconducting H$_3$S, has a logarithmic (squared) singularity in the density of states when only nearest-neighbour hopping is taken into account, and this has a significant impact on superconducting properties.\cite{souza16} This was already recognized long ago by Jelitto.\cite{jelitto69} As already discussed in Ref. [\onlinecite{souza16}], a singularity also exists for the (face-centred-cubic) FCC structure, and in fact occurs at a filling where nesting effects [which favour other instabilities (e.g. charge density waves)] are not present. We will focus on the BCC structure in this paper, and maintain a non-zero next-nearest neighbour hopping amplitude, as this seems to more accurately describe the actual situation in H$_3$S; it also serves to eliminate deleterious effects due to nesting that would occur in the nearest-neighbour hopping only case for the BCC structure.
\section{The BCS formalism} The BCS equations are \cite{schrieffer64,tinkham96} \begin{equation} \Delta_k = -{1 \over N} \sum_{k^\prime} V_{kk^\prime} { \Delta_{k^{\prime}} \over 2 E_{k^\prime}} \left[ 1 - 2f(E_{k^\prime}) \right], \label{bcs1} \end{equation} and \begin{equation} n = {1 \over N} \sum_{k^\prime} \left[ 1 - {\epsilon_{k^\prime} - \tilde{\mu} \over E_{k^\prime}} \left( 1 - 2f(E_{k^\prime})\right) \right], \label{bcs2} \end{equation} with \begin{equation} E_k \equiv \sqrt{ (\epsilon_{k} - \tilde{\mu})^2 + \Delta^2_{k}}. \label{bcs3} \end{equation} Here, the wave vector summations cover the First Brillouin zone (FBZ), and we focus on a single band, whose characteristics are contained within $\epsilon_k$. Similarly the pairing potential, $V_{k,k^\prime}$, is specified by the model under consideration, and the chemical potential, $\mu$, gives us the density of electrons, $n$. In practice, we `know' the electron density, $n$, and therefore need to determine the chemical potential that leads to the desired electron density, for a particular pairing potential and temperature (as included through the Fermi-Dirac distribution function, $f(x) \equiv 1/[{\rm exp}(\beta x) + 1]$, where $\beta \equiv 1/[k_BT]$ is the inverse temperature, with $k_B$ the Boltzmann constant). In Eqs. (\ref{bcs1}-\ref{bcs3}), we use $\tilde{\mu}$, which is assumed to include corrections to $\mu$ associated with the normal state. In what follows we assume a featureless attractive interaction, denoted as $V_{k,k^\prime} = -V$, with $V > 0$. This model constitutes the so-called attractive Hubbard model, as a constant in wave vector space implies an onsite attraction only. As discussed by Eagles,\cite{eagles69} Leggett,\cite{leggett80} and Nozi\`eres and Schmitt-Rink,\cite{nozieres85} these equations are valid for all pairing strengths (at $T=0$); we discuss a particular limit in the Appendix where these equations can be solved exactly.
Here in the main text, we introduce a cutoff for the pairing potential, so that attraction occurs only for states within an energy $\hbar \omega_D$ of the Fermi energy, i.e. \begin{equation} V_{kk^\prime} = -V \theta \left[ \hbar \omega_D - |\epsilon_k - \mu | \right] \theta \left[ \hbar \omega_D - |\epsilon_{k^\prime} - \mu | \right] \label{vpair} \end{equation} where $\theta[x]$ is the Heaviside step function. Note that removal of this restriction reduces this model to the usual attractive Hubbard model; identification of $\omega_D$ with a phonon energy scale follows the original BCS treatment, though a more accurate procedure would be to use the Eliashberg equations,\cite{eliashberg60,marsiglio08} where retardation effects are properly accounted for. We note that Sano et al.\cite{sano16} have already done this for H$_3$S. The main purpose of this paper is to highlight the importance of electronic structure, through peaks in the electronic density of states (EDOS), for superconducting $T_c$. Both Quan and Pickett,\cite{quan16} and Sano et al.\cite{sano16} have included and highlighted this point, based on the results of DFT calculations. In our previous work\cite{souza16} we have focused on simple tight-binding descriptions, where, in our opinion, the origin of the peak in the density of states is more transparent. We utilize the BCC structure; including both nearest and next-nearest neighbour hopping parameters results in the dispersion \begin{eqnarray} \epsilon_k &=& -8t\left[ {\rm cos}({k_xa \over 2}) {\rm cos}({k_ya \over 2}) {\rm cos}({k_za \over 2}) \right] \phantom{aa} {\rm [bcc \ \ NNN]} \nonumber \\ & & -2t_{2}\left[ {\rm cos}(k_xa) + {\rm cos}(k_ya) +{\rm cos}(k_za) \right], \label{bcc_dispersion} \end{eqnarray} where $t$ and $t_2$ are the nearest and next-nearest neighbour hopping amplitudes, respectively.
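As a concrete check of Eq.~(\ref{bcc_dispersion}), the dispersion is simple to evaluate numerically; the sketch below (our own illustration, with $a=1$) reproduces the $t_2=0$ band edges $\pm 8t$.

```python
import math

def epsilon_bcc(kx, ky, kz, t=1.0, t2=0.0, a=1.0):
    """Tight-binding BCC dispersion, Eq. (bcc_dispersion):
    nearest-neighbour (t) and next-nearest-neighbour (t2) hopping."""
    nn = -8.0 * t * math.cos(kx * a / 2) * math.cos(ky * a / 2) * math.cos(kz * a / 2)
    nnn = -2.0 * t2 * (math.cos(kx * a) + math.cos(ky * a) + math.cos(kz * a))
    return nn + nnn

# For t2 = 0 the band runs from -8t at the zone centre to +8t:
print(epsilon_bcc(0.0, 0.0, 0.0))            # -8.0
print(epsilon_bcc(2.0 * math.pi, 0.0, 0.0))  # +8.0 (up to rounding)
```

A negative $t_2$ shifts the two extrema asymmetrically, which is what moves the density-of-states peak away from mid-band in the figures below.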
The only real impact on the BCS equations is most readily seen by rewriting them as follows (we also replace the pairing potential $V_{k,k^\prime} = -V$ and linearize the equations so that they are valid only at $T=T_c$), \begin{equation} {1 \over V} = \int_{\mu_-}^{\mu_+} \ d\epsilon g(\epsilon) {{\rm tanh}[\beta_c (\epsilon -\mu)/2] \over 2(\epsilon - \mu)} \phantom{aaaaa} [T=T_c] \label{bcs1tc} \end{equation} and \begin{equation} n = 2\int_{\epsilon_{\rm min}}^{\epsilon_{\rm max}} \ d\epsilon g(\epsilon) f(\epsilon - \mu), \phantom{aaaaaaaaaa} [T=T_c] \label{bcs2tc} \end{equation} where only the EDOS, $g(\epsilon)$, contains information about the structure. Here $\beta_c \equiv 1/[k_BT_c]$. The integration limits in Eq.~(\ref{bcs1tc}) are normally $\mu_- = \mu - \hbar \omega_D$ and $\mu_+ = \mu + \hbar \omega_D$, but when $\mu$ is close to a band edge, then these limits are given by $\mu_- \equiv{\rm max}[ \mu - \hbar \omega_D, \epsilon_{\rm min}]$, and $\mu_+ \equiv{\rm min}[ \mu + \hbar \omega_D, \epsilon_{\rm max}]$, where $\epsilon_{\rm min}$ ($\epsilon_{\rm max}$) is the energy of the bottom (top) of the band. \bigskip \section{Results} \subsection{The BCC electronic density of states} The EDOS for the BCC structure with nearest-neighbour (nn) hopping only is given by\cite{jelitto69,souza16} \begin{equation} g_{\rm BCC}(\epsilon) = {2 \over a^3} {1 \over 2 \pi^3 t}\int_{|\bar{\epsilon}|}^1 dx \ {1 \over \sqrt{x^2 - \bar{\epsilon}^2}} K\left[1-x^2\right]. \label{bcc_anal} \end{equation} where $\bar{\epsilon} \equiv \epsilon/(4t)$, and $K(z)$ is the complete elliptic integral of the first kind.\cite{olver10} This function diverges logarithmically at $z\rightarrow 0$, and results in \begin{equation} \lim_{\bar{\epsilon} \rightarrow 0} g_{\rm BCC}(\epsilon) \approx {\ln}^2({1 \over |\bar{\epsilon}|}), \label{asym} \end{equation} which is a stronger divergence than occurs in two dimensions. 
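For a constant EDOS $g(\epsilon)=g_0$ and a chemical potential in the middle of the band, the linearized condition Eq.~(\ref{bcs1tc}) reduces to the textbook weak-coupling result $k_BT_c \approx 1.13\,\hbar\omega_D\, e^{-1/(g_0 V)}$. The bisection sketch below (our own illustration, in units $\hbar=k_B=1$) recovers this; the parameter values are arbitrary.

```python
import math

def rhs(Tc, g0=0.2, omega_D=1.0, npts=5000):
    """Right-hand side of the linearized gap equation, Eq. (bcs1tc),
    for a flat EDOS g0 and mu at mid-band; the even integrand lets the
    factor 2 from (-omega_D, 0) cancel the 1/2 in the kernel."""
    total, de = 0.0, omega_D / npts
    for i in range(npts):
        e = (i + 0.5) * de            # midpoint rule avoids e = 0
        total += math.tanh(e / (2.0 * Tc)) / e * de
    return g0 * total

def solve_tc(V, g0=0.2, omega_D=1.0):
    """Bisect 1/V = rhs(Tc); rhs decreases monotonically with Tc."""
    lo, hi = 1e-6, omega_D
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if rhs(mid, g0, omega_D) > 1.0 / V:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tc = solve_tc(1.5)   # g0*V = 0.3
print(tc)            # close to 1.134*exp(-1/0.3) ~ 0.040 in these units
```

With a strongly peaked $g(\epsilon)$ in place of $g_0$, the same solver produces the density-dependent $T_c$ curves discussed below.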
When next-nearest-neighbour (nnn) hopping is included, then we use a limiting representation for the $\delta$-function and determine the EDOS through \begin{equation} g_\delta(\epsilon) = {1 \over 2 t a^3} {1 \over \sqrt{\pi \delta^2}} \int_0^1 dx \int_0^1 dy \int_0^1 dz \ e^{-\left[{\epsilon - \epsilon_k \over 2t\delta}\right]^2}, \label{num_dense} \end{equation} where we have substituted $x \equiv k_x a/\pi$ and similarly for $y$ and $z$, and $\delta$ is some small numerical smearing parameter (e.g. $\delta = 0.0005t$). In fig.~\ref{fig1bled} we show the BCC density of states for a variety of values of $t_2/t$. Note how the singularity evolves (and disappears) once $t_2/t$ departs from zero. Nonetheless, a highly peaked structure remains for modest values of $t_2/t$. \begin{figure}[tp] \begin{center} \includegraphics[height=3.4in,width=2.8in,angle=-90]{fig1bled.eps} \end{center} \caption{Plot of the tight-binding 3D BCC EDOS for different values of the $nnn$ hopping parameter, $t_2$, with $\rho \equiv t_2/t$. Note that the singularity for $\epsilon = 0$ disappears as $t_2$ becomes non-zero. Nonetheless a large peak, displaced from $\epsilon = 0$, remains in its place. Results are shown for negative $t_2$ since the results from DFT indicate a structure in the EDOS very similar to this one.\cite{quan16} Moreover, for positive values of $\rho$ the results are symmetric (about $\epsilon = 0$) to those shown. We used $\delta = 0.001 t$ to generate these results using Eq.~(\ref{num_dense}) [the result for $t_2 = 0$ is indistinguishable from the more accurate result given by Eq.~(\ref{bcc_anal})].} \label{fig1bled} \end{figure} \subsection{$T_c$} To determine $T_c$ one must insert the EDOS from Eq.~(\ref{bcc_anal}) or Eq.~(\ref{num_dense}) into Eqs.~(\ref{bcs1tc},\ref{bcs2tc}), and perform the ensuing integrals numerically. Based on weak coupling, it is natural to examine dimensionless quantities, such as $T_c/(\hbar \omega_D)$, vs. $V/t$, $\hbar \omega_D/t$, and $n$. 
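A quick numerical rendering of the smeared-$\delta$ construction in Eq.~(\ref{num_dense}) is sketched below (our own implementation; for speed it uses a coarse $k$-grid and a larger smearing width than quoted in the text, and it samples each $k_i a$ over a full period, which is equivalent by periodicity).

```python
import numpy as np

def dos_bcc(energies, t=1.0, t2=0.0, delta=0.05, ngrid=60):
    """Smeared-delta estimate of the BCC EDOS: each grid point k
    contributes a normalized Gaussian of width 2*t*delta centred
    at eps_k, cf. Eq. (num_dense)."""
    u = (np.arange(ngrid) + 0.5) / ngrid      # k_i a / (2 pi), midpoints
    U, V, W = np.meshgrid(u, u, u, indexing="ij")
    ek = (-8 * t * np.cos(np.pi * U) * np.cos(np.pi * V) * np.cos(np.pi * W)
          - 2 * t2 * (np.cos(2 * np.pi * U) + np.cos(2 * np.pi * V)
                      + np.cos(2 * np.pi * W)))
    norm = 1.0 / (2 * t * delta * np.sqrt(np.pi) * ngrid**3)
    return np.array([norm * np.exp(-((e - ek) / (2 * t * delta)) ** 2).sum()
                     for e in energies])

e = np.linspace(-8.0, 8.0, 33)
g = dos_bcc(e)          # t2 = 0: symmetric, sharply peaked at e = 0
print(g[16], g[0])
```

Rerunning with `t2=-0.2` shifts and rounds off the central peak, as in the plotted EDOS.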
In Fig.~\ref{fig2bled} we show $T_c/(\hbar \omega_D)$ as a function of electron density, $n$, for various values of $V$ as indicated. We use $\omega_D = 0.01t$ for definiteness, although this ratio will vary with the specific mechanism that one has in mind. For these values of coupling strength the ratio $T_c/(\hbar \omega_D)$ is fairly insensitive to $\omega_D/t$, and so this figure can be used for other values of $\omega_D$ to estimate $T_c$ in real units. This is indicative of weak coupling, so in fact the shape of $T_c$ vs $\mu$ will closely resemble the density of states (as a comparison with the relevant curve in Fig.~\ref{fig1bled} indicates). Here there will be some distortion since $T_c$ is plotted vs. $n$ and not versus chemical potential. \begin{figure}[tp] \begin{center} \includegraphics[height=4.0in,width=3.6in,angle=-90]{fig2bled.eps} \end{center} \caption{\textcolor{black}{Plot of $T_c/\omega_D$ vs. electron density $n$ for various values of coupling strength, $V/t = 2$, $3$, and $4$. We have used a value of $t_2 = -0.2 t$; the EDOS is plotted in the inset, and resembles very closely the result obtained using DFT.\cite{quan16} As an example, with $V = 2t$ ($V/(16t) = 0.125$), and $\omega_D = 100$ meV, then $T_c \approx 200$ K (at $n\approx 1$). In this range of $\omega_D$ the results for $T_c$ scale with $\omega_D$.}} \label{fig2bled} \end{figure} \section{Summary} This study does not directly address the mechanism for superconductivity in H$_3$S. Instead, we have found, as have other DFT-based studies, that the BCC structure itself will tend to amplify pairing effects, due to the possibly very high electronic density of states at the Fermi level. More generally, the superconducting community should be more aware that singularities in the electronic densities of states can occur in three dimensions as well as lower dimensions, in all three types of cubic structures, simple, face-centred, and body-centred cubic.
The existence of this possibility was first pointed out by Jelitto,\cite{jelitto69} and we have extended the nearest-neighbour models considered by him to include $nnn$ hopping as well.\cite{souza16} Although not addressed here, it is also worth noting that the isotope effect is expected to display some peculiar characteristics, again due to the presence of van Hove singularities in the EDOS.\cite{souza16} When $nnn$ hopping is introduced, the singularity disappears in the EDOS.\cite{remark1} The peak that remains is in some ways more `robust' --- it (and therefore superconducting $T_c$) will withstand more readily the degradation that is inevitable due to impurities and imperfections. Note that the realization that the presence of a BCC structure in the material will lead to an enhanced $T_c$ also occurred through DFT studies. Nonetheless, it is beneficial to have simplified tight-binding models like the one presented here to help identify important structure characteristics for enhancing $T_c$. It is clear from the characteristics of the EDOS that doping with electrons (should that become possible) will lead to a lower $T_c$ if this were all that mattered. Some mechanisms (e.g. the ``hole'' mechanism\cite{hirsch15}) predict a strong doping dependence independently of changes in the EDOS, and then the qualitative prediction of this model will depend on whether H$_3$S lies on the electron- or hole-side of the maximum predicted in that model. The dependence of the effective interaction on doping is expected to overwhelm the dependence of the EDOS on doping in this particular model. It will be interesting to see if such experiments can be carried out. \begin{acknowledgments} This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). TXRS is a recipient of an ``Emerging Leaders in the Americas Program'' (ELAP) scholarship from the Canadian government, and we are grateful for this support. \end{acknowledgments}
\section{Introduction} Quantum field theory in curved spacetime provides many important insights into the interaction of matter and gravity \cite{Birrell:1982ix,Parker:2009uva,Wald:1995yp}. An unavoidable consequence of the theory is the amplification of field and metric fluctuations during inflation \cite{Guth:1980zm,Linde:1981mu,Albrecht:1982wi,Starobinsky:1980te}, which later become the seeds for structure formation in the cosmos \cite{Sasaki:1986hm,Mukhanov:1988jd}. These fluctuations get imprinted in the anisotropies of the Cosmic Microwave Background (CMB), which provide an excellent observational window to the physics of the early universe \cite{Akrami:2018jri,BICEP:2021xfz}. In this work we consider a free scalar field $\phi$ propagating in a curved spacetime. In the simplest model realizations of inflation, the accelerated expansion of the universe is indeed generated by a scalar field. For (quasi-free) Gaussian states, the information about the state is encoded in the two-point correlation function \begin{eqnarray} G(x,x'):=\langle \phi(x) \phi(x')\rangle \ , \label{unrG} \end{eqnarray} which acts as a building block for the construction of other relevant quantities such as the stress-energy tensor in the coincident limit $x'\to x$. For instance, the trace of such tensor can be written as \begin{equation} \langle T_a^a(x)\rangle=\left(3\left(\xi-\frac16\right)\Box+m^2\right)\langle \phi^2(x)\rangle \ , \end{equation} where we have used the signature convention $(+,-,-,-)$ (the same as in \cite{Birrell:1982ix,Parker:2009uva}). The quantity $\langle \phi^2(x)\rangle$ contains both quadratic and logarithmic ultraviolet (UV) divergences, which must be removed in order to obtain a physical finite quantity. Several regularization and/or renormalization methods have been developed for this purpose, see e.g.~\cite{Birrell:1982ix,Wald:1995yp,Parker:2009uva} for reviews on the subject.
We can use for instance the \textit{point-splitting} regularization method \cite{Christensen:1976vb} to isolate the singular part of the correlation function $G_s (x,x')$ and remove it in a local and covariant manner, before taking the coincident point limit. The regularized two-point function can then be defined as \begin{equation} \langle{:}\, \phi^2(x) \,{:}\rangle=\lim_{x'\to x}\left(G(x,x')-G_s(x,x')\right) \ . \label{renG} \end{equation} Nevertheless, other regularization prescriptions can yield different expressions for $\langle{:}\, \phi^2(x) \,{:}\rangle$. A fundamental result of renormalization theory in curved spacetime is that two regularized two-point functions $\langle{:}\, \phi^2(x) \,{:}\rangle$ and $\langle{:}\,\overline{\phi^2(x)} \,{:}\rangle$, regularized with two different methods, are unique up to two arbitrary geometrical contributions of the form \begin{equation} \langle{:}\,\overline{\phi^2(x)} \,{:}\rangle - \langle{:}\, \phi^2(x) \,{:}\rangle = \alpha m^2+ \beta R \ , \label{difreg} \end{equation} where $\alpha$ and $\beta$ are two dimensionless parameters, $R$ is the Ricci scalar, and $m^2$ is the mass squared of the field (see for example \cite{Hollands:2001nf}). This result comes from imposing the minimum requirements of covariance, locality and analyticity in the limit $m^2\to0$.
In the case of a Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) metric, $ds^2=a^2(\tau)(d\tau^2-d\vec{x}^2)$, the symmetries of spacetime allow us to express \eqref{unrG} in terms of field modes as \cite{Lueders:1990np} \begin{eqnarray} G(\vec{x},\tau,\vec{x}',\tau')=\frac{1}{(2\pi)^3}\int d^3\vec{k} \, \frac{\chi_k(\tau)}{a(\tau)}\frac{\chi^*_k(\tau')}{a(\tau')}e^{i\vec{k}\cdot(\vec{x}-\vec{x}')}.\label{Gcosmo} \end{eqnarray} An important quantity that (partially) encodes the information about the two point correlation function is its \textit{power spectrum $\mathcal{P}_{k}(\tau)$}, defined in terms of its Fourier transform at equal times $\tau = \tau '$ as \begin{eqnarray} \mathcal{G}(\tau,\vec{k},\tau,\vec{k}') = (2\pi)^3 \delta^{(3)}(\vec{k}-\vec{k}')\mathcal{P}_{k}(\tau) , \end{eqnarray} where $k=|\vec{k}|$. Again, \eqref{Gcosmo} is divergent in the limit $\vec{x} \rightarrow \vec{x'}$. This formulation in terms of momentum space allows us to envisage a computationally convenient regularization method, consisting of adding an appropriate set of subtraction terms $\mathcal{Q}_{k}(\tau)$ inside the integral as follows, \begin{eqnarray} \langle{:}\, \phi^2(\tau) \,{:}\rangle=\frac{1}{(2\pi)^3}\int d^3\vec{k} \left(\mathcal{P}_{k}(\tau)-\mathcal{Q}_{k}(\tau)\right), \label{eq:integral} \end{eqnarray} such that the UV divergences get cancelled and the integral is finite. We define the \textit{regularized power spectrum} as $\mathcal{P}_{k}^{\rm (reg)} \equiv \, \mathcal{P}_{k}-\mathcal{Q}_{k}$. A well-established regularization method in FLRW spacetimes is \textit{adiabatic regularization}. It is based on an adiabatic expansion of the field modes \cite{Parker:1968mv,Parker:1969au,Parker:1971pt}, which captures their ultraviolet behavior and allows one to isolate their divergent contributions to the quantity one wishes to regularize.
When applied to the two-point function, the method yields a set of subtraction terms $\mathcal{Q}_{k}(\tau)$ that renders the integral (\ref{eq:integral}) finite (the specific form of $\mathcal{Q}_{k}(\tau)$ is given in Eq.~(\ref{Hk1}) below). The method can also be applied to the regularization of the stress-energy tensor \cite{Parker:1974qw,Fulling:1974zr,Fulling:1974pu}, as well as to other field species such as fermions \cite{Landete:2013axa, Landete:2013lpa, delRio:2014cha, BarberoG:2018oqi}. The adiabatic scheme can also be extended to include interactions of quantum fields with classical time-dependent homogeneous backgrounds, see e.g.~\cite{Anderson:2008dg,Anderson:2015yda, delRio:2017iib} for preheating-like scenarios \cite{Kofman:1994rk,Kofman:1997yn,Greene:1997fu}, as well as \cite{Ferreiro:2018qdi,Ferreiro:2018qzr,Beltran-Palau:2020hdr} for the case of interactions with classical electric fields. The adiabatic method has also been applied to the construction of preferred vacuum states in cosmological spacetimes \cite{Agullo:2014ica, Ferreiro:2022hik}. A fundamental result of the adiabatic scheme is that, even though it assumes a pre-existing FLRW spacetime, regularization techniques in general curved backgrounds converge to equivalent subtraction terms when restricted to a FLRW spacetime \cite{delRio:2014bpa, Beltran-Palau:2021xrk}. This ensures that all obtained observables are constructed in a local covariant way. Nevertheless, although the subtraction terms obtained with the adiabatic method successfully remove the UV divergences of the two-point function, they can significantly distort the infrared part of the power spectrum, especially for light fields.
This effect is clearly seen in the case of a scalar field in de Sitter spacetime: the regularized power spectrum of a light field ($m\ll H$) gets significantly suppressed at scales $m \ll k/a \, (\lesssim H)$, while the one of a massless field \textit{vanishes at all scales} \cite{Parker:2007ni,Agullo:2008ka, Agullo:2009vq,Agullo:2009zi}. These results dramatically change the standard observable predictions of slow-roll inflation, and sparked a fascinating debate on how to correctly apply the regularization program to the inflationary perturbations \cite{Finelli:2007fr,Durrer:2009ii,Urakawa:2009xaa,Marozzi:2011da,Agullo:2011qg,Bastero-Gil:2013nja} (see also \cite{Woodard:2014jba}). More recent works have also tackled this problem from different perspectives: in \cite{Markkanen:2017rvi} it was shown that different renormalizations can be found by expressing the subtraction terms in de Sitter space as counterterms in the Lagrangian; in \cite{Animali:2022lig} the problem of the infrared distortions in adiabatic regularization was dealt with by introducing a comoving infrared cutoff; and an alternative method based on a resummation of the entire adiabatic expansion has been proposed in \cite{Corba:2022ugu}. However, it is crucial to observe that, according to Eq.~(\ref{difreg}), there are infinite equally valid renormalization prescriptions, characterized by different choices of $\alpha$ and $\beta$, that differ from the adiabatic one by geometric contributions. In this work, we use this freedom to propose a new set of subtraction terms that are capable of successfully removing the ultraviolet divergences of the two-point function, while simultaneously minimizing the distortions introduced in the infrared part of the spectrum.
More specifically, we wish to construct a regularized power spectrum such that the associated correlation function obeys the following conditions: \vspace{-0.1cm} \begin{itemize} \item[(i)] is regular at the coincident limit $x \rightarrow x'$ , \vspace{-0.2cm} \item[(ii)] is equivalent to the general curved-spacetime construction \eqref{renG}, \vspace{-0.2cm} \item[(iii)] recovers $\langle{:}\, \phi^2(x)\,{:}\rangle_{\mathcal{M}}=0$ in Minkowski spacetime, \vspace{-0.2cm} \end{itemize} Moreover, for the regularized power spectrum we ask:\vspace*{-0.1cm} \begin{itemize} \item[(iv)] that it approximates the unregularized one for all amplified infrared modes. \end{itemize} Regarding condition (i), the importance of constructing a power spectrum that produces a well behaved correlation function was already stated in \cite{delRio:2014aua}. Indeed, since we wish to encode the information about the quantum vacuum state in the power spectrum, including the consistent construction of $\langle \phi^2(x)\rangle$, it is fundamental to build such a regularized power spectrum. Condition (iii) is introduced to ensure that the standard \textit{normal ordering} prescription is recovered in the flat spacetime limit. Adiabatic regularization satisfies requirements (i)-(iii), but not necessarily (iv). Note that, since adiabatic regularization fulfills condition (ii) \cite{delRio:2014bpa, Beltran-Palau:2021xrk}, any alternative scheme constructed in momentum space that differs from the adiabatic one as \eqref{difreg} will also satisfy (ii). The rest of this paper is structured as follows. In Sect.~\ref{sec:adiabatic} we present an alternative regularization prescription that fulfills all conditions (i)-(iv) above. In Sect.~\ref{sec:examples} we apply our formalism to the regularization of the power spectrum of a massive scalar field in de Sitter space, and show that our proposed method tames the distortions in the infrared part of the spectrum. 
In Sect.~\ref{sec:summary} we summarize and discuss our results. Finally, in Appendix \ref{sec:appA} we examine the possibility of regularizing the power spectrum with a hard infrared cutoff. \section{Regularization of the two-point function} \label{sec:adiabatic} The equation of motion of the scalar field $\phi$ in a FLRW spacetime is \begin{equation} \phi '' + 2 \frac{a'}{a} \phi' - \nabla^2 \phi + m^2 \phi -\xi R \phi= 0 \ , \label{eq:scalar-eom} \end{equation} where we have included a non-minimal coupling $\xi R \phi^2$ to the Ricci scalar $R=6 a'' / a^3$, with $\xi$ the corresponding coupling strength. We can quantize the field by using the standard canonical \cite{Parker:2009uva} or the algebraic quantization approach \cite{Wald:1995yp,Brunetti:2015vmh}. The two-point correlation function (\ref{Gcosmo}) at the coincident point $\vec{x} = \vec{x}'$ can be written as \begin{equation} \langle \phi^2(\tau) \rangle = \int_0^{\infty} d \, \text{log}\,k \, \Delta_{\phi} (k,\tau) \ , \label{tpf} \end{equation} where $\Delta_{\phi} (k,\tau)$ is the (rescaled) power spectrum, given by the field modes $\chi_k$ as \begin{equation} \Delta_{\phi} (k,\tau) := \frac{k^3}{2 \pi^2} \mathcal{P}_k (\tau) \ , \hspace{0.3cm} \mathcal{P}_k (\tau) = \frac{|\chi_k|^2}{a^2 (\tau)} \ . \end{equation} Each mode $\chi_k$ satisfies the equation \begin{eqnarray} \chi_k''+\left(k^2+m^2a^2+\left(\xi-\frac16\right)a^2 R\right)\chi_k=0 \ , \label{eq:fieldmodeeq} \end{eqnarray} as well as the normalization condition $\chi_k\chi_k^{*'}-\chi_k'\chi_k^*=i$. The integral in \eqref{tpf} diverges in the ultraviolet, so we need to regularize the expression.
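The mode equation \eqref{eq:fieldmodeeq} is straightforward to integrate numerically. The self-contained sketch below (our own, specializing to de Sitter with $a(\tau)=-1/(H\tau)$, $H=1$ and $\xi=0$) evolves a single mode from adiabatic initial data deep inside the horizon and checks that the Wronskian $\chi_k\chi_k^{*\prime}-\chi_k'\chi_k^*$ is conserved by the evolution, as it must be for an equation with no first-derivative term.

```python
import cmath

def evolve_mode(k=1.0, mH=0.5, tau0=-50.0, tau1=-0.1, nsteps=20000):
    """RK4 integration of chi'' + (k^2 + (mH^2 - 2)/tau^2) chi = 0
    (de Sitter, xi = 0, H = 1), starting from the adiabatic data
    chi = e^{-ik tau}/sqrt(2k), chi' = -ik chi at tau0."""
    def acc(tau, chi):
        return -(k * k + (mH * mH - 2.0) / (tau * tau)) * chi
    h = (tau1 - tau0) / nsteps
    chi = cmath.exp(-1j * k * tau0) / (2.0 * k) ** 0.5
    dchi = -1j * k * chi
    tau = tau0
    for _ in range(nsteps):
        k1, l1 = dchi, acc(tau, chi)
        k2, l2 = dchi + 0.5 * h * l1, acc(tau + 0.5 * h, chi + 0.5 * h * k1)
        k3, l3 = dchi + 0.5 * h * l2, acc(tau + 0.5 * h, chi + 0.5 * h * k2)
        k4, l4 = dchi + h * l3, acc(tau + h, chi + h * k3)
        chi += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        dchi += h * (l1 + 2 * l2 + 2 * l3 + l4) / 6.0
        tau += h
    return chi, dchi

chi, dchi = evolve_mode()
wronskian = chi * dchi.conjugate() - dchi * chi.conjugate()
print(wronskian)  # stays close to its initial value i
```

The same integrator, swept over $k$, produces the numerical power spectrum $|\chi_k|^2/a^2$ used in the comparisons below.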
Following the subtraction in Eq.~(\ref{eq:integral}), we define the regularized power spectrum as \begin{equation} \Delta_{\phi}^{\rm (reg)} (k, \tau) : = \frac{k^3 }{2 \pi^2} (\mathcal{P}_{k} (\tau) - \mathcal{Q}_{k} (\tau) ) \ , \label{tpf2} \end{equation} and $\langle{:}\, \phi^2(\vec{x},\tau) \,{:}\rangle =\int_0^{\infty} d \log{k} \, \Delta_{\phi}^{\rm (reg)} (k, \tau)$ is the associated regularized two-point function.\newline {\bf a) Adiabatic regularization:} Adiabatic regularization is based on an adiabatic expansion of the field modes, which allows one to identify and subtract the UV-divergent terms from the quantity to be regularized \cite{Parker:1968mv,Parker:1969au,Parker:1971pt}. The expansion is based on the following WKB ansatz for the modes, \begin{eqnarray} \chi_k \sim \frac{1}{\sqrt{2W^{(n)}_k(\tau)}}e^{-i \int^\tau W_k^{(n)}(\tau')d\tau'},\label{eq:wkbansatz} \end{eqnarray} where $W_k^{(n)}(\tau)=w^{(0)}+w^{(1)}+\ldots+w^{(n)}$ is expanded such that the term $w^{(n)}$ is of $n$th adiabatic order, i.e.~contains $n$ time derivatives of the scale factor. The zeroth order contribution corresponds to the Minkowski vacuum state \begin{eqnarray} W_k^{(0)}=\sqrt{k^2+m^2a^2}:=\omega_k. \end{eqnarray} Higher orders of $W_k^{(n)}$ can be obtained by substituting ansatz \eqref{eq:wkbansatz} into \eqref{eq:fieldmodeeq} and solving order by order. Given a quantity constructed from the field modes, we know by dimensional reasoning up to which order we need to expand adiabatically to correctly isolate the UV divergences. The two-point function only requires subtraction up to second adiabatic order, which gives \begin{align} \mathcal{Q}_k &:=\left(\frac{1}{2a^2W_k^{(2)}}\right)^{(2)} \label{Hk1} \\ &= \frac{1}{2a^2\omega_k}-\frac{\left(\xi-\frac16\right) R}{4\omega_k^3}-\frac{3}{16}\frac{\omega_k'^2}{a^2\omega_k^5}+\frac{\omega_k''}{8a^2\omega_k^4} \nonumber \ .
\end{align} The first term completely subtracts the divergence in Minkowski spacetime, while the second one removes the remaining logarithmic divergence in a curved background. The third and fourth terms only contribute to the two-point function with a finite term proportional to $R$. Second and higher order terms in the adiabatic expansion behave, for momenta $k \gtrsim a m$, as $\sim k^2 \times \omega_k^{-n} \sim k^{2-n}$ with $n\geq 3$. Therefore, if the mass of the field is smaller than the maximum physical momentum amplified by the non-adiabatic expansion of the universe, $k_+/a$ (i.e.~$m a \lesssim k_+$), these terms can significantly distort the power spectrum at scales of physical interest. The lighter the field is, the worse the distortions.\newline \vspace*{-0.1cm} {\bf b) Regularization without infrared distortions:} However, \eqref{Hk1} is not the only possible set of subtraction terms that can be obtained from an adiabatic expansion. In Ref.~\cite{Ferreiro:2018oxx}, an `off-shell' type of prescription for adiabatic regularization was proposed, in which the zeroth order term of the expansion is changed to \begin{equation} W_k^{(0)}=\sqrt{k^2+\mu^2a^2}:=\tilde{\omega}_k \ , \end{equation} where $\mu$ is now an arbitrary mass scale. By rewriting the field mode equation as \begin{equation} \chi_k''+\left(\tilde{\omega}_k^2-\mu^2a^2+m^2a^2+\left(\xi-\frac16\right)a^2 R\right)\chi_k=0 , \label{eq:fieldmodeeq2} \end{equation} we can obtain higher order terms of the expansion by substituting \eqref{eq:wkbansatz} into \eqref{eq:fieldmodeeq2} and solving order by order. In the new prescription, $\tilde \omega_k$ is taken to be of zeroth order, while $\mu^2$ and $m^2$ are of second order (i.e.~the same order as $R$).
The subtraction terms for the two-point function obtained with this prescription are now \begin{equation} \widetilde{\mathcal{Q}}_k:=\frac{1}{2a^2\tilde{\omega}_k}+\frac{(\mu^2-m^2)}{4\tilde{\omega}_k^3}-\frac{(\xi-\frac16)R }{4\tilde{\omega}_k^3}-\frac{3}{16}\frac{\tilde{\omega}_k'^2}{a^2 \tilde{\omega}_k^5}+\frac{\tilde{\omega}_k''}{8a^2\tilde{\omega}_k^4}\label{Hk2} \ , \end{equation} and coincide with the ones of the traditional adiabatic approach \eqref{Hk1} only for $\mu = m$. The difference between the regularized two-point functions obtained with the traditional and `off-shell' adiabatic prescriptions (which we denote as $\langle{:}\, \phi^2(x)\,{:}\rangle$ and $\langle{:}\, \widetilde{\phi^2(x)} \,{:}\rangle$ respectively) is \begin{eqnarray} &&\langle{:}\, \widetilde{\phi^2(x)}\,{:}\rangle-\langle{:}\, \phi^2(x) \,{:}\rangle = \\ &&\frac{1}{16\pi^2}\left [ \mu^2-m^2 +\left(m^2+\left(\xi-\frac16\right)R\right) \log{\left(\frac{m^2}{\mu^2}\right)} \right] \nonumber \ . \end{eqnarray} The difference is given by geometric terms as in Eq.~\eqref{difreg}, so the `off-shell' prescription also satisfies property (ii). By setting $\mu$ large enough, we can tame the infrared distortions introduced by subtraction terms \eqref{Hk2} in the power spectrum, while still removing the ultraviolet divergent part. However, this prescription does not yield $\langle{:}\, \phi^2(x) \,{:}\rangle_{\mathcal{M}}=0$ for $\mu \neq m$ in Minkowski spacetime (i.e.~does not obey condition iii above). We can solve this problem by imposing $\mu = m$ only in the Minkowskian part of the subtraction [i.e.~the first two terms in \eqref{Hk2}], while still retaining the freedom of choosing $\mu$ in the other terms. Moreover, the last two terms in \eqref{Hk2} yield a finite contribution proportional to $R$ to the regularized two point function, so we can remove them while still being compatible with Eq.~\eqref{difreg}.
We then propose the following subtraction terms, \begin{eqnarray} \overline{\mathcal{Q}_k}:=\frac{1}{2\sqrt{k^2+m^2a^2}}-\frac{(\xi-1/6)Ra^2}{4(k^2+M^2a^2)^{3/2}}\label{Hk3} \end{eqnarray} where $M$ is an arbitrary mass scale that plays the same role as $\mu$ in \eqref{Hk2}. The difference between the regularized two-point functions obtained with the traditional adiabatic prescription and the one obtained by subtracting \eqref{Hk3} (which we denote as $\langle {:}\,\overline{\phi^2(x)}\,{:}\rangle$) is \begin{equation} \langle {:}\,\overline{\phi^2(x)}\,{:}\rangle-\langle{:}\,\phi^2(x)\,{:}\rangle=\frac{\xi-\frac16}{16\pi^2} \log{\left(\frac{m^2}{M^2}\right)}R+\frac{R}{288\pi^2}\label{diff2} \end{equation} which again can be written as a sum of geometrical terms as in \eqref{renG}. Therefore, our proposed regularization prescription also satisfies condition (ii). Note that the last term in \eqref{diff2} is a consequence of having removed the last two subtraction terms in \eqref{Hk2}. \section{Regularized power spectrum in de Sitter spacetime} \label{sec:examples} \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{dS-m001.pdf} \,\, \includegraphics[width=0.49\textwidth]{dS-m05.pdf} \caption{Power spectrum of a scalar field in de Sitter space with $m = 0.01 H$ (left) and $m = 0.5H$ (right). We show the unregularized spectra (\ref{eq:unreg-pws}) (black), the Minkowskian subtraction term $\propto (m_H^2 + x^2)^{-1/2}$ (grey dashed), the quasi scale-invariant spectrum obtained by subtracting the Minkowskian term from (\ref{eq:unreg-pws}) (red), the spectra regularized with the traditional adiabatic approach (\ref{eq:powsp-dS-trad}) (orange dashed), and the one regularized with our proposed subtraction terms \eqref{Hk3} for different values of $M_H$ (blue dotted).
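The finite, $R$-proportional contribution of the two removed terms can be verified directly. The following sketch (ours; de Sitter background in units $H=1$, evaluated at $\tau=-1$, with illustrative names and parameter values) integrates the last two terms of \eqref{Hk2} over modes and compares the result with $R/(288\pi^2)$:

```python
import numpy as np
from scipy.integrate import quad

H, mu, tau = 1.0, 1.0, -1.0
a = -1.0/(H*tau); ap = 1.0/(H*tau**2); app = -2.0/(H*tau**3)
R = 6.0 * app / a**3                      # Ricci scalar; = 12 H^2 in de Sitter

def last_two_terms(k):
    """Mode integrand of the last two subtraction terms of (Hk2)."""
    w = np.sqrt(k**2 + mu**2 * a**2)      # tilde-omega_k
    wp = mu**2 * a * ap / w
    wpp = mu**2 * (ap**2 + a*app) / w - (mu**2 * a * ap)**2 / w**3
    return k**2/(2*np.pi**2) * (wpp/(8*a**2*w**4) - 3*wp**2/(16*a**2*w**5))

val, _ = quad(last_two_terms, 0, np.inf)
print(val, R/(288*np.pi**2))              # both ≈ 0.00422: finite and proportional to R
```

The integral is finite and independent of the scale $\mu$, so dropping these two terms shifts the regularized two-point function only by the geometric term $R/(288\pi^2)$, consistent with \eqref{diff2}.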
The values of $M_H$ are also indicated with dashed vertical bars.} \vspace*{-0.2cm} \label{fig:example-desitter} \end{figure*} In order to illustrate the differences between regularization prescriptions, let us consider a massive scalar field with $\xi= 0$ in de Sitter spacetime. The scale factor evolves as $a(\tau) = - (H \tau)^{-1}$ with (constant) Hubble parameter $H$, and the field mode equation (\ref{eq:fieldmodeeq}) becomes \begin{equation} \chi_k'' + \left( k^2 + \frac{m_H^2 - 2 }{\tau^2} \right) \chi_k = 0 \ , \hspace{0.4cm} m_H \equiv \frac{m}{H} \label{equbd}\ . \end{equation} A natural solution is the Bunch-Davies vacuum \cite{Bunch:1978yq}\footnote{If $m > (3/2)H$, the solution must include an extra factor $e^{-\frac{\pi}{2} Im[\nu]}$ in order to be correctly normalized.} \begin{equation} \chi_k = \frac{-i\sqrt{\pi \tau}}{2} H_{\nu}^{(1)} (-k \tau) \ , \hspace{0.3cm} \nu \equiv \sqrt{\frac{9}{4} - m_H^2} \ , \label{eq:fieldsol} \end{equation} and the unregularized power spectrum is in this case \begin{equation} \Delta_{\phi} (k, \tau) = \frac{H^2 x^3}{8 \pi} |H_{\nu}^{(1)} (x) |^2 \ , \hspace{0.3cm} x \equiv \frac{k}{H a} \ , \label{eq:unreg-pws} \end{equation} where $H_{\nu}^{(1)}$ is the Hankel function of the first kind. In the $m=0$ case we simply have $\Delta_{\phi} (x) = H^2 (1 + x^2 ) /(4 \pi^2)$. Adiabatic regularization yields the following regularized power spectrum [$ \Delta_{\phi}^{\rm (reg)} \equiv \Delta_{\phi} - k^3 \mathcal{Q}_{k}/(2 \pi^2 )$] \cite{Parker:2007ni} \begin{align} \Delta_{\phi}^{\rm (reg)}& = \frac{H^2 x^3}{8 \pi} \left( |H_{\nu}^{(1)} (x) |^2 - \frac{g (x; m_H)}{4 \pi \left(m_H^2+x^2\right)^{7/2}} \right) , \nonumber \\[5pt] & g (x; m_H) \equiv 8 x^6 +8 x^4 \left(3 m_H^2 + 1\right)+\label{eq:powsp-dS-trad}\\ & \hspace{0.9cm} 2 m_H^2 x^2 \left(11+12 m_H^2\right)+m_H^4 \left(9+8 m_H^2\right) , \nonumber \end{align} which gives exactly $\Delta_{\phi}^{\rm (reg)} = 0$ for $m=0$. 
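The exact vanishing of the traditionally regularized spectrum in the massless limit is easy to verify numerically. The sketch below (ours; the spectrum is expressed in units of $H^2$ and the function name is illustrative) evaluates (\ref{eq:powsp-dS-trad}) over several decades in $x$:

```python
import numpy as np
from scipy.special import hankel1

def delta_reg_trad(x, m_H):
    """Regularized power spectrum of Eq. (eq:powsp-dS-trad), in units of H^2."""
    nu = np.sqrt(9/4 - m_H**2)
    g = (8*x**6 + 8*x**4*(3*m_H**2 + 1)
         + 2*m_H**2*x**2*(11 + 12*m_H**2) + m_H**4*(9 + 8*m_H**2))
    return x**3/(8*np.pi) * (np.abs(hankel1(nu, x))**2
                             - g/(4*np.pi*(m_H**2 + x**2)**3.5))

x = np.logspace(-2, 2, 200)
residual = np.max(np.abs(delta_reg_trad(x, 0.0)))   # m = 0 case
print(residual)   # zero to machine precision at all scales
```

For $m=0$ the subtraction term reduces to $\frac{2}{\pi x}(1+x^{-2})$, which equals $|H_{3/2}^{(1)}(x)|^2$ identically, so the residual is pure floating-point noise.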
Regularizing the two-point function with our proposed subtraction terms (\ref{Hk3}) yields instead the following result [$\overline{\Delta}_{\phi}^{\rm (reg)} \equiv \Delta_{\phi} - k^3 \overline{\mathcal{Q}}_{k}/(2 \pi^2 )$], \begin{equation} \overline{\Delta}_{\phi}^{\rm (reg)} = \frac{H^2 x^3}{8 \pi} \left( |H_{\nu}^{(1)} (x) |^2 - \frac{2\pi^{-1}}{ \sqrt{m_H^2 + x^2}} - \frac{2\pi^{-1}}{(M_H^2 + x^2)^{\frac{3}{2}}} \right) \label{eq:powsp-dS-alt} \end{equation} with $M_H \equiv M/H$. The power spectrum in the massless case $m=0$ is now \begin{equation} \overline{\Delta}_\phi^{\rm (reg)} =\frac{H^2}{4\pi^2}\left[1-\left(1+\frac{M_H^2}{x^2}\right)^{-3/2}\right] \ ,\end{equation} which recovers the scale-invariant result $\overline{\Delta}_\phi^{\rm (reg)} \simeq H^2 /(4 \pi^2)$ at infrared scales\footnote{Note that the Bunch-Davies vacuum generates a well-known infrared divergence in the two-point function, see e.g.~\cite{Ford:1977in}.} $x \ll M_H$. In the left panel of Fig.~\ref{fig:example-desitter} we compare the power spectra obtained with different regularization prescriptions, for the choice $m=0.01 H$. In red we depict the quasi scale-invariant power spectrum obtained by subtracting only the first term in (\ref{Hk3}) from the unregularized expression (\ref{eq:unreg-pws}), as we would do in Minkowski spacetime. The regularized two-point function associated with this spectrum is UV divergent. In orange we show the regularized power spectrum obtained with the traditional adiabatic approach (\ref{eq:powsp-dS-trad}): this prescription removes the UV divergences, but significantly suppresses the spectrum at scales $m_H \lesssim x (\lesssim 1)$. The different blue lines show the power spectra regularized with our proposed subtraction terms, Eq.~(\ref{eq:powsp-dS-alt}), for different choices of $M_H$. By setting $M_H = 1$, we can recover the quasi scale-invariant power spectrum at all superhorizon modes $x \lesssim 1$ while removing the UV tail at $x \gtrsim 1$.
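The infrared plateau and the removal of the ultraviolet tail can be probed directly from (\ref{eq:powsp-dS-alt}). The sketch below (ours; spectrum in units of $H^2$, light-field parameters as in the left panel, illustrative function name) evaluates one deep superhorizon and one deep subhorizon mode:

```python
import numpy as np
from scipy.special import hankel1

def delta_reg_alt(x, m_H, M_H):
    """Regularized power spectrum of Eq. (eq:powsp-dS-alt), in units of H^2."""
    nu = np.sqrt(9/4 - m_H**2)
    return x**3/(8*np.pi) * (np.abs(hankel1(nu, x))**2
                             - (2/np.pi)/np.sqrt(m_H**2 + x**2)
                             - (2/np.pi)/(M_H**2 + x**2)**1.5)

ir = delta_reg_alt(1e-2, 0.01, 1.0)   # superhorizon: ~ H^2/(4 pi^2), quasi scale invariant
uv = delta_reg_alt(1e2, 0.01, 1.0)    # deep subhorizon: UV tail removed
print(ir * 4*np.pi**2, uv)
```

With $M_H = 1$ the superhorizon value sits at the scale-invariant plateau, while the subhorizon spectrum decays like $x^{-2}$ instead of growing like $x^2$.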
The regularized two-point functions obtained from the integration of these spectra are finite. The right panel of Fig.~\ref{fig:example-desitter} depicts the same comparison for the heavier field $m = 0.5 H$, which illustrates that all regularized spectra recover the characteristic tilt at scales $x \ll 1$ independently of the choice of $M_H$. \section{Summary and conclusions} \label{sec:summary} In this work we have proposed an alternative regularization prescription for the two-point function of a scalar field in a FLRW spacetime, which consists of subtracting (\ref{Hk3}) from the integrand of the two-point function in momentum space, see Eq.~(\ref{eq:integral}). Unlike adiabatic regularization, our proposed subtraction terms are designed to remove the UV-divergent tail of the power spectrum while minimizing the distortions at infrared scales, i.e.~at the part of the spectrum amplified by the non-adiabatic expansion of the universe. Our prescription depends on a free mass scale $M$ that acts as a soft cutoff, and differs from traditional adiabatic regularization only by geometric terms, as in Eq.~(\ref{difreg}); as a consequence, it is constructed in a local and covariant way. We have illustrated our method by regularizing the power spectrum of a scalar field in de Sitter space: by setting $M \approx H$, we can recover the standard result $\Delta_{\phi}^{\rm (reg)} \simeq H^2 /(4\pi^2)$ for all superhorizon modes, while simultaneously removing the UV tail for subhorizon ones. In future work we plan to reexamine the regularization of the stress-energy tensor which, like the two-point function, is not uniquely defined in curved spacetime; we could potentially exploit the intrinsic ambiguities of the renormalization program to find a set of subtraction terms that also minimizes the infrared distortions.
Our proposed regularization prescription can potentially also be extended to other scenarios, such as fields coupled to homogeneous time-dependent fields, or to other field species like fermions or gauge fields. We plan to study these topics elsewhere.\vspace*{-0.2cm}
\section{Additional Plots for the Synthetic Dataset} \label{sec:appendix} \begin{figure}[!b] \centering \includegraphics[width=5.0in]{figures/syntheticSurface.pdf} \caption{Surface plots of the PINN predictions of $u$.} \label{fig:syntheticsurface} \end{figure} \begin{figure}[!b] \centering \includegraphics[width=5.0in]{figures/syntheticSurfaceDudx.pdf} \caption{Surface plots of the PINN predictions of $\partial u / \partial x$.} \label{fig:syntheticSurfaceDudx} \end{figure} \begin{figure}[!b] \centering \includegraphics[width=5.0in]{figures/syntheticSurfaceDudy.pdf} \caption{Surface plots of the PINN predictions of $\partial u / \partial y$.} \label{fig:syntheticSurfaceDudy} \end{figure} \begin{figure}[!b] \centering \includegraphics[width=5.0in]{figures/syntheticSurfaceDudt.pdf} \caption{Surface plots of the PINN predictions of $\partial u / \partial t$.} \label{fig:syntheticSurfaceDudt} \end{figure} \clearpage \section{Additional Plots for the Fire Dataset} \label{sec:appendix2} \subsection{S03 Fire Dataset} \begin{figure}[!b] \centering \includegraphics[width=5.39in]{figures/fireResults_noAssimilation_S03_Braidwood_appendix.pdf} \caption{Results for the PINN with no data assimilation.} \label{fig:fireResults_noAssimilation_appendix} \end{figure} \begin{figure}[!b] \centering \includegraphics[width=5.39in]{figures/fireResults_assimilation_S03_Braidwood_appendix.pdf} \caption{Results for the PINN with data assimilation.} \label{fig:fireResults_assimilation_appendix} \end{figure} \begin{figure}[!b] \centering \includegraphics[width=5.39in]{figures/fireResults_assimilation_bpinn_S03_Braidwood_appendix.pdf} \caption{Results for the B-PINN.} \label{fig:fireResults_assimilation_bpinn_appendix} \end{figure} \clearpage \subsection{E06 Fire Dataset} \begin{figure}[!b] \centering \includegraphics[width=5.39in]{figures/fireResults_noAssimilation_E06_Braidwood_appendix.pdf} \caption{Results for the PINN with no data assimilation.} 
\label{fig:fireResults_noAssimilation_E06_Braidwood_appendix} \end{figure} \begin{figure}[!b] \centering \includegraphics[width=5.39in]{figures/fireResults_assimilation_E06_Braidwood_appendix.pdf} \caption{Results for the PINN with data assimilation.} \label{fig:fireResults_assimilation_E06_Braidwood_appendix} \end{figure} \begin{figure}[!b] \centering \includegraphics[width=5.39in]{figures/fireResults_assimilation_bpinn_E06_Braidwood_appendix.pdf} \caption{Results for the B-PINN.} \label{fig:fireResults_assimilation_bpinn_E06_Braidwood_appendix} \end{figure} \section{Conclusion} We compare and contrast the PINN, B-PINN and the level-set method in solving the level-set equation in the context of wildfire fire-front modelling. The level-set equation is a PDE which, when solved, provides the means to model how a fire-front propagates through the spatio-temporal domain according to external factors such as wind and fuel load. We propose an approach to assist the PINN in maintaining causal dynamics by encouraging some form of continuity across time. To demonstrate the approach, we construct a synthetic dataset which exhibits extreme changes in the external factors, including extreme changes in wind direction and fuel load. We show that, without our approach, the PINN can fail to model these changes. With the proposed approach, the PINN and B-PINN are successfully applied to modelling two recorded grassland fires. We show how to apply data assimilation in the wildfire context and demonstrate how it can improve predictions. Furthermore, we show how the B-PINN can generate plausible simulations of a wildfire and provide uncertainty quantification in the simulations. Although the proposed predictive likelihood assists the PINN in tracking changes in external factors, it still does not completely solve the causality problem. We show that the PINN still finds a global solution that tends to ignore the delay effects that an obstruction has on the level-set function.
This is an interesting challenge which is left for future work. Future work could also involve applying the PINN to the inverse problem (e.g. see \cite{raissi2017physics2}) to determine difficult-to-measure parameters such as fuel type and load from data. A key feature of the PINN is that it solves the problem in continuous space and time, whereas the level-set method requires complex finite difference approximations of derivatives on a discrete grid. Finally, unlike the level-set method, the PINN offers data assimilation and uncertainty quantification. As such, the PINN provides a compelling solver for the level-set function and could be integrated into existing wildfire modelling platforms to provide a feature-rich system. \section{Discussion} Compared to traditional linear solvers (such as the level-set method), the PINN offers several appealing properties. These include a continuous solution over time and space, a non-linear representation of the solution, the ability to perform data assimilation, and the ability to provide uncertainty quantification. The level-set method requires a discretisation of space and time, which is often required to be at a high resolution. Small errors caused by inaccuracies in the linear approximations of derivatives or aliasing can grow into large artefacts resulting in a diverging solution. As such, fine grids and complex approximations to derivatives are required. Furthermore, a significant challenge in the level-set method is how to accurately perform reinitialisation. The PINN does not require discretisation and naturally handles non-linearity. Data assimilation is a key benefit as it allows for the correction of errors in the propagation of the level-set function over time. Furthermore, it provides a means to perform post-fire reconstruction which interpolates between fire isochrones and models how a fire may have evolved between the times where the isochrone data are available.
Data for assimilation may be captured from satellite and drone images and there are existing automated methods for detecting fire-front isochrones from such data (e.g. \cite{Schroeder2014New}). Finally, the B-PINN provides a means for uncertainty quantification, which is not naturally available in the level-set method. With mean-field variational inference, each parameter in the neural network is assigned a mean and variance, which implies the overall number of parameters in the B-PINN increases by a factor of two compared to the PINN. Especially with smaller neural networks such as the one used in this study, this does not add a significant amount of memory or computational overhead. The benefit is that the B-PINN is able to offer various plausible fire-front simulations via MC samples and quantify uncertainty. Relating to PINN challenges, we especially encountered problems associated with spectral bias and causality. Regarding spectral bias, the PINN would be slow to learn the high-frequency features associated with the discontinuity of the signed distance function. To address this, we over-sampled the region within the zero-level-set and we trained the model for a large number of epochs. Owing to its piece-wise nature, the Rectified Linear Unit (ReLU) activation function provides better approximations to the signed-distance function. This however comes at the cost of introducing discontinuities into the gradients. For the synthetic dataset, the tanh activation function was used, and for the fire dataset, the ReLU activation function was used. We encountered significant challenges associated with the PINN not modelling causality. Strategies such as sequence-to-sequence learning \cite{krishnapriyan2021characterizing} and weighted residual loss \cite{wang2022respecting} were not effective in our problem and led to the development of the predictive (causal) likelihood. 
Introducing this likelihood provided significant improvement and allowed the PINN to model changes in the external wind vector field. We note that the Euler approximation used for the predictive likelihood is a linear approximation and can be inaccurate with highly non-linear changes of $\u$ in time. Provided that these events are scarce, the neural network is able to treat the prediction errors as noise in the data. Furthermore, the error can be reduced by reducing $\Delta t$ or by using more advanced integration approximations such as higher-order Runge-Kutta methods. \section{Introduction} The wildfires that swept Australia in 2019 and 2020 burned over 19 million hectares of land, destroyed over 3094 homes, killed an estimated 1.25 billion animals, and resulted in the loss of 33 human lives \cite{wwf2022australia}. This is but one example of the disastrous effects of wildfires that occur across the globe every year \cite{Bowman_2020}. Capturing wildfire dynamics in a model to allow emergency management decision-makers to readily access, interpret and make trusted decisions in the face of uncertainty is paramount as highlighted by \cite{huston2022creating}. Capturing wildfire dynamics under uncertainty in a model is a challenging task. Wildfire prediction platforms such as SPARK \cite{miller2015spark} and WRF-SFIRE \cite{mandel2011coupled} are based on the level-set method, which implicitly models the fire-front using an auxiliary three-dimensional surface called the level-set function. This function is evolved over space and time using a Partial Differential Equation (PDE) known as the level-set equation. The fire-front is represented by the zero level-set, which comprises a set of closed curves or isochrones located where the level set function equates to zero. As the fire-front is not directly parameterised, the level-set method is able to follow changes in the topology, such as shape splits and mergers \cite{Osher2001Level}. 
The disadvantages of the level-set method include (1) complicated finite difference approximations to derivatives, (2) a finely-scaled discretised grid, and (3) a reinitialisation step to maintain the signed distance function properties of the level-set function \cite{Osher2001Level,sethian1999level}. Furthermore, in the context of wildfires, the level-set method does not provide any natural means for data assimilation and uncertainty quantification \cite{yoo2022bayesian,dabrowski2022towards}. Without data assimilation, inaccuracies can grow into large errors with no means for correction \cite{Rochoux2013Regional}. Furthermore, in the absence of uncertainty quantification, making informed decisions can be challenging \cite{huston2022creating,kuhnert2018making}. The Physics Informed Neural Network (PINN) \cite{Lagaris1998Artificial,raissi2019physics} is a machine learning approach that uses a neural network to solve a PDE. PINNs have recently received increased interest as they take advantage of the non-linearity, differentiability, and the universal approximation properties of neural networks to provide an approximate solution to PDEs over \textit{continuous} time and space \cite{Cuomo2022Scientific}. This is achieved by embedding the PDE, as well as initial and boundary conditions into a neural network's training process. Solving the PDE conditions is performed in a partially unsupervised manner, which allows a machine learning model to incorporate knowledge when data is scarce. Furthermore, the PINN offers some form of interpretability as its underlying neural network is constrained to produce solutions of the PDE. Finally, the PINN offers a means to perform data assimilation and provide uncertainty quantification. In this study we explore the application of the PINN to solve the level-set equation for wildfire fire-front modelling. 
Our objectives are (1) construct a PINN that is able to solve the level-set equation and model the dynamics of a fire with varying external conditions, such as wind and fuel load; (2) devise an approach to assimilate data in the form of observations of the fire-front into the PINN; and (3) provide a means to quantify the uncertainty of a PINN's predictions in a statistically meaningful way. For the first objective we show how the PINN can fail to model extreme changes in the external conditions and propose the use of a predictive likelihood function to address this problem. To demonstrate this, we construct a synthetic dataset that highlights the problem and shows how the predictive likelihood encourages some form of continuity of the PINN predictions over time. For the second objective, we propose a likelihood function that compels the PINN to make predictions that correlate with fire-front observations. For the third objective, we use variational inference as a Bayesian approach to train the PINN, thereby providing a means to model and quantify uncertainty in the PINN and its predictions. Throughout these endeavours, we compare and contrast the PINN to the level-set method. We begin by introducing the problem and approaches and discuss related work. This is followed by a presentation of the level-set equation, the level-set method, and the PINN. We present results on a synthetic dataset to demonstrate how the PINN can overcome challenging issues due to causation. The PINN and level-set method are then contrasted on fire datasets where data is assimilated into the model and uncertainties are quantified. We conclude the study with a discussion of the usefulness of PINNs moving forward as a method for approximating complex spatio-temporal dynamics.
\section{The Level-Set Equation} The level-set equation in its first form is a Hamilton–Jacobi equation given by \cite{Osher2001Level,sethian1999level} \begin{align} \label{eq:levelSetEquation1} \frac{\partial \u}{\partial t} + s \| \nabla \u \| = 0 \end{align} where $\u(t,x,y)$ is the level set function\footnote{For notational convenience, we may represent $\u(t,x,y)$ in shorthand form as $\u$.} in two-dimensional space $(x,y)$ and time $t$, $s$ is a scalar describing the speed at which the level-set function moves in its normal direction, and $\| \cdot \|$ indicates the Euclidean norm. The level-set function can be propagated by solving the level-set equation \cite{Osher2001Level,sethian1999level}. In the context of a wildfire, $s$ specifies how the fire expands over time (in the normal direction) and includes factors such as fuel properties and landscape topography. This can be viewed as an internal self-generated vector field that causes the level-set function to propagate in the normal direction with a velocity proportional to its curvature. However, a fire is also propagated by wind, which can be represented by an external vector field $\boldsymbol{w}$ that advects the level set function across space. The level-set equation can be generalised to include both internal and external vector fields \cite{osher2003level} by writing it in an advection equation form given by \begin{align} \label{eq:levelSetEquation} \frac{\partial \u}{\partial t} + \mathbf{c} \cdot \nabla \u = 0 \end{align} where $\mathbf{c}$ is a vector field containing both internal and external factors. Let $\hat{\boldsymbol{n}}$ be the normal vector of the level-set function and let $\boldsymbol{s} = s \hat{\boldsymbol{n}}$.
According to the Rothermel wildfire Rate Of Spread (ROS) model \cite{rothermel1972mathematical}, the vector field $\c$ can be expressed as \begin{align} \c = \boldsymbol{s} + \boldsymbol{w} \end{align} With this, and using $\hat{\boldsymbol{n}} \cdot \nabla \u = \| \nabla \u \|$, we have that \begin{align} & \frac{\partial \u}{\partial t} + (s \hat{\boldsymbol{n}} + \boldsymbol{w}) \cdot \nabla \u = 0 \nonumber \\ \Rightarrow \; & \frac{\partial \u}{\partial t} + s \| \nabla \u \| + \boldsymbol{w} \cdot \nabla \u = 0 \end{align} Here the second term corresponds to the level-set equation of (\ref{eq:levelSetEquation1}) and the third term corresponds to an advection equation. In the wildfire application we require that $\c \geq 0$. If $\c < 0$, the level set function moves in a direction of concavity causing circular level sets to shrink, which would represent a fire that backtracks on itself. Since this is physically unlikely, we add a constraint on $\c$ \cite{hilton2015effects}. For a given location $(x, y)$ and time $t$, we constrain $\c$ such that: \begin{align} c(x,y,t) = \begin{cases} s(x,y,t) + w(x,y,t) & c(x,y,t) > 0 \\ 0 & c(x,y,t) \leq 0 \end{cases} \end{align} The fire-front is represented by the zero-level-set $\Gamma$ of the level-set function $\u$, which is a closed curve given by \begin{align} \Gamma = \left\{ (x,y) | \u(t,x,y) = 0 \right\} \end{align} As the level-set function evolves over time and space, the zero-level-set can be tracked to provide a simulation of a wildfire fire-front. \subsection{Distance Functions} \label{sec:signedDistanceFunction} The signed distance function is the most commonly used level-set function \cite{osher2003level}. It is positive on the exterior, negative on the interior, and zero on the boundary of the zero-level-set. Furthermore, the condition of \begin{align} \| \nabla \u \| = 1 \end{align} may be imposed for a signed distance function. The cone signed distance function is given by \begin{align} \label{eq:cone} u(t=0,x,y) = \sqrt{x^2 + y^2} - k \end{align} where $k$ is a constant.
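The signed-distance property $\| \nabla \u \| = 1$ of the cone (\ref{eq:cone}) is easy to check numerically. The sketch below (our own illustration; grid extent, resolution and the function name are illustrative) evaluates the gradient norm with central finite differences away from the tip, where the gradient is singular:

```python
import numpy as np

def cone_sdf(x, y, k=0.5):
    """Cone signed distance function of Eq. (eq:cone); zero level set: circle of radius k."""
    return np.sqrt(x**2 + y**2) - k

h = 0.01
X, Y = np.meshgrid(np.arange(-2, 2, h), np.arange(-2, 2, h))
u = cone_sdf(X, Y)
gy, gx = np.gradient(u, h)                   # central finite differences
grad_norm = np.sqrt(gx**2 + gy**2)
away_from_tip = np.sqrt(X**2 + Y**2) > 0.2   # exclude the singular tip
max_dev = np.abs(grad_norm[away_from_tip] - 1.0).max()
print(max_dev)   # small: || grad u || = 1 holds away from the tip
```

The deviation is at the level of the finite-difference truncation error, confirming the signed-distance property everywhere except at the cone tip.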
In this form, the initial zero level set is a circle, with its radius defined by $k$. Similarly, for an initial elliptical zero level-set, an elliptical cone is given by \begin{align} \label{eq:ellipticalCone} u(t=0,x,y) = \sqrt{ \frac{ \left( x \cos(\alpha) + y \sin(\alpha) \right)^2 }{a^2} + \frac{ \left( x \sin(\alpha) - y \cos(\alpha) \right)^2 }{b^2} } - k \end{align} where $a$ and $b$ define the width and height of the ellipse, $\alpha$ is the rotation angle of the ellipse, and $k$ defines the offset of the cone from the zero-$xy$ plane. Note that the elliptical cone does not necessarily conform to the signed-distance function condition of $\| \nabla \u \| = 1$. \section{The Level-Set Method} \subsection{Discretisation} The level-set method solves the level-set equation using accurate finite difference numerical methods. The first-order hyperbolic upwind finite difference approximation to (\ref{eq:levelSetEquation}) is given by \cite{sethian1999level} \begin{align} \label{eq:discrete_level_set} \u_{ij}^{n+1} = \u_{ij}^{n} - \Delta t \left( \max(c, 0) \nabla_{ij}^{+} + \min(c,0) \nabla_{ij}^{-} \right) \end{align} where \begin{align} \label{eq:nabla_plus} \nabla_{ij}^{+} = \Big[& \max(D^{-x} \u_{ij}^n, 0)^2 + \min(D^{+x} \u_{ij}^n, 0)^2 + \nonumber \\ & \max(D^{-y} \u_{ij}^n, 0)^2 + \min(D^{+y} \u_{ij}^n, 0)^2 \Big]^{1/2} \end{align} and \begin{align} \label{eq:nabla_minus} \nabla_{ij}^{-} = \Big[& \min(D^{-x} \u_{ij}^n, 0)^2 + \max(D^{+x} \u_{ij}^n, 0)^2 + \nonumber \\ & \min(D^{-y} \u_{ij}^n, 0)^2 + \max(D^{+y} \u_{ij}^n, 0)^2 \Big]^{1/2} \end{align} Here, $D^{-x}$ and $D^{+x}$ are the backward and forward difference operators in the $x$-direction respectively. The indices $i,j$ index the discretised $x$ and $y$ axes, $n$ indexes discrete time, and $\Delta t$ is the discrete time step size. \subsection{Reinitialisation} After the level set function evolves over time under $\c$, it generally does not remain a signed distance function.
The level-set function can be reinitialised by finding a new $\u$ with the same zero-level set but with $| \nabla \u | = 1$. One option is to use the following reinitialisation equation \cite{sussman1994level}: \begin{align} \label{eq:reinitialisation} \u_t +\text{sign}(\u)( | \nabla \u | - 1) = 0 \end{align} This reinitialisation equation can be discretised similarly to the level-set equation. For this, the discontinuity in the sign function can be smoothed over a few grid cells. \subsection{Algorithm} In practice, the level set function is defined over a spatial grid. At each time step, the entire level set function is updated at each point in the grid according to (\ref{eq:discrete_level_set}). The algorithm for the level-set method is presented in Algorithm \ref{alg:level_set_method}. \begin{algorithm}[!h] \begin{algorithmic}[1] \Require The discrete time step size $\Delta t$, a spatial grid over $x$ and $y$ with step size $\Delta h$, the total number of time steps $T$. \State Initialise the level-set function over the spatial grid (e.g. an ellipse at a given location). \For{$n \in (1,T)$} \State Calculate the forward and backward differences over $x$ and $y$: \hskip\algorithmicindent $D^{+x} \u_{ij}^n$, $D^{-x} \u_{ij}^n$, $D^{+y} \u_{ij}^n$, and $D^{-y} \u_{ij}^n$. \State Calculate $\nabla_{ij}^{+}$ and $\nabla_{ij}^{-}$ according to (\ref{eq:nabla_plus}) and (\ref{eq:nabla_minus}) respectively. \State Update the level-set function according to (\ref{eq:discrete_level_set}). \State Optionally perform a reinitialisation step using (\ref{eq:reinitialisation}). \EndFor \end{algorithmic} \caption{Level-set method algorithm.} \label{alg:level_set_method} \end{algorithm} \section{Physics Informed Neural Network} We apply the PINN to solve the level-set PDE given in (\ref{eq:levelSetEquation}). For this, the level set function $\u(t, x, y)$ is approximated with a neural network $\tilde{\u}(t, x, y, s, w_x, w_y)$.
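As a point of reference before detailing the PINN, the upwind update of Algorithm \ref{alg:level_set_method} can be sketched in a few lines of NumPy (a minimal illustration, not the production implementation; the grid, speed and time step are our own choices, and periodic \texttt{np.roll} differences stand in for proper boundary handling):

```python
import numpy as np

def upwind_step(u, c, dt, dh):
    """One first-order upwind update of Eq. (eq:discrete_level_set)."""
    Dmx = (u - np.roll(u,  1, axis=0)) / dh   # D^{-x}, backward difference
    Dpx = (np.roll(u, -1, axis=0) - u) / dh   # D^{+x}, forward difference
    Dmy = (u - np.roll(u,  1, axis=1)) / dh
    Dpy = (np.roll(u, -1, axis=1) - u) / dh
    nab_p = np.sqrt(np.maximum(Dmx, 0)**2 + np.minimum(Dpx, 0)**2
                    + np.maximum(Dmy, 0)**2 + np.minimum(Dpy, 0)**2)
    nab_m = np.sqrt(np.minimum(Dmx, 0)**2 + np.maximum(Dpx, 0)**2
                    + np.minimum(Dmy, 0)**2 + np.maximum(Dpy, 0)**2)
    return u - dt * (max(c, 0.0) * nab_p + min(c, 0.0) * nab_m)

# expand a circular fire front of radius 0.5 at constant normal speed c = 1
xs = np.linspace(-2, 2, 201); dh = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
u = np.sqrt(X**2 + Y**2) - 0.5                 # cone signed distance function
for _ in range(30):                            # evolve to t = 0.3 (CFL: dt < dh / c)
    u = upwind_step(u, 1.0, 0.01, dh)

row = u[:, 100]                                # slice along y = 0
i = int(np.where((row[:-1] < 0) & (row[1:] >= 0))[0][0])
front_radius = xs[i] - row[i] * dh / (row[i + 1] - row[i])
print(front_radius)                            # expected near 0.5 + 0.3 = 0.8
```

The zero level set expands at the prescribed normal speed, so the front radius grows from $0.5$ to approximately $0.8$ after evolving to $t=0.3$.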
Although $\u$ is not explicitly a function of $\c$, according to (\ref{eq:levelSetEquation}), its dynamics are. As such, the components of $\c$ ($s$, $w_x$, and $w_y$) are provided as inputs to the neural network $\tilde{\u}$ to allow modelling of the external effects of $\c$ over time. Although various neural network architectures have been considered for PINNs \cite{Cuomo2022Scientific}, the Multilayer Perceptron (MLP) architecture is used in this study. Given $\tilde{\u}$, we define the PINN according to (\ref{eq:levelSetEquation}) as \cite{raissi2019physics, raissi2017physics} \begin{align} \label{eq:pinn} f(t,x,y; \boldsymbol{\theta}) := \frac{\partial \tilde{\u}}{\partial t} + \c \cdot \nabla \tilde{\u} \end{align} where $\boldsymbol{\theta}$ are the PINN's parameters, which typically comprise the neural network parameters (i.e. $\tilde{\u} := \tilde{\u}(t, x, y, s, w_x, w_y; \boldsymbol{\theta})$). The partial differentiation of $\tilde{\u}$ with respect to its inputs $t$, $x$, $y$ is performed using automatic differentiation (also known as algorithmic differentiation), which algorithmically computes the derivatives according to the neural network architecture \cite{baydin2018automatic}. The overall model structure of the PINN is illustrated in Figure \ref{fig:pinn}. The figure outlines the flow of information from the inputs to the neural network, to the prediction of the level-set function, to the formation of the PDE. \begin{figure}[!t] \centering \input{figures/pinnArchitecture.tex} \caption{Architecture of the proposed PINN. The MLP is a neural network that takes six inputs $t, x, y, s, w_x, w_y$ and predicts the level set function $\tilde{\u}$.
The PDE is then constructed from partial derivatives that are calculated using automatic differentiation through the MLP.} \label{fig:pinn} \end{figure} \subsection{Likelihood and Optimisation} Consider a dataset $\mathcal{D}$ comprising several subsets, including a set of initial (boundary) conditions $\mathcal{D}_i = \{ t_j, x_j, y_j, u_j \}_{j=1}^{N_i}$, a set of assimilation data $\mathcal{D}_a = \{ t_j, x_j, y_j \}_{j=1}^{N_a}$, a set of collocation points over the spatio-temporal domain $\mathcal{D}_f = \{ t_j, x_j, y_j \}_{j=1}^{N_f}$, and a set of predictions $\mathcal{D}_p = \{ t_j, x_j, y_j, \hat{u}_j \}_{j=1}^{N_p}$. We assume the data are i.i.d. and Gaussian-distributed. The PINN, which is parametrised by $\boldsymbol{\theta}$, is optimised by maximising the likelihood given by \begin{align} \label{eq:ltot} p(\mathcal{D} | \boldsymbol{\theta}) = p(\mathcal{D}_i | \boldsymbol{\theta}) p(\mathcal{D}_f | \boldsymbol{\theta}) p(\mathcal{D}_a | \boldsymbol{\theta}) p(\mathcal{D}_p | \boldsymbol{\theta}) \end{align} The likelihoods $p(\mathcal{D}_i | \boldsymbol{\theta})$ and $p(\mathcal{D}_f | \boldsymbol{\theta})$ are referred to as the initial likelihood and the physics likelihood respectively. In the literature, these are typically represented as Mean Squared Error (MSE) loss functions and are the key components to a PINN loss function \cite{raissi2019physics, raissi2017physics}. The predictive (causal) likelihood $p(\mathcal{D}_p | \boldsymbol{\theta})$ and the data assimilation likelihood $p(\mathcal{D}_a | \boldsymbol{\theta})$ are novel additions to the PINN optimisation objective in this study, and are described in Sections \ref{sec:causalLoss} and \ref{sec:dataAssimilationLoss} respectively. As the dataset $\mathcal{D}_i$ comprises initial conditions (boundary conditions typically do not exist in the wildfire context), it guides the neural network to learn the initial shape of $\u(t,x,y)$. 
The dataset $\mathcal{D}_i$ can be generated using distance functions such as (\ref{eq:cone}) and (\ref{eq:ellipticalCone}). If we assume that the outputs of the neural network are Gaussian-distributed with a known standard deviation of $\sigma_i$, the initial likelihood for $\mathcal{D}_i$ is given by \begin{align} \label{eq:li} p(\mathcal{D}_i | \boldsymbol{\theta}) = \prod_{j=1}^{N_i} \frac{1}{2 \pi \sigma_i^2} \exp \left( - \frac{ \left( \tilde{u}(t_j, x_j, y_j; \boldsymbol{\theta}) - u_j \right)^2 }{ 2 \pi \sigma_i^2} \right) \end{align} The physics likelihood has the purpose of minimising $f(t,x,y;\boldsymbol{\theta})$ in (\ref{eq:pinn}) to constrain the PINN to follow the spatio-temporal dynamics (or so-called ``physics'') of the PDE. The dataset $\mathcal{D}_f$ comprises collocation points of $(t,x,y)$ over the spatio-temporal domain. Each collocation point may be sampled randomly or sampled from a predefined spatio-temporal grid. Note that samples of $\u$ (i.e. ground-truth) are not required in $\mathcal{D}_f$. As the PDE is solved when $f(t,x,y;\boldsymbol{\theta})=0$, we assume a zero-mean Gaussian with a standard deviation of $\sigma_f$. That is, for the collocation data points $\mathcal{D}_f$, the physics likelihood is given by \begin{align} \label{eq:lf} p(\mathcal{D}_f | \boldsymbol{\theta}) = \prod_{j=1}^{N_f} \frac{1}{2 \pi \sigma_f^2} \exp \left( - \frac{ f(t_j, x_j, y_j; \boldsymbol{\theta})^2 }{ 2 \pi \sigma_f^2} \right) \end{align} Maximising this likelihood encourages $f(t,x,y;\boldsymbol{\theta}) = 0$, which is the solution to the PDE in (\ref{eq:levelSetEquation}). To represent a perfect solution to the PDE, $\sigma_f$ should ideally be zero to force the Gaussian into a Dirac distribution at the origin. However, the PINN is an approximation and is not guaranteed to find a perfect solution.
Furthermore, an arbitrarily small value can cause the optimiser to focus on maximising $p(\mathcal{D}_f | \boldsymbol{\theta})$ and ignore the other likelihoods in $p(\mathcal{D} | \boldsymbol{\theta})$. The value is thus usually set larger than $\sigma_i$ to ensure that the optimiser sufficiently fits to the initial condition data $\mathcal{D}_i$. The PINN is trained over several epochs with the log-likelihood, $\ln p(\mathcal{D} | \boldsymbol{\theta})$. The result is that the neural network models a surface over the space of $(t,x,y)$ and represents a continuous solution to the PDE. This solution can be evaluated by providing a point $(t,x,y)$ to the neural network to return an approximation of $\u(t,x,y)$. \subsection{Causality} \label{sec:causalLoss} The sequential sampling \cite{krishnapriyan2021characterizing,mattey2022novel} and sequential weighting \cite{wang2022respecting} solutions to the causality problem in PINNs force the PINN to treat the data in a sequential manner. However, they do not necessarily enforce any form of continuity over time. The non-linear nature of a neural network allows it to make non-linear transitions in time which maximise the physics likelihood, but these transitions may not necessarily be physically justified. For example, the PINN can solve an unforced PDE by simply setting all derivatives to zero. As we show in our results, the PINN can shift to such a degenerate solution with a non-linear change in $\c$. We thus propose an alternative approach to enforce causality with the predictive likelihood function $p(\mathcal{D}_p | \boldsymbol{\theta})$. To encourage some form of continuity of the level-set function over time, a forecast $\hat{\u}_{n+1}$ is made at some time $n$ of the level-set function at time $n+1$. At time $n+1$ the level-set function is compared with the prediction from the previous time $n$ according to the likelihood function $p(\mathcal{D}_p | \boldsymbol{\theta})$.
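This forecast-and-compare step can be sketched on a grid of collocation points as follows. The numpy sketch below uses our own naming and a spatially uniform field $\c$ for illustration; the minus sign in the Euler step follows from rearranging the level-set equation as $\partial \u / \partial t = -\c \cdot \nabla \u$.

```python
# Sketch of the causal forecast: u_hat_{n+1} = u_n - dt * c . grad(u_n), and
# the squared mismatch against the network's value at time n+1 that drives
# the predictive likelihood. Grid shapes and the spatially uniform vector
# field c are illustrative assumptions.
import numpy as np

def euler_forecast(u_n, c, dt, dx, dy):
    """One explicit Euler step of u_t = -c . grad(u) on a regular grid."""
    du_dy, du_dx = np.gradient(u_n, dy, dx)  # axis 0 indexes y, axis 1 indexes x
    cx, cy = c
    return u_n - dt * (cx * du_dx + cy * du_dy)

def prediction_mismatch(u_next, u_n, c, dt, dx, dy):
    """Mean squared error between the forecast and the network at time n+1."""
    return np.mean((u_next - euler_forecast(u_n, c, dt, dx, dy)) ** 2)
```

For example, a planar level-set function $u = x - 0.2$ advected by $c = (0.4, 0)$ with step $\Delta t = 0.1$ is translated by $0.04$ in $x$, so a network output matching $u = x - 0.24$ at the next step incurs (numerically) zero mismatch.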
The prediction is calculated based on the reformulation of (\ref{eq:levelSetEquation}) into the integral form of \begin{align} \u_{n+1} = \u_{n} - \Delta t \int_0^1 \c \cdot \nabla \u \, d\tau \end{align} where $\Delta t = t_{n+1} - t_{n}$, $\tau = (t - t_n)/\Delta t$, and $\u_{n} := \u(t=t_n,x,y)$. The minus sign follows from rearranging (\ref{eq:levelSetEquation}) as $\partial \u / \partial t = -\c \cdot \nabla \u$. The Euler approximation to this integral is \begin{align} \label{eq:prediction} \hat{\u}_{n+1} = \tilde{\u}_{n} - \Delta t \, \c \cdot \nabla \tilde{\u}_{n} \end{align} where we have also used the neural network approximation $\tilde{\u} \approx \u$. The predictive likelihood function associated with the prediction is thus given by \begin{align} \label{eq:lp} p(\mathcal{D}_p | \boldsymbol{\theta})= \prod_{n=1}^{N_p} \frac{1}{2 \pi \sigma_p^2} \exp \left( - \frac{ \left( \tilde{u}_{n+1} - \hat{u}_{n+1} \right)^2 }{ 2 \pi \sigma_p^2} \right) \end{align} The collocation points and corresponding neural network outputs for the physics likelihood can be used to provide $t_j=n$, $x_j$, $y_j$ and $\tilde{u}_n$. With these observations, $\hat{\u}_{n+1}$ is calculated using (\ref{eq:prediction}) and $\tilde{\u}_{n=1}$ corresponds to the initial conditions. That is, the dataset $\mathcal{D}_p$ is self-generated and shares the collocation points with $\mathcal{D}_f$ for $n>1$. This approach requires a grid-based sampling regime to generate the set of collocation points $\{ t_j, x_j, y_j \}_{j=1}^{N_p}$. While these collocation points are grid based, the PINN solution remains continuous. \subsection{Data Assimilation} \label{sec:dataAssimilationLoss} A key feature that the PINN offers (and the level-set method does not) is the ability to perform data assimilation. To perform this, it is assumed that a dataset is provided that contains samples of the zero-level set over space and time. In the wildfire context, this dataset would contain the location of points along the fire-front at various times. Suppose $N_a$ data assimilation samples are provided.
As these samples lie on the zero-level set, the level-set function evaluates to zero at all the points represented by the samples. We thus assume that the samples are Gaussian-distributed with a zero mean and a known variance $\sigma_a^2$ such that the assimilation likelihood is given by \begin{align} \label{eq:la} p(\mathcal{D}_a | \boldsymbol{\theta}) = \prod_{j=1}^{N_a} \frac{1}{2 \pi \sigma_a^2} \exp \left( - \frac{ \tilde{u}(t_j, x_j, y_j; \boldsymbol{\theta})^2 }{ 2 \pi \sigma_a^2} \right) \end{align} where $(x_j, y_j)$ is a point on the zero-level set at time $t_j$ and $j$ indexes a data sample in $\mathcal{D}_a$. These samples can be randomly distributed over the zero-level set but do not necessarily have to span across the entire zero-level set contour. Similarly, samples across time can be random and do not have to span the entire considered time scale. Note that assimilation data are typically not provided at time $t=0$ to avoid any conflict between the initial level-set function and the assimilation data. \subsection{Uncertainty Quantification and the Bayesian PINN} \label{sec:bpinn} To provide uncertainty quantification, we consider the Bayesian PINN (B-PINN) \cite{yang2021bpinns}. The premise of the B-PINN is to compute the posterior distribution over the parameters of the PINN, which is given by \begin{align} p(\boldsymbol{\theta} | \mathcal{D}) = \frac{p( \mathcal{D} | \boldsymbol{\theta}) p(\boldsymbol{\theta})}{p(\mathcal{D})} \end{align} As $p(\mathcal{D})$ is generally intractable in neural networks, we resort to finding an approximation to the posterior using variational inference. In particular, we follow the Bayes-by-backprop approach \cite{blundell2015weight}. Consider the variational posterior distribution $q_{\boldsymbol{\phi}}(\boldsymbol{\theta})$ parametrised by $\boldsymbol{\phi}$, which provides an approximation to the posterior $p(\boldsymbol{\theta} | \mathcal{D})$. 
The variational inference approach optimises the parameters $\boldsymbol{\phi}$ to minimise the negative evidence lower bound (ELBO) given by \begin{align} -\mathfrak{L}(q) = \mathbb{E}_{q_{\boldsymbol{\phi}}(\boldsymbol{\theta})}[ \log q_{\boldsymbol{\phi}}(\boldsymbol{\theta}) - \log p(\mathcal{D}| \boldsymbol{\theta}) - \log p(\boldsymbol{\theta})] \end{align} The prior $p(\boldsymbol{\theta})$ is assumed to take a spike-and-slab form, which is a scale mixture of two Gaussians given by \cite{blundell2015weight} \begin{align} p(\boldsymbol{\theta}) = \prod_j \left[ \frac{1}{2} \mathcal{N}(\theta_j | 0, \exp(-0)) + \frac{1}{2} \mathcal{N} (\theta_j | 0, \exp(-6)) \right] \end{align} The mean-field approximation is assumed for the variational posterior and is given by \begin{align} q_{\boldsymbol{\phi}}(\boldsymbol{\theta}) = \prod_j \mathcal{N}(\theta_j | \mu_j, \sigma_j) \end{align} such that $\phi_j = (\mu_j, \sigma_j)$. In training (model fitting), the parameters $\boldsymbol{\phi}$ are optimised using back-propagation and the path-wise gradient estimator (reparametrisation trick) \cite{kingma2014autoencoding}. To make a prediction, a set of $N_{\text{MC}}$ Monte Carlo (MC) samples of the neural network parameters $\boldsymbol{\theta}$ can be drawn from the variational posterior distribution. Each sample of $\boldsymbol{\theta}$ can be used to produce a new simulation, generating a set of $N_{\text{MC}}$ MC simulations. The distribution over these simulations provides an MC estimate of the posterior predictive distribution. Statistics such as mean, standard deviation, and confidence intervals can be computed from these MC simulations to quantify the uncertainty of the PINN predictions. \subsection{Model Configurations and Nomenclature} A nomenclature is defined for the set of PINN configurations used in this study. These configurations are constructed based on variations in the likelihood function and the use of Bayesian inference.
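The B-PINN's Monte Carlo prediction step described above can be sketched as follows; each draw from the mean-field posterior uses the same reparametrisation $\theta = \mu + \sigma \epsilon$ as in training. The function name and parameter shapes are illustrative assumptions.

```python
# Sketch of reparametrised Monte Carlo draws from the mean-field variational
# posterior q(theta) = prod_j N(theta_j | mu_j, sigma_j). Each row of the
# returned array would parametrise one B-PINN simulation; statistics over
# the simulations then estimate the posterior predictive distribution.
import numpy as np

def sample_parameters(mu, sigma, n_mc, rng):
    """Draw n_mc samples theta = mu + sigma * eps with eps ~ N(0, I)."""
    eps = rng.standard_normal((n_mc,) + mu.shape)
    return mu + sigma * eps
```

Statistics such as the mean, standard deviation, and confidence intervals of the resulting set of simulations are then computed per spatio-temporal point.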
PINN-e is the elementary PINN defined in the literature, and its likelihood is given by \begin{align} p(\mathcal{D} | \boldsymbol{\theta}) = p(\mathcal{D}_i | \boldsymbol{\theta}) p(\mathcal{D}_f | \boldsymbol{\theta}) \end{align} PINN-p includes the predictive (causal) likelihood in (\ref{eq:lp}) such that \begin{align} p(\mathcal{D} | \boldsymbol{\theta}) = p(\mathcal{D}_i | \boldsymbol{\theta}) p(\mathcal{D}_f | \boldsymbol{\theta}) p(\mathcal{D}_p | \boldsymbol{\theta}) \end{align} PINN-a includes the predictive (causal) likelihood in (\ref{eq:lp}) and the assimilation likelihood in (\ref{eq:la}) such that \begin{align} p(\mathcal{D} | \boldsymbol{\theta}) = p(\mathcal{D}_i | \boldsymbol{\theta}) p(\mathcal{D}_f | \boldsymbol{\theta}) p(\mathcal{D}_a | \boldsymbol{\theta}) p(\mathcal{D}_p | \boldsymbol{\theta}) \end{align} In this study, the B-PINN includes all likelihoods and is the Bayesian counterpart of PINN-a. \section*{Acknowledgements} This work was supported by the CSIRO MLAI Future Science Platform. \bibliographystyle{unsrtnat} \section{Related Work} \label{sec:relatedWork} Spatio-temporal models have been widely used to capture complex interactions and dynamics of environmental systems. Modelling approaches have ranged from traditional machine learning methods such as random forests, where the input variables are assumed to accommodate the spatio-temporal variation in the data \cite{kuhnert2010incorporating}, and kriging \cite{ZammitMangion2021frk}, to more sophisticated physical-statistical models \cite{kuhnert2014physical} built on Bayesian hierarchical modelling frameworks \cite{wikle1998hierarchical,wikle2010general,gladish2016spatiotemporal,wikle2019spatio}, and, more recently, emulation approaches that use machine learning to approximate physical systems in an attempt to speed up estimation and allow for the quantification of uncertainty \cite{bolt2022spatiotemporal}.
\citet{bolt2022spatiotemporal} in particular has demonstrated emulation approaches for the Spark wildfire model. The level-set method provides an effective approach for modelling the spread of wildfires \cite{mallet2009modeling,hilton2015effects,hilton2016curvature,miller2015spark,mandel2011coupled,yoo2022bayesian}. It uses finite difference approximations to the level-set PDE, such as the Essentially Nonoscillatory (ENO) scheme and hyperbolic upwind techniques \cite{osher2003level,sethian1999level}. However, these methods can be complicated and often require a fine spatio-temporal grid to maintain stability. This becomes computationally expensive when modelling large regions. A key disadvantage of the level-set method is that it does not offer any natural means for data assimilation. Data assimilation is valuable in wildfire applications and it is often based on Bayesian filtering methods such as Kalman filtering and Sequential Monte Carlo methods \cite{Srivas2016Wildfire,Xue2012Data,Silva2014Application}. It is possible to incorporate the level-set method into the Bayesian filtering paradigm; however, examples of this are scarce in the literature. To our knowledge, only the Best Linear Unbiased Estimator (BLUE) (a simplified Kalman filter) \cite{Rochoux2013Regional} and the particle filter \cite{dabrowski2022towards} have been considered for this purpose. These approaches can add a significant amount of computational complexity to the problem and it can be challenging to ensure that the approaches operate within the constraints of the level-set method (e.g. maintaining smooth gradients in the level-set function and performing reinitialisation). Uncertainty quantification is challenging with the level-set method. It requires representing the level-set function (and its derivatives) in a probabilistic paradigm and operating on this representation with the finite difference approach of the level-set method.
One approach is to combine a mechanistic dynamic level-set model and a stochastic spatio-temporal dynamic front velocity model using a hierarchical Bayesian approach \cite{yoo2022bayesian}. An alternative approach combines the particle filter and level-set method \cite{dabrowski2022towards}. Both involve inference methods that are computationally expensive. Several recent reviews on PINNs have been presented, such as \cite{Cuomo2022Scientific,Karniadakis2021Physics,Kollmannsberger2021Physics}. Gaussian processes were developed as a machine learning approach for solving differential equations \cite{raissi2017machine}. However, in non-linear settings, Gaussian processes have limitations that can restrain the accuracy and robustness of the solutions \cite{raissi2019physics}. \citet{raissi2019physics} presented the PINN in the context of key recent developments (e.g. optimisation frameworks and automatic differentiation) and also presented a novel discrete-time stepping approach based on implicit Runge–Kutta schemes. Furthermore, it was shown how the PINN is able to solve both the forward problem (solving the PDE) and the inverse problem (discovery of the PDE). PINNs have since been applied to problems such as hemodynamics, flows, optics, electromagnetics, molecular dynamics, and geoscience \cite{Cuomo2022Scientific}. Several research avenues on PINNs have been considered, including neural network architectures and optimisation approaches \cite{Cuomo2022Scientific}. The PINN offers a means to quantify uncertainty. Given the unsupervised nature of the PINN's training process, uncertainty is typically represented in the form of epistemic uncertainty, which relates to the uncertainty in the PINN parameters \cite{hullermeier2021aleatoric}. In the Bayesian approach, the posterior distribution of the PINN parameters given the data is computed using methods such as variational inference and MCMC \cite{yang2021bpinns}.
Under the Bayesian paradigm, the PINN is commonly referred to as the Bayesian PINN or B-PINN. Other approaches to incorporating uncertainty in the PINN include randomised prior functions \cite{Costabal2020Physics}, Monte Carlo Dropout \cite{zhang2019quantifying}, and adversarial approaches \cite{yang2019adversarial,Gao2022Wasserstein}. Compared to traditional linear PDE solvers, PINNs may have long training times and can have limited accuracy \cite{markidis2021old}. The universal approximation theorem \cite{hornik1989multilayer} suggests that neural networks can achieve high levels of accuracy, but this is not always simple to realise in practice. It is often not a lack in capacity of the neural network that limits accuracy, but rather challenges associated with optimising over a complex landscape \cite{krishnapriyan2021characterizing}. These challenges include gradient pathologies \cite{wang2021understanding} and spectral bias \cite{wang2021eigenvector,wang2022when}. A significant challenge with PINNs is that they can fail to model causality, resulting in catastrophic failure \cite{wang2022respecting,mojgani2022lagrangian}. PINNs seek a global solution and can tend to ignore the sequential nature of the progression of a system over time. Several strategies have been proposed to address this problem, including sequential sampling \cite{krishnapriyan2021characterizing,mattey2022novel}, sequential weighting \cite{wang2022respecting}, and reformulating PINNs on a Lagrangian frame of reference \cite{mojgani2022lagrangian}. Sequential strategies typically force the PINN to train in a sequential manner, but they do not enforce any form of continuity over time. The discrete PINN \cite{raissi2019physics} offers another possible solution to the causality problem as it solves the PDE sequentially. However, with changing external conditions (such as wind in wildfires), the neural network requires retraining at each time-step.
The Recurrent Neural Network (RNN) provides an appealing alternative to the discrete PINN where Runge-Kutta integration is encoded in an RNN cell \cite{nascimento2020tutorial,yucesan2020physics}. Retraining is not necessary; however, ground-truth data are required at each time-step for supervised training. Unfortunately, such ground-truth data are typically not available in a wildfire context. At the time of writing, applications of PINNs to the level-set equation are scarce in the literature. An example of implementing the level-set equation with a PINN is given in \cite{zubov2021neuralpde}, however the results and discussion relate to convergence of the PINN rather than the solution of the PDE. The PINN has also been applied to modelling wildfires with the level-set equation in the technical report \cite{bottero2021physics}. The approach implements the level-set equation and also includes other atmospheric modelling PDEs from the WRF-SFIRE simulator \cite{mandel2011coupled}. The focus is on the implementation in the Julia language rather than on the broader challenges associated with the application of a PINN to the level-set equation and wildfires. Furthermore, a key limitation of the implementation \cite{bottero2021physics} is that it is constrained to a static fuel map. This prevents the method from being applied in a dynamic setting. \section{Synthetic Dataset and Results} Particularly in reference to the first objective of this study, the PINN-e, the PINN-p, and the level-set method are compared on a synthetic dataset to demonstrate how the predictive (causal) likelihood $p(\mathcal{D}_p | \boldsymbol{\theta})$ can address critical causal failures in the PINN. In this dataset, the failure is caused by an extreme change in the wind direction. The aim is to show how the inclusion of the predictive likelihood in the PINN-p addresses this problem.
\subsection{Synthetic Dataset} A synthetic dataset is generated over a two-dimensional space $x\in[0,1]$, $y\in[0,1]$ and over the time period $t \in [0,1]$. The vector field $\c$ is varied in both of its components $\boldsymbol{s}$ and $\boldsymbol{w}$. To emulate an extreme change in wind direction, the wind vector initially points in the northerly direction for 10\% of the time, after which it changes to an easterly direction for the remaining time. That is, the wind vector at time $t$ is given by \begin{align} \boldsymbol{w}(t) = \begin{cases} [0.0 ~~~ 0.4]^\top & t \leq 0.1 \\ [0.4 ~~~ 0.0]^\top & t > 0.1 \end{cases}, ~~ \forall (x,y) \end{align} Furthermore, an obstruction is placed in the square region by enforcing the following values for the vector field $\c$: \begin{align} c(t,x,y) = \begin{cases} 0, & (0.6 < x < 0.8) ~ \land ~ (0.6 < y < 0.8) \\ c(t,x,y), & \text{otherwise} \end{cases} \end{align} This obstruction could represent a region that cannot be burned, such as a water body. \subsection{Methodology} A spatio-temporal grid is created over the range of the data such that the spatial step sizes are $\Delta x = \Delta y = 1/35$ and the temporal step size is $\Delta t = 1/48$. This spatio-temporal grid is used to sample collocation points for the PINNs and it is used as the level-set method grid. The initial fire perimeter is configured as a circle with a radius of $0.1$ using (\ref{eq:cone}) with $k = -0.1$. The hyper-parameters for the PINN-e and PINN-p are provided in the first row of Table \ref{table:pinn_config}. Other than $\sigma_p$, the hyper-parameters are identical between the models. These parameters were empirically selected using grid searches. The ADAM algorithm \cite{kingma2014adam} is used to optimise the negative log-likelihood of the models. \begin{table}[!t] \centering \caption{PINN and B-PINN Configurations.
$N_t$, $N_x$, and $N_y$ define the sample grid size.} \label{table:pinn_config} \begin{scriptsize} \begin{tabular}{ccccccccccc} \toprule Dataset & Layer dims. & $N_t$ & $N_x$ & $N_y$ & l.r. & Epochs & $\sigma_i^2$ & $\sigma_f^2$ & $\sigma_p^2$ & $\sigma_a^2$ \\ \midrule Synthetic & [6, 64, 64, 1] & 48 & 35 & 35 & $1e-3$ & 6 000 & $\frac{1}{2 \pi 1000}$ & $\frac{1}{2 \pi}$ & $\frac{1}{2 \pi 50}$ & N/A \\ Fire S03 & [6, 128, 128, 1] & 68 & 71 & 71 & $5e-4$ & 50 000 & $\frac{1}{2 \pi 1000}$ & $\frac{1}{2 \pi}$ & $\frac{1}{2 \pi 50}$ & $\frac{1}{2 \pi 1000}$ \\ Fire E06 & [6, 128, 128, 1] & 69 & 57 & 57 & $1e-4$ & 50 000 & $\frac{1}{2 \pi 1000}$ & $\frac{1}{2 \pi}$ & $\frac{1}{2 \pi 50}$ & $\frac{1}{2 \pi 1000}$ \\ \bottomrule \end{tabular} \end{scriptsize} \end{table} Three datasets are constructed: the initial condition data $\mathcal{D}_i$ at time $t=0$; the collocation points $\mathcal{D}_f$, which range over $t\in[0,1]$; and the predictive dataset $\mathcal{D}_p$, which is generated from the collocation points during training and inference. For all datasets, a set of samples is drawn over the spatio-temporal grid (at the relevant times) for the neural network inputs $x$, $y$, $t$, $s$, $w_x$, and $w_y$. The six inputs are concatenated and the set of samples over the spatio-temporal grid is collapsed into a single dimension to form one large batch. For example, given the grid size provided in Table \ref{table:pinn_config}, the dimensions of the neural network input for the collocation data are $[48 \cdot 35 \cdot 35, 6] = [58800, 6]$. We found that the neural network was slow to converge to a solution that accurately depicted the discontinuity at the point of the cone. This is especially problematic in accurately representing the zero-level set.
We thus increase the resolution of the initial condition samples in $\mathcal{D}_i$ within the region bounded by the zero-level set at time $t=0$ such that an extra $35\cdot35=1225$ samples originate from within this region. This was not performed on the collocation samples in $\mathcal{D}_f$ and $\mathcal{D}_p$. The results are evaluated using plots and the Jaccard index. The Jaccard index is an intersection-over-union measure between two predicted burned regions $A$ and $B$ given by \begin{align} \label{eq:jaccard} J(A,B) = \frac{|A \cap B|}{|A \cup B|} \end{align} The burned regions are defined as the regions where the level-set function is less than zero (below the zero-level set). \subsection{Results} The PINN-e results are shown in Figure \ref{fig:nopredcontours}. As the wind direction changes, the PINN-e loses track of the level-set function. This is especially clear in the level-set function surface plots in Figure \ref{fig:noPredSurfaces}, where the level-set function takes on a new form as the wind changes direction. The physics log-likelihood ($\log p(\mathcal{D}_f | \boldsymbol{\theta})$) of the PINN-e is plotted over time in Figure \ref{fig:windChangeLoss} and provides an indication of the extent to which the level-set equation has been solved. A log-likelihood of zero indicates an exact solution to the PDE. As the wind changes direction, the log-likelihood increases significantly after $t=0.1$. This is achieved by the optimiser simplifying the level-set function, minimising its gradients over time and space. The PINN thus takes advantage of the discontinuity in the wind direction to avoid maintaining continuity and causality of the level-set function over time. Although this is not the optimal solution in a physical sense, it is optimal with respect to the optimiser. The causality problem is a known issue with PINNs, as discussed in Section \ref{sec:causalLoss}.
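The Jaccard index in (\ref{eq:jaccard}) can be computed directly from two level-set grids by thresholding at zero; a minimal numpy sketch (the function naming is ours):

```python
# Jaccard (intersection-over-union) index between two burned regions, each
# defined as the cells where a level-set grid is negative.
import numpy as np

def jaccard_index(u_a, u_b):
    burned_a = u_a < 0.0
    burned_b = u_b < 0.0
    union = np.logical_or(burned_a, burned_b).sum()
    if union == 0:
        return 1.0  # both burned regions empty; define J = 1 by convention
    return np.logical_and(burned_a, burned_b).sum() / union
```

For example, if one grid marks two cells as burned and another marks one of those same cells, the intersection is one cell and the union is two, giving $J = 0.5$.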
\begin{figure}[!t] \centering \begin{subfigure}[t]{5.39in} \includegraphics[width=5.39in]{figures/noPredContours.pdf} \caption{PINN-e predictions for the first 6 time steps.} \label{fig:nopredcontours} \end{subfigure} % % \begin{subfigure}[t]{5.39in} \includegraphics[width=5.39in]{figures/noPredSurfaces.pdf} \caption{Surface plots of the PINN-e predictions for the first 6 time steps.} \label{fig:noPredSurfaces} \end{subfigure} \caption{PINN-e results with no predictive likelihood. The PINN-e takes advantage of the sudden change in wind direction at time step 5 to simplify the level-set function into a form that is more conducive to a PDE solution. This change, however, is not physically warranted and is not causal.} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=3.5in]{figures/windChangeLoss.pdf} \caption{Physics log-likelihood over time. The red marker indicates the time that the wind changes direction. At this point the PINN conveniently changes the form of the level-set function, but in a way that is not physically warranted. See Figure \ref{fig:noPredSurfaces}.} \label{fig:windChangeLoss} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=5.39in]{figures/syntheticResults.pdf} \caption{Synthetic dataset results. The red plots are the PINN results and the green plots are the Level-Set Method results. Note how the PINN maintains the circular shape of the zero-level set as it passes over the obstruction.} \label{fig:syntheticResults} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3.5in]{figures/jaccardSynthetic.pdf} \caption{Jaccard index between the PINN and level-set method results over time for the synthetic dataset.} \label{fig:jaccardSynthetic} \end{figure} The results for the PINN-p and the Level-Set Method are illustrated in Figure \ref{fig:syntheticResults}. With the predictive likelihood, the PINN-p is able to adapt to the changes in the wind and maintains the form of the level-set function over time.
By relating outputs of the neural network across time using the predictive likelihood, the PINN-p is forced to maintain some form of continuity over time and causality remains intact. The Jaccard index between the PINN-p and level-set method is plotted over time in Figure \ref{fig:jaccardSynthetic}. As the wind changes direction, the Jaccard index drops to $76\%$, but remains above $80\%$ otherwise. This indicates a high level of similarity between the PINN-p and level-set method results. Additional surface plots of the results are presented in Appendix \ref{sec:appendix}. A key difference between the PINN-p and the level-set method results is in how they propagate the level-set function around the obstruction. In the level-set method, as the fire propagates around the obstruction, it is delayed. The result is that the level-set function is sequentially altered by the obstruction as it passes around it. The PINN-p ignores this effect and appears to pass the level-set function through the obstruction, maintaining its circular nature. This is because the PINN-p is optimised to seek a global solution rather than focusing on the sequential nature of the problem. The predictive likelihood is unfortunately not able to correct this problem. However, the Jaccard index still remains well above $80\%$ as the level-set function passes the obstruction. \section{Application of PINNs to Australian Fires} \label{sec:FireDatasetAndResults} The PINN-p, the PINN-a, the B-PINN, and the level-set method are compared and contrasted on a dataset comprising recorded grassland fires. The aims are to (1) compare the three approaches, (2) demonstrate data assimilation in the PINN, and (3) demonstrate uncertainty quantification with the PINN. The motivating problem consists of two recorded grassland fires.
\subsection{Fire Dataset} \begin{figure}[!t] \centering \begin{subfigure}[t]{4.5in} \centering \includegraphics[width=4.5in]{figures/braidwoodDataset.pdf} \caption{Fire S03.} \label{fig:braidwoodDataset_S03} \end{subfigure} % \begin{subfigure}[t]{3.6in} \centering \includegraphics[width=3.6in]{figures/braidwoodDataset_E06_Braidwood.pdf} \caption{Fire E06.} \label{fig:braidwoodDataset_E06} \end{subfigure} % \caption{Depiction of two fires (S03 and E06) from the Braidwood dataset. Each image is the final fire state and is overlaid with fire-front isochrones showing fire progression at 10 second intervals. In fire~S03 the wind speed and direction vary significantly over time. In fire~E06 there is a strong dominant wind blowing in the north-west direction. Axes are provided in meters.} \label{fig:braidwoodDataset} \end{figure} The Braidwood fire dataset \cite{sullivan2018study} was created from data collected from grassland fires that were filmed from overhead near Braidwood, New South Wales, Australia. The two example fires, labelled S03 and E06, are illustrated in Figure \ref{fig:braidwoodDataset} and represent two types of fire regimes. Fire S03 exhibits more complicated fire dynamics induced by variable wind speed and direction throughout the course of the fire. In contrast, fire E06 is more predictable due to a strong dominant wind blowing in the north-west direction. The fire-fronts were manually mapped into isochrones at 10 second intervals based on the video frames from the overhead footage. Each fire-front isochrone comprises a set of samples of coordinates along the fire-front. Additionally, wind speed and direction were measured with an anemometer located approximately 35 meters upwind from the ignition point and at a height of 2 meters. \subsection{Methodology} The ignition points of the fires are located at the centre of the predefined areas for each fire. For fire S03, the area is $71m \times 71m$ and for fire E06, the area is $57m \times 57m$.
A spatio-temporal grid is overlaid on the data such that the spatial and temporal step sizes are $\Delta x = \Delta y \approx 1m$ and $\Delta t = 1s$ respectively. The spatio-temporal grid is scaled to a range of $[0,1]$ for all dimensions. The speed $\boldsymbol{s}$ and wind $\boldsymbol{w}$ components are scaled according to the scaling of the spatio-temporal grid, except when applied as inputs to the neural network, where they are scaled to a range of $[0,1]$. The initial fire perimeter is set as an ellipse using the elliptical cone signed distance function in (\ref{eq:ellipticalCone}) with $a^2=5$, $b^2=1$, $k=-0.02$, and $\alpha=30^\circ$ for fire S03 and $a^2=0.5$, $b^2=7$, $k=-4.5$, and $\alpha=222^\circ$ for fire E06. The ellipse of E06 overestimates the initial fire size to account for the initial acceleration of the fire before it reaches a steady state. The initial value dataset $\mathcal{D}_i$ is constructed according to the elliptical cones and the collocation dataset $\mathcal{D}_f$ is constructed on the spatio-temporal grid. The predictive dataset $\mathcal{D}_p$ is constructed from the collocation points during training and inference. The assimilation dataset $\mathcal{D}_a$ is constructed based on the fire isochrones shown in Figure \ref{fig:braidwoodDataset}. The region bounded by the zero-level-set is oversampled for the initial value dataset, with an additional $5041$ samples ($71 \times 71$). The six inputs $(\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{t}, \boldsymbol{s}, \boldsymbol{w}_x, \boldsymbol{w}_y)$ are concatenated and the set of samples over the spatio-temporal grid is collapsed into a single dimension. For example, given the grid size provided in Table \ref{table:pinn_config}, the dimensions of the neural network input for the collocation data for fire S03 are $[68 \cdot 71 \cdot 71, 6] = [342788, 6]$.
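The batch construction described above can be sketched as follows. The function name is ours, and the homogeneous $s$, $w_x$, $w_y$ values are a simplification for illustration; in the experiments the wind components vary over time.

```python
# Collapse an (N_t, N_x, N_y) spatio-temporal grid into one large
# [N_t * N_x * N_y, 6] batch of neural network inputs (t, x, y, s, wx, wy).
import numpy as np

def build_collocation_batch(n_t, n_x, n_y, s, wx, wy):
    t, x, y = np.meshgrid(
        np.linspace(0.0, 1.0, n_t),
        np.linspace(0.0, 1.0, n_x),
        np.linspace(0.0, 1.0, n_y),
        indexing="ij",
    )
    cols = [t.ravel(), x.ravel(), y.ravel()]
    cols += [np.full(t.size, v) for v in (s, wx, wy)]
    return np.stack(cols, axis=1)

# Fire S03 grid: 68 * 71 * 71 = 342788 collocation points, 6 inputs each.
batch = build_collocation_batch(68, 71, 71, 0.1, 0.2, 0.3)
```

The same routine reproduces the synthetic-dataset batch size when called with the $48 \times 35 \times 35$ grid.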
The hyper-parameters for the PINN-p, the PINN-a and the B-PINN (which are identical to those of the PINN-a) are provided in the second row of Table \ref{table:pinn_config}. These hyper-parameters were empirically selected using grid searches. To provide some form of cross validation, we use the same likelihood variance parameters for all datasets. It is, however, possible to learn these parameters with multi-objective optimisation \cite{rohrhofer2021pareto}. The Adam algorithm \cite{kingma2014adam} is used to optimise the negative log-likelihood of the model. The approaches are evaluated using plots and the Jaccard index given by (\ref{eq:jaccard}). The B-PINN uncertainty quantification is evaluated against the ground-truth fire isochrones. At each 10 second interval, the 95\% confidence interval is calculated from a set of 100 MC simulations drawn from the B-PINN. The observed coverage is computed as the proportion of ground-truth samples that lie within the 95\% confidence interval. The closer the observed coverage is to the 95\% target, the higher the quality of the uncertainty quantification. \subsection{Fire S03: Complicated fire dynamics} Qualitative results for the PINN-p (without data assimilation) and the level-set method are presented in Figure \ref{fig:fireResults_noAssimilation}. While both approaches produce similar results, neither is able to exactly reproduce the evolution of the actual fire-front. This is expected as the methods assume a homogeneous fuel type, fuel load, wind speed, and wind direction, which vary in reality. Furthermore, the fire is in its early stages of burning and may not have reached a steady state. The Jaccard index results for the PINN-p and level-set method are plotted in Figure \ref{fig:jaccardFire_noAssimilation}. The PINN-p and level-set method predictions are compared with each other, as well as with ground-truth fire data.
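The Jaccard index of (\ref{eq:jaccard}) compares the burned regions of two simulations as $|A\cap B|/|A\cup B|$. A minimal sketch on boolean burned-area masks follows; the mask representation and the circular test fires are assumptions for illustration only:

```python
import numpy as np

def jaccard_index(burned_a: np.ndarray, burned_b: np.ndarray) -> float:
    """Jaccard index |A and B| / |A or B| of two boolean burned-area masks."""
    intersection = np.logical_and(burned_a, burned_b).sum()
    union = np.logical_or(burned_a, burned_b).sum()
    return intersection / union if union > 0 else 1.0

# Two overlapping circular fires on a 71 x 71 grid (illustrative only).
yy, xx = np.mgrid[0:71, 0:71]
fire_a = (xx - 30) ** 2 + (yy - 35) ** 2 < 15 ** 2
fire_b = (xx - 40) ** 2 + (yy - 35) ** 2 < 15 ** 2
print(jaccard_index(fire_a, fire_a))  # 1.0 for identical fires
print(jaccard_index(fire_a, fire_b))  # strictly between 0 and 1
```

A predicted front that matches the ground truth exactly gives an index of $1$, while disjoint burned areas give $0$.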
The results indicate that both the PINN-p and level-set methods are initially inaccurate, but improve over time. The initial lack of accuracy is due to an over-estimation of the initial fire-front ellipse angle. The Jaccard index between the PINN-p and level-set method remains above $70\%$, indicating that both methods produce similar predictions of the fire-front. However, the level-set method generally has higher Jaccard indexes than the PINN-p, suggesting that it performs slightly better. \begin{figure}[!p] \centering \begin{subfigure}[t]{5.39in} \includegraphics[width=5.39in]{figures/fireResults_noAssimilation_S03_Braidwood.pdf} \caption{PINN-p versus the level-set method.} \label{fig:fireResults_noAssimilation} \end{subfigure} % \begin{subfigure}[t]{5.39in} \includegraphics[width=5.39in]{figures/fireResults_assimilation_S03_Braidwood.pdf} \caption{PINN-a versus the level-set method.} \label{fig:fireResults_assimilation} \end{subfigure} % \begin{subfigure}[t]{5.39in} \includegraphics[width=5.39in]{figures/fireResults_assimilation_bpinn_S03_Braidwood.pdf} \caption{B-PINN versus the level-set method.} \label{fig:fireResults_assimilation_bpinn} \end{subfigure} \caption{PINN-p, PINN-a, B-PINN, and level-set method results on the S03 fire dataset.
LSM denotes the level-set method.} \label{fig:fireResults} \end{figure} \begin{figure}[!t] \centering \begin{subfigure}[t]{4.5in} \includegraphics[width=4.5in]{figures/jaccardFire_noAssimilation_S03_Braidwood.pdf} \caption{PINN-p versus the level-set method.} \label{fig:jaccardFire_noAssimilation} \end{subfigure} % \begin{subfigure}[t]{4.5in} \includegraphics[width=4.5in]{figures/jaccardFire_assimilation_S03_Braidwood.pdf} \caption{PINN-a versus the level-set method.} \label{fig:jaccardFire_assimilation} \end{subfigure} % \begin{subfigure}[t]{4.5in} \includegraphics[width=4.5in]{figures/jaccardFire_assimilation_bpinn_S03_Braidwood.pdf} \caption{B-PINN versus the level-set method.} \label{fig:jaccardFire_assimilation_bpinn} \end{subfigure} \caption{Jaccard index results for fire S03. LSM denotes the level-set method. Mean values across time are provided in brackets in the legend.} \label{fig:jaccardFire} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=4.5in]{figures/coverage_S03_Braidwood.pdf} \caption{Uncertainty coverage for the S03 fire. The mean value is computed over time.} \label{fig:coverage_S03_Braidwood} \end{figure} Qualitative results for the PINN-a (with data assimilation) and the level-set method are shown in Figure \ref{fig:fireResults_assimilation}. The PINN-a results follow the actual fire more closely due to the assimilation of the fire-front isochrones. The PINN-a initially fits to the elliptical cone of $\mathcal{D}_i$ and transitions closer to the fire-front isochrones over time. It does not over-fit to these isochrones as it is regularised by the limited size of the neural network and by early stopping. The Jaccard index results for the PINN-a and level-set method are plotted in Figure \ref{fig:jaccardFire_assimilation}. Compared with the PINN-p, the Jaccard indexes for the PINN-a have increased significantly. Furthermore, with higher Jaccard indexes than the level-set method, the PINN-a provides a better simulation of the fire.
The average value of the Jaccard index between the PINN-a and the level-set method over time is $82\%$, indicating that the PINN-a still maintains the physics defined by the level-set equation. Qualitative results for the B-PINN and the level-set method are shown in Figure \ref{fig:fireResults_assimilation_bpinn}. To quantify uncertainty, a set of 100 MC samples of neural network parameters is drawn from the posterior. These are used to produce 100 MC simulations of the fire. The 95\% confidence interval is computed from these 100 MC simulations and is shown as the grey region. The mean of the MC simulations is illustrated by the red isochrone. Note that the MC simulations demonstrate variation in both the size and shape of the isochrones. This is indicated by the variation in the shape of the mean and 95\% confidence interval plots. Compared to the PINN-p, the data assimilation in the B-PINN allows it to produce results that are closer to the data. The B-PINN however does not produce results that are as close to the data as the PINN-a. This is owing to the natural regularisation introduced by the prior distribution and the Bayesian approach. The coverage results for the B-PINN are illustrated in Figure \ref{fig:coverage_S03_Braidwood}. Owing to the over-estimation of the fire-front at $t=30$, the coverage for this first result is excluded. The average coverage over time is 81.9\%, which is a 13.1\% error from the 95\% target, indicating an uncertainty quantification of reasonable quality. The Jaccard index results for the B-PINN are shown in Figure \ref{fig:jaccardFire_assimilation_bpinn}. We find that, as for the PINN-a, the data assimilation draws the predictions closer to the data to produce a more accurate representation of the fire. However, the average Jaccard index for the B-PINN is slightly lower than that of the PINN-a due to the regularisation introduced by the prior distribution and the Bayesian approach.
This is preferable, as the fire isochrones were manually created from the overhead video footage. As illustrated in Figure \ref{fig:braidwoodDataset}, the smoke and flames can obscure the fire-front, which introduces uncertainty into the assimilated fire isochrones. \subsection{Fire E06: Predictable Fire Dynamics} The E06 fire simulations for the PINN-p, PINN-a, B-PINN, and level-set method are provided in Figure \ref{fig:fireResults_E06_Braidwood} and the Jaccard index plots are provided in Figure \ref{fig:jaccardFire_E06_Braidwood}. The PINN-p propagates the fire-front slightly faster than the level-set method. Data assimilation in the PINN-a constrains the fire closer to the data, increasing the average Jaccard index from 0.72 to 0.84. The B-PINN produces a distribution of MC simulations of the fire, thereby providing uncertainty quantification. The variance of the distribution tends to be higher in the directions where the fire moves at higher velocities, namely where the curvature of the zero level-set is high and in the direction of the wind. According to the Jaccard indexes, the B-PINN and the PINN-a provide similar accuracy. The B-PINN, however, additionally offers uncertainty quantification. The coverage results for the B-PINN are illustrated in Figure \ref{fig:coverage_E06_Braidwood}. The average coverage over time is 93.6\%, which is a 1.4\% error from the 95\% target, indicating an uncertainty quantification that is high in quality.
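The coverage computation used for both fires can be sketched as follows. The example uses a simplified one-dimensional stand-in (front radius as a function of bearing) rather than the full two-dimensional isochrones, and all numerical values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 MC simulations of the predicted fire-front radius at 36 bearings
# (a 1-D stand-in for the 2-D isochrone test; values are illustrative).
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
mean_radius = 20 + 5 * np.cos(angles)  # wind-elongated front shape
mc_radii = mean_radius + rng.normal(0, 1.0, size=(100, 36))

# Pointwise 95% confidence band from the MC simulations.
lo, hi = np.percentile(mc_radii, [2.5, 97.5], axis=0)

# Ground-truth front samples; coverage = fraction lying inside the band.
truth = mean_radius + rng.normal(0, 1.0, size=36)
coverage = np.mean((truth >= lo) & (truth <= hi))
print(f"observed coverage: {coverage:.1%}")
```

An observed coverage close to the nominal 95\% indicates well-calibrated uncertainty estimates.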
\begin{figure}[!p] \centering \begin{subfigure}[t]{5.39in} \includegraphics[width=5.39in]{figures/fireResults_noAssimilation_E06_Braidwood.pdf} \caption{PINN-p versus the level-set method.} \label{fig:fireResults_noAssimilation_E06_Braidwood} \end{subfigure} % \begin{subfigure}[t]{5.39in} \includegraphics[width=5.39in]{figures/fireResults_assimilation_E06_Braidwood.pdf} \caption{PINN-a versus the level-set method.} \label{fig:fireResults_assimilation_E06_Braidwood} \end{subfigure} % \begin{subfigure}[t]{5.39in} \includegraphics[width=5.39in]{figures/fireResults_assimilation_bpinn_E06_Braidwood.pdf} \caption{B-PINN versus the level-set method.} \label{fig:fireResults_assimilation_bpinn_E06_Braidwood} \end{subfigure} \caption{PINN and Level-Set Method results on the E06 fire dataset. LSM denotes the level-set method.} \label{fig:fireResults_E06_Braidwood} \end{figure} \begin{figure}[!t] \centering \begin{subfigure}[t]{4.5in} \includegraphics[width=4.5in]{figures/jaccardFire_noAssimilation_E06_Braidwood.pdf} \caption{PINN-p versus the level-set method.} \label{fig:jaccardFire_noAssimilation_E06_Braidwood} \end{subfigure} % \begin{subfigure}[t]{4.5in} \includegraphics[width=4.5in]{figures/jaccardFire_assimilation_E06_Braidwood.pdf} \caption{PINN-a versus the level-set method.} \label{fig:jaccardFire_assimilation_E06_Braidwood} \end{subfigure} % \begin{subfigure}[t]{4.5in} \includegraphics[width=4.5in]{figures/jaccardFire_assimilation_bpinn_E06_Braidwood.pdf} \caption{B-PINN versus the level-set method.} \label{fig:jaccardFire_assimilation_bpinn_E06_Braidwood} \end{subfigure} \caption{Jaccard index results for the E06 fire. LSM denotes the level-set method. Mean values across time are provided in brackets in the legend.} \label{fig:jaccardFire_E06_Braidwood} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=4.5in]{figures/coverage_E06_Braidwood.pdf} \caption{Uncertainty coverage for the E06 fire. 
The mean value is computed over time.} \label{fig:coverage_E06_Braidwood} \end{figure}
\section{Introduction} This year's lattice conference attracted more than 300 physicists from all over the world. Lattice QCD remains the dominant subject at these conferences, but other topics are also being addressed, such as the electroweak phase transition, quantum gravity and supersymmetric Yang-Mills theories. Much of the work done in QCD is devoted to improving the computations of hadron masses, decay constants and weak transition matrix elements. Calculations of moments of structure functions, the running coupling and quark masses and many other physical quantities have also been reported. A good place to look for specific results is the proceedings of the 1996 lattice conference~\cite{LatProceedings}, and more recent contributions can be found in the hep-lat section of the Los Alamos preprint server. \subsection{Numerical simulations} Quantitative results in lattice QCD are almost exclusively obtained using numerical simulations. Such calculations proceed by choosing a finite lattice, with spacing $a$ and linear extent $L$, which is sufficiently small that the quark and gluon fields can be stored in the memory of a computer. Through a Monte Carlo algorithm one then generates a representative ensemble of fields for the Feynman path integral and extracts the physical quantities from ensemble averages. Apart from statistical errors this method yields exact results for the chosen lattice parameters and is hence suitable for non-perturbative studies of QCD. Numerical simulations require powerful computers and a continuous effort for algorithm and software development. Technical expertise is also needed to be able to cope with the systematic errors incurred by the finiteness of the lattice and by the data analysis. In practice such calculations are being performed by collaborations of (say) 5--15 physicists. For these groups to remain competitive it is vital that they have access to dedicated computer systems.
An adequate amount of computing power would otherwise be difficult to obtain over a longer period of time. \subsection{Computers} Until recently, leading-edge numerical simulations of lattice QCD have been performed on computers with sustained computational speeds on the order of 10 Gflops. The community is now moving to the next generation of computers, which deliver several 100 Gflops for QCD programs. One of these machines, the CP-PACS computer~\cite{CPPACS}, was installed last year at the Center for Computational Physics in Tsukuba. With its 2048 pro\-ces\-sing nodes, a total memory of 128 GB and a theoretical peak speed of 614 Gflops, this computer is a unique research facility for lattice QCD. Other machines that will be available for dedicated use by the lattice theorists include the QCDSP and the APEmille computers~\cite{QCDSP,APEmille}. The first of these was designed by a consortium of physicists in the US. Machines of various sizes are being assembled and will be installed in the course of this year at different places, totalling more than 1000 Gflops of peak computational power. The APEmille grew out of a long-term effort of INFN, now also supported by DESY-Zeuthen, to construct affordable computers optimised for lattice gauge theory applications. It has a scalable massively parallel architecture, with the largest system delivering more than 1000 Gflops. A fully operational, medium-size machine is expected to be available next summer. It should be said at this point that the theoretical peak speed of a computer is a relatively crude measure of its performance. The sustained computational speed that can be attained also depends on many other parameters, such as the bandwidths for memory-to-processor and node-to-node communications. All computers mentioned here are well balanced in this respect and achieve a high efficiency for QCD programs.
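The simulation strategy outlined in the introduction --- generate a representative ensemble with a Monte Carlo algorithm and extract physical quantities as ensemble averages --- can be illustrated on a toy one-variable ``path integral'' with a Gaussian action. This is a pedagogical sketch in Python, not a QCD code:

```python
import math
import random

random.seed(1)

# Toy model with one degree of freedom and action S(x) = x^2 / 2,
# for which the exact ensemble average is <x^2> = 1.
x, step, samples = 0.0, 1.5, []
for sweep in range(200_000):
    x_new = x + random.uniform(-step, step)
    d_action = 0.5 * (x_new ** 2 - x ** 2)
    # Metropolis accept/reject with probability min(1, exp(-dS)).
    if d_action < 0.0 or random.random() < math.exp(-d_action):
        x = x_new
    if sweep >= 10_000:  # discard thermalisation sweeps
        samples.append(x * x)

# Physical quantities are extracted as ensemble averages.
mean_x2 = sum(samples) / len(samples)
print(mean_x2)  # statistically consistent with the exact value 1
```

In a real lattice QCD simulation the single variable is replaced by gauge and quark fields on every lattice site, which is what drives the memory and Gflops requirements discussed above.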
\subsection{Lattice QCD at 100 Gflops} At present most calculations in lattice QCD neglect sea quark effects, because the known simulation algorithms slow down dramatically when they are included. If one is willing to make this approximation (which is called ``quenched QCD''), the new computers are good for lattice sizes up to about $128\times64^3$. Such a lattice may be arranged to have a spacing $a=0.05$ fm, for example, in which case its spatial extent will be $3.2$ fm. This is a very comfortable situation for calculations of the light hadron masses and many other quantities of interest. In general the increased computer power allows one to explore a greater range of lattices with higher statistics and thus to achieve better control over the systematic errors. When a doublet of sea quarks is included in the simulations, lattice sizes up to $64\times32^3$ are expected to be within reach. This will be quite exciting, because studies of full QCD on large lattices have been rare so far, leaving many basic questions unanswered. The physics programme is essentially the same as in quenched QCD, but since one cannot afford to perform simulations at very many different values of the parameters, and since the generated ensembles of field configurations tend to be smaller, the results will be generally less precise. \subsection{Topics covered in this talk} While the progress in computer technology is impressive, one cannot ignore the fact that the accessible lattices are too small to accommodate very large scale differences. This talk addresses two important issues which arise from this limitation and which must be resolved if one is interested in results with reliable error bounds. One of the problems is that the lattice spacing cannot be made arbitrarily small compared to the relevant physical scales (the confinement radius for example).
Taking the continuum limit thus is a non-trivial task and a lot of work has recently been devoted to the question of how precisely the limit is approached and whether the lattice effects are negligible at current values of the lattice spacing. The other topic that will be discussed goes under the heading of non-perturbative renormalization. In physical terms the problem is to establish the relation between the low-energy properties of QCD and the perturbative regime. Hadronic matrix elements of operators, whose normalization is specified at high energies through the ${\rm \overline{MS\kern-0.05em}\kern0.05em}$ scheme of dimensional regularization, are an obvious case where this is required. Again a large scale difference is involved which makes a direct approach difficult, but promising ways to solve the problem have now been found. \section{Lattice effects and the approach to the continuum limit} \subsection{Perturbation theory} In lattice QCD one is primarily interested in the non-perturbative aspects of the theory. Perturbation theory can, however, give important structural insights and it has proved useful to study the nature of the continuum limit in this framework. A remarkable result in this connection is that the existence of the limit has been rigorously established to all orders of the expansion~\cite{Reisz}. The Feynman rules on the lattice are derived straightforwardly from the chosen lattice action. Compared to the usual rules, an important difference is that the propagators and vertices are relatively complicated functions of the momenta and of the lattice spacing $a$. In particular, at tree-level all the lattice spacing dependence arises in this way. A simple example illustrating this is the quark-gluon vertex.
Using the standard formu\-lation of lattice QCD (which goes back to Wilson's famous paper of 1974~\cite{Wilson}), one finds \vspace{-0.3cm plus 0.05cm} \rightline{ \begin{minipage}{6cm} \epsfxsize=1.5cm\vspace{0.4cm}\hspace{4.5cm}\epsfbox{vertex.eps} \end{minipage} \begin{minipage}{10cm} \begin{equation} =\;g_0\lambda^a\left\{ \dirac{\mu}+\frac{i}{2}a(p+p')_{\mu}+{\rm O}(a^2)\right\}, \hspace{2.5cm} \end{equation} \end{minipage}} \vspace{0.2cm plus 0.05cm} \noindent where $g_0$ denotes the bare gauge coupling and $\lambda^a$ a colour matrix. It is immediately clear from this expression that the leading lattice corrections to the continuum term can be quite large even if the quark momenta $p$ and $p'$ are well below the momentum cutoff $\pi/a$. Moreover the corrections violate chiral symmetry, a fact that has long been a source of concern since many properties of low-energy QCD depend on this symmetry. Lattice Feynman diagrams with $l$ loops and engineering dimension $\omega$ can be expanded in an asymptotic series of the form~\cite{SymanzikI,SymanzikII} \begin{equation} a^{-\omega}\sum_{k=0}^{\infty}\sum_{j=0}^l c_{kj}a^k\left[\ln(a)\right]^j. \end{equation} After renormalization the negative powers in the lattice spacing and the logarithmically divergent terms cancel in the sum of all diagrams. The leading lattice corrections thus vanish proportionally to $a$ (up to logarithms) at any order of perturbation theory. \begin{figure}[t] \vspace{-2.3cm} \hbox{\epsfxsize=11.0cm\hspace{2.0cm}\epsfbox{scale_plot1.eps}} \vspace{-1.2cm} \begin{center} \footnotesize Figure~1: Calculated values of the vector meson mass (full circles) and linear extrapolation to the continuum limit (cross). Simulation data from Butler et al.~(GF11 collab.)~\cite{WeingartenI}. \end{center} \vspace{0.0cm plus 0.3cm} \end{figure} \subsection{Cutoff dependence of hadron masses} Sizeable lattice effects are also observed at the non-perturbative level when calculating hadron masses, for example. 
An impressive demonstration of this is obtained as follows. Let us consider QCD with a doublet of quarks of equal mass, adjusted so that the mass of the pseudo-scalar mesons coincides with the physical kaon mass. This sets the quark mass to about half the strange quark mass and one thus expects that the mass of the lightest vector meson is given by \begin{equation} m_{\hbox{\sixrm V}}\simeq m_{\hbox{\sixrm K}^{\ast}}=892\,{\rm MeV}. \end{equation} Computations of the meson masses using numerical simulations however show that this is not the case at the accessible lattice spacings (see Figure~1). Instead one observes a strong dependence on the lattice spacing and it is only after extrapolating the data to $a=0$ that one ends up with a value close to expectations. \subsection{Effective continuum theory} In phenomenology it is well known that the effects of as yet undetected substructures or heavy particles may be described by adding higher-dimensional interaction terms to the Stan\-dard Model lagrangian. From the point of view of an underlying more complete theory, the Standard Model together with the added terms then is a low-energy effective theory. A similar situation occurs in lattice QCD, where the momentum cutoff may be regarded (in a purely mathematical sense) as a scale of new physics. The associated low-energy effective theory is a continuum theory with action~\cite{SymanzikI,SymanzikII} \begin{equation} S_{\rm eff}=\int{\rm d}^4x \left\{ {\cal L}_0(x)+a{\cal L}_1(x)+a^2{\cal L}_2(x)+\ldots\right\}, \end{equation} where ${\cal L}_0$ denotes the continuum QCD lagrangian and the ${\cal L}_k$'s, $k\geq1$, are linear combinations of local operators of dimension $4+k$ with coefficients that are slowly varying functions of $a$ (powers of logarithms in perturbation theory). Through the effective continuum theory, the lattice spacing dependence is made explicit and a better understanding of the approach to the continuum limit is achieved.
In particular, neglecting terms that do not contribute to on-shell quantities, or which amount to renormalizations of the coupling and the quark masses, the general expression for the leading lattice correction is \begin{equation} {\cal L}_1=c_1\,\overline{\psi}\,\sigma_{\mu\nu}F_{\mu\nu}\psi, \end{equation} with $F_{\mu\nu}$ being the gluon field strength and $\psi$ the quark field. The lattice thus assigns an anomalous colour-magnetic moment of order $a$ to the quarks. Very many more terms contribute to ${\cal L}_2$ and a simple physical interpretation is not easily given. The pattern of the lattice effects of order $a^2$ should hence be expected to be rather complicated. \subsection{O($a$) improvement} The effective action, Eq.~(4), depends on the physics at the scale of the cutoff, i.e.~on how precisely the lattice theory is set up. By choosing an improved lattice action one may hence be able to reduce the size of the correction terms and thus to accelerate the convergence to the continuum limit~\cite{SymanzikIII}. Different ways to implement this idea are being explored and there is currently no single preferred way to proceed. At last year's lattice conference the subject has been reviewed by Niedermayer~\cite{ImpReview} and many interesting contributions have been made since then. O($a$) improvement is a relatively modest approach, where the leading correction ${\cal L}_1$ is cancelled by replacing the Wilson action through~\cite{SW} \begin{equation} S_{\rm Wilson}+a^5\sum_{x}c_{\rm sw}\, \overline{\psi}(x)\frac{i}{4}\sigma_{\mu\nu}F_{\mu\nu}(x)\psi(x) \end{equation} and tuning the coefficient $c_{\rm sw}$. At the time when Sheikholeslami and Wohlert published their paper~\cite{SW}, the proposition did not receive too much attention, because systematic studies of lattice effects were not feasible with the available computers. 
The situation has now changed and there is general agreement that improvement is useful or even necessary, particularly in full QCD where simulations are exceedingly expensive in terms of computer time. An obvious technical difficulty is that $c_{\rm sw}$ (which is a function of the bare gauge coupling) needs to be determined accurately. The problem has only recently been solved by studying the axial current conservation on the lattice~\cite{AlphaI}. Chiral symmetry is not preserved by the lattice regularization and the PCAC relation satisfied by the $\bar{u}d$ component of the axial current, \begin{equation} \partial_{\mu}(\bar{u}\dirac{\mu}\dirac{5}d)= (\overline{m\kern-1pt}\kern1pt_{\rm u}+\overline{m\kern-1pt}\kern1pt_{\rm d})\bar{u}\dirac{5}d+\epsilon(a), \end{equation} thus includes a non-zero error term. In general $\epsilon(a)$ vanishes proportionally to $a$, but after improvement the error is reduced to order $a^2$ if $c_{\rm sw}$ has the proper value. Conversely this may be taken as a condition fixing $c_{\rm sw}$, i.e.~the coefficient can be computed by minimizing the error term in various matrix elements of Eq.~(7). Proceeding along these lines one has been able to calculate $c_{\rm sw}$ non-perturbatively in quenched QCD~\cite{AlphaI,SCRIa} and now also in full QCD with a doublet of massless sea quarks~\cite{AlphaII}. \subsection{Impact of O($a$) improvement on physical quantities} Once the improvement programme has been properly implemented, the question arises whether the lattice effects on the quantities of physical interest are significantly reduced at the accessible lattice spacings. Several collaborations have set out to check this~\cite{QCDSFa,UKQCDa,APETOVa}, but it is too early to draw definite conclusions. The status of these studies has recently been summarized by Wittig~\cite{WittigI}. For illustration let us again consider the calculation of the vector meson mass $m_{\hbox{\sixrm V}}$ discussed in Subsection~2.2.
A preliminary analysis of simulation results from the UKQCD collaboration gives, for the O($a$) improved theory, $m_{\hbox{\sixrm V}}=924(17)$ MeV at $a=0.098$ fm and $m_{\hbox{\sixrm V}}=932(26)$ MeV at $a=0.072$ fm. These numbers do not show any significant dependence on the lattice spacing and they are also compatible with the value $m_{\hbox{\sixrm V}}=899(13)$ MeV that one obtains through extrapolation to $a=0$ of the results from the unimproved theory (left-most point in Figure~1). Other quantities that are being studied include the pseudo-scalar and vector meson decay constants and the renormalized quark masses. The experience accumulated so far suggests that the residual lattice effects are indeed small if $a\leq0.1$ fm. Most experts would however agree that further confirmation is still needed. \subsection{Synthesis} At sufficiently small lattice spacings the effective continuum theory provides an elegant description of the approach to the continuum limit. Whether the currently accessible lattice spacings are in the range where the effective theory applies is not immediately clear, but the observed pattern of the lattice spacing dependence in the unimproved theory and the fact that O($a$) improvement appears to work out strongly indicate this to be so. Very much smaller lattice spacings are then not required to reliably reach the continuum limit. It is evidently of great importance to put this conclusion on firmer grounds by continuing and extending the ongoing studies of O($a$) improvement and other forms of improvement. \vfill\eject \section{Non-perturbative renormalization} \subsection{Example} We now turn to the second subject covered in this talk and begin by describing one of the standard ways to compute the running quark masses in lattice QCD. The need for non-perturbative renormalization will then become clear. Any details not connected with this particular aspect of the calculation are omitted.
A possible starting point to obtain the sum $\overline{m\kern-1pt}\kern1pt_{\rm u}+\overline{m\kern-1pt}\kern1pt_{\rm s}$ of the up and the strange quark masses is the PCAC relation \begin{equation} m_{\hbox{\sixrm K}}^2f_{\lower1.0pt\hbox{\sixrm K}}=(\overline{m\kern-1pt}\kern1pt_{\rm u}+\overline{m\kern-1pt}\kern1pt_{\rm s}) \langle0|\,\bar{u}\dirac{5}s\,|K^{+}\rangle. \end{equation} Since the kaon mass $m_{\hbox{\sixrm K}}$ and the decay constant $f_{\hbox{\sixrm K}}$ are known from experiment, it suffices to evaluate the matrix element on the right-hand side of this equation. On the lattice one first computes the matrix element of the bare operator $(\bar{u}\dirac{5}s)_{{\rm lat}}$ and then multiplies the result with the renormalization factor $Z_{\rm P}$ relating $(\bar{u}\dirac{5}s)_{{\rm lat}}$ to the renormalized density $\bar{u}\dirac{5}s$. \renewcommand{\arraystretch}{1.3} \begin{table}[b] \caption{Recent results for $\overline{m\kern-1pt}\kern1pt_{\rm s}$ (quenched QCD, ${\rm \overline{MS\kern-0.05em}\kern0.05em}$ scheme at $\mu=2$ GeV)} \vspace{0.2cm} \begin{center} \footnotesize \begin{tabular}{|l|l|} \hline \multicolumn{1}{|c|}{$\overline{m\kern-1pt}\kern1pt_{\rm s}$ [MeV]} & \multicolumn{1}{c|}{reference} \\ \hline \hspace{0.5cm}$122(20)$\hspace{0.5cm} & Allton et al.~(APE collab.)~\cite{QM_AlltonEtAl} \\ \hspace{0.5cm}$112(5)$ & G\"ockeler et al.~(QCDSF collab.)~\cite{QCDSFa} \\ \hspace{0.5cm}$111(4)$ & Aoki et al.~(CP-PACS collab.)~\cite{QM_CPPACS} \\ \hspace{0.5cm}$\phantom{0}95(16)$ & Gough et al.~\cite{QM_GoughEtAl} \\ \hspace{0.5cm}$\phantom{0}88(10)$ & Gupta \& Bhattacharya~\cite{QM_Gupta} \\ \hline \end{tabular} \end{center} \end{table} \renewcommand{\arraystretch}{1.0} Some recent results for the strange quark mass obtained in this way or in similar ways are listed in Table~1 (further results can be found in the review of Bhattacharya and Gupta~\cite{QM_Review}). The sizeable differences among these numbers have many causes. 
An important uncertainty arises from the fact that, in one form or another, the one-loop formula \begin{equation} Z_{\rm P}=1+{g_0^2\over4\pi}\left\{(2/\pi)\ln(a\mu)+C\right\} +{\rm O}(g_0^4) \end{equation} has been used to compute the renormalization factor, where $g_0$ denotes the bare lattice coupling, $\mu$ the normalization mass in the ${\rm \overline{MS\kern-0.05em}\kern0.05em}$ scheme and $C$ a calculable constant that depends on the details of the lattice regularization. Bare perturbation theory has long been known to be unreliable and various recipes, based on mean-field theory or resummations of tadpole diagrams, have been given to deal with this problem~\cite{Parisi,Lepenzie}. Different prescriptions however give different results and it is in any case unclear how the error on the values of $Z_{\rm P}$ calculated in this way can be reliably assessed. \subsection{Intermediate renormalization} An interesting method to compute renormalization factors that does not rely on bare perturbation theory has been proposed by Martinelli et al.~\cite{IR_Martinelli}. The idea is to proceed in~two steps, first matching the lattice with an intermediate momentum subtraction (MOM) scheme and then passing to the ${\rm \overline{MS\kern-0.05em}\kern0.05em}$ scheme. The details of the intermediate MOM scheme do not influence the final results and are of only practical importance. One usually chooses the Landau gauge and imposes normalization conditions on the propagators and the vertex functions at some momentum~$p$.
In the case of the pseudo-scalar density, for example, the renormalization constant $\zp^{\raise1pt\hbox{\sixrm MOM}}$ is defined through \vbox{ \vspace{1.0cm} \begin{equation} =\;Z_2^{\raise1pt\hbox{\sixrm MOM}}/\zp^{\raise1pt\hbox{\sixrm MOM}}\;\times \end{equation}} \vbox{ \vspace{-1.35cm} \hbox{\epsfxsize=6.0cm\hspace{4.4cm}\epsfbox{zpmom.eps}} \vspace{0.3cm}} \noindent where $Z_2^{\raise1pt\hbox{\sixrm MOM}}$ denotes the quark wave function renormalization constant and the diagrams represent the full and the bare vertex function associated with this operator. On a given lattice and for a range of momenta, the quark propagator and the full vertex function can be computed using numerical simulations~\cite{IR_Martinelli,IR_Giusti}. $Z_2^{\raise1pt\hbox{\sixrm MOM}}$ and $\zp^{\raise1pt\hbox{\sixrm MOM}}$ are thus obtained non-perturbatively. The total renormalization factor relating the lattice normali\-zations with the ${\rm \overline{MS\kern-0.05em}\kern0.05em}$ scheme is then given by \begin{equation} Z_{\rm P}(g_0,a\mu)=\zp^{\raise1pt\hbox{\sixrm MOM}}(g_0,ap)X_{\rm P}(\bar{g}_{\kern0.5pt\overline{\hbox{\sixrm MS\kern-0.10em}}},p/\mu), \end{equation} with $X_{\rm P}$ being the finite renormalization constant required to match the MOM with the ${\rm \overline{MS\kern-0.05em}\kern0.05em}$ scheme. $X_{\rm P}$ is known to one-loop order of renormalized perturbation theory and could easily be worked out to two loops. While this method avoids the use of bare perturbation theory, it has its own problems, the most important being that the momentum $p$ should be significantly smaller than $1/a$ to suppress the lattice effects, but not too small as otherwise one may not be confident to apply renormalized perturbation theory to compute $X_{\rm P}$. On the current lattices the values of $1/a$ are between $2$ and $4$ GeV and it is hence not totally obvious that a range of momenta exists where both conditions are approximately satisfied~\cite{IR_Crisafulli,IR_Goeckeler}.
A simple criterion which may be applied in this connection is that the calculated values of $Z_{\rm P}$ should be independent of~$p$. \subsection{Non-perturbative renormalization group} It should now be quite clear that further progress depends on whether one is able to make contact with the high-energy regime of the theory in a controlled manner. As has been noted some time ago, this can be achieved through a recursive procedure~\cite{FSTa}. A general solution of the non-perturbative renormalization problem is then obtained~\cite{FSTb,FSTc,FSTd,FSTe}. The basic idea of the method can be explained in a few lines. One begins by \hbox{introducing} a special intermediate renormalization scheme, where all normalization conditions are imposed at scale $\mu=1/L$ and zero quark masses, $L$ being the spatial extent of the lattice. We could choose a MOM scheme, for example, and set $p$ equal to $2\pi/L$, the smallest non-zero momentum available in finite volume. But this is not the only possibility and other schemes are in fact preferred for technical reasons. In such a scheme the scale evolution of the renormalized parameters and operators can be studied simply by changing the lattice size $L$ at fixed bare parameters. One usually simulates pairs of lattices with sizes $L$ and $2L$. Up to lattice effects the running couplings on the two lattices are then related through \begin{equation} \bar{g}^2(2L)=\sigma(\bar{g}^2(L)), \end{equation} where $\sigma$ is an integrated form of the Callan-Symanzik $\beta$-function. Similar scaling functions are associated with the renormalized quark masses and the local operators. An important point to note is that these functions can be computed for a large range of $\bar{g}^2$ without running into uncontrolled lattice effects, because the lattice spacing is always much smaller than $L$, on any reasonable lattice, no matter how small $L$ is in physical units. 
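At one-loop order the step-scaling function $\sigma$ can be written down in closed form, which makes the recursion easy to sketch. The snippet below is an illustration only: the true $\sigma$ is measured non-perturbatively, and the starting value of the coupling is invented.

```python
import math

b0 = 11.0 / (16.0 * math.pi ** 2)  # one-loop beta-function coefficient, zero flavours

def sigma_one_loop(u):
    """One-loop step scaling: 1/g^2(2L) = 1/g^2(L) - 2*b0*ln(2)."""
    return u / (1.0 - 2.0 * b0 * u * math.log(2.0))

u = 1.0                  # assumed g^2 at the smallest volume
couplings = [u]
for _ in range(8):       # eight doublings of L, as in the quoted calculation
    u = sigma_one_loop(u)
    couplings.append(u)

print(couplings)         # the coupling grows monotonically towards low energies
```

With only eight applications of $\sigma$ the scale changes by a factor $2^8=256$, which illustrates why the recursion covers a far larger range of scales than any single lattice could.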
Once the scaling functions are known, one can move up and down the energy scale by factors of $2$. With only a few steps a much larger range of scales can be covered in this way than would otherwise be possible. \begin{figure}[t] \begin{center} \small \renewcommand{\arraystretch}{2.0} \begin{tabular}{l c l} $\Lambda_{\overline{\hbox{\sixrm MS\kern-0.10em}}}=k\Lambda$, $M$ &\hspace{0.0cm}$\longleftarrow$\hspace{1.0cm} &\hspace{0.5cm}$\Lambda$, $M$ \\ \hspace{1.0cm}$\downarrow$ & &\hspace{1.0cm}$\uparrow$ \\ \begin{minipage}{3.0cm} perturbative\\ evolution \end{minipage} && \begin{minipage}{3.0cm} perturbative\\ evolution \end{minipage}\\ \hspace{1.0cm}$\downarrow$ & &\hspace{1.0cm}$\uparrow$ \\ $\alpha_{\overline{\hbox{\sixrm MS\kern-0.10em}}}(\mu)$, $\overline{m\kern-1pt}\kern1pt_{\overline{\hbox{\sixrm MS\kern-0.10em}}}(\mu)$ && $\bar{g}$, $\overline{m\kern-1pt}\kern1pt$ at $\mu=100$ GeV \\ & &\hspace{1.0cm}$\uparrow$ \\ && \begin{minipage}{3.0cm} non-perturbative\\ evolution \end{minipage}\\ & &\hspace{1.0cm}$\uparrow$ \\ \begin{minipage}{3.0cm} $f_{\pi},m_{\pi},m_{\hbox{\sixrm K}},\ldots$ \end{minipage} &\hspace{0.0cm}$\longrightarrow$\hspace{1.0cm} & \begin{minipage}{4.0cm} finite-volume scheme\\ $\bar{g}$, $\overline{m\kern-1pt}\kern1pt$ at $\mu=0.6$ GeV \end{minipage}\\ \end{tabular} \renewcommand{\arraystretch}{1.0} \end{center} \vspace{0.6cm} \begin{center} \footnotesize Figure~2: Strategy to compute the running coupling and quark masses, taking low-energy data as input and using the non-perturbative renormalization group to scale up to high energies. \end{center} \vspace{0.0cm plus 0.3cm} \end{figure} \subsection{Application} So far the recursive procedure described above has been used to compute the running coupling in quenched QCD~\cite{FSTb,FSTc} and first results are now also being obtained for the running quark masses~\cite{FSTd,FSTe}. The calculation follows the arrows in the diagram shown in Figure~2, starting at the lower-left corner.
In this plot the energy is increasing from the bottom to the top while the entries in the left and right columns refer to infinite and finite volume quantities respectively. The computation begins by calculating the renormalized coupling $\bar{g}$ and quark masses $\overline{m\kern-1pt}\kern1pt$ in the chosen finite-volume scheme at some low value of $\mu$, where contact with the hadronic scales can easily be made using numerical simulations. In the next step one takes these results as initial values for the non-perturbative renormalization group and scales the coupling and quark masses to high energies. At still higher energies the perturbative evolution equations apply and the $\Lambda$-parameter and the renormalization group invariant quark masses \begin{equation} M=\lim_{\mu\to\infty} \overline{m\kern-1pt}\kern1pt\left(2b_0\bar{g}^2\right)^{-d_0/2b_0} \end{equation} may be extracted with negligible systematic error ($b_0$ and $d_0$ denote the one-loop coefficients of the $\beta$-function and the anomalous mass dimension). The renormalization group invariant quark masses $M$ are scheme-independent and thus do not change when we pass from the finite-volume to the ${\rm \overline{MS\kern-0.05em}\kern0.05em}$ scheme, while the matching of the $\Lambda$-parameters involves an exactly calculable proportionality constant $k$ (top line of Figure~2). The perturbative evolution in the ${\rm \overline{MS\kern-0.05em}\kern0.05em}$ scheme, which is now known through four loops~\cite{RitbergenI,Chetyrkin,RitbergenII}, finally yields the running coupling and quark masses in this scheme. \begin{figure}[t] \vspace{-2.3cm} \hbox{\epsfxsize=11.0cm\hspace{2.0cm}\epsfbox{alpha.eps}} \vspace{-1.2cm} \begin{center} \footnotesize Figure~3: Simulation results for the running coupling $\alpha=\bar{g}^2/4\pi$ in the SF scheme (full circles). 
The solid (dashed) lines are obtained by integrating the perturbative evolution equation, starting at the right-most data point and using the 3-loop (2-loop) expression for the $\beta$-function. \end{center} \vspace{0.0cm plus 0.3cm} \end{figure} Figure~3 shows the scale evolution of the running coupling in the SF scheme, which is the particular finite-volume scheme that has been employed. The data points are separated by scale factors of $2$, i.e.~the recursion has been applied $8$ times. At the higher energies the scale dependence of the coupling is accurately reproduced by the perturbative evolution, which has recently been worked out to three loops in this scheme~\cite{Bode}. The perturbative region has thus safely been reached and, using the $3$-loop evolution in the range $\alpha\leq0.08$, one obtains~\cite{FSTe} \begin{equation} \Lambda^{(0)}_{\overline{\hbox{\sixrm MS\kern-0.10em}}}=251\pm21\;{\rm MeV}. \end{equation} The index $\raise1pt\hbox{$\scriptstyle(0)$}$ reminds us that this number is for quenched QCD which formally corresponds to zero flavours of light sea quarks. Otherwise no uncontrolled approximations have been made and Eq.~(14) thus is a solid result. \begin{figure}[t] \vspace{-2.3cm} \hbox{\epsfxsize=11.0cm\hspace{2.0cm}\epsfbox{mbar.eps}} \vspace{-1.2cm} \begin{center} \footnotesize Figure~4: Simulation results for the running quark mass in the SF scheme. The solid (dashed) lines are obtained using the 2-loop (1-loop) expression for the anomalous mass dimension. \end{center} \vspace{0.0cm plus 0.3cm} \end{figure} The scale evolution of the quark masses in the SF scheme is also accurately matched by perturbation theory and the renormalization group invariant masses are hence easily obtained [cf.~Eq.~(13)]. Some preliminary simulation results~\cite{FSTe} for the flavour-independent ratio $\overline{m\kern-1pt}\kern1pt/M$ are plotted in Figure~4.
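The high-energy perturbative step is elementary once the coupling is known at a weak-coupling scale. The sketch below evaluates the standard two-loop $\Lambda$-parameter formula and the leading-order factor of Eq.~(13), using the universal zero-flavour coefficients; the coupling values are invented for illustration, and this is not the analysis behind Eq.~(14) or the quoted mass ratio.

```python
import math

# Universal (scheme-independent) coefficients for zero flavours
b0 = 11.0 / (16.0 * math.pi ** 2)         # one-loop beta function
b1 = 102.0 / (16.0 * math.pi ** 2) ** 2   # two-loop beta function
d0 = 8.0 / (16.0 * math.pi ** 2)          # one-loop anomalous mass dimension

def lambda_param(mu, g_sq):
    """Two-loop Lambda = mu * (b0 g^2)^(-b1/(2 b0^2)) * exp(-1/(2 b0 g^2))."""
    return (mu * (b0 * g_sq) ** (-b1 / (2.0 * b0 ** 2))
               * math.exp(-1.0 / (2.0 * b0 * g_sq)))

def rgi_factor(g_sq):
    """Leading-order M / mbar(mu) = (2 b0 g^2)^(-d0/(2 b0)), cf. Eq. (13)."""
    return (2.0 * b0 * g_sq) ** (-d0 / (2.0 * b0))

lam = lambda_param(100.0, 0.08 * 4.0 * math.pi)  # assume alpha = 0.08 at 100 GeV
print(lam)               # comes out at a few tenths of a GeV
print(rgi_factor(4.0))   # a strong (low-energy) coupling: M/mbar of order 1.2
```

Note that the exponent $d_0/2b_0=4/11$ is the same in every scheme, which is why $M$ itself is scheme independent.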
The left-most data point corresponds to a normalization mass $\mu$ around $290$ MeV and a ratio $M/\overline{m\kern-1pt}\kern1pt=1.18(2)$. This factor provides the required link between the low-energy and the perturbative regime of the theory. To complete the computation of (say) the strange quark mass in the ${\rm \overline{MS\kern-0.05em}\kern0.05em}$ scheme, one still needs to go through a few steps, but the renormalization problem has been solved at this point and what is left to do are some standard calculations of meson masses and of the vacuum-to-kaon matrix element of the unrenormalized pseudo-scalar density. The fact that the curves in Figures~3 and 4 agree so well with the data down to very low energies should not be given too much significance. Rather than a general feature of the theory, the absence of large non-perturbative corrections to the scale evolution should be taken as a property of the chosen renormalization scheme. Other schemes behave differently in this respect and there is usually no way to tell in advance at which energy the perturbative scaling sets in. \vfill\eject \section{Concluding remarks} The theoretical developments described in this talk lead to a better understanding of the continuum limit and of the parameter and operator renormalization in lattice QCD. In particular, using improved actions and the new techniques for non-perturbative renormali\-zation, one will be able to obtain more reliable results and to approach difficult problems such as the calculation of moments of structure functions~\cite{SFa} and $K\to\pi\pi$ decay rates~\cite{KPPa,KPPb} with greater confidence. Most of the examples and results that have been mentioned here refer to quenched~QCD, but the theoretical discussion also applies to the full theory with any number of sea quarks. At this point the bottleneck is the simulation algorithms, which remain rather inefficient when sea quark effects are included.
Continuous progress is however being made~\cite{Guesken} and the new generation of dedicated computers will no doubt allow the lattice theorists to move a big step forward in this area too. While there are many indications that O($a$) improvement and other forms of improvement are successful, the conclusion that this will lead to dramatic savings in computer time, ultimately allowing the solution of lattice QCD on a PC~\cite{PCa,PCb,PCc}, is not justified. Fast computers are indispensable if one is interested in obtaining good control over the systematic errors. They are also needed to tackle the more complicated physics issues mentioned before and the inclusion of sea quark effects is clearly beyond the capabilities of present-day PCs. \section*{Acknowledgements} I would like to thank Guido Martinelli, Hubert Simma, Stefan Sint, Rainer Sommer, Hartmut Wittig and Tomoteru Yoshie for helpful correspondence and Peter Weisz for critical comments during the preparation of this talk. \section*{References}
astro-ph/9711333
\section{Introduction} \begin{figure*} \centerline{\psfig{file=multispec.eps,width=14.8cm}} \caption{Results of the \hi\ absorption experiment. The uniformly weighted, 21~cm continuum image is displayed as contours over greyscale. The white circle indicates the location of the AGN according to the alignment of Capetti et al. (1995). The naturally weighted spectra towards each bright component of the radio jet are displayed as overlays. \hi\ absorption is clearly detected towards component 6, but a search over the data cube reveals no other significant absorption. The continuum contour levels, in mJy beam$^{-1}$, are: $\pm 0.53$ ($3\sigma$), 1.2, 2.9, 6.8, 15.9, and 37.3 (logarithmic scaling). The restoring beam dimensions are: natural weight, $0\farcs32 \times 0\farcs27$, P.A. $-21\hbox{$^\circ$}$; and uniform weight, $0\farcs16 \times 0\hbox{$.\!\!^{\prime\prime}$} 14$, P.A. $88\hbox{$^\circ$}$.} \label{f_results} \end{figure*} Emission from hot, ionised gas distinguishes active galactic nuclei (AGNs) from quiescent galaxies. However, conventional models for AGNs depend on the distribution and kinematics of colder, neutral media. Firstly, the host galaxy is a massive reservoir of neutral gas which might ultimately feed an energetic accretion disc, although the means by which gas funnels down to sub-parsec scales is not well understood (Rees 1984). Secondly, the unifying schemes for AGNs propose that the apparent differences between broad-line AGNs (i.e. Seyfert 1s) and narrow-line AGNs (Seyfert 2s) result from selective obscuration through neutral, dusty gas located along the sight-line to the broad-line region (Antonucci \& Miller 1985). Exploring the neutral gas in AGNs is challenging because the surface brightness of emission is generally too faint to detect on scales much smaller than $\sim 1\hbox{$^{\prime\prime}$}$.
We are instead continuing a programme to explore neutral hydrogen (\hi) in {\em absorption} towards AGNs with the goal of establishing the distribution and kinematics on scales as small as $0\farcs1$, or roughly 10~parsecs in the nearest Seyfert galaxies (Pedlar et al. 1995; Mundell et al. 1995; Gallimore et al. 1994). In this work, we present MERLIN observations of 21~cm absorption towards the Seyfert 1.5 nucleus of Mkn~6. The localisation of the \hi\ absorption suggests a particular alignment between the host galaxy disc and the radio jet. After first describing the observations and results, we discuss the implications of this alignment in further detail. For comparison with earlier papers, we adopt a distance of 77~Mpc to Mkn~6, appropriate for $H_0 = 75$~km s$^{-1}$\ Mpc$^{-1}$, and giving a scale of 1\hbox{$^{\prime\prime}$} = 374~pc (Meaburn et al. 1989). \section{Observations} \begin{figure} \centerline{\psfig{file=rotcurv.eps,width=8.8cm}} \caption{The location of the \hi\ absorption line (open circle) on the position-velocity diagram for Mkn~6. The centroid velocities of the extended narrow line region (ENLR; filled squares) and NLR (open triangle) are taken from Meaburn et al. (1989). The ENLR traces kinematically quiescent gas that is exposed to the AGN, and so defines the inner rotation curve. Within the errorbars, the \protect\hi\ absorption is located roughly where expected on the rotation curve, and so probably arises from gas in normal rotation about the galaxy center. } \label{rotcurv} \end{figure} We observed Mkn~6 with the 8-element MERLIN array (Wilkinson 1992), including the Lovell telescope; the results are summarised in Fig.~\ref{f_results}. The observations were tuned to the 1420~MHz hyperfine transition of \hi\ centered near the Doppler velocity $cz = 5800$~km s$^{-1}$\ (heliocentric, optical convention). The systemic velocity of the host galaxy is actually $5640\pm 10$~km s$^{-1}$\ (Meaburn et al. 1989), well within the observed bandwidth. 
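The adopted scale quoted above is a one-line small-angle computation; a quick check (a sketch, using only the distance given in the text):

```python
import math

D_pc = 77.0e6                                # adopted distance: 77 Mpc, in parsec
arcsec_in_rad = math.pi / (180.0 * 3600.0)   # one arcsecond in radians
scale_pc_per_arcsec = D_pc * arcsec_in_rad   # small-angle approximation

print(scale_pc_per_arcsec)   # ~373 pc, matching the quoted 1" = 374 pc
```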
The velocity resolution of the observations is 26.4~km s$^{-1}$, and, after removing end channels with poor frequency response, the effective bandwidth is $\sim$6.6~MHz (1400~km s$^{-1}$). Data reduction followed standard techniques employed for MERLIN data, including initial calibration and processing with software local to Jodrell Bank. Further data processing, including self-calibration against line-free continuum channels, was performed within the AIPS data reduction package. Channel maps and line-free continuum images were produced following standard numerical Fourier transform techniques and deconvolution using the CLEAN algorithm (H\"ogbom 1974). A more detailed description of the MERLIN data reduction techniques employed can be found in Mundell et al. (1995). We constructed both naturally and uniformly weighted spectral line cubes. Continuum images were generated by averaging over channels with no significant line detections. For the naturally weighted images, the restoring beam dimensions (FWHM) are $0\farcs32 \times 0\farcs27$, P.A. $-21\hbox{$^\circ$}$, and the respective continuum and spectral line sensitivities are $0.13$~mJy\ beam$^{-1}$\ and $0.68$~mJy\ beam$^{-1}$\ ($1\sigma$). The resolution of the uniformly weighted images is $0\farcs16 \times 0\hbox{$.\!\!^{\prime\prime}$} 14$, P.A. $88\hbox{$^\circ$}$, and the continuum and spectral line sensitivities are $0.19$~mJy\ beam$^{-1}$\ and $1.2$~mJy\ beam$^{-1}$. \section{Results} In contrast to the radio continuum emission from Mkn~6, which is extended and highly structured (e.g., Kukula et al. 1996; Fig.~\ref{f_results}), \hi\ absorption is detected only towards component~6, a compact source located at the northern end of the arcsecond-scale radio jet (Fig~\ref{f_results}; component numbering following Kukula et al. 1996). 
Discussed further below, the linewidth is very narrow in comparison with \hi\ absorbed radio jets in other Seyfert galaxies; formally, the linewidth (FWHM) is $33\pm 6$~km s$^{-1}$\ (corrected for the instrumental resolution) and the maximum opacity is $\tau_{max} = 0.45\pm0.01$. The integrated absorption profile corresponds to a foreground column of $$N_{HI} = (2.6\pm0.3) \times 10^{21}\ (T_S/100{\rm\ K}){\rm\ cm^{-2}}\ ,$$ where $T_S$ is the spin (excitation) temperature of the ground state. This column is not unusual for a sight-line through an inclined disk galaxy. However, we note that a similar column, detected in NGC~4151, was interpreted as absorption in a nuclear torus (Mundell et al. 1995). \begin{figure*} \centerline{\psfig{file=sketch.eps,width=14.8cm}} \caption{Illustration of the \hi\ absorbing medium of Mkn~6. The {\em left panel} is an overlay of the 21~cm radio continuum and an archival HST image taken in the F606W (wide V-band) filter. We have subtracted an elliptical isophote model of the smooth, bulge light from the HST image in order to enhance the contrast of the underlying structure. The halftone rendering of the HST image is displayed in the positive sense: the dark band across the nucleus is an apparent band of high extinction, presumably arising in a dust lane. We chose the Capetti et al. (1995) alignment between the MERLIN and HST images. Only component 6 among the brighter jet features lies, in projection, within the dust lane. The cartoon in the {\em right panel} depicts a plausible ring geometry for the neutral, absorbing gas. The proposed location of the AGN, near radio component 3, is indicated by the dot. This cartoon is purely illustrative and is not intended to be a detailed model for the MERLIN \hi\ absorption and HST data. } \label{HST} \end{figure*} The limits placed by non-detections better define the localisation of the \hi\ absorption around component~6. 
Towards the brighter regions of the southern jet, components~2--4, the ($3\sigma$) limit is $\tau_{\nu} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 0.07$, corresponding to a foreground column density $$N_{HI} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 4\times 10^{20}\ {\rm cm^{-2}}\ (T_s/100) (\Delta v/30\ {\rm km\ s^{-1}})\ .$$ The absorbing gas would easily have been detected had the gas completely covered the jet. On the other hand, component 5, which is the nearest neighbor to the absorbed component, is much fainter, and so the limits are less stringent: $\tau_{\nu} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 0.9$, or $$N_{HI} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 5\times 10^{21}\ {\rm cm^{-2}}\ (T_s/100) (\Delta v/30\ {\rm km\ s^{-1}})\ .$$ We can conclude that the \hi\ absorbing gas covers a region including component~6 and extending no further south than component 5, or roughly 0\farcs75 (280~pc in projection). However, we can place no limits on the extent of the absorbing gas in other directions. The centroid velocity of the absorption line is $5584\pm 3$~km s$^{-1}$, blue-shifted relative to systemic by $56\pm10$~km s$^{-1}$. For comparison, the position-velocity curve is plotted in Fig.~\ref{rotcurv}. The details of the rotation curve within the inner few arcseconds are unknown, but the velocity of the 21~cm absorption line does not appear significantly displaced from any plausible rotation curve. We conclude that the absorption line arises in otherwise normally rotating gas, and there is no evidence for streaming motions greater than $\sim 50$~km s$^{-1}$. Furthermore, we do not detect any velocity gradients across component~6.
Assuming that the absorbing gas completely covers the background source (Sect.~\ref{discuss}), the upper limit for the velocity gradient is approximately the width of the absorption line divided by the component size ($\sim 0\farcs08$; Kukula et al. 1996), or $< 1.0$~km s$^{-1}$\ pc$^{-1}$. For comparison, the projected velocity gradient of the \hi\ absorption seen towards NGC~4151 is $\sim 3$~km s$^{-1}$\ pc$^{-1}$\ (Mundell et al. 1995). \section{Discussion}\label{discuss} The trivial explanation for the localised \hi\ absorption is an isolated cloud which fortuitously aligns with component~6. We consider it more likely, however, that the absorbing gas lies in the galaxy disc surrounding the nucleus. For example, this result compares favorably with the localised \hi\ absorption observed towards the radio jet of NGC~4151 (Mundell et al. 1995). The interesting question is whether, as was proposed for NGC~4151, the absorbing gas might be located in a small-scale ($\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 100$~pc) disc surrounding the AGN. In the case of Mkn~6, however, we find that absorption from gas distributed on kpc-scales is more consistent with the observations. The first evidence is that the linewidth is very narrow, $\sim 30$~km s$^{-1}$, which is less than half the \hi\ absorption linewidth of NGC~4151. In contrast, \hi\ absorption linewidths towards Seyfert and starburst galaxies often exceed 100~km s$^{-1}$, particularly in those cases where the \hi\ absorption is known to trace gas deep in the nucleus (Pedlar et al. 1996; Mundell et al. 1995; Gallimore et al. 1994; Dickey 1986). This evidence is not sufficient, however, since we cannot rule out the possibility that the absorption arises from a compact, circularly rotating disc viewed nearly face-on. Nevertheless, the narrowness of the line is consistent with that expected from a larger scale ring or disc. We next examine the displacement of the absorption from the AGN.
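The column density and velocity-gradient limits quoted above follow from short arithmetic; two quick checks (a sketch, which assumes a Gaussian line profile for the integrated opacity, whereas the paper integrates the observed profile directly):

```python
tau_max = 0.45    # peak 21 cm opacity
fwhm    = 33.0    # corrected linewidth (FWHM), km/s
T_S     = 100.0   # assumed spin temperature, K

# Standard relation: N_HI = 1.823e18 * T_S * Int(tau dv) cm^-2, with
# Int(tau dv) ~ 1.0645 * tau_max * FWHM for a Gaussian profile.
integrated_tau = 1.0645 * tau_max * fwhm
N_HI = 1.823e18 * T_S * integrated_tau
print(N_HI)       # ~2.9e21 cm^-2, consistent with the quoted (2.6 +/- 0.3)e21

# Velocity-gradient limit: linewidth over the projected size of component 6
size_pc  = 0.08 * 374.0        # ~0.08 arcsec at 374 pc per arcsec
gradient = fwhm / size_pc
print(gradient)                # ~1.1 km/s/pc, of order the quoted ~1 km/s/pc
```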
Unfortunately, the correspondence between components in the optical and radio images is not accurately known. Moreover, the continuum spectra and sizes of the radio features are indistinct, and so there is currently no clear radio candidate for the AGN proper (Kukula et al. 1996). Clements (1983) places the optical nucleus somewhere between component 5 and (the \hi\ absorbed) component 6, but the uncertainties are roughly one quarter the length of the radio jet. Nevertheless, the Clements position is significantly displaced southward from component 6 (Kukula et al. 1996). Capetti et al. (1995) propose an alignment between the radio and optical images based on {\em Hubble Space Telescope} images. They found a linear extension of \boiiib\ emission that agrees well both in orientation and detailed shape with the southern part of the radio jet (i.e., components~1--5). Aligning the radio and optical jet structures places the AGN $\sim 1\hbox{$^{\prime\prime}$}$ ($\sim 380$~pc in projection) south of component~6, somewhere nearer component~3 (from Kukula et al.: $\alpha_{\rm (J2000)} = 6^h\ 52^m\ 12\fs336$, $\delta_{(J2000)} = 74\hbox{$^\circ$}\ 25\hbox{$^\prime$}\ 37\farcs08$; $S_{\nu}(20\ {\rm cm}) = 16$~mJy). Adopting this alignment, and further considering the narrowness of the absorption line, we are drawn to the conclusion that the \hi\ absorption in Mkn~6 arises from neutral gas displaced from the nucleus by $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 400$~pc. For reference, the strongest absorption lines observed towards the Seyfert nucleus of NGC~1068 similarly trace a $\sim 500$~pc radius, central disc (Gallimore et al. 1994). From a more detailed study of the optical and radio continuum structures of the nucleus (Holloway et al. in preparation), we have discovered a conspicuous candidate for the \hi\ absorber. 
Illustrated in Fig.~\ref{HST}, there is an obvious band of increased extinction which crosses $\sim 1\hbox{$^{\prime\prime}$}$ north of the optical nucleus. For convenience, we refer to this dark region simply as a dust lane. According to the alignment of Capetti et al. (1995), the dust lane encompasses the position of the \hi\ absorbed radio feature. The high aspect ratio of the dust lane suggests a disk or spiral arms viewed edge-on. The simplest picture is that the dust lane traces a kpc-scale disc or ring surrounding the nucleus, or perhaps a spiral arm segment lying in front of the nucleus. The radio jet must be oriented with component 6 lying behind the disc to the north and components 1--5 in front of the disc to the south. There are two important implications of this result. Firstly, the location of the \hi\ absorbed radio feature within the newly discovered dust lane lends self-consistent support for the Capetti et al. alignment, which, as a corollary, strengthens their argument for an interaction between the radio jet and the NLR gas. The second implication is that the northern jet and NLR structures fall behind the galaxian disc, contrary to our earlier model for the northern ionisation cone (Kukula et al. 1996). More specifically, there is a strong correspondence between \boiiib\ emission and radio emission only at the southern end of the jet. The lack of \boiiib\ emission towards the northern end of the jet (i.e., component~6) is naturally explained by extinction in our model for the \hi\ absorption. We will explore a revised model for the ionisation cone structure in a follow-up paper (Holloway et al. 1997). \section{Conclusions} Our primary results and conclusions are as follows. \begin{enumerate} \item There is no \hi\ absorption detected toward the probable location of the AGN of Mkn~6. This result is consistent with the more general picture that sight-lines towards Seyfert 1 nuclei are relatively unobscured. 
\item The detected \hi\ absorption probably arises from a kpc-scale distribution of gas, possibly a disc, spiral arms, or a ring, surrounding the nucleus and associated with a conspicuous dust lane passing north of the AGN. \item The kinematics of the \hi\ absorption line gas places it near the systemic velocity as interpolated from measurements of the ENLR. Unlike other \hi\ absorbed Seyfert nuclei (Dickey 1986), there is no evidence for rapid streaming motions in the absorbing gas. \item The radio jet is probably oriented behind the galactic disc to the north and in front of the galactic disc to the south. If, as appears to be the case for most Seyfert nuclei, the NLR and ENLR gas share a similar axis with the radio jet, this result places the northern ENLR on the far side of the disc, contrary to earlier models. \end{enumerate} \begin{acknowledgements} J.F.G. received collaborative travel support from the University of Manchester Dept. of Astronomy and computer support at NRAL, Jodrell Bank during the completion of this work. C.G.M. acknowledges receipt of a PPARC Research Fellowship. \end{acknowledgements}
quant-ph/9711068
\section{Introduction. Classical and quantum characteristic exponents.} A notion of {\it quantum characteristic exponent} has been introduced in Ref.\cite{Vilela2}, which has the same physical meaning as the corresponding classical quantity (the Lyapunov exponent). The correspondence is established by first rewriting the classical Lyapunov exponent as a functional of densities and then constructing the corresponding quantity in quantum mechanics. The construction is explained in detail in Ref.\cite{Vilela1}, where the required functional spaces are identified and the infinite-dimensional measure theoretic framework is developed. Here we just recall the main definitions and emphasize some refinements concerning the support properties of the quantum characteristic exponents, which turn out to be relevant for the numerical computations of Sect.2. Expressed as a functional of {\it admissible} $L^1-$densities, the classical Lyapunov exponent is\cite{Vilela1} \begin{equation} \label{1.1}\lambda _v=\lim _{n\rightarrow \infty }\frac 1n\log \left| -v^i\frac \partial {\partial x^i}D_{\delta _x}\left( \int d\mu (y)\smallskip\ y\smallskip\ P^n\rho (y)\right) \right| \end{equation} where $\rho $ is an initial condition density, $P$ the Perron-Frobenius operator, $x$ a generic phase-space coordinate, $v$ a vector in the tangent space, $\mu $ the invariant measure and $D_{\delta _x}$ the Gateaux derivative along the generalized function $\delta _x$. The possibility to define Gateaux derivatives along generalized functions with point support and the need for a well-defined $\sigma $-additive measure in an infinite-dimensional functional space lead almost uniquely to the choice of the appropriate mathematical framework, that is, {\it admissible densities} are required to belong to a nuclear space. Being ergodic invariants, the Lyapunov exponents exist on the support of a measure.
In the nuclear space framework, measures with support on generalized functions, which are in one-to-one correspondence with the usual measures in phase space, may be constructed by the Bochner-Minlos theorem\cite{Vilela1}. To construct, in quantum mechanics, a quantity with the same operational meaning as (\ref{1.1}) let $U^n$ ($n$ continuous or discrete) be the unitary operator of quantum evolution acting on the Hilbert space $H$ and $\widetilde{X}$ a self-adjoint operator in $H$ belonging to a commuting set $S$. For definiteness $\widetilde{X}$ is assumed to have a continuous spectrum, to be for example a coordinate operator in configuration space. One considers, as in the classical case, the propagation of a perturbation $\partial _i\delta _x$, where by $x$ we mean now a point in the spectrum of $\widetilde{X}$. \begin{equation} \label{1.2}v^iD_{\partial _i\delta _x}\left( U^n\Psi ,\widetilde{X}U^n\Psi \right) =2Re\bigskip\ v^i\frac \partial {\partial x^i}<\delta _x,U^{-n}\widetilde{X}U^n\Psi > \end{equation} For the proper definition of the right-hand side of (\ref{1.2}) one requires $\Psi \in E$ to be in a Gelfand triplet\cite{Gelfand} $$ E^{*}\supset H\supset E $$ By $<\delta _x|$ or $<x|$ we denote a generalized eigenvector of $\widetilde{X}$ in $E^{*}$. Notice also that $U^n$, being an element of the infinite-dimensional unitary group, has a natural action both in $E$ and $E^{*}$\cite{Hida}. One obtains then the following definition of {\it quantum characteristic exponent} \begin{equation} \label{1.3}\lambda _{v,x}=\lim \sup _{n\rightarrow \infty }\frac 1n\log \left| \textnormal{Re}\bigskip\ v^i\frac \partial {\partial x^i}<\delta _x,U^{-n}\widetilde{X}U^n\Psi >\right| \end{equation} The support properties of this quantum version of the Lyapunov exponent have to be carefully analyzed. In Eq.(\ref{1.3}), $\Psi $ defines the state which, in quantum mechanics, plays the role of a (non-commutative) measure\cite{Connes1}.
The quantum exponent may depend on the state, but the measure that, as in the classical case, provides its support is not the state but a measure in the space of the perturbations of the initial conditions, that is, in the space where the Gateaux derivative operates. In the classical case these two measures coincide, in the sense that to each invariant measure in phase space there corresponds an infinite-dimensional measure in the space of generalized functions\cite{Vilela1}. In the quantum case, however, the two entities are different, the second one being the measure on the spectrum of $\widetilde{X}$ induced by the projection-valued spectral measure and the state, that is \begin{equation} \label{1.4}\nu (\Delta x)=(\Psi ,\int\limits_{\Delta x}dP_x\Psi ) \end{equation} A particular case where an infinite-dimensional measure-theoretical setting, similar to the classical one, may be used to define the quantum exponents\cite{Vilela1}, is when the quantum evolution is implemented by substitution operators in configuration space, as in some sectors of the configurational quantum cat\cite{Weigert1}\cite{Vilela2}\cite{Weigert2}. However this formulation is not very useful in general and the {\it state plus spectrum-measure} framework seems to be the one that has general validity. In this framework the following existence theorem holds \underline{{\em Theorem}}: Let $\widetilde{X}$ be a self-adjoint operator, $E$ a test function space in a Gelfand triplet containing the generalized eigenvectors of $\widetilde{X}$ in its dual $E^{*}$ and $\Psi \in E$.
Then if $U^n\Phi \in E$ and $\widetilde{X}\Phi \in E$ ($\forall \Phi \in E$, $\forall n\in Z$) and the following integrability condition is fulfilled
\begin{equation}
\label{1.4a}\left| \int d\nu (x)\log \frac{\left| \textnormal{Re}\,v^i\partial _{x_i}<x|U^{-1}\Phi >\right| }{\left| \textnormal{Re}\,v^i\partial _{x_i}<x|\Phi >\right| }\right| <M
\end{equation}
$\forall \Phi \in E$, the limit in Eq.(\ref{1.3}) exists as an $L^1(\nu )$-function, that is, the average quantum characteristic exponent is defined for any measurable set in the support of $\nu $.

Proof: We write Eq.(\ref{1.3}) as
\begin{equation}
\label{1.4b}\lambda _v(x)=\lim \sup _{n\rightarrow \infty }\frac 1n\sum_{k=0}^{n-1}\log \frac{\left| \textnormal{Re}\,v^i\partial _{x_i}<x|U^{-n+k}\widetilde{X}U^{n-k}\Phi >\right| }{\left| \textnormal{Re}\,v^i\partial _{x_i}<x|U^{-n+k+1}\widetilde{X}U^{n-k-1}\Phi >\right| }
\end{equation}
Then from the integrability condition (\ref{1.4a}) the integral of the sequence on the right-hand side of Eq.(\ref{1.4b}) is bounded and the Bolzano-Weierstrass theorem ensures the existence of the $\lim \sup $. Therefore $\lambda _v(x)$ is well defined as an $L^1(\nu )$-function. $\Box $ Notice that we really need the $\lim \sup $ in the definition of the characteristic exponent because we have no natural $U$-invariant measure in $E$ to be able, for example, to apply Birkhoff's or Kingman's theorem and prove $\lim \sup $=$\lim \inf $. Also the sense in which the measure $\nu $ provides the support for the quantum characteristic exponent is different from the classical ergodic theorems. We have not proven pointwise existence of the exponent a.e. in the support of a measure. What we have obtained here is the possibility to define an average quantum characteristic exponent for arbitrarily small $\nu $-measurable sets.
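The passage from (\ref{1.4b}) back to (\ref{1.3}) is a telescoping sum; in our shorthand (not the paper's notation), with $a_j=\left| \textnormal{Re}\,v^i\partial _{x_i}<x|U^{-j}\widetilde{X}U^j\Phi >\right| $,

```latex
% each term of the sum in (1.4b) is \log(a_{n-k}/a_{n-k-1}), so the sum telescopes:
\frac 1n\sum_{k=0}^{n-1}\log \frac{a_{n-k}}{a_{n-k-1}}
  =\frac 1n\log \frac{a_n}{a_0}
  =\frac 1n\log a_n-\frac 1n\log a_0
```

Since $a_0$ does not depend on $n$, the last term vanishes as $n\rightarrow \infty $, recovering the $\lim \sup $ of (\ref{1.3}), while the integrability condition (\ref{1.4a}) is what bounds the average of each summand.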
Other definitions of characteristic exponents in infinite-dimensional spaces have been proposed by several authors\cite{Ruelle2}\cite{Vilela3}\cite{Haake}\cite{Zycz}\cite{Majewski}\cite{Emch}. They characterize several aspects of the dynamics of linear and non-linear systems. The definition discussed here, proposed for the first time in \cite{Vilela2}, seems however to be the one that stays as close as possible to the spirit of the classical definition of the Lyapunov exponent. Like the classical Lyapunov exponent, the quantum analogue (\ref{1.3}) cannot in general be obtained analytically. There is however a non-trivial example where it can. This is the configurational quantum cat introduced by Weigert\cite{Weigert1}\cite{Weigert2}. The phase space of this model is $T^2\times R^2$. A mapping similar to the classical cat operates as a quantum kick in the configuration space $T^2$, and the rest of the Floquet operator is a free evolution. This system has the appealing features of actually corresponding to the physical motion of a charged particle on a torus acted on by an impulsive electromagnetic field and, as shown by Weigert\cite{Weigert2}, of being exactly solvable. The Floquet operator is
\begin{equation}
\label{1.5}U=U_FU_K
\end{equation}
\begin{equation}
\label{1.6}U_F=\exp [-i\frac T2\widetilde{p}^2];\qquad U_K=\exp [-\frac i2(\widetilde{x}\cdot V\cdot \widetilde{p}+\widetilde{p}\cdot V\cdot \widetilde{x})]
\end{equation}
$U_F$ is a free evolution and $U_K$ a kick that operates in a simple manner on momentum eigenstates and on (generalized) position eigenstates
\begin{equation}
\label{1.7}U_K\left| p\right\rangle =\left| M^{-1}p\right\rangle
\end{equation}
\begin{equation}
\label{1.8}U_K\left| x\right\rangle =\left| M x\right\rangle
\end{equation}
$M$ being a hyperbolic matrix with integer entries and determinant equal to 1. The momentum has discrete spectrum, $p\in (2\pi Z)^2$.
To compute the quantum characteristic exponent (Eq.(\ref{1.3})), let the operator $\widetilde{X}$ be
\begin{equation}
\label{1.9}\widetilde{X}=\sin (2\pi l\cdot x)
\end{equation}
$l\in Z^2$. This operator has the same set of generalized eigenvectors as the position operator $\widetilde{x}$. To construct the measure $\nu $ (Eq.(\ref{1.4})) in the spectrum of the operator $\widetilde{X}$ we cannot use the energy eigenstates $\mid P\alpha \rangle $ because they are not normalized. However, all one requires is invariance of the measure, and using the (normalizable) momentum eigenstates one such measure is obtained.
\begin{equation}
\label{1.10}\nu (A)=\langle p\mid \int_Adx\mid x\rangle \langle x\mid p\rangle
\end{equation}
In this case the invariant measure happens to be simply the Lebesgue measure in $T^2$. Defining
\begin{equation}
\label{1.11}\gamma _n(x)=\langle x\mid U^{-n}\widetilde{X}U^n\mid p\rangle
\end{equation}
the result for the quantum characteristic exponent is
\begin{equation}
\label{1.12}
\begin{array}{c}
\lambda _v=\lim \sup _{n\rightarrow \infty }\frac 1n\log ^{+}\left| \textnormal{Re}\,v^i\frac \partial {\partial x^i}\gamma _n(x)\right| \\
=\lim \sup _{n\rightarrow \infty }\frac 1n\log ^{+}\left| v^i(2\pi M^nl)_i\{\cos \theta _n(p,l,x)+\cos \theta _n(p,-l,x)\}\right|
\end{array}
\end{equation}
with
$$
\theta _n(p,l,x)=\frac T2\left( \sum_{k=0}^{n-1}(M^{-k}p)^2+\sum_{k=0}^{n-1}(M^k(2\pi l+M^{-n}p))^2+x\cdot (2\pi M^nl+p)\right)
$$
For the $\lim \sup $ the cosine term plays no role and finally
\begin{equation}
\label{1.13}\lambda _v=\lim \sup _{n\rightarrow \infty }\frac 1n\log \left| v^i(M^nl)_i\right|
\end{equation}
The characteristic exponent is then determined by the eigenvalues of the hyperbolic matrix $M$ and is the same everywhere in the support of the measure $\nu $.
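Eq.(\ref{1.13}) can be checked numerically without any quantum machinery, since it only involves iterating the integer matrix $M$. A minimal sketch (our own illustration, using the matrix of Eq.(\ref{2.7}); the vectors $v$ and $l$ are arbitrary generic choices):

```python
import math

def characteristic_exponent(v, l, M, n):
    """Estimate lambda_v = (1/n) log |v . (M^n l)| of Eq. (1.13)."""
    w = list(l)
    for _ in range(n):  # exact integer iteration w -> M w
        w = [M[0][0] * w[0] + M[0][1] * w[1],
             M[1][0] * w[0] + M[1][1] * w[1]]
    return math.log(abs(v[0] * w[0] + v[1] * w[1])) / n

M = [[1, 1], [1, 2]]                      # hyperbolic, integer entries, det = 1
est = characteristic_exponent((1.0, 0.0), (1, 0), M, 200)
mu1 = (3 + math.sqrt(5)) / 2              # largest eigenvalue of M
# est approaches log(mu1) ~ 0.9624 for a generic direction v
```

For $v$ orthogonal to the expanding eigenvector the same routine would pick up the contracting eigenvalue instead, in line with the eigenvalue discussion following (\ref{1.13}).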
If $\mu _1$, $\mu _2$ ($\mu _1>\mu _2$) are the eigenvalues of $M$, one obtains $\lambda _v=\log \mu _1$ for a generic vector $v$ and $\lambda _v=\log \mu _2$ iff $v$ is orthogonal to the eigenvector associated with $\mu _1$. Hence, in this case, one obtains a positive quantum characteristic exponent whenever the corresponding classical Lyapunov exponent is also positive. This exact example will be used in Sect.2 as a testing ground for the numerical algorithm and an illustration of the kind of precision problems and support properties to be expected when computing quantum characteristic exponents. In the numerical calculation of the quantum characteristic exponents two delicate points are identified. The first is that the calculation requires a high degree of precision because, if the exponent is positive, the derivative of $U^{-n}\widetilde{X}U^n\Psi (x)$ grows very rapidly with $n$. Therefore in the positive exponent case acceptable statistics is only obtained by taking average values over the configuration space. Second, if the situation is as in the classical case, where different invariant measures coexist in phase space, the quantum exponent may depend on the state $\Psi $ used to define the measure $\nu $ in the spectrum of $\widetilde{X}$. Therefore, in all rigor, one should first construct stationary states and then study the $\Psi $-dependence of the quantum exponent. Such a study has not yet been carried out and, in the calculations below, a flat wave function is used as the initial state.

\section{Numerical computation of quantum characteristic exponents}

For kicked quantum systems corresponding to the Hamiltonian
\begin{equation}
\label{2.1}H=H_0+V(x)\sum_j\delta (t-j\tau )
\end{equation}
the Floquet operator is
\begin{equation}
\label{2.2}U=e^{-iV(x)}e^{-i\tau H_0(\frac \partial {\partial x_i})}
\end{equation}
in units where $\hbar =1$.
For the computation of the action of $U$ on a wave function $\psi (x)$, a fast Fourier transform algorithm $F$ and its inverse $F^{-1}$ are used
\begin{equation}
\label{2.3}U\psi (x)=e^{-iV(x)}F^{-1}e^{-i\tau H_0(ik)}F\psi (x)
\end{equation}
In this way one obtains a uniform algorithm for any potential. In the configurational quantum cat, Eq.(\ref{1.6}), the computation is similar, with the multiplicative kick $e^{-iV(x)}$ replaced by the substitution operator
\begin{equation}
\label{2.4}\psi (x)\rightarrow \psi (M^{-1}x)
\end{equation}
The quantum characteristic exponent is obtained from the calculation of
\begin{equation}
\label{2.5}\partial _xU^{-n}\widetilde{X}U^n\psi (x)
\end{equation}
in the limit of large $n$. The precision of the algorithm is checked by ensuring that
\begin{equation}
\label{2.6}\left| \left( U^{-n}U^n-1\right) \psi (x)\right| <\epsilon
\end{equation}
for a small $\epsilon $, and that the finite difference used to compute the derivative in (\ref{2.5}) does not approach the maximum possible value allowed by the discretization.

\subsection{The configurational quantum cat}

Here the configuration space is the 2-torus $T^2$, the Floquet operator is the one of Eq.(\ref{1.6}), and the kick is a substitution operator with matrix
\begin{equation}
\label{2.7}M=\left(
\begin{array}{cc}
1 & 1 \\
1 & 2
\end{array}
\right)
\end{equation}
Numerically we have computed the quantities
\begin{equation}
\label{2.8}\frac 1n\left\langle D_n-D_0\right\rangle =\frac 1n\left\langle \log \frac{\left| \textnormal{Re}\, v^i\frac \partial {\partial x^i}\left( U^{-n}\widetilde{X}U^n\Psi \right) (x)\right| }{\left| \textnormal{Re}\, v^i\frac \partial {\partial x^i}\left( \widetilde{X}\Psi \right) (x)\right| }\right\rangle _{T^2}
\end{equation}
$\widetilde{X}$ being the operator in (\ref{1.9}). The initial wave function is $\Psi (x)=1$ and the average is taken over the whole of configuration space.
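The split-step evolution (\ref{2.3}) and the unitarity check (\ref{2.6}) can be sketched in a few lines. The sketch below (our illustration, not the original code) uses the one-dimensional kicked rotator studied below, with $V(x)=q\cos (2\pi x)$ and $H_0=-\frac 1{2\pi }\frac{d^2}{dx^2}$ on $x\in [0,1)$, for which the free phase on the integer momentum modes $m$ is $e^{-i\tau 2\pi m^2}$:

```python
import numpy as np

def floquet(psi, q, tau, n, forward=True):
    """n steps of U = exp(-iV(x)) exp(-i tau H0) (or of U^{-1}) via FFT,
    for V(x) = q cos(2 pi x), H0 = -(1/2 pi) d^2/dx^2 on x in [0,1)."""
    N = psi.size
    x = np.arange(N) / N
    m = np.fft.fftfreq(N) * N                        # integer momentum modes
    kick = np.exp(-1j * q * np.cos(2 * np.pi * x))   # exp(-i V(x))
    free = np.exp(-1j * tau * 2 * np.pi * m**2)      # exp(-i tau H0), momentum space
    for _ in range(n):
        if forward:   # U psi = kick applied after the free evolution
            psi = kick * np.fft.ifft(free * np.fft.fft(psi))
        else:         # U^{-1} psi: undo the kick, then undo the free evolution
            psi = np.fft.ifft(free.conj() * np.fft.fft(kick.conj() * psi))
    return psi

# precision check of Eq. (2.6): |U^{-n} U^n psi - psi| must stay small
N, n, q, tau = 1024, 50, 5.0, np.sqrt(5) / 2
psi0 = np.ones(N, dtype=complex)                     # flat initial wave function
back = floquet(floquet(psi0, q, tau, n), q, tau, n, forward=False)
err = np.max(np.abs(back - psi0))                    # round-off level only
```

In the configurational quantum cat the multiplicative kick line would be replaced by the substitution (\ref{2.4}) on the grid; the same $U^{-n}U^n$ test applies.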
It turns out that, in this case, the derivative in the numerator of (\ref{2.8}) grows so fast at some points that one reaches, after a few iterations, the maximum difference (2 in this case) for the wave function at two nearby points in the discretization grid. When this happens the calculation cannot be reliably taken to higher $n$ with that discretization. In the numerical calculation of the classical Lyapunov exponent, the computation becomes a local evaluation at each step by rescaling the transported tangent vector. Here, because of the linearity of matrix elements and quantum evolution, a similar procedure is not possible and one has to carefully control the growth of the quantities in (\ref{2.8}). To improve statistics the average over configuration space has been taken. This can be safely done in this case because we know exactly the supporting measure (\ref{1.10}), but in general there will be no guarantee that the supporting spectral measure is uniform. In any case, average quantities like (\ref{2.8}) are exactly what we expect to be able to compute reliably. Fig.1 shows the evolution of $\frac 1n\left\langle D_n-D_0\right\rangle $ obtained with a discretization grid of 292681 points in the unit square, for two different directions $v$. The calculation was interrupted when the local finite differences reached one half of the maximum. The lines are fits to the points constrained to approach the same value at large $n$. The resulting numerical estimate for the largest quantum characteristic exponent is 0.95. The exact value obtained from (\ref{1.13}) is 0.9624.
\subsection{Quantum kicked rotators}

The configuration space is the circle $S^1$,
\begin{equation}
\label{2.9}V(x)=q\cos (2\pi x)
\end{equation}
$x\in [0,1)$, and for $H_0$ the following two possibilities were explored
\begin{equation}
\label{2.10}
\begin{array}{c}
H_0^{(1)}=-\frac 1{2\pi } \frac{d^2}{dx^2} \\
H_0^{(2)}=-2\pi \cos \left( \frac 1{2\pi i}\frac d{dx}\right)
\end{array}
\end{equation}
The operator $\widetilde{X}$ is
\begin{equation}
\label{2.11}\widetilde{X}=\sin (2\pi x)
\end{equation}
The quantity that is numerically computed is
\begin{equation}
\label{2.12}\left\langle D_n\right\rangle =\left\langle \log \left| \textnormal{Re}\, \frac \partial {\partial x}\left( U^{-n}\widetilde{X}U^n\Psi \right) (x)\right| \right\rangle
\end{equation}
and, in all cases, one starts from a flat initial wave function. In both cases, and for the many values of $q$ that were studied, this quantity seems either to stabilize or to have a very small rate of growth for large $n$. Fig.2, for example, shows the results for the $H_0^{(1)}$ case with $\tau =\frac{\sqrt{5}}2$ and $q=5$. The (numerical) conclusion is that the quantum characteristic exponent vanishes. This conclusion does not seem to be a numerical artifact because the discretization grid for the fast Fourier transform has always been chosen sufficiently small to ensure a small local finite difference for all iterations. For the case shown in Fig.2, the grid has 4096 points, which keeps the observed local differences below 0.1. Also, the vanishing of the quantum characteristic exponent in quantum kicked rotators does not depend on the phenomenon of localization, because for quantum resonances it may be shown exactly to vanish\cite{Vilela2}. In Fig.2 $\left\langle D_n\right\rangle $ seems to tend to a constant at large $n$. In other cases very slow rates of growth are observed. This is shown in Figs.3a,b for the $H_0^{(2)}$ case with $\tau =\frac{\sqrt{5}}2$ and $q=11$.
\section{Conclusions}

Both the classical Lyapunov exponent (\ref{1.1}) and its quantum counterpart (\ref{1.3}) measure the exponential rate of separation of matrix elements of $\widetilde{X}$ when the density (or the wave function) suffers a $\delta _x^{\prime }$ perturbation, $x$ being a point in the spectrum of $\widetilde{X}$. The configurational quantum cat example shows that there are instances of {\it true quantum chaos}, in the sense of exponential growth of the matrix element separation. However, as the numerical study of the quantum kicked rotators seems to show, exponential growth may be rather exceptional in quantum mechanics. Furthermore, the taming effect of quantum mechanics on exponential chaos goes deeper than the phenomenon of localization because, for quantum resonances, where no localization is present, the quantum characteristic exponent also vanishes. Although distinct from one another, all known approaches to the problem of quantum chaos seem to agree on one point, namely that quantum mechanics has a definite taming effect on chaos. This is now probably the main issue in quantum chaos, not only from the theoretical point of view, but also in the context of quantum control. Even if the quantum characteristic exponent, as defined in (\ref{1.3}), might be zero in most quantum systems, the rate of growth index $D_n$ or its average $\left\langle D_n\right\rangle $ might still be useful as a characterization of quantum dynamics because, even if weaker than exponential, a growth of this quantity would still be an indication of sensitivity to initial conditions. In particular, as suggested by the numerical results, subexponential rates of growth might characterize distinct complexity classes of quantum evolution.
\section{Figure captions}

Fig.1 - Calculation of $\frac 1n\left\langle D_n-D_0\right\rangle $, Eq.(\ref{2.8}), in the configurational quantum cat for two orthogonal directions $v$ and a fit constrained to the same limit at large $n$.

Fig.2 - $\left\langle D_n\right\rangle $, Eq.(\ref{2.12}), for the quantum standard map at $q=5$, $\tau =\frac{\sqrt{5}}2$.

Fig.3 - (a) $\left\langle D_n\right\rangle $, Eq.(\ref{2.12}), for a kicked rotator with kinetic Hamiltonian $H_0^{(2)}$ at $q=11$, $\tau =\frac{\sqrt{5}}2$; (b) the same scaled by $\log (\log (n+1))$.
0808.0311
\section{Introduction}
\label{sec:intro}
Response function calculation of NaI based scintillators has many applications, such as process control tasks in the manufacturing industry, oil detection, safety and alarm systems, and Prompt Gamma Neutron Activation Analysis (PGNAA) (see for example \cite{hakimabad_ARI_65_07_918}, \cite{tickner_ARI_53_00_507}, \cite{nafee_ARI_xx_08_xx} and references therein). In general, Monte Carlo techniques are used to calculate the interactions of source photons with the detector (\cite{mitra_ARI_63_05_415}, \cite{yalcin_ARI_65_07_1179}, \cite{cengiz_ARI_xx_08_xx}) and thus the response function. Analytical (\cite{abbas_ARI_55_01_245}, \cite{abbas_ARI_64_06_1057}) and statistical (\cite{sabharwal_ARI_xx_08_xx}) techniques have been used as well. On the other hand, during the detection of gamma rays, several problems are encountered, e.g., the efficiency vs. resolution trade-off of the semiconductor or scintillation detectors used, and the geometry, which in turn causes uncertainties in the solid-angle determination. Also, the form of the spectrum becomes more complex due to the following properties: (i) the scintillation detectors have a lower energy resolution compared to Ge detectors, (ii) the environment and/or shielding play an important role because of the scattering of high energy x-rays into the detector, (iii) the first and second escape peaks become important at high energies and (iv) a significant tail develops towards the low-energy continuum due to Compton scattering and escape of bremsstrahlung from the detector. These effects reduce the detection efficiency in the full-energy peak and also have other serious consequences. If the spectrum is complex, with a continuous $\gamma$-yield (e.g. due to statistical $\gamma$-rays following the decay of highly excited nuclei), the large superimposing continuous tails of the high-energy $\gamma$-rays may hamper an accurate evaluation of the continuous $\gamma$-yield.
To overcome these drawbacks several attempts have been made in the past. In experiments, a combination of different detectors (Ge and BaF, anti-Compton shields, etc.) has been used. However, these techniques either reduce the overall efficiency by rejecting a large part of the detected events (anti-Compton), or hamper a precise determination of the overall efficiency (addition of coincident signals from different types of detectors). In the data analysis, the generally used forward method fits the measured spectrum using appropriate physical models (input information): a master spectrum is generated using e.g. statistical model calculations (some model parameters are to be adjusted later), which is then folded with the detector response function, and the resulting spectrum is compared with observation. Finally, the model parameters are adjusted until an acceptable agreement is found. Problems arise here from peaks in the experimental spectrum due to contaminants in the target creating discrete lines, which cannot be simulated easily. The remaining problem is the choice of the physical model and the appropriate model parameters. If several physical processes compete, the generation of the master spectrum can often be ambiguous \cite{sukosd_NIMPR_355_95_552}. On the other hand, even if the model spectrum is accurate, the accuracy of the unfolding process is reduced for two main reasons: (i) the noise in the measured spectrum and (ii) the fact that the measured spectrum represents the total counts recorded in a finite energy interval, which is the channel width of the detector. The purpose of this work is to present a new method which can improve the unfolding procedure of a given measured spectrum. The method interpolates the ideal spectrum with the use of specially designed derivative kernels. Preliminary simulation results are presented which show that this approach is very effective even in spectra with low statistics.
\section{Derivative kernels in the unfolding procedure}
\label{sec:main}
Consider the case where a radioactive source emits photons in a uniform medium and a photon detector has been placed at a given point. After their emission and before they reach the detector, photons interact with the atoms of the uniform medium and can change their energy due to Compton scattering or pair production, or disappear due to the photoelectric effect. The effect of the interaction of photons with the medium can be formulated as follows. Let $S(E)$ be the source spectrum and $M(E)$ the measured one. In vacuum,
\begin{equation}
M(E)=\int _0^{\infty} R(E,V)\cdot S(V)\cdot dV
\end{equation}
where $R(E,V)$ is equal to the number of photons that will be recorded at energy $E$ when one photon is emitted with energy $V$. The function $R(E,V)$ is known as the transfer function of the detector. In the uniform medium, this relation is more complicated. If a photon with initial energy $U$ is emitted, then there is a probability $P(V,U)$ that the photon will reach the detector surface with a final energy $V$. Thus, the measured spectrum will now be given by:
\begin{equation}
M(E)=\int _0^{\infty} R(E,V)\cdot \left( \int _0^{\infty} P(V,U) \cdot S(U)\cdot dU \right)\cdot dV
\end{equation}
Changing the order of integration, the function
\begin{equation}
\hat{R}(E,V)= \int _0^{\infty} R(E,U)\cdot P(U,V)\cdot dU
\end{equation}
can now be regarded as the modified transfer function of the detector, for operation inside the uniform medium \cite{vlachos_JER_82_05_21}. The measured spectrum can now be expressed as:
\begin{equation}
M(E)=\int _0^{\infty} \hat{R}(E,V)\cdot S(V)\cdot dV
\end{equation}
But instead of the function $M(E)$, the detector integrates this function in small energy intervals, called channels. Thus, the detector output $\bar{M}(E)$ is given by:
\begin{equation}
\bar{M}(E)=\int _E^{E+\epsilon }M(V)\cdot dV
\end{equation}
where $\epsilon$ is the channel width.
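Discretized on an energy grid, the nested integrals above become matrix products, so the modified transfer function is simply the matrix product of $R$ and $P$. A small sketch of this bookkeeping, with made-up illustrative kernels (the grid size, bin width and random kernels are our assumptions, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                              # number of energy bins, illustrative
dE = 0.1                            # bin width, illustrative

# made-up nonnegative kernels standing in for R(E,V) and P(V,U)
R = rng.random((N, N))              # detector transfer function
P = np.tril(rng.random((N, N)))     # medium: photons can only lose energy (V <= U)
S = rng.random(N)                   # source spectrum

# M(E) via the two nested integrals, and via the modified transfer function
M_nested = R @ (P @ S) * dE**2
R_hat = R @ P * dE                  # discrete analogue of the R-hat integral
M_direct = R_hat @ S * dE
```

The two results coincide, which is just the discrete form of changing the order of integration; the lower-triangular $P$ encodes the fact that Compton scattering and pair production only degrade the photon energy.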
Consider now the function
\begin{equation}
m(E,E')=\int _{E}^{E+E'}M(V)\cdot dV
\end{equation}
Since $\bar{M}(E)$ is equal to $m(E,\epsilon )$, the 2-dimensional function $m(E,E')$ is known on the grid $(n_1\cdot \epsilon ,n_2 \cdot \epsilon ),\; n_1,n_2=0(1)N$. Our purpose now is to find optimal derivative kernels in order to calculate derivatives of the function $m(E,E')$. Then, we can calculate $M(E)$:
\begin{equation}
\lim _{E'\rightarrow 0} \frac{\partial m(E,E')}{\partial E'}=M(E)
\label{eq_def}
\end{equation}
An important property of $m(E,E')$ which allows for the application of equation (\ref{eq_def}) is that:
\begin{equation}
m(E,-E')=-m(E-E',E')
\end{equation}
Figure \ref{fig_2dim} shows the measured spectrum from a NaI detector in an underwater experiment, described in \cite{tsabaris_MMS_6_05_35}. Both $\bar{M}(E)$ and the $M(E)$ calculated from equation (\ref{eq_def}) are shown in Figure \ref{fig_both}. The unfolding of the gamma ray spectrum $M(E)$ can now be easily obtained in the case where the radioactive sources emit photons at discrete energies and the counting rate is low enough to avoid additive effects in the detector. In this case, and based on the linearity of the folding mechanism, we assume that
\begin{equation}
S(E)=\sum _{n=1}^k a_n \delta (E-E_n)
\end{equation}
and we want to calculate both $a_n$ and $E_n$. Then, it is easily found that
\begin{equation}
M(E)=\sum _{n=1}^k a_n \hat{R}(E,E_n)
\end{equation}
Finally, consider a continuous function $g:R^2\rightarrow R$ and its discrete version
\begin{equation}
g_s=\sum _{n_1,n_2 =-\infty}^{\infty}g(x,y)\delta (x-n_1T)\cdot \delta (y-n_2T)
\end{equation}
where $\delta $ is the Dirac delta function.
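Equation (\ref{eq_def}) can be sanity-checked with a toy spectrum for which $m(E,E')$ is known in closed form (our own illustration; $M(V)=e^{-V}$ and $\epsilon =0.01$ are arbitrary choices):

```python
import numpy as np

eps = 0.01                         # channel width, illustrative
E = np.arange(0.0, 1.0, eps)       # channel edges

def m(E, Ep):
    """m(E,E') = integral_E^{E+E'} M(V) dV for the toy spectrum M(V) = exp(-V)."""
    return np.exp(-E) - np.exp(-(E + Ep))

# the detector output is m(E, eps); a one-channel difference quotient
# approximates the E' derivative at E' -> 0
M_est = m(E, eps) / eps
M_true = np.exp(-E)
rel_err = np.max(np.abs(M_est - M_true) / M_true)   # O(eps/2) for this spectrum
```

The naive one-channel quotient is only first-order accurate in $\epsilon $; this discretization error is precisely what the derivative kernels discussed next are designed to reduce.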
The knowledge of the discrete version $g_s$ can lead to the reconstruction of a continuous function $\bar{g}$ with the aid of a kernel $K$ such that
\begin{equation}
\bar{g}(x,y)=\sum _{n_1,n_2 =-\infty}^{\infty} g_s(n_1,n_2) K(x-n_1T,y-n_2T)
\end{equation}
The ideal interpolation, where $\bar{g}=g$, is achieved if $K(x,y)=s_T(x)\cdot s_T (y)$, where
\begin{equation}
s_T(x)=\frac{\sin (\pi x /T)}{\pi x/T}
\end{equation}
and $T$ is the sampling period. For practical reasons, we assume that
\begin{equation}
K(x,y)=d_0(x)\cdot d_0(y)
\end{equation}
and
\begin{equation}
\frac{d^nd_0(x)}{dx^n}=d_n(x)
\end{equation}
Then, the expression for the derivative with respect to $x$ of the reconstructed function $\bar{g}$ becomes:
\begin{equation}
D_x\{\bar{g}\}(x,y)=\sum _{n_1,n_2 =-\infty}^{\infty}g_s(n_1,n_2)d_1(x-n_1T)d_0(y-n_2T)
\end{equation}
In order to construct an efficient kernel, it is not necessary that $d_1(x)=d_0'(x)$. Although this may seem counterintuitive, consider the following example: it is common to use a sampled Gaussian and its derivative. However, because the Gaussian is not strictly bandlimited, sampling introduces artifacts, thus destroying the derivative relationship between the resulting kernels. So, instead, we choose to simultaneously design a pair of discrete kernels that optimally preserve the required derivative relationship. If
\begin{equation}
D_0(\omega)=\sum _n d_0(nT)e^{-i\omega n} \;\;,\;\; D_1(\omega)=\sum _n d_1(nT)e^{-i\omega n}
\end{equation}
are the discrete Fourier transforms of $d_0,d_1$, with $\omega$ the angular frequency, we can demand that
\begin{equation}
i\omega D_0(\omega )=D_1(\omega )
\end{equation}
in the case of one dimensional signals $g$. In the case of two dimensional signals, we can demand for example that the pair of kernels preserve the derivative relationship in all directions \cite{farid_ICCAIP_97}.

\section{Simulation results}
\label{sec:res}
In order to test the new method, a simulation experiment was performed.
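A simplified variant of this joint design can be written down directly (our sketch, not the procedure of \cite{farid_ICCAIP_97}, which optimizes both kernels): fix a symmetric smoothing prefilter $d_0$ and solve, in the least-squares sense, for antisymmetric derivative taps $d_1$ satisfying $i\omega D_0(\omega )\approx D_1(\omega )$ over a design band. For odd taps $D_1(\omega )=-2i\sum_{n>0}d_1(n)\sin (n\omega )$, so the condition becomes real-valued:

```python
import numpy as np

# fixed symmetric smoothing prefilter d0 (binomial, unit DC gain), taps n = -2..2
d0 = np.array([1, 4, 6, 4, 1]) / 16.0
w = np.linspace(0.01, 2.0, 400)                # design band (rad/sample)
D0 = d0[2] + 2 * d0[3] * np.cos(w) + 2 * d0[4] * np.cos(2 * w)  # real and even

# solve  w*D0(w) = -2*(b1 sin w + b2 sin 2w)  for the odd taps b1 = d1(1), b2 = d1(2)
A = -2.0 * np.stack([np.sin(w), np.sin(2 * w)], axis=1)
b1, b2 = np.linalg.lstsq(A, w * D0, rcond=None)[0]
d1 = np.array([-b2, -b1, 0.0, b1, b2])         # antisymmetric taps n = -2..2

# residual of the derivative-matching condition over the design band
resid = np.max(np.abs(w * D0 + 2 * (b1 * np.sin(w) + b2 * np.sin(2 * w))))
```

With only two free taps the residual over the design band is at the percent level; \cite{farid_ICCAIP_97} obtains smaller errors by optimizing $d_0$ and $d_1$ jointly.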
A folded spectrum is produced using the transfer function for a NaI based measuring system calculated in \cite{vlachos_NIMPR_539_05_414}. The spectrum is folded again using the method presented in \cite{vlachos_CPC_174_06_391} in order to simulate an underwater measuring system. Several simulated spectra were produced, with different numbers of photo-peaks and different numbers of total recorded counts, in order to account for the spectrum statistics. Figure \ref{fig_comp1} shows the overall error in the unfolded spectrum, using a Gaussian derivative kernel ($\circ$) and three derivative kernels DK3 ($\Box$), DK4 ($\times$) and DK5 ($\diamond$) with 3, 4 and 5 points respectively, calculated in \cite{farid_ICCAIP_97}. Furthermore, in Figure \ref{fig_comp2} the dependence of the error in the unfolded spectrum on spectrum statistics is shown for the Gaussian ($\circ$) and DK5 ($\times$) derivative kernels. It is clear that the new method is very promising even in cases with low statistics. A special experimental setup is under construction to test the new method on real spectra. Moreover, new derivative kernels are being designed in order to optimize their behavior.

\section{Conclusions}
\label{sec:concl}
Preliminary results on the interpolation of a measured spectrum with derivative kernels show that the unfolding procedure becomes more accurate even in cases of low statistics. The use of derivative kernels facilitates numerical differentiation, which is of high importance in both peak detection and spectrum unfolding.

\section*{Acknowledgments}
This paper is part of the $03ED51$ research project, implemented within the framework of the "\emph{Reinforcement Programme of Human Research Manpower}" (PENED) and co-financed by National and Community Funds ($25\%$ from the Greek Ministry of Development-General Secretariat of Research and Technology and $75\%$ from E.U.-European Social Fund).

\bibliographystyle{elsarticle-num}
0808.0245
\section{Introduction}
The thin accretion disk model describes flows in which the viscous heating of the gas radiates out of the system immediately after generation (Shakura \& Sunyaev 1973). However, another kind of accretion has been studied during recent years, where radiative energy losses are small so that most of the energy is advected with the gas. These Advection-Dominated Accretion Flows (ADAF) occur in two regimes depending on the mass accretion rate and the optical depth. At very high mass accretion rate, the optical depth becomes very high and the radiation can be trapped in the gas. This type of accretion, which is known under the name 'slim accretion disk', has been studied in detail by Abramowicz et al. (1988). But when the accretion rate is very small and the optical depth is very low, we may have another type of accretion (Narayan \& Yi 1994; Abramowicz et al. 1995; Chen 1995). However, numerical simulations of radiatively inefficient accretion flows revealed that low viscosity flows are convectively unstable and convection strongly influences the global properties of the flow (e.g., Igumenshchev, Abramowicz, \& Narayan 2000). Thus, another type of accretion flow has been proposed, in which convection acts as the dominant mechanism for transporting angular momentum and the locally released viscous energy (e.g., Narayan, Igumenshchev, \& Abramowicz 2000). This diversity of models tells us that modeling the hot accretion flows is a challenging and controversial problem. We think one of the largely neglected physical ingredients in this field is {\it thermal conduction}. Only a few authors have tried to study the role of "turbulent" heat transport in ADAF-like flows (Honma 1996; Manmoto et al. 2000).
Since thermal conduction acts to oppose the formation of the temperature gradient that causes it, one might expect the temperature and density profiles of accretion flows in which thermal conduction plays a significant role to differ from those of flows in which thermal conduction is less effective. Just recently, Johnson \& Quataert (2007) studied the effects of electron thermal conduction on the properties of hot accretion flows, under the assumption of spherical symmetry. In another interesting analysis, Tanaka \& Menou (2006) showed that thermal conduction affects the global properties of hot accretion flows substantially. They generalized standard ADAF solutions to include a saturated form of thermal conduction. In the second part of their paper, a set of two dimensional self-similar solutions of ADAFs in the presence of thermal conduction is presented. According to their 2D solutions, the role of conduction is to provide the extra degree of freedom necessary to launch thermal outflows. On the other hand, ADAFs with winds or outflows have been studied extensively during recent years, irrespective of the possible driving mechanisms of the winds. But thermal conduction has been neglected in all these ADAF solutions with winds. In advection-dominated inflow-outflow solutions (ADIOS), it is generally assumed that the mass flow rate has a power-law dependence on radius, with the power-law index, $s$, treated as a parameter (e.g., Blandford \& Begelman 1999; Quataert \& Narayan 1999; Beckert 2000; Misra \& Taam 2001; Fukue 2004; Xie \& Yuan 2008). Beckert (2000) presented self-similar solutions for ADAFs with radial viscous force in the presence of outflows from the accretion flow or infall. Turolla \& Dullemond (2000) investigated how, and to what extent, the inclusion of the source of ADAF material affects the Bernoulli number and the onset of a wind. In their model, the accretion rate decreases with radius ($s<0$).
Misra \& Taam (2001) studied the effect of a possible hydrodynamical wind on the nature of hot accretion disc solutions. They showed that their solutions are locally unstable to a new type of instability, called the wind-driven instability, in which the presence of a wind causes the disc to be unstable to long-wavelength perturbations of the surface density. Kitabatake, Fukue \& Matsumoto (2002) studied supercritical accretion discs with winds, although the angular momentum loss of the disc due to the winds was neglected. Comparisons with observations reveal that the X-ray spectra of such a wind-driven self-similar flow can explain the observed spectra of black hole candidates in quiescence (Quataert \& Narayan 1999; Yuan, Markoff \& Falcke 2002). Lin, Misra \& Taam (2001) found that the spectral characteristics of high-luminosity black hole systems suggest that winds may be important for these systems as well. Considering the extensive work on hot accretion flows with winds and the significant role of thermal conduction in driving outflows (Tanaka \& Menou 2006), we study ADAFs with outflows {\it and} thermal conduction using a height-integrated set of equations. A phenomenological approach is adopted, in which we parameterize the rates at which mass, angular momentum and energy are extracted by the outflow or wind. In the next section, we present the basic equations of the model. Self-similar solutions are investigated in section 3. The paper concludes with a summary of the results in section 4.

\section{General formulation}
We consider an accretion disc that is axisymmetric and geometrically thin, i.e. $H/R < 1$. Here $R$ and $H$ are, respectively, the disc radius and the half-thickness. The disc is supposed to be turbulent and to possess an effective turbulent viscosity. Consider the stationary height-integrated equations describing an accretion flow onto a central object of mass $M_{\ast}$.
The continuity equation reads \begin{equation} \frac{\partial}{\partial R} (R\Sigma v_{\rm R}) + \frac{1}{2\pi} \frac{\partial \dot{M}_{\rm w}}{\partial R} = 0,\label{eq:con} \end{equation} where $v_{\rm R}$ is the accretion velocity ($v_{\rm R}<0$) and $\Sigma = 2\rho H$ is the surface density at a cylindrical radius $R$. Also, $\rho$ is the midplane density of the disc and the mass-loss rate by outflow/wind is represented by $\dot{M}_{\rm w}$. So, \begin{equation} \dot{M}_{\rm w}(R) = \int 4\pi R' \dot{m}_{\rm w} (R') dR',\label{eq:mdot} \end{equation} where $\dot{m}_{\rm w} (R)$ is the mass-loss rate per unit area from each disc face. Xie \& Yuan (2008) derived the height-integrated accretion equations including the coupling between the inflow and outflow, to investigate the influence of outflow on the dynamics of the hot inflow. They showed that, under reasonable assumptions about the properties of the outflow, its main influence can be properly included by adopting a radius-dependent mass accretion rate. We write the dependence of the accretion rate $\dot{M}$ as follows (e.g., Blandford \& Begelman 1999) \begin{equation} \dot{M}=-2\pi R \Sigma v_{\rm R} = \dot{M}_{0}(\frac{R}{R_{0}})^{s},\label{MMdot} \end{equation} where $\dot{M}_{0}$ is the mass accretion rate at the outer boundary $R_{0}$. The parameter $s$ describes how the density profile and the accretion rate are modified. In this paper, the typical values of $s$ considered are between $s=0$ (no winds) and $s=0.3$ (moderately strong wind). The above prescription for the mass accretion rate has been used widely for $s>0$ (e.g., Quataert \& Narayan 1999; Beckert 2000; Misra \& Taam 2001; Fukue 2004) or $s<0$ (Turolla \& Dullemond 2000). Considering equations (\ref{eq:con}), (\ref{eq:mdot}) and (\ref{MMdot}), we can obtain \begin{equation} \dot{m}_{\rm w}=\frac{s}{4\pi R_{0}^{2}} \dot{M}_{0} (\frac{R}{R_{0}})^{s-2}.
\end{equation} The equation of motion in the radial direction is \begin{equation} v_{\rm R}\frac{dv_{\rm R}}{dR}=R(\Omega^{2}-\Omega_{\rm K}^{2})-\frac{1}{\rho}\frac{d}{dR}(\rho c_{\rm s}^{2}), \end{equation} where $\Omega$ is the angular velocity and $\Omega_{\rm K}=\sqrt{GM_{\ast}/R^{3}}$ is the Keplerian angular velocity. Also, $c_{\rm s}$ is the sound speed and, from vertical hydrostatic equilibrium, we have $H=c_{\rm s}/\Omega_{\rm K}$. Similarly, integration over $z$ of the azimuthal equation of motion gives (e.g., Knigge 1999) \begin{equation} R\Sigma v_{\rm R} \frac{d}{dR} (R^{2}\Omega) = \frac{d}{dR}(R^{3}\nu \Sigma \frac{d\Omega}{dR})-\frac{(lR)^{2}\Omega}{2\pi}\frac{d\dot{M}_{\rm w}}{dR}, \end{equation} where the last term on the right-hand side represents the angular momentum carried away by the outflowing material. Here, $l=0$ corresponds to a non-rotating wind and $l=1$ to outflowing material that carries away the specific angular momentum it had at the point of ejection (Knigge 1999). Also, $\nu$ is the kinematic viscosity coefficient and we assume \begin{equation} \nu = \alpha c_{\rm s} H, \end{equation} where $\alpha$ is a constant less than unity (Shakura \& Sunyaev 1973). In order to implement thermal conductivity correctly, it is essential to know whether the mean free path is less than (or comparable to) the scale length of the temperature gradient. For electron mean free paths greater than the scale length of the temperature gradient, the thermal conductivity is said to `saturate' and the heat flux approaches a limiting value (Cowie \& McKee 1977). But when the mean free paths are much smaller than the scale length of the temperature gradient, the heat flux depends on the coefficient of thermal conductivity and the temperature gradient. Generally, thermal conduction transfers heat so as to oppose the temperature gradient that causes the transfer. Tanaka \& Menou (2006) argued that hot accretion likely proceeds under weakly collisional conditions in these systems.
Thus, a saturated form of ``microscopic'' thermal conduction is physically well motivated, and we apply it in this study. However, one of the primary problems in studying the effects of thermal conduction in plasmas is the unknown value of the thermal conductivity. Now, we can write the energy equation by considering the energy balance in the system. We assume that the energy generated by viscous dissipation and the heat conducted into the volume under consideration are balanced by the advective cooling and the energy loss of the outflow. Thus, \begin{displaymath} \frac{\Sigma v_{\rm R}}{\gamma -1}\frac{dc_{\rm s}^{2}}{dR}-2Hv_{\rm R}c_{\rm s}^{2}\frac{d\rho}{dR}=f\frac{\alpha\Sigma c_{\rm s}^{2}R^{2}}{\Omega_{\rm K}}(\frac{d\Omega}{dR})^{2} \end{displaymath} \begin{equation} -\frac{2H}{R}\frac{d}{dR}(RF_{\rm s})-\frac{1}{2}\eta \dot{m}_{\rm w}(R) v_{\rm K}^{2}(R), \end{equation} where the second term on the right-hand side represents the energy transfer due to thermal conduction and $F_{\rm s} = 5 \phi_{\rm s} \rho c_{\rm s}^{3}$ is the saturated conduction flux in the direction of the temperature gradient (Cowie \& McKee 1977). The dimensionless coefficient $\phi_{\rm s}$ is less than unity. Also, the last term on the right-hand side of the energy equation is the energy loss due to the wind or outflow (Knigge 1999). Depending on the energy-loss mechanism, the dimensionless parameter $\eta$ may vary. We consider it a free parameter of our model, such that a larger $\eta$ corresponds to more energy extraction from the disc by the outflows (Knigge 1999). \section{Self-similar solutions} A self-similar solution is not able to describe the {\it global} behaviour of an accretion flow, because no boundary conditions have been taken into account. However, as long as we are not interested in the behaviour of the flow at the boundaries, such a solution correctly describes the true solution asymptotically at large radii. We assume that each physical quantity can be expressed as a power law of the radial distance, i.e.
$R^{\nu}$, where the power index $\nu$ is determined for each physical quantity self-consistently. The solutions are \begin{equation} \Sigma (R) = \omega_{0} \Sigma_{0}( \frac{R}{R_{0}})^{s-\frac{1}{2}}, \end{equation} \begin{equation} \Omega (R) = \omega_{1} \sqrt{\frac{GM_{\ast}}{R_{0}^{3}}} (\frac{R}{R_{0}})^{-3/2}, \end{equation} \begin{equation} v_{\rm R}(R) = -\omega_{2} \sqrt{\frac{GM_{\ast}}{R_{0}}} (\frac{R}{R_{0}})^{-1/2}, \end{equation} \begin{equation} P(R) = \omega_{3} P_{0} (\frac{R}{R_{0}})^{s-\frac{3}{2}}, \end{equation} \begin{equation} c_{s}^{2}(R) = \frac{\omega_{3}}{\omega_{0}} (\frac{GM_{\ast}}{R_{0}}) (\frac{R}{R_0})^{-1}, \end{equation} \begin{equation} H(R) = \omega R_{0} (\frac{R}{R_0}), \end{equation} where $\Sigma_0$ and $R_{0}$ provide convenient units with which the equations can be written in the non-dimensional forms. Note that $P(R)$ is the height-integrated pressure. By substituting the above self-similar solutions into the dynamical equations of the system, we obtain the following system of dimensionless equations, to be solved for $\omega$, $\omega_0$, $\omega_1$, $\omega_2$ and $\omega_3$: \begin{equation} \omega_{0}\omega_{2}=\dot{m}, \end{equation} \begin{equation} \omega_{0}\omega^{2}-\omega_{3}=0, \end{equation} \begin{equation} -\frac{\omega_{2}^{2}}{2}=\omega_{1}^{2}-1+(\frac{5}{2}-s)\frac{\omega_{3}}{\omega_{0}}, \end{equation} \begin{equation} \frac{1}{2}\dot{m}-\frac{3}{2}(s+\frac{1}{2})\alpha \omega_{3}-sl^{2}\dot{m}=0, \end{equation} \begin{displaymath} \left [\frac{1}{\gamma -1}+(s-\frac{3}{2}) \right ]\omega_{2}\omega_{3}=\frac{9}{4}\alpha f \omega_{1}^{2}\omega_{3}+5(2-s)\phi_{\rm s} \end{displaymath} \begin{equation} \times\omega_{3}\sqrt{\frac{\omega_3}{\omega_0}}-\frac{\eta s \dot{m}}{4}, \end{equation} where $\dot{m}=\dot{M}_{0}/(2\pi R_{0}\Sigma_{0}\sqrt{GM_{\ast}/R_{0}})$ is the nondimensional mass accretion rate. 
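As a simple symbolic cross-check of the algebra (an illustrative sketch added here, not part of the original derivation), one can solve the dimensionless continuity, vertical hydrostatic and angular-momentum relations above for $\omega_{0}$, $\omega_{2}$ and $\omega_{3}$ and confirm that they reproduce the closed forms quoted below:

```python
import sympy as sp

alpha, s, l, mdot, w = sp.symbols('alpha s l mdot omega', positive=True)
w0, w2, w3 = sp.symbols('omega_0 omega_2 omega_3', positive=True)

# Dimensionless relations from the self-similar substitution:
eqs = [
    sp.Eq(w0 * w2, mdot),                 # continuity: omega_0*omega_2 = mdot
    sp.Eq(w0 * w**2, w3),                 # vertical equilibrium: omega_0*omega^2 = omega_3
    sp.Eq(mdot / 2
          - sp.Rational(3, 2) * (s + sp.Rational(1, 2)) * alpha * w3
          - s * l**2 * mdot, 0),          # angular-momentum balance
]
sol = sp.solve(eqs, [w0, w2, w3], dict=True)[0]

# Closed forms quoted in the text (equations for omega_0, omega_2, omega_3)
w0_text = 2 / (3 * alpha) * (1 - 2 * s * l**2) / (1 + 2 * s) * mdot / w**2
w2_text = sp.Rational(3, 2) * alpha * (1 + 2 * s) / (1 - 2 * s * l**2) * w**2
w3_text = 2 / (3 * alpha) * (1 - 2 * s * l**2) / (1 + 2 * s) * mdot

assert sp.simplify(sol[w0] - w0_text) == 0
assert sp.simplify(sol[w2] - w2_text) == 0
assert sp.simplify(sol[w3] - w3_text) == 0
```

The three assertions pass, confirming that the quoted coefficients are term-by-term consistent with the dimensionless equations.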
After some algebraic manipulations, we obtain a fourth-order algebraic equation for $\omega$: \begin{displaymath} \frac{9\alpha^2}{8} (\displaystyle\frac{1+2s}{1-2sl^2})^{2} \omega^4 + [ \frac{5}{2} - s + \frac{1+2s}{f(1-2sl^{2})}(\frac{2s}{3}+ \end{displaymath} \begin{equation} \displaystyle \frac{5/3 - \gamma}{\gamma -1}) ] \omega^2 + \frac{20 (s-2)\phi_{\rm s}}{9 \alpha f} \omega + \frac{\eta s}{6f}(\frac{1+2s}{1-2sl^2})-1=0,\label{eq:main} \end{equation} and the rest of the physical variables are \begin{figure*} \vspace*{+100pt} \includegraphics[scale=0.8]{Figure1.eps} \caption{Profiles of the physical variables of the accretion disc versus the saturation constant $\phi_{\rm s}$, taking $\alpha=0.2$, $\gamma=1.5$, $l=1$, $\eta=1$ and $f=1$. Each curve is labeled by its corresponding exponent $s$.} \label{fig:1} \end{figure*} \begin{figure*} \includegraphics[scale=0.8]{Figure2.eps} \caption{The same as Figure \ref{fig:1}, but with $s=0.3$, and each curve labeled by its corresponding coefficient $\eta$. Solid curves are for $l=1$ and dashed lines represent solutions with $l=0$.} \label{fig:2} \end{figure*} \begin{equation} \omega_{0}=\frac{2}{3\alpha}(\frac{1-2sl^2}{1+2s})\dot{m}\omega^{-2},\label{eq:omega0} \end{equation} \begin{equation} \omega_{1}=\displaystyle\sqrt{1-(\frac{5}{2}-s)\omega^{2}-\frac{9\alpha^{2}}{8}(\displaystyle\frac{1+2s}{1-2sl^{2}})^{2}\omega^{4}},\label{eq:omega1} \end{equation} \begin{equation} \omega_{2}=\frac{3\alpha}{2}(\frac{1+2s}{1-2sl^{2}})\omega^{2}, \end{equation} \begin{equation} \omega_{3}=\frac{2}{3\alpha}(\frac{1-2sl^2}{1+2s})\dot{m}. \end{equation} We can solve the algebraic equation (\ref{eq:main}) numerically; clearly, only the real roots that correspond to a positive $\omega_{1}^{2}$ are physically acceptable. Without mass outflow and thermal conduction, i.e. $s=l=\eta=0$ and $\phi_{\rm s}=0$, equation (\ref{eq:main}) and the similarity solutions reduce to the standard ADAF solutions (Narayan \& Yi 1994).
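For illustration, equation (\ref{eq:main}) can be solved with a standard polynomial root finder. The following sketch (our own illustrative implementation, not the code used to produce the figures) retains only the real, positive roots that give $\omega_{1}^{2}>0$ in equation (\ref{eq:omega1}):

```python
import numpy as np

def physical_omega(alpha=0.2, gamma=1.5, f=1.0, s=0.0, l=0.0,
                   eta=0.0, phi_s=0.0):
    """Solve the fourth-order equation for omega and keep only real,
    positive roots that yield omega_1**2 > 0."""
    q = (1.0 + 2.0*s) / (1.0 - 2.0*s*l**2)
    a4 = 9.0*alpha**2/8.0 * q**2                        # omega^4 coefficient
    a2 = 2.5 - s + q/f * (2.0*s/3.0 + (5.0/3.0 - gamma)/(gamma - 1.0))
    a1 = 20.0*(s - 2.0)*phi_s / (9.0*alpha*f)           # conduction term
    a0 = eta*s/(6.0*f) * q - 1.0                        # wind energy-loss term
    roots = np.roots([a4, 0.0, a2, a1, a0])

    physical = []
    for w in roots:
        if abs(w.imag) > 1e-10 or w.real <= 0.0:
            continue
        w = w.real
        w1_sq = 1.0 - (2.5 - s)*w**2 - a4*w**4          # omega_1**2
        if w1_sq > 0.0:
            physical.append((w, np.sqrt(w1_sq)))
    return physical  # list of (omega, omega_1) pairs
```

In the standard ADAF limit ($s=l=\eta=\phi_{\rm s}=0$, $\alpha=0.2$, $\gamma=1.5$, $f=1$) this yields a single physical root, $\omega \simeq 0.59$ with $\omega_{1} \simeq 0.34$, i.e. a geometrically thick, sub-Keplerian flow of the Narayan \& Yi (1994) type.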
Also, in the absence of a wind but with thermal conduction, equation (\ref{eq:main}) reduces to equation (17) of Tanaka \& Menou (2006), but our main algebraic equation includes both outflows and thermal conduction. Now we can analyse the behaviour of the solutions in the presence of the wind and thermal conduction. Our primary goal is to consider the effects of winds and thermal conduction via the parameters $s$, $l$, $\eta$ and $\phi_{\rm s}$. First, we can summarize the typical behaviour of the standard ADAF solutions as follows (Narayan \& Yi 1994): (a) the surface density increases with the accretion rate, and decreases with the viscosity coefficient $\alpha$; (b) the radial velocity is directly proportional to the viscosity coefficient; (c) the gas rotates with a sub-Keplerian angular velocity, more or less independent of the coefficient $\alpha$; and, finally, (d) the opening angle of the disc is fixed, independent of $\alpha$ and $\dot{m}$. Mass outflows and thermal conduction may modify these behaviours according to our solutions. Equation (\ref{eq:omega0}) shows that the surface density is directly proportional to the mass accretion rate, but the dependence of the surface density on the accretion rate exponent $s$ is determined by solving equation (\ref{eq:main}). The rotational and radial velocities are both independent of the nondimensional mass accretion rate $\dot{m}$. Also, the dependence of the radial velocity on the viscosity coefficient is determined by the main algebraic equation. In the absence of mass outflows and thermal conduction, the opening angle of the disc is independent of the accretion rate and the viscosity coefficient, as can be understood from equation (\ref{eq:main}) by setting $s=l=\eta=\phi_{\rm s}=0$. But depending on the angular momentum and energy exchanges due to the wind (i.e. $l$ and $\eta$), the variable accretion rate (i.e. $s$) and the thermal conduction (i.e.
$\phi_{\rm s}$), the thickness of the disc may change according to the acceptable roots of equation (\ref{eq:main}). We can see that the rotational velocity is sub-Keplerian according to equation (\ref{eq:omega1}). Figure \ref{fig:1} shows the profiles of the physical variables versus the thermal conduction coefficient $\phi_{\rm s}$ for different accretion rate exponents, i.e. $s=0$, $0.1$, $0.2$ and $0.3$. The value of $s$ measures the strength of the outflow, with a larger $s$ denoting a stronger outflow. The other input parameters are $\alpha=0.2$, $\gamma=1.5$, $f=1$ and $l=\eta=1$, and each curve is labeled by its corresponding $s$. Recent work by Sharma et al. (2006) suggests that the viscosity parameter in a hot accretion flow will be larger than in a standard thin disc. If $\alpha$ is much smaller than $0.25$, the maximum accretion rate up to which the ADAF solution is possible decreases significantly and the maximum luminosity of the models becomes much smaller than the observed luminosities (Quataert \& Narayan 1999). So, $\alpha=0.2$ is probably not unrealistic for our analysis. Clearly, the profiles with $s=0$ represent no-wind solutions (Tanaka \& Menou 2006). We can see that ADAFs with winds rotate more quickly than those without winds. Also, the viscous dissipation is expected to be larger in the presence of winds and outflows. Strong-wind models have a lower surface density than weak-wind models. While the solutions without thermal conduction are recovered at small $\phi_{\rm s}$ values, we can see significant deviations as $\phi_{\rm s}$ increases. Note that, for a given set of input parameters, the solutions reach a non-rotating limit at a specific value of $\phi_{\rm s}$, which we denote by $\phi_{\rm s}^{\rm c}$. We cannot extend the profiles beyond $\phi_{\rm s}^{\rm c}$, because equation (\ref{eq:omega1}) then gives a negative $\omega_{1}^{2}$, which is clearly unphysical.
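The critical value $\phi_{\rm s}^{\rm c}$ can be estimated numerically by scanning $\phi_{\rm s}$ upward until equation (\ref{eq:main}) no longer admits a real root with $\omega_{1}^{2}>0$. The following self-contained sketch (an illustrative implementation of this scan, with parameter defaults matching Figure \ref{fig:1}) does exactly that:

```python
import numpy as np

def critical_phi_s(s, alpha=0.2, gamma=1.5, f=1.0, l=1.0, eta=1.0,
                   dphi=1e-4):
    """Scan phi_s upward until the quartic in omega no longer has a
    real, positive root with omega_1**2 > 0; return the last physical
    value of phi_s as an estimate of phi_s^c."""
    q = (1.0 + 2.0*s) / (1.0 - 2.0*s*l**2)
    a4 = 9.0*alpha**2/8.0 * q**2
    a2 = 2.5 - s + q/f * (2.0*s/3.0 + (5.0/3.0 - gamma)/(gamma - 1.0))
    a0 = eta*s/(6.0*f)*q - 1.0

    phi_s = 0.0
    while True:
        a1 = 20.0*(s - 2.0)*phi_s/(9.0*alpha*f)
        roots = np.roots([a4, 0.0, a2, a1, a0])
        ok = any(abs(w.imag) < 1e-10 and w.real > 0.0
                 and 1.0 - (2.5 - s)*w.real**2 - a4*w.real**4 > 0.0
                 for w in roots)
        if not ok:
            return phi_s - dphi   # last phi_s with a physical root
        phi_s += dphi
```

With $\alpha=0.2$, $\gamma=1.5$, $f=1$ and $l=\eta=1$, this scan gives $\phi_{\rm s}^{\rm c}$ of roughly $0.009$ for $s=0$, rising to roughly $0.09$ for $s=0.3$, reproducing the trend that higher $s$ corresponds to a larger $\phi_{\rm s}^{\rm c}$.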
As the profiles of Figure \ref{fig:1} show, the critical magnitude of the conduction coefficient $\phi_{\rm s}^{\rm c}$, for which the solutions tend to the non-rotating limit, depends strongly on the accretion rate exponent $s$. In fact, higher values of $s$ correspond to a larger $\phi_{\rm s}^{\rm c}$. When there is mass outflow, one can expect a lower surface density. Figure \ref{fig:1} shows that, for non-zero $s$, the surface density is lower than in the standard ADAF solution and, for stronger outflows, this reduction of the surface density is more evident. The profile of the surface density is not affected by $\phi_{\rm s}$ when this parameter is small, but as the conduction coefficient $\phi_{\rm s}$ increases, the surface density decreases. In particular, this reduction due to thermal conduction is more significant for a larger accretion rate exponent $s$. In other words, when the disc loses more mass because of the winds or outflows, the surface density decreases more significantly due to thermal conduction in comparison with the no-wind solutions. In effect, outflows act as a cooling agent while thermal conduction provides extra heating, and there is a competition between these physical factors. While the effects of winds and conduction on the profiles of the surface density and the accretion velocity are similar, their effects on the rotational velocity and the temperature oppose each other. Both winds and thermal conduction lead to an enhanced accretion velocity and a reduced surface density. However, winds increase the rotational velocity and decrease the temperature, whereas thermal conduction decreases the rotational velocity and increases the temperature. Angular momentum conservation implies $(1-2sl^2)>0$, which is trivially valid for non-rotating winds (i.e. $l=0$); for rotating winds, this inequality implies $s<1/2$. Figure \ref{fig:2} shows the physical profiles for various $\eta$ and $l$, taking $s=0.3$, $\alpha=0.2$, $\gamma=1.5$ and $f=1$.
Profiles corresponding to rotating winds (i.e. $l=1$) are shown by solid curves, while dashed curves show non-rotating solutions ($l=0$). Each curve is labeled by its corresponding coefficient $\eta$. The higher this parameter, the more energy flux is carried away by the winds. According to the plots of Figure \ref{fig:2}, solutions with rotating winds are more sensitive to variations of the parameter $\eta$. As more energy flux is extracted from the disc by the winds (i.e. higher $\eta$), a lower level of saturated thermal conduction is enough to modify the physical profiles significantly. In particular, this behaviour is more evident for solutions with rotating winds. \section{Discussion and Summary} Theoretical arguments and observations suggest that mass loss via winds may be important in sub-Eddington, radiatively inefficient accretion flows. On the other hand, thermal conduction may play a significant role in such systems (Tanaka \& Menou 2006). Using a simplified model, we have included both winds and thermal conduction in a unified model in order to understand their possible combined effects on the dynamics of the system. Accounting for a variable mass accretion rate in the flow, in proportion to $R^{s}$, and a saturated form of thermal conduction with coefficient $\phi_{\rm s}$, a set of self-similar solutions is presented in this study. We have varied $s$ and $\phi_{\rm s}$ in our models to judge the sensitivity of the solutions to these parameters. The most important finding of our analysis is that as more mass, angular momentum and energy flux are carried away from the disc (i.e. stronger winds), the modifications of the physical profiles due to thermal conduction occur at a higher level of conduction. Although our self-similar solutions are too simplified to be used to calculate the emitted spectrum, their typical behaviour shows the importance of thermal conduction in the presence of winds. In the future, a global solution rather than the self-similar solutions will be required.
This is because most of the radiation of an ADAF comes from its innermost region, where the self-similar solution breaks down. \acknowledgments I thank the referee, N. Shakura, for his useful comments. This research was funded under the Programme for Research in Third Level Institutions (PRTLI), administered by the Irish Higher Education Authority under the National Development Plan, and with partial support from the European Regional Development Fund.